A long time ago in a galaxy far away—NGC 4993, to be exact—two neutron stars collided and created a spectacular light show.
After billions of years spent slowly circling each other, in their last moments the two degenerate stars spiraled around each other thousands of times before finally smashing together at a significant fraction of light-speed, likely creating a black hole. The merger was so violent it shook the universe, emitting some 200 million suns’ worth of energy as perturbations in the fabric of spacetime called gravitational waves. Those waves propagated out from the merger like ripples on a pond, eventually washing over Earth — and into our planet’s premier gravitational-wave detectors, the U.S.-built LIGO and European-built Virgo observatories.
Yet gravitational waves were not the merger’s only products. The event also emitted electromagnetic radiation — that is, light — marking the first time astronomers have managed to capture both gravitational waves and light from a single source. The first light from the merger was a brief, brilliant burst of gamma rays, a probable birth cry of the black hole picked up by NASA’s Fermi Gamma-Ray Space Telescope. Hours later astronomers using ground-based telescopes detected more light from the merger—a so-called “kilonova”—produced as debris from the merger expanded and cooled. For weeks much of the world’s astronomical community watched the kilonova as it slowly faded from view.
As astronomers studied the merger’s aftermath in various wavelengths of light, they saw signs of countless heavy elements forming instantly. Astronomers had long predicted merging neutron stars may be responsible for forming elements such as gold and titanium, neutron-rich metals that are not known to form in stars. Almost everything they saw in the changing light of the merger’s kilonova matched those predictions, although no one directly observed the merger spewing out gold nuggets.
Even seen across its estimated 130 million light-year separation from us, the event was big, bright and glorious. Based on the rarity of neutron stars—let alone ones that happen to merge—it is unlikely we will ever see such a display significantly closer to us. But let’s imagine if we could—if it happened in the Milky Way or one of its several satellite galaxies. Or, heaven forbid, in our immediate stellar neighborhood. What would we see? What effects would it have on our home world? Would the environment, civilization, even humanity, emerge intact?
Although LIGO, by design, can “hear” the mergers of massive objects such as neutron stars and black holes, astronomers were still lucky to detect this particular event. According to Gabriela González, a LIGO team member and astrophysicist at Louisiana State University, if the merger had been three to four times farther away, we would not have heard it at all. Ironically, LIGO’s exquisite tuning for detecting distant black hole mergers could make it miss big ones occurring around the solar system’s nearest neighboring stars. The immense and intense gravitational waves from such a nearby event “would probably be [greater] than the dynamic range of our instrument,” González says.
Despite being strong enough to shake the universe, the gravitational waves from even a nearby merger of two large black holes would still be scarcely noticeable, because the shaking manifests on microscopic scales. (If gas, dust or any other matter was very close to the merging black holes, however, astronomers might see light emitted from that infalling material as it plunges in.) “The amazing thing to me is that you could be so close to black holes colliding, even as close as just outside the solar system, and you wouldn’t even notice the stretching of spacetime with your eyes,” González says. “You would still need an instrument to see or measure it.”
In contrast, a kilonova from a neutron star merger in our galaxy would probably be quite noticeable. González says it could suddenly appear as a bright star in the sky, and would be clearly detectable by LIGO, too. Rather than lasting for a matter of seconds, the gravitational waves heard by LIGO would be drawn out over minutes, even hours, as the neutron stars spiraled ever-closer together before their ultimate coalescence. It would be a bit like tuning into a live Grateful Dead jam instead of a studio version. (And yes, let’s say the song is “Dark Star” for our purposes.)
Even if LIGO tuned in, however, there are ways we might miss seeing much of the light from a nearby neutron star merger and its subsequent kilonova. Kari Frank, an astronomer at Northwestern University, says such a large, luminous event could end up obscured by dust and other stars—at least at visible and infrared wavelengths. In other words, LIGO and telescopes looking in wavelengths such as radio or x-ray might glimpse a nearby kilonova that optical astronomers would miss. “There have been supernovae—at least ones that we know of in our galaxy in the last 100 years or so—for which we didn’t see the explosion at all, we only saw what was left afterward,” Frank says. And a kilonova, for all the punch it packs, is only a fraction of the luminosity of a typical supernova.
Still, astronomers’ responses to any stellar cataclysm in or around the Milky Way would likely be swift. After all, there’s the example of supernova 1987A to consider.
The Big Boom
As its name suggests, supernova 1987A occurred in 1987, unfolding in a dwarf galaxy that orbits the Milky Way called the Large Magellanic Cloud. A star about eight times the sun’s mass collapsed in on itself and sent its outer envelope of gas out into interstellar space, forming a nebula of heavy elements and other debris before collapsing into either a neutron star or a black hole. It remains the only nearby supernova astronomers have seen in modern times.
Frank has studied the subsequent global campaign to observe supernova 1987A, focusing on how astronomers organized and executed their observations at a time when the internet was embryonic at best. “Somebody sees something, and they send out notices to everybody,” she says. “The people who first discovered it had to phone whomever they could to tell them that this thing was happening, that they saw this supernova in the sky that was really close by,” Frank says. “They sent these circulars—letters and things to people—and then everyone who could would go to their telescope and point to it.”
For months, astronomers worldwide scrutinized the event, utilizing almost every available telescope. “Everybody wanted to make sure that as many [telescopes] looked at it as possible,” Frank says. Eventually, things settled down, but several researchers—including Frank—are still studying the supernova’s remnants 30 years later. “For some people, it was life-changing, or at least career-changing,” Frank says. “This was the thing in astronomy that year.”
Like LIGO, the observation campaign for supernova 1987A involved thousands of collaborators. But not all of them shared in the glory of co-authoring any of the many resulting studies published in the scientific literature. Consequently, there’s no real head count of how many people participated. Counting collaborators working on the recent neutron star merger is much easier—some 3,000 authors across 67 papers, or an estimated 15 percent of the entire field of astrophysics.
The question of how many astrophysicists would receive credit for another event like supernova 1987A depends, in no small part, on just how close the event would be. If supernova 1987A had occurred much, much closer to Earth—around a nearby star, for instance—the key uncertainty could become not how many scientists observed the event, but how many survived it.
Death from Above
According to a 2016 study, supernovae occurring as close as 50 light-years from Earth could pose an imminent danger to Earth’s biosphere—humans included. The event would likely shower us in so much high-energy cosmic radiation that it could spark a planetary mass extinction. Researchers have tentatively linked past instances of spiking extinction rates and plummeting biodiversity to postulated astrophysical events, and in at least one case have even found definitive evidence for a nearby supernova as the culprit. Twenty million years ago, a star 325 light-years from Earth exploded, showering the planet in radioactive iron particles that eventually settled in deep-sea sediments on the ocean floor. That event, researchers speculate, may have triggered ice ages and altered the course of evolution and human history.
The exact details of past (and future) astrophysical cataclysms’ impact on Earth’s biosphere depend not only on their distance, but also their orientation. A supernova, for instance, expels its energy in essentially all directions—meaning it is not a very targeted phenomenon. Merging black holes are expected to emit scarcely any radiation at all, making them surprisingly benign for any nearby biosphere. A kilonova, however, has different physics at play. Neutron stars are a few dozen kilometers in radius rather than a few million, like a typical star. When these dense objects merge, they tend to produce jets that blast out gamma rays from their poles.
“[W]hat it looks like to us, and the effect it has on us, would depend a lot on whether or not one of the jets was pointed directly at us,” Frank says. Based on its distance and orientation to Earth, a kilonova’s jets would walk the fine line between a spectacular light show and a catastrophic stripping away of the planet’s upper atmosphere. If a jet is pointed directly at us, drastic changes could be in store. And we probably wouldn’t see them coming. A kilonova begins with a burst of gamma rays—incredibly energetic photons that, by definition, move at light-speed, the fastest anything can travel through the universe. Because nothing else can move faster, those photons would strike first, and without warning.
“What [the gamma rays] would do, probably more than anything else, is dissolve the ozone layer,” says Andrew Fruchter, a staff astronomer at the Space Telescope Science Institute. Next, the sky would go blindingly white as the visible light from the kilonova encountered our planet. Trailing far behind the light would be slower-moving material ejected from the kilonova—radioactive particles of heavy elements that, sandblasting the Earth in sufficient numbers, could still pack a lethal punch.
That’s if the kilonova is close, though—within 50 light-years, give or take. At a safer distance, the gamma rays would still singe the ozone layer on the facing hemisphere, but the other side would be shielded by the planet’s bulk. “Most radiation happens very quickly, so half the Earth would be hidden,” Fruchter says. There would still be a momentarily blinding light. For a few weeks, a new star would burn bright in the sky before gradually fading back into obscurity.
Don’t let all this keep you up at night. Kilonovae are relatively rare cosmic phenomena, estimated to occur just once every 10,000 years in a galaxy like the Milky Way. That’s because neutron stars, which are produced by supernovae, hardly ever form as pairs. Usually, a neutron star will receive a hefty “kick” from its formative supernova; sometimes these kicks are strong enough to eject a neutron star entirely from its galaxy to hurtle at high speeds indefinitely through the cosmos. “When neutron stars are born, they’re often high-velocity. For them to survive in a binary is nontrivial,” Fruchter says. And the chances of two finding each other and merging after forming independently are, for lack of a better term, astronomically low.
The binary neutron stars we know of in our galaxy are millions or billions of years away from merging. Any local merger of neutron stars at all would take LIGO by surprise, given that the events are so rare, and astronomers might not even see the resulting kilonova at all. But if one did occur—say, in one of the Milky Way’s satellite galaxies—it would be a great reason to run to a telescope to witness the flash and fade of a brief, brilliant new “star.” The dangers would be nearly nonexistent, but not the payoff: Our generation of astronomers would have their own supernova 1987A to dissect. “This is a once-in-many-lifetimes kind of event,” Frank says. Thus, she says, we would need to follow something like it with all the world’s astronomical resources. “We have to remember to think beyond the initial explosion,” she adds. “Stuff might still happen and we have to keep a watch out for that.”
For now astronomers’ attentions are still fixated on the kilonova in NGC 4993. The Earth’s orbital motion has placed the sun between us and the distant galaxy, however, hiding the kilonova’s fading afterglow. When our view clears, in December, many of the world’s telescopic eyes will again turn to the small patch of sky containing the merger. In the meantime papers will be penned and published, careers minted, reputations secured. Science will march on, and wait—wait for the next possible glimpse of a kilonova, the whispers of a neutron star merger or, if we’re lucky, something new altogether.
This article was first published at ScientificAmerican.com. © ScientificAmerican.com. All rights reserved.
Differentiation of Inverse Functions
In this calculus worksheet, students solve three problems regarding the differentiation of inverse functions. Students are also asked to show that a function is one to one and to evaluate functions at a given value.
12th - Higher Ed Math
Teaching Problem Solving Strategies in the 5-12 Curriculum
Address any kind of math concept or problem with a series of problem-solving strategies. Over 12 days of different activities and increasing skills, learners practice different ways to solve problems, check their answers, and reflect...
5th - 12th Math
Word Problems Leading to Rational Equations
Show learners how to apply rational equations to the real world. Learners solve problems such as those involving averages and dilution. They write equations to model the situation and then solve them to answer the question — great...
10th - 12th Math CCSS: Designed
Calculus - Early Transcendentals
This textbook takes the learner from the basic definition of slope through derivatives, integrals, and vector multivariable calculus. Each section is composed primarily of examples, with theoretical introductions and explanations in...
9th - Higher Ed Math CCSS: Adaptable
Hyper-Solving Quadratic Equations
How does a ghost solve a quadratic equation? By completing the scare! Groups of learners develop a slideshow presentation outlining the ways to solve quadratic equations. The plan outlines the requirements, making the expectations clear...
9th - 12th Math CCSS: Designed
This resource will allow you to lead your students through careful observation and analysis of a woven, embellished textile from Indonesia, called a tapis. It is based on the Learning to Look method created by the Hood Museum of Art. This discussion-based approach will introduce you and your students to the five steps involved in exploring a work of art: careful observation, analysis, research, interpretation, and critique.
How to use this resource:
Print out this document for yourself.
Read through it carefully as you look at the image of the work of art.
When you are ready to engage your class, project the image of the work of art on a screen in your classroom. Use the questions provided below to lead the discussion.
Explain to students that a tapis (pronounced tah-PEACE) is a type of textile created and worn by wealthy women in Lampung province on the island of Sumatra in the Republic of Indonesia. (See map at the end of this document.) Tapis are made by sewing together long strips of handwoven cloth to form a large rectangle. This cloth is decorated and then sewn into a tube. Women wear tapis for important events and ceremonies along with headdresses, gold jewelry, and additional wraps and scarves.
Tapis were extremely valuable, in part because the materials that decorate them (gold-wrapped thread, silk floss, and mirrors) were forms of currency. By wearing a tapis, a woman was literally wearing her family’s wealth. Women have worn tapis in Lampung province for hundreds of years and still wear them today.
Step 1. Close Observation
Ask students to look carefully and describe everything they see. Start with broad, open-ended questions like:
* What do you notice about this tapis?
* What else do you see?
Become more and more specific as you guide your students’ eyes around the work with questions like these:
What do you notice about:
* the shape of this tapis?
* the colors used?
* the shapes and patterns?
* the way the patterns are arranged?
Step 2. Preliminary Analysis
Once you have listed everything you can see about the object, begin asking simple analytic questions that will deepen your students’ understanding of the work.
* What do you think is going on in this tapis detail?
* What do each of the shapes seem to represent?
* How do you think the artists created these images and patterns on the tapis?
* What process and materials do you think they used?
After each response, always ask, “How do you know?” or “How can you tell?” so that students will look to the work for visual evidence to support their theories.
Step 3. Research
At the end of this document, you will find some background information on this object. Read it or paraphrase it for your students.
Step 4. Interpretation
Interpretation involves bringing your close observation, preliminary analyses, and any additional information you have gathered about an art object together to try to understand what a work of art means. There are often no absolute right or wrong answers when interpreting a work of art. There are simply more thoughtful and better informed ones. Challenging your students to defend their interpretations based upon their visual analysis and their research is most important.
Some basic interpretation questions for this object might be:
* What does this tapis communicate about the wealth of the woman who made it and her family?
* What does this tapis tell us about what is valued and prized in this culture?
* Why do you think women traditionally wore the family’s wealth in this culture?
* What does this tradition tell us about women’s roles in this part of the world?
* How are these cultural traditions and roles similar to or different from your own?
Step 5. Critical Assessment and Response
Critical assessment and response involves a judgment about the success of a work of art. This step is optional but should always follow the first four steps of the Learning to Look method. Art critics often engage in this further analysis and support their opinions based on careful study of and research about the work of art.
Critical assessment involves questions of value. For instance:
* Were the artists who created this tapis skillful and successful at expressing their culture’s values?
* Would this tapis have been valued in Lampung? Why or why not?
Another realm that this fifth stage can encompass is one’s response to a work of art. Different from assessment, the realm of response can be much more personal and subjective.
* How do you feel about this tapis? Do you like it?
* What do you think of the craftsmanship involved? Does it remind you of any traditions, ideas, or values in your own culture?
Unknown artist, Lampung Province, Sumatra, Indonesia
Tapis Raja Medal
Woven in cotton, embellished with gold-wrapped thread
Lister Family Collection
Photo © 2004 by John Bigelow Taylor
Textile arts have been prized in Asia for hundreds of years. Along the so-called Silk Roads—over land and sea—textiles were traded as currency and commodities in the pre-modern world. On the large group of islands that make up the Indonesian archipelago, specially crafted textiles have played and continue to play significant roles in ceremonial events. They are used to wrap precious heirlooms, enclose or mark sacred spaces, and give visual prominence to important participants and leaders.
In the Lampung province of Indonesia’s Sumatra Island, women have created elaborately worked tube-shaped sarongs called tapis. Tapis are constructed by sewing together strips of woven cloth and then decorating the large textile with a spectacular variety of ornamental materials. Although tapis may be worn by anyone today, this important personal adornment was once reserved for the prosperous nobility of the small “kingdoms” that established this region.
Ninth- and tenth-century records indicate that tapis were originally prestigious gifts for highly placed male leaders, but in the last several hundred years, tapis have been worn by women at ceremonies that celebrated marriages, coming-of-age rites, and the attainment of chiefly titles. The whole community observed these social achievements, highlighted by the spectacular displays of elite women adorned with ornate tapis, elaborate head-dresses, and masses of gold jewelry.
There is no evidence that tapis were worn to enhance or define a woman’s body in previous centuries. Rather, women designed and decorated tapis in order to convey wealth, social station, and family affiliation; the woman’s body was simply the vehicle of display.
Seagoing adventures and maritime commerce are dominant themes represented by a variety of motifs found in tapis. Lampung faces the Sunda Strait, which separates Indonesia's two best-known islands, Sumatra and Java, and is one of only two waterways through which ships can navigate between eastern and western Asia. As a result, the peoples of this province have enjoyed over two thousand years of maritime commercial connections with Asian and Middle Eastern empires and, more recently, Europeans and Americans. They traded pepper, elephants, gold, and other valuable commodities for prized textile goods and other items that could be used to embellish tapis, such as gold-wrapped thread and wire, silk floss, mica and mirrors, beads, metal sequins, and coins. (See detail A.)
As a result, throughout Southeast Asian coastal regions, wealth and status were symbolically intertwined with the concept of ships. In Lampung, tapis were the proof of a family’s success and were often decorated with the ships that brought their wealth. In particular, naga-ships were important motifs. An ancient mythological creature, part serpent and part dragon, the naga figures into Lampung’s early conceptualization of the world and the cosmos. Naga are associated with water, fertility, the moon, and the Milky Way; thus, successful transitions and voyages often were symbolized by naga.
This Tapis Raja Medal, or “King’s Procession Tapis,” illustrates a procession of naga-ships that alternate with figures facing the viewer—perhaps depicting an admiring audience. Lampung chiefs and their wives were conveyed to a ritual site in small palanquins, or wagons. The wagons took the form of dragon-headed boats with long, curling tails. (See detail B.) The procession of occupants seated “on deck” mimicked the important return of successful sailors. These images were created with gold-wrapped threads that were bent into shape and sewn onto the surface of the tapis with silk thread, a technique called “couching.” The gold thread was too valuable to sew in and out of the fabric like traditional embroidery because half of the gold would have been hidden underneath.
A recent study led by scientists at NASA’s Jet Propulsion Laboratory in Southern California has identified whether vegetated areas like forests and savannas around the world were carbon sources or sinks every year from 2000 to 2019.
The research found that over the course of those two decades, living woody plants were responsible for more than 80% of the sources and sinks on land, with soil, leaf litter, and decaying organic matter making up the rest. But they also saw that vegetation retained a far smaller fraction of the carbon than the scientists originally thought.
In addition, the researchers found that the total amount of carbon emitted and absorbed in the tropics was four times larger than in temperate regions and boreal areas (the northernmost forests) combined, but that the ability of tropical forests to absorb massive amounts of carbon has waned in recent years.
The decline in this ability is because of large-scale deforestation, habitat degradation, and climate change effects, like more frequent droughts and fires. In fact, the study showed that 90% of the carbon that forests around the world absorb from the atmosphere is offset by the amount of carbon released by such disturbances as deforestation and droughts.
The scientists created maps of carbon sources and sinks from land-use changes like deforestation, habitat degradation, and forest planting, as well as forest growth. They did so by analyzing data on global vegetation collected from space using instruments such as NASA’s Geoscience Laser Altimeter System (GLAS) on board ICESat and the agency’s Moderate Resolution Imaging Spectroradiometer (MODIS) aboard the Terra and Aqua satellites, respectively.
The analysis used a machine-learning algorithm that the researchers first trained using vegetation data gathered on the ground and in the air using laser-scanning instruments.
“The Amazon was considered a substantial carbon sink because of large tracts of pristine forest that soak up carbon dioxide,” said Sassan Saatchi, principal scientist at JPL and the study lead investigator. “However, our results show that overall, the Amazon Basin is becoming almost neutral in terms of carbon balance because deforestation, degradation, and the impacts of warming, frequent droughts, and fires over the past two decades release carbon dioxide to the atmosphere.”
An international team of astronomers is planning to use gravitational wave data to unravel the formation processes that created the first supermassive black holes. These gargantuan black holes lurk at the centre of most galaxies, including our own Milky Way, playing a pivotal role in galaxy formation and evolution.
So far astronomers have failed to rally around a single theory to explain the creation process that produces supermassive black holes. Leading theories include the formation of black holes due to the collapse of a first generation of colossal, ancient stars, or possibly a collision between two ancient stellar bodies that formed part of a vast star cluster. Each of these theories would result in the gravitational waves thrown out by the creation event exhibiting a specific mass signature.
A new study led by scientists from Durham University used data from the two confirmed instances of gravitational waves and fed it into a computer simulation known as the EAGLE project, which was supplemented by a simulation designed to calculate gravitational wave signals. EAGLE is in effect an attempt to create a detailed and faithful simulation of the large scale processes at work throughout the greater universe as we currently understand them.
The results suggest that future gravitational wave observatories, such as the proposed Evolved Laser Interferometer Space Antenna (eLISA) mission, will detect the minute ripples in the fabric of spacetime created by violent cosmic events roughly twice a year. eLISA will take the form of three separate spacecraft working in perfect harmony to form a laser interferometer similar to the LIGO instruments responsible for the initial detections of gravitational waves.
The eLISA spacecraft, which are set to launch in 2034, will orbit the Sun in a triangular formation, forming a vast interferometer 250,000 times larger than the detectors on Earth. The technology, a preliminary version of which was recently tested via the LISA Pathfinder mission, will allow for the detection of lower-frequency waves created by the collision of black holes – each of which could be over a million times the mass of the Sun.
By analysing the amplitude and frequency of the waves detected by missions such as eLISA, astronomers could ascertain the initial mass of the seeds of the earliest supermassive black holes which are thought to have formed some 13 billion years ago – relatively soon after the creation of the universe. The existing theory that correlates most accurately with the gravitational wave data would then become the leading origin theory for the creation of supermassive black holes.
Source: University of Durham
Our planet now has about 10 trillion gigabytes of digital data, and every day humans generate emails, photos, tweets, and many other digital files that add another 2.5 million gigabytes of data.
Much of this data is stored in massive facilities known as “exabyte data centers” (exabytes = 1 billion gigabytes) that are the size of a football field, and cost about $1 billion to build and maintain.
Storing all the data of the world in a cup of DNA
Many scientists believe that the alternative solution lies in the molecule that carries our genetic information — DNA, or deoxyribonucleic acid — which can be developed to store huge amounts of information at a very high density.
In this context, Mark Bathe, professor of biological engineering at the Massachusetts Institute of Technology (MIT), says that “a mug of coffee filled with DNA could theoretically store all the world’s data,” as technology.org recently reported, citing the Institute’s news platform, which published the research.
Bathe explains: “We need new solutions to store these huge amounts of data that the world is producing and assembling, especially archival data. DNA is a thousand times denser than flash memory, and another exciting property is that once you make the DNA polymer, it doesn’t consume any energy. You can write DNA and then store it forever.”
Scientists have already proven that they can encode images and pages of text in the form of DNA. However, there is also a need for an easy way to pick out the required file from the many overlapping pieces of DNA — a complex problem that scientists struggled with in the past. Bathe and his colleagues solved it by encapsulating each data file in a 6-μm silica particle, which is labeled with short DNA sequences that reveal its contents.
Using this method, the researchers showed that they could precisely pull out individual images stored as DNA sequences from a set of 20 images. Given the number of possible labels that could be used, this approach could scale up to 10^20 files.
Digital storage systems encode text, images, or any other kind of information as a string of 0s and 1s — bits. This same information can be encoded in DNA using the four nucleotides that make up the genetic code: A, T, G, and C.
For example, G and C could be used to represent 0 while A and T represent 1.
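To make that mapping concrete, here is a minimal Python sketch of the idea. It is purely illustrative — not the encoding scheme used in the MIT work — and it simply alternates between the two nucleotides available for each bit value to avoid long runs of a single letter:

```python
# Toy bit-to-nucleotide encoder, following the convention described above:
# G or C stands for 0, A or T stands for 1. Not the scheme used in the study.

def text_to_bits(text: str) -> str:
    """Represent the UTF-8 bytes of `text` as a string of '0'/'1' characters."""
    return "".join(f"{byte:08b}" for byte in text.encode("utf-8"))

def bits_to_dna(bits: str) -> str:
    """Encode a bit string as DNA, alternating between the two bases per bit value."""
    zero_bases = "GC"  # either base can represent 0
    one_bases = "AT"   # either base can represent 1
    out = []
    for i, bit in enumerate(bits):
        if bit == "0":
            out.append(zero_bases[i % 2])
        elif bit == "1":
            out.append(one_bases[i % 2])
        else:
            raise ValueError(f"not a bit: {bit!r}")
    return "".join(out)

if __name__ == "__main__":
    bits = text_to_bits("hi")   # 16 bits for the two bytes of "hi"
    print(bits)                 # 0110100001101001
    print(bits_to_dna(bits))    # a 16-base DNA string encoding those bits
```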
DNA has many other features that make it desirable as a storage medium: it is very stable, easy to use (albeit expensive), and because of its high density it saves a lot of space. An exabyte of data stored as DNA would occupy only a tiny volume — something you could hold in the palm of your hand without feeling it — instead of a huge football-field-sized facility.
One of the major obstacles to this type of storage is the high physical cost, with the cost of writing one petabyte (one million gigabytes) of data currently at about $1 trillion. To become a competitor to magnetic tape, which is often used to store archival data today, the cost would have to drop dramatically, and Bathe expects this to happen within a decade or two at the latest.
The main obstacle researchers faced
Aside from the cost, the main obstacle the research team has faced in using DNA to store data is the difficulty of finding the file you want among all the others.
“Assuming that the technologies for writing DNA get to a point where it’s cost-effective to write an exabyte or zettabyte of data in DNA, then what? You’re going to have a pile of DNA, which is a gazillion files, images or movies and other stuff, and you need to find the one picture or movie you’re looking for,” Bathe says. “It’s like trying to find a needle in a haystack.”
Currently, DNA files are conventionally retrieved using PCR (polymerase chain reaction). Each DNA data file includes a sequence that binds to a particular PCR primer. To pull out a specific file, that primer is added to the sample to find and amplify the desired sequence. However, one drawback to this approach is that there can be crosstalk between the primer and off-target DNA sequences, leading unwanted files to be pulled out. Also, the PCR retrieval process requires enzymes and ends up consuming most of the DNA that was in the pool.
What is the solution to this dilemma?
As an alternative approach, the MIT team developed a new retrieval technology that involves encapsulating each file stored in DNA in a small silica capsule. Each capsule is labeled with single-stranded DNA “barcodes” corresponding to the contents of the file; in effect, these barcodes serve as the name of the file contained in the capsule.
To demonstrate this approach in a cost-effective manner, the researchers encoded 20 different images into pieces of DNA about 3,000 nucleotides long, which is equivalent to about 100 bytes. (They also showed that the capsules could fit DNA files up to a gigabyte in size.)
The result was striking. The barcoded capsules were tagged with fluorescent or magnetic particles, making it easy to pull out the ones matching a query, confirm that they hold the required file, and open only that capsule while leaving the rest of the DNA intact to be returned to storage. This search process allows queries that combine terms — for example “president,” “America,” and “eighteenth century” — to return a file about George Washington, much as searching for those words in the Google search engine would.
For their barcodes, the researchers used single-stranded DNA sequences from a library of 100,000 sequences, each about 25 nucleotides long, developed by Stephen Elledge, a professor of genetics and medicine at Harvard Medical School. If you put two of these labels on each file, you can uniquely label 10^10 (10 billion) different files, and with four labels on each, you can uniquely label 10^20 files.
A giant leap in search technology
George Church, professor of genetics at Harvard Medical School, describes this technology as “a giant leap in knowledge management and research technology.”
“The rapid progress in writing, copying, reading, and low-energy archival data storage in DNA form has left poorly explored opportunities for precise retrieval of data files from huge (10^21 byte, zetta-scale) databases,” says Church, who was not involved in the study.
“The new study spectacularly addresses this using a completely independent outer layer of DNA and leveraging different properties of DNA (hybridization rather than sequencing), and moreover, using existing instruments and chemistries,” he added.
It may take some time for the financial cost of this amazing method of storing digital data to come down, but it is definitely coming in the near future.
It remains to be mentioned that the research team behind this achievement consists of Professor Mark Bathe as team leader, MIT researcher James Bandall, MIT associate professor Watson Shepherd, and MIT graduate student Joseph Berlant.
The results appear in a paper published Nov. 13 in the journal Science. The lead author is graduate student Roger Fu of MIT, working under Benjamin Weiss; Steve Desch of Arizona State University’s School of Earth and Space Exploration is a co-author of the paper.
“The measurements made by Fu and Weiss are astounding and unprecedented,” says Desch. “Not only have they measured tiny magnetic fields thousands of times weaker than a compass feels, they have mapped the magnetic fields’ variation recorded by the meteorite, millimeter by millimeter.”
It may seem all but impossible to determine how the solar system formed, given it happened about 4.5 billion years ago. But making the solar system was a messy process, leaving lots of construction debris behind for scientists to study.
Among the most useful pieces of debris are the oldest, most primitive and least altered type of meteorites, called the chondrites (KON-drites). Chondrite meteorites are pieces of asteroids, broken off by collisions, that have remained relatively unmodified since they formed at the birth of the solar system. They are built mostly of small stony grains, called chondrules, barely a millimeter in diameter.
Chondrules themselves formed through quick melting events in the dusty gas cloud — the solar nebula — that surrounded the young sun. Patches of the solar nebula must have been heated above the melting point of rock for hours to days. Dustballs caught in these events made droplets of molten rock, which then cooled and crystallized into chondrules.
As chondrules cooled, iron-bearing minerals within them became magnetized like bits on a hard drive by the local magnetic field in the gas. These magnetic fields are preserved in the chondrules even down to the present day.
The chondrule grains whose magnetic fields were mapped in the new study came from a meteorite named Semarkona, after the place in India where it fell in 1940. It weighed 691 grams, or about a pound and a half.
The scientists focused specifically on the embedded magnetic fields captured by “dusty” olivine grains that contain abundant iron-bearing minerals. These had a magnetic field of about 54 microtesla, similar to the magnetic field at Earth’s surface, which ranges from 25 to 65 microtesla.
Coincidentally, many previous measurements of meteorites also implied similar field strengths. But it is now understood that those measurements detected magnetic minerals contaminated by Earth’s magnetic field, or even from hand magnets used by meteorite collectors.
“The new experiments,” Desch says, “probe magnetic minerals in chondrules never measured before. They also show that each chondrule is magnetized like a little bar magnet, but with ‘north’ pointing in random directions.”
This shows, he says, they became magnetized before they were built into the meteorite, and not while sitting on Earth’s surface.
Shocks and more shocks
“My modeling for the heating events shows that shock waves passing through the solar nebula is what melted most chondrules,” Desch explains. Depending on the strength and size of the shock wave, the background magnetic field could be amplified by up to 30 times.
He says, “Given the measured magnetic field strength of about 54 microtesla, this shows the background field in the nebula was probably in the range of 5 to 50 microtesla.”
There are other ideas for how chondrules might have formed, some involving magnetic flares above the solar nebula, or passage through the sun’s magnetic field. But those mechanisms require stronger magnetic fields than what is measured in the Semarkona samples.
This reinforces the idea that shocks melted the chondrules in the solar nebula at about the location of today’s asteroid belt, which lies some two to four times farther from the sun than Earth now orbits.
Desch says, “This is the first really accurate and reliable measurement of the magnetic field in the gas from which our planets formed.”
Reference: Roger R. Fu, Benjamin P. Weiss, Eduardo A. Lima, Richard J. Harrison, Xue-Ning Bai, Steven J. Desch, Denton S. Ebel, Clement Suavet, Huapei Wang, David Glenn, David Le Sage, Takeshi Kasama, Ronald L. Walsworth, and Aaron T. Kuan. Solar nebula magnetic fields recorded in the Semarkona meteorite. Science, 13 November 2014. DOI: 10.1126/science.1258022
Perimeter - Problem 1
Remember that the perimeter of a polygon is the sum of the lengths of the sides of that polygon. For rectangles, we know that the perimeter is P = 2b + 2h, where b is the length of the base, and h is the length of the height. This is because rectangles are made up of two pairs of congruent segments.
If given a perimeter of a rectangle that is 44 cm, the length of the height is 12 cm, and the length of the base is unknown x, you can use the formula for the perimeter to solve for the length of the base. Doing so, you get:
44 = 2x + 2(12)
44 = 2x + 24
Then, subtracting 24 from both sides:
20 = 2x
Dividing both sides by 2:
x = 10
So, the length of the base is 10 cm.
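For readers who like to check the arithmetic programmatically, here is a tiny Python snippet using the numbers from the problem above:

```python
# Rearranging P = 2b + 2h to solve for the base: b = (P - 2h) / 2
P = 44   # perimeter in cm
h = 12   # height in cm
b = (P - 2 * h) / 2
print(b)  # 10.0, so the base is 10 cm
```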
In this problem you are given the perimeter of this quadrilateral and you’re being asked to find this missing variable x. Well before we start, what is perimeter? Perimeter is the sum of the lengths of all the sides of a polygon and for rectangles we can use the short-cut that perimeter is equal to 2 times the base plus 2 times the height. So we can go back to our problem here, to our rectangle and we can say that our perimeter is equal to 2 times b, where b is your base, plus 2 times h, where h is your height. And we have to identify what are b and h. Well the base is this side right here which is congruent to x, so we’re going to substitute in x for b. Your height, h is going to be the other side, so we’re going to have 12 and units here are centimeters. And as we’ve already indicated we know our perimeter is 44 cm.
So if we substitute in for b, h and p into your equation, then we can solve for x. So instead of writing p I’m going to write 44. We’d have 2 times b, which we said is the same thing as x; so we’d have 2x plus 2 times our height, and our height we said was 12 cm. So now we have one equation, one variable x that we can solve.
So 44 is equal to 2x plus 2 times 12, and 2 times 12 is 24. So I’m going to subtract 24 from both sides. So this is just regular old algebra, I’m not doing anything you hopefully haven’t seen before. 44 minus 24 is 20 and that equals 2x. And last we’re going to divide by 2 and you find out that x must be 10. So up here where it says x equals? we’re going to make sure to include our units and we find out that x is 10. The key to this problem was remembering your perimeter formula, identifying your variables, substituting and solving.
What is an SSL certificate?
An SSL certificate is a digital certificate that enables secure communication between a website and its visitors. It is used to encrypt the data exchanged between the website and the user's browser, thereby protecting it from unauthorized access. SSL stands for Secure Sockets Layer.
An SSL certificate is a digital certificate issued by Certificate Authorities (CA) to websites, authenticating their identity and enabling a secure connection between web pages and web browsers. SSL certificates inspire trust on the Internet because they show Internet users that the traffic between their web browser and a website is encrypted.
This secure connection is indicated by a padlock icon in the browser's address bar and the "https" protocol in the website's URL. SSL certificates are essential for online transactions, such as e-commerce purchases, as they ensure that sensitive information, such as credit card details, is transmitted securely. There is a danger of SSL certificates creating a false sense of security, though, as malicious websites can also get SSL certificates. For example, there’s been a rise in phishing websites that have been granted Domain Validated (DV) certificates from authorities that don’t moderate what sites get certificates. Additionally, SSL certificates can’t shield websites from malicious attacks like SQL injection or malware.
How does an SSL certificate work?
An SSL certificate works by using encryption algorithms to encrypt the data exchanged between a web browser and a website.
When the data is encrypted, it’s almost impossible for a threat actor to read. Examples of data include passwords, names, and financial information.
Here is the five-step process of how an SSL certificate works:
- A browser attempts to connect to a website with an SSL certificate.
- The website’s server sends a copy of its SSL certificate to the browser for validation. The website’s public key on the certificate will help encrypt data during the session.
- The browser validates the certificate to ensure it’s authentic, unrevoked, and unexpired. After validation, the browser uses the server’s public key to create an encrypted symmetric key and sends it to the server.
- The server uses its private key to decrypt the symmetric session key. It indicates that it’s ready to start an encryption session by sending back an acknowledgement encrypted with the session key.
- The browser and website now establish a secure and encrypted connection with the session key.
The entire process is also called an SSL handshake and is almost instantaneous. After the SSL certificate secures the connection:
- A padlock icon appears on the address bar of the browser right before the URL.
- The URL is preceded by the HTTPS (HyperText Transfer Protocol Secure) acronym.
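To see the handshake from the client side, here is a minimal sketch using Python's standard-library ssl module. The host name is just a placeholder, and the reported protocol version and cipher suite will depend on the server you connect to:

```python
import socket
import ssl

hostname = "www.example.com"  # placeholder; any HTTPS-enabled site works
context = ssl.create_default_context()  # loads the system's trusted CA certificates

# Open a TCP connection, then perform the TLS/SSL handshake on top of it.
with socket.create_connection((hostname, 443), timeout=10) as sock:
    with context.wrap_socket(sock, server_hostname=hostname) as tls:
        print("Protocol:", tls.version())  # e.g. 'TLSv1.3'
        print("Cipher:  ", tls.cipher())   # negotiated cipher suite
        # From here on, the session key has been agreed and every read/write
        # on `tls` is encrypted.
```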
What do SSL certificates include?
Domain name: The domain name or names the SSL certificate is valid for.
Issue details: The device or person the SSL certificate was issued to and the certificate authority that issued it.
Digital signature: The digital signature that verifies the authenticity of the certificate.
Dates: The issue and expiration date of the certificate.
Public key: An SSL certificate includes a public key while the private key is kept private.
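As an illustrative sketch, most of the fields listed above can be read from a live certificate with Python's standard ssl module (the host name below is only an example):

```python
import socket
import ssl

hostname = "www.example.com"  # example host
context = ssl.create_default_context()

with socket.create_connection((hostname, 443), timeout=10) as sock:
    with context.wrap_socket(sock, server_hostname=hostname) as tls:
        cert = tls.getpeercert()  # certificate presented by the server, as a dict

print("Issued to: ", dict(item[0] for item in cert["subject"]))
print("Issued by: ", dict(item[0] for item in cert["issuer"]))
print("Valid from:", cert["notBefore"])
print("Expires on:", cert["notAfter"])
print("Domains:   ", [value for kind, value in cert.get("subjectAltName", ()) if kind == "DNS"])
```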
What is an SSL certificate used for?
An SSL certificate is used to create a secure connection between a browser and a website. The certificate helps protect any data exchange between a device and a website’s server. In addition to encrypting connections, SSL certificates help protect the security of website users. Moreover, an SSL certificate helps develop confidence in a website.
Why do I need an SSL certificate for my website?
An SSL certificate should never be considered to be the only tool for website cybersecurity. Nonetheless, you need an SSL certificate for your website for the following reasons:
Data protection: An SSL encryption helps secure the data exchanged between your website and your visitors from cybercriminals. Not only is this good for business, but it can help your online platform comply with privacy laws.
Trust: The average Internet user will feel more confident browsing your website, knowing that the padlock sign and the HTTPS acronym signify a secure connection.
SEO: Websites with SSL certificates rank higher in search engines as they are perceived as secure. In fact, some browsers even flag HTTP websites and warn users not to visit such sites, as they are not secure.
Types of SSL certificates
Your website can obtain several different types of SSL certificates. Each certificate has its own strengths, verification processes, and costs.
Domain Validated (DV) certificates
DV certificates tend to be more cost-effective than EV or OV SSL certificates. While they’re usually the least expensive, they only provide basic verification. DV certificates are typically used by low-risk websites, such as message boards or blog pages.
Extended Validation (EV) certificates
An extended validation certificate involves the highest level of validation and is usually the priciest kind of SSL certificate. Renowned websites that offer online shopping or e-commerce usually use this type of certificate. In addition to the padlock sign, an EV SSL certificate shows an organization’s name and country in the address bar.
Organization Validated (OV) certificates
Like EV certificates, OV certificates provide a higher level of validation compared to DV SSL certificates. OV certificates also display a website owner’s information in the address bar.
Wildcard SSL certificates
A Wildcard SSL certificate secures a primary domain and an unlimited number of subdomains on one certificate. Companies with multiple subdomains under a single domain tend to obtain this type of certificate to reduce costs.
Multi-Domain SSL certificates
Multi-domain SSL certificates go a step further than Wildcard SSL certificates. They allow a single certificate to secure many domains and subdomains. These types of certificates are best for organizations with multiple websites.
Purchasing SSL certificate: How to purchase an SSL certificate
Start by researching different SSL certificates and picking the one that matches your needs. For example, if you have a single domain with many subdomains, you may need a Wildcard SSL certificate. Alternatively, you can settle for a DV certificate for your personal page.
After picking your certificate type, shop for your certificate by checking a Certificate Authority (CA) list. The certificates from Certificate Authorities vary by price. While some certificates are free, others can cost hundreds if not thousands of dollars.
Prepare your server to ensure your WHOIS record matches your application to your CA. Next, you’ll need to generate a Certificate Signing Request (CSR). Your CSR will carry your data and your public key. After submitting your CSR to your CA, you may need to provide other documentation, such as proof of ownership of your domain.
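A CSR is usually generated from a hosting control panel or the openssl command-line tool; as a hedged alternative, the sketch below shows the same step with the third-party Python cryptography package (an assumption — it must be installed separately), using example.com as a placeholder domain:

```python
from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.x509.oid import NameOID

# Generate the private key that will stay on your server (keep it secret).
key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# Build a CSR carrying the domain name(s) and the matching public key.
csr = (
    x509.CertificateSigningRequestBuilder()
    .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "example.com")]))
    .add_extension(
        x509.SubjectAlternativeName([x509.DNSName("example.com"),
                                     x509.DNSName("www.example.com")]),
        critical=False,
    )
    .sign(key, hashes.SHA256())
)

# PEM-encoded CSR: this is the text you submit to the Certificate Authority.
print(csr.public_bytes(serialization.Encoding.PEM).decode())

# Save the private key for later installation alongside the issued certificate.
with open("domain.key", "wb") as f:
    f.write(key.private_bytes(
        serialization.Encoding.PEM,
        serialization.PrivateFormat.TraditionalOpenSSL,
        serialization.NoEncryption(),
    ))
```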
After receiving the certificate files, you can install your SSL. The process depends on your server type. Get in touch with your website provider for help if you need it. Later, you’ll need to configure your website to use HTTPS. You can use an SSL checker to ensure that your certificate is authentic and installed correctly.
Renewing SSL certificate: How to renew an SSL certificate
SSL certificates have an expiry date. Plan ahead to ensure your SSL renewal is on time. If you delay your renewal, your website may lose its trust seal.
The process of renewing an SSL certificate is similar to purchasing a new one. You’ll need to generate a new CSR and submit it to your CA. You’ll also need to install the updated SSL certificate files on your server like before and ensure that your certificate is correctly installed with an SSL checker.
SSL certificate pricing: How much is an SSL certificate?
SSL certificates vary significantly in pricing. While some SSL certificates can be obtained for free, enterprise-level SSL certificates can cost thousands of dollars a year. The cost of an SSL certificate is impacted by the number of domains and subdomains, the type of certificate, the level of security, and the reputation of the Certificate Authority (CA).
TLS vs SSL certificate: The difference between TLS vs. SSL
While SSL (Secure Sockets Layer) and TLS (Transport Layer Security) are both cryptographic protocols, TLS is the updated version. TLS is considered to be more secure and modern, with a better TLS handshake. TLS is also backward compatible and can connect to an SSL server.
The terms “SSL” and “TLS” are often used interchangeably on the Internet, even though the latter is a replacement for the former. Many certificate issuers even refer to their TLS certificates as SSL certificates.
How to check if a site has an SSL certificate
Check if the address of the website starts with the “HTTPS” acronym. HTTPS is short for Hyper-Text Transfer Protocol Secure.
A website with an SSL certificate should have a padlock sign on the address bar. You can click the padlock sign to learn more about the SSL issuing authority, expiration date, and website owner.
Green address bar
The green address bar was an SSL indicator for websites with EV SSL certificates. However, major browser developers like Apple and Google consider it to be obsolete now.
Some websites and browser extensions can verify if a website has a valid SSL certificate. For example, you can enter the URL of a website in SSL Shopper’s SSL Checker to learn about its certification.
What is an SSL certificate error?
An SSL certificate error can occur when there’s an issue with a website’s certificate. An SSL certificate error can occur for multiple reasons. While some errors can be due to innocuous reasons, others can be due to malicious factors. It’s best to proceed with caution when opening a website that presents an SSL certificate error.
Expired SSL certificate
As mentioned, SSL certificates have expiry dates. Sometimes, website owners may forget to renew their SSL certificate. An expired SSL certificate will cause your browser to display an error message.
SSL certificate not trusted
Every browser can access a list of trusted SSL certificate providers. Your browser may tell you that a website’s SSL certificate is not to be trusted if the website’s issuing authority is not on the list or is suspect. For example, you may see an error if the certificate was self-signed or obtained from a fraudulent issuer.
Misconfigured SSL certificate
After obtaining an SSL certificate, a website owner must install and configure it correctly. A misconfigured SSL certificate can result in an error for the website.
SSL certificate name mismatch
SSL certificates are issued to specific domains and subdomains. A mismatch in the records will force a browser to display an error message.
SSL certificate revoked
SSL certificate issuers may revoke a certificate before its expiry date if its private key was compromised or if the domain is closed. A website may also request that its certificate be revoked. Regardless of why the SSL certificate was revoked, it will result in an error.
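These error conditions can also be detected programmatically. The following sketch uses Python's standard ssl module, which raises a certificate-verification error for expired, untrusted, or mismatched certificates; the second host name is a public test site that intentionally serves an expired certificate:

```python
import socket
import ssl

def check_certificate(hostname: str) -> None:
    """Report whether the certificate presented by `hostname` verifies cleanly."""
    context = ssl.create_default_context()
    try:
        with socket.create_connection((hostname, 443), timeout=10) as sock:
            with context.wrap_socket(sock, server_hostname=hostname):
                print(f"{hostname}: certificate verified OK")
    except ssl.SSLCertVerificationError as err:
        # Raised for expired, self-signed/untrusted, or mismatched certificates.
        print(f"{hostname}: certificate error -> {err.verify_message}")
    except ssl.SSLError as err:
        print(f"{hostname}: TLS error -> {err}")

check_certificate("example.com")         # placeholder for an ordinary site
check_certificate("expired.badssl.com")  # test host that serves an expired certificate
```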
Are SSL certificates free?
While not all SSL certificates are free, some are indeed free of cost. The free SSL certificates are usually Domain Validated (DV) certificates. They’re best for personal pages or small businesses. Larger websites should obtain paid certificates that offer better security and more features than DV SSL certificates.
Can a website without SSL be hacked?
An SSL certificate only secures the connection between a user and a website. A website with or without SSL certification can be hacked in a number of ways. Threat actors can exploit security vulnerabilities, weak login credentials, poor coding, outdated software, and other means to hack a website.
Website hacking techniques include:
Cross-site scripting (XSS).
Brute force attacks and distributed denial-of-service (DDoS) attacks.
Phishing expeditions on website employees.
How long do SSL certificates last?
There was a time when SSL certificates could be issued with an expiration period of five years. However, this time period has been adjusted several times. Since late 2020, an SSL certificate can’t be issued for more than 13 months.
Does an SSL certificate mean a website is safe to use?
Although an SSL certificate means that your connection to a website is secure, it doesn’t necessarily mean that the website is safe to use. For example, malicious websites can also obtain some types of SSL certificates, such as DV certificates.
While phishing websites can carry DV certifications, they’re designed to steal confidential information such as names, addresses, passwords, and credit card information. Phishing websites may look legitimate but can have grammatical errors, low-quality graphics, poor design, or offers that appear too good to be true.
In addition, threat actors can hack legitimate websites with SSL certificates by using different tools and exploitations.
Here are some steps that can help you check a website’s safety:
Use Malwarebytes Browser Guard to block web pages that contain malware, scams, and other malicious content.
Subscribe to a Virtual Private Network (VPN) service to encrypt your data and hide your IP address. You can learn how VPN works to encrypt your data and mask your location.
Ensure that the website URL is correctly spelled. A phishing website with a basic SSL certificate impersonating Walmart.com may have a very similar address that only varies by one or two characters. For example, instead of Walmart.com, it may say Walmert.com or Walmrat.com.
Look for the padlock sign and the HTTPS acronym on the address bar to ensure that it has an SSL certificate. At the very least, a website with an SSL certificate offers an encrypted connection.
Click on the padlock sign in the browser address bar to verify the identity of the website owner and check the certificate authority and expiration date.
Research the website’s reputation with a website safety checker.
A hacked or phishing website can also infect your system with malware. Get malware protection to ensure your computers and devices are free of malicious software. Follow these Internet safety tips for more security for your browser.
SSL certificates are issued by trusted third-party certificate authorities and contain information about the website owner, the validity of the certificate, and the encryption key used for secure communication. When a user visits a website with an SSL certificate, their browser verifies the certificate's authenticity and establishes a secure connection with the website.
This secure connection is indicated by a padlock icon in the browser's address bar and the "https" protocol in the website's URL. SSL certificates are essential for online transactions, such as e-commerce purchases, as they ensure that sensitive information, such as credit card details, is transmitted securely.
Authentication methods generally fall into three categories:
Knowledge-based authentication: This type of authentication involves the user providing information that only they should know, such as a password, PIN, or answer to a security question.
Possession-based authentication: This type of authentication involves the user providing proof of possession of a physical object, such as a security token, smart card, or mobile device.
Inherence-based authentication: This type of authentication involves the user providing biometric information, such as a fingerprint, facial recognition, or iris scan, to verify their identity.
Here's how to get an SSL certificate in 2023:
Determine the type of SSL certificate you need: There are different types of SSL certificates available, such as Domain Validated (DV), Organization Validated (OV), and Extended Validation (EV) certificates. You need to determine which type of certificate is suitable for your website.
Choose a reputable SSL certificate provider: There are many SSL certificate providers available, and you need to choose a reputable one that offers the type of certificate you need.
Generate a Certificate Signing Request (CSR): A CSR is a file that contains your website's information, such as the domain name and public key. You need to generate a CSR from your web hosting control panel.
Submit the CSR to the SSL certificate provider: After generating the CSR, you need to submit it to the SSL certificate provider. They will use it to issue your SSL certificate.
Install the SSL certificate on your website: Once you receive the SSL certificate, you need to install it on your website. The process of installation varies depending on your web hosting provider and the type of SSL certificate you have.
Test the SSL certificate: After installing the SSL certificate, you need to test it to ensure that it is working correctly. You can use an SSL checker tool to verify the SSL certificate's installation and configuration.
Once your SSL certificate is installed and working correctly, your website will have a secure connection, and visitors will see the padlock icon and the "https" protocol in the URL.
Independent and Dependent Variable
Independent variables are the “inputs” for functions. The value of an independent variable does not depend on other variables. Rather, changes in the independent variable (which take place in our “function machine”) cause a change in another variable (the dependent variable). A variable is simply something that we are trying to measure.
Most times you see an equation, the dependent variable will be isolated on one side. The independent variable is usually designated by x. The dependent variable is typically designated by y. However, any variables could be used in place of x or y. In the equation below, the dependent variable is y:
y = 5x + 6
The independent variable is x. The value of y will change depending on the value of x. For instance, suppose x were 4:
y = 5(4) + 6
y = 26
If x was 6, the value of the dependent variable (y) would change:
y = 5(6) + 6
y = 36
By changing the values of the independent variable from 4 to 6 we caused a change in the dependent variable.
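To make the input/output relationship concrete, here is a minimal Haskell sketch (not part of the original article) of the same equation, with the independent variable as the function's argument and the dependent variable as its result:

y :: Double -> Double
y x = 5 * x + 6

-- y 4 evaluates to 26.0 and y 6 to 36.0, matching the worked examples above.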
When working with graphs, the independent variable will always be located on the x-axis. On the graph below, the independent variable (minutes used per month) is located on the x-axis.
The graph tells us that the independent variable (minutes used per month) causes a change in the dependent variable (cost in dollars).
The use of the terms "independent" and "dependent" only shows us a relationship between the stated variables; it's not stating that the independent variable is never influenced by any outside variables. In this example, we're only interested in how cost is influenced by minutes used. However, your minutes used might go up and down depending on who you're dating, how many friends you have, or whether great aunt Zelda is sick. But we aren't interested in any of those "other" variables here; we're only comparing how minutes and cost interact.
A dependent variable is a variable that depends on one or more other variables. In calculus, the dependent variable is the output of a function.
If you are given a simple formula, like y = x³, then y is a function of x (you can also say that "y depends on x").
- x is the independent variable.
- y is the dependent variable.
The dependent variables are the output. For example, if x = 2, then y = 2³ = 2 · 2 · 2 = 8. In this example, 2 is the independent variable (graphed on the x-axis) and 8 is the dependent variable (graphed on the y-axis).
These types of variables are useful for the study of derivatives. Usually in calculus we talk about incremental changes in the independent and dependent variables (Δx and Δy). The equation y = x³ becomes:
Δy = f(x + Δx) − f(x) = (x + Δx)³ − x³.
Geometrically, this equation gives the change Δy in the dependent variable along the curve produced by a change Δx in the independent variable x.
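A short Haskell sketch of this incremental change for y = x³ (helper names are hypothetical, not from the source):

f :: Double -> Double
f x = x ** 3

deltaY :: Double -> Double -> Double
deltaY x dx = f (x + dx) - f x

-- deltaY 2 0.001 is roughly 0.012, the change in the dependent variable for a small change in x.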
The dependent variable of an equation is fairly easy to identify. It will be the variable in the equation that has its value determined by the values given to the other variables. Let’s take a look at an example of an equation and identify the dependent variable.
y = 5x + 6
The dependent variable is y. The value of y depends on the value that we choose for x. x in this equation is the independent variable.
When working with graphs the dependent variable will always be located on the y-axis. Notice on the graph below that the dependent variable, cost in dollars, is located on the y-axis.
The graph tells us that the dependent variable (cost in dollars) is dependent on the number of minutes used per month. The number of minutes used per month is the independent variable, which is located on the x-axis.
Categorical variables (perhaps not surprisingly) fall into a particular category. College major, political affiliation, or sexual orientation are all categories. The variables in those categories are called “categorical variables”.
- Category “college major”: Math, English, Geology.
- Category “eye color”: Blue, Brown, Green.
"College major" and "eye color" are the categories and are not categorical variables. The variables are within those categories, and they are usually specific pieces of information that you want to input. They may come from an experiment or a survey. For example, let's say you survey people and ask them for their horoscope sign. They would give you a categorical variable: Aquarius, Pisces, Leo. They wouldn't respond "horoscope sign."
There is no order to categorical variables. For example, you can’t order “blue, brown, and green”. If there is some kind of order, those variables would be ordinal variables instead. For example, you might categorize television prices by cheap, moderate and expensive.
Quantitative variables are the “x” and “y” of calculus: you can generally work with them mathematically (for example, you can add them). Categorical variables can describe those numbers.
- What is a Quantitative Variable?
- Difference Between Quantitative Variable and Qualitative Variable
- Quantitative and Qualitative Calculus
Quantitative variables can be counted or measured. You can think of them as numeric variables, and they usually answer questions like:
- How many?
- How much?
- How often?
Quantitative variables include things like age, height, or time to completion.
If a variable isn’t quantitative, then it’s qualitative. These are values you don’t find by measuring or counting. They describe characteristics like fur color, gender, location or profession.
The same object or unit can be described with both quantitative and qualitative variables. For example, a person can be described as 34 years old, with 4 children, and 6 feet tall, (all quantitative variables); but also as Dutch, a good cook, and a basketball player (qualitative variables).
Types of Quantitative Variable
We can divide all quantitative variables into two main subsets: continuous variables and discrete variables.
- Discrete variables have a value from a set of discrete whole values. The number of cars at the parking lot, the number of children in a classroom, or a number of business locations are all examples of discrete variables.
- Continuous variables take values from an interval and can take values at any point on that interval. A continuous variable has as many significant digits as the instrument you use to measure it allows. Temperature is an example of a continuous variable; you can have water at 25 degrees Celsius, but you can also have water at 24.998 or 25.0004 degrees Celsius.
In this graph, categorical variables are shown on the y-axis and quantitative data on the x-axis.
Quantitative means you can count it, like "number of corn fields in a square mile." Qualitative means you can describe it, like "black fur on a cat." A standard deck of playing cards has both types of variable: the numbers on the cards (2 through 10) are quantitative, and the suits (Clubs, Diamonds, Hearts, Spades) are qualitative variables.
A simple way of looking at the differences:
- If it can be added, it’s quantitative.
- If you can’t add it, it’s qualitative.
- Horse + pony + saddle = ?
- Skin tone + eye color = ?
You could give numerical values to the qualitative variables so that you can add them. For example,
- Horse = 1
- Pony = 2
- Saddle = 3
Assigning a number doesn't make them quantitative variables, but it does allow you to perform calculations.
Quantitative vs Qualitative?
If you aren't sure what category your variables fall into (qualitative or quantitative), try this simple method to figure it out:
Step 1: Give the items in your set a category like “hair color” or “types of rice” or “clothing colors”. The name of the category is not important.
Step 2: Order the items in your category. Anything with numbers (like rolls of money, grade point averages, or car years) can be ordered. If you find this impossible, then you have a qualitative item.
Step 3: Make sure you haven’t added extra information. For example, you could rank shoe brands by popularity or how much they cost, but popularity and cost are separate from “make of shoe.” If the item is “shoes,” it’s qualitative. If the item is “cost of shoes,” it’s quantitative.
Calculus is usually defined as the quantitative study of change, using traditional algebraic methods, proportions, and ratios. However, many authors (including Piaget, 1946/1970; Nemirovsky, Tierney & Ogonowski, 1993; Kaput, 1994; Stroup, 2002; Confrey & Smith, 1995, as cited in Jerde & Wilensky, 2010) have described "qualitative calculus" as understanding and reasoning about rates of change and accumulation without the traditional mathematical underpinnings.
You can code a categorical variable to make it look like a quantitative one. For example, you could code eye color as 1:blue, 2:brown, 3: green or 4:hazel. Although this can be useful for data analysis, it doesn’t become a quantitative variable because you assigned it a number.
What Can we do with Qualitative Calculus Today?
Dodge, Y.; Cox, D.; Commenges, D.; Davidson, A; Solomon, P.; and Wilson, S. (Eds.). The Oxford Dictionary of Statistical Terms, 6th ed. New York: Oxford University Press, 2006.
Jerde, M. & Wilensky, W. Qualitative Calculus of Systems: Exploring Students’ Understandings of Rate of Change and Accumulation in Multiagent Systems.
Gurney, D. Math 241 Course Notes. Qualitative Versus Quantitative. Retrieved from https://www2.southeastern.edu/Academics/Faculty/dgurney/Math241/StatTopics/QualVsQuant.htm on September 12, 2019
Australian Bureau of Statistics. Statistical Language- Quantitative and Qualitative Data. Retrieved from https://www.abs.gov.au/websitedbs/a3121120.nsf/home/statistical+language+-+quantitative+and+qualitative+data on September 13, 2019
Australian Bureau of Statistics. Statistical Language - What are Variables? Retrieved from https://www.abs.gov.au/websitedbs/a3121120.nsf/home/statistical+language+-+what+are+variables on September 13, 2019
West Virginia University. Lesson 5: Expressing the Relationship between Independent and Dependent Variable. Retrieved January 17, 2019 from http://nextgen.wvnet.edu/Courses/lesson.php?c=8&u=51&l=325&t=Lesson
Stephanie Glen. "Independent and Dependent Variable (Calculus)" From CalculusHowTo.com: Calculus for the rest of us! https://www.calculushowto.com/calculus-definitions/independent-and-dependent-variable/
IMPORTANT NOTE: THESE NOTES ARE STILL UNDER DEVELOPMENT. PLEASE WAIT UNTIL AFTER THE LECTURE BEFORE HANDING ANYTHING IN OR TREATING THE NOTES AS READY TO READ.
Recall the construction of a cartesian product of two sets: S × T = {(s, t) : s ∈ S, t ∈ T}. We have functions p1 : S × T → S and p2 : S × T → T extracting the two sets from the product, and we can take any two functions f : V → S and g : V → T and take them together to form a function ⟨f, g⟩ : V → S × T. Similarly, we can form the type of pairs of Haskell types: (a, b).
An element of the pair is completely determined by the two elements included in it. Hence, if we have a pair of generalized elements q1 : V → A and q2 : V → B, we can find a unique generalized element q : V → A × B such that the projection maps applied to this give us the original elements back.
This argument indicates to us a possible definition that avoids talking about elements in sets in the first place, and we are led to the
Definition A product of two objects A, B in a category C is an object A × B equipped with maps p1 : A × B → A and p2 : A × B → B such that for any other object V with maps q1 : V → A and q2 : V → B, there is a unique map q : V → A × B such that q1 = p1 ∘ q and q2 = p2 ∘ q, i.e. such that the corresponding diagram commutes.
Recall the construction of a cartesian product: for sets S, T, the set S × T = {(s, t) : s ∈ S, t ∈ T}.
The cartesian product is one of the canonical ways to combine sets with each other. This is how we build binary operations, and higher ones - as well as how we formally define functions, partial functions and relations in the first place.
This, too, is how we construct vector spaces: recall that ℝ^n is built out of n-tuples of elements from ℝ, with pointwise operations. This construction reoccurs all over the place - sets with structure almost always have the structure carry over to products by pointwise operations.
The product of sets is determined by the projection maps p1 : S × T → S and p2 : S × T → T. You know any element of S × T by knowing what its two coordinates are, and any element of S together with an element of T determines exactly one element of S × T.
Given the cartesian product in sets, the important thing about the product is that we can extract both parts, and doing so preserves any structure present, since the structure is defined pointwise.
This is what we use to define what we want to mean by products in a categorical setting.
Definition Let C be a category. The product of two objects A, B is an object A × B equipped with maps p1 : A × B → A and p2 : A × B → B such that any other object V with maps q1 : V → A and q2 : V → B has a unique map q : V → A × B such that both maps from V factor through p1, p2.
In the category Set, the unique map from V to A × B would be given by q(v) = (q1(v), q2(v)).
The uniqueness requirement is what, in the theoretical setting, forces the product to be what we expect it to be - pairing of elements with no additional changes, preserving as much of the structure as we possibly can make it preserve.
In the Haskell category, the product is simply the Pair type:
type Product a b = (a, b)
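To spell this out, here is a minimal Haskell sketch of the two projections and of the unique map from the universal property, written against the Product type above (the names p1, p2 and pair are mine, not fixed by the notes):

p1 :: Product a b -> a
p1 (x, _) = x

p2 :: Product a b -> b
p2 (_, y) = y

-- the unique map q : V -> A × B induced by q1 : V -> A and q2 : V -> B
pair :: (v -> a) -> (v -> b) -> v -> Product a b
pair q1 q2 v = (q1 v, q2 v)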
Recall from the first lecture, the product construction on categories: objects are pairs of objects, morphisms are pairs of morphisms, identity morphisms are pairs of identity morphisms, and composition is componentwise.
This is, in fact, the product construction applied to Cat - or even to CAT: we get functors P1,P2 picking out the first and second components, and everything works out exactly as in the cases above.
We keep writing "the product" here. The justification for this is the following: Theorem If P and P′ are both product objects for the pair (A, B), then they are isomorphic.
Proof Consider the diagram: (( Diagram )) Both the vertical maps have unique existence, by the defining property of the product. Hence the composition of these two maps is an endo-map of P (P′) such that both projections factor through this endo-map. However, the identity map 1P (1P′) is also such an endo-map, and again, by the definition of the product, a map to P (P′) that the projections factor through is uniquely determined. Hence the composition is the identity, and this argument holds, mutatis mutandis, for the other composition. Hence these vertical maps are isomorphisms, inverse to each other, and thus P, P′ are isomorphic. QED.
The other thing you can do in a Haskell data type declaration looks like this:
data Coproduct a b = A a | B b
This type provides us with functions
A :: a -> Coproduct a b
B :: b -> Coproduct a b
and hence looks quite like a dual to the product construction, in that the functions the type guarantees point in the reverse direction from the product's projection arrows.
So, maybe what we want to do is to simply dualize the entire definition?
Definition Let C be a category. The coproduct of two objects A, B is an object A + B equipped with maps i1 : A → A + B and i2 : B → A + B such that for any other object V with maps v1 : A → V and v2 : B → V, there is a unique map v : A + B → V such that v1 = v ∘ i1 and v2 = v ∘ i2.
In the Haskell case, the maps i1, i2 are the data constructors A, B. And indeed, this Coproduct, the union type construction, is the type which guarantees inclusion of the source types, but with minimal additional assumptions on the type.
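As a small sketch, the unique map out of the coproduct can be written directly against the Coproduct type above (the name copair is mine; it plays the same role for Coproduct that the Prelude function either plays for Either):

copair :: (a -> v) -> (b -> v) -> Coproduct a b -> v
copair v1 _ (A x) = v1 x
copair _ v2 (B y) = v2 y

-- copair satisfies copair v1 v2 . A == v1 and copair v1 v2 . B == v2, matching the defining equations.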
In the category of sets, the coproduct construction is one where we can embed both sets into the coproduct, faithfully, and the result has no additional structure beyond that. Thus, the coproduct in Set is the disjoint union of the included sets: both sets are included without identifications made, and no extra elements are introduced.
Proposition If C,C' are both coproducts for some A,B, then they are isomorphic.
The proof is almost exactly the same as the proof for the product case.
- Diagram definition
- Disjoint union in Set
- Coproduct of categories construction
- Union types
3 Algebra of datatypes
Recall from [[User:Michiexile/MATH198/Lecture_3|Lecture 3]] that we can consider endofunctors as container datatypes. Some of the more obvious such container datatypes include:
data One a = Empty
data T a = T a

These being the data type that has only one single element (here named One, playing the role of the type written 1 below) and the data type that has exactly one value contained. Using these, we can generate a whole slew of further datatypes. First off, we can generate a data type with any finite number of elements as 1 + 1 + ... + 1 (n times). Remember that the coproduct construction for data types allows us to know which summand of the coproduct a given part is in, so the single elements in all the summands remain distinguishable.
Furthermore, we can note that 1 × T ≅ T, with the isomorphism given by the maps
f :: (One a, T a) -> T a
f (Empty, T x) = T x

g :: T a -> (One a, T a)
g (T x) = (Empty, T x)
Thus we have the capacity to add and multiply types with each other. We can verify, for any types A, B, C, the familiar algebraic laws, such as A + B ≅ B + A, A × B ≅ B × A, and A × (B + C) ≅ (A × B) + (A × C).
We can thus make sense of types like T³ + 2T² (either a triple of single values, or one out of two tagged pairs of single values).
This allows us to start working out a calculus of data types with versatile expressive power. We can produce recursive data type definitions by using equations to define data types, which then allow a direct translation back into Haskell data type definitions, such as the list type, List = 1 + T * List, sketched below.
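A minimal Haskell rendering of that equation (the standard recursive list type):

data List t = Nil | Cons t (List t)

-- Nil corresponds to the summand 1, and Cons corresponds to the summand T * List.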
The real power of this way of rewriting types comes in the recognition that we can use algebraic methods to reason about our data types. For instance:
List = 1 + T * List
     = 1 + T * (1 + T * List)
     = 1 + T * 1 + T * T * List
     = 1 + T + T * T * List
so a list is either empty, contains one element, or contains at least two elements. Using ideas from the theory of power series, or from continued fractions, we can start analyzing the data types using steps along the way that seem completely bizarre, but arriving at important properties. Again, an easy example for illustration:
List = 1 + T * List              -- and thus
List - T * List = 1              -- even though (-) doesn't make sense for data types
(1 - T) * List = 1               -- still ignoring that (-)...
List = 1 / (1 - T)               -- even though (/) doesn't make sense for data types
     = 1 + T + T*T + T*T*T + ... -- by the geometric series identity
and hence, we can conclude - using formally algebraic steps in between - that a list by the given definition consists of either an empty list, a single value, a pair of values, three values, etc.
At this point, I'd recommend anyone interested in more perspectives on this approach to data types, and things one may do with them, to read the following references:
3.1 Blog posts
3.2 Research papers
- d for data types
- 7 trees into 1
- What are the products in the category C(P) of a poset P? What are the coproducts?
- Prove that any two coproducts are isomorphic.
- Write down the type declaration for at least two of the example data types from the section on the algebra of datatypes, and write a Functor implementation for each.
What are statistics and probability?
Welcome to IXL's probability and statistics page. We offer fun, unlimited practice in 179 different probability and statistics skills, plus probability and statistics activities for middle school and high school. Find out more about Elsevier's publications and latest news in the field of statistics and probability. Probability is how likely something is to happen. Many events can't be predicted with total certainty; the best we can say is how likely they are to happen, using the idea of probability. When a coin is tossed, there are two possible outcomes.
Successfully working your way through probability problems means understanding some basic rules of probability along with discrete and continuous probability distributions; use some helpful study tips so you're well prepared to take a probability exam. Probability is the mathematical language of randomness, which enables you to reason about, or make predictive statements about, outcomes of physical systems or processes that have randomness or uncertainty; statistics works the other way: it describes the data. This online course is an introduction to statistics for those with little or no prior exposure to basic statistics, using a simulation/resampling approach. Probability and odds are two basic statistical terms used to describe the likeliness that an event will occur. They are often used interchangeably in casual conversation or even in published material; however, they are not mathematically equivalent because they are looking at likeliness in different contexts. Learn statistics and probability for free: everything you'd want to know about descriptive and inferential statistics, with a full curriculum of exercises and videos. CAHSEE on Target (UC Davis School University Partnerships), Student Workbook: Statistics & Probability: the mean can also be a decimal; look at the next example.
Prob & stat vocab: a probability and statistics vocabulary list (definitions for middle school teachers). Bar graph: a diagram representing the frequency distribution for nominal or discrete data; it consists of a sequence of bars, or rectangles, corresponding to the possible values. Introduction: why have probability in a statistics textbook? Very little in mathematics is truly self-contained; many branches of mathematics touch and interact with one another, and the fields of probability and statistics are no different.
Solving probability problems: how to find the probability of a sample point and the probability of an event, including probability examples with solutions. High school statistics & probability, introduction: decisions or predictions are often based on data, numbers in context. These decisions or predictions would be easy if the data always sent a clear message, but the message is often obscured by variability. This syllabus section provides information on course meeting times, topics, learning objectives, basic course structure, collaboration policy, and grading. Probability & statistics overview: this course introduces students to the basic concepts and logic of statistical reasoning and gives students introductory-level practical ability to choose, generate, and properly interpret appropriate descriptive and inferential methods.
A summary of the lessons available in the statistics and probability section of the site includes pie charts, histograms, mean, median, and mode, among others. Further concepts in probability: the study of further concepts in probability. Put simply, a statistic is a fact or piece of information that is expressed as a number or percentage; the facts and figures that are collected and examined for information on a given subject are statistics. Probability is the likelihood of something happening or being true. An online course for K-8 teachers covers mathematics content in data analysis, statistics, and probability.
Statistics and probability theory are widely used in areas as diverse as golf, law, and medicine to ascertain the likelihood of future events. Probability is starting with an animal and figuring out what footprints it will make; statistics is seeing a footprint and guessing the animal. Probability is straightforward: you have the bear, so you measure the foot size and the leg length, and you can deduce the footprints. Probability is related to statistics in a direct manner: when one is doing research for statistics, probability has to be used, especially in sampling a small region.
- Statistics & probability, grade 6: develop understanding of statistical variability. CCSS.Math.Content.6.SP.A.1: recognize a statistical question as one that anticipates variability in the data related to the question and accounts for it in the answers.
- Statistics and probability tutorial covers introduction, descriptive statistics, grouped frequencies and graphical descriptions, probability distributions of discrete variables, probability distributions of continuous variables, the normal distribution, sampling and combination of variables.
- Fundamental probability and statistics: "There are known knowns; these are things we know that we know. There are known unknowns; that is to say, there are things that we know we don't know."
CAHSEE on Target (UC Davis School University Partnerships), Answer Key: Statistics & Probability, Introduction to the CAHSEE: the CAHSEE stands for the California High School Exit Exam. Data Analysis, Statistics, and Probability Mastery, Chapter Ten: do not be intimidated by this section; we will give you all the tools you need to succeed. Common Core math: statistics and probability practice questions. Statistics and probability: from Term 1 2017, Victorian government and Catholic schools will use the new Victorian Curriculum F-10.
In music theory, an interval is the difference between two pitches. An interval may be described as horizontal, linear, or melodic if it refers to successively sounding tones, such as two adjacent pitches in a melody, and vertical or harmonic if it pertains to simultaneously sounding tones, such as in a chord.
In Western music, intervals are most commonly differences between notes of a diatonic scale. The smallest of these intervals is a semitone. Intervals smaller than a semitone are called microtones. They can be formed using the notes of various kinds of non-diatonic scales. Some of the very smallest ones are called commas, and describe small discrepancies, observed in some tuning systems, between enharmonically equivalent notes such as C♯ and D♭. Intervals can be arbitrarily small, and even imperceptible to the human ear.
In physical terms, an interval is the ratio between two sonic frequencies. For example, any two notes an octave apart have a frequency ratio of 2:1. This means that successive increments of pitch by the same interval result in an exponential increase of frequency, even though the human ear perceives this as a linear increase in pitch. For this reason, intervals are often measured in cents, a unit derived from the logarithm of the frequency ratio.
In Western music theory, the most common naming scheme for intervals describes two properties of the interval: the quality (perfect, major, minor, augmented, diminished) and number (unison, second, third, etc.). Examples include the minor third or perfect fifth. These names identify not only the difference in semitones between the upper and lower notes, but also how the interval is spelled. The importance of spelling stems from the historical practice of differentiating the frequency ratios of enharmonic intervals such as G–G♯ and G–A♭.
- 1 Size
- 2 Main intervals
- 3 Interval number and quality
- 4 Shorthand notation
- 5 Inversion
- 6 Classification
- 7 Minute intervals
- 8 Compound intervals
- 9 Intervals in chords
- 10 Size of intervals used in different tuning systems
- 11 Interval root
- 12 Interval cycles
- 13 Alternative interval naming conventions
- 14 Pitch-class intervals
- 15 Generic and specific intervals
- 16 Generalizations and non-pitch uses
- 17 See also
- 18 Notes
- 19 References
- 20 External links
Size
The size of an interval (also known as its width or height) can be represented using two alternative and equivalently valid methods, each appropriate to a different context: frequency ratios or cents.
The size of an interval between two notes may be measured by the ratio of their frequencies. When a musical instrument is tuned using a just intonation tuning system, the size of the main intervals can be expressed by small-integer ratios, such as 1:1 (unison), 2:1 (octave), 3:2 (perfect fifth), 4:3 (perfect fourth), 5:4 (major third), 6:5 (minor third). Intervals with small-integer ratios are often called just intervals, or pure intervals.
Most commonly, however, musical instruments are nowadays tuned using a different tuning system, called 12-tone equal temperament. As a consequence, the size of most equal-tempered intervals cannot be expressed by small-integer ratios, although it is very close to the size of the corresponding just intervals. For instance, an equal-tempered fifth has a frequency ratio of 2^(7/12):1, approximately equal to 1.498:1, or 2.997:2 (very close to 3:2). For a comparison between the size of intervals in different tuning systems, see section Size in different tuning systems.
The standard system for comparing interval sizes is with cents. The cent is a logarithmic unit of measurement. If frequency is expressed in a logarithmic scale, and along that scale the distance between a given frequency and its double (also called octave) is divided into 1200 equal parts, each of these parts is one cent. In twelve-tone equal temperament (12-TET), a tuning system in which all semitones have the same size, the size of one semitone is exactly 100 cents. Hence, in 12-TET the cent can be also defined as one hundredth of a semitone.
Mathematically, the size in cents of the interval from frequency f1 to frequency f2 is n = 1200 × log2(f2 / f1).
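As a quick sketch (not part of the original article), the same formula as a Haskell function:

cents :: Double -> Double -> Double
cents f1 f2 = 1200 * logBase 2 (f2 / f1)

-- cents 440 880 is 1200.0: an octave is 1200 cents, and each equal-tempered semitone is 100 cents.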
Main intervals
The table shows the most widely used conventional names for the intervals between the notes of a chromatic scale. A perfect unison (also known as perfect prime) is an interval formed by two identical notes. Its size is zero cents. A semitone is any interval between two adjacent notes in a chromatic scale, a whole tone is an interval spanning two semitones (for example, a major second), and a tritone is an interval spanning three tones, or six semitones (for example, an augmented fourth).[a] Rarely, the term ditone is also used to indicate an interval spanning two whole tones (for example, a major third), or more strictly as a synonym of major third.
Intervals with different names may span the same number of semitones, and may even have the same width. For instance, the interval from D to F♯ is a major third, while that from D to G♭ is a diminished fourth. However, they both span 4 semitones. If the instrument is tuned so that the 12 notes of the chromatic scale are equally spaced (as in equal temperament), these intervals also have the same width. Namely, all semitones have a width of 100 cents, and all intervals spanning 4 semitones are 400 cents wide.
The names listed here cannot be determined by counting semitones alone. The rules to determine them are explained below. Other names, determined with different naming conventions, are listed in a separate section. Intervals smaller than one semitone (commas or microtones) and larger than one octave (compound intervals) are introduced below.
Number of semitones | Minor, major, or perfect intervals | Short | Augmented or diminished intervals | Short | Widely used alternative names | Short
0 | Perfect unison[b] | P1 | Diminished second | d2
1 | Minor second | m2 | Augmented unison[b] | A1 | Semitone,[c] half tone, half step | S
2 | Major second | M2 | Diminished third | d3 | Tone, whole tone, whole step | T
3 | Minor third | m3 | Augmented second | A2
4 | Major third | M3 | Diminished fourth | d4
5 | Perfect fourth | P4 | Augmented third | A3
6 | — | — | Diminished fifth | d5 | Tritone[a] | TT
7 | Perfect fifth | P5 | Diminished sixth | d6
8 | Minor sixth | m6 | Augmented fifth | A5
9 | Major sixth | M6 | Diminished seventh | d7
10 | Minor seventh | m7 | Augmented sixth | A6
11 | Major seventh | M7 | Diminished octave | d8
12 | Perfect octave | P8 | Augmented seventh | A7
Interval number and quality
In Western music theory, an interval is named according to its number (also called diatonic number) and quality. For instance, major third (or M3) is an interval name, in which the term major (M) describes the quality of the interval, and third (3) indicates its number.
The number of an interval is the number of letter names it encompasses or staff positions it encompasses. Both lines and spaces (see figure) are counted, including the positions of both notes forming the interval. For instance, the interval C–G is a fifth (denoted P5) because the notes from C to the G above it encompass five letter names (C, D, E, F, G) and occupy five consecutive staff positions, including the positions of C and G. The table and the figure above show intervals with numbers ranging from 1 (e.g., P1) to 8 (e.g., P8). Intervals with larger numbers are called compound intervals.
There is a one-to-one correspondence between staff positions and diatonic-scale degrees (the notes of a diatonic scale)[d]. This means that interval numbers can be also determined by counting diatonic scale degrees, rather than staff positions, provided that the two notes that form the interval are drawn from a diatonic scale. Namely, C–G is a fifth because in any diatonic scale that contains C and G, the sequence from C to G includes five notes. For instance, in the A♭-major diatonic scale, the five notes are C–D♭–E♭–F–G (see figure). This is not true for all kinds of scales. For instance, in a chromatic scale, the notes from C to G are eight (C–C♯–D–D♯–E–F–F♯–G). This is the reason interval numbers are also called diatonic numbers, and this convention is called diatonic numbering.
If one adds any accidentals to the notes that form an interval, by definition the notes do not change their staff positions. As a consequence, any interval has the same interval number as the corresponding natural interval, formed by the same notes without accidentals. For instance, the intervals C–G♯ (spanning 8 semitones) and C♯–G (spanning 6 semitones) are fifths, like the corresponding natural interval C–G (7 semitones).
Notice that interval numbers represent an inclusive count of encompassed staff positions or note names, not the difference between the endpoints. In other words, one starts counting the lower pitch as one, not zero. For that reason, the interval C–C, a perfect unison, is called a prime (meaning "1"), even though there is no difference between the endpoints. Continuing, the interval C–D is a second, but D is only one staff position, or diatonic-scale degree, above C. Similarly, C–E is a third, but E is only two staff positions above C, and so on. As a consequence, joining two intervals always yields an interval number one less than their sum. For instance, the intervals C–E and E–G are thirds, but joined together they form a fifth (C–G), not a sixth. Similarly, a stack of three thirds, such as C–E, E–G, and G–B, is a seventh (C–B), not a ninth.
This scheme applies to intervals up to an octave (12 semitones). For larger intervals, see § Compound intervals below.
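As a small illustration of this inclusive counting, here is a hedged Haskell sketch (the helper names are mine) that computes the diatonic number of a simple ascending interval from the two letter names:

letterIndex :: Char -> Maybe Int
letterIndex c = lookup c (zip "CDEFGAB" [0 ..])

-- inclusive count of letter names from the lower note up to the higher one
intervalNumber :: Char -> Char -> Maybe Int
intervalNumber low high = do
  i <- letterIndex low
  j <- letterIndex high
  return ((j - i) `mod` 7 + 1)

-- intervalNumber 'C' 'G' == Just 5 and intervalNumber 'C' 'C' == Just 1 (a unison);
-- octaves and compound intervals are not distinguished from their simple counterparts here.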
The name of any interval is further qualified using the terms perfect (P), major (M), minor (m), augmented (A), and diminished (d). This is called its interval quality. It is possible to have doubly diminished and doubly augmented intervals, but these are quite rare, as they occur only in chromatic contexts. The quality of a compound interval is the quality of the simple interval on which it is based.
Perfect intervals are so-called because they were traditionally considered perfectly consonant, although in Western classical music the perfect fourth was sometimes regarded as a less than perfect consonance, when its function was contrapuntal. Conversely, minor, major, augmented or diminished intervals are typically considered less consonant, and were traditionally classified as mediocre consonances, imperfect consonances, or dissonances.
Within a diatonic scale[d] all unisons (P1) and octaves (P8) are perfect. Most fourths and fifths are also perfect (P4 and P5), with five and seven semitones respectively. One occurrence of a fourth is augmented (A4) and one fifth is diminished (d5), both spanning six semitones. For instance, in a C-major scale, the A4 is between F and B, and the d5 is between B and F (see table).
By definition, the inversion of a perfect interval is also perfect. Since the inversion does not change the pitch class of the two notes, it hardly affects their level of consonance (matching of their harmonics). Conversely, other kinds of intervals have the opposite quality with respect to their inversion. The inversion of a major interval is a minor interval, the inversion of an augmented interval is a diminished interval.
Major and minor
As shown in the table, a diatonic scale[d] defines seven intervals for each interval number, each starting from a different note (seven unisons, seven seconds, etc.). The intervals formed by the notes of a diatonic scale are called diatonic. Except for unisons and octaves, the diatonic intervals with a given interval number always occur in two sizes, which differ by one semitone. For example, six of the fifths span seven semitones. The other one spans six semitones. Four of the thirds span three semitones, the others four. If one of the two versions is a perfect interval, the other is called either diminished (i.e. narrowed by one semitone) or augmented (i.e. widened by one semitone). Otherwise, the larger version is called major, the smaller one minor. For instance, since a 7-semitone fifth is a perfect interval (P5), the 6-semitone fifth is called "diminished fifth" (d5). Conversely, since neither kind of third is perfect, the larger one is called "major third" (M3), the smaller one "minor third" (m3).
Within a diatonic scale,[d] unisons and octaves are always qualified as perfect, fourths as either perfect or augmented, fifths as perfect or diminished, and all the other intervals (seconds, thirds, sixths, sevenths) as major or minor.
Augmented and diminished
Augmented intervals are wider by one semitone than perfect or major intervals, while having the same interval number (i.e., encompassing the same number of staff positions). Diminished intervals, on the other hand, are narrower by one semitone than perfect or minor intervals of the same interval number. For instance, an augmented third such as C–E♯ spans five semitones, exceeding a major third (C–E) by one semitone, while a diminished third such as C♯–E♭ spans two semitones, falling short of a minor third (C–E♭) by one semitone.
The augmented fourth (A4) and the diminished fifth (d5) are the only augmented and diminished intervals that appear in diatonic scales[d] (see table).
Neither the number nor the quality of an interval can be determined by counting semitones alone. As explained above, the number of staff positions must be taken into account as well.
- A♭–B♯ is a second, as it encompasses two staff positions (A, B), and it is doubly augmented, as it exceeds a major second (such as A–B) by two semitones.
- A–C♯ is a third, as it encompasses three staff positions (A, B, C), and it is major, as it spans 4 semitones.
- A–D♭ is a fourth, as it encompasses four staff positions (A, B, C, D), and it is diminished, as it falls short of a perfect fourth (such as A–D) by one semitone.
- A♯–E♭♭ is a fifth, as it encompasses five staff positions (A, B, C, D, E), and it is triply diminished, as it falls short of a perfect fifth (such as A–E) by three semitones.
Number of semitones | Interval name | Staff positions
4 | doubly augmented second | A♭–B♯
4 | triply diminished fifth | A♯–E♭♭
Shorthand notation
Intervals are often abbreviated with a P for perfect, m for minor, M for major, d for diminished, A for augmented, followed by the interval number. The indications M and P are often omitted. The octave is P8, and a unison is usually referred to simply as "a unison" but can be labeled P1. The tritone, an augmented fourth or diminished fifth, is often TT. The interval qualities may also be abbreviated with perf, min, maj, dim, aug. Examples:
- m2 (or min2): minor second,
- M3 (or maj3): major third,
- A4 (or aug4): augmented fourth,
- d5 (or dim5): diminished fifth,
- P5 (or perf5): perfect fifth.
Inversion
A simple interval (i.e., an interval smaller than or equal to an octave) may be inverted by raising the lower pitch an octave or lowering the upper pitch an octave. For example, the fourth from a lower C to a higher F may be inverted to make a fifth, from a lower F to a higher C.
There are two rules to determine the number and quality of the inversion of any simple interval:
- The interval number and the number of its inversion always add up to nine (4 + 5 = 9, in the example just given).
- The inversion of a major interval is a minor interval, and vice versa; the inversion of a perfect interval is also perfect; the inversion of an augmented interval is a diminished interval, and vice versa; the inversion of a doubly augmented interval is a doubly diminished interval, and vice versa.
For example, the interval from C to the E♭ above it is a minor third. By the two rules just given, the interval from E♭ to the C above it must be a major sixth.
Since compound intervals are larger than an octave, "the inversion of any compound interval is always the same as the inversion of the simple interval from which it is compounded."
For intervals identified by their ratio, the inversion is determined by reversing the ratio and multiplying by 2. For example, the inversion of a 5:4 ratio is an 8:5 ratio.
For intervals identified by an integer number of semitones, the inversion is obtained by subtracting that number from 12.
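A compact Haskell sketch of these inversion rules (the function names are mine, not standard terminology):

invertNumber :: Int -> Int           -- rule for the interval number
invertNumber n = 9 - n

invertSemitones :: Int -> Int        -- rule for intervals counted in semitones
invertSemitones s = 12 - s

invertRatio :: Rational -> Rational  -- rule for intervals given as frequency ratios
invertRatio r = 2 / r

-- invertNumber 4 == 5 (a fourth inverts to a fifth), and invertRatio (5 / 4) == 8 / 5.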
Since an interval class is the lower number selected among the interval integer and its inversion, interval classes cannot be inverted.
Classification
Intervals can be described, classified, or compared with each other according to various criteria.
Melodic and harmonic
An interval can be described as
- Vertical or harmonic if the two notes sound simultaneously
- Horizontal, linear, or melodic if they sound successively.
Diatonic and chromatic
- A diatonic interval is an interval formed by two notes of a diatonic scale.
- A chromatic interval is a non-diatonic interval formed by two notes of a chromatic scale.
The table above depicts the 56 diatonic intervals formed by the notes of the C major scale (a diatonic scale). Notice that these intervals, as well as any other diatonic interval, can be also formed by the notes of a chromatic scale.
The distinction between diatonic and chromatic intervals is controversial, as it is based on the definition of diatonic scale, which is variable in the literature. For example, the interval B–E♭ (a diminished fourth, occurring in the harmonic C-minor scale) is considered diatonic if the harmonic minor scales are considered diatonic as well. Otherwise, it is considered chromatic. For further details, see the main article.
By a commonly used definition of diatonic scale[d] (which excludes the harmonic minor and melodic minor scales), all perfect, major and minor intervals are diatonic. Conversely, no augmented or diminished interval is diatonic, except for the augmented fourth and diminished fifth.
The distinction between diatonic and chromatic intervals may be also sensitive to context. The above-mentioned 56 intervals formed by the C-major scale are sometimes called diatonic to C major. All other intervals are called chromatic to C major. For instance, the perfect fifth A♭–E♭ is chromatic to C major, because A♭ and E♭ are not contained in the C major scale. However, it is diatonic to others, such as the A♭ major scale.
Consonant and dissonant
Consonance and dissonance are relative terms that refer to the stability, or state of repose, of particular musical effects. Dissonant intervals are those that cause tension and desire to be resolved to consonant intervals.
These terms are relative to the usage of different compositional styles.
- In 15th- and 16th-century usage, perfect fifths and octaves, and major and minor thirds and sixths were considered harmonically consonant, and all other intervals dissonant, including the perfect fourth, which by 1473 was described (by Johannes Tinctoris) as dissonant, except between the upper parts of a vertical sonority—for example, with a supporting third below ("6-3 chords"). In the common practice period, it makes more sense to speak of consonant and dissonant chords, and certain intervals previously considered dissonant (such as minor sevenths) became acceptable in certain contexts. However, 16th-century practice was still taught to beginning musicians throughout this period.
- Hermann von Helmholtz (1821–1894) defined a harmonically consonant interval as one in which the two pitches have an upper partial (an overtone) in common. This essentially defines all seconds and sevenths as dissonant, and the above thirds, fourths, fifths, and sixths as consonant.
- David Cope (1997) suggests the concept of interval strength, in which an interval's strength, consonance, or stability is determined by its approximation to a lower and stronger, or higher and weaker, position in the harmonic series. See also: Lipps–Meyer law and #Interval root
All of the above analyses refer to vertical (simultaneous) intervals.
Simple and compound
A simple interval is an interval spanning at most one octave (see Main intervals above). Intervals spanning more than one octave are called compound intervals, as they can be obtained by adding one or more octaves to a simple interval (see below for details).
Steps and skips
Linear (melodic) intervals may be described as steps or skips. A step, or conjunct motion, is a linear interval between two consecutive notes of a scale. Any larger interval is called a skip (also called a leap), or disjunct motion. In the diatonic scale,[d] a step is either a minor second (sometimes also called half step) or major second (sometimes also called whole step), with all intervals of a minor third or larger being skips.
For example, C to D (major second) is a step, whereas C to E (major third) is a skip.
More generally, a step is a smaller or narrower interval in a musical line, and a skip is a wider or larger interval, where the categorization of intervals into steps and skips is determined by the tuning system and the pitch space used.
Melodic motion in which the interval between any two consecutive pitches is no more than a step, or, less strictly, where skips are rare, is called stepwise or conjunct melodic motion, as opposed to skipwise or disjunct melodic motions, characterized by frequent skips.
Enharmonic intervals
Two intervals are considered enharmonic, or enharmonically equivalent, if they both contain the same pitches spelled in different ways; that is, if the notes in the two intervals are themselves enharmonically equivalent. Enharmonic intervals span the same number of semitones.
For example, the four intervals listed in the table below are all enharmonically equivalent, because the notes F♯ and G♭ indicate the same pitch, and the same is true for A♯ and B♭. All these intervals span four semitones.
Number of semitones | Interval name | Staff positions
4 | major third | F♯–A♯
4 | major third | G♭–B♭
4 | diminished fourth | F♯–B♭
4 | doubly augmented second | G♭–A♯
When played as isolated chords on a piano keyboard, these intervals are indistinguishable to the ear, because they are all played with the same two keys. However, in a musical context, the diatonic function of the notes these intervals incorporate is very different.
The discussion above assumes the use of the prevalent tuning system, 12-tone equal temperament ("12-TET"). But in other historic meantone temperaments, the pitches of pairs of notes such as F♯ and G♭ may not necessarily coincide. These two notes are enharmonic in 12-TET, but may not be so in another tuning system. In such cases, the intervals they form would also not be enharmonic. For example, in quarter-comma meantone, all four intervals shown in the example above would be different.
Minute intervals
There are also a number of minute intervals not found in the chromatic scale or labeled with a diatonic function, which have names of their own. They may be described as microtones, and some of them can also be classified as commas, as they describe small discrepancies, observed in some tuning systems, between enharmonically equivalent notes. In the following list, the interval sizes in cents are approximate.
- A Pythagorean comma is the difference between twelve justly tuned perfect fifths and seven octaves. It is expressed by the frequency ratio 531441:524288 (23.5 cents).
- A syntonic comma is the difference between four justly tuned perfect fifths and two octaves plus a major third. It is expressed by the ratio 81:80 (21.5 cents).
- A septimal comma is 64:63 (27.3 cents), and is the difference between the Pythagorean or 3-limit "7th" and the "harmonic 7th".
- A diesis is generally used to mean the difference between three justly tuned major thirds and one octave. It is expressed by the ratio 128:125 (41.1 cents). However, it has been used to mean other small intervals: see diesis for details.
- A diaschisma is the difference between three octaves and four justly tuned perfect fifths plus two justly tuned major thirds. It is expressed by the ratio 2048:2025 (19.6 cents).
- A schisma (also skhisma) is the difference between five octaves and eight justly tuned fifths plus one justly tuned major third. It is expressed by the ratio 32805:32768 (2.0 cents). It is also the difference between the Pythagorean and syntonic commas. (A schismic major third is a schisma different from a just major third, eight fifths down and five octaves up, F♭ in C.)
- A kleisma is the difference between six minor thirds and one tritave or perfect twelfth (an octave plus a perfect fifth), with a frequency ratio of 15625:15552 (8.1 cents).
- A septimal kleisma is six major thirds up, five fifths down and one octave up, with ratio 225:224 (7.7 cents).
- A quarter tone is half the width of a semitone, which is half the width of a whole tone. It is equal to exactly 50 cents.
Compound intervals
In general, a compound interval may be defined by a sequence or "stack" of two or more simple intervals of any kind. For instance, a major tenth (two staff positions above one octave), also called compound major third, spans one octave plus one major third.
Any compound interval can always be decomposed into one or more octaves plus one simple interval. For instance, a major seventeenth can be decomposed into two octaves and one major third, and this is the reason why it is called a compound major third, even when it is built by adding up four fifths.
The diatonic number DNc of a compound interval formed from n simple intervals with diatonic numbers DN1, DN2, ..., DNn, is determined by:
DNc = 1 + (DN1 − 1) + (DN2 − 1) + ... + (DNn − 1),
which can also be written as:
DNc = DN1 + DN2 + ... + DNn − (n − 1).
The quality of a compound interval is determined by the quality of the simple interval on which it is based. For instance, a compound major third is a major tenth (1+(8–1)+(3–1) = 10), or a major seventeenth (1+(8–1)+(8–1)+(3–1) = 17), and a compound perfect fifth is a perfect twelfth (1+(8–1)+(5–1) = 12) or a perfect nineteenth (1+(8–1)+(8–1)+(5–1) = 19). Notice that two octaves are a fifteenth, not a sixteenth (1+(8–1)+(8–1) = 15). Similarly, three octaves are a twenty-second (1+3*(8–1) = 22), and so on.
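A short Haskell sketch of this bookkeeping (a hypothetical helper, matching the worked examples above):

-- diatonic number of a compound interval stacked from simple intervals
-- whose diatonic numbers are given in the list
compoundNumber :: [Int] -> Int
compoundNumber dns = 1 + sum (map (subtract 1) dns)

-- compoundNumber [8, 3] == 10 (a major tenth) and compoundNumber [8, 8, 5] == 19 (a perfect nineteenth).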
Main compound intervals
Number of semitones | Minor, major, or perfect intervals | Short | Augmented or diminished intervals | Short
13 | Minor ninth | m9 | Augmented octave | A8
14 | Major ninth | M9 | Diminished tenth | d10
15 | Minor tenth | m10 | Augmented ninth | A9
16 | Major tenth | M10 | Diminished eleventh | d11
17 | Perfect eleventh | P11 | Augmented tenth | A10
19 | Perfect twelfth or Tritave | P12 | Diminished thirteenth | d13
20 | Minor thirteenth | m13 | Augmented twelfth | A12
21 | Major thirteenth | M13 | Diminished fourteenth | d14
22 | Minor fourteenth | m14 | Augmented thirteenth | A13
23 | Major fourteenth | M14 | Diminished fifteenth | d15
24 | Perfect fifteenth or Double octave | P15 | Augmented fourteenth | A14
It is also worth mentioning here the major seventeenth (28 semitones)—an interval larger than two octaves that can be considered a multiple of a perfect fifth (7 semitones) as it can be decomposed into four perfect fifths (7 × 4 = 28 semitones), or two octaves plus a major third (12 + 12 + 4 = 28 semitones). Intervals larger than a major seventeenth seldom come up, most often being referred to by their compound names, for example "two octaves plus a fifth" rather than "a 19th".
Intervals in chords
Chords are sets of three or more notes. They are typically defined as the combination of intervals starting from a common note called the root of the chord. For instance a major triad is a chord containing three notes defined by the root and two intervals (major third and perfect fifth). Sometimes even a single interval (dyad) is considered a chord. Chords are classified based on the quality and number of the intervals that define them.
Chord qualities and interval qualities
The main chord qualities are: major, minor, augmented, diminished, half-diminished, and dominant. The symbols used for chord quality are similar to those used for interval quality (see above). In addition, + or aug is used for augmented, ° or dim for diminished, ø for half diminished, and dom for dominant (the symbol − alone is not used for diminished).
Deducing component intervals from chord names and symbols
The main rules to decode chord names or symbols are summarized below. Further details are given at Rules to decode chord names and symbols.
- For 3-note chords (triads), major or minor always refer to the interval of the third above the root note, while augmented and diminished always refer to the interval of the fifth above root. The same is true for the corresponding symbols (e.g., Cm means Cm3, and C+ means C+5). Thus, the terms third and fifth and the corresponding symbols 3 and 5 are typically omitted. This rule can be generalized to all kinds of chords,[e] provided the above-mentioned qualities appear immediately after the root note, or at the beginning of the chord name or symbol. For instance, in the chord symbols Cm and Cm7, m refers to the interval m3, and 3 is omitted. When these qualities do not appear immediately after the root note, or at the beginning of the name or symbol, they should be considered interval qualities, rather than chord qualities. For instance, in CmM7 (minor major seventh chord), m is the chord quality and refers to the m3 interval, while M refers to the M7 interval. When the number of an extra interval is specified immediately after chord quality, the quality of that interval may coincide with chord quality (e.g., CM7 = CMM7). However, this is not always true (e.g., Cm6 = CmM6, C+7 = C+m7, CM11 = CMP11).[e] See main article for further details.
- Without contrary information, a major third interval and a perfect fifth interval (major triad) are implied. For instance, a C chord is a C major triad, and the name C minor seventh (Cm7) implies a minor 3rd by rule 1, a perfect 5th by this rule, and a minor 7th by definition (see below). This rule has one exception (see next rule).
- When the fifth interval is diminished, the third must be minor.[f] This rule overrides rule 2. For instance, Cdim7 implies a diminished 5th by rule 1, a minor 3rd by this rule, and a diminished 7th by definition (see below).
- Names and symbols that contain only a plain interval number (e.g., “seventh chord”) or the chord root and a number (e.g., “C seventh”, or C7) are interpreted as follows:
- If the number is 2, 4, 6, etc., the chord is a major added tone chord (e.g., C6 = CM6 = Cadd6) and contains, together with the implied major triad, an extra major 2nd, perfect 4th, or major 6th (see names and symbols for added tone chords).
- If the number is 7, 9, 11, 13, etc., the chord is dominant (e.g., C7 = Cdom7) and contains, together with the implied major triad, one or more of the following extra intervals: minor 7th, major 9th, perfect 11th, and major 13th (see names and symbols for seventh and extended chords).
- If the number is 5, the chord (technically not a chord in the traditional sense, but a dyad) is a power chord. Only the root, a perfect fifth and usually an octave are played.
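As a rough illustration of how rules 1-4 interact, here is a minimal Python sketch; the dictionaries, function name, and the handful of symbols covered are illustrative assumptions, not a general chord parser.

```python
# A minimal sketch of rules 1-4 above, covering only a few common symbols.

TRIADS = {              # rule 1: the quality right after the root names the third/fifth
    "":    ["M3", "P5"],    # rule 2: no quality implies a major triad
    "m":   ["m3", "P5"],
    "+":   ["M3", "A5"],
    "dim": ["m3", "d5"],    # rule 3: a diminished fifth forces a minor third
}

EXTENSIONS = {          # rule 4 and the definitions quoted above
    "6":    ["M6"],         # added-tone chord: C6 = CM6
    "7":    ["m7"],         # a plain 7 means a dominant seventh: C7 = Cdom7
    "M7":   ["M7"],         # CM7 = CMM7 (major triad plus major seventh)
    "dim7": ["d7"],         # Cdim7 adds a diminished seventh
}

def decode(symbol: str) -> list[str]:
    """Return the component intervals above the root for a simple chord symbol, e.g. 'Cm7'."""
    rest = symbol[1:]                      # drop the root letter (e.g. 'C')
    for quality in ("dim", "m", "+", ""):  # longest match first
        if rest.startswith(quality):
            ext = rest[len(quality):]
            intervals = list(TRIADS[quality])
            key = quality + ext if quality + ext in EXTENSIONS else ext
            intervals += EXTENSIONS.get(key, [])
            return intervals
    return []

print(decode("C"))      # ['M3', 'P5']
print(decode("Cm7"))    # ['m3', 'P5', 'm7']
print(decode("C+7"))    # ['M3', 'A5', 'm7']  (C+7 = C+m7)
print(decode("Cdim7"))  # ['m3', 'd5', 'd7']
print(decode("CM7"))    # ['M3', 'P5', 'M7']
```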
The table shows the intervals contained in some of the main chords (component intervals), and some of the symbols used to denote them. The interval qualities or numbers in boldface font can be deduced from chord name or symbol by applying rule 1. In symbol examples, C is used as chord root.
|Main chords||Component intervals|
|Major triad||CM, or Cmaj||M3||P5|
|Minor triad||Cm, or Cmin||m3||P5|
|Augmented triad||C+, or Caug||M3||A5|
|Diminished triad||C°, or Cdim||m3||d5|
|Dominant seventh chord||C7, or Cdom7||M3||P5||m7|
|Minor seventh chord||Cm7, or Cmin7||m3||P5||m7|
|Major seventh chord||CM7, or Cmaj7||M3||P5||M7|
|Augmented minor seventh chord||C+7, Caug7, C7♯5, or C7aug5||M3||A5||m7|
|Diminished seventh chord||C°7, or Cdim7||m3||d5||d7|
|Half-diminished seventh chord||Cø7, Cm7♭5, or Cm7dim5||m3||d5||m7|
Size of intervals used in different tuning systems
|Comparison of interval width (in cents)|
In this table, the interval widths used in four different tuning systems are compared. To facilitate comparison, just intervals as provided by 5-limit tuning (see symmetric scale n.1) are shown in bold font, and the values in cents are rounded to integers. Notice that in each of the non-equal tuning systems, by definition the width of each type of interval (including the semitone) changes depending on the note that starts the interval; this variability is intrinsic to just intonation and the other non-equal systems. In equal temperament, by contrast, all intervals of a given type have the same size, but apart from unisons and octaves none is precisely in tune with the just ratios; this is the price of using equidistant intervals in a 12-tone scale. For simplicity, for some types of interval the table shows only one value (the most often observed one).
In 1⁄4-comma meantone, by definition 11 perfect fifths have a size of approximately 697 cents (700 − ε cents, where ε ≈ 3.42 cents); since the average size of the 12 fifths must equal exactly 700 cents (as in equal temperament), the other one must have a size of about 738 cents (700 + 11ε, the wolf fifth or diminished sixth); 8 major thirds have size about 386 cents (400 − 4ε), 4 have size about 427 cents (400 + 8ε, actually diminished fourths), and their average size is 400 cents. In short, similar differences in width are observed for all interval types, except for unisons and octaves, and they are all multiples of ε (the difference between the 1⁄4-comma meantone fifth and the average fifth). A more detailed analysis is provided at 1⁄4-comma meantone#Size of intervals. Note that 1⁄4-comma meantone was designed to produce just major thirds, but only 8 of them are just (5:4, about 386 cents).
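As a quick numerical check (a sketch, not from the source text), the figures quoted above can be reproduced from the defining ratio of the 1⁄4-comma meantone fifth, the fourth root of 5, using the standard conversion of a frequency ratio to cents, 1200·log2(ratio).

```python
import math

def cents(ratio: float) -> float:
    """Size in cents of an interval with the given frequency ratio."""
    return 1200 * math.log2(ratio)

fifth = cents(5 ** 0.25)      # 1/4-comma meantone fifth: the fourth root of 5
eps = 700 - fifth             # deviation from the 700-cent equal-tempered fifth
wolf = 700 + 11 * eps         # wolf fifth (diminished sixth)
just_third = cents(5 / 4)     # the just major third the temperament targets
wide_third = 400 + 8 * eps    # the four "thirds" that are really diminished fourths

print(round(fifth, 2), round(eps, 2))   # 696.58 3.42
print(round(wolf, 2))                   # 737.64 (about 738 cents)
print(round(just_third, 2))             # 386.31 (about 386 cents)
print(round(wide_third, 2))             # 427.37 (about 427 cents)
```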
The Pythagorean tuning is characterized by smaller differences because they are multiples of a smaller ε (ε ≈ 1.96 cents, the difference between the Pythagorean fifth and the average fifth). Notice that here the fifth is wider than 700 cents, while in most meantone temperaments, including 1⁄4-comma meantone, it is tempered to a size smaller than 700. A more detailed analysis is provided at Pythagorean tuning#Size of intervals.
The 5-limit tuning system uses just tones and semitones as building blocks, rather than a stack of perfect fifths, and this leads to even more varied intervals throughout the scale (each kind of interval has three or four different sizes). A more detailed analysis is provided at 5-limit tuning#Size of intervals. Note that 5-limit tuning was designed to maximize the number of just intervals, but even in this system some intervals are not just (e.g., 3 fifths, 5 major thirds and 6 minor thirds are not just; also, 3 major and 3 minor thirds are wolf intervals).
The above-mentioned symmetric scale 1, defined in the 5-limit tuning system, is not the only method to obtain just intonation. It is possible to construct juster intervals or just intervals closer to the equal-tempered equivalents, but most of the ones listed above have been used historically in equivalent contexts. In particular, the asymmetric version of the 5-limit tuning scale provides a juster value for the minor seventh (9:5, rather than 16:9). Moreover, the tritone (augmented fourth or diminished fifth), could have other just ratios; for instance, 7:5 (about 583 cents) or 17:12 (about 603 cents) are possible alternatives for the augmented fourth (the latter is fairly common, as it is closer to the equal-tempered value of 600 cents). The 7:4 interval (about 969 cents), also known as the harmonic seventh, has been a contentious issue throughout the history of music theory; it is 31 cents flatter than an equal-tempered minor seventh. For further details about reference ratios, see 5-limit tuning#The justest ratios.
Although intervals are usually designated in relation to their lower note, David Cope and Hindemith both suggest the concept of interval root. To determine an interval's root, one locates its nearest approximation in the harmonic series. The root of a perfect fourth, then, is its top note because it is an octave of the fundamental in the hypothetical harmonic series. The bottom note of every odd diatonically numbered interval is the root, as is the top note of every even numbered interval. The root of a collection of intervals or a chord is thus determined by the interval root of its strongest interval.
As to its usefulness, Cope provides the example of the final tonic chord of some popular music being traditionally analyzable as a "submediant six-five chord" (added sixth chord, by popular terminology), or a first inversion seventh chord (possibly the dominant of the mediant V/iii). The interval root of the strongest interval of the chord (in first inversion, C–E–G–A), the perfect fifth C–G, is its bottom note C, the tonic.
Interval cycles, "unfold [i.e., repeat] a single recurrent interval in a series that closes with a return to the initial pitch class", and are notated by George Perle using the letter "C", for cycle, with an interval-class integer to distinguish the interval. Thus the diminished-seventh chord would be C3 and the augmented triad would be C4. A superscript may be added to distinguish between transpositions, using 0–11 to indicate the lowest pitch class in the cycle.
Alternative interval naming conventions
As shown below, some of the above-mentioned intervals have alternative names, and some of them take a specific alternative name in Pythagorean tuning, five-limit tuning, or meantone temperament tuning systems such as quarter-comma meantone. All the intervals with prefix sesqui- are justly tuned, and their frequency ratio, shown in the table, is a superparticular number (or epimoric ratio). The same is true for the octave.
Typically, a comma is a diminished second, but this is not always true (for more details, see Alternative definitions of comma). For instance, in Pythagorean tuning the diminished second is a descending interval (524288:531441, or about −23.5 cents), and the Pythagorean comma is its opposite (531441:524288, or about 23.5 cents). 5-limit tuning defines four kinds of comma, three of which meet the definition of diminished second, and hence are listed in the table below. The fourth one, called syntonic comma (81:80) can neither be regarded as a diminished second, nor as its opposite. See Diminished seconds in 5-limit tuning for further details.
|Generic names||Specific names|
|Quality and number||Other naming convention||Pythagorean tuning||5-limit tuning||1⁄4-comma meantone|
|0||perfect unison, or perfect prime||P1|
|0||diminished second||d2||lesser diesis (128:125), or greater diesis (648:625)|
|1||augmented unison, or augmented prime||A1|
|2||major second||M2||tone, whole tone, whole step||sesquioctavum (9:8)|
|3||minor third||m3||sesquiquintum (6:5)|
|4||major third||M3||sesquiquartum (5:4)|
|5||perfect fourth||P4||sesquitertium (4:3)|
|7||perfect fifth||P5||sesquialterum (3:2)|
|12||perfect octave||P8||duplex (2:1)|
Additionally, some cultures around the world have their own names for intervals found in their music. For instance, 22 kinds of intervals, called shrutis, are canonically defined in Indian classical music.
Up to the end of the 18th century, Latin was used as an official language throughout Europe for scientific and music textbooks. In music, many English terms are derived from Latin. For instance, semitone is from Latin semitonus.
The prefix semi- is typically used herein to mean "shorter", rather than "half". Namely, a semitonus, semiditonus, semidiatessaron, semidiapente, semihexachordum, semiheptachordum, or semidiapason, is shorter by one semitone than the corresponding whole interval. For instance, a semiditonus (3 semitones, or about 300 cents) is not half of a ditonus (4 semitones, or about 400 cents), but a ditonus shortened by one semitone. Moreover, in Pythagorean tuning (the most commonly used tuning system up to the 16th century), a semitritonus (d5) is smaller than a tritonus (A4) by one Pythagorean comma (about a quarter of a semitone).
|Number of semitones||Quality and number||Short||Latin|
|1||Augmented unison||A1||unisonus superflua|
|3||Augmented second||A2||tonus superflua|
|5||Augmented third||A3||ditonus superflua|
|6||Diminished fifth||d5||semidiapente, semitritonus|
|8||Minor sixth||m6||hexachordum minus, semitonus maius cum diapente, tetratonus|
|8||Augmented fifth||A5||diapente superflua|
|9||Major sixth||M6||hexachordum maius, tonus cum diapente|
|10||Minor seventh||m7||heptachordum minus, semiditonus cum diapente, pentatonus|
|10||Augmented sixth||A6||hexachordum superflua|
|11||Major seventh||M7||heptachordum maius, ditonus cum diapente|
|12||Augmented seventh||A7||heptachordum superflua|
In post-tonal or atonal theory, originally developed for equal-tempered European classical music written using the twelve-tone technique or serialism, integer notation is often used, most prominently in musical set theory. In this system, intervals are named according to the number of half steps, from 0 to 11, the largest interval class being 6.
In atonal or musical set theory, there are numerous types of intervals, the first being the ordered pitch interval, the distance between two pitches upward or downward. For instance, the interval from C upward to G is 7, and the interval from G downward to C is −7. One can also measure the distance between two pitches without taking into account direction with the unordered pitch interval, somewhat similar to the interval of tonal theory.
The interval between pitch classes may be measured with ordered and unordered pitch-class intervals. The ordered one, also called directed interval, may be considered the measure upwards, which, since we are dealing with pitch classes, depends on whichever pitch is chosen as 0. For unordered pitch-class intervals, see interval class.
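The following sketch (an illustration, not part of the source; it assumes MIDI-style note numbers, with middle C = 60) computes the interval types just described.

```python
def ordered_pitch_interval(a: int, b: int) -> int:
    """Directed distance in semitones from pitch a to pitch b (MIDI numbers)."""
    return b - a

def unordered_pitch_interval(a: int, b: int) -> int:
    return abs(b - a)

def ordered_pc_interval(a: int, b: int) -> int:
    """Directed (ordered) pitch-class interval, measured upward mod 12."""
    return (b - a) % 12

def interval_class(a: int, b: int) -> int:
    """Unordered pitch-class interval: the smaller of the two directed intervals (0..6)."""
    i = (b - a) % 12
    return min(i, 12 - i)

C4, G4 = 60, 67
print(ordered_pitch_interval(C4, G4))   #  7  (C up to G)
print(ordered_pitch_interval(G4, C4))   # -7  (G down to C)
print(ordered_pc_interval(7, 0))        #  5  (pitch class G up to pitch class C)
print(interval_class(7, 0))             #  5  (the interval class shared by P4 and P5)
```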
Generic and specific intervals
In diatonic set theory, specific and generic intervals are distinguished. Specific intervals are the interval class or number of semitones between scale steps or collection members, and generic intervals are the number of diatonic scale steps (or staff positions) between notes of a collection or scale.
Notice that staff positions, when used to determine the conventional interval number (second, third, fourth, etc.), are counted including the position of the lower note of the interval, while generic interval numbers are counted excluding that position. Thus, generic interval numbers are smaller by 1, with respect to the conventional interval numbers.
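A small sketch (hypothetical helpers, restricted to the white-note C-major scale) makes this off-by-one relationship concrete.

```python
C_MAJOR = ["C", "D", "E", "F", "G", "A", "B"]
SEMITONES_FROM_C = {"C": 0, "D": 2, "E": 4, "F": 5, "G": 7, "A": 9, "B": 11}

def generic_interval(low: str, high: str) -> int:
    """Number of scale steps between two notes, excluding the lower note's position."""
    return (C_MAJOR.index(high) - C_MAJOR.index(low)) % 7

def specific_interval(low: str, high: str) -> int:
    """Number of semitones between the two notes (within one octave)."""
    return (SEMITONES_FROM_C[high] - SEMITONES_FROM_C[low]) % 12

# C up to E: conventionally a "third"; generic interval 2, specific interval 4 (a major third)
print(generic_interval("C", "E"), specific_interval("C", "E"))   # 2 4
# F up to B: conventionally a "fourth"; generic interval 3, specific interval 6 (a tritone)
print(generic_interval("F", "B"), specific_interval("F", "B"))   # 3 6
```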
|Specific interval (number of semitones / interval class)||Generic interval||Diatonic name|
|6 / 6||3||Augmented fourth|
Generalizations and non-pitch uses
The term "interval" can also be generalized to other music elements besides pitch. David Lewin's Generalized Musical Intervals and Transformations uses interval as a generic measure of distance between time points, timbres, or more abstract musical phenomena.
See also
- Music and mathematics
- Circle of fifths
- List of pitch intervals
- List of meantone intervals
- Ear training
- Regular temperament
Notes
- The term tritone is sometimes used more strictly as a synonym of augmented fourth (A4).
- The perfect and the augmented unison are also known as perfect and augmented prime.
- The minor second (m2) is sometimes called diatonic semitone, while the augmented unison (A1) is sometimes called chromatic semitone.
- The expression diatonic scale is herein strictly defined as a 7-tone scale, which is either a sequence of successive natural notes (such as the C-major scale, C–D–E–F–G–A–B, or the A-minor scale, A–B–C–D–E–F–G) or any transposition thereof. In other words, a scale that can be written using seven consecutive notes without accidentals on a staff with a conventional key signature, or with no signature. This includes, for instance, the major and the natural minor scales, but does not include some other seven-tone scales, such as the melodic minor and the harmonic minor scales (see also Diatonic and chromatic).
- General rule 1 achieves consistency in the interpretation of symbols such as CM7, Cm6, and C+7. Some musicians legitimately prefer to think that, in CM7, M refers to the seventh, rather than to the third. This alternative approach is legitimate, as both the third and seventh are major, yet it is inconsistent, as a similar interpretation is impossible for Cm6 and C+7 (in Cm6, m cannot possibly refer to the sixth, which is major by definition, and in C+7, + cannot refer to the seventh, which is minor). Both approaches reveal only one of the intervals (M3 or M7), and require other rules to complete the task. Whatever decoding method is used, the result is the same (e.g., CM7 is always conventionally decoded as C–E–G–B, implying M3, P5, M7). The advantage of rule 1 is that it has no exceptions, which makes it the simplest possible approach to decode chord quality.
According to the two approaches, some may parse the major seventh chord symbol CM7 with M referring to the third (general rule 1: M refers to M3), and others with M referring to the seventh (alternative approach: M refers to M7). Fortunately, even the latter reading becomes compatible with rule 1 if CM7 is considered an abbreviation of CMM7, in which the first M is omitted. The omitted M is the quality of the third, and is deduced according to rule 2 (see above), consistently with the interpretation of the plain symbol C, which by the same rule stands for CM.
- All triads are tertian chords (chords defined by sequences of thirds), and a major third would produce in this case a non-tertian chord. Namely, the diminished fifth spans 6 semitones from root, thus it may be decomposed into a sequence of two minor thirds, each spanning 3 semitones (m3 + m3), compatible with the definition of tertian chord. If a major third were used (4 semitones), this would entail a sequence containing a major second (M3 + M2 = 4 + 2 semitones = 6 semitones), which would not meet the definition of tertian chord.
References
- Prout, Ebenezer (1903), "I-Introduction", Harmony, Its Theory and Practice (30th edition, revised and largely rewritten ed.), London: Augener; Boston: Boston Music Co., p. 1, ISBN 978-0781207836
- Lindley, Mark; Campbell, Murray; Greated, Clive. "Interval". In Deane L. Root. Grove Music Online. Oxford Music Online. Oxford University Press. (subscription required)
- Aldwell, E; Schachter, C.; Cadwallader, A., "Part 1: The Primary Materials and Procedures, Unit 1", Harmony and Voice Leading (4th ed.), Schirmer, p. 8, ISBN 978-0495189756
- Duffin, Ross W. (2007), "3. Non-keyboard tuning", How Equal Temperament Ruined Harmony (and Why You Should Care) (1st ed.), W. W. Norton, ISBN 978-0-393-33420-3
- "Prime (ii). See Unison" (from Prime. Grove Music Online. Oxford University Press. Accessed August 2013. (subscription required))
- Definition of Perfect consonance in Godfrey Weber's General music teacher, by Godfrey Weber, 1841.
- Kostka, Stephen; Payne, Dorothy (2008). Tonal Harmony, p. 21. First Edition, 1984.
- Prout, Ebenezer (1903). Harmony: Its Theory and Practice, 16th edition. London: Augener & Co. (facsimile reprint, St. Clair Shores, Mich.: Scholarly Press, 1970), p. 10. ISBN 0-403-00326-1.
- See for example William Lovelock, The Rudiments of Music (New York: St Martin's Press; London: G. Bell, 1957):[page needed], reprinted 1966, 1970, and 1976 by G. Bell, 1971 by St Martins Press, 1981, 1984, and 1986 London: Bell & Hyman. ISBN 9780713507447 (pbk). ISBN 9781873497203
- Drabkin, William (2001). "Fourth". The New Grove Dictionary of Music and Musicians, second edition, edited by Stanley Sadie and John Tyrrell. London: Macmillan Publishers.
- Helmholtz, Hermann L. F. On the Sensations of Tone as a Theoretical Basis for the Theory of Music Second English Edition translated by Ellis, Alexander J. (1885) reprinted by Dover Publications with new introduction (1954) ISBN 0-486-60753-4, page 182d "Just as the coincidences of the two first upper partial tones led us to the natural consonances of the Octave and Fifth, the coincidences of higher upper partials would lead us to a further series of natural consonances."
- Cope, David (1997). Techniques of the Contemporary Composer, pp. 40–41. New York, New York: Schirmer Books. ISBN 0-02-864737-8.
- Wyatt, Keith (1998). Harmony & Theory... Hal Leonard Corporation. p. 77. ISBN 0-7935-7991-0.
- Bonds, Mark Evan (2006). A History of Music in Western Culture, p.123. 2nd ed. ISBN 0-13-193104-0.
- Aikin, Jim (2004). A Player's Guide to Chords and Harmony: Music Theory for Real-World Musicians, p. 24. ISBN 0-87930-798-6.
- Károlyi, Otto (1965), Introducing Music, p. 63. Hammondsworth (England), and New York: Penguin Books. ISBN 0-14-020659-0.
- Hindemith, Paul (1934). The Craft of Musical Composition. New York: Associated Music Publishers. Cited in Cope (1997), p. 40–41.
- Perle, George (1990). The Listening Composer, p. 21. California: University of California Press. ISBN 0-520-06991-9.
- Gioseffo Zarlino, Le Istitutione harmoniche ... nelle quali, oltre le materie appartenenti alla musica, si trovano dichiarati molti luoghi di Poeti, d'Historici e di Filosofi, si come nel leggerle si potrà chiaramente vedere (Venice, 1558): 162.
- J. F. Niermeyer, Mediae latinitatis lexicon minus: Lexique latin médiéval–français/anglais: A Medieval Latin–French/English Dictionary, abbreviationes et index fontium composuit C. van de Kieft, adiuvante G. S. M. M. Lake-Schoonebeek (Leiden: E. J. Brill, 1976): 955. ISBN 90-04-04794-8.
- Robert De Handlo: The Rules, and Johannes Hanboys, The Summa: A New Critical Text and Translation, edited and translated by Peter M. Lefferts. Greek & Latin Music Theory 7 (Lincoln: University of Nebraska Press, 1991): 193fn17. ISBN 0803279345.
- Roeder, John. "Interval Class". In Deane L. Root. Grove Music Online. Oxford Music Online. Oxford University Press. (subscription required)
- Lewin, David (1987). Generalized Musical Intervals and Transformations, for example sections 3.3.1 and 5.4.2. New Haven: Yale University Press. Reprinted Oxford University Press, 2007. ISBN 978-0-19-531713-8
- Ockelford, Adam (2005). Repetition in Music: Theoretical and Metatheoretical Perspectives, p. 7. ISBN 0-7546-3573-2. "Lewin posits the notion of musical 'spaces' made up of elements between which we can intuit 'intervals'....Lewin gives a number of examples of musical spaces, including the diatonic gamut of pitches arranged in scalar order; the 12 pitch classes under equal temperament; a succession of time-points pulsing at regular temporal distances one time unit apart; and a family of durations, each measuring a temporal span in time units....transformations of timbre are proposed that derive from changes in the spectrum of partials..."
- Gardner, Carl E. (1912): Essentials of Music Theory, p. 38
- Encyclopædia Britannica, Interval
External links
- Lissajous Curves: Interactive simulation of graphical representations of musical intervals, beats, interference, vibrating strings
- Elements of Harmony: Vertical Intervals
Paleontology is a field that tries to answer questions about the past. Its goal is to reconstruct the evolution of life on Earth, and its methods are not unlike those used in other fields of science. Paleontologists use fossil records to understand how ancient organisms evolved into new forms over time.
They accomplish this by studying fossils preserved in rock strata, which can tell us about changes across species over millions of years. The reason paleontology differs from other sciences like astronomy and chemistry is that it studies historical processes—it looks at what happened in the past rather than what happens now or might happen in the future. It's also different from archaeology because it doesn't usually focus on humans but rather animals (and plants).
The fossil record is your data. It's what you're studying and it's the reason that you're interested in paleontology. You're using scientific methods to get answers from that data.
The scientific method is a series of steps:
- Data is collected (e.g., fossils)
- Data is analyzed (e.g., using morphometrics)
- Results are published (e.g., peer reviewed articles)
Once we locate fossils, we prepare them in the lab. The process starts with removing sedimentary rock from around the fossil. The goal is to reveal as much of the fossil as possible without damaging it, because once you break up a fossil, it's lost forever.
After removing all of the surrounding sedimentary rock, we often need to expose different parts of the specimen by chiseling away chunks of surrounding rock and clay. Then we use compounds like sodium carbonate (Na2CO3) or potassium feldspar (KAlSi3O8) that dissolve easily in water but not in organic solvents such as acetone or xylene, so that they can act like glue when they're applied directly onto exposed surfaces. Some fossils are still buried in rock.
This can be a challenge for paleontologists, but it's not impossible to crack open the rock and reveal what's inside. After all, we've been doing this for hundreds of years!
There are two main ways that paleontologists remove fossilized bones from their rocky tombs: chiseling and hammering. Chisels are used for very small rocks; hammers work better on larger pieces of stone. A hammer has two sides: one side has a blunt edge and one has a sharper edge. The blunt side is used to break off large pieces, while the sharp side helps break apart smaller rocks into smaller pieces so they can be removed more easily by hand or with another tool such as a chisel or toothbrush.

Once we clean a fossil, we usually want to make a cast of it. The process is called "casting" or "master molding" and involves making a mold around the fossil using some sort of material (often plaster). Usually this is done with something called alginate, which is like rubber that has been dissolved in hot water and then poured into the desired shape as soon as it cools.
Once you have your master mold made, you fill it with more liquid plaster (also known as investment) and let that dry. When it's fully dry, you use an air gun to blow off any excess material on top of the cast. You then take away all of the supports holding up your original fossil so that they don't show up in your final product. For example, if there were some kind of structure around where your dinosaur stood or sat while alive, those things will probably show up in the cast because they were attached!
The final result after removing all support structures should be a clean cast of the original fossil.
Analyzing a fossil can involve many different approaches and techniques.
- Shape analysis is a simple technique that can be used to compare the shapes of different fossils. It involves measuring and recording the lengths, widths, and angles of certain landmarks on an object. The locations of these landmarks are usually determined by reference to anatomical features such as the mid-line (center line) of bones or other structures like teeth.
- Landmarking is another approach that uses points on a fossil to define specific measurements. With this method, you can determine how far apart two parts of your specimen are from each other by measuring the distances between pairs of landmarks and converting them into consistent units (for example, centimeters).
- Statistical shape analysis is used when there are many different variables being considered at once; in other words, it's a way of analyzing data with multiple dimensions (e.g., length, width). This method involves calculating distances between points on an object and using those distances, together with statistical tools such as linear regression analysis, to place each specimen in a common coordinate space (a minimal landmark-distance sketch follows this list).
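Here is a minimal sketch of the landmark-distance idea from the list above; the coordinates are made up, and 2-D landmarks in consistent units are assumed.

```python
import numpy as np

# Hypothetical 2-D landmark coordinates (in centimeters) for one fossil specimen.
landmarks = np.array([
    [0.0, 0.0],   # tip of snout
    [4.2, 1.1],   # front of eye socket
    [6.8, 0.9],   # back of skull
])

def pairwise_distances(points: np.ndarray) -> np.ndarray:
    """Matrix of straight-line distances between every pair of landmarks."""
    diff = points[:, None, :] - points[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=-1))

print(pairwise_distances(landmarks).round(2))
```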
One way paleontologists analyze fossils is by measuring their shapes. An object's form is a function of both its size and its shape, so if you want to measure the shape of a fossil it's not enough just to take its measurements in two dimensions; instead, you need landmarks on the fossil that are meaningful for how it was used by its owner (usually an animal). Landmarks can be anything distinctive in appearance, such as holes or bumps, or even areas where there are distinct textures.
For example: If you were analyzing a trilobite, which has many eyes along the edges of its body, then early in your research process you would mark these eyes with a pencil so that when you later measure them with calipers they come out as part of your final dataset.
Shape is complicated, and we can't always just look at an animal and know exactly how its body will change. That's where computers come in! They can help us measure shape by doing many different kinds of calculations.
The most common type of computer program used to measure shape is called a "morphometric" program, or "morphometrics". Morphometric programs are mathematical algorithms that tell you things like how long something is or how tall it is (or maybe even whether it has back legs).
Morphometrics is the use of morphological measurements to characterize living and extinct organisms. Although it has been most commonly used in biology, paleontologists are increasingly utilizing this technique to quantify shape variation among fossil specimens and compare them with extant species. The main advantage of morphometric analyses over traditional taxonomic methods is that they can be performed independently of any comparative data on living species, which can often be difficult or impossible to obtain for many fossil species. The process typically involves taking multiple measurements from each fossil specimen and then using statistical analyses to compare those measurements with those taken from other fossils (or even modern animals).
The most common way that paleontologists measure shape variation is by using bilateral landmarks—these are small points on an object where two lines cross at 90° angles. This works well because most vertebrates have bilateral symmetry (meaning they're symmetric along their long axis), so it's easy to identify these landmarks without having to measure every point on a specimen individually (which would take forever). Typically, there are 19-36 landmarks used for each measurement. For example, if you were measuring the shape of the skull of a whale, you would mark the tip of its snout and then use that point as a landmark for all other measurements. Landmarks should be evenly spaced around the measurement area so that they can be easily identified and measured in an accurate way. They should also be easy to locate so they can be found again if needed—this is particularly important if only some measurements will be taken from one specimen due to limited availability of bones or fossils at certain museums or locations with collections stored away from public view.
First, you have to decide what's important to measure on your fossil. Landmarks can be anything, but usually they're anatomical features like snouts or eye sockets. The main thing is that they should be easy for someone else who doesn't know your specific fossil to find and count on their own.
Because different species have different body shapes, the number of landmarks will vary depending on what you're measuring. If you're trying to compare two different dinosaurs in length, then it probably makes sense to use more than one landmark (two snout tips might not be enough). But if all you want is an estimate of how big an organism was overall then one set of eye sockets may be enough!
To ensure consistency across specimens studied by other researchers later on, though, I would recommend using at most four landmarks: two midline skull points (one above each eye), one center point between those two skull points (marking the halfway point up the head), and one end point marking where the tip-to-tip distance ends at that center point (not counting any nose!).
Then you choose where on the fossil those important parts are and mark them with landmarks. After you've chosen your landmarks, you use software to map each landmark into a common coordinate system, localizing them (as sketched below). Once all of your landmarks are localized, you can start asking questions about your data!
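One simple way to localize landmarks into a shared coordinate system, sketched below, is to center each specimen's landmarks on their centroid and rescale to unit centroid size; this is a simplification of what dedicated morphometrics software does, not the author's exact workflow.

```python
import numpy as np

def localize(landmarks: np.ndarray) -> np.ndarray:
    """Center a specimen's landmarks on their centroid and scale to unit centroid size."""
    centered = landmarks - landmarks.mean(axis=0)
    centroid_size = np.sqrt((centered ** 2).sum())
    return centered / centroid_size

specimen_a = np.array([[0.0, 0.0], [4.2, 1.1], [6.8, 0.9]])   # made-up coordinates
specimen_b = specimen_a * 1.7 + 10.0                          # same shape, bigger and shifted
print(np.allclose(localize(specimen_a), localize(specimen_b)))  # True: size and position removed
```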
This is where things get fun! You can do things like:
- Make an outline of the fossil. The outlines of fossils are described by sets of numbers called "coefficients" and they're super important in certain types of studies (like morphometrics). If you have lots and lots of fossils in one place (like a bone bed), then this will be easy for you since their shapes are already pretty obvious. But sometimes fossils aren't so easy to see because they're covered with rock or sandstone or whatever else makes up most rocks on Earth today. So if this happens to be true for whatever fossil(s) you've chosen to study, don't worry too much about making an outline just yet; just worry about marking out as many landmarks as possible first before committing yourself too heavily into anything more time consuming than necessary!
The next time you see a fossil, take a moment to think about how much work went into its discovery. To make these specimens available to us, paleontologists must spend years searching for them and collecting data on their shape. Once they're in the lab, though, we still have much work ahead of us before we can understand what happened millions of years ago!
An emotion is a "complex reaction pattern, involving experiential, behavioral, and physiological elements, by which the individual attempts to deal with a personally significant matter of event." It arises without conscious effort and is either positive or negative in its valence.
Other closely related terms are:
- affect, a synonym for emotion
- affect display, external display of emotion
- disposition, referring to a durable differentiating characteristic of a person, a tendency to react to situations with a certain emotion
- feeling, which usually refers to the subjective, phenomenological aspect of emotion
- mood, which refers to an emotional state of duration intermediate between an emotion and a disposition
Emotion is derived from French émotion, from émouvoir, 'excite' based on Latin emovere, from e- (variant of ex-) 'out' and movere 'move'. "Motivation" is also derived from movere.
Definitions of emotion
Emotion is very complex, and the term has no single universally accepted definition. The study of emotions is part of psychology, neuroscience, and ethics.
According to Sloman, emotions are cognitive processes. Some authors emphasize the difference between human emotions and the affective behavior of animals.
We often talk about brains as information-processing systems, but any account of the brain that lacks an account of emotions, motivations, fears, and hopes is incomplete. Emotions are measurable physical responses to salient stimuli: the increased heartbeat and perspiration that accompany fear, the freezing response of a rat in the presence of a cat, or the extra muscle tension that accompanies anger. Feelings, on the other hand, are the subjective experiences that sometimes accompany these processes: the sensations of happiness, envy, sadness, and so on. Emotions seem to employ largely unconscious machinery—for example, brain areas involved in emotion will respond to angry faces that are briefly presented and then rapidly masked, even when subjects are unaware of having seen the face. Across cultures the expression of basic emotions is remarkably similar, and as Darwin observed, it is also similar across all mammals. There are even strong similarities in physiological responses among humans, reptiles, and birds when showing fear, anger, or parental love.
Modern views propose that emotions are brain states that quickly assign value or valence to outcomes and provide a simple plan of action. Thus, emotion can be viewed as a type of computation, a rapid, automatic summary that initiates appropriate actions. When a bear is galloping toward you, the rising fear directs your brain to do the right things (determining an escape route) instead of all the other things it could be doing (rounding out your grocery list). When it comes to perception, you can spot an object more quickly if it is, say, a spider rather than a roll of tape. In the realm of memory, emotional events are laid down differently by a parallel memory system involving a brain area called the amygdala.
One goal of emotional neuroscience is to understand the nature of the many disorders of emotion, depression being the most common and costly. Impulsive aggression and violence are also thought to be consequences of faulty emotion regulation.
The function of emotion (relations between: Emotion, Meta-emotion, and Reason)
Emotion is generally regarded by Western civilization as the antithesis of reason. This distinction stems from Western philosophy, specifically Cartesian dualism and modern interpretations of Stoicism, and is reflected in common phrases like "appeal to emotion" or "your emotions have taken over".
In Paul D. MacLean's Triune brain model, emotions are defined as the responses of the Mammalian cortex. Emotion competes with even more instinctive responses from the Reptilian cortex and the more logical, reasoning neocortex. However, current research on the neural circuitry of emotion suggests that emotion is an essential part of human decision-making and planning, and that the famous distinction made by Descartes between reason and emotion is not as clear as it seems.
Emotions can be unwanted by the individual experiencing them, or by other persons, groups of persons, organizations, sub-cultures, and civilizations such as Western civilization; in such cases the emotion is subjected to the individual's or someone else's discouraging meta-emotion about it, and may even be repressed by those meta-emotions. Thus one of the most distinctive, and perhaps challenging, facts about human beings is this potential for entanglement, or possibly opposition, between emotion, meta-emotion, will, and reason.
Some state that there is no empirical support for any generalization suggesting the antithesis between reason and emotion: indeed, anger or fear can often be thought of as a systematic response to observed facts. In any case, it is clear that the relation between logic and argument and emotion is one which merits careful study.
Emotion as the subject of scientific research has multiple dimensions: behavioral, physiological, subjective, and cognitive. Sloman argues that many emotions are side-effects of the operations of complex mechanisms (e.g. 'alarm' mechanisms) required in animals or machines with multiple motives and limited capacities and resources for coping with a changing and unpredictable world, just as 'thrashing' can sometimes occur as a side-effect of scheduling and memory management mechanisms required in a computer operating system for purposes other than producing thrashing. Such side effects are sometimes useful, but sometimes they are dysfunctional. Other theorists, often influenced by writings of Antonio Damasio argue that emotions themselves are necessary for any intelligent system (natural or artificial).
Psychiatrist William Glasser's theory of the human control system states that behavior is composed of four simultaneous components: deeds, ideas, emotions, and physiological states. He asserts that we choose the idea and deed and that the associated emotions and physiological states also occur but cannot be chosen independently. He calls his construct a total behavior to distinguish it from the common concept of behavior. He uses verbs to describe what is commonly seen as emotion. For example, he uses 'to depress' to describe the total behavior commonly known as depression which, to him, includes depressing ideas, actions, emotions, and physiological states. Glasser further asserts that internal choices (conscious or unconscious) cause emotions instead of external stimuli.
Many psychologists adopt the ABC model, which defines emotions in terms of three fundamental attributes: A. physiological arousal, B. behavioral expression (e.g. facial expressions), and C. conscious experience, the subjective feeling of an emotion. All three attributes are necessary for a full-fledged emotional event, though the intensity of each may vary greatly.
Robert Masters makes the following distinctions between affect, feeling and emotion: "As I define them, affect is an innately structured, non-cognitive evaluative sensation that may or may not register in consciousness; feeling is affect made conscious, possessing an evaluative capacity that is not only physiologically based, but that is often also psychologically (and sometimes relationally) oriented; and emotion is psychosocially constructed, dramatized feeling."
In pop culture there are sub-cultures which cultivate the expression of anger and rebelliousness even when their members are not really angry, encouraging each other to express the anger by internalizing meta-gladness about it. Encouragement (i.e. meta-gladness) and discouragement (i.e. psychological repression) of selected emotions - instead of mere awareness and equal interest in all emotions - can be considered an additional source of organizational climate, family dynamics, psychodynamics, personality traits, and of mental disorders, including depression among others.
Emotions in the Philosophy of Mind
In opposition to the traditional Philosophy of Mind, which has considered emotions only as a non-essential addition, at best giving a flavour to rational intellectual thought, authors in the naturalistic Philosophy of Mind inspired by prospects of building robots and other autonomous agents are starting to give emotions a central role as an indispensable constituent of adaptive agency (see DeLancey 2002/2004).
Emotions in Decision Making
There is increasing support for treating people's emotions as an information source in their decision making process.
Emotions in Philosophy
What is the relationship between reason and emotion?
4 Maccabees echoes nearly the same idea, and "philosophically" discusses reason versus emotion, arguing that if reason can rule the emotions that undermine self-control, it can also rule the emotions that stop people from acting justly (malice) and courageously (anger, fear and pain), and describes the primary emotions using a branching and farming analogy. In short:
- The two primary emotions are pleasure and pain, which can affect body or soul, and cause many effects. Pleasure can be preceded by desire and followed by delight. Pain can be preceded by fear and followed by grief. Anger embraces pleasure and pain. In pleasure is a malevolent tendency, causing complexity; in the soul it boasts, covets and craves honor, rivalry, and malice; in the body it causes careless eating, gluttony, and the greedy consumption of food.
- Summary: Pleasure and Pain are two plants growing from the body and the soul, and have many offshoots, each of which Reason weeds, prunes, ties up, waters, irrigates, and so tames the jungle of habits and emotions.
Such basic views of emotion have persisted for thousands of years, leading to ideas like the Age of Reason, the Age of Enlightenment (ironically scorned by many Christians) and logical positivism, and shaping the history of logic, reason and science from their roots to their latest stems. Conversely, emotionally driven people may experience reason as cold, irrational and evil, despite its benefits; there is little use in trying to refute beliefs that neither want nor claim to be reasonable.
Theoretical traditions in Psychological Emotion Research
Several theoretical traditions in emotion research have been offered. These traditions are not mutually exclusive and many researchers incorporate multiple perspectives in their work.
William James in the late 19th century believed that emotional experience is largely due to the experience of bodily changes. These changes might be visceral, postural, or facially expressive. The most basic of these somatic theories is the James-Lange theory. This theory and its derivatives state that a changed situation leads to a changed bodily state. It is this bodily state which in turn gives rise to an emotion. Hence the emotion fear upon encountering a bear in the woods would follow from:
- Spot a bear
- -> Heart begins beating faster; adrenalin is being produced
- -> The emotion fear arises
This approach underlies experiments in which a desired emotion is induced by manipulating the bodily state (e.g. in laughter therapy).
Walter Cannon provided empirical evidence against the dominance of the James-Lange theory for the physiological aspects of emotions in the second edition of Bodily Changes in Pain, Hunger, Fear and Rage. Cannon and Bard came up with a different account of the relations between emotions and behavior, in which a certain situation leads to an emotion, which in turn activates a typical behavior. Here the emotion fear upon encountering a bear in the woods would result in:
- Spot a bear
- -> The emotion fear arises
- -> Run away
Research in social psychology interprets emotions as a combination of two elements: physiological arousal and cognitive interpretation. The earliest account of such a theory is the Singer-Schachter theory, based on experiments that varied arousal by administering a chemical (adrenaline) or a placebo and put the participants in different situations. The combination of the appraisal of the situation (cognitive) and whether participants received adrenaline or a placebo together determined the response. In the example of the bear this would lead to:
- Spot a bear
- -> Adrenalin is released, heart starts beating faster
- -> The sight of a bear is interpreted as being dangerous for the health (note that this need not be a conscious appraisal)
- -> The emotion fear arises.
Several other theories share similar ideas; for example, the framework proposed by Nico Frijda, in which such appraisal leads to action tendencies, is closely related.
In all these theories, the different emotions cause detectable physical responses in the body. These responses are often perceived as sensations in the body; for example:
- Fear is felt as a heightened heartbeat, increased “flinch” response, and increased muscle tension.
- Anger, based on sensation, seems indistinguishable from fear.
- Happiness is often felt as an expansive or swelling feeling in the chest and the sensation of lightness or buoyancy, as if standing underwater.
- Sadness is often experienced as a feeling of tightness in the throat and eyes, and relaxation in the arms and legs.
- Shame can be felt as heat in the upper chest and face.
- Desire can be accompanied by a dry throat, heavy breathing, and increased heart rate.
The evolutionary perspective
A fourth theoretical tradition has been gaining influence once more (see: Cornelius, 1996). This fourth, evolutionary tradition started in the late 19th century with Charles Darwin's publication of a book on the expression of emotions in man and animals. Darwin's original thesis was that emotions evolved via natural selection because they warn other creatures about an animal's intentions (e.g. a cat with a high back is angry and will strike you unless you back off). Darwin argued that for mankind emotions were no longer functional but are epiphenomena of functionally associated habits. Such an evolutionary origin would predict emotions to be cross-culturally universal. Confirmation of this biological origin was provided by Paul Ekman's seminal research on facial expressions in humans. Other research in this area focuses on physical displays of emotion including body language of animals and facial expressions in humans. (See Affect display.) The increased potential in neuroimaging has allowed investigation of this idea focusing on the working brain itself. Important neurological advances were made from this perspective in the 1990s by, for example, Joseph LeDoux and Antonio Damasio.
Primary and secondary emotion
- Smell carries directly to limbic areas of the mammalian brain via nerves running from the olfactory bulbs to the septum, amygdala, and hippocampus. In the aquatic brain, olfaction was critical for detecting food, foes, and mates from a distance in murky waters.
- An emotional feeling, like an aroma, has a volatile or "thin-skinned" quality because sensory cells lie on the exposed exterior of the olfactory epithelium (i.e., on the bodily surface itself).
- A sudden scent, like a whiff of smelling salts, may jolt the mind. The force of a mood is reminiscent of a smell's intensity (e.g., soft and gentle, pungent, or overpowering), and similarly permeates and fades as well. The design of emotion cues, in tandem with the forebrain's olfactory prehistory, suggests that the sense of smell is the neurological model for our emotions.
Secondary emotions (i.e., feelings attached to objects [e.g., to dental drills], events, and situations through learning) require additional input, based largely on memory, from the prefrontal and somatosensory cortices. The stimulus may still be processed directly via the amygdala but is now also analyzed in the thought process. Thoughts and emotions are interwoven: every thought, however bland, almost always carries with it some emotional undertone, however subtle.
Neurobiological theories of emotion
Based on discoveries made through neural mapping of the limbic system, the neurobiological explanation of human emotion is that emotion is a pleasant or unpleasant mental state organized in the limbic system of the mammalian brain.
Defined as such, these emotional states are specific manifestations of non-verbally expressed feelings of agreement, amusement, anger, certainty, control, disagreement, disgust, disliking, embarrassment, fear, guilt, happiness, hate, interest, liking, love, sadness, shame, surprise, and uncertainty. If distinguished from reactive responses of reptiles, emotions would then be mammalian elaborations of general vertebrate arousal patterns, in which neurochemicals (e.g., dopamine, noradrenaline, and serotonin) step-up or step-down the brain's activity level, as visible in body movements, gestures, and postures. In mammals, primates, and human beings, feelings are displayed as emotion cues.
For example, the human emotion of love is proposed to have evolved from paleocircuits of the mammalian brain (specifically, modules of the cingulate gyrus) designed for the care, feeding, and grooming of offspring. Paleocircuits are neural platforms for bodily expression configured millions of years before the advent of cortical circuits for speech. They consist of pre-configured pathways or networks of nerve cells in the forebrain, brain stem and spinal cord. They evolved prior to the earliest mammalian ancestors, as far back as the jawless fishes, to control motor function.
Presumably, before the mammalian brain, life in the non-verbal world was automatic, preconscious, and predictable. The motor centers of reptiles react to sensory cues of vision, sound, touch, chemical, gravity, and motion with pre-set body movements and programmed postures. With the arrival of night-active mammals, circa 180 million years ago, smell replaced vision as the dominant sense, and a different way of responding arose from the olfactory sense, which is proposed to have developed into mammalian emotion and emotional memory. In the Jurassic Period, the mammalian brain invested heavily in olfaction to succeed at night as reptiles slept — one explanation for why olfactory lobes in mammalian brains are proportionally larger than in the reptiles. These odor pathways gradually formed the neural blueprint for what was later to become our limbic brain.
Emotions are thought to be related to activity in brain areas that direct our attention, motivate our behavior, and determine the significance of what is going on around us. Pioneering work by Broca (1878), Papez (1937), and MacLean (1952) suggested that emotion is related to a group of structures in the center of the brain called the limbic system, which includes the hypothalamus, cingulate cortex, hippocampi, and other structures. More recent research has shown that some of these limbic structures are not as directly related to emotion as others are, while some non-limbic structures have been found to be of greater emotional relevance. The following brain structures are currently thought to be most involved in emotion:
- Amygdala — The amygdalae are two small, round structures located anterior to the hippocampi near the temporal poles. The amygdalae are involved in detecting and learning what parts of our surroundings are important and have emotional significance. They are critical for the production of emotion, and may be particularly so for negative emotions, especially fear.
- Prefrontal cortex — The term prefrontal cortex refers to the very front of the brain, behind the forehead and above the eyes. It appears to play a critical role in the regulation of emotion and behavior by anticipating the consequences of our actions. The prefrontal cortex may play an important role in delayed gratification by maintaining emotions over time and organizing behavior toward specific goals.
- Anterior cingulate — The anterior cingulate cortex (ACC) is located in the middle of the brain, just behind the prefrontal cortex. The ACC is thought to play a central role in attention, and may be particularly important with regard to conscious, subjective emotional awareness. This region of the brain may also play an important role in the initiation of motivated behavior.
- Ventral striatum — The ventral striatum is a group of subcortical structures thought to play an important role in emotion and behavior. One part of the ventral striatum called the nucleus accumbens is thought to be involved in the experience of goal-directed positive emotion. Individuals with addictions experience increased activity in this area when they encounter the object of their addiction.
- Insula — The insular cortex is thought to play a critical role in the bodily experience of emotion, as it is connected to other brain structures that regulate the body’s autonomic functions (heart rate, breathing, digestion, etc.). This region also processes taste information and is thought to play an important role in experiencing the emotion of disgust.
Positive and negative perception
Like aromas, emotions are experienced as either positive or negative, pleasant or unpleasant; emotions do not seem to be neutral. Like odors, feelings come and go, but are logical, and clearly show upon our face in mood signs. It is likely that many emotions evolved from aroma paleocircuits a. in subcortical nuclei (e.g., the paleocortex of the amygdala), and b. in layers of nerve cells within the forebrain's outer covering of neocortex. The latter's stratified architecture resembles that of the olfactory bulb, which is organized in layers as well.
Sociology of Emotions
Systematic observations of group interaction found that a substantial portion of group activity is devoted to the socio-emotional issues of expressing affect and dealing with tension. Simultaneously, field studies of social attraction in groups revealed that feelings of individuals about each other collate into social networks, a discovery that still is being explored in the field of social network analysis.
Ethnomethodology revealed emotional commitments to everyday norms through purposeful breaching of the norms. For example, students acting as boarders in their own homes reported others' astonishment, bewilderment, shock, anxiety, embarrassment, and anger; family members accused the students of being mean, inconsiderate, selfish, nasty, or impolite. Actors who breach a norm themselves feel waves of emotion, including apprehension, panic, and despair. However, habitual rule breaking leads to declining stress, and may eventually end in enjoyment.
T. David Kemper proposed that people in social interaction have positions on two relational dimensions: status and power. Emotions emerge as interpersonal events change or maintain individuals' status and power. For example, affirming someone else's exalted status produces love-related emotions. Increases or decreases in one's own and other's status or power generate specific emotions whose quality depends on the patterns of change.
Sociologist Randall Collins has stated that emotional energy is the main motivating force in social life, for love and hatred, investing, working or consuming, rendering cult or waging war. Emotional energy ranges from the highest heights of enthusiasm, self-confidence and initiative to the deepest depths of apathy, depression and retreat. Emotional energy comes from variously successful or failed chains of interaction rituals, that is, patterned social encounters –from conversation or sexual flirtation through Christmas family dinners or office work to mass demonstrations, organizations or revolutions. In the latter, the coupling of participants' behavior synchronizes their nervous systems to the point of generating a collective effervescence, one observable in their mutual focus and emotional entraining, as well as in their loading of emotional and symbolic meaning to entities which subsequently become emblems of the ritual and of the membership group endorsing, preserving, promoting and defending them. Thus social life would be most importantly about generating and distributing emotional energy.
Thomas J. Scheff established that many cases of social conflict are based on a destructive and often escalating, but stoppable and reversible, shame-rage cycle: when someone is, or feels, shamed by another, their social bond comes under stress. This can be cooperatively acknowledged, talked about and - most effectively, when possible - laughed at so their social bond may be restored. Yet, when shame is not acknowledged, but instead negated and repressed, it becomes rage, and rage may drive aggressive and shaming actions that feed back negatively on this self-destructive situation. The social management of emotions might be the fundamental dynamic of social cooperation and conflict around resources, complexity, conflict and moral life. It is a well-established sociological fact that expression and feeling of the emotion of anger, for example, is strongly discouraged (repressed) in girls and women in many cultures, while fear is discouraged in boys and men. Some cultures and sub-cultures encourage or discourage happiness, sadness, jealousy, excitedness, and many other emotions. The free expression of the emotion of disgust is considered socially unacceptable in many countries.
Arlie Hochschild proposed that individuals apply cultural and ideological standards to judge the suitability of emotions occurring during a social interaction, and then manage their feelings to produce acceptable displays. Hochschild showed that jobs often require such emotional labor. Her classic study of emotional labor among flight attendants found that an industry speed-up, reducing contact between flight attendants and passengers, made it impossible for flight attendants to deliver authentic emotional labor, so they ended up surface-acting superficial smiles. Peggy Thoits divided emotion management techniques into implementation of new events and reinterpretation of past events. Thoits noted that emotions also can be managed with drugs, by performing faux gestures and facial expressions, or by cognitive reclassifications of one's feelings.
Affect control theory, originated by David R. Heise, proposes that social actions are designed by their agents to create impressions that befit the sentiments reigning in a situation. Emotions are transient personal states that depend on the current impression of the emoting person and on the comparison of that impression with the sentiment attached to the person's identity.
Classification of emotions
There has been considerable debate about whether emotions should be classified as positions on a continuum (e.g., the circumplex model by Russell, or many of the valence approaches in social psychology) or whether they are best identified as distinct (basic) states.
Classification by basic emotions
One of the most influential classification approaches in the study of emotion is Robert Plutchik's classification into eight primary emotions. The emotions that Plutchik lists as primary are joy, trust, fear, surprise, sadness, disgust, anger, and anticipation.
Similar to the way primary colors combine, primary emotions are believed to blend together to form the full spectrum of human emotional experience. Plutchik reasons that these eight are primary on evolutionary grounds, by relating each to behavior with survival value; for example, fear motivates flight from danger and anger motivates fighting for survival. They are considered to be part of our biological heritage and built into human nature. Paul Ekman devised a similar list of basic emotions from cross-cultural research on the Fore tribesmen of Papua New Guinea. He found that even members of an isolated, stone-age culture could reliably identify the expressions of emotion in photographs of people from cultures with which the Fore were not yet familiar, and concluded that the facial expression of some basic emotions is innate. Ekman's list of basic emotions comprises anger, disgust, fear, happiness, sadness, and surprise.
Lazarus (1991) similarly offers a taxonomy of 'Core Relational Themes' for various emotions; these help define both function and eliciting conditions. They include a demeaning offense against me and mine for anger; facing an immediate, concrete, and overwhelming physical danger for fear; having experienced an irrevocable loss for sadness; taking in or being too close to an indigestible object or idea (metaphorically speaking) for disgust; making reasonable progress toward the realization of a goal for happiness.
Emotions and Psychotherapy
Different schools of psychotherapy approach human emotions differently, depending on whether a school's general emphasis falls on the cognitive component of emotion, on the physical discharge of emotional energy, or on the symbolic-movement and facial-expression components of emotion. For example, the school of Re-evaluation Counseling proposes that distressing emotions are relieved by "discharging" them, hence crying, laughing, sweating, shaking, and trembling, while more cognitively oriented schools work through emotions' cognitive components, and others, such as contemporary Gestalt therapy, work through their symbolic-movement and facial-expression components.
Meta-emotion refers, in accordance with the general meaning of the prefix "meta-", to second-order emotions about first-order emotions. Meta-emotions can be short-lived or long-lived; the latter can be a source of discouragement or even psychological repression of specific emotions, or of their encouragement, with implications for personality traits, psychodynamics, organizational climate, emotional disorders, and also for emotional awareness and emotional intelligence.
Emotions and computer models, artificial intelligence and computing
A flurry of recent work in computer science, engineering, psychology and neuroscience aims at developing devices that recognize human affect displays and at modelling emotions more generally (Fellous, Armony & LeDoux, 2002).
Emotion in animals
Animals have physiological responses that are analogous to human emotional responses, as has been recognized at least since Darwin published The Expression of Emotions in Man and Animals in 1872.
- vandenBos, Gary B. (2006). APA Dictionary of Psychology. Washington, DC: American Psychological Association
- Emotional Competency discussion of emotion
- Sloman, Aaron (1981) Why Robots Will Have Emotions. In proc.. University of Sussex, UK
- Damasio, Antonio (1994) Descartes Error Penguin Putnam, New York, New York
- Masters, Robert (2000), Compassionate Wrath: Transpersonal Approaches to Anger
- 4 Maccabees
- Darwin, Charles (1872). The Expression of Emotions in Man and Animals. Note: This book was originally published in 1872, but has been reprinted many times thereafter by different publishers
- Hare, A. P. (1976). Handbook of small group research (2nd ed.). New York: Free Press, Chapter 3
- Hare, A. P. (1976). Handbook of small group research (2nd ed.). New York: Free Press, Chapter 7
- Milgram, S. (1974). An interview with Carol Tavris. Psychology Today, pp. 70-73
- Kemper, T. D. (1978). A social interactional theory of emotion. New York: Wiley
- Collins, Randall. (2004) Interaction Ritual Chains. Princeton University Press
- Scheff, Thomas J, and Retzinger, Suzanne. (1991) Emotions and violence : shame and rage in destructive conflicts. Lexington, Mass: Lexington Books
- Hochschild, A. R. (1983). The managed heart: The commercialization of human feeling. Berkeley: University of California Press
- Thoits, P. A. (1990). Emotional deviance: research agendas. T. D. Kemper (Ed.), Research agendas in the sociology of emotions (pp. 180–203). Albany: State University of New York Press
- Ekman, P. & Friesen, W. V (1969). The repertoire of nonverbal behavior: Categories, origins, usage, and encoding. Semiotica, 1, 49–98.
- Counseling recovery processes - RC website
- On Emotion - an article from Manchester Gestalt Centre website
- Arbib, M. and Fellous, J-M (editors). (2005) Who Needs Emotions?: The Brain Meets the Robot. Oxford, New York: Oxford University Press.
- Cornelius, R. (1996). The science of emotion. New Jersey: Prentice Hall.
- DeLancey, C. (2002/2004). "Passionate Engines: What Emotions Reveal about Mind and Artificial Intelligence", Oxford University Press.
- Ekman P. (1999). "Basic Emotions". In: T. Dalgleish and M. Power (Eds.). Handbook of Cognition and Emotion. John Wiley & Sons Ltd, Sussex, UK.
- Ekman P. (1999). "Facial Expressions" in Handbook of Cognition and Emotion. Dalgleish T & Power M, Eds. John Wiley & Sons Ltd. New York, New York.
- Fellous, J.M., Armony, J.L., & LeDoux, J.E. (2002). "Emotional Circuits and Computational Neuroscience" in 'The handbook of brain theory and neural networks' Second Edition. M.A. Arbib (editor), The MIT Press.
- Frijda, Nico H. (1986). The Emotions. Maison des Sciences de l'Homme and Cambridge University Press.
- Jaeger, C., & Bartsch, A. (2006), "Meta-emotions". Grazer Philosophische Studien, 73, 179–204.
- Lazarus, R. (1991). "Emotion and adaptation". New York: Oxford University Press.
- LeDoux, J.E. (1986). The neurobiology of emotion. Chap. 15 in J E. LeDoux & W. Hirst (Eds.) Mind and Brain: dialogues in cognitive neuroscience. New York: Cambridge.
- Plutchik, R. (1980). A general psychoevolutionary theory of emotion. In R. Plutchik & H. Kellerman (Eds.), Emotion: Theory, research, and experience: Vol. 1. Theories of emotion (pp. 3–33). New York: Academic.
- Moore, S. C. & Oaksford, M. (2002) Emotional Cognition: From Brain to Behaviour. Amsterdam: John Benjamin’s Publishing Company.
- Loewenstein, G. F., Weber, E. U., Hsee, C.K., & Welch, E. 2001. Risk as feelings. Psychological Bulletin, 127: 267–286
- Mellers, B., &McGraw, A. P. (2001). Anticipated emotions as guides to choice. Current Directions in Psychological Science. 10(6), 210–214.
- Isen, A. M. (2001). An influence of positive affect on decision making in complex situations: Theoretical issues with practical implications. Journal of Consumer Psychology, 11(2), 75–85
- William James
- Charles Darwin
- Ivan Pavlov
- James Papez
- Paul D. MacLean
- Paul Ekman
- Antonio Damasio
- Candace Pert
- Robert Plutchik
- Joseph LeDoux
- Robert Zajonc
- Baruch Spinoza
- Klaus Scherer
- Richard Lazarus
- Lisa Feldman Barrett
- Gerald Clore
- Anat Rafaeli
- Affective computing
- Affective neuroscience
- Affective science
- Anticipatory Grief
- Emotional dysregulation — as in a borderline personality
- Emotions and Culture
- Emotion and memory
- Emotion in negotiation
- List of emotions
- Emotional bias
- Emotional detachment
- Emotional distance
- Descartes' Error
- Emotional conflict
- Emotional contagion
- Emotional Intelligence
- Group Emotion
- Philosophy of Mind
- Dissociation (psychology)
- Emotions and Disease, an exhibition developed by the History of Medicine Division of the National Library of Medicine.
- Emotions, a well-organized collection of links on emotion by Michael Speer.
- An emotion research page at Salk Institute, maintained by Jean-Marc Fellous and Eva Hudlicka
- The affective computing portal
- Socially intelligent agents page
- Low emotional intelligence and alexithymia
- The Psychology of Emotions, Feelings and Thoughts, Free Online Book
Number & Operations – Fractions – 4th Grade
Build fractions from unit fractions.
Solve word problems involving addition and subtraction of fractions referring to the same whole and having like denominators, e.g., by using visual fraction models and equations to represent the problem.
Teaching students strategies for solving word problems is aided by the use of graphic organizers. Students need to be comfortable using visual fraction models to add and subtract fractions in context.
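A quick worked example of the kind of reasoning this standard targets (the scenario and numbers are illustrative, not taken from the standard itself): suppose a student reads, "Maya ran 3/8 of a mile in the morning and 2/8 of a mile in the afternoon. How far did she run in all?" Writing the equation and adding the like-denominator fractions gives

$$\frac{3}{8} + \frac{2}{8} = \frac{3 + 2}{8} = \frac{5}{8}\ \text{mile},$$

and a visual fraction model (a bar cut into 8 equal pieces with 5 shaded) confirms the result: only the numerators are added, while the denominator names the size of the pieces and stays the same.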
Student Knowledge Goals
I can create an equation with fractions to represent a word problem.
I can solve word problems involving fractions with like denominators.
I can create visual fraction models to solve a word problem.
I know the strategies for solving addition and subtraction problems with fractions with like denominators (e.g., visual fraction models, using the properties of addition, drawings, objects, etc.).
I can use what I know about addition and subtraction with whole numbers and apply it to fractions.
I can add or subtract fractions that have like denominators to solve the equation for a word problem.
Student Video Lessons
Learn Zillion - Solve word problems involving addition and subtraction of fractions with like denominators
Virtual Nerd - Solve word problems involving addition and subtraction of fractions
Online Problems and Assessments
Khan Academy – Questions and Video Lessons
Add and subtract fractions with like denominators: word problems
Add and subtract fractions with like denominators in recipes
Fraction Word Problems
Encomienda (Spanish pronunciation: [eŋkoˈmjenda]) was a Spanish labor system that rewarded conquerors with the labor of particular groups of subject people. It was first established in Spain following the Christian reconquest of territories held under Muslim rule (the Reconquista), and it was applied on a much larger scale during the Spanish colonization of the Americas and the Spanish Philippines. Conquered peoples were considered vassals of the Spanish monarch. The Crown awarded an encomienda as a grant to a particular individual. In the conquest era of the sixteenth century, the grants were considered a monopoly on the labor of particular groups of indigenous peoples, held in perpetuity by the grant holder, called the encomendero, and his descendants.
Encomiendas devolved from their original Iberian form into a form of "communal" slavery. In the encomienda, the Spanish Crown granted a person a specified number of natives from a specific community but did not dictate which individuals in the community would have to provide their labor. Indigenous leaders were charged with mobilizing the assessed tribute and labor. In turn, encomenderos were to ensure that the encomienda natives were given instruction in the Christian faith and Spanish language, and protect them from warring tribes or pirates; they had to suppress rebellion against Spaniards, and maintain infrastructure. In return, the natives would provide tributes in the form of metals, maize, wheat, pork, or other agricultural products.
With the ousting of Christopher Columbus, the Spanish crown sent a royal governor, Fray Nicolás de Ovando, who established the formal encomienda system. In many cases natives were forced to do hard labor and subjected to extreme punishment and death if they resisted. However, Queen Isabella I of Castile forbade slavery of the native population and deemed the indigenous to be "free vassals of the crown". Various versions of the Leyes de Indias or Laws of the Indies from 1512 onwards attempted to regulate the interactions between the settlers and natives. Both natives and Spaniards appealed to the Real Audiencias for relief under the encomienda system.
Encomiendas had often been characterized by the geographical displacement of the enslaved and breakup of communities and family units, but in Mexico, the encomienda ruled the free vassals of the crown through existing community hierarchies, and the natives were allowed to keep in touch with their families and homes. This was not true in all areas as in some parts of Hispaniola, Nicaragua, and Guatemala entire regions were depopulated by enslavement. Unlike in the case of the enslavement of Africans, in which mostly adult males were enslaved, the majority of indigenous slaves under encomienda were women and children.
The abolition of the Encomienda in 1542 marks the first major movement towards the abolition of slavery in the Western world. However, coerced labor continued in other forms throughout the Spanish colonies.
The heart of encomienda and encomendero lies in the Spanish verb encomendar, "to entrust". The encomienda was based on the reconquista institution in which adelantados were given the right to extract tribute from Muslims or other peasants in areas that they had conquered and resettled.
The encomienda system traveled to America as the result of the implantation of Castilian law over the territory. The system was created in the Middle Ages and was pivotal to allow for the repopulation and protection of frontier land during the reconquista. The encomienda established a relationship similar to a feudal relationship, in which military protection was traded for certain tributes or by specific work. It was especially prevalent among military orders that were entrusted with the protection of frontier areas. The king usually intervened directly or indirectly in the bond, by guaranteeing the fairness of the agreement and intervening militarily in case of abuse.
The encomienda system in Spanish America differed from the Peninsular institution. The encomenderos did not own the land on which the natives lived. The system did not entail any direct land tenure by the encomendero; native lands were to remain in the possession of their communities. This right was formally protected by the crown of Castile because the rights of administration in the New World belonged to this crown and not to the Catholic monarchs as a whole.
The first grantees of the encomienda, the encomenderos, were usually conquerors who received these grants of labor by virtue of participation in a successful conquest. Later, some recipients of encomiendas in New Spain (Mexico) were not conquerors themselves but were sufficiently well connected to receive grants.
In his study of the encomenderos of early colonial Mexico, Robert Himmerich y Valencia divides conquerors into those who were part of Hernán Cortés' original expedition, calling them "first conquerors", and those who were members of the later Narváez expedition, calling them "conquerors". The latter were incorporated into Cortés' contingent. Himmerich designated as pobladores antiguos (old settlers) an undetermined number of encomenderos in New Spain, men who had resided in the Caribbean region prior to the Spanish conquest of Mexico.
Holders of encomiendas also included women and indigenous elite. Doña Maria Jaramillo, the daughter of Doña Marina and conqueror Juan Jaramillo, received income from her deceased father's encomiendas. Two of Moctezuma's daughters, Doña Isabel Moctezuma and her younger sister, Doña Leonor Moctezuma, were granted extensive encomiendas in perpetuity by Hernan Cortes. Doña Leonor Moctezuma married in succession two Spaniards, and left the encomiendas to her daughter by her second husband. Vassal Inca rulers appointed after the conquest also sought and were granted encomiendas.
The status of humans as wards of the trustees under the encomienda system served to "define the status of the Indian population": the natives were free men, not slaves or serfs. But some Spaniards treated them as poorly as slaves.
The encomienda was essential to the Spanish crown's ability to sustain its control over North, Central and South America in the first decades after colonization. It was the first major organizational law instituted on a continent affected by war, widespread epidemics of Eurasian diseases, and the resulting turmoil. Initially the encomienda system was devised to meet the needs of the early agricultural economies in the Caribbean; later it was adapted to the mining economy of Peru and Upper Peru. The encomienda lasted from the beginning of the sixteenth century to the seventeenth century.
Philip II enacted a law on 11 June 1594 to establish the encomienda in the Philippines, where he made grants to the local nobles (principalía). They used the encomienda to gain ownership of large expanses of land, many of which (such as Makati) continue to be owned by affluent families.
In 1501 Queen Isabella declared Native Americans subjects of the crown, and thus Castilians and legal equals of Spanish Castilians. This implied that enslaving them was illegal except under very specific conditions. It also allowed the establishment of encomiendas, since the encomienda bond was a right reserved to full subjects of the crown. In 1503, the crown began to formally grant encomiendas to conquistadors and officials as rewards for service to the crown. The system of encomiendas was aided by the crown's organizing of the indigenous into small settlements known as reducciones, with the intent of establishing new towns and populations.
Each reducción had a native chief responsible for keeping track of the laborers in his community. The encomienda system did not grant people land, but it indirectly aided in the settlers' acquisition of land. As initially defined, the encomendero and his heirs expected to hold these grants in perpetuity. After a major crown reform in 1542, known as the New Laws, encomendero families were restricted to holding the grant for two generations. When the crown attempted to implement the policy in Peru, shortly after the 1535 Spanish conquest, Spanish recipients rebelled against the crown, killing the viceroy, Don Blasco Núñez Vela.
In Mexico, viceroy Don Antonio de Mendoza decided against implementing the reform, citing local circumstances and the potential for a similar conqueror rebellion. To the crown he said, "I obey crown authority but do not comply with this order." The encomienda system was ended legally in 1720, when the crown attempted to abolish the institution. The encomenderos were then required to pay remaining encomienda laborers for their work.
The encomiendas became very corrupt and harsh. In the neighborhood of La Concepción, north of Santo Domingo, the adelantado of Santiago heard rumors of a 15,000-man army planning to stage a rebellion; in response, he captured the caciques involved and had most of them hanged.
Later, a chieftain named Guarionex wreaked havoc on the countryside before an army of about 3,090 men routed the Ciguana people under his leadership. Expecting Spanish protection from warring tribes, the islanders sought to join the Spanish forces and helped the Spaniards cope with their ignorance of the surrounding environment.
As noted, the requirement that an encomienda revert to the crown after two generations was frequently overlooked, as the colonists did not want to give up the labor or power. In the Códice Osuna, one of many colonial-era Aztec codices (indigenous manuscripts) with native pictorials and alphabetic text in Nahuatl, there is evidence that the indigenous were well aware of the distinction between communities held by individual encomenderos and those held by the crown.
The phrase "sin indios no hay Indias" (without Indians, there are no Indies – i.e. America), popular in Spanish America especially in the 16th century, emphasizes the economic importance and appeal of this indentured labor. It was ranked higher than allocations of precious metals or other natural resources. Land awardees customarily complained about how "worthless" territory was without a population of encomendados.
Deaths, disease, and accusations of ethnocide or genocide
Raphael Lemkin (coiner of the term genocide) considered Spain's abuses of the native population of the Americas, including those of the encomienda system, to constitute cultural and even outright genocide. He described slavery as "cultural genocide par excellence", noting that "it is the most effective and thorough method of destroying culture, of desocializing human beings." He held the colonists guilty of failing to halt the abuses of the system despite royal orders, and he described the sexual abuse of Native women by Spanish colonizers as acts of "biological genocide." Economic historian Timothy J. Yeager argued that the encomienda was deadlier than conventional slavery because an individual laborer's life was treated as disposable: he could simply be replaced with another laborer from the same plot of land. University of Hawaii historian David Stannard describes the encomienda as a genocidal system which "had driven many millions of native peoples in Central and South America to early and agonizing deaths."
Yale University's genocide studies program supports this view regarding abuses in Hispaniola. Andrés Reséndez argues that even though the Spanish were aware of the spread of smallpox, they made no mention of it until 1519, a quarter century after Columbus arrived in Hispaniola. Instead he contends that enslavement in gold and silver mines was the primary reason why the Native American population of Hispaniola dropped so significantly and that even though disease was a factor, the native population would have rebounded the same way Europeans did during the Black Death if it were not for the constant enslavement they were subject to. According to anthropologist Jason Hickel, a third of Arawak workers died every six months from lethal forced labor in the mines.
Scope and number of victims
Yale University's genocide studies program, citing the decline of the Taíno population of Hispaniola from 1492 to 1514 as an example of genocide, notes that the indigenous population fell from between 100,000 and 1,000,000 to only 32,000, a decline of 68% to over 96%.
The native people of Mexico experienced a series of outbreaks of disease in the wake of European conquest, including a catastrophic epidemic that began in 1545 which killed an estimated 5 million to 15 million people, or up to 80% of the native population of Mexico, followed by a second epidemic from 1576 to 1578 killing an additional 2 to 2.5 million people, or about 50% of the remaining native population. Recent research suggests that these infections appear to have been aggravated by the extreme climatic conditions of the time and by the poor living conditions and harsh treatment of the native people under the encomienda system of New Spain.
Enslavement and the encomienda were a heavy cause of depopulation in Guatemala, as Bartolomé de Las Casas writes: "one could make a whole book ... out of the atrocities, barbarities, murders, clearances, ravages and other foul injustices perpetrated ... by those that went to Guatemala." The afflictions of Old World diseases, war, and overwork in the mines and encomiendas took a heavy toll on the inhabitants of eastern Guatemala, to the extent that indigenous population levels never recovered to their pre-conquest levels. The main cause of the drastic depopulation of Lake Izabal and the Motagua Delta was the constant slave raids by the Miskito Sambu of the Caribbean coast, which effectively ended the Maya population of the region; the captured Maya were sold into slavery in the British colony of Jamaica. Over the course of the Spanish conquest of Guatemala, the Spanish exported some 50,000 Maya slaves.
The most proportionally widespread and deadly use of forced labor by Spain was likely in Nicaragua, from which between 450,000 and 500,000 indigenous people were deported as slaves (though some of the slaves originated from other territories), out of a total indigenous Nicaraguan population of around 600,000 to 1,000,000. Owing to mass infection with foreign diseases and to enslavement, 99% of the population of western Nicaragua perished within 60 years, with 575,000 indigenous Nicaraguans dying overall across the country. The indigenous population of the Izalco region in El Salvador was also among the most heavily exploited for forced labor in cacao production; 400,000 indigenous people perished from disease, warfare, and slavery out of 700,000 to 800,000 native Salvadorans, and 150,000 of the 400,000 to 600,000 indigenous Hondurans were enslaved (25% to 37.5% of their population).
Peru was a hotspot for native labor because of its large silver reserves. In silver mountains such as Cerro Rico, many native workers died from the harsh conditions of mine life and the natural gases. At such high altitude, pneumonia was always a concern, and mercury poisoning took the lives of many involved in the refining process. Some writers, such as Eduardo Galeano in his work Open Veins of Latin America, estimate that up to eight million people have died in the Cerro Rico since the 16th century, though Josiah Conder attributed this figure to the entirety of the Viceroyalty of Peru and noted that it also takes into account depopulation of the areas around mines. In 1574 the Viceroy of Peru Diego López de Velasco investigated the encomiendas and concluded there were 32,000 Spanish families in the New World, 4,000 of whom held encomiendas; they oversaw 1,500,000 natives paying tribute and 5 million "civilized" natives. The work of historians such as Peter Bakewell, Noble David Cook, Enrique Tandeter and Raquel Gil Montero portrays a more accurate picture of the labor question (free and unfree workers), with estimates that differ completely from Eduardo Galeano's alleged number of deaths.
One source claims the Spanish conquest was responsible for 1,400,000 to 2,300,000 deaths, explicitly excluding the tens of millions of deaths from New World disease, while Rudolph Rummel estimates that 2 to 15 million indigenous people were killed by what he calls "democide" (government-caused murder) in the colonization of the Americas, mostly in Latin America (implying that anywhere from just over half to nearly all of those deaths, roughly 1 million to 15 million, occurred there).
Skepticism toward alleged demographic declines and accusations of genocide
Noble David Cook, writing about the Black Legend and the conquest of the Americas wrote, "There were too few Spaniards to have killed the millions who were reported to have died in the first century after Old and New World contact" and instead suggests the near total decimation of the indigenous population of Hispaniola as mostly having been caused by diseases like smallpox.
Since 1960, historians such as Julián Juderías, Woodrow Borah and Sherburne Cook have challenged both the numbers and the causes offered by Raphael Lemkin. Brendan D. O'Fallon and Lars Fehren-Schmitz separately estimated a historic native mortality of about 50%, with a quick recovery and little loss of diversity. Cook and Borah of the University of California, Berkeley conducted a decade-long study of the historical native demographics of Mexico and estimated that the overall decrease in the native population was only 3%. Rosenblat estimates a lower number for Mexico and Colombia. Acuña-Soto, Romero, and Maguire suggested a rate of mortality from disease in Native American populations of around 45%.
Regardless of the specific numbers, it is widely agreed that mortality began to peak in 1545 and crested some years later, after the New Laws were put in place, the encomienda system was abolished, and women, and more importantly children, were allowed to migrate. What mortality of the native population did occur was mainly attributable to disease. Most scholars agree that the main culprits were European childhood diseases such as smallpox, measles, and chicken pox. Elsa Malvido suggests that plague caused the hemorrhagic fevers described by the Spanish physicians, while a controversial study by microbiologist Rodolfo Acuña-Soto proposes that the diseases that decimated the population were actually a native hemorrhagic plague carried by rats.
The encomienda system was the subject of controversy in Spain and its territories almost from its start. In 1510, an Hispaniola encomendero named Valenzuela murdered a group of Native American leaders who had agreed to meet for peace talks in full confidence. The Taíno cacique Enriquillo rebelled against the Spaniards between 1519 and 1533. In 1538, Emperor Charles V, realizing the seriousness of the Taíno revolt, changed the laws governing the treatment of people laboring in the encomiendas. Although the crown conceded to Las Casas's viewpoint, the peace treaty between the Taínos and the audiencia broke down within four to five years. The crown also actively prosecuted abuses of the encomienda system through the Laws of Burgos (1512–13) and the New Laws of the Indies (1542).
The priest of Hispaniola and former encomendero Bartolomé de las Casas underwent a profound conversion after seeing the abuse of the native people. He dedicated his life to writing and lobbying to abolish the encomienda system, which he thought systematically enslaved the native people of the New World. Las Casas participated in an important debate, where he pushed for the enactment of the New Laws and an end to the encomienda system. The Laws of Burgos and the New Laws of the Indies failed in the face of colonial opposition and, in fact, the New Laws were postponed in the Viceroyalty of Peru. When Blasco Núñez Vela, the first viceroy of Peru, tried to enforce the New Laws, which provided for the gradual abolition of the encomienda, many of the encomenderos were unwilling to comply with them and revolted against him.
The New Laws of 1542
When news of this situation and of the abuse of the institution reached Spain, the New Laws were passed to regulate and gradually abolish the system in America, as well as to reiterate the prohibition on enslaving Native Americans. By the time the New Laws were passed, in 1543, the Spanish crown had acknowledged its inability to control and properly ensure compliance with traditional laws overseas, so it granted Native Americans specific protections that not even Spaniards had, such as the prohibition on enslaving them even in cases of crime or war. These extra protections were an attempt to avoid the proliferation of irregular claims to slavery. Andrés Reséndez, however, presents a more pessimistic view, arguing that such intentions were subsequently corrupted and that shortly after the abolition tens of thousands of people continued to be worked to death on depopulated islands throughout Hispaniola; he describes indigenous forced labor as "a set of kaleidoscopic practices suited to different markets and regions. The Spanish crown's formal prohibition of slavery of native people in 1542 gave rise to a number of related institutions, such as encomiendas, repartimientos, the selling of convict labor, and ultimately debt peonage ... In other words, formal slavery was replaced by multiple forms of informal labor coercion and enslavement that were extremely difficult to track, let alone eradicate."
The encomienda system was generally replaced by the crown-managed repartimiento system throughout Spanish America after the mid-sixteenth century. Like the encomienda, the new repartimiento did not include the attribution of land to anyone, only the allotment of native workers. But the workers were directly allotted to the crown, which, through a local crown official, would assign them to work for settlers for a set period of time, usually several weeks. The repartimiento was an attempt "to reduce the abuses of forced labour". As the number of natives declined and mining activities gave way to agricultural activities in the seventeenth century, the hacienda, or large landed estate in which laborers were directly employed by the owners (hacendados), arose because land ownership had become more profitable than the acquisition of forced labor.
- James Lockhart and Stuart Schwartz, Early Latin America. New York: Cambridge University Press 138.
- Ida Altman, et al., The Early History of Greater Mexico, Pearson, 2003, p. 47
- Rodriguez, Junius P. (2007). Encyclopedia of Slave Resistance and Rebellion. 1. p. 184. ISBN 978-0-313-33272-2.
- Ida Altman, et al., The Early History of Greater Mexico, Pearson, 2003, 143
- Charles Gibson, The Aztecs Under Spanish Rule, Stanford, 1964.
- Trever, David. "The new book 'The Other Slavery' will make you rethink American history". Los Angeles Times. Archived from the original on 2019-06-20.
- Churchill, Ward (1999). "Genocide of native populations in Mexico, Central America, and the Caribbean Basin". In Israel W. Charny (ed.). Encyclopedia of Genocide. Santa Barbara, California, US: ABC-CLIO. p. 433. ISBN 0-87436-928-2. OCLC 911775809.
- Grenke, Arthur (2005). God, Greed, and Genocide: The Holocaust Through the Centuries. Washington, DC, US: New Academia Publishing. p. 142. ISBN 0-9767042-0-X. OCLC 255346071.
- Feldman, Lawrence H (1998). Motagua Colonial. Raleigh, North Carolina, US: Boson Books. p. 12. ISBN 978-1-886420-51-9. OCLC 82561350.
- Lindley, Robin. "The Other Slavery: An Interview with Historian Andrés Reséndez". History News Network. Archived from the original on 2019-06-20.
- "Encomienda". Encyclopædia Britannica Online. 26 September 2008.
- Scott, Meredith, "The Encomienda System Archived 2005-12-18 at the Wayback Machine".
- Robert Himmerich y Valencia, The Encomenderos of New Spain, 1521-1555, Austin: University of Texas Press, 1991 p. 178
- Himmerich y Valencia (1991), The Encomenderos, pp. 195-96
- Samora, Julian; Patricia Vandel Simon. "A History of the Mexican-American People". Archived from the original on April 2, 2009. Retrieved 2009-05-18.
- Himmerich y Valencia (1991), 27
- Clendinnen, Inga; Ambivalent Conquests: Maya and Spaniard in Yucatán, 1517–1570. (p. 83) ISBN 0-521-37981-4
- Anderson, Dr. Eric A (1976). The encomienda in early Philippine colonial history (PDF). Quezon City: Journal of Asian Studies. pp. 27–32.
- Arthur S. Aiton, Antonio de Mendoza, First Viceroy of New Spain, Durham: Duke University Press 1972.
- Pietro Martire D'Anghiera (July 2009). De Orbe Novo, the Eight Decades of Peter Martyr D'Anghera. p. 121. ISBN 9781113147608. Retrieved 10 July 2010.
- Pietro Martire D'Anghiera (July 2009). De Orbe Novo, the Eight Decades of Peter Martyr D'Anghera. p. 143. ISBN 9781113147608. Retrieved 10 July 2010.
- Pietro Martire D'Anghiera (July 2009). De Orbe Novo, the Eight Decades of Peter Martyr D'Anghera. p. 132. ISBN 9781113147608. Retrieved 10 July 2010.
- Codice Osuna, Ediciones del Instituto Indigenista Interamericano, Mexico 1947, pp. 250-254
- Raphael Lemkin's History of Genocide and Colonialism. United States Holocaust Memorial Museum. https://www.ushmm.org/confront-genocide/speakers-and-events/all-speakers-and-events/raphael-lemkin-history-of-genocide-and-colonialism
- Yeager, Timothy J. (December 1995). "Encomienda or Slavery? The Spanish Crown's Choice of Labor Organization in Sixteenth-Century Spanish America". The Journal of Economic History. 55 (4): 842–859. doi:10.1017/S0022050700042182. JSTOR 2123819.
- Stannard, David E. (1993). American Holocaust: The Conquest of the New World. Oxford University Press. p. 139. ISBN 978-0195085570.
- Hispaniola Case Study: Colonial Genocides Project (date range: 1492 to 1514). Yale University Genocide Studies Program. https://gsp.yale.edu/case-studies/colonial-genocides-project/hispaniola
- Reséndez, Andrés (2016). The Other Slavery: The Uncovered Story of Indian Enslavement in America. Houghton Mifflin Harcourt. p. 17. ISBN 978-0547640983.
- Hickel, Jason (2018). The Divide: A Brief Guide to Global Inequality and its Solutions. Windmill Books. p. 70. ISBN 978-1786090034.
- Rodolfo Acuna-Soto, David W. Stahle, Malcolm K. Cleaveland, and Matthew D. Therrell (April 2002). "Megadrought and Megadeath in 16th Century Mexico", Emerg Infect Dis., 8(4), pp. 360–362. doi: 10.3201/eid0804.010175. Retrieved 16 Jan. 2018.
- de Las Casas, Bartolomé (1992) . Nigel Griffin (ed.). A Short Account of the Destruction of the Indies. London, UK and New York, US: Penguin Books. ISBN 978-0-14-044562-6. OCLC 26198156. p. 54.
- Fuentes y Guzman, Francisco Antonio de; Justo Zaragoza (notes and illustrations) (1882). Luis Navarro (ed.). Historia de Guatemala o Recordación Florida (in Spanish). I. Madrid, Spain: Biblioteca de los Americanistas. OCLC 699103660. p.
- Dary Fuentes, Claudia (2008). Ethnic Identity, Community Organization and Social Experience in Eastern Guatemala: The Case of Santa María Xalapán (in Spanish). Albany, New York, US: ProQuest/College of Arts and Sciences, Department of Anthropology: University at Albany, State University of New York. ISBN 978-0-549-74811-3. OCLC 352928170. p. 60.
- Jones, Grant D. (2000). "The Lowland Maya, from the Conquest to the Present". In Adams, Richard E.W.; Macleod, Murdo J. (eds.). The Cambridge History of the Native Peoples of the Americas. Vol. II: Mesoamerica, part 2. Cambridge, UK: Cambridge University Press. pp. 346–391. ISBN 978-0-521-65204-9. OCLC 33359444. pp. 360–361.
- "Victimario Histórico Militar Capítulo IX De las 16 mayores Guerras y Genocidios del siglo XVI de 60.000 a 3.000.000 de muertos".
- Newson, Linda (1982). "The Depopulation of Nicaragua in the Sixteenth Century*". Journal of Latin American Studies. 14 (2): 255–256. doi:10.1017/S0022216X00022422. ISSN 1469-767X.
- Fowler, William R. (1993). "The Living Pay for the Dead: Trade, Exploitation, and Social Change in Early Colonial Izalco, El Salvador". In J. Daniel Rogers and Samuel M. Wilson (eds.), Ethnohistory and Archaeology: Approaches to Postcontact Change in the Americas. p. 181. ISBN 9780306441769.
- ""BBC - A History of the World - About: Transcripts - Episode 80 - Pieces of eight"".
- Modern Traveler. London: J. Duncan. 1830.
- Crow, John A. The Epic of Latin America.
- Bakewell, Peter. Miners of the Red Mountain: Indian Labor in Potosi, 1545–1650. University of New Mexico Press. 2010.
- Demographic Collapse: Indian Peru, 1520–1620 (Cambridge Latin American Studies)
- Tandeter, Enrique. Coaccion y mercado. La mineria de plata en el Potosi colonial, 1692–1826. Siglo XXI Editores 2001.
- "Free and Unfree Labour in the Colonial Andes" (PDF). Instituto Superior de Estudios Sociales (CONICET-UNT), Tucuman. 2011. Archived from the original (PDF) on 2016-12-20. Retrieved 2019-06-18.
- Rummel, R.J. Death by Government, Chapter 3: "Pre-Twentieth Century Democide".
- Noble David Cook (13 February 1998). Born to Die: Disease and New World Conquest, 1492–1650. Cambridge University Press. pp. 9–14. ISBN 978-0-521-62730-6.
- Brendan D. O'Fallona and Lars Fehren-Schmitz. Native Americans experienced a strong population bottleneck coincident with European contact. Proc Natl Acad Sci U S A. 2011 Dec 20; 108(51): 20444–20448. Published online 2011 Dec 5. doi: 10.1073/pnas.1112563108 PMC 3251087 PMID 22143784 Anthropology
- Cook, S. F. y W. W. Borah (1963), The Indian population of Central Mexico, Berkeley (Cal.), University of California Press
- Acuna-Soto, R., Romero, L.C., and Maguire, J.H. Large epidemics of hemorrhagic fevers in Mexico 1545–1815.
- Francisco Guerra. Origen de las epidemias en la conquista de América
- Acuna-Soto, Rodolfo, Leticia Calderon Romero, and James H. Maguire. "Large Epidemics of Hemorrhagic Fevers in Mexico 1545–1815". Am. J. Trop. Med. Hyg., 62(6), 2000, pp. 733–739.
- David M. Traboulay (1994). Columbus and Las Casas: the conquest and Christianization of America, 1492–1566. p. 44. ISBN 9780819196422. Retrieved 10 July 2010.
- Bartolomé de Las Casas, who arrived in the New World in 1502, averred that greed was the reason Christians “murdered on such a vast scale,” killing “anyone and everyone who has shown the slightest sign of resistance,” and subjecting “all males to the harshest and most iniquitous and brutal slavery that man has ever devised for oppressing his fellow-men, treating them, in fact, worse than animals.” Reséndez, Andrés. The Other Slavery: The Uncovered Story of Indian Enslavement in America (Kindle Locations 338-341). Houghton Mifflin Harcourt. Kindle Edition.
- Benjamin Keen, Bartolome de las Casas in history: toward an understanding of the man and his work. (DeKalb: Northern Illinois University, 1971), 364–365.
- Suárez Romero. LA SITUACIÓN JURÍDICA DEL INDIO DURANTE LA CONQUISTA ESPAÑOLA EN AMÉRICA. REVISTA DE LA FACULTAD DE DERECHO DE MÉXICO TOMO LXVIII, Núm.270 (Enero-Abril 2018)
- Tindall, George Brown & David E. Shi (1984). America: A Narrative History (Sixth ed.). W. W. Norton & Company, Inc., 280.
- Austin, Shawn Michael. (2015) "Guaraní kinship and the encomienda community in colonial Paraguay, sixteenth and early seventeenth centuries", Colonial Latin American Review, 24:4, 545-571, DOI: 10.1080/10609164.2016.1150039
- Avellaneda, Jose Ignacio (1995). The Conquerors of the New Kingdom of Granada. Albuquerque: University of New Mexico Press. ISBN 978-0-8263-1612-7.
- Chamberlain, Robert S., "Simpson's the Encomienda in New Spain and Recent Encomienda Studies" The Hispanic American Historical Review 34.2 (May 1954):238–250.
- Gibson, Charles, The Aztecs Under Spanish Rule. Stanford: Stanford University Press 1964.
- Guitar, Lynne (1997). "Encomienda System". In Junius P. Rodriguez (ed.). The Historical Encyclopedia of World Slavery. 1, A-K. Santa Barbara, CA: ABC-CLIO. pp. 250–251. ISBN 978-0-87436-885-7. OCLC 37884790.
- Himmerich y Valencia, Robert (1991). The Encomenderos of New Spain, 1521–1555. Austin: University of Texas Press. ISBN 0-292-72068-8.
- Keith, Robert G. "Encomienda, Hacienda, and Corregimiento in Spanish America: A Structural Analysis," Hispanic American Historical Review 52, no. 3 (1971): 431-446.
- Lockhart, James, "Encomienda and Hacienda: The Evolution of the Great Estate in the Spanish Indies," Hispanic American Historical Review 49, no. 3 (1969)
- McAlister, Lyle N. (1984). Spain and Portugal in the New World, 1492-1700. University of Minnesota Press. ISBN 978-0816612161.
- Ramirez, Susan E. "Encomienda" in Encyclopedia of Latin American History and Culture, vol. 2, pp. 492–3. New York: Charles Scribner's Sons 1996.
- Simpson, Leslie Byrd, The Encomienda in New Spain: The Beginning of Spanish Mexico (1950)
- Yeager, Timothy J. (1995). "Encomienda or Slavery? The Spanish Crown's Choice of Labor Organization in Sixteenth-Century Spanish America". The Journal of Economic History. 55 (4): 842–859. doi:10.1017/S0022050700042182. JSTOR 2123819.
- Zavala, Silvio. De Encomienda y Propiedad Territorial en Algunas Regiones de la América Española. Mexico City: Aurrúa 1940.
- "Encomienda" Encyclopædia Britannica
- Spain's American Colonies and the Encomienda System. ThoughtCo. September 10, 2018
In physics, and in particular in the theory of electromagnetism, magnetic induction (also known as magnetic flux density) describes a magnetic field (a vector) at every point in space. The magnetic induction is commonly denoted by B(r,t) and is a vector field, that is, it depends on position r and time t. In non-relativistic physics, the space on which B is defined is the three-dimensional Euclidean space, the infinite world that we live in. The field B is closely related to the magnetic field H, often called the magnetic field intensity, and sometimes just the H-field. In fact, some authors refer to B as the magnetic field and to H as an auxiliary field.
The physical source of the field B can be
- one or more permanent magnets (see Coulomb's magnetic law); more microscopically, the fundamental spins of elementary particles like electrons, and their orbital angular momentum.
- one or more electric currents (see Biot-Savart's law),
- time-dependent electric fields (see displacement current),
or combinations of these three. A magnetic field exists in the neighborhood of these sources. In general the strength of the magnetic field decreases as a low power of 1/R, the inverse of the distance R to the source.
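To make the distance dependence concrete, here are two standard textbook cases (not given explicitly in this article, but consistent with the laws cited above): the field of an infinite straight wire carrying a current I falls off as the first power of 1/R, while the field of a magnetic dipole of moment m falls off as the third power,

$$
B_{\text{wire}} = \frac{\mu_0 I}{2\pi R} \propto \frac{1}{R},
\qquad
B_{\text{dipole}} \sim \frac{\mu_0\, m}{4\pi R^3} \propto \frac{1}{R^3}.
$$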
A magnetic force can act on
- a permanent magnet (which is a magnetic dipole or—approximately—two magnetic monopoles),
- magnetizable (ferromagnetic) material like iron,
- moving electric charges (through the Lorentz force)
- elementary particles through their intrinsic spin, which is related to their intrinsic magnetic properties through their gyromagnetic ratios.
To give an indication of magnitudes: the magnetic field (or better: magnetic induction) of the Earth is about 0.5 G (50 μT). A horseshoe magnet is about 100 G. A medical MRI diagnostic machine typically supports a field of up to 2 T (20 kG). The strongest magnets in laboratories are currently about 30 T (300 kG).
Note on nomenclature
Most textbooks on electricity and magnetism distinguish the magnetic field H from the magnetic induction B. Yet in practice physicists and chemists almost always call B the magnetic field, because the term "induction" suggests an induced magnetic moment. Since an induced moment is usually not in evidence, the term induction is felt to be confusing. Phrases such as "This EPR spectrum was measured at a magnetic field of 3400 gauss" and "Our magnet can achieve magnetic fields as high as 20 tesla" are common among scientists. That is, most scientists use the term "field" with the units tesla or gauss, while strictly speaking gauss and tesla are units of B. Some authors go one step further and reserve the name "magnetic field" for B, referring to H as the "auxiliary magnetic field".
Relation between B and H
In the absence of matter the two fields are proportional: B = μ0H in SI units and B = H in Gaussian units, where μ0 is the magnetic constant (equal to 4π·10⁻⁷ N/A²). Note that in Gaussian units the dimensions of H (Oe = oersted) and of B (G = gauss) are equal, 1 Oe = 1 G, although the units have unrelated definitions (the oersted is based on the field of a solenoid, the gauss on magnetic flux per surface area). In the absence of a magnetizable medium it is unnecessary to introduce both B and H, because they differ by an exact and constant factor (unity for Gaussian units and μ0 for SI units).
However, treating all the charges in a system at a microscopic level is impractical, and approximations are introduced. Some of the system is treated microscopically, and some is treated as "materials", in particular, dielectrics and magnetic materials. The response of a magnetic material to magnetic flux is introduced through the magnetization of the material, another vector field M(r, t).
In the presence of a magnetizable medium the relation between B and H involves the magnetization M of the medium: B = μ0(H + M) in SI units, or B = H + 4πM in Gaussian units.
To actually determine the system behavior, the magnetization M must be determined in terms of either B or H so that the system response depends only upon one field variable. This determination of M can be very complicated. For example, it may involve introduction of quantum mechanics and statistical mechanics as studied in the field of condensed matter physics. However, in many non-ferromagnetic media, the magnetization M is linear in H: M = χH, where χ is the magnetic susceptibility.
For a magnetically isotropic medium the magnetic susceptibility tensor χ is a constant times the 3×3 identity matrix, χ = χm1. For an isotropic medium the relation between B and H in SI and Gaussian units is, respectively, B = μ0(1 + χm)H ≡ μH and B = (1 + 4πχm)H ≡ μH.
The material constant μ, which expresses the "ease" of magnetization of the medium, is the magnetic permeability of the medium. In most non-ferromagnetic materials χm << 1 and consequently B ≈ μ0H (SI) or B ≈ H (Gaussian). For ferromagnetic materials the magnetic permeability μ can be sizeable (χm >> 1). In that case the magnetization of the medium greatly enhances the magnetic field.
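A rough numerical illustration (the susceptibility value is an assumed, order-of-magnitude figure, not taken from this article): for a soft-iron core one might take χm ≈ 5000, so that in SI units

$$
B = \mu_0 (1 + \chi_m) H \approx 5000\, \mu_0 H ,
$$

i.e. the magnetized medium boosts the field produced by the free currents by a factor of a few thousand, which is why electromagnets are usually wound around iron cores rather than left with air cores.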
The two macroscopic Maxwell equations that contain charges and currents are equations for H and the electric displacement D. This is a consequence of the fact that current densities J and electric fields E (due to charges) are modified by the magnetization M and the polarization P of the medium. In SI units the Maxwell equation for the magnetic field is: ∇ × H = J + ∂D/∂t.
The microscopic (no medium) form of this equation is obtained by eliminating D and H via D = ε0E and H = B/μ0 (P = 0 and M = 0), which gives ∇ × B = μ0J + μ0ε0 ∂E/∂t.
The two Maxwell equations that do not contain currents and charges give relations between the fundamental fields E and B, instead of between the auxiliary fields H and D. For instance, Faraday's induction law in SI units is ∇ × E = −∂B/∂t.
This equation is valid microscopically (vacuum) as well as macroscopically (in the presence of a medium). But, of course, in the microscopic case the detailed microscopic currents and charges due to the elementary sources appear, while in the macroscopic case some of these microscopic currents and charges are subsumed in the material properties, the various permittivities and permeabilities, for example. Thus the E- and B-fields in the two situations differ, with the macroscopic fields being averaged to remove some of the microscopic detail.
The Consumer Price Index (CPI) is a monthly measurement of U.S. prices for household goods and services. It reports inflation (rising prices) and deflation (falling prices). Both can hurt a healthy economy.
The Federal Reserve, the U.S. central bank, monitors price changes to ensure economic growth remains stable. If the Federal Reserve detects too much inflation or deflation, it uses monetary policy tools to intervene.
What Is the CPI?
The CPI is the U.S. government's measurement of price changes in a typical "basket" of goods and services bought by urban consumers.
- Alternate Name: CPI for All Urban Consumers (CPI-U).
- Acronym: CPI
CPI and inflation are often used interchangeably, as inflation is the percentage increase or decrease of CPI over a certain period of time.
What's in the CPI Basket?
The basket represents the prices of a cross-section of goods and services commonly bought by urban households. The cross-section represents around 93% of the U.S. population, and factors in a sample of 14,500 families and 80,000 consumer prices.
Here are the major categories in the basket and how much each contributed to the CPI as of April 2021.
- Energy (incl. gasoline): 7%
- Commodities (incl. medication and autos): 20%
For those who own their homes, the CPI calculates the owners' equivalent rent of primary residence (OER) instead of the monthly mortgage payment. The OER is the owners' estimate of how much rent they would pay if they were renting their home.
The CPI could give a false low-inflation reading due to low rents, even when home prices are high. Low rents can result from fewer renters and increased vacancies, as low interest rates spur more home purchases. At the same time, housing prices could rise due to increased market activity.
Conversely, rising interest rates might lead to fewer buyers in the market and falling home prices. As more people compete for apartments, rents go up.
This is why the CPI didn't warn of asset inflation during the housing bubble of 2005. The CPI includes sales taxes. It excludes income taxes and the prices of investments, such as stocks and bonds.
How the CPI Is Calculated
The BLS computes the CPI by taking the weighted average cost of the basket of goods in a given period and dividing it by the cost of the same basket in the base period. It then multiplies this ratio by 100 to get the number for the index.
Consumer Price Index =
Cost of Basket (Current Period) / Cost of Basket (Base Period) X 100
The index shows how much prices have changed since the 1982–1984 base period, which was set to roughly 100. For example, in May 2021 the index was 269.2, meaning prices had risen about 169% since that base period.
The BLS conveniently publishes the percentage change since last month or last year. In May 2021, prices rose by 0.6% from April. In April 2021, there was an increase of 0.8% in the index from March.
The CPI for May 2021 was 0.6% higher than April 2021 and increased by 5.0% from May 2020.
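As a rough sketch of this arithmetic (the April 2021 and May 2020 index values and the variable names below are illustrative assumptions, not figures quoted in this article), the month-over-month and year-over-year changes can be computed directly from index values:

```python
def percent_change(current: float, previous: float) -> float:
    """Percentage change between two CPI index values."""
    return (current / previous - 1) * 100

# Illustrative index values: the May 2021 figure is from the text above;
# the April 2021 and May 2020 values are assumed for demonstration only.
cpi_may_2021 = 269.2
cpi_apr_2021 = 267.6   # assumed
cpi_may_2020 = 256.4   # assumed

print(f"Month over month: {percent_change(cpi_may_2021, cpi_apr_2021):.1f}%")  # ~0.6%
print(f"Year over year:   {percent_change(cpi_may_2021, cpi_may_2020):.1f}%")  # ~5.0%
```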
Why the CPI Is Important
The CPI measures inflation, which is one of the greatest threats to a healthy economy. Inflation eats away at your standard of living if your income doesn't keep pace with rising prices. Over time, your cost of living increases.
A high inflation rate can hurt the economy. Since everything costs more, manufacturers produce less and may be forced to lay off workers.
CPI Affects the Fed
The Fed uses the CPI to determine whether economic policies need to be modified to prevent inflation.
In the past, when it recognized that inflation was on the horizon, the Fed used contractionary monetary policy to slow economic growth. It raised the fed funds rate to make loans more expensive, which tightened the money supply, the total amount of credit allowed into the market. Slower economic growth and demand put downward pressure on prices and returned the economy to a healthy growth rate of 2% to 3% a year.
On Aug. 27, 2020, the Federal Reserve announced a change—it will allow a target inflation rate of more than 2% to help ensure maximum employment. Over time, 2% inflation growth is preferred, but the Fed is willing to allow higher rates if inflation has been low for a while.
CPI Affects Other Government Agencies
The government uses the CPI to improve benefit levels for recipients of Social Security and other government programs that provide financial assistance.
CPI Affects Housing and Investments
Landlords use the CPI forecasts to determine future rent increases in contracts.
An increased CPI can depress bond prices, too. Fixed-income investments tend to lose value during inflation. As a result, investors demand higher yields on these investments to make up for the loss in value.
These yield demands can increase interest rates, which then increases costs for businesses borrowing money to expand. The net effect is a decrease in earnings, which could depress the stock market.
The CPI includes two categories with wild price swings: food and energy commodities (oil and gasoline). These products are traded constantly on the commodities market, and traders can bid prices up or down based on news such as wars in oil-producing countries or droughts. As a result, the CPI often reflects these price swings.
The "core" CPI solves the problem of volatile food and energy prices by excluding food and energy. In the past, the Fed considered core CPI when deciding whether to raise the fed funds rate. The core CPI is useful because food, oil, and gas prices are volatile, and the Fed's tools are slow-acting.
Historical CPI & Inflation
The U.S. inflation rate by year shows that fluctuations in CPI used to be much worse. In 1946, inflation hit a record annual high of 18.1% year-on-year.
Inflation next broke a record in 1974, when it hit 12.3% year-on-year while the economy contracted 0.5%. That anomaly is called stagflation.
Deflation occurred between 1930 and 1933. Prices fell 10.7% in September 1932 compared to September 1931. Congress had imposed the Smoot-Hawley Tariff two years earlier, which created a trade war that lowered prices and worsened the Great Depression.
The BLS publishes a handy inflation calculator: plug in a dollar amount and a year, from 1913 to the present, and it will tell you what that amount is or was worth in any other year over the same span. It uses the average Consumer Price Index for each calendar year; for the current year, it uses the latest monthly index.
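A minimal sketch of what such a calculator does under the hood, assuming you already have average annual CPI values on hand (the two index numbers below are illustrative placeholders, not values quoted in this article):

```python
def adjust_for_inflation(amount: float, cpi_then: float, cpi_now: float) -> float:
    """Scale a dollar amount by the ratio of two average annual CPI values."""
    return amount * (cpi_now / cpi_then)

# Example with assumed average annual CPI values for two years.
print(round(adjust_for_inflation(100.0, cpi_then=172.2, cpi_now=258.8), 2))  # ≈ 150.29
```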
Melting and crumbling glaciers are largely responsible for rising sea levels, so learning more about how glaciers shrink is vital to those who hope to save coastal cities and preserve wildlife.
But it is hard to get good pictures and measurements because glaciers typically are in remote, difficult-to-reach, and even dangerous locations. Satellites are often used to measure glacial retreat, but these images are far from complete, especially when it’s cloudy, foggy, raining, or snowing.
So researchers have turned to hydrophones, instruments that use underwater microphones to gather data beyond the reach of any camera or satellite. Hydrophones can record underwater in all conditions. Originally used by the military to detect submarines, hydrophones are now one more tool scientists have to learn about climate change. The devices collect data continuously, are relatively inexpensive, and are easy to deploy and maintain in many different areas of the world, where they monitor sea ice, underwater earthquakes, ship noise, and even wildlife patterns.
Groans, creaks, icebergs’ calving splashes
Oskar Glowacki already knew that melting glacial ice sounds like frying bacon. As ice bubbles burst, anyone nearby can hear crackling and popping, said Glowacki, a postdoctoral scholar at the Scripps Institution of Oceanography. Using hydrophones, he and other scientists now can make more nuanced measurements of how a changing climate sounds underwater, from the groans, creaks and splashes of a calving iceberg to the changes in whale songs as the ocean warms.
Glowacki recently used a pair of hydrophones to study the underwater world of glaciers, publishing his findings in The Cryosphere. He and co-author Grant B. Deane measured glacier retreat by recording the sounds of ice – from small chunks to enormous slabs – falling off the glacier and splashing into the water.
During the summer of 2016, Glowacki’s team placed two hydrophones near Hansbreen Glacier in Hornsund Fjord, Svalbard. For a month and a half, they recorded sounds, also using three time-lapse cameras to collect images – including the “drop height” (how far the ice fell into the water) – so they could compare photos to the recordings. The team created a formula to represent the relationship between the size of a piece of ice falling from a glacier and the sound it makes underwater, also accounting for the pieces of ice falling from varying heights. (Hear an example of the sound an iceberg makes while calving here.)
“Iceberg calving, defined as mechanical loss of ice from the edges of glaciers and ice shelves, is thought to be one of the most important components of the total ice loss,” they note in their paper. They mention also that 32-40% of the Greenland ice sheet’s mass loss is from solid ice discharge. However, Glowacki says it isn’t just one process leading to glaciers losing mass: Surface melt, calving, and under-sea melting are all contributing factors.
Satellites are often used to measure glacial retreat, but the images they provide don't present a complete picture, and Glowacki and other scientists say they hope hydrophones can help provide more answers. Given that glaciers are typically remote and difficult to reach, collecting data remotely is key. Hydrophones, unlike cameras and satellites, can record underwater regardless of weather.
Hydrophones also capture acoustic data on smaller events that are hard to spot in a satellite image. “In a single day, [you] can have 100 or 200 icebergs breaking off from a single glacier,” Glowacki says.
Glowacki says he and his team plan to further study iceberg calving, including studying additional glaciers, and collecting data for longer periods of time.
Unlocking information about Antarctic ice shelf
Other researchers also are using hydrophones to learn more about crumbling glaciers. Bob Dziak, research oceanographer with the NOAA/Pacific Marine Environmental Laboratory acoustics research group, captured a massive calving event of the Nansen Ice Shelf in Antarctica with a hydrophone. He published the results with colleagues in Frontiers in Earth Science.
On April 7, 2016, satellite images showed a massive calving event had occurred on the ice shelf. The paper described it as the “first large scale calving event in >30 years.”
‘Fortuitous timing and proximity’ open way for hearing sounds of ice shelf calving, iceberg formation.
However, once Dziak and colleagues delved into the data from three hydrophones deployed 60 kilometers east of the ice shelf, they uncovered a series of “icequakes” from January to early March 2016. He and other researchers believe that much of the ice actually broke free in mid-January to February, but it remained in the same location until an April storm – which their paper described as the “largest low-pressure storm recorded in the previous seven months” – broke the ice free.
“We suspected that the icebergs broke apart but remained in place – kind of pinned in place – until a major storm with high winds passed through the area and, finally, it was that last push that pushed the icebergs out to sea,” Dziak says.
He and his co-authors wrote that “fortuitous timing and proximity of the hydrophone deployment presented a rare opportunity to study cryogenic signals and ocean ambient sounds of a large-scale ice shelf calving and iceberg formation event.”
Listening to songs of humpback whales
Monterey Bay Aquarium Research Institute studies the ocean, including its acoustics. One of the institute’s projects involves examining the soundscape of California’s Monterey Bay, including sounds from animals, humans, weather, and geologic processes like earthquakes. The researchers once even recorded an under-sea landslide. They also focus on recording and analyzing the songs of humpback whales. Male humpback whales’ songs can be over 15 minutes in length, and they can be repeated for long periods of time – even hours. Listening to these songs and analyzing them can provide unique insights into the lives of these complex animals.
“Any time we want to study marine mammals, sound gives us a window into their lives because they use sound for all of their essential life activities, really,” says institute biological oceanographer John Ryan. “Communication, foraging, reproduction, navigation – depending on the species, of course.”
Previously, scientists had thought singing occurred only during courtship and mating, but now they think whales may also use song while migrating and hunting. They know song has a crucial role in the whales’ lives.
“There’s a whole other dimension to humpback whale song,” Ryan says. “It is a mode of cultural transmission in this species. They learn songs from each other. They share songs as a population, and when populations mix and mingle, they learn new ideas, they explore with their song, improvise, and it’s a real essential part of their culture.”
In 2015, institute researchers placed a hydrophone 3,000 feet deep, recording and analyzing humpback whale songs. Between 2015 and 2018, they collected over 26,000 hours of audio, which they analyzed with computer software. The researchers determined that “peak singing season” ran from November through January, and they found most singing occurred at night. During peak season, songs were heard around 70% of the night.
However, from September 2015 to May 2016, they detected whales singing only about 11% of the time. Those months coincided with a period when the water temperature was especially high, depleting stocks of vital food sources like anchovies and krill, and with a toxic algal bloom. Scientists think the whales may have had to devote more time and energy to finding food, leaving less for singing. As researchers continue to study the world's oceans, they will undoubtedly learn more about underwater mysteries.
Listen to MBARI’s live stream from beneath the bay here.
Kristen Pope is an Idaho-based freelance writer who frequently covers science and conservation-related topics. |
SummaryThis lesson will allow students to explore an important role of environmental engineers: cleaning the environment. Students will learn details about the Exxon Valdez oil spill, which was one of the most publicized and studied environmental tragedies in history. In the accompanying activity, they will try many "engineered" strategies to clean up their own manufactured oil spill and learn the difficulties of dealing with oil released into our waters.
When oil spills occur, environmental engineers help clean them up. They determine which type(s) of cleanup method is best for different situations by examining the weather patterns of the area, the type of oil spilled, and what living creatures and natural environments are being affected by the spill. Their efforts, plus those of many rescue workers, help restore a habitat after such a disaster occurs.
Each TeachEngineering lesson or activity is correlated to one or more K-12 science, technology, engineering or math (STEM) educational standards.
All 100,000+ K-12 STEM standards covered in TeachEngineering are collected, maintained and packaged by the Achievement Standards Network (ASN), a project of D2L (www.achievementstandards.org).
In the ASN, standards are hierarchically structured: first by source; e.g., by state; within source by type; e.g., science or mathematics; within type by subtype, then by grade, etc.
- Use ratio reasoning to convert measurement units; manipulate and transform units appropriately when multiplying or dividing quantities. (Grade 6)
- Fluently divide multi-digit numbers using the standard algorithm. (Grade 6)
- Use ratio and rate reasoning to solve real-world and mathematical problems. (Grade 6)
- Solve real-world and mathematical problems involving the four operations with rational numbers. (Grade 7)
- Identify problems, and propose solutions related to water quality, circulation, and distribution – both locally and worldwide (Grade 6)
- Identify the various causes and effects of water pollution in local and world water distributions (Grade 6)
Before this lesson students should understand the concept of density.
After this lesson, students should be able to:
- Explain an oil spill in terms of density of a liquid.
- Relate oil spills to an environment's ability to provide food, water, space and essential nutrients for its inhabitants.
- Describe one oil spill event in history and use numbers to understand the magnitude of the spill.
- Describe some technologies used by environmental engineers to clean up an oil spill.
The Exxon Valdez oil spill was one of the largest oil spills in history. It took place in Alaska in March 1989, when an oil tanker ran aground, causing 10.8 million gallons of crude oil to spill into Prince William Sound. While this was indeed a huge spill, it was actually only a small fraction of the oil the United States uses in a single day. The United States uses about 700 million gallons of oil every day. That is a lot of oil — enough to completely fill 9 school gymnasiums!
How do we use oil? (Answer: To operate cars and other methods of transportation; to run our electrical power plants; for lubrication, for things such as bicycles and doors; to make paint and varnishes; and to run machinery, among other things.) Oil spills often are caused by human mistakes. Sometimes, they happen when natural weather disasters occur, such as when a hurricane destroys oil mining equipment or oil tankers; other times, they occur when large industry machinery breaks down. Unfortunately, spills are sometimes caused by deliberate acts, such as citizens dumping oil illegally or even during times of conflict (war) as a means of sabotage.
There are several characteristics of oil that make oil spills very dangerous and difficult to clean up. One important quality of oil is that it is less dense than water, which means it will float on water. To motivate students to observe and estimate the densities of several different liquids, have four students volunteer to stand at the front of the classroom and pour equal quantities of the following liquids into one graduated cylinder: water with food coloring, Karo® corn syrup, light vegetable oil, and rubbing alcohol. Have the students rank the liquids from most dense to least dense. (Answer: In order from bottom (most dense) to top (least dense) of the cylinder: Karo® syrup, water, vegetable oil, rubbing alcohol; remember: the oil floats on water.)
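As a cross-check of the expected ordering, here is a small Python sketch using typical handbook densities; the values are approximate and the actual products used in class will vary.

```python
# Approximate densities in g/mL; actual products vary.
liquids = {
    "Karo corn syrup": 1.37,
    "water (with food coloring)": 1.00,
    "light vegetable oil": 0.92,
    "rubbing alcohol (70% isopropyl)": 0.88,
}

# The densest liquid sinks to the bottom of the cylinder; the least dense floats on top.
for name, density in sorted(liquids.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name}: {density} g/mL")
```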
When an oil spill occurs, environmental engineers work to help clean it up. Have you ever noticed a rainbow puddle on the street or parking lot after a rain? That rainbow sheen that you see on top of the water is actually oil from cars that has leaked onto the ground. Environmental engineers are responsible for assessing what type of cleanup method is best for different situations. They examine the weather patterns of the area, the type of oil that was spilled, and what living creatures are, or will be, affected by the spill. Is there a community nearby? Are there a lot of plants or animals in the area? Some of the methods engineers use are dispersants (chemicals used to break down the oil); booms and skimmers (used to contain the oil and avoid spreading); absorbents and vacuum cleaners; burning the oil; and biodegradation (the use of microorganisms that digest oil). Today, we are going to act as environmental engineers and learn how these different techniques can be used to clean up oil spills.
Lesson Background and Concepts for Teachers
Exxon Valdez: What happened?
At 9:12 p.m. on March 23, 1989, the Exxon Valdez departed the oil terminal at Valdez, Alaska, bound through Prince William Sound. There were several people in charge of the ship. The pilot, William Murphy; the captain, Joe Hazelwood; and the helmsman, Harry Claar, were all challenged to steer the 986-ft. ship — carrying 53,094,510 gallons of oil — through the Valdez Narrows. Navigating through the Valdez Narrows is exceptionally challenging because of the Bligh Reef, which makes the narrows just 500 ft. wide. That night the Exxon Valdez came across an iceberg, so Captain Hazelwood ordered the helmsman to turn the ship out of the shipping lanes and around the iceberg. Later that evening the wheelhouse was turned over to Third Mate Gregory Cousins and Helmsman Robert Kagan. They were given specific instructions to turn the ship back into the shipping lanes at a certain point to avoid the reef. However, for reasons still unknown, the vessel was not steered back into the channel, and at 12:04 a.m. on March 24, the Exxon Valdez ran aground on Bligh Reef, spilling 10.8 million gallons into Prince William Sound.
It is unclear who was at fault. The captain had been seen drinking at a local bar and had alcohol in his blood several hours after the accident, but he was asleep when the boat ran aground. Was it his fault? A jury in Alaska ultimately found him not guilty of running the ship aground. Hazelwood was, however, convicted of negligent discharge of oil, fined $50,000 and sentenced to 1,000 hours of community service.
Today, Exxon Shipping Company operates under the new name Sea River Shipping Company. The Exxon Valdez, now the Sea River Mediterranean, was repaired but is prohibited from entering Prince William Sound.
Effects of the Spill
It is estimated that 10.8 million gallons of oil leaked from the Exxon Valdez — enough oil to fill roughly 16 Olympic-sized swimming pools. At the time, the spill was the largest ever in U.S. waters. Oil covered about 1,300 miles of shoreline, with 200 miles heavily covered and 1,100 lightly covered. The impact of an oil spill of this magnitude is tremendous. Exxon has paid $2.1 billion to clean up the spill, but the effects are still being felt today. The full ecological impact is impossible to know because the effects of the spill will still be felt for many years to come. It is estimated, however, that the oil spill killed 250,000 seabirds, 2,800 sea otters, 300 harbor seals, 250 bald eagles, 22 killer whales and billions of salmon and herring eggs. Today, 10 years later, only two of the 23 species that were injured by the spill have recovered.
During cleanup of the Exxon Valdez oil spill, environmental engineers worked to develop methods that could prove helpful in such a large disaster. They tried traditional mechanical methods: backhoes to sift through sand covered with oil-soaked soil, and high-pressure hot- and cold-water treatments to wash the oil off the shore so it could be scooped up by skimmers or adsorbed by adsorbent material. Additionally, they used the then-uncommon method of bioremediation: adding fertilizer to the beaches to promote the growth of bacteria that can degrade the oil. During the Exxon Valdez cleanup, environmental engineers learned a lot and optimized many processes. Bioremediation, for example, was not widely used before the Exxon Valdez oil spill, and now it is used for many different applications. Conversely, hot-water treatment was very common before the Exxon Valdez oil spill. During the Valdez cleanup, however, they found that hot-water treatment actually did more damage than good, because it harmed many of the small organisms living in the water.
More information on the oil spill can be found at the Exxon Valdez Oil Spill Trustee Council website (search for Council website through any Internet search engine).
biodegradation: The use of microorganisms that digest oil as a means of cleaning up oil spills.
bioremediation: Adding fertilizer to oil-covered beaches to promote the growth of bacteria that can degrade the oil.
dense: Having relatively high density or crowded closely together; compact.
density: The mass per unit volume of a substance.
dispersant: A chemical used to break down the oil.
solubility: the ability or tendency of one substance to dissolve into another at a given temperature and pressure; generally expressed in terms of the amount of solute that will dissolve in a given amount of solvent to produce a saturated solution.
- Oil Spill Cleanup - Can we bring back pristine conditions to a beach with an oil spill? Students create small scale models of an oil spill, determine the harmful effects on animals and engineer different methods to clean the oil out of the water.
- Oil on the Ocean - Students learn about the environmental and economic effects of oil spills. Following the steps of the engineering design process, they brainstorm oil spill clean-up methods and then design, build, and re-design oil booms.
Today we looked at the cleanup of oil spills — in particular, the 1989 Exxon Valdez oil spill. We remembered that oil is less dense than water and therefore floats on the water, causing harm to the organisms that need that water source to live. Environmental engineers are often employed to help cleanup oil spills and return an environment's ability to provide food, water, space and essential nutrients back to its original state. We citizens can help prevent some oil-related problems by using less oil in our daily lives, whenever possible.
Discuss with students the effectiveness of cleaning up an oil spill. After trying to contain, clean, dissolve or remove the oil spill with the various utensils and "chemicals," did the students ever reach a pristine environment? (Answer: probably not) Is it better to use more than one technology to clean up the oil spill? (Answer: usually, yes) What are some ideas they have for helping cleanup oil spills? (Answers will vary.)
If time permits, have students compare methods they used to clean up their model oil spills with the methods currently used to clean up actual oil spills. The following comparisons can be made: dispersants (chemicals used to break down the oil) to soap; booms and skimmers (used to contain the oil and avoid spreading) to pieces of thread and cotton balls; absorbents and vacuum cleaners to paper towels and pipettes; and burning the oil to using matches.
Brainstorming: As a class, have the students engage in open discussion. Remind students that in brainstorming, no idea or suggestion is "silly." All ideas should be respectfully heard. Take an uncritical position, encourage wild ideas and discourage criticism of ideas. Have them raise their hands to respond. Write their ideas on the board.
- Ask the students to name one thing they think an environmental engineer does. Have each student write their answer on a piece of paper. After everyone has had a chance to write down an answer, go around the room and ask each student to share it with the class. Keep a list of all ideas on the board. If need be, create categories of similar ideas. Some examples may include: clean up oil spills, protect the environment, treat drinking water, design landfills, and treat wastewater.
Discussion Question: Solicit, integrate and summarize student responses.
- Tell students that in March 1989, nearly 11 million gallons of crude oil spilled into Prince William Sound in Alaska. Ask students what the possible effects of a spill like this would be. Lead an informal discussion. Record answers on the board.
Question/Answer: Ask students the following questions to determine their understanding of the Exxon Valdez oil spill.
- Where did the oil spill occur? (Answer: Prince William Sound, Alaska.)
- What were the main environmental effects of the oil spill? (Answer: It killed more than 250,000 seabirds, 2,800 sea otters, 300 harbor seals, 250 bald eagles, up to 22 killer whales, and billions of salmon and herring eggs.)
Problem Solving: Ask students to calculate how many classrooms the Exxon Valdez oil spill would fill if each classroom is 20' x 20' x 10' (height). (Answer: 10.8 million gallons of oil were spilled. At 7.4805 gallons per cubic foot, that is about 1,443,753 cubic feet, or about 361 classrooms of 20' x 20' x 10', each holding 4,000 cubic feet.)
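A quick Python check of this arithmetic, using the conversion factor of 7.4805 gallons per cubic foot given in the answer:

```python
GALLONS_PER_CUBIC_FOOT = 7.4805
spill_gallons = 10.8e6
classroom_volume = 20 * 20 * 10  # cubic feet

spill_cubic_feet = spill_gallons / GALLONS_PER_CUBIC_FOOT
print(round(spill_cubic_feet))                     # about 1.44 million cubic feet
print(round(spill_cubic_feet / classroom_volume))  # about 361 classrooms
```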
Lesson Summary Assessment
Persuasion Papers/Oral Defense: Engineers often have to explain their point of view to companies, communities or conference attendees. Have students defend the position of an environmental engineer with regard to pollution in a one-page persuasive piece. They should be able to explain the jobs environmental engineers perform, the sources of pollution and why pollution is bad for the environment. If time permits, have them explain their position orally.
Community Debate: Have students write/perform a short play or debate about the lesson topic. The setting is a town meeting about a relevant issue. The people present are: an environmental engineer, an oil company owner, captain of an oil tanker, a local politician, and various citizens. The scenario is a large oil spill (from an oil tanker) in the community marina or local body of water.
Lesson Extension Activities
This lesson provides an opportunity to introduce or review a variety of concepts including solubility, density and biological effects of pollution. Some possible questions for discussion may include: Why did the oil and water not mix? How can we keep the oil from washing onto beaches? What was produced when oil burned? How could the air be polluted? What would happen if the oil had been denser than water? What are some dangers of a large oil spill, such as the Exxon Valdez spill?
Ask the students if they know how an oil spill in a neighboring body of water will impact the economy of a country. Have the students research a few of the most recent oil spills in the world and their resulting economic effects.
Students could be challenged to estimate the relative and exact densities of each liquid used in the demonstration during the Introduction/Motivation section.
ContributorsSharon D. Perez-Suarez; Melissa Straten; Malinda Schaefer Zarske; Janet Yowell
Copyright© 2005 by Regents of the University of Colorado.
Supporting ProgramIntegrated Teaching and Learning Program, College of Engineering, University of Colorado Boulder
The contents of this digital library curriculum were developed under a grant from the Fund for the Improvement of Postsecondary Education (FIPSE), U.S. Department of Education and National Science Foundation GK-12 grant no. 0338326. However, these contents do not necessarily represent the policies of the Department of Education or National Science Foundation, and you should not assume endorsement by the federal government. |
When solving inequalities, do not forget that multiplying or dividing by a negative number reverses the inequality sign: for example, −x > 3 becomes x < −3. Below is a graph for 2x − y = 1; the formula can be rearranged as y = 2x − 1 and results in a straight line. Solving trig inequalities is tricky work that often leads to errors and mistakes; after solving a trig inequality by the algebraic method, you can check the answers by graphing the trig function. Solve an inequality: this page will show you how to solve a relationship involving an inequality. Note that the inequality is already put in for you; please do not type it anywhere, just fill in what's on the left and right side of your inequality. Learn for free about math, art, computer programming, economics, physics, chemistry, biology, medicine, finance, history, and more; Khan Academy is a nonprofit with the mission of providing a free, world-class education for anyone, anywhere. Linear inequalities may look intimidating, but they're really not much different from linear equations. In this lesson, we'll practice solving a variety of linear inequalities.
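As a minimal check of the sign-flip rule described above, assuming SymPy is available (the inequality −x > 3 is only an illustration):

```python
from sympy import symbols
from sympy.solvers.inequalities import solve_univariate_inequality

x = symbols("x", real=True)

# Dividing both sides of -x > 3 by -1 must flip the sign, giving x < -3.
result = solve_univariate_inequality(-x > 3, x, relational=True)
print(result)  # prints a relation equivalent to x < -3
```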
To solve an inequality with two inequality symbols, use equation-solving methods to solve for the variable, and be sure to perform the same operations on all parts separated by an inequality symbol. The previous inequalities are called linear inequalities because we are dealing with linear expressions like x − 2 (x > 2 is just x − 2 > 0, before you finished solving it). When we have an inequality with x² as the highest-degree term, it is called a quadratic inequality; the method of solution is more complicated. The inequalities worksheets are randomly created and will never repeat, so you have an endless supply of quality inequalities worksheets to use in the classroom or at home; our inequalities worksheets are free to download, easy to use, and very flexible. Fun and easy learning for kids on inequalities: iPracticeMath provides math tests, practice, and worksheets for students in grades 1 to 12.
Examples of how to solve and graph linear inequalities. Example 1: solve and graph the solution of the inequality. To solve this inequality, we want to find all values of x that can satisfy it; this means there are almost infinite values of x which, when substituted, would yield true statements. The indeterminate for which you solve an equation, an inequality, or a system: an identifier or an indexed identifier; a, b: arithmetical expressions; vars: a nonempty set or list of indeterminates for which you solve an equation, an inequality, or a system; eqs. Improve your math knowledge with free questions in "solve linear inequalities" and thousands of other math skills. Logarithmic inequalities are inequalities in which one (or both) sides involve a logarithm. Like exponential inequalities, they are useful in analyzing situations involving repeated multiplication, such as in the cases of interest and exponential decay. The key to working with logarithmic inequalities is the following fact: if.
Two-step inequalities are slightly more complicated than one-step inequalities (duh); this is a worked example of solving a two-step inequality involving ⅔ and −4y − 8⅓. To solve a quadratic inequality, follow these steps: solve the inequality as though it were an equation; the real solutions to the equation become boundary points for the solution to the inequality. Solving inequalities is not so different from solving regular equations; in fact, an inequality sign (<, >, ≤, ≥) is treated the same as an equal (=) sign when solving inequalities involving only addition or subtraction. Before all that, let us define the different inequality signs. Equations and inequalities involving signed numbers: in chapter 2 we established rules for solving equations using the numbers of arithmetic; now that we have learned the operations on signed numbers, we will use those same rules to solve equations that involve negative numbers. Just like with equations, the solution to an inequality is a value that makes the inequality true. You can solve inequalities in the same way you can solve equations, by following these rules.
Solving and graphing inequalities: if you are beginning your study of inequalities, I have a lot of lessons for you to study. As you check them out below, make sure you start right here on this page for a quick introduction to basic inequalities, solving, and graphing. Compound inequalities: so far, we've just been solving inequalities with two parts, a left side and a right side; but sometimes we'll have inequalities with three parts, and these are called compound inequalities. Algebra inequalities: problems with answers from the Cymath solver; Cymath is an online math equation solver and mobile app. Students learn that when solving an inequality, such as −3x > 12, the goal is the same as when solving an equation: to get the variable by itself on one side. Note that when multiplying or dividing both sides of an inequality by a negative number, the direction of the inequality sign must be switched.
Solving trigonometric inequalities (concept, methods, and steps), by Nghi H. Nguyen. Definition: a trig inequality is an inequality in standard form, R(x) > 0 (or < 0), that contains one or a few trig functions of the variable. Solving compound inequalities: a compound inequality contains at least two inequalities that are separated by either "and" or "or"; the graph of a compound inequality with an "and" represents the intersection of the graphs of the inequalities. QuickMath allows students to get instant solutions to all kinds of math problems, from algebra and equation solving right through to calculus and matrices; the inequalities section lets you solve an inequality or a system of inequalities for a single variable, and you can also plot inequalities in two variables. Examples on solving inequalities and representing them on a number line; mini-plenary with variables on both sides and a grade B style plenary question included; all work is differentiated with answers throughout. |
Join Vince Kotchian for an in-depth discussion in this video Perimeter and area, part of Learning Everyday Math.
- All right, in this video, I'm going to talk about a couple of different concepts, perimeter and area. So, perimeter, is the measure of the distance surrounding something flat, like a picture frame, maybe, or a fenced in yard. So, if you see this, imagine this is a picture frame, let's say. We're going to get the perimeter by adding up all the distances surrounding the thing. So we often call these length and width. So, usually the shorter distance is the width, so I'll put a w there, and the longer distance is the length, so I would add up the two lengths, and the two widths.
So, let's say that the width is five inches, and the length is 10 inches. I would just add them all up, so there's two fives, five up here, five up here, and two 10s, so 10 plus 10, plus five plus five is a perimeter of 30 inches. It's a little more complicated if I have a circle, there's a formula for the perimeter of a circle, and really, what we're getting, when we talk about the perimeter of a circle, is the circumference of the circle.
So the formula for that is two pi r. So, pi is actually a number, this little symbol right here, and the number is approximately 3.14. So, the r in the formula stands for the radius of the circle. So I'll explain what that is. If we draw a line straight across the circle, that's what we call the circle's diameter, so that's the distance all the way across the circle.
The radius is one half of the diameter, and we usually call the radius r. So let's say we have a circle here, and we know the radius is four. We would just put it into our formula, to solve for its circumference or its perimeter. So we would say two times pi, and I'll just write out pi. If you have a calculator, you may have a button for pi, for now I'll just write it out, times the radius. If we put that in our calculator, that will give us the perimeter of the circle, AKA its circumference.
So I'll clear the screen so I can talk about the next concept, which is area. So, area is the measure of space within something flat, again, a yard or maybe a circular skating rink perhaps. So there's a few different ways we need to calculate area depending on what kind of shape we have. Go back to our rectangle, or a rectangle. The area is the width times the length. So let's say, again, we had the width of five, and let's say the length is 10.
We would just multiply those two numbers to get the area, so area of a rectangle is length times width, so five times 10 in this case, would be 50. The area of a triangle is what we call the base times the height, divided by two, so for the area of a triangle, we need to calculate what the height is, and to do that, we need to draw a line that's perpendicular to the base, and perpendicular means at a right angle, to the base.
So let's say the base of this triangle, and I'll call it this side, was 10. And let's say the height is 12. So we would multiply the base times the height, 12 times 10 is 120. And divide it by two to get the area, which is 60. For the area of the circle, we're going to use that number pi again. There's a formula for this. So the area of a circle is pi times the radius squared.
So let's say, again, we have a circle with a radius of five. We would square that radius so five to the second power is 25, we'd write that as 25 times pi. A lot of times in math, we leave pi as pi, and we don't turn it into that 3.14, even though it is that number. So, one more thing I want you to notice about all these areas, they're expressed in square units, so going back to this picture frame, let's say that this was 10 inches by five inches.
We would express the area as 50 square inches, and that's just how we express area, in terms of square units, and the exact same thing for the circle. It would be 25 pi square inches. So I'll clear my screen, give you another example of when we can use area. So in real life, let's say you're ordering a pizza. And you want to figure out, what really, is the difference between maybe a medium and a large pizza in terms of its size, and we can express its size in terms of its area.
So let's say we have a couple of different options. Let's say we have a 10 inch pizza, versus a 12 inch pizza. So let's think about these pizzas for a second. The 10 inch pizza, that means it has a diameter of 10 inches. So we just learned that the radius is half the diameter, so the radius of this pizza, would be five, and the bigger pizza has a diameter of 12 inches, that would mean its radius is half of that, so its radius would be 6.
So, doesn't seem like a big difference, maybe, at first, but let's look at the pizzas' areas if we calculate them. So remember, the area of a circle, is pi times the radius squared. So for the smaller pizza, we would have five squared is 25 times pi, and for the larger pizza, we would have six squared, which is 36 times pi. So I'm going to just approximate a little bit here, using three for pi, this would turn into about, a little bit more than 75 square inches.
And using, again, three for pi, this would turn into 108 square inches. So, the pizza on the left is actually, not even three quarters of the pizza on the right, so just a way to kind of get a sense of how big shapes are by calculating their areas. So, remember, you can solve for perimeter by finding the distance measured around something. Solve for the area by remembering the formula for the shape, and solving for the distance inside something.
Now, open up the Exercise Files for Chapter three, go to the file for perimeter and area, and try some problems on your own.
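For anyone following along at a keyboard, here is a small Python sketch of the pizza comparison from the transcript, using math.pi rather than the rough value of 3 used above.

```python
import math

def circle_area_from_diameter(diameter):
    """Area of a circle given its diameter: pi * r^2 with r = diameter / 2."""
    radius = diameter / 2
    return math.pi * radius ** 2

small, large = circle_area_from_diameter(10), circle_area_from_diameter(12)
print(round(small, 1), round(large, 1))  # about 78.5 vs 113.1 square inches
print(round(small / large, 2))           # about 0.69 -- under three quarters
```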
- Practicing mental math
- Understanding decimals and fractions
- Working with percentages and ratios
- Solving equations
- Calculating area and volume
- Scaling recipes
- Estimating your running speed
- Calculating the area for home-improvement projects
- Working with probability |
H.1 Introduction to relativity
H.1.1 Describe what is meant by a frame of reference
A frame of reference refers to a point of view. Physics refers to observational frames of reference, i.e. what is heard, seen, touched, smelt or tasted from a certain point of view.
e.g. you have a frame of reference (in a computer chair) that views the world as stationary. A frame of reference with the sun as stationary would view you as moving, along with the Earth, around the sun.
H.1.2 Describe what is meant by a Galilean Transformation
A Galilean transformation is the intuitive, everyday way of relating two reference frames; the equations do not involve relativistic effects.
H.1.3 Solve problems involving relative velocities using the Galilean transformation equations
This should be easy. Position equation: x' = x − vt
Velocity equation: u' = u − v
e.g. You are sitting on a park bench. You see a bike and a car moving away from each other. The bike is moving at 5 m/s, the car at 20 m/s. How fast is the bike moving in the car's reference frame? (Ans: 25 m/s)
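A minimal Python sketch of this example, taking velocities toward the right as positive (so the bike is at +5 m/s and the car at −20 m/s in the bench frame):

```python
def galilean_velocity(u, v):
    """Velocity u of an object, seen from a frame moving at velocity v (u' = u - v)."""
    return u - v

u_bike = 5.0    # m/s in the park-bench frame
v_car = -20.0   # m/s in the park-bench frame (moving the opposite way)
print(galilean_velocity(u_bike, v_car))  # 25.0 m/s, as in the answer
```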
H.2 Concepts and postulates of special relativity
H.2.1 Describe what is meant by an inertial frame of reference
A reference frame that is moving with a constant velocity (i.e., one that is not accelerating). This really isn't that essential but you might see a fundamental flaw in this definition (just ignore it if you do).
H.2.2 State the two postulates of the special theory of relativity
- the speed of light in a vacuum is constant for all inertial observers.
- The laws of physics are the same for all inertial observers.
H.2.3 Discuss the concept of simultaneity
Consult a textbook (or youtube "simultaneity"), best explained with diagrams.
In summary though, simultaneous events that take place at the same point in space will be simultaneous to all observers. However, events that take place at different points in space can be simultaneous for one observer but not for another.
H.3 Relativistic Kinematics
H.3.1 Describe the concept of a light clock
Best described by diagram.
Imagine a clock where light is bounced off two mirrors. mirror| light beam --> mirror |
Each time it hits a mirror a "tick" is registered. Since the speed of light is the same in all reference frames, this is the most accurate type of clock.
This seems simple for a stationary clock. HOWEVER, in a reference frame where the clock moves, the light must travel a diagonal distance, which is greater than the distance in the frame where the clock is stationary. In that case a "tick" takes longer in one reference frame than in the other; therefore, the moving clock runs slow compared with the stationary one.
H.3.2 Define proper time interval
The proper time interval is the time between two events as measured by an observer for whom both events occur at the same point in space (i.e., measured on a clock that is present at both events).
H.3.3 Derive the time dilation formula
The time dilation formula can be derived using Pythagoras's theorem. The length of the light clock is l, so in the clock's rest frame one tick takes t = l/c. In the observer's frame the light travels a longer, diagonal path of length l' = ct', while the clock itself moves a horizontal distance vt'.
l² + (vt')² = (l')²
t'² = (l')²/c²
t'² = (l² + (vt')²)/c²
After rearranging (using l = ct), t' = t / √(1 − v²/c²) = γt
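A small numerical sketch of the result; the speed of 0.8c used below is just an example value.

```python
from math import sqrt

C = 2.998e8  # speed of light in m/s

def lorentz_factor(v):
    """gamma = 1 / sqrt(1 - v^2/c^2)"""
    return 1.0 / sqrt(1.0 - (v / C) ** 2)

def dilated_time(proper_time, v):
    """Time measured in the frame where the clock moves at speed v."""
    return lorentz_factor(v) * proper_time

v = 0.8 * C
print(round(lorentz_factor(v), 3))     # about 1.667
print(round(dilated_time(1.0, v), 3))  # 1 s of proper time appears as about 1.667 s
```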
H.3.4 Sketch and annotate a graph showing the variation with relative velocity of the Lorentz factor
H.3.5 Solve problems involving time dilation
H.3.6 Define proper length
The length of an object as measured in a reference frame in which the object is at rest (its rest frame).
H.3.7 Describe the phenomenon of length contraction
H.3.8 Solve problems involving length contraction
H.4 Some consequences of special relativity
H.4.1. Describe how the concept of time dilation leads to the "twin paradox"
H.4.2 Discuss the Hafele-Keating experiment
Testing the theory of special relativity, scientists flew two clocks around the world in opposite directions and compared them to a clock that was stationary relative to the earth's surface. One plane flew eastward and another went westward. When the planes returned, it was found that for the plane flying east, its clock was slower than the clock on the earth's surface (it was behind by 59 ns). Since the earth spins to the east and the plane was travelling east relative to the earth's atmosphere, this clock had a greater velocity relative to the stationary frame of reference (the centre of the Earth), and so its clock ticked a little slower. The westward clock ran faster than the clock that remained on the earth's surface, as it was moving more slowly relative to the centre of the earth. This experiment provided evidence of time dilation that was in excellent agreement with relativistic predictions.
H.4.3 Solve one-dimensional problems involving the relativistic addition of velocities
H.4.4 State the formula representing the equivalence of mass and energy
H.4.5 Define rest mass
The rest mass m0 of an object is its mass as measured in the frame in which it is at rest; equivalently, the rest energy m0c² is the energy required to produce the object at rest.
H.4.6 Distinguish between the energy of a body at rest and its total energy when moving
H.4.7 Explain why no object can ever attain the speed of light in a vacuum
According to classical mechanics, F = ma; therefore if a force is applied to an object constantly over a very long time period, the speed of the object should increase without limit. However, this is not what happens: as the speed of the object increases, the relativistic mass of the object increases, so the acceleration gradually decreases. Only particles with no rest mass (such as the photon) can travel at the speed of light.
H.4.8 Determine the total energy of an accelerated particle
The total energy E, momentum p, and rest energy E0 (= m0c²) are related by E² = p²c² + E0².
H.5 Evidence to support special relativity
H.5.1 Discuss muon decay as experimental evidence to support special relativity
H.5.2 Solve problems involving the muon decay experiment
H.5.3 Outline the Michelson-Morley experiment
H.5.4 Discuss the result of the Michelson-Morley experiment and its implication
H.5.5 Outline an experiment that indicates that the speed of light in vacuum is independent of its source
H.6 Relativistic momentum and energy
H.6.1 Apply the relation for the relativistic momentum p = γm0u of particles
H.6.2 Apply the formula Ek = (γ-1)m0c2 for the kinetic energy of a particle
H.6.3 Solve problems involving relativistic momentum and energy
H.7 General relativity
H.7.1 Explain the difference between the terms gravitational mass and inertial mass
Gravitational mass is the mass determined from the gravitational force acting on an object. Inertial mass is the mass determined from how the object resists acceleration when an external force acts on it. Experimentally, the gravitational and inertial masses are exactly the same, which is why uniform acceleration is indistinguishable from a gravitational field.
H.7.2 Describe and discuss Einstein's principle of equivalence
The principle of equivalence states that there is no difference between an accelerating observer and an observer in a gravitational field. This principle underpins general relativity, and the IB exams focus on light being bent by a gravitational field; Einstein stated this idea with his closed elevator thought experiment: light bends the same way inside an elevator at rest in a gravitational field as inside one accelerating upward in empty space. A second consequence of the principle of equivalence is that time slows down near a massive body such as a black hole.
H.7.3 Deduce that the principle of equivalence predicts bending of light rays in a gravitational field
H.7.4 Deduce that the principle of equivalence predicts that time slows down near a massive body
H.7.5 Describe the concept of spacetime
Spacetime is the four-dimensional world with three space coordinates and one time coordinate.
H.7.6 State that moving objects follow the shortest path between two points in spacetime
In the absence of any forces, a moving object moves through spacetime along the path of shortest length. Such a path is called a geodesic.
H.7.7 Explain gravitational attraction in terms of the warping of spacetime by matter
Large masses will warp space-time in such a way that the shortest distance to be travelled between point A and B for a particle is now a curve around the large mass. As such, this curved path that the particle follows can be thought of as the gravitational attraction.
H.7.8 Describe black holes
A black hole is a singularity of spacetime: a point of infinite density that causes extreme curvature of the spacetime around it.
H.7.9 Define the term Schwarzschild radius
Schwarzschild radius (Rs) is sometimes called gravitational radius, or event horizon. Within Rs, no object can escape from the gravitational field (because the escape velocity exceeds c within Rs).
H.7.10 Calculate the Schwarzschild radius
Rs = 2GM/c², where G is the gravitational constant and M is the mass of the black hole (or the star).
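A quick sketch of this calculation in Python, using standard (approximate) values for G, c, and the solar mass:

```python
G = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8       # speed of light, m/s
M_SUN = 1.989e30  # solar mass, kg

def schwarzschild_radius(mass):
    """Rs = 2GM / c^2, in metres."""
    return 2 * G * mass / C**2

print(schwarzschild_radius(M_SUN))       # roughly 2.95e3 m for one solar mass
print(schwarzschild_radius(10 * M_SUN))  # roughly 3e4 m for a 10-solar-mass black hole
```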
H.7.11 Solve problems involving time dilation close to a black hole
H.7.12 Describe the concept of gravitational red-shift
H.7.13 Solve problems involving frequency shifts between different points in a uniform gravitational field
H.7.14 Solve problems using the gravitational time dilation formula
H.8 Evidence to support general relativity
H.8.1 Outline an experiment for the bending of EM waves by a massive object
H.8.2 Describe gravitational lensing
Because a massive galaxy has a large gravitational pull, light from a distant quasar behind it is bent, so that two images of the quasar appear when viewed through a telescope. The galaxy acts as a lens as it bends the incoming light from the quasar.
H.8.3 Outline an experiment that provides evidence for gravitational red-shift
Pound-Rebka experiment, in which a photon was sent from the ground floor of a building up to the attic. They found that the frequency of the photon was lower at the attic than at the ground floor, which provides evidence for gravitational red-shift. |
What is the special factoring method?
Some special factoring formulas include the difference of two squares, the sum of two cubes, and the difference of two cubes. If there are three terms or more in the polynomial, students can use strategies such as finding common factors and factoring by grouping.
How do you solve special products?
These special product formulas are as follows:
- (a + b)(a + b) = a^2 + 2ab + b^2.
- (a – b)(a – b) = a^2 – 2ab + b^2.
- (a + b)(a – b) = a^2 – b^2.
What are special factors?
Special factors means the factors that the IEP team shall consider when the team develops each child’s IEP, as provided in 34 CFR 300.324(a)(2) and in Ed 1100.
What are the types of special cases?
Factoring Special Cases
- Factor a perfect square trinomial.
- Factor a difference of squares.
- Factor a sum and difference of cubes.
- Factor an expression with negative or fractional exponents.
What is special product in algebra?
Lesson Summary: Special products are simply special cases of multiplying certain types of binomials together. We have three special products: (a + b)(a + b), (a – b)(a – b), and (a + b)(a – b).
What are the special products and factor types?
58 Factor Special Products
- Factor perfect square trinomials.
- Factor differences of squares.
- Factor sums and differences of cubes.
- Choose method to factor a polynomial completely.
Which of the following would be considered a special factor that must be considered by the IEP team?
IDEA lists five special factors that the IEP team must consider in the development, review, and revision of each child’s IEP: behavior, limited English proficiency, Braille and children with blindness or visual impairment, communication needs (especially important for children who are deaf or hard of hearing), and …
Why is special education so special?
Special education is ‘special’ because it has a distinct place in the education of not only individuals with disabilities but also diverse learners, including those who are at risk.
What is meant by special case?
Definition of special case : a case the proceedings under which are different from those of the regular common law or equity actions: such as. a : an action or proceeding established by statute to provide new rights or remedies.
What are the special cases of factor polynomials?
The special cases are: binomials that are the difference of two squares, a² – b², which factors as (a + b)(a – b). For some polynomials, you may need to combine techniques (looking for common factors, grouping, and using special products) to factor the polynomial completely.
Can the sum of squares and cubes be factored?
Although the sum of squares cannot be factored, the sum of cubes can be factored into a binomial and a trinomial. Similarly, the difference of cubes can be factored into a binomial and a trinomial, but with different signs.
What is the factored form of a difference of squares?
A difference of squares can be rewritten as two factors containing the same terms but opposite signs. Confirm that the first and last term are perfect squares. Write the factored form as (a + b)(a − b). Example: factor 9x² − 25.
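Assuming SymPy is available, a quick way to confirm a difference-of-squares factorization such as 9x² − 25:

```python
from sympy import symbols, factor, expand

x = symbols("x")

factored = factor(9 * x**2 - 25)
print(factored)                           # (3*x - 5)*(3*x + 5)
print(expand(factored) == 9 * x**2 - 25)  # True: expanding recovers the original
```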
What is an example of a factor equation?
Example: Factor x² + 6x + 9.
x² + 3x + 3x + 9: Rewrite 6x as 3x + 3x, since 3 • 3 = 9.
(x² + 3x) + (3x + 9): Group pairs of terms.
x(x + 3) + 3(x + 3): Factor x out of the first pair, and factor 3 out of the second pair.
(x + 3)(x + 3), or (x + 3)²: Factor out x + 3. (x + 3)(x + 3) can also be written as (x + 3)². |
Blood pressure; Systolic blood pressure
Blood pressure is a measurement of the force applied to the walls of the arteries as the heart pumps blood through the body. The pressure is determined by the force and amount of blood pumped, and the size and flexibility of the arteries.
Blood pressure is continually changing depending on activity, temperature, diet, emotional state, posture, physical state, and medication use.
How the test is performed
Blood pressure is usually measured while you are seated with your arm resting on a table. Your arm should be slightly bent so that it is at the same level as your heart. The upper arm should be bare, with your sleeve comfortably rolled up.
Blood pressure readings are usually given as 2 numbers: for example, 110 over 70 (written as 110/70). The first number is the systolic blood pressure reading, and it represents the maximum pressure exerted when the heart contracts. The second number is the diastolic blood pressure reading, and it represents the pressure in the arteries when the heart is at rest.
To obtain your blood pressure measurement, your health care provider will wrap the blood pressure cuff snugly around your upper arm, positioning it so that the lower edge of the cuff is 1 inch above the bend of the elbow.
The provider will locate the large artery on the inside of the elbow by feeling for the pulse and will place the head of the stethoscope over this artery, below the cuff. It should not rub the cuff or any clothing because these noises may block out the pulse sounds. Correct positioning of the stethoscope is important to get an accurate recording.
Your provider will close the valve on the rubber inflating bulb and then will squeeze it rapidly to inflate the cuff until the dial or column of mercury reads 30 mmHg (millimeters of mercury) higher than the usual systolic pressure. If the usual systolic pressure is unknown, the cuff is inflated to 210 mmHg.
Now the valve is opened slightly, allowing the pressure to fall gradually (2 to 3 mmHg per second). As the pressure falls, the level on the dial or mercury tube at which the pulsing is first heard is recorded. This is the systolic pressure.
As the air continues to be let out, the sounds will disappear. The point at which the sound disappears is recorded. This is the diastolic pressure (the lowest amount of pressure in the arteries as the heart rests).
The procedure may be performed 2 or more times.
How to prepare for the test
The test may be done at any time. When it is performed for comparison purposes, it is usually done after resting for at least 5 minutes. All you need to perform a blood pressure measurement is a cuff and a device for detecting the pulse in the artery (stethoscope or microphone).
Infants and children:
The preparation you can provide for this test depends on your child’s age, previous experiences, and level of trust. For general information regarding how you can prepare your child, see the following topics:
- Infant test or procedure preparation (birth to 1 year)
- Toddler test or procedure preparation (1 to 3 years)
- Preschooler test or procedure preparation (3 to 6 years)
- Schoolage test or procedure preparation (6 to 12 years)
- Adolescent test or procedure preparation (12 to 18 years)
How the test will feel
You will feel the pressure of the cuff on your arm. If the test is repeated a few times, you may feel temporary numbness or tingling in your hand.
Why the test is performed
Most people cannot sense if their blood pressure is high (hypertension) because there are usually no symptoms. High blood pressure increases the risk of heart failure, heart attack, stroke, and kidney failure. For people who have high blood pressure, this test is a way of monitoring the effectiveness of medications and dietary modifications.
Low blood pressure may be a sign of a variety of illnesses, including heart failure, infection, gland disorders, and dehydration.
In adults, the systolic pressure should be less than 120 mmHg, and the diastolic pressure should be less than 80 mmHg.
What abnormal results mean
- Pre-high blood pressure: systolic pressure consistently 120 to 139, or diastolic 80 to 89
- Stage 1 high blood pressure: systolic pressure consistently 140 to 159, or diastolic 90 to 99
- Stage 2 high blood pressure: systolic pressure consistently 160 or over, or diastolic 100 or over
- Hypotension (blood pressure below normal): may be indicated by a systolic pressure lower than 90, or a pressure 25 mmHg lower than usual
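These cutoffs are easy to encode. Below is a minimal Python sketch that applies them to a single reading; it is illustrative only, since real screening relies on repeated measurements and clinical judgment, and the "25 mmHg lower than usual" criterion needs a patient baseline, so it is omitted here.

```python
def classify_blood_pressure(systolic, diastolic):
    """Rough category for one reading, using the cutoffs listed above."""
    if systolic >= 160 or diastolic >= 100:
        return "stage 2 high blood pressure"
    if systolic >= 140 or diastolic >= 90:
        return "stage 1 high blood pressure"
    if systolic >= 120 or diastolic >= 80:
        return "pre-high blood pressure"
    if systolic < 90:
        return "possible hypotension"
    return "normal"

print(classify_blood_pressure(110, 70))  # normal
print(classify_blood_pressure(150, 85))  # stage 1 high blood pressure
```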
Blood pressure readings may be affected by many different conditions, including:
- Cardiovascular disorders
- Neurological conditions
- Kidney and urological disorders
- Pre-eclampsia in pregnant women
- Psychological factors such as stress, anger, or fear
- Various medications
- “White coat hypertension” may occur if the medical visit itself produces extreme anxiety
What the risks are
There are no significant risks associated with checking blood pressure.
Consult your provider if your blood pressure measurements are consistently high or low or if you have symptoms at the same time as the high or low reading.
Repeated measurements are important for screening or monitoring. A single high measurement does not necessarily mean hypertension. A single normal measurement does not necessarily mean that high blood pressure is not present.
by Janet G. Derge, M.D.
All ArmMed Media material is provided for information only and is neither advice nor a substitute for proper medical care. Consult a qualified healthcare professional who understands your particular history for individual concerns. |
Last weekend (April 27, 2013), the Fermi and Swift spacecraft witnessed a “shockingly” bright burst of gamma rays from a dying star. Named GRB 130427A, it produced one of the longest lasting and brightest GRBs ever detected.
Because Swift was able to rapidly determine the GRB’s position in the sky, and also because of the duration and brightness of the burst, the GRB was able to be detected in optical, infrared and radio wavelengths by ground-based observatories. Astronomers quickly learned that the GRB had one other near-record breaking quality: it was relatively close, as it took place just 3.6 billion light-years away.
“This GRB is in the closest 5 percent of bursts, so the big push now is to find an emerging supernova, which accompanies nearly all long GRBs at this distance,” said Neil Gehrels, principal investigator for Swift.
“We have waited a long time for a gamma-ray burst this shockingly, eye-wateringly bright,” said Julie McEnery, project scientist for the Fermi Gamma-ray Space Telescope. “The GRB lasted so long that a record number of telescopes on the ground were able to catch it while space-based observations were still ongoing.”
No two GRBs are the same, but they are usually classified as either long or short depending on the burst’s duration. Long bursts are more common and last between 2 seconds and several minutes; short bursts last less than 2 seconds, meaning the action can be all over in only milliseconds.
This recent event started just after 3:47 a.m. EDT on April 27. Fermi’s Gamma-ray Burst Monitor (GBM) triggered on the eruption of high-energy light in the constellation Leo. The burst occurred as NASA’s Swift satellite was slewing between targets, which delayed its Burst Alert Telescope’s detection by a few seconds.
Fermi’s Large Area Telescope (LAT) recorded one gamma ray with an energy of at least 94 billion electron volts (94 GeV), or some 35 billion times the energy of visible light, and about three times greater than the LAT’s previous record. The GeV emission from the burst lasted for hours, and it remained detectable by the LAT for the better part of a day, setting a new record for the longest gamma-ray emission from a GRB.
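The “35 billion times the energy of visible light” comparison follows directly from the photon energies involved. As a rough check (assuming a mid-visible photon of about 2.7 eV, an assumption not stated in the article):

```python
# Rough check of the energy comparison quoted above.
gamma_ray_energy_eV = 94e9        # the ~94 GeV photon recorded by the LAT
visible_photon_energy_eV = 2.7    # assumed mid-visible photon (~460 nm)

ratio = gamma_ray_energy_eV / visible_photon_energy_eV
print(f"ratio ~ {ratio:.2e}")     # about 3.5e10, i.e. roughly 35 billion
```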
As for the optical brightness of this event, a note posted on the BAUT Forum (the Universe Today and Bad Astronomy forum) reported that data from the SARA-North 1-meter telescope at Kitt Peak in Arizona at about 04:00 UT on April 29 showed an apparent magnitude of about 18.5.
Gamma-ray bursts are the universe’s most luminous explosions, and come from the explosion of massive stars or the collision of two pulsars. Bursts from colliding pulsars are usually short, so astronomers can rule out a pulsar collision as the cause of this long-duration event.
If the GRB is near enough, astronomers usually discover a supernova at the site a week or so after the outburst.
NASA said that ground-based observatories are monitoring the location of GRB 130427A and expect to find an underlying supernova by midmonth.
According to astronomer Andrew Levan, there’s an old adage in studying gamma ray bursts: “When you’ve seen one gamma ray burst, you’ve seen … only one gamma ray burst. They aren’t all the same,” he said during a press briefing on April 16 discussing the discovery of a very different kind of GRB – a type that comes in a new long-lasting flavor.
Three of these unusual long-lasting stellar explosions have recently been discovered using the Swift satellite and other international telescopes, and one, named GRB 111209A, is the longest GRB ever observed, with a duration of at least 25,000 seconds, or about 7 hours.
“We have observed the longest gamma ray burst in modern history, and think this event is caused by the death of a blue supergiant,” said Bruce Gendre, a researcher now associated with the French National Center for Scientific Research who led this study while at the Italian Space Agency’s Science Data Center in Frascati, Italy. “It caused the most powerful stellar explosion in recent history, and likely since the Big Bang occurred.”
The astronomers said these three GRBs represent a previously unrecognized class of these stellar explosions, which arise from the catastrophic deaths of supergiant stars hundreds of times larger than our Sun. GRBs are the most luminous and mysterious explosions in the Universe. The blasts emit surges of gamma rays — the most powerful form of light — as well as X-rays, and they produce afterglows that can be observed at optical and radio energies.
Swift, the Fermi telescope and other spacecraft detect an average of about one GRB each day. As to why this type of GRB hasn’t been detected before, Levan explained that the new type appears to be difficult to find because of how long the bursts last.
“Gamma ray telescopes usually detect a quick spike, and you look for a burst — at how many gamma rays come from the sky,” Levan told Universe Today. “But these new GRBs put out energy over a long period of time, over 10,000 seconds instead of the usual 100 seconds. Because it is spread out, it is harder to spot, and only since Swift launched do we have the ability to build up images of GRBs across the sky. To detect this new kind, you have to add up all the light over a long period of time.”
Levan is an astronomer at the University of Warwick in Coventry, England.
He added that these long-lasting GRBs were likely more common in the Universe’s past.
Traditionally, astronomers have recognized two types of GRBs: short and long, based on the duration of the gamma-ray signal. Short bursts last two seconds or less and are thought to represent a merger of compact objects in a binary system, with the most likely suspects being neutron stars and black holes. Long GRBs may last anywhere from several seconds to several minutes, with typical durations falling between 20 and 50 seconds. These events are thought to be associated with the collapse of a star many times the Sun’s mass and the resulting birth of a new black hole.
“It’s a very random process and every GRB looks very different,” said Levan during the briefing. “They all have a range of durations and a range of energies. It will take a much bigger sample to see if this new type has more complexities than regular gamma-ray bursts.”
All GRBs give rise to powerful jets that propel matter at nearly the speed of light in opposite directions. As they interact with matter in and around the star, the jets produce a spike of high-energy light.
Gendre and his colleagues made a detailed study of GRB 111209A, which erupted on Dec. 9, 2011, using gamma-ray data from the Konus instrument on NASA’s Wind spacecraft, X-ray observations from Swift and the European Space Agency’s XMM-Newton satellite, and optical data from the TAROT robotic observatory in La Silla, Chile. The 7-hour burst is by far the longest-duration GRB ever recorded.
Another event, GRB 101225A, exploded on December 25, 2010 and produced high-energy emission for at least two hours. Subsequently nicknamed the “Christmas burst,” the event’s distance was unknown, which led two teams to arrive at radically different physical interpretations. One group concluded the blast was caused by an asteroid or comet falling onto a neutron star within our own galaxy. Another team determined that the burst was the outcome of a merger event in an exotic binary system located some 3.5 billion light-years away.
“We now know that the Christmas burst occurred much farther off, more than halfway across the observable universe, and was consequently far more powerful than these researchers imagined,” said Levan.
Using the Gemini North Telescope in Hawaii, Levan and his team obtained a spectrum of the faint galaxy that hosted the Christmas burst. This enabled the scientists to identify emission lines of oxygen and hydrogen and determine how much these lines were displaced to lower energies compared to their appearance in a laboratory. This difference, known to astronomers as a redshift, places the burst some 7 billion light-years away.
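The redshift itself is simple arithmetic once a line is identified: z is the fractional shift of the observed wavelength from its laboratory value, and turning z into a distance then requires a cosmological model. A minimal sketch, using an illustrative rest line (H-alpha) and a made-up observed wavelength rather than the actual Gemini measurements:

```python
# Redshift from an identified emission line: z = (lambda_obs - lambda_rest) / lambda_rest.
# The numbers below are illustrative only, not the measured values for the Christmas burst.
lambda_rest_nm = 656.3    # H-alpha rest (laboratory) wavelength
lambda_obs_nm = 1212.0    # hypothetical observed wavelength in the host-galaxy spectrum

z = (lambda_obs_nm - lambda_rest_nm) / lambda_rest_nm
print(f"z = {z:.3f}")     # converting z to a light-travel distance needs a cosmological model
```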
Levan’s team also examined 111209A and the more recent burst 121027A, which exploded on Oct. 27, 2012. All show similar X-ray, ultraviolet and optical emission and all arose from the central regions of compact galaxies that were actively forming stars. The astronomers have concluded that all three GRBs constitute a new kind of GRB, which they are calling “ultra-long” bursts.
“Ultra-long GRBs arise from very large stars,” said Levan, “perhaps as big as the orbit of Jupiter. Because the material falling onto the black hole from the edge of the star has further to fall it takes longer to get there. Because it takes longer to get there, it powers the jet for a longer time, giving it time to break out of the star.”
Levan said that Wolf-Rayet stars best fit the description. “They are born with more than 25 times the Sun’s mass, but they burn so hot that they drive away their deep, outermost layer of hydrogen as an outflow we call a stellar wind,” he said. Stripping away the star’s atmosphere leaves an object massive enough to form a black hole but small enough for the particle jets to drill all the way through in times typical of long GRBs.
John Graham and Andrew Fruchter, both astronomers at the Space Telescope Science Institute in Baltimore, provided details showing that these blue supergiants contain relatively modest amounts of elements heavier than helium, which astronomers call metals. This fits an apparent puzzle piece: ultra-long GRBs seem to have a strong intrinsic preference for low-metallicity environments that contain just trace amounts of elements other than hydrogen and helium.
“High metallicity long duration GRBs do exist but are rare,” said Graham. “They occur at about 1/25th the rate (per unit of star formation) of the low metallicity events. This is good news for us here on Earth, as the likelihood of this type of GRB going off in our own galaxy is far less than previously thought.”
The astronomers discussed their findings Tuesday at the 2013 Huntsville Gamma-ray Burst Symposium in Nashville, Tenn., a meeting sponsored in part by the University of Alabama at Huntsville and NASA’s Swift and Fermi Gamma-ray Space Telescope missions. Gendre’s findings appear in the March 20 edition of The Astrophysical Journal.
Caption: Artist’s impression of ESA’s orbiting gamma-ray observatory Integral. Image credit: ESA
Integral, ESA’s International Gamma-Ray Astrophysics Laboratory, launched ten years ago this week. This is a good time to look back at some of the highlights of the mission’s first decade and ahead to its future, and to study the details of the most sensitive, accurate, and advanced gamma-ray observatory ever launched. The mission has also produced some exciting recent research on a supernova remnant.
Integral is a truly international mission, with the participation of all ESA member states plus the United States, Russia, the Czech Republic, and Poland. It launched from Baikonur, Kazakhstan on October 17th 2002. It was the first space observatory to simultaneously observe objects in gamma rays, X-rays, and visible light. Gamma rays from space can only be detected above Earth’s atmosphere, so Integral circles the Earth in a highly elliptical orbit once every three days, spending most of its time at an altitude over 60 000 kilometres – well outside the Earth’s radiation belts – to avoid interference from background radiation. It can detect radiation from events far away and from the processes that shape the Universe. Its principal targets are gamma-ray bursts, supernova explosions, and regions in the Universe thought to contain black holes.
At 5 metres high and more than 4 tonnes in weight, Integral has two main parts. The service module, the lower part of the satellite, contains all the spacecraft subsystems required to support the mission: solar power generation, power conditioning and control, data handling, telecommunications, and thermal, attitude and orbit control. The payload module is mounted on the service module and carries the scientific instruments. It weighs 2 tonnes, making it the heaviest payload ESA has ever placed in orbit; the detectors need a large area to capture sparse, penetrating gamma rays, along with heavy shielding against background radiation to keep them sensitive. There are two main instruments detecting gamma rays: an imager producing some of the sharpest gamma-ray images and a spectrometer that gauges gamma-ray energies very precisely. Two other instruments, an X-ray monitor and an optical camera, help to identify the gamma-ray sources.
During its extended ten-year mission Integral has charted in extensive detail the central region of our Milky Way, the Galactic Bulge, rich in variable high-energy X-ray and gamma-ray sources. The spacecraft has mapped, for the first time, the entire sky at the specific energy produced by the annihilation of electrons with their positron anti-particles. According to the gamma-ray emission seen by Integral, some 15 million trillion trillion trillion pairs of electrons and positrons are being annihilated every second near the Galactic Centre – equivalent to over six thousand times the luminosity of our Sun.
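That luminosity figure can be checked by hand: each annihilation releases twice the electron rest energy (about 1.022 MeV), so the quoted rate of roughly 1.5 × 10^43 pairs per second works out to a few thousand solar luminosities. A quick back-of-the-envelope calculation using standard constants:

```python
# Back-of-the-envelope check of the annihilation luminosity quoted above.
pairs_per_second = 1.5e43                  # "15 million trillion trillion trillion" per second
energy_per_pair_J = 1.022e6 * 1.602e-19    # 2 x 511 keV per annihilation, converted to joules
solar_luminosity_W = 3.828e26

power_W = pairs_per_second * energy_per_pair_J
print(f"{power_W:.2e} W, about {power_W / solar_luminosity_W:.0f} solar luminosities")
# roughly 2.5e30 W, i.e. over six thousand times the Sun's output
```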
A black-hole binary, Cygnus X-1, is currently in the process of ripping a companion star to pieces and gorging on its gas. Studying this extremely hot matter just a millisecond before it plunges into the jaws of the black hole, Integral has discovered that some of it might be escaping along structured magnetic field lines. By studying the alignment of the waves of high-energy radiation originating from the Crab Nebula, Integral found that the radiation is strongly aligned with the rotation axis of the pulsar. This implies that a significant fraction of the particles generating the intense radiation must originate from an extremely organised structure very close to the pulsar, perhaps even directly from the powerful jets beaming out from the spinning stellar core.
Just today ESA reported that Integral has made the first direct detection of radioactive titanium associated with supernova remnant 1987A. Supernova 1987A, located in the Large Magellanic Cloud, was close enough to be seen by the naked eye in February 1987, when its light first reached Earth. Supernovae can shine as brightly as entire galaxies for a brief time due to the enormous amount of energy released in the explosion, but after the initial flash has faded, the total luminosity comes from the natural decay of radioactive elements produced in the explosion. The radioactive decay might have been powering the glowing remnant around Supernova 1987A for the last 20 years.
During the peak of the explosion elements from oxygen to calcium were detected, which represent the outer layers of the ejecta. Soon after, signatures of the material from the inner layers could be seen in the radioactive decay of nickel-56 to cobalt-56, and its subsequent decay to iron-56. Now, after more than 1000 hours of observation by Integral, high-energy X-rays from radioactive titanium-44 in supernova remnant 1987A have been detected for the first time. It is estimated that the total mass of titanium-44 produced just after the core collapse of SN1987A’s progenitor star amounted to 0.03% of the mass of our own Sun. This is close to the upper limit of theoretical predictions and nearly twice the amount seen in supernova remnant Cas A, the only other remnant where titanium-44 has been detected. It is thought both Cas A and SN1987A may be exceptional cases.
Christoph Winkler, ESA’s Integral Project Scientist, says: “Future science with Integral might include the characterisation of high-energy radiation from a supernova explosion within our Milky Way, an event that is long overdue.”
What would a gamma-ray burst sound like? No one really knows, but members of the team that work with the Fermi Large Area Telescope (LAT) have translated gamma-ray measurements into musical notes and have created a “song” from the photons of one of the most energetic of these powerful explosions, GRB 080916C, which occurred in September 2008.
“In translating the gamma-ray measurements into musical notes we assigned the photons to be “played” by different instruments (harp, cello, or piano) based on the probabilities that they came from the burst,” the team wrote in the Fermi blog. “By converting gamma rays into musical notes, we have a new way of representing the data and listening to the universe.”
When it comes to high-energy sources, no one knows them better than NASA’s Fermi Gamma-ray Space Telescope. Taking a portrait of the entire sky every 240 minutes, the program is continually renewing and updating its sources and once a year the scientists harvest the data. These annual gatherings are then re-worked with new tools to produce an ever-deeper look into the Universe around us.
Fermi is famous for its analysis of steady gamma-ray sources, numerous transient events, the dreaded GRB and even flares from the Sun. Its all-sky map absolutely bristles with the energy that’s out there and earlier this year a second catalog of objects was released to eager public eyes. An astounding 1,873 objects were detected by the satellite’s Large Area Telescope (LAT) and this high energy form of light is turning some heads.
“More than half of these sources are active galaxies, whose massive black holes are responsible for the gamma-ray emissions that the LAT detects,” said Gino Tosti, an astrophysicist at the University of Perugia in Italy and currently a visiting scientist at SLAC National Accelerator Laboratory in Menlo Park, California.
One of the scientists who led the new compilation, Tosti presented a paper on the catalog at a meeting of the American Astronomical Society’s High Energy Astrophysics Division in Newport, R.I. “What is perhaps the most intriguing aspect of our new catalog is the large number of sources not associated with objects detected at any other wavelength,” he noted.
If we were to look at Fermi’s gathering experience as a harvest, we’d see two major components – crops and mystery. Add to that a bushel of pulsars, a basket of supernova remnants and a handful of other things, like galaxies and globular clusters. For Fermi farmers, new types of gamma-ray-emitting objects from “unassociated sources” account for about 31% of the cash crop. However, the brave little Fermi LAT is producing results from some highly unusual sources. Mystery growth? Think of it this way: if it’s a light source, then it has a spectrum, and gamma rays are seen at different energies. “At some energy, the spectra of many objects display what astronomers call a spectral break, that is, a greater-than-expected drop-off in the number of gamma rays seen at increasing energies.” Let’s take a look at two…
Within our galaxy is 2FGL J0359.5+5410. Right now, scientists just don’t understand what it is… only that it’s located in the constellation Camelopardalis. Since it appears about midplane, we’re just assuming it belongs to the Milky Way. From its spectrum, it might be a pulsar – but one without a pulse. Or how about 2FGL J1305.0+1152? It lies smack dab in the middle of galaxy country – Virgo. Even after two years, Fermi can’t tease out any more details. It doesn’t even have a spectral break!
NASA’s Swift, Hubble Space Telescope and Chandra X-ray Observatory have teamed up to study one of the most puzzling cosmic blasts yet observed. More than a week later, high-energy radiation continues to brighten and fade from its location.
Astronomers say they have never seen anything this bright, long-lasting and variable before. Usually, gamma-ray bursts mark the destruction of a massive star, but flaring emission from these events never lasts more than a few hours.
Although research is ongoing, astronomers say that the unusual blast likely arose when a star wandered too close to its galaxy’s central black hole. Intense tidal forces tore the star apart, and the infalling gas continues to stream toward the hole. According to this model, the spinning black hole formed an outflowing jet along its rotational axis. A powerful blast of X- and gamma rays is seen if this jet is pointed in our direction.
On March 28, Swift’s Burst Alert Telescope discovered the source in the constellation Draco when it erupted with the first in a series of powerful X-ray blasts. The satellite determined a position for the explosion, now cataloged as gamma-ray burst (GRB) 110328A, and informed astronomers worldwide.
As dozens of telescopes turned to study the spot, astronomers quickly noticed that a small, distant galaxy appeared very near the Swift position. A deep image taken by Hubble on April 4 pinpoints the source of the explosion at the center of this galaxy, which lies 3.8 billion light-years away.
That same day, astronomers used NASA’s Chandra X-ray Observatory to make a four-hour-long exposure of the puzzling source. The image, which locates the object 10 times more precisely than Swift can, shows that it lies at the center of the galaxy Hubble imaged.
“We know of objects in our own galaxy that can produce repeated bursts, but they are thousands to millions of times less powerful than the bursts we are seeing now. This is truly extraordinary,” said Andrew Fruchter at the Space Telescope Science Institute in Baltimore.
“We have been eagerly awaiting the Hubble observation,” said Neil Gehrels, the lead scientist for Swift at NASA’s Goddard Space Flight Center in Greenbelt, Md. “The fact that the explosion occurred in the center of a galaxy tells us it is most likely associated with a massive black hole. This solves a key question about the mysterious event.”
Most galaxies, including our own, contain central black holes with millions of times the sun’s mass; those in the largest galaxies can be a thousand times larger. The disrupted star probably succumbed to a black hole less massive than the Milky Way’s, which has a mass four million times that of our sun.
Astronomers previously have detected stars disrupted by supermassive black holes, but none have shown the X-ray brightness and variability seen in GRB 110328A. The source has repeatedly flared. Since April 3, for example, it has brightened by more than five times.
Scientists think that the X-rays may be coming from matter moving near the speed of light in a particle jet that forms as the star’s gas falls toward the black hole.
“The best explanation at the moment is that we happen to be looking down the barrel of this jet,” said Andrew Levan at the University of Warwick in the United Kingdom, who led the Chandra observations. “When we look straight down these jets, a brightness boost lets us view details we might otherwise miss.”
This brightness increase, which is called relativistic beaming, occurs when matter moving close to the speed of light is viewed nearly head on.
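In the standard description, the boost is governed by the Doppler factor δ = 1 / [γ (1 − β cos θ)], where γ is the bulk Lorentz factor, β = v/c, and θ is the angle between the jet and the line of sight; the observed flux then scales as a power of δ (commonly quoted as δ³ or δ⁴, depending on the jet geometry and spectrum). The sketch below uses purely illustrative values, not parameters derived for this source:

```python
import math

def doppler_factor(gamma, theta_deg):
    """Relativistic Doppler factor for bulk Lorentz factor gamma and
    viewing angle theta_deg between the jet axis and the line of sight."""
    beta = math.sqrt(1.0 - 1.0 / gamma**2)
    theta = math.radians(theta_deg)
    return 1.0 / (gamma * (1.0 - beta * math.cos(theta)))

gamma = 10.0   # illustrative bulk Lorentz factor, not a measured value for GRB 110328A
for theta in (0.0, 5.0, 30.0):
    d = doppler_factor(gamma, theta)
    print(f"theta = {theta:4.1f} deg   delta = {d:6.2f}   flux boost (delta^3) ~ {d**3:8.1f}")
```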
Astronomers plan additional Hubble observations to see if the galaxy’s core changes brightness.
Gamma Ray Bursts (GRBs) are among the most energetic phenomena astronomers regularly observe. These events are triggered by massive explosions, and a large amount of the energy is focused into narrow beams that sweep across the universe. These beams are so tightly concentrated that they can be seen across the visible universe and allow astronomers to probe the universe’s history. If such an event happened in our galaxy and we stood in the path of the beam, the effects would be pronounced and might lead to mass extinctions. Yet one of the most energetic GRBs on record (GRB 080607) was shrouded in a cloud of gas and dust that dimmed the blast by a factor of 20 – 200, depending on the wavelength. Despite this strong veil, the GRB was still bright enough to be detected by small optical telescopes for over an hour. So what can this hidden monster tell astronomers about ancient galaxies and GRBs in general?
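For reference, a dimming factor of 20 to 200 corresponds to roughly 3 to 6 magnitudes of extinction, since the magnitude change is 2.5 times the base-10 logarithm of the flux ratio:

```python
import math

# Convert the quoted dimming factors into magnitudes of extinction:
# delta_m = 2.5 * log10(flux ratio).
for factor in (20, 200):
    delta_m = 2.5 * math.log10(factor)
    print(f"dimming x{factor:3d}  ->  {delta_m:.1f} magnitudes of extinction")
```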
GRB 080607 was discovered on June 7, 2008 by the Swift satellite. Since GRBs are short-lived events, searches for them are automated, and upon detection the Swift satellite immediately oriented itself towards the source. Other GRB-hunting satellites quickly joined in, and ground-based observatories, including ROTSE-III and Keck, made observations as well. This large collection of instruments allowed astronomers, led by D. A. Perley of UC Berkeley, to develop a strong understanding of not just the GRB, but also the obscuring gas. Given that the host galaxy lies at a distance of over 12 billion light years, this has provided a unique probe into the nature of the environment of such distant galaxies.
One of the most surprising features was unusually strong absorption near 2175 Å. Although such absorption has been noticed in other galaxies, it has been rare in galaxies at such large cosmological distances. In the local universe, this feature seems to be most common in dynamically stable galaxies but tends to be “absent in more disturbed locations such as the SMC, nearby starburst galaxies” as well as some regions of the Milky Way in which more turbulence is present. The team uses this feature to infer that the host galaxy was stable as well. Although this feature is familiar in nearby galaxies, observing it in this case makes it the furthest known example of the phenomenon. The precise cause of this feature is not yet known, although other studies have indicated “polycyclic aromatic hydrocarbons and graphite” are possible suspects.
Earlier studies of this event have shown other novel spectral features. A paper by Sheffer et al. notes that the spectrum also revealed molecular hydrogen. Again, such a feature is common in the local universe and in many other galaxies, but never before had such an observation been linked to a galaxy in which a GRB occurred. Molecular hydrogen (as well as other molecular compounds) becomes dissociated at high temperatures like those found in galaxies with large amounts of star formation, which produce regions with large stars capable of triggering GRBs. With the observation of one molecule in hand, Sheffer’s team suspected that there might be large amounts of other molecules, such as carbon monoxide (CO). This too was detected, making yet another first for the odd environment of a GRB host.
This unusual environment may help to explain a class of GRBs known as “subluminous optical bursts” or “dark bursts” in which the optical component of the burst (especially the afterglow) is less bright than would be predicted by comparison to more traditional GRBs.
A record-breaking gamma ray burst from beyond the Milky Way temporarily blinded the X-ray eye on NASA’s Swift space observatory on June 21, 2010. The X-rays traveled through space for 5 billion years before slamming into and overwhelming the space-based telescope. “This gamma-ray burst is by far the brightest light source ever seen in X-ray wavelengths at cosmological distances,” said David Burrows, senior scientist and professor of astronomy and astrophysics at Penn State University and the lead scientist for Swift’s X-ray Telescope (XRT).
A gamma-ray burst is a violent eruption of energy from the explosion of a massive star morphing into a new black hole. This mega burst, named GRB 100621A, is the brightest X-ray source that Swift has detected since the observatory began X-ray observation in early 2005.
Although the Swift satellite was designed specifically to study gamma-ray bursts, the instrument was not designed to handle an X-ray blast this bright. “The intensity of these X-rays was unexpected and unprecedented,” said Neil Gehrels, Swift’s principal investigator at NASA’s Goddard Space Flight Center. “Just when we were beginning to think that we had seen everything that gamma-ray bursts could throw at us, this burst came along to challenge our assumptions about how powerful their X-ray emissions can be.”
What is a supernova? Well, “nova” means “new star”, and “super” means “really big”, like supermarket, so a supernova is a really bright new star. That’s where the word comes from, but today it has a rather more precise meaning, namely a once-off variable star which has a peak brightness similar to, or greater than, that of a typical galaxy.
Supernovae aren’t new stars in the sense that they were not stars before they became supernovae; the progenitor – what the star was before it went supernova – of a supernova is just a star (or a pair of stars), albeit an unusual one.
From what we see – the rise of the intensity of light (and electromagnetic radiation in general) to a peak, its decline; the lines which show up in the spectra (and the ones which don’t), etc – we can classify supernovae into several different types. There are two main types, called Type I and Type II. The difference between them is that Type I supernovae have no lines of hydrogen in their spectra, while Type II ones do.
Centuries of work by astronomers and physicists have given us just two kinds of progenitors: white dwarfs and massive (more than 8 solar masses) stars; and just two key physical mechanisms: nuclear detonation and core collapse.
Core collapse supernovae happen when a massive star tries to fuse iron in its core … bad move, because fusing iron requires energy (rather than liberating it), and the core suddenly collapses under its own gravity. A lot of interesting physics happens when such a core collapses, but the result is either a neutron star or a black hole, and a vast amount of energy is produced (most of it in the form of neutrinos!). These supernovae can be of any type, except a sub-type of Type I (called Ia). They also produce the long gamma-ray bursts (GRBs).
Detonation is when a white dwarf star undergoes almost simultaneous fusion of carbon or oxygen throughout its entire body (it can do this because a white dwarf has the same temperature throughout, unlike an ordinary star, because its electrons are degenerate). There are at least two ways such a detonation can be triggered: steady accumulation of hydrogen transferred from a close binary companion, or a collision or merger with a neutron star or another white dwarf. These supernovae are all Type Ia.
One other kind of supernova: when two neutron stars merge, or a roughly solar-mass black hole and a neutron star merge – as a result of loss of orbital energy due to gravitational wave radiation – an intense burst of gamma-rays results, along with a fireball and an afterglow (as the fireball cools). We see such an event as a short GRB, but if we were unlucky enough to be close to such a stellar death, we’d certainly see it as a spectacular supernova!
Are the relativistic jets of long gamma ray bursts (GRBs) produced by brand new black holes? Do some core-collapse supernovae result in black holes and relativistic jets?
The answer to both questions is ‘very likely, yes’! And what recent research points to those answers? Study of an Ic supernova (SN 2007gr), and an Ibc one (SN 2009bb), by two different teams, using archived Gamma-Ray Burst Coordination Network data, and trans-continental Very Long Baseline Interferometry (VLBI) radio observations.
“In every respect, these objects look like gamma-ray bursts – except that they produced no gamma rays,” said Alicia Soderberg at the Harvard-Smithsonian Center for Astrophysics in Cambridge, Mass.
Soderberg led a team that studied SN 2009bb, a supernova discovered in March 2009. It exploded in the spiral galaxy NGC 3278, located about 130 million light-years away.
The other object is SN 2007gr, which was first detected in August 2007 in the spiral galaxy NGC 1058, some 35 million light-years away (it’s one of the closest Ic supernovae detected in the radio waveband). The team which studied this supernova using VLBI was led by Zsolt Paragi at the Netherlands-based Joint Institute for Very Long Baseline Interferometry in Europe, and included Chryssa Kouveliotou, an astrophysicist at NASA’s Marshall Space Flight Center in Huntsville, Alabama.
The researchers searched for gamma-rays associated with the supernovae using archived records in the Gamma-Ray Burst Coordination Network located at NASA’s Goddard Space Flight Center in Greenbelt, Md. This project distributes and archives observations of gamma-ray bursts by NASA’s SWIFT spacecraft, the Fermi Gamma-ray Space Telescope and many others. However, no bursts coincided with the supernovae.
“The explosion dynamics in typical supernovae limit the speed of the expanding matter to about three percent the speed of light,” explained Kouveliotou, co-author of one of the new studies. “Yet, in these new objects, we’re tracking gas moving some 20 times faster than this.”
Unlike typical core-collapse supernovae, the stars that produce long gamma-ray bursts possess a “central engine” – likely a nascent black hole – that drives particle jets clocked at more than 99 percent the speed of light (short GRBs are likely produced by the collision/merger of two neutron stars, or a neutron star and a stellar mass black hole).
By contrast, the fastest outflows detected from SN 2009bb reached 85 percent of the speed of light and SN 2007gr reached more than 60 percent of light speed; this is “mildly relativistic”.
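“Mildly relativistic” can be made concrete with the Lorentz factor γ = 1/√(1 − β²): at 60 and 85 percent of light speed γ is only about 1.25 and 1.9, while matter moving at more than 99 percent of light speed already has γ above 7. A quick computation of the values quoted above:

```python
import math

def lorentz_factor(beta):
    """Lorentz factor for speed beta = v/c."""
    return 1.0 / math.sqrt(1.0 - beta**2)

for label, beta in [("SN 2007gr outflow (~0.60 c)", 0.60),
                    ("SN 2009bb outflow (~0.85 c)", 0.85),
                    ("long-GRB jet (>0.99 c)", 0.99)]:
    print(f"{label:30s} gamma = {lorentz_factor(beta):5.2f}")
```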
“These observations are the first to show some supernovae are powered by a central engine,” Soderberg said. “These new radio techniques now give us a way to find explosions that resemble gamma-ray bursts without relying on detections from gamma-ray satellites.”
The VLBI radio observations showcase how the new electronic capabilities of the European VLBI Network empower astronomers to react quickly when transient events occur. The team led by Paragi included 14 members from 12 institutions spread over seven countries: the United States, the Netherlands, Hungary, the United Kingdom, Canada, Australia and South Africa.
“Using the electronic VLBI technique eliminates some of the major issues,” said Huib Jan van Langevelde, the director of JIVE. “Moreover it allows us to produce immediate results necessary for the planning of additional measurements.”
Perhaps as few as one out of every 10,000 supernovae produce gamma rays that we detect as a long gamma-ray burst. In some cases, the star’s jets may not be angled in a way to produce a detectable burst; in others, the energy of the jets may not be enough to allow them to blast through the overlying bulk of the dying star.
“We’ve now found evidence for the unsung crowd of supernovae – those with relatively dim and mildly relativistic jets that only can be detected nearby,” Kouveliotou said. “These likely represent most of the population.”
MICE target during development testing. Credit: STFC
The target used to generate the muons for the experiment. Credit: STFC
Since the 1930s, accelerators have been used to make ever more energetic proton, electron, and ion beams. These beams have been used in practically every scientific field, from colliding particles in the Large Hadron Collider to measuring the chemical structure of drugs, treating cancers and the manufacture of the ubiquitous silicon microchip.
Now, the international Muon Ionization Cooling Experiment (MICE) collaboration, which includes many UK scientists, has made a major step forward in the quest to create an accelerator for an entirely different sort of particle, a muon. A muon accelerator could replace the Large Hadron Collider (LHC), providing at least a ten-fold increase in energy for the creation of new particles.
Until now, the question has been whether you can channel enough muons into a small enough volume to be able to study physics in new, unexplored systems. This new research, published in Nature today, shows that it is possible. The results of the experiment, carried out using the MICE muon beam-line at the Science and Technology Facilities Council (STFC) ISIS Neutron and Muon Beam facility on the Harwell Campus in the UK, clearly show that ionization cooling works and can be used to channel muons into a tiny volume.
“The enthusiasm, dedication, and hard work of the international collaboration and the outstanding support of laboratory personnel at STFC and from institutes across the world have made this game-changing breakthrough possible,” said Professor Ken Long from Imperial College London, spokesperson for the experiment.
Dr Chris Rogers, based at ISIS and the collaboration’s Physics Co-ordinator, explained: “MICE has demonstrated a completely new way of squeezing a particle beam into a smaller volume. This technique is necessary for making a successful muon collider, which could outperform even the LHC.”
Muons have many uses – they can be used to study the atomic structure of materials, they can be used as a catalyst for nuclear fusion and they can be used to see through really dense materials which X-rays can't get through. The research team hopes that this technique can help produce good quality muon beams for these applications as well.
Muons are produced by smashing a beam of protons into a target. The muons can then be separated off from the debris created at the target and directed through a series of magnetic lenses. Because of this rough-and-ready production mechanism, these muons form a diffuse cloud – so when it comes to colliding the muons, the chances of them hitting each other and producing interesting physical phenomena is really low.
To make the cloud less diffuse, a process called beam cooling is used. This involves getting the muons closer together and moving in the same direction. Magnetic lenses can get the muons closer together, or get them moving in the same direction, but not both at the same time.
A major obstacle to cooling a muon beam is that muons only live for two millionths of a second, and previous methods developed to cool beams take hours to achieve an effect. In the 1970s a new method called 'ionization cooling' was suggested, and it was developed into theoretically operable schemes in the 1990s. The hurdle of testing this idea in practice remained formidable.
The MICE collaboration developed the completely new method to tackle this unique challenge, cooling the muons by putting them through specially-designed energy-absorbing materials such as lithium hydride, a compound of lithium metal and hydrogen, or liquid hydrogen cooled to around minus 250 degrees Celsius and encased by incredibly thin aluminium windows. This was done while the beam was very tightly focussed by powerful superconducting magnetic lenses. The measurement is so delicate that it requires measuring the beam particle-by-particle using particle physics techniques rather than the usual accelerator diagnostics.
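The principle behind the measurement can be sketched in a few lines: in the absorber a muon loses momentum along its direction of travel, shrinking both its transverse and longitudinal components, while the RF re-acceleration that follows restores only the longitudinal component, so the transverse spread of the beam shrinks pass after pass. The toy model below ignores multiple scattering, energy straggling and muon decay, all of which matter in the real experiment; it is only meant to illustrate the geometry of the effect, not MICE's actual analysis.

```python
import random

# Toy model of ionization cooling: each cell is an absorber (a uniform fractional
# momentum loss along the muon's direction of travel) followed by RF re-acceleration
# that restores only the longitudinal momentum. Scattering, straggling and decay
# are ignored, so this only illustrates why the transverse spread shrinks.
random.seed(1)
muons = [{"pz": 200.0, "px": random.gauss(0.0, 20.0)} for _ in range(2000)]  # MeV/c

def rms_px(beam):
    return (sum(m["px"] ** 2 for m in beam) / len(beam)) ** 0.5

loss_fraction = 0.05   # fraction of momentum lost in each absorber (illustrative value)
nominal_pz = 200.0     # the RF cavities restore pz to this value

print(f"before cooling: rms transverse momentum = {rms_px(muons):.1f} MeV/c")
for cell in range(20):
    for m in muons:
        m["px"] *= (1.0 - loss_fraction)   # absorber shrinks the whole momentum vector...
        m["pz"] *= (1.0 - loss_fraction)
        m["pz"] = nominal_pz               # ...but RF restores only the longitudinal part
print(f"after 20 cells:  rms transverse momentum = {rms_px(muons):.1f} MeV/c")
```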
After cooling the beam, the muons can be accelerated by a normal particle accelerator in a precise direction, making it much more likely for the muons to collide. Alternatively, the cold muons can be slowed down so that their decay products can be studied.
Professor Alain Blondel, spokesperson of MICE from 2001 to 2013, and Emeritus Professor at the University of Geneva, said: “We started MICE studies in 2000 with great enthusiasm and a strong team from all continents. It is a great pride to see the demonstration achieved, just at a time when it becomes evident to many new people that we must include muon machines in the future of particle physics."
“In this era of ever more-expensive particle accelerators, MICE points the way to a new generation of cost-effective muon colliders,” said Professor Dan Kaplan, Director of the IIT Center for Accelerator and Particle Physics in Chicago.
Professor Paul Soler from the University of Glasgow and UK Principal Investigator said: "Ionization cooling is a game-changer for the future of high-energy muon accelerators, such as a muon collider, and we are extremely grateful to all the international funding agencies, including STFC in the UK, for supporting the experiment and to the staff at the ISIS neutron and muon source for hosting the facility that made this result possible.”
“Demonstration of cooling by the Muon Ionization Cooling Experiment” was published in Nature on 5 February.
Read more about the MICE collaboration: http://www.mice.iit.edu/
Science in action: https://www.bbc.co.uk/sounds/play/w3csym32
Confederate States of America
|Confederate States of America|
|February 4, 1861 – May 10, 1865|
|Flag||Coat of Arms|
|Area||770,425 sq mi|
The Confederate States of America (informally, the Confederacy) was a government created from an alliance of eleven southern states which had seceded from the United States between December 1860 and April 1861. The American Civil War, begun by the Confederate shelling of Fort Sumter, proved disastrous; four years of savage fighting ended with the fledgling government defeated and dissolved, and left the southern states a financial and industrial wreck. The main reason for secession was to preserve slavery—but all the slaves were emancipated with no compensation to the owners. After the war, the states were readmitted during Reconstruction.
For the social, political, economic and diplomatic history see American Civil War homefront
- 1 Beginnings
- 2 Geography
- 3 Economy
- 4 Ideology
- 5 Leadership
- 6 Flags of the Confederacy
- 7 Diplomacy
- 8 Economics
- 9 Legacy of destruction
- 10 Legacy
- 11 Bibliography
- 12 See also
- 13 External links
Seven southern states seceded from the United States of America over the winter of 1860-61 and joined together as the "Confederate States of America" to protect their sovereignty and economic status. They saw that the antislavery forces in the North were gaining strength, typified by the election of Abraham Lincoln as president in 1860. The political future for slavery and for Southern commerce was bleak, as the South was losing relative strength in Congress. Arguing that their Constitutional states' rights protected the extension of slavery into America's western territories, they saw that issue rejected in the North. The Union government rejected the claims that a state had a right to secede.
When fighting began in April, Lincoln called on all states to send troops; at this point four more states broke away and joined the Confederacy: Virginia, North Carolina, Tennessee and Arkansas.
Of the 15 slave states, four remained in the United States: Delaware, Kentucky, Maryland and Missouri. Residents of the latter three states raised regiments for the Confederacy, although not as an official act of their governments. In addition, Kentucky and Missouri saw the establishment of Confederate legislatures within their borders which sent delegates to the Confederate Congress. West Virginia broke away from Virginia during the war and joined the Union as a state.
The president of the Confederate States of America was Jefferson Davis, a former Secretary of War under President Franklin Pierce and Senator from Mississippi. Richmond, Virginia became the capital of the Confederacy after that state seceded. It was a poor choice for a war capital because it was hard to supply and hard to defend.
Starting in 1862 General Robert E. Lee led the Confederate Army of Northern Virginia against the Union armies, which were led by various generals appointed by Abraham Lincoln, the last and most successful being General Ulysses S. Grant. Lee proved a tenacious defender of Richmond, which had an exposed position and a long, difficult supply line. Grant wore down Lee's army, which was unable to replace its casualties and supplies or to stem desertions.
The Confederate States of America claimed a total of 2,919 miles (4,698 km) of coastline. A large part of this territory lay on the sea coast with level and sandy ground. The interior portions were hilly and mountainous, and the far western territories were deserts. The lower reaches of the Mississippi River bisected the country, with the western half referred to as the Trans-Mississippi.
Much of the area claimed by the CSA had a humid subtropical climate with mild winters and long, hot, humid summers. The climate varied to semi-arid steppe and arid desert west of longitude 96 degrees west. The subtropical climate made winters mild but allowed infectious diseases to flourish. Consequently, disease killed more soldiers than did enemy action.
In peacetime, the vast system of navigable rivers allowed for cheap and easy transportation of farm products. The railroad system was built as a supplement, tying plantation areas to the nearest river or seaport. The vast geography made for difficult Union logistics, and Union soldiers were used to garrison captured areas and protect rail lines. But the Union Navy seized most of the navigable rivers by 1862, making its own logistics easy and Confederate movements difficult. After the fall of Vicksburg in July 1863, it became impossible for units of any size to cross the Mississippi since Union gunboats constantly patrolled it. The South thus lost use of its western regions.
The area claimed by the Confederate States of America was overwhelmingly rural. Small towns of more than 1,000 were few — the typical county seat had a population of less than 500 people. Cities were rare. New Orleans was the only Southern city in the list of top 10 largest U.S. cities in the 1860 census, and it was captured by the Union in 1862. Only 13 Confederate cities ranked among the top 100 U.S. cities in 1860, most of them ports whose economic activities were shut down by the Union blockade. The population of Richmond swelled after it became the national capital, reaching an estimated 128,000 in 1864 (Dabney 1990:182). Other large Southern cities such as Baltimore, St. Louis, Louisville, and Washington, as well as Wheeling, West Virginia, and Alexandria, Virginia (both located in territory that had officially seceded), were never under the control of the Confederate government.
|#||City||1860 Population||US Rank||Year returned to US control|
|1.||New Orleans, Louisiana||168,675||6||1862|
|2.||Charleston, South Carolina||40,522||22||1865|
|13.||Wilmington, North Carolina||9,553||100||1865|
Before the war the states that formed the Confederacy had an agrarian economy with exports, to a world market, of cotton, and, to a lesser extent, tobacco and sugar. Local food production included grains, hogs, cattle, and gardens. The 11 states produced $155 million in manufactured goods in 1860, chiefly from local grist mills, and lumber, processed tobacco, cotton goods and naval stores such as turpentine. The CSA adopted a low tariff of 15%, but imposed it on all imports from the rest of the United States—thus making it one of the greatest tax increases in American history. The tariff mattered little; the Confederacy's ports were blocked to commercial traffic by the Union's blockade, and very few people paid taxes on goods smuggled from the Union states. The government collected about $3.5 million in tariff revenue from the start of its war against the Union to late 1864. The lack of adequate financial resources led the Confederacy to finance the war through printing money, which led to high inflation.
Historian Emory Thomas compared the correspondence sent by the Confederate government in the first year of its existence to different governments. He writes, "The Southern nation was by turns a guileless people attacked by a voracious neighbor, an 'established' nation in some temporary difficulty, a collection of bucolic aristocrats making a romantic stand against the banalities of industrial democracy, a cabal of commercial farmers seeking to make a pawn of King Cotton, an apotheosis of nineteenth-century nationalism and revolutionary liberalism, or the ultimate statement of social and economic reaction."
The example of the U. S. Constitution clearly guided the drafters of the Confederate Constitution, enabling the latter group to complete their work in less than half as much time. However, this Confederate Constitution contained a provision banning efforts to end de jure slavery, found at Article I, Section 9, clause 4, lumped in with the provisions banning ex post facto laws and bills of attainder. Another clause banned the international slave trade, but permitted the importation of slaves from the United States; this clause was consistent with the United States' banning of Atlantic slave trading in 1808, which had the effect of improving the domestic slave market, benefiting states such as Virginia. The wording of this clause demonstrates that the drafters clearly anticipated that not all slave states would secede, although they also included a provision for accepting new states into the Confederacy. This proved essential when Virginia, Arkansas, Tennessee and North Carolina seceded from the United States after the Confederate Constitution was in effect. Although the Confederate document includes no bill of rights, the Ninth Amendment and Tenth Amendment of the U.S. Bill of Rights are reproduced in Article VI as Sections 5 and 6. The Confederate Constitution implemented a ban on a religious test for office in Section 4, notwithstanding the preamble's invocation of God's blessing on the Confederate experiment. Other differences had to do with the appropriations process in Congress. Not only was a line-item veto expressly included, but Congress required a two-thirds supermajority to appropriate any funds not specifically requested by the President, giving Jefferson Davis in a real sense more Constitutional power than Abraham Lincoln possessed - an irony, given the Confederate states' putative objection to centralized power.
Despite the later romanticization of the Confederate cause, the perpetuation of Southern conceptions of race and slavery was of prime importance to the new nation. In his "Cornerstone Speech," Vice President Alexander Stephens argued that a major difference between the Confederate Constitution and the United States Constitution was the belief that blacks were not inherently equal. In describing this fundamental difference, Stephens said, "The new constitution has put at rest, forever, all the agitating questions relating to our peculiar institution—African slavery as it exists amongst us—the proper status of the negro in our form of civilization. This was the immediate cause of the late rupture and present revolution... Our new government is founded upon exactly the opposite idea; its foundations are laid, its corner-stone rests, upon the great truth that the negro is not equal to the white man." The rejection of slavery as a dominant ideology of the Confederate States began after the Civil War, as Confederate leaders sought to legitimize their failed rebellion.
President of the Congress
- Robert Woodward Barnwell, 4 Feb 1861 (served for several hours)
- Howell Cobb, 4 Feb 1861 - 18 Feb 1861
President Pro Tempore of the Senate
- Robert Mercer Taliaferro Hunter, 18 Feb 1862 - 18 Mar 1865
President Pro Tempore of the Provisional Congress
- Robert Woodward Barnwell, 4 Feb 1861 - 16 Mar 1861
- Thomas Stanhope Bocock, Josiah A.P. Campbell, 18 Nov 1861 - 17 Feb 1862
Speaker of the House of Representatives
- Thomas Stanhope Bocock, 18 Feb 1862 - 18 Mar 1865
President of the Confederate States
- Jefferson Finis Davis, 18 Feb 1861 - 10 May 1865 (provisional president to 6 Nov 1861)
Vice President of the Confederate States
- Alexander Hamilton Stephens, 18 Feb 1861 - 11 May 1865 (provisional vice president to 6 Nov 1861)
Secretary of State
- Robert Augustus Toombs, 21 Feb 1861 - 24 Jul 1861
- Robert Mercer Taliaferro Hunter, 25 Jul 1861 - 1 Feb 1862
- William Montague Brown, 1 Feb 1862 - 17 Mar 1862
- Judah Philip Benjamin, 18 Mar 1862 - 3 May 1865
Attorney General
- Judah Philip Benjamin, 25 Feb 1861 - 17 Sep 1861
- Wade Rutledge Keyes, 17 Sep 1861 - 21 Nov 1861
- Thomas Bragg, Jr., 21 Nov 1861 - 17 Mar 1862
- Thomas Hill Watts, 18 Mar 1862 - 1 Oct 1863
- Wade Rutledge Keyes, 1 Oct 1863 - 2 Jan 1864
- George Davis, 2 Jan 1864 - 24 Apr 1865
Commissioner of Patents
- Rufus Randolph Rhodes, 31 May 1861 - Apr 1865
Postmaster General
- Henry T. Ellet, 25 Feb 1861 - 6 Mar 1861 (nominated and confirmed; declined appointment)
- John Henninger Reagan, 6 Mar 1861 - 5 May 1865
Secretary of the Treasury
- Christopher Gustavus Memminger, 21 Feb 1861 - 18 Jul 1864
- George Alfred Trenholm, 18 Jul 1864 - 27 Apr 1865
- John Henninger Reagan, 28 Apr 1865 - 4 May 1865
Treasurer of the Confederate States
- Edward Carrington Elmore, 6 Mar 1861 - 1865
Comptroller and Solicitor
- Lewis Cruger, 1861 - 1865
Secretary of War
- Leroy Pope Walker, 21 Feb 1861 - 16 Sep 1861
- Judah Philip Benjamin, 17 Sep 1861 - 23 Mar 1862
- George Wythe Randolph, 24 Mar 1862 - 17 Nov 1862
- Gustavus Woodson Smith, 17 Nov 1862 - 21 Nov 1862
- James Alexander Seddon, 21 Nov 1862 - 6 Feb 1865
- John Cabell Breckinridge, 6 Feb 1865 - 5 May 1865
Chiefs of the Army Engineers Bureau (subordinated to Secretary of War)
- Josiah Gorgas, 8 Apr 1861 - 3 Aug 1861
- Danville Leadbetter, 3 Aug 1861 - 10 Nov 1861
- Alfred Landon Rives, 13 Nov 1861 - 24 Sep 1862
- Jeremy Francis Gilmer, 25 Sep 1862 - 17 Aug 1863
- Alfred Landon Rives, 18 Aug 1863 - 9 Mar 1864
- Martin Luther Smith, 9 Mar 1864 - Apr 1864
- Alfred Landon Rives, Apr 1864 - Jun 1864
- Jeremy Francis Gilmer, Jun 1864 - Apr 1865
Commissioner of Indian Territory (subordinated to Secretary of War)
- Albert Pike, 16 Mar 1861 - 1862
- Benjamin J. McCullough, 1862 - 7 Mar 1862
- Albert Pike, 1862 - 5 Nov 1862
- Douglas Hancock Cooper, Nov 1862 - Jan 1863
- William Steele, Jan 1863 - Dec 1863
Commander of the Department of Indian Territory and Superintendent of Indian Affairs (subordinated to Secretary of War)
- Samuel Ball Maxey, Dec 1863 - 1865
- Douglas Hancock Cooper, 1865
Surgeon-general (subordinated to Secretary of War)
- Samuel Preston Moore, 16 Mar 1861 - 1865
Secretary of the Navy
- Stephen Russell Mallory, 4 Mar 1861 - 5 May 1865
Colonel-Commandant of the Confederate States Marine Corps (subordinated to Secretary of the Navy)
- Lloyd J. Beall, 23 May 1861 - 10 May 1865
Superintendent of the Confederate States Naval Academy
- William Parker, 23 Jul 1863 - 2 May 1865
- A federal court system with a chief justice was not created during the 1861-1865 Confederacy.
Flags of the Confederacy
- For more detail, see Confederate flag
|Seal and Flags||Detail||Dates of Use|
|Great Seal of the Confederate States of America. The Latin motto Deo Vindice reads either "Under God, Our Vindicator" or "With God as [our] Champion".||1862-1865|
|The Bonnie Blue flag, unofficial first flag of the Confederacy. First flown January 9, 1861 over the state capitol building of Jackson, Mississippi. Originally, it was used by settlers of west Florida in a short, 74-day republic after they had revolted against the Spanish government, raising it at the Spanish fort in Baton Rouge on September 23, 1810.||1861|
|Called the Stars and Bars, it was first flown over Fort Sumter on April 13, 1861.||1861|
|The First National Flag of the Confederacy. Like the flag before, it was also called the Stars and Bars; this flag incorporated a different number of stars depending on the time. At the most, and for the longest period of time, the flag had 13 stars, representing the 11 states of the C.S.A. as well as Kentucky and Missouri.||March 4, 1861 to May 1, 1863|
|The Second National Flag; also known as the Stainless Banner due to the large white field. It was also referred to as the Stonewall Flag, as its first official use was to cover the casket of Lieutenant General Thomas J. Jackson in 1863.||May 1, 1863 to March 3, 1865|
|The Third National Flag. The red stripe was added to the fly to correct a major drawback of the previous flag: the appearance of a flag of surrender when it hung limp.||March 4, 1865 to April 26, 1865|
|The flag of General Robert E. Lee's Army of Northern Virginia, called the Southern Cross; the design became so popular that it was used as the canton of the Confederate national flag.||November 28, 1861 to the fall|
|A variation of the Second National Flag, with a shorter 1.5:1 ratio instead of 2:1. This was the last flag hauled down in surrender, when CSS Shenandoah lowered it on November 7, 1865 in Liverpool, England.||May 1, 1863 to the fall|
|This 7-star jack was used on Confederate naval warships until 1863.||1861-1863|
|Official Confederate naval jack for use on all warships from 1863, patterned after the design of the battle flag. This flag has been adopted in the years since as the de facto flag of the South itself.||1863-1865|
Relations with the United States
For the four years of its existence, the Confederate States of America asserted its independence and appointed dozens of diplomatic agents abroad. The United States government, by contrast, asserted that the Southern states were provinces in rebellion and refused any formal recognition of their status. Thus, U.S. Secretary of State William H. Seward issued formal instructions to Charles Francis Adams Sr., the new minister to Great Britain:
You will indulge in no expressions of harshness or disrespect, or even impatience concerning the seceding States, their agents, or their people. But you will, on the contrary, all the while remember that those States are now, as they always heretofore have been, and, notwithstanding their temporary self-delusion, they must always continue to be, equal and honored members of this Federal Union, and that their citizens throughout all political misunderstandings and alienations, still are and always must be our kindred and countrymen.
However, if the British seemed inclined to recognize the Confederacy, or even waver in that regard, they were to be sharply warned, with a strong hint of war:
[if Britain is] tolerating the application of the so-called seceding States, or wavering about it, you will not leave them to suppose for a moment that they can grant that application and remain friends with the United States. You may even assure them promptly, in that case, that if they determine to recognize, they may at the same time prepare to enter into alliance with the enemies of this republic.
The Confederate Congress responded to the hostilities by formally declaring war on the United States in May 1861 — calling it "The War between the Confederate States of America and the United States of America." The Union government never declared war but conducted its war efforts under a proclamation of blockade and rebellion. Mid-war negotiations between the two sides occurred without formal political recognition, though the laws of war governed military relationships.
Four years after the war, in 1869, the United States Supreme Court ruled in Texas v. White that secession was unconstitutional and legally null. The court's opinion was authored by Chief Justice Salmon P. Chase. Jefferson Davis and his vice president Alexander Stephens both wrote long books expounding their theories of secession's legality.
Once the war with the United States began, the best hope for the survival of the Confederacy was military intervention by Britain and France. The U.S. realized that too and made it clear that recognition of the Confederacy meant war with the United States and the cutoff of food shipments into Britain. The Confederates who had believed in "King Cotton" (the idea that Britain had to support the Confederacy to obtain cotton for its industries) were proven wrong. Britain, in fact, had ample stores of cotton in 1861 and depended much more on grain from the U.S.
During its existence, the Confederate government sent repeated delegations to Europe; historians do not give them high marks for diplomatic skills. James M. Mason was sent to London as Confederate minister to Queen Victoria, and John Slidell was sent to Paris as minister to Napoleon III. Both were able to obtain private meetings with high British and French officials, but they failed to secure official recognition for the Confederacy. Britain and the United States were at sword's point during the Trent Affair in late 1861. Mason and Slidell had been illegally seized from a British ship by an American warship. Queen Victoria's husband, Prince Albert, helped calm the situation, and Lincoln released Mason and Slidell, so the episode was no help to the Confederacy.
Throughout the early years of the war, British foreign secretary Lord Russell and Napoleon III, and, to a lesser extent, British Prime Minister Lord Palmerston, explored the risks and advantages of recognition of the Confederacy, or at least of offering a mediation. Recognition meant certain war with the United States, loss of American grain, loss of exports to the United States, loss of huge investments in American securities, loss of Canada and other North American colonies, much higher taxes, many lives lost and a severe threat to the entire British merchant marine, in exchange for the possibility of some cotton. Many party leaders and the general public wanted no war with such high costs and meager benefits. Recognition was considered following the Second Battle of Manassas when the British government was preparing to mediate in the conflict, but the Union victory at the Battle of Antietam and Lincoln's Emancipation Proclamation, combined with internal opposition, caused the government to back away.
No country appointed any diplomat officially to the Confederacy, but several maintained their consuls in the South who had been appointed before the war. In 1863, the Confederacy expelled all foreign consuls (all of them British or French diplomats) for advising their subjects to refuse to serve in combat against the U.S.
Throughout the war most European powers adopted a policy of neutrality, meeting informally with Confederate diplomats but withholding diplomatic recognition. None ever sent an ambassador or official delegation to Richmond. However, they applied international law principles that recognized the Union and Confederate sides as belligerents. Canada allowed both Confederate and Union agents to work openly within its borders, and some state governments in northern Mexico negotiated local agreements to cover trade on the Texas border.
Died of States Rights?
Historian Frank Lawrence Owsley argued that the Confederacy "died of states rights." That is, strong-willed governors and state legislatures refused to give the national government the soldiers and money it needed because they feared that Richmond was encroaching on the rights of the states. Historians agree that northern governors were much more supportive of Lincoln's policies. Georgia's governor Joseph Brown warned that he saw the signs of a deep-laid conspiracy on the part of Jefferson Davis to destroy states' rights and individual liberty. Brown declaimed: "Almost every act of usurpation of power, or of bad faith, has been conceived, brought forth and nurtured in secret session." To grant the Confederate government the power to draft soldiers was, he said, the "essence of military despotism." In 1863 Governor Pendleton Murrah of Texas insisted that Texas troops were needed for self-defense (against Indians or a threatened Union invasion) and refused to send them east. Zebulon Vance, the governor of North Carolina, was notoriously hostile to Davis and his demands. Opposition to conscription in North Carolina was intense, and its results were disastrous for recruiting. Governor Vance's faith in states' rights drove him into stubborn opposition.
Vice President Stephens, convinced that any accommodation would only weaken the republic, broke publicly with President Davis and the Confederate administration. Stephens charged that allowing Davis to make "arbitrary arrests" and to draft state officials conferred on him more power than the English Parliament had ever bestowed on the king, and that history proved the dangers of such unchecked authority. He added that Davis intended to suppress the peace meetings in North Carolina and "put a muzzle upon certain presses" (especially the antiwar newspaper Raleigh Standard) in order to control elections in that state. Echoing Patrick Henry's "give me liberty or give me death," Stephens warned Southerners that they should never view liberty as "subordinate to independence" because the cry of "independence first and liberty second" was a "fatal delusion." As historian George Rable concludes, "For Stephens, the essence of patriotism, the heart of the Confederate cause, rested on an unyielding commitment to traditional rights. In his idealist vision of politics, military necessity, pragmatism, and compromise meant nothing."
The survival of the Confederacy depended on a strong base of civilians and soldiers devoted to victory. The soldiers performed well, though increasing numbers deserted in the last year. The civilians, although enthusiastic in 1861-62, seem to have lost faith in the nation's future by 1864 and instead looked to protect their homes and communities. As Rable explains, "As the Confederacy shrank, citizens' sense of the cause more than ever narrowed to their own states and communities. This contraction of civic vision was more than a crabbed libertarianism; it represented an increasingly widespread disillusionment with the Confederate experiment."
While the Northern States had modernized and embraced the industrial revolution, incorporating new technology and rebuilding infrastructure to support it, the South had remained agrarian, and reliant on slave labor to boost productivity.
The southern economic model was hampered by a lack of access to capital. When a businessman in the North wanted to build a factory, he could obtain a loan from a bank or from a group of investors, pay the labor and material costs to erect it, use the factory as collateral, and pay back the loan with cash flow from the business. Capital thus stayed liquid, constantly changing hands from one individual to another.
Banks were fewer in the South, and most plantation profits went to purchases of more slaves and more land.
Legacy of destruction
The principal physical legacy of the Confederacy was mass destruction. Four years of Civil War killed at least 620,000 soldiers (counting deaths from disease as well as in battle), of whom approximately 260,000 were from the Confederacy. This represented a much larger fraction (slightly over one quarter) of the Confederacy's military age free men than was lost by the Union.
An unknown number of civilians also died, in part as a result of the campaign of systematic destruction of infrastructure in late 1864 and early 1865 by Union General William Tecumseh Sherman, which served not only to break the Confederacy's will to fight but also to destroy the parts of the civilian economy useful to the war effort. Severe damage was inflicted on both urban and rural communities in the South, and hundreds of thousands of people became refugees. The exigencies of total war had led to the destruction of much Southern infrastructure, in particular railroads, long before 1864.
Upon the Confederate defeat, General Lee on April 9, 1865, ruled out continuing to fight on as insurgents. To have done otherwise would have been folly. The Union was prepared to use techniques it had learned in Missouri: move all the hostile civilians into concentration camps, thus cutting off supplies to the insurgents, and then hunt down the rebel bands one by one.
Extent of wartime destruction
Most of the war was fought in Virginia and Tennessee, but every state was affected. There was little military action in Texas and Florida. Of the 645 counties in the 9 remaining Confederate states (excluding Texas and Florida), Union military action took place in 56%, containing 63% of the whites and 64% of the slaves of 1860; however, by the time the action occurred some people had fled to safer areas, so the exact population exposed to war is unknown.
Towns and cities
The Confederacy in 1861 had 297 towns and cities with 835,000 people; of these 162 with 681,000 people were at one point occupied by Union forces. Eleven were destroyed or severely damaged by war action, including Atlanta (with an 1860 population of 9,554), Charleston, Columbia, and Richmond (with prewar populations of 40,522, at least 8,052, and 37,910, respectively); the eleven contained 115,916 people in the 1860 census, or 14% of the urban South. Historians have not estimated their population when they were invaded. The number of people who lived in the destroyed places represented just over 1% of the Confederacy's population. In addition, 45 court houses were burned (out of 830).
The South's agriculture was not highly mechanized. The value of farm implements and machinery in the 1860 Census was $81 million; by 1870 it had fallen roughly 40%, to $48 million. Many old tools had broken through heavy use and could not be replaced; even repairs were difficult.
Railroad mileage was of course mostly in rural areas. The war followed the rails, and over two-thirds of the rails and rolling stock were in areas reached by Union armies, which systematically destroyed what they could. The South had 9,400 miles of track, of which 6,500 miles lay in areas reached by the Union armies. About 4,400 miles were in areas where Sherman and other Union generals adopted a policy of systematic destruction of the rail system. Even in untouched areas, the lack of maintenance and repair, the absence of new equipment, the heavy over-use, and the deliberate movement of equipment by the Confederates from remote areas to the war zone guaranteed the system would be virtually ruined at war's end.
Slavery was abolished in the Confederacy—but not in the four slave states that had not seceded—by Lincoln's Emancipation Proclamation, by the U.S. Army, and by the Thirteenth Amendment which became law in late 1865.
The seceding states all rescinded their ordinances of secession and were admitted, one-by-one, back into the Union by a government that had previously maintained that it was not possible to secede from the Union. This was done by the process of Reconstruction. Reconstruction began during the war and ended in 1877.
There was never an effort to revive the Confederacy, but North-South relations remained soured until at least the 1890s by nostalgia for the Lost Cause, by bitterness over the war and the destruction of Southern property, and by the vengeful administration of Reconstruction. Some Southerners insisted on white supremacy.
After Reconstruction, the Redeemers (white Democrats) took full control and slowly removed the voting rights and some of the legal rights of the Freedmen, installing a system of segregation known as Jim Crow. The region became a Democratic Party stronghold for a century.
Economically the South was badly damaged and fell far behind the North in terms of prosperity; it took 100 years for the South to catch up.
Surveys and textbooks
- Coulter, E. Merton. The Confederate States of America, 1861-1865 (1950), highly detailed overview; strong Southern accent
- Current, Richard N., et al. eds. Encyclopedia of the Confederacy (1993) (4 Volume set; also 1 vol abridged version), comprehensive excellent reference work
- Davis, William C. Look Away! A History of the Confederate States of America (2003)
- Donald, David et al. The Civil War and Reconstruction (latest edition 2001); 700 page survey
- Eaton, Clement. A History of the Southern Confederacy (1954).
- Fellman, Michael et al. This Terrible War: The Civil War and its Aftermath (2nd ed. 2007), 544 page survey
- Heidler, David Stephen, ed. Encyclopedia of the American Civil War: A Political, Social, and Military History (2002), 1600 entries in 2700 pages in 5 vol or 1-vol editions; very good basic reference
- McPherson, James M. Battle Cry of Freedom: The Civil War Era (1988), 900 page survey; Pulitzer prize
- Nevins, Allan. Ordeal of the Union, an 8-volume set (1947-1971), the most detailed political, economic and military narrative; by a Pulitzer Prize winner
- vol 4. Prologue to Civil War, 1859-1861; 5. The Improvised War, 1861-1862; 6. War Becomes Revolution, 1862-1863; 7. The Organized War, 1863-1864; 8. The Organized War to Victory, 1864-1865
- Rhodes, James Ford. History of the Civil War, 1861-1865 (1918), Pulitzer Prize; a short version of his 5-volume history
- Roland, Charles P. The Confederacy (1960), brief older survey
- Rubin, Anne Sarah. A Shattered Nation: The Rise and Fall of the Confederacy, 1861-1868. (2005). 319 pp.
- Thomas, Emory M. Confederate Nation: 1861-1865 (1979). Standard political-economic-social history
- Beringer, Richard E., Archer Jones, and Herman Hattaway, Why the South Lost the Civil War (1986) influential analysis of factors; The Elements of Confederate Defeat: Nationalism, War Aims, and Religion (1988), abridged version
- Boritt, Gabor S., et al., Why the Confederacy Lost, (1992).
- Davis, William C. and Robertson, James I., Jr., eds. Virginia at War, 1861. (2007). 241 pp.
- Goldin, Claudia D., and Frank D. Lewis, "The Economic Cost of the American Civil War: Estimates and Implications," Journal of Economic History 35#2 (June 1975), pp. 299–326 in JSTOR
- Owsley, Frank Lawrence. King Cotton Diplomacy: Foreign relations of the Confederate States of America (1931)
- Ransom, Roger L. "The Economics of the Civil War," EH.Net Encyclopedia, ed. Robert Whaples (Aug. 25, 2001), online edition
- Rable, George C., The Confederate Republic: A Revolution against Politics, (1994). online edition
- Thomas, Emory M. The Confederacy as a Revolutionary Experience, (1992) short interpretive essay
- Wallenstein, Peter and Wyatt-Brown, Bertram, eds. Virginia's Civil War. (2005). 303 pp. excerpt and text search
- Wiley, Bell Irvin. Southern Negroes: 1861-1865 (1938)
- Faust, Drew. Mothers of Invention: Women of the Slaveholding South in the American Civil War (2004) excerpt and text search
- Harper, Judith E. Women during the Civil War: An Encyclopedia. (2004). 472 pp.
- Massey, Mary. Bonnet Brigades: American Women and the Civil War (1966), excellent overview
- Rable, George C. Civil Wars: Women and the Crisis of Southern Nationalism (1989), excellent
- Roberts, Giselle. The Confederate Belle. (2003). 245 pp.
- Wiley, Bell Irvin. Confederate Women (1975), good survey
- Woodward, C. Vann, Ed., Mary Chesnut's Civil War, (1981) Pulitzer Prize; primary source
- American Civil War homefront
- American Civil War: 1861
- American Civil War: 1862
- American Civil War: 1863
- American Civil War: 1864
- American Civil War: 1865
- American Civil War: Aftermath
- C.S.A.: The Confederate States of America - A 2004 film set in an alternate world where the Confederacy won the American Civil War
- Carter, Susan B., ed. The Historical Statistics of the United States: Millennial Edition (5 vols), 2006; online at many universities
- Davis, Jefferson, The Rise and Fall of the Confederate Government (2 vols), 1881.
- Harwell, Richard B. ed. The Confederate Reader (1957) 389 pp. online edition
- Jones, John B. A Rebel War Clerk's Diary at the Confederate States Capital, edited by Howard Swiggert, 1993. 2 vols.
- Richardson, James D., ed. A Compilation of the Messages and Papers of the Confederacy, Including the Diplomatic Correspondence 1861-1865, 2 volumes, 1906.
- Yearns, W. Buck and Barret, John G., eds. North Carolina Civil War Documentary, 1980.
- Confederate official government documents major online collection of complete texts in HTML format, from U. of North Carolina
- Journal of the Congress of the Confederate States of America, 1861-1865 (7 vols), 1904. online
- The Countryman, 1862-1866, published weekly by Turnwold, Ga., edited by J.A. Turner; primary source
- Confederate offices Index of Politicians by Office Held or Sought
- The Federal and the Confederate Constitution Compared
- The Making of the Confederate Constitution, by A. L. Hull, 1905.
- Photographic History of the Civil War, 10 vols., 1912.
- DocSouth: Documenting the American South - numerous online text, image, and audio collections.
- The Geographical Reader for the Dixie Children - a Confederacy textbook written in 1863.
- Confederate States of America: A Register of Its Records in the Library of Congress
- Emory M. Thomas, The Confederate Nation: 1861-1865 (1979), pp. 83-84.
- Thomas, p. 63.
- Thomas, Appendix, pp. 306-322.
- William Seward to Charles Francis Adams Sr., April 10, 1861 in Marion Mills Miller, Ed. Life And Works Of Abraham Lincoln (1907) Vol 6.
- Seward to Adams April 10, 1861 ibid
- In 1861, Ernst Raven applied for approval as the Saxe-Coburg-Gotha consul, but he was a citizen of Texas and there is no evidence that Saxe officials knew what he was doing; Saxe was a firm supporter of the U.S. It is false to state that Saxe (or the Pope) recognized the Confederacy. No country did so. On the Pope see
- Frank L. Owsley, State Rights in the Confederacy (Chicago, 1925),
- Rable (1994) 257; however Wallace Hettle in The Peculiar Democracy: Southern Democrats in Peace and Civil War (2001) p. 158 says Owsley's "famous thesis...is overstated."
- John Moretta; "Pendleton Murrah and States Rights in Civil War Texas," Civil War History, Vol. 45, 1999
- Albert Burton Moore,Conscription and Conflict in the Confederacy. (1924) p. 295.
- Rable (1994) 258-9
- Rable (1994) p 265
- Paul F. Paskoff, "Measures of War: A Quantitative Examination of the Civil War's Destructiveness in the Confederacy," Civil War History 54.1 (2008) 35-62
Asteroids are minor planets, especially those of the inner Solar System. The larger ones have also been called planetoids. These terms have historically been applied to any astronomical object orbiting the Sun that did not show the disk of a planet and was not observed to have the characteristics of an active comet, but as minor planets in the outer Solar System were discovered, their volatile-based surfaces were found to resemble comets more closely, and so they were often distinguished from traditional asteroids. Thus the term asteroid has come increasingly to refer specifically to the small bodies of the inner Solar System out to the orbit of Jupiter. They are grouped with the outer bodies (centaurs, Neptune trojans, and trans-Neptunian objects) as minor planets, which is the term preferred in astronomical circles. In this article the term "asteroid" refers to the minor planets of the inner Solar System.
There are millions of asteroids, many thought to be the shattered remnants of planetesimals, bodies within the young Sun's solar nebula that never grew large enough to become planets. The large majority of known asteroids orbit in the asteroid belt between the orbits of Mars and Jupiter, or are co-orbital with Jupiter (the Jupiter Trojans). However, other orbital families exist with significant populations, including the near-Earth asteroids. Individual asteroids are classified by their characteristic spectra, with the majority falling into three main groups: C-type, S-type, and M-type. These were named after and are generally identified with carbon-rich, stony, and metallic compositions, respectively.
Only one asteroid, 4 Vesta, which has a relatively reflective surface, is normally visible to the naked eye, and this only in very dark skies when it is favorably positioned. Rarely, small asteroids passing close to Earth may be visible to the naked eye for a short time. As of September 2013, the Minor Planet Center had data on more than one million objects in the inner and outer Solar System, of which 625,000 had enough information to be given numbered designations.
On 22 January 2014, ESA scientists reported the detection, for the first definitive time, of water vapor on Ceres, the largest object in the asteroid belt. The detection was made by using the far-infrared abilities of the Herschel Space Observatory. The finding is unexpected because comets, not asteroids, are typically considered to "sprout jets and plumes". According to one of the scientists, "The lines are becoming more and more blurred between comets and asteroids."
A newly discovered asteroid is given a provisional designation (such as 2002 AT4) consisting of the year of discovery and an alphanumeric code indicating the half-month of discovery and the sequence within that half-month. Once an asteroid's orbit has been confirmed, it is given a number, and later may also be given a name (e.g. 433 Eros). The formal naming convention uses parentheses around the number (e.g. (433) Eros), but dropping the parentheses is quite common. Informally, it is common to drop the number altogether, or to drop it after the first mention when a name is repeated in running text.
The first asteroids to be discovered were assigned iconic symbols like the ones traditionally used to designate the planets. By 1855 there were two dozen asteroid symbols, which often occurred in multiple variants.
|1 Ceres||⚳||Ceres' scythe, reversed to double as the letter C||1801|
|2 Pallas||⚴||Athena's (Pallas') spear||1801|
|3 Juno||⚵||A star mounted on a scepter, for Juno, the Queen of Heaven||1804|
|4 Vesta||⚶||The altar and sacred fire of Vesta||1807|
|5 Astraea||A scale, or an inverted anchor, symbols of justice||1845|
|6 Hebe||Hebe's cup||1847|
|7 Iris||A rainbow (iris) and a star||1847|
|8 Flora||A flower (flora) (specifically the Rose of England)||1847|
|9 Metis||The eye of wisdom and a star||1848|
|10 Hygiea||Hygiea's serpent and a star, or the Rod of Asclepius||1849|
|11 Parthenope||A harp, or a fish and a star; symbols of the sirens||1850|
|12 Victoria||The laurels of victory and a star||1850|
|13 Egeria||A shield, symbol of Egeria's protection, and a star||1850|
|14 Irene||A dove carrying an olive branch (symbol of irene 'peace') with a star on its head, or an olive branch, a flag of truce, and a star||1851|
|15 Eunomia||A heart, symbol of good order (eunomia), and a star||1851|
|16 Psyche||A butterfly's wing, symbol of the soul (psyche), and a star||1852|
|17 Thetis||A dolphin, symbol of Thetis, and a star||1852|
|18 Melpomene||The dagger of Melpomene, and a star||1852|
|19 Fortuna||The wheel of fortune and a star||1852|
|26 Proserpina||Proserpina's pomegranate||1853|
|28 Bellona||Bellona's whip and lance||1854|
|29 Amphitrite||The shell of Amphitrite and a star||1854|
|35 Leukothea||A lighthouse beacon, symbol of Leucothea||1855|
|37 Fides||The cross of faith (fides)||1855|
In 1851, after the fifteenth asteroid (Eunomia) had been discovered, Johann Franz Encke made a major change in the upcoming 1854 edition of the Berliner Astronomisches Jahrbuch (BAJ, Berlin Astronomical Yearbook). He introduced a disk (circle), a traditional symbol for a star, as the generic symbol for an asteroid. The circle was then numbered in order of discovery to indicate a specific asteroid (although he assigned ① to the fifth, Astraea, while continuing to designate the first four only with their existing iconic symbols). The numbered-circle convention was quickly adopted by astronomers, and the next asteroid to be discovered (16 Psyche, in 1852) was the first to be designated in that way at the time of its discovery. However, Psyche was given an iconic symbol as well, as were a few other asteroids discovered over the next few years. (See chart above.) 20 Massalia was the first asteroid that was not assigned an iconic symbol, and no iconic symbols were created after the 1855 discovery of 37 Fides. That year Astraea's number was increased to ⑤, but the first four asteroids, Ceres to Vesta, were not listed by their numbers until the 1867 edition. The circle was soon abbreviated to a pair of parentheses, which were easier to typeset and sometimes omitted altogether over the next few decades, leading to the modern convention.
The first asteroid to be discovered, Ceres, was found in 1801 by Giuseppe Piazzi, and was originally considered to be a new planet.[note 1] This was followed by the discovery of other similar bodies, which, with the equipment of the time, appeared to be points of light, like stars, showing little or no planetary disc, though readily distinguishable from stars due to their apparent motions. This prompted the astronomer Sir William Herschel to propose the term "asteroid", coined in Greek as ἀστεροειδής asteroeidēs 'star-like, star-shaped', from Ancient Greek ἀστήρ astēr 'star, planet'. In the early second half of the nineteenth century, the terms "asteroid" and "planet" (not always qualified as "minor") were still used interchangeably; for example, the Annual of Scientific Discovery for 1871, page 316, reads "Professor J. Watson has been awarded by the Paris Academy of Sciences, the astronomical prize, Lalande foundation, for the discovery of eight new asteroids in one year. The planet Lydia (No. 110), discovered by M. Borelly at the Marseilles Observatory [...] M. Borelly had previously discovered two planets bearing the numbers 91 and 99 in the system of asteroids revolving between Mars and Jupiter".
Asteroid discovery methods have dramatically improved over the past two centuries.
In the last years of the 18th century, Baron Franz Xaver von Zach organized a group of 24 astronomers to search the sky for the missing planet predicted at about 2.8 AU from the Sun by the Titius-Bode law, partly because of the discovery, by Sir William Herschel in 1781, of the planet Uranus at the distance predicted by the law. This task required that hand-drawn sky charts be prepared for all stars in the zodiacal band down to an agreed-upon limit of faintness. On subsequent nights, the sky would be charted again and any moving object would, hopefully, be spotted. The expected motion of the missing planet was about 30 seconds of arc per hour, readily discernible by observers.
The first object, Ceres, was not discovered by a member of the group, but rather by accident in 1801 by Giuseppe Piazzi, director of the observatory of Palermo in Sicily. He discovered a new star-like object in Taurus and followed the displacement of this object during several nights. His colleague, Carl Friedrich Gauss, used these observations to find the exact distance from this unknown object to the Earth. Gauss' calculations placed the object between the planets Mars and Jupiter. Piazzi named it after Ceres, the Roman goddess of agriculture.
Three other asteroids (2 Pallas, 3 Juno, and 4 Vesta) were discovered over the next few years, with Vesta found in 1807. After eight more years of fruitless searches, most astronomers assumed that there were no more and abandoned any further searches.
However, Karl Ludwig Hencke persisted, and began searching for more asteroids in 1830. Fifteen years later, he found 5 Astraea, the first new asteroid in 38 years. He also found 6 Hebe less than two years later. After this, other astronomers joined in the search and at least one new asteroid was discovered every year after that (except the wartime year 1945). Notable asteroid hunters of this early era were J. R. Hind, Annibale de Gasparis, Robert Luther, H. M. S. Goldschmidt, Jean Chacornac, James Ferguson, Norman Robert Pogson, E. W. Tempel, J. C. Watson, C. H. F. Peters, A. Borrelly, J. Palisa, the Henry brothers and Auguste Charlois.
In 1891, Max Wolf pioneered the use of astrophotography to detect asteroids, which appeared as short streaks on long-exposure photographic plates. This dramatically increased the rate of detection compared with earlier visual methods: Wolf alone discovered 248 asteroids, beginning with 323 Brucia, whereas only slightly more than 300 had been discovered up to that point. It was known that there were many more, but most astronomers did not bother with them, calling them "vermin of the skies", a phrase variously attributed to Eduard Suess and Edmund Weiss. Even a century later, only a few thousand asteroids were identified, numbered and named.
Manual methods of the 1900s and modern reporting
Until 1998, asteroids were discovered by a four-step process. First, a region of the sky was photographed by a wide-field telescope, or astrograph. Pairs of photographs were taken, typically one hour apart. Multiple pairs could be taken over a series of days. Second, the two films or plates of the same region were viewed under a stereoscope. Any body in orbit around the Sun would move slightly between the pair of films. Under the stereoscope, the image of the body would seem to float slightly above the background of stars. Third, once a moving body was identified, its location would be measured precisely using a digitizing microscope. The location would be measured relative to known star locations.
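In modern software terms, the core of that second step is just comparing the source lists from the two exposures and flagging anything that moved. The sketch below is a minimal illustration of that idea, not the historical stereoscope procedure or any survey's actual pipeline; the positions and pixel thresholds are invented for the example.

```python
# Minimal sketch: flag candidate moving objects between two exposures.
# Assumes each exposure has already been reduced to a list of (x, y) source
# positions in pixels; the thresholds are illustrative, not survey values.

def nearest(source, catalog):
    """Return (distance, source) for the closest entry in catalog."""
    return min(((((source[0] - s[0]) ** 2 + (source[1] - s[1]) ** 2) ** 0.5, s)
                for s in catalog), key=lambda pair: pair[0])

def moving_candidates(frame1, frame2, min_shift=2.0, max_shift=50.0):
    """Sources in frame1 whose best match in frame2 moved by a plausible amount.

    Stationary stars match within min_shift; anything displaced farther than
    max_shift is more likely a mismatch or artifact than a slow-moving asteroid.
    """
    candidates = []
    for src in frame1:
        dist, match = nearest(src, frame2)
        if min_shift < dist <= max_shift:
            candidates.append((src, match, dist))
    return candidates

# Toy example: a three-source "star field" in which only the third source drifts.
frame_a = [(100.0, 100.0), (250.0, 40.0), (400.0, 310.0)]
frame_b = [(100.1, 99.9), (250.0, 40.1), (408.0, 315.0)]
print(moving_candidates(frame_a, frame_b))
```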
These first three steps do not constitute asteroid discovery: the observer has only found an apparition, which gets a provisional designation, made up of the year of discovery, a letter representing the half-month of discovery, and finally a letter and a number indicating the discovery's sequential number (example: 1998 FJ74).
The last step of discovery is to send the locations and time of observations to the Minor Planet Center, where computer programs determine whether an apparition ties together earlier apparitions into a single orbit. If so, the object receives a catalogue number and the observer of the first apparition with a calculated orbit is declared the discoverer, and granted the honor of naming the object subject to the approval of the International Astronomical Union.
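Provisional designations like 1998 FJ74 can be decoded mechanically. The sketch below assumes the standard Minor Planet Center convention (half-month letters A through Y with I skipped; a second letter, again skipping I, plus an optional count of completed 25-letter cycles); it is an illustrative decoder, not MPC software.

```python
# Illustrative decoder for provisional designations such as "1998 FJ74",
# following the standard Minor Planet Center convention (letters skip "I").

HALF_MONTH = "ABCDEFGHJKLMNOPQRSTUVWXY"   # 24 half-months of the year
ORDER      = "ABCDEFGHJKLMNOPQRSTUVWXYZ"  # 25 letters used for the sequence

def decode_provisional(designation):
    year, code = designation.split()
    half_month_index = HALF_MONTH.index(code[0])        # 0 = first half of January
    month = half_month_index // 2 + 1
    half = "first" if half_month_index % 2 == 0 else "second"
    cycles = int(code[2:]) if len(code) > 2 else 0       # trailing number, if any
    sequence = ORDER.index(code[1]) + 1 + 25 * cycles    # order within the half-month
    return {"year": int(year), "month": month, "half": half, "sequence": sequence}

# "1998 FJ74": second half of March 1998, and the (9 + 74*25) = 1859th
# designation assigned in that half-month.
print(decode_provisional("1998 FJ74"))
```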
There is increasing interest in identifying asteroids whose orbits cross Earth's, and that could, given enough time, collide with Earth (see Earth-crosser asteroids). The three most important groups of near-Earth asteroids are the Apollos, Amors, and Atens. Various asteroid deflection strategies have been proposed, as early as the 1960s.
The near-Earth asteroid 433 Eros had been discovered as long ago as 1898, and the 1930s brought a flurry of similar objects. In order of discovery, these were: 1221 Amor, 1862 Apollo, 2101 Adonis, and finally 69230 Hermes, which approached within 0.005 AU of the Earth in 1937. Astronomers began to realize the possibilities of Earth impact.
Two events in later decades increased the alarm: the increasing acceptance of Walter Alvarez' hypothesis that an impact event resulted in the Cretaceous–Paleogene extinction, and the 1994 observation of Comet Shoemaker-Levy 9 crashing into Jupiter. The U.S. military also declassified the information that its military satellites, built to detect nuclear explosions, had detected hundreds of upper-atmosphere impacts by objects ranging from one to 10 metres across.
All these considerations helped spur the launch of highly efficient automated systems that consist of Charge-Coupled Device (CCD) cameras and computers directly connected to telescopes. As of spring 2011, it was estimated that 89% to 96% of near-Earth asteroids one kilometer or larger in diameter had been discovered. A list of teams using such automated systems includes:
- The Lincoln Near-Earth Asteroid Research (LINEAR) team
- The Near-Earth Asteroid Tracking (NEAT) team
- The Lowell Observatory Near-Earth-Object Search (LONEOS) team
- The Catalina Sky Survey (CSS)
- The Campo Imperatore Near-Earth Object Survey (CINEOS) team
- The Japanese Spaceguard Association
- The Asiago-DLR Asteroid Survey (ADAS)
The LINEAR system alone has discovered 138,393 asteroids, as of 20 September 2013. Among all the automated systems, 4,711 near-Earth asteroids have been discovered, including over 600 more than 1 km (0.6 mi) in diameter.
Traditionally, small bodies orbiting the Sun were classified as asteroids, comets or meteoroids, with anything smaller than ten metres across being called a meteoroid. The term "asteroid" is ill-defined. It never had a formal definition; the broader term minor planet, in use since the 1850s, is the one preferred by the International Astronomical Union. In 2006, the term "small Solar System body" was introduced to cover both most minor planets and comets. Other languages prefer "planetoid" (Greek for "planet-like"), and this term is occasionally used in English for larger minor planets such as the dwarf planets. The word "planetesimal" has a similar meaning, but refers specifically to the small building blocks of the planets that existed when the Solar System was forming. The term "planetule" was coined by the geologist William Daniel Conybeare to describe minor planets, but is not in common use. The three largest objects in the asteroid belt, Ceres, 2 Pallas, and 4 Vesta, grew to the stage of protoplanets. Ceres is a dwarf planet, the only one in the inner Solar System.
When found, asteroids were seen as a class of objects distinct from comets, and there was no unified term for the two until "small Solar System body" was coined in 2006. The main difference between an asteroid and a comet is that a comet shows a coma due to sublimation of near surface ices by solar radiation. A few objects have ended up being dual-listed because they were first classified as minor planets but later showed evidence of cometary activity. Conversely, some (perhaps all) comets are eventually depleted of their surface volatile ices and become asteroids. A further distinction is that comets typically have more eccentric orbits than most asteroids; most "asteroids" with notably eccentric orbits are probably dormant or extinct comets.
For almost two centuries, from the discovery of Ceres in 1801 until the discovery of the first centaur, 2060 Chiron, in 1977, all known asteroids spent most of their time at or within the orbit of Jupiter, though a few such as 944 Hidalgo ventured far beyond Jupiter for part of their orbit. When astronomers started finding more small bodies that permanently resided further out than Jupiter, now called centaurs, they numbered them among the traditional asteroids, though there was debate over whether they should be considered as asteroids or as a new type of object. Then, when the first trans-Neptunian object, 1992 QB1, was discovered in 1992, and especially when large numbers of similar objects started turning up, new terms were invented to sidestep the issue: Kuiper-belt object, trans-Neptunian object, scattered-disc object, and so on. These inhabit the cold outer reaches of the Solar System where ices remain solid and comet-like bodies are not expected to exhibit much cometary activity; if centaurs or trans-Neptunian objects were to venture close to the Sun, their volatile ices would sublimate, and traditional approaches would classify them as comets and not asteroids.
The innermost of these are the Kuiper-belt objects, called "objects" partly to avoid the need to classify them as asteroids or comets. They are believed to be predominantly comet-like in composition, though some may be more akin to asteroids. Furthermore, most do not have the highly eccentric orbits associated with comets, and the ones so far discovered are larger than traditional comet nuclei. (The much more distant Oort cloud is hypothesized to be the main reservoir of dormant comets.) Other recent observations, such as the analysis of the cometary dust collected by the Stardust probe, are increasingly blurring the distinction between comets and asteroids, suggesting "a continuum between asteroids and comets" rather than a sharp dividing line.
The minor planets beyond Jupiter's orbit are sometimes also called "asteroids", especially in popular presentations. However, it is becoming increasingly common for the term "asteroid" to be restricted to minor planets of the inner Solar System. Therefore, this article will restrict itself for the most part to the classical asteroids: objects of the asteroid belt, Jupiter trojans, and near-Earth objects.
When the IAU introduced the class small Solar System bodies in 2006 to include most objects previously classified as minor planets and comets, they created the class of dwarf planets for the largest minor planets—those that have enough mass to have become ellipsoidal under their own gravity. According to the IAU, "the term 'minor planet' may still be used, but generally the term 'Small Solar System Body' will be preferred." Currently only the largest object in the asteroid belt, Ceres, at about 950 km (590 mi) across, has been placed in the dwarf planet category, although there are several large asteroids (Vesta, Pallas, and Hygiea) that may be classified as dwarf planets when their shapes are better known.
It is believed that planetesimals in the asteroid belt evolved much like the rest of the solar nebula until Jupiter neared its current mass, at which point excitation from orbital resonances with Jupiter ejected over 99% of planetesimals in the belt. Simulations and a discontinuity in spin rate and spectral properties suggest that asteroids larger than approximately 120 km (75 mi) in diameter accreted during that early era, whereas smaller bodies are fragments from collisions between asteroids during or after the Jovian disruption. Ceres and Vesta grew large enough to melt and differentiate, with heavy metallic elements sinking to the core, leaving rocky minerals in the crust.
In the Nice model, many Kuiper-belt objects are captured in the outer asteroid belt, at distances greater than 2.6 AU. Most were later ejected by Jupiter, but those that remained may be the D-type asteroids, and possibly include Ceres.
Distribution within the Solar System
Various dynamical groups of asteroids have been discovered orbiting in the inner Solar System. Their orbits are perturbed by the gravity of other bodies in the Solar System and by the Yarkovsky effect. Significant populations include:
The majority of known asteroids orbit within the asteroid belt between the orbits of Mars and Jupiter, generally in relatively low-eccentricity (i.e. not very elongated) orbits. This belt is now estimated to contain between 1.1 and 1.9 million asteroids larger than 1 km (0.6 mi) in diameter, and millions of smaller ones. These asteroids may be remnants of the protoplanetary disk, and in this region the accretion of planetesimals into planets during the formative period of the Solar System was prevented by large gravitational perturbations by Jupiter.
Trojan asteroids are a population that share an orbit with a larger planet or moon, but do not collide with it because they orbit in one of the two Lagrangian points of stability, L4 and L5, which lie 60° ahead of and behind the larger body.
The most significant population of Trojan asteroids are the Jupiter Trojans. Although far fewer Jupiter Trojans than main-belt asteroids had been discovered as of 2010, it is thought that they are as numerous as the asteroids in the asteroid belt.
Near-Earth asteroids, or NEAs, are asteroids that have orbits that pass close to that of Earth. Asteroids that actually cross the Earth's orbital path are known as Earth-crossers. As of May 2010, 7,075 near-Earth asteroids are known and the number over one kilometre in diameter is estimated to be 500–1,000.
Asteroids vary greatly in size, from almost 1,000 km for the largest down to rocks just tens of metres across.[note 3] The three largest are very much like miniature planets: they are roughly spherical, have at least partly differentiated interiors, and are thought to be surviving protoplanets. The vast majority, however, are much smaller and are irregularly shaped; they are thought to be either surviving planetesimals or fragments of larger bodies.
The dwarf planet Ceres is by far the largest asteroid, with a diameter of 975 km (610 mi). The next largest are 2 Pallas and 4 Vesta, both with diameters of just over 500 km (300 mi). Vesta is the only main-belt asteroid that can, on occasion, be visible to the naked eye. On some rare occasions, a near-Earth asteroid may briefly become visible without technical aid; see 99942 Apophis.
The mass of all the objects of the asteroid belt, lying between the orbits of Mars and Jupiter, is estimated to be about 2.8–3.2×1021 kg, or about 4% of the mass of the Moon. Of this, Ceres comprises 0.95×1021 kg, a third of the total. Adding in the next three most massive objects, Vesta (9%), Pallas (7%), and Hygiea (3%), brings this figure up to 51%; whereas the three after that, 511 Davida (1.2%), 704 Interamnia (1.0%), and 52 Europa (0.9%), only add another 3% to the total mass. The number of asteroids then increases rapidly as their individual masses decrease.
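The percentages above are straightforward to check. The following sketch simply re-adds the shares quoted in this paragraph, taking roughly 3.0×10^21 kg as the belt mass (the midpoint of the range given); it is bookkeeping on the figures in the text, not an independent estimate.

```python
# Re-derive the cumulative mass fractions quoted in the text, using a
# reference belt mass of ~3.0e21 kg (the text gives 2.8-3.2e21 kg).
belt_mass = 3.0e21                      # kg, assumed midpoint
fractions = {                           # approximate shares quoted above
    "Ceres": 0.95e21 / belt_mass,       # ~32%, "a third of the total"
    "Vesta": 0.09, "Pallas": 0.07, "Hygiea": 0.03,
    "Davida": 0.012, "Interamnia": 0.010, "Europa": 0.009,
}
running = 0.0
for name, frac in fractions.items():
    running += frac
    print(f"{name:<11s} {frac:5.1%}   cumulative {running:5.1%}")
```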
The number of asteroids decreases markedly with size. Although this generally follows a power law, there are 'bumps' at 5 km and 100 km, where more asteroids than expected from a logarithmic distribution are found.
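For reference, the "power law" here is the usual cumulative size distribution; one common illustrative form (the exponent is a rough typical value, not a fit quoted in this article) is:

```latex
% Cumulative size distribution: N(>D) = number of asteroids with diameter greater than D.
% The slope b is only indicative; published fits vary with the size range considered.
N(>D) \approx k\,D^{-b}, \qquad b \sim 2\text{--}3
```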
Although their location in the asteroid belt excludes them from planet status, the four largest objects, Ceres, Vesta, Pallas, and Hygiea, are remnant protoplanets that share many characteristics common to planets, and are atypical compared to the majority of "potato"-shaped asteroids.
Ceres is the only asteroid with a fully ellipsoidal shape and hence the only one of the four classified as a dwarf planet. Vesta, aside from the large crater at its southern pole, Rheasilvia, also has an ellipsoidal shape. Ceres is much brighter than the other asteroids, with an absolute magnitude of around 3.32, and may possess a surface layer of ice. Like the planets, Ceres is differentiated: it has a crust, a mantle and a core. Vesta, too, has a differentiated interior, though it formed inside the Solar System's frost line and so is devoid of water; its composition is mainly basaltic rock such as olivine. Pallas is unusual in that, like Uranus, it rotates on its side, with its axis of rotation tilted at a high angle to its orbital plane. Its composition is similar to that of Ceres: high in carbon and silicon, and perhaps partially differentiated. Hygiea is a carbonaceous asteroid and, unlike the other largest asteroids, lies relatively close to the plane of the ecliptic.
|Attributes of protoplanetary asteroids|
|Name||Diameter (% of Moon)||Mass (10^18 kg)||Mass (% of Ceres)||Density (g/cm³)||Rotation period (h)||Axial tilt||Mean surface temperature|
|4 Vesta||15%||260||28%||3.44 ± 0.12||5.34||29°||85–270 K|
|1 Ceres||28%||940||100%||2.12 ± 0.04||9.07||≈ 3°||167 K|
|2 Pallas||16%||210||22%||2.71 ± 0.11||7.81||≈ 80°||164 K|
|10 Hygiea||12%||87||9%||2.76 ± 1.2||27.6||≈ 60°||164 K|
Measurements of the rotation rates of large asteroids in the asteroid belt show that there is an upper limit. No asteroid with a diameter larger than 100 meters has a rotation period shorter than 2.2 hours. For asteroids rotating faster than approximately this rate, the centrifugal acceleration at the surface exceeds the gravitational acceleration, so any loose surface material would be flung out. A solid object, by contrast, should be able to rotate much more rapidly. This suggests that most asteroids with a diameter over 100 meters are rubble piles formed through the accumulation of debris after collisions between asteroids.
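That 2.2-hour limit is roughly what one gets by balancing self-gravity against centrifugal acceleration at the equator of a strengthless, spherical body. A minimal sketch of the calculation, assuming a bulk density of about 2 g/cm³ (a typical rubble-pile value chosen here for illustration, not a figure from this article):

```python
import math

# Critical rotation period for a strengthless, spherical rubble pile:
# material at the equator stays bound while the centrifugal acceleration
# (2*pi/P)^2 * R is less than the gravitational acceleration G*M/R^2.
# With M = (4/3)*pi*R^3*rho, the radius cancels and
#     P_crit = sqrt(3*pi / (G * rho)).
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
rho = 2000.0           # assumed bulk density, kg/m^3 (~2 g/cm^3)

p_crit = math.sqrt(3 * math.pi / (G * rho))
print(f"critical period ≈ {p_crit / 3600:.1f} hours")   # ≈ 2.3 h, near the observed barrier
```

A lower assumed density gives a longer critical period, so an observed barrier near 2.2 hours is consistent with strengthless bodies of roughly this density.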
The physical composition of asteroids is varied and in most cases poorly understood. Ceres appears to be composed of a rocky core covered by an icy mantle, whereas Vesta is thought to have a nickel-iron core, an olivine mantle, and a basaltic crust. 10 Hygiea, however, which appears to have a uniformly primitive composition of carbonaceous chondrite, is thought to be the largest undifferentiated asteroid. Most of the smaller asteroids are thought to be piles of rubble held together loosely by gravity, though the largest are probably solid. Some asteroids have moons or are co-orbiting binaries: rubble piles, moons, binaries, and scattered asteroid families are believed to be the results of collisions that disrupted a parent asteroid.
Asteroids contain traces of amino acids and other organic compounds, and some speculate that asteroid impacts may have seeded the early Earth with the chemicals necessary to initiate life, or may have even brought life itself to Earth. (See also panspermia.) In August 2011, a report, based on NASA studies with meteorites found on Earth, was published suggesting DNA and RNA components (adenine, guanine and related organic molecules) may have been formed on asteroids and comets in outer space.
Composition is calculated from three primary sources: albedo, surface spectrum, and density. The last can only be determined accurately by observing the orbits of moons the asteroid might have. So far, every asteroid with moons has turned out to be a rubble pile, a loose conglomeration of rock and metal that may be half empty space by volume. The investigated asteroids are as large as 280 km in diameter, and include 121 Hermione (268×186×183 km), and 87 Sylvia (384×262×232 km). Only half a dozen asteroids are larger than 87 Sylvia, though none of them have moons; however, some smaller asteroids are thought to be more massive, suggesting they may not have been disrupted, and indeed 511 Davida, the same size as Sylvia to within measurement error, is estimated to be two and a half times as massive, though this is highly uncertain. The fact that such large asteroids as Sylvia can be rubble piles, presumably due to disruptive impacts, has important consequences for the formation of the Solar system: Computer simulations of collisions involving solid bodies show them destroying each other as often as merging, but colliding rubble piles are more likely to merge. This means that the cores of the planets could have formed relatively quickly.
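The moon-based density measurement works through Kepler's third law: the satellite's orbital size and period fix the primary's mass, and a shape model supplies the volume. The numbers below are hypothetical placeholders chosen only to show the arithmetic; they are not the measured parameters of Sylvia, Hermione, or any other real asteroid.

```python
import math

# Mass and bulk density of an asteroid from a small moon's orbit.
# Kepler's third law (moon mass negligible): M = 4*pi^2 * a^3 / (G * P^2).
# The orbital and shape values below are hypothetical, for illustration only.
G = 6.674e-11              # m^3 kg^-1 s^-2

a = 1.3e6                  # assumed semi-major axis of the moon's orbit, m
P = 3.5 * 86400            # assumed orbital period, s
mass = 4 * math.pi**2 * a**3 / (G * P**2)

# Approximate the primary as a triaxial ellipsoid with assumed radii (m).
rx, ry, rz = 1.9e5, 1.3e5, 1.2e5
volume = 4.0 / 3.0 * math.pi * rx * ry * rz

density = mass / volume    # kg/m^3; values near 1000-1500 would point to high porosity
print(f"mass ≈ {mass:.2e} kg, bulk density ≈ {density:.0f} kg/m^3")
```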
On 7 October 2009, the presence of water ice was confirmed on the surface of 24 Themis using NASA’s Infrared Telescope Facility. The surface of the asteroid appears completely covered in ice. As this ice layer is sublimated, it may be getting replenished by a reservoir of ice under the surface. Organic compounds were also detected on the surface. Scientists hypothesize that some of the first water brought to Earth was delivered by asteroid impacts after the collision that produced the Moon. The presence of ice on 24 Themis supports this theory.
Most asteroids outside the big four (Ceres, Pallas, Vesta, and Hygiea) are likely to be broadly similar in appearance, if irregular in shape. 50-km (31-mi) 253 Mathilde is a rubble pile saturated with craters with diameters the size of the asteroid's radius, and Earth-based observations of 300-km (186-mi) 511 Davida, one of the largest asteroids after the big four, reveal a similarly angular profile, suggesting it is also saturated with radius-size craters. Medium-sized asteroids such as Mathilde and 243 Ida that have been observed up close also reveal a deep regolith covering the surface. Of the big four, Pallas and Hygiea are practically unknown. Vesta has compression fractures encircling a radius-size crater at its south pole but is otherwise a spheroid. Ceres seems quite different in the glimpses Hubble has provided, with surface features that are unlikely to be due to simple craters and impact basins, but details will not be known until Dawn arrives in 2015.
Asteroids become darker and redder with age due to space weathering. However, evidence suggests that most of the color change occurs rapidly, in the first hundred thousand years, limiting the usefulness of spectral measurements for determining the age of asteroids.
Asteroids are commonly classified according to two criteria: the characteristics of their orbits, and features of their reflectance spectrum.
Many asteroids have been placed in groups and families based on their orbital characteristics. Apart from the broadest divisions, it is customary to name a group of asteroids after the first member of that group to be discovered. Groups are relatively loose dynamical associations, whereas families are tighter and result from the catastrophic break-up of a large parent asteroid sometime in the past. Families have only been recognized within the asteroid belt. They were first recognized by Kiyotsugu Hirayama in 1918 and are often called Hirayama families in his honor.
About 30–35% of the bodies in the asteroid belt belong to dynamical families each thought to have a common origin in a past collision between asteroids. A family has also been associated with the plutoid dwarf planet Haumea.
Quasi-satellites and horseshoe objects
Some asteroids have unusual horseshoe orbits that are co-orbital with the Earth or some other planet. Examples are 3753 Cruithne and 2002 AA29. The first instance of this type of orbital arrangement was discovered between Saturn's moons Epimetheus and Janus.
Sometimes these horseshoe objects temporarily become quasi-satellites for a few decades or a few hundred years, before returning to their earlier status. Both Earth and Venus are known to have quasi-satellites.
In 1975, an asteroid taxonomic system based on color, albedo, and spectral shape was developed by Clark R. Chapman, David Morrison, and Ben Zellner. These properties are thought to correspond to the composition of the asteroid's surface material. The original classification system had three categories: C-types for dark carbonaceous objects (75% of known asteroids), S-types for stony (silicaceous) objects (17% of known asteroids) and U for those that did not fit into either C or S. This classification has since been expanded to include many other asteroid types. The number of types continues to grow as more asteroids are studied.
The two taxonomies now in widest use are the Tholen classification and the SMASS classification. The former was proposed in 1984 by David J. Tholen, and was based on data collected from an eight-color asteroid survey performed in the 1980s. This resulted in 14 asteroid categories. In 2002, the Small Main-Belt Asteroid Spectroscopic Survey resulted in a modified version of the Tholen taxonomy with 24 different types. Both systems have three broad categories of C, S, and X asteroids, where X consists of mostly metallic asteroids, such as the M-type. There are also several smaller classes.
The proportion of known asteroids falling into the various spectral types does not necessarily reflect the proportion of all asteroids that are of that type; some types are easier to detect than others, biasing the totals.
Originally, spectral designations were based on inferences of an asteroid's composition. However, the correspondence between spectral class and composition is not always very good, and a variety of classifications are in use. This has led to significant confusion. Although asteroids of different spectral classifications are likely to be composed of different materials, there are no assurances that asteroids within the same taxonomic class are composed of similar materials.
Until the age of space travel, objects in the asteroid belt were merely pinpricks of light in even the largest telescopes and their shapes and terrain remained a mystery. The best modern ground-based telescopes and the Earth-orbiting Hubble Space Telescope can resolve a small amount of detail on the surfaces of the largest asteroids, but even these mostly remain little more than fuzzy blobs. Limited information about the shapes and compositions of asteroids can be inferred from their light curves (their variation in brightness as they rotate) and their spectral properties, and asteroid sizes can be estimated by timing the lengths of star occultations (when an asteroid passes directly in front of a star). Radar imaging can yield good information about asteroid shapes and orbital and rotational parameters, especially for near-Earth asteroids. In terms of delta-v and propellant requirements, NEOs are more easily accessible than the Moon.
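The occultation-timing method mentioned above amounts to multiplying the speed of the asteroid's shadow across the ground by how long the star stays hidden; chords from several observing sites then trace out the silhouette. A toy example with assumed numbers (not from any real event):

```python
# Toy occultation sizing: chord length = shadow speed * occultation duration.
# The speed and timings below are assumed for illustration, not a real event.
shadow_speed_km_s = 15.0                 # assumed sky-plane speed of the shadow

# Disappearance-to-reappearance durations (seconds) timed at three assumed sites.
durations_s = [6.2, 8.9, 4.1]

chords_km = [shadow_speed_km_s * t for t in durations_s]
print("chord lengths (km):", [round(c, 1) for c in chords_km])
print("longest chord (km):", max(chords_km))   # a lower limit on the diameter
```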
The first close-up photographs of asteroid-like objects were taken in 1971 when the Mariner 9 probe imaged Phobos and Deimos, the two small moons of Mars, which are probably captured asteroids. These images revealed the irregular, potato-like shapes of most asteroids, as did later images from the Voyager probes of the small moons of the gas giants.
In September 2007, NASA launched the Dawn Mission, which orbited the protoplanet 4 Vesta from July 2011 to September 2012, and is planned to orbit 1 Ceres in 2015. 4 Vesta is the largest asteroid visited to date.
The Japan Aerospace Exploration Agency (JAXA) plans to launch around 2015 the improved Hayabusa 2 space probe and to return asteroid samples by 2020. Current target for the mission is the C-type asteroid (162173) 1999 JU3.
On 15 February 2013, an asteroid measuring approximately 18 metres (59 feet) with a mass of about 9,100 tonnes (10,000 short tons) exploded over Chelyabinsk, Russia, causing 1,500 injuries and damaging 7,000 buildings. Small samples of the rocky Chelyabinsk meteorite were quickly recovered and analyzed, with a larger fragment found several months later.
In early 2013, NASA announced the planning stages of a mission to capture a near-Earth asteroid and move it into lunar orbit where it could possibly be visited by astronauts and later impacted into the Moon.
It has been suggested that asteroids might be used as a source of materials that may be rare or exhausted on Earth (asteroid mining), or materials for constructing space habitats (see Colonization of the asteroids). Materials that are heavy and expensive to launch from Earth may someday be mined from asteroids and used for space manufacturing and construction.
Asteroids and the asteroid belt are a staple of science fiction stories. Asteroids play several potential roles in science fiction: as places human beings might colonize, resources for extracting minerals, hazards encountered by spaceships traveling between two other points, and as a threat to life on Earth by potential impact.
- Asteroid deflection strategies
- Atira asteroids, (Interior-Earth objects, asteroids with orbits fully within that of Earth).
- BOOTES (Burst Observer and Optical Transient Exploring System)
- Category:Asteroid groups and families
- Category:Binary asteroids
- Centaur (minor planet)
- Dwarf planet
- Impact event
- List of asteroid close approaches to Earth
- List of asteroids named after people
- List of asteroids named after places
- List of minor planets
- List of notable asteroids
- Lost asteroid
- Marco Polo (spacecraft)
- Meanings of asteroid names
- Minor planet
- Near-Earth object
- Near Earth Object Surveillance Satellite (NEOSSat), Canada's new satellite
- Orion asteroid mission
- Pioneer 10 space probe
- Pronunciation of asteroid names
- Rosetta probe
- Ceres is the largest asteroid and is now classified as a dwarf planet. All other asteroids are now classified as small Solar System bodies along with comets, centaurs, and the smaller trans-Neptunian objects.
- Neptune also has a few known trojans, and these are thought to actually be much more numerous than the Jovian trojans. However, they are often included in the trans-Neptunian population rather than counted with the asteroids.
- Below 10 m, these rocks are by convention considered to be meteoroids.
- "Asteroids". NASA – Jet Propulsion Laboratory. Retrieved 13 September 2010.
- Asimov, Isaac, and Dole, Stephen H. Planets for Man (New York: Random House, 1964), p.43.
- "What Are Asteroids And Comets?". Near Earth Object Program FAQ. NASA. Archived from the original on 9 September 2010. Retrieved 13 September 2010.
- Closest Flyby of Large Asteroid to be Naked-Eye Visible, Space.com, 4 February 2005
- Provisional Designations, Minor Planet Center, 20 September 2013
- Küppers, Michael; O’Rourke, Laurence; Bockelée-Morvan, Dominique; Zakharov, Vladimir; Lee, Seungwon; von Allmen, Paul; Carry, Benoît; Teyssier, David; Marston, Anthony; Müller, Thomas; Crovisier, Jacques; Barucci, M. Antonietta; Moreno, Raphael (2014). "Localized sources of water vapour on the dwarf planet (1) Ceres". Nature 505 (7484): 525–527. doi:10.1038/nature12918. ISSN 0028-0836.
- Harrington, J.D. (22 January 2014). "Herschel Telescope Detects Water on Dwarf Planet - Release 14-021". NASA. Retrieved 22 January 2014.
- Gould, B. A. (1852). "On the Symbolic Notation of the Asteroids". Astronomical Journal 2: 80. Bibcode:1852AJ......2...80G. doi:10.1086/100212.
- Hilton, James L. (17 September 2001). "When Did the Asteroids Become Minor Planets". Retrieved 26 March 2006.[dead link]
- Encke, J. F. (1854). "Beobachtung der Bellona, nebst Nachrichten über die Bilker Sternwarte". Astronomische Nachrichten 38 (9): 143. doi:10.1002/asna.18540380907.
- Rümker, G. (1855). "Name und Zeichen des von Herrn R. Luther zu Bilk am 19. April entdeckten Planeten". Astronomische Nachrichten 40 (24): 373. doi:10.1002/asna.18550402405.
- Luther, R. (1856). "Schreiben des Herrn Dr. R. Luther, Directors der Sternwarte zu Bilk, an den Herausgeber". Astronomische Nachrichten 42 (7): 107. Bibcode:1855AN.....42..107L. doi:10.1002/asna.18550420705.
- "When did the asteroids become minor planets?". Naval Meteorology and Oceanography Command. Retrieved 6 November 2011.
- Except for Pluto and, in the astrological community, for a few outer bodies such as 2060 Chiron
- In an oral presentation("HAD Meeting with DPS, Denver, October 2013 - Abstracts of Papers". Retrieved 14 October 2013.), Clifford Cunningham presented his finding that the word has been coined by Charles Burney, jr., the son of a friend of Herschel, see "Local expert reveals who really coined the word 'asteroid'". South Florida Sun-Sentinel. 8 October 2013. Retrieved 10 October 2013.. See also Wall, Mike (10 January 2011). "Who Really Invented the Word 'Asteroid' for Space Rocks?". SPACE.com. Retrieved 10 October 2013.
- Hale, George E. (1916). "Address at the semi-centennial of the Dearborn Observatory: Some Reflections on the Progress of Astrophysics". Popular Astronomy 24: 550–558, at p 555. Bibcode:1916PA.....24..550H.
- Seares, Frederick H. (1930). "Address of the Retiring President of the Society in Awarding the Bruce Medal to Professor Max Wolf". Publ. Astr. Soc. Pacific 42: 5–22, at p 10. Bibcode:1930PASP...42....5S. doi:10.1086/123986.
- Chapman, Mary G. (17 May 1992). "Carolyn Shoemaker, Planetary Astronomer and Most Successful 'Comet Hunter' To Date". USGS. Retrieved 15 April 2008.
- NEO Discovery Statistics from Mainzer et al. (2011)
- Yeomans, Don. "Near Earth Object Search Programs". NASA. Archived from the original on 24 April 2008. Retrieved 15 April 2008.
- "Minor Planet Discover Sites". Archived from the original on 30 August 2010. Retrieved 24 August 2010.
- "Unusual Minor Planets". Archived from the original on 27 August 2010. Retrieved 24 August 2010.
- Beech, M.; Steel, D. (September 1995). "On the Definition of the Term Meteoroid". Quarterly Journal of the Royal Astronomical Society 36 (3): 281–284. Bibcode:1995QJRAS..36..281B. Retrieved 27 September 2013.
- Czechowski L., Adv. Space Res., 38, 2006,2054-2059. DOI 10.1016/j.astr.2006.09.004
- The definition of "small Solar System bodies" says that they "include most of the Solar System asteroids, most trans-Neptunian objects, comets, and other small bodies". The Final IAU Resolution on the definition of "planet" ready for voting (IAU)
- "English Dictionary – Browsing Page P-44". HyperDictionary.com. Retrieved 15 April 2008.
- Weissman, Paul R., William F. Bottke, Jr., and Harold F. Levinson. "Evolution of Comets into Asteroids." Southwest Research Institute, Planetary Science Directorate. 2002. Web Retrieved 3 August 2010
- "Are Kuiper Belt Objects asteroids?", "Ask an astronomer", Cornell University
- "Asteroids and Comets", NASA website
- "Comet Dust Seems More Asteroidy" Scientific American, 25 January 2008
- "Comet samples are surprisingly asteroid-like", New Scientist, 24 January 2008
- For instance, a joint NASA–JPL public-outreach website states:
"We include Trojans (bodies captured in Jupiter's 4th and 5th Lagrange points), Centaurs (bodies in orbit between Jupiter and Neptune), and trans-Neptunian objects (orbiting beyond Neptune) in our definition of "asteroid" as used on this site, even though they may more correctly be called "minor planets" instead of asteroids."
- Questions and Answers on Planets, IAU
- "Three new planets may join solar system", New Scientist, 16 August 2006
- Bottke, Durda; Durda, Jedicke; Nesvorny, Vokrouhlicky; Jedicke, R; Morbidelli, A; Vokrouhlicky, D; Levison, H (2005). "The fossilized size distribution of the main asteroid belt". Icarus 175: 111. Bibcode:2005Icar..175..111B. doi:10.1016/j.icarus.2004.10.026.
- Kerrod, Robin (2000). Asteroids, Comets, and Meteors. Lerner Publications Co. ISBN 0-585-31763-1.
- William B. McKinnon, 2008, "On The Possibility Of Large KBOs Being Injected Into The Outer Asteroid Belt". American Astronomical Society, DPS meeting #40, #38.03
- Tedesco, Edward; Metcalfe, Leo (4 April 2002). "New study reveals twice as many asteroids as previously believed" (Press release). European Space Agency. Retrieved 21 February 2008.
- Schmidt, B.; Russell, C. T.; Bauer, J. M.; Li, J.; McFadden, L. A.; Mutchler, M.; Parker, J. W.; Rivkin, A. S.; Stern, S. A.; Thomas, P. C. (2007). "Hubble Space Telescope Observations of 2 Pallas". American Astronomical Society, DPS meeting #39 39: 485. Bibcode:2007DPS....39.3519S.
- Pitjeva, E. V. (2004). "Estimations of masses of the largest asteroids and the main asteroid belt from ranging to planets, Mars orbiters and landers". 35th COSPAR Scientific Assembly. Held 18–25 July 2004, in Paris, France. p. 2014.
- Davis 2002, "Asteroids III", cited by Željko Ivezić
- "Recent Asteroid Mass Determinations". Maintained by Jim Baer. Last updated 2010-12-12. Retrieved 2 September 2011. The values of Juno and Herculina may be off by as much as 16%, and Euphrosyne by a third. The order of the lower eight may change as better data is acquired, but the values do not overlap with any known asteroid outside these twelve.
- Pitjeva, E. V. (2005). "High-Precision Ephemerides of Planets—EPM and Determination of Some Astronomical Constants" (PDF). Solar System Research 39 (3): 184. Bibcode:2005SoSyR..39..176P. doi:10.1007/s11208-005-0033-2.
- "The Final IAU Resolution on the Definition of "Planet" Ready for Voting". IAU. 24 August 2006. Retrieved 2 March 2007.
- Parker, J. W.; Stern, S. A.; Thomas, P. C.; Festou, M. C.; Merline, W. J.; Young, E. F.; Binzel, R. P.; and Lebofsky, L. A. (2002). "Analysis of the First Disk-resolved Images of Ceres from Ultraviolet Observations with the Hubble Space Telescope". The Astronomical Journal 123 (1): 549–557. arXiv:astro-ph/0110258. Bibcode:2002AJ....123..549P. doi:10.1086/338093.
- "Asteroid 1 Ceres". The Planetary Society. Archived from the original on 29 September 2007. Retrieved 20 October 2007.
- "Key Stages in the Evolution of the Asteroid Vesta". Hubble Space Telescope news release. 1995. Archived from the original on 30 September 2007. Retrieved 20 October 2007. Russel, C. T.; et al. (2007). "Dawn mission and operations". NASA/JPL. Retrieved 20 October 2007.
- Burbine, T. H. (July 1994). "Where are the olivine asteroids in the main belt?". Meteoritics 29 (4): 453. Bibcode:1994Metic..29..453B.
- Torppa, J.; et al. (1996). "Shapes and rotational properties of thirty asteroids from photometric data". Icarus 164 (2): 346–383. Bibcode:2003Icar..164..346T. doi:10.1016/S0019-1035(03)00146-5.
- Larson, H. P.; Feierberg, M. A.; and Lebofsky, L. A.; Feierberg; Lebofsky (1983). "The composition of asteroid 2 Pallas and its relation to primitive meteorites". Icarus (ISSN 0019-1035) 56 (3): 398. Bibcode:1983Icar...56..398L. doi:10.1016/0019-1035(83)90161-6.
- Barucci, M. A.; et al. (2002). "10 Hygiea: ISO Infrared Observations" (PDF). Archived from the original on 28 November 2007. Retrieved 21 October 2007. "Ceres the Planet". orbitsimulator.com. Archived from the original on 11 October 2007. Retrieved 20 October 2007.
- "Asteroid Density, Porosity, and Structure". lpi.usra.edu. Retrieved 3 January 2013.
- Rossi, Alessandro (20 May 2004). "The mysteries of the asteroid rotation day". The Spaceguard Foundation. Retrieved 9 April 2007.
- HubbleSite – NewsCenter – Asteroid or Mini-Planet? Hubble Maps the Ancient Surface of Vesta (04/19/1995) – Release Images
- Life is Sweet: Sugar-Packing Asteroids May Have Seeded Life on Earth Archived January 24, 2002 at the Wayback Machine, Space.com, 19 December 2001
- Callahan, M.P.; Smith, K.E.; Cleaves, H.J.; Ruzica, J.; Stern, J.C.; Glavin, D.P.; House, C.H.; Dworkin, J.P. (11 August 2011). "Carbonaceous meteorites contain a wide range of extraterrestrial nucleobases". PNAS. doi:10.1073/pnas.1106493108. Retrieved 15 August 2011.
- Steigerwald, John (8 August 2011). "NASA Researchers: DNA Building Blocks Can Be Made in Space". NASA. Retrieved 10 August 2011.
- ScienceDaily Staff (9 August 2011). "DNA Building Blocks Can Be Made in Space, NASA Evidence Suggests". ScienceDaily. Retrieved 9 August 2011.
- "Artist's view of watery asteroid in white dwarf star system GD 61". ESA/Hubble. Retrieved 12 October 2013.
- Marchis, Descamps, et al. Icarus, February 2011
- Cowen, Ron (8 October 2009). "Ice confirmed on an asteroid". Science News. Archived from the original on 12 October 2009. Retrieved 9 October 2009.
- Atkinson, Nancy (8 October 2009). "More water out there, ice found on an asteroid". International Space Fellowship. Archived from the original on 11 October 2009. Retrieved 11 October 2009.
- Campins, H.; Hargrove, K; Pinilla-Alonso, N; Howell, E.S.; Kelley, M.S.; Licandro, J.; Mothé-Diniz, T.; Fernández, Y.; Ziffer, J. (2010). "Water ice and organics on the surface of the asteroid 24 Themis". Nature 464 (7293): 1320–1. doi:10.1038/nature09029. PMID 20428164.
- Rivkin, Andrew S.; Emery, Joshua P. (2010). "Detection of ice and organics on an asteroidal surface". Nature 464 (7293): 1322–1323. Bibcode:2010Natur.464.1322R. doi:10.1038/nature09028. PMID 20428165.
- Mack, Eric. "Newly spotted wet asteroids point to far-flung Earth-like planets". CNET.
- A.R. Conrad et al. 2007. "Direct measurement of the size, shape, and pole of 511 Davida with Keck AO in a single night", Icarus, doi:10.1016/j.icarus.2007.05.004
- "University of Hawaii Astronomer and Colleagues Find Evidence That Asteroids Change Color as They Age". University of Hawaii Institute for Astronomy. 19 May 2005. Retrieved 27 February 2013.
- Rachel Courtland (30 April 2009). "Sun damage conceals asteroids' true ages". New Scientist. Retrieved 27 February 2013.
- Zappalà, V. (1995). "Asteroid families: Search of a 12,487-asteroid sample using two different clustering techniques". Icarus 116 (2): 291–314. Bibcode:1995Icar..116..291Z. doi:10.1006/icar.1995.1127.
- Chapman, C. R.; Morrison, David; Zellner, Ben (1975). "Surface properties of asteroids: A synthesis of polarimetry, radiometry, and spectrophotometry". Icarus 25 (1): 104–130. Bibcode:1975Icar...25..104C. doi:10.1016/0019-1035(75)90191-8.
- Tholen, D. J. (March 8–11, 1988). "Asteroid taxonomic classifications". Asteroids II; Proceedings of the Conference. Tucson, AZ: University of Arizona Press. pp. 1139–1150. Retrieved 14 April 2008.
- Bus, S. J. (2002). "Phase II of the Small Main-belt Asteroid Spectroscopy Survey: A feature-based taxonomy". Icarus 158 (1): 146. Bibcode:2002Icar..158..146B. doi:10.1006/icar.2002.6856.
- McSween Jr., Harry Y. (1999). Meteorites and their Parent Planets (2nd ed.). Oxford University Press. ISBN 0-521-58751-4.
- A Piloted Orion Flight to a Near-Earth Object: A Feasibility Study
- NASA May Slam Captured Asteroid Into Moon (Eventually), space.com, Mike Wall, 30 September 2013
- Asteroids@home (BOINC distributed computing project)
- Rocks from the Main Belt asteroids
- Alphabetical list of minor planet names (ASCII) (Minor Planet Center)
- Near Earth Asteroid Tracking (NEAT)
- Asteroids Page at NASA's Solar System Exploration
- Asteroid Simulator with Moon and Earth
- Alphabetical and numerical lists of minor planet names (Unicode) (Institute of Applied Astronomy)
- Future Asteroid Interception Research
- Near Earth Objects Dynamic Site
- Asteroids Dynamic Site: up-to-date osculating orbital elements and proper orbital elements, University of Pisa, Italy
- JPL small bodies database: current downloadable ASCII table of orbit data and absolute magnitudes H for over 200,000 asteroids, sorted by number (Caltech/JPL)
- Asteroid naming statistics
- Spaceguard UK
- Committee on Small Body Nomenclature
- List of minor planet orbital groupings and families from ProjectPluto
- Cunningham, Clifford, "Introduction to Asteroids: The Next Frontier", ISBN 0-943396-16-6
- James L. Hilton: When Did the Asteroids Become Minor Planets?
- Kirkwood, Daniel; Relations between the Motions of some of the Minor Planets (1874).
- Schmadel, L.D. (2003). Dictionary of Minor Planet Names. 5th ed. IAU/Springer-Verlag: Heidelberg.
- Asteroid articles in Planetary Science Research Discoveries
- Catalogue of the Solar System Small Bodies Orbital Evolution
- TECA Table of next close approaches to the Earth
- SAEL Small Asteroids Encounter List
- MBPL Minor Body Priority List
- PCEL Planetary Close Encounter List
- NEO MAP (Armagh Observatory)
- Information about near-Earth asteroids and their close approaches
Q fever is an infectious disease that is spread from animals to people. It is caused by bacteria called Coxiella burnetii.
What is Q fever?
Q fever is an infectious disease that is spread from animals to people by bacteria called Coxiella burnetii. Cattle, sheep and goats are the most common source of human infection, but other animals such as kangaroos, bandicoots, camels, dogs and cats can also cause infection.
Infected animals generally do not become ill, though miscarriage or stillbirth may occur. They can contaminate their environment when they shed the bacteria in their urine, faeces, milk and in especially high numbers in birthing products, such as the placenta. Infected ticks living on animals can also shed the bacteria and contaminate the animal's hide, wool and fur.
People become infected with Q fever by inhaling contaminated aerosols and dust arising from:
- animals, animal products and waste (e.g. milk, wool, hides, fur, urine, faeces and birth products)
- animal environments (e.g. soil, bedding, straw, hay and grass)
- other contaminated items (e.g. machinery, equipment, vehicles and clothing).
Less commonly, infection occurs from consuming raw milk. Infection from tick bites and via person to person occurs rarely.
Many people who are infected with Q fever do not become sick or may have only a mild illness, sometimes mistaken for a cold or flu. Those who become acutely ill usually develop an influenza (flu)-like illness that can be severe and may require admission to hospital. Some people may develop pneumonia (chest infection) and hepatitis (inflammation of the liver). Most people make a full recovery and become immune to repeat infection; however it may take time to return to normal health.
Infection during pregnancy may cause complications such as miscarriage or the baby being born prematurely. Subsequent pregnancies may also be affected.
Around 20 per cent of people with acute Q fever develop post-Q fever fatigue syndrome, causing prolonged ill health and debilitating fatigue that lasts more than 12 months.
Less than five per cent of infected people develop chronic Q fever. This condition, caused by persistent infection, can result in serious health issues months or years later. It most commonly causes endocarditis (inflammation of the lining of the heart) but can also affect internal organs, tissues and bone. Conditions including heart valve disorders, impaired immunity and pregnancy increase the risk of chronic Q fever.
It is important that people who work with animals, animal products and waste let their doctor know if they become ill with a flu-like illness. A doctor can test for Q fever, if indicated, and treat the person with antibiotics if they are infected.
Who is most at risk?
People who work with animals and animal products and waste are at risk of being infected with Q fever, especially new workers and visitors to animal-related industries.
Meat workers who work exclusively with pigs and town butchers working with dressed carcasses are not considered to be at an increased risk for Q fever.
Typical at-risk workers include:
- abattoir workers, contractors and visitors to abattoirs
- cattle, sheep and goat farmers and graziers
- dairy industry workers and those who work with raw milk
- shearers and wool classers
- tannery workers
- kangaroo shooters
- wild game and camel meat processing workers
- transporters of livestock, animal products and waste
- feedlot workers
- staff and students of agricultural education programs
- rendering plant workers
- pet food manufacturing workers
- wildlife and zoo workers and animal exhibitors
- laboratory workers handling veterinary specimens or working with Q fever bacteria
- workers in animal research facilities
- workers processing animal foetal products for the cosmetics industry
- veterinarians and veterinary nurses
- professional dog and cat breeders
- animal refuge workers
- laundry workers who handle clothing from at risk workplaces
- gardeners mowing in at-risk environments
- other people exposed to cattle, sheep, goats, camels, native wildlife, and animal products and waste.
The risk of infection is significant, as:
- Q fever is very infectious and people can become infected from inhaling just a few bacteria
- large numbers of bacteria are shed by infected animals
- the bacteria can survive in the environment for long periods, tolerate harsh conditions and spread in the air.
Prevention and control measures
Q fever vaccination
Q fever vaccination is the most important way to protect workers against infection. This requires pre-vaccination screening to exclude workers who have previously been infected with or vaccinated against Q fever, as they are at increased risk for a severe vaccine reaction.
Non-immune workers should be vaccinated against Q fever. Immunity usually develops 15 days after vaccination. Workers in the meat processing or affiliated livestock industries who have completed Q fever screening and vaccination can store this information on the Q Fever Register.
New workers to a business should undergo Q fever screening and vaccination before starting work. If this is not possible, they should undergo screening and vaccination as soon as possible after starting work and work in lower risk areas until they are known to be immune. If they need to enter higher risk areas, they should wear a suitable respirator as a short-term control measure and be trained in its correct use and fit. The minimum level of respiratory protection is a fit-tested half facepiece respirator with a P2 filter.
Supporting control measures
Supporting control measures should also be implemented to protect other workers, visitors and members of the public from Q fever risks. These will vary according to the nature of the work and the level of risk, but the following are examples of additional ways to control the risk.
Level 1 control measures
Eliminate the risks associated with Q fever (e.g. restrict non-immune persons from visiting the workplace).
Level 2 control measures
Minimise the risks by substituting a work activity with something safer, such as:
- replacing a high-pressure water cleaning method with a low-pressure water system to minimise airborne aerosols
- rostering on immune workers for high-risk locations and tasks.
Isolate the hazard such as:
- restrict non-essential and non-immune persons from entering Q fever risk areas
- isolate, enclose or contain the source of infection such as by installing enclosed visitor viewing areas at meatworks.
Use engineering and design controls to minimise exposure, such as:
- install ventilation systems to minimise the dispersal of airborne contaminants
- locate high traffic areas, car parks, site entry, offices and dining facilities away from higher risk areas
- install dust suppression systems to minimise airborne dust (e.g. water sprinklers)
- ensure that structures, surfaces, machinery and equipment are designed to be easily cleaned.
Level 3 control measures
Lower order control measures should be used to support higher order control measures.
- Use administrative controls, such as:
- develop safe work procedures to minimise Q fever risks
- provide workers with information, instruction and training on Q fever
- require contractors, labour hire workers and visitors to show proof of immunity to Q fever
- maintain a pool of Q fever immune contractors and casual workers
- keep the workplace clean to minimise the accumulation of dust and dirt
- use signage to inform people about Q fever risks and to use personal protective equipment (PPE)
- handle and dispose of animal products, waste, placenta and aborted foetuses appropriately, and where possible prevent animals from eating the placenta after giving birth
- provide suitable washing facilities for workers
- implement biosecurity measures to prevent the spread of infection between animals, e.g. tick treatments.
- Use PPE
- launder protective clothing (work clothes) on site or through a commercial laundry contractor, and keep it separate from street clothing
- Respiratory protective equipment (RPE) may be used as an interim or short-term control measure to protect non-immune workers, contractors and visitors. The minimum level of respiratory protection is a fit-tested half facepiece respirator with a P2 filter. It must be of a suitable size and fit, and the wearer must be instructed in its correct use.
Work-caused Q fever is a notifiable incident.
- For human health issues, visit www.health.qld.gov.au or call 13HEALTH (13 43 25 84).
- For animal health issues, visit www.biosecurity.qld.gov.au or call 13 25 23.
- To find a Q fever vaccine provider or to register immune workers, contact the Q Fever Register on 1300 QFEVER (1300 733 837) or visit www.qfever.org.
Editors’ Vox is a blog from AGU’s Publications Department.
Magnetic minerals, typically iron oxides and some iron sulfides, preserve a record of the ancient magnetic field of planetary bodies and, as such, carry a wealth of geoscientific information. Hematite (ferric iron oxide) is a common magnetic mineral on Earth and Mars. In addition to registering the paleomagnetic field, it carries information about ancient climates and environments. Well-dated terrestrial hematite enables detailed monsoon reconstructions, while hematite occurrences on Mars tend to be associated with the former presence of water. Hence, hematite-bearing regions are considered key to the search for potential ancient life on Mars.
A recent article in Reviews of Geophysics describes the magnetic and color properties of terrestrial hematite—where both sets of properties help to identify and interpret signals due to hematite. Here, the authors give an overview of the importance of hematite on Earth and Mars.
What is hematite and where is it most commonly found?
Hematite (α-Fe2O3) is an iron oxide that derives its name from the Greek haimatite, meaning blood-like, which refers to its distinctive red color. Hematite occurs widely on Earth, Mars, the Moon, and some asteroids. On Earth, hematite is abundant in aerobic tropical and subtropical soils and sediments, loosely referred to as red beds; i.e., it reflects warm and humid climates.
Hematite also occurs in Archean and Paleoproterozoic (4 to 1.6 billion years ago) sedimentary banded iron formations that record the evolution of Earth’s early atmosphere and ocean.
Moreover, hematite is the dominant pigment in oceanic red beds, reddish to pinkish marine sedimentary rocks deposited in the open sea far from the coast, which document global oceanic and climate changes during the Cretaceous greenhouse world.
High pressure and temperature experiments on hematite and its polymorphs suggest that they can be dominant magnetic signal carriers down to depths of ~600 km in (cold portions of) subducted slabs under conditions where the archetypical terrestrial magnetic mineral, magnetite (Fe3O4), has decomposed thermally.
Hematite is the most common pigmenting mineral on the surface of Mars, and its nickname as the “Red Planet” is due to hematite. Hematite on Mars occurs in three forms with different properties: nanophase, red crystalline, and gray crystalline hematite. The former two are the dominant contributors to the eye-catching reddish color of the bright regions of Mars.
How can hematite be used to understand paleoclimate variations over time?
Hematite formation is controlled by several geologic processes. For example, compared to other magnetic minerals, hematite is more abundant in subtropical and tropical soils that experience frequent prolonged dry episodes. Therefore, hematite content variations may indicate paleoclimate variations.
Additionally, hematite in some marine sediments is dominantly transported as dust from inland by wind (e.g., monsoon or westerlies). Stronger winds transport more dust and, thus, more hematite into the oceans. Evolution of monsoon systems or westerlies may be tracked by the hematite content in wind-blown sediments.
Additionally, soils with near-neutral pH and low organic content tend to favor hematite formation over goethite (α-FeOOH). So, the hematite to goethite ratio (Hm/Gt) can provide important soil moisture information related to climate change unless either phase is dissolved reductively.
Therefore, hematite abundance is an important proxy in studies of geologic and environmental processes.
What is cation substitution and how does it affect the physical properties of hematite?
Cations other than Fe3+ (e.g., Al3+, Ti4+, Mn2+) are always present to some extent in natural hematite. They are incorporated easily into the hematite crystal lattice by substituting for Fe.
For example, Al-substituted hematite occurs widely in Al-rich tropical and subtropical soils, e.g., in Brazil and South Africa. In such warm and humid environments, Al is incorporated into iron oxides during chemical weathering, so Al-hematite typically is associated with soil origin.
Fe3+ and Al3+ have a different ionic radius (Fe3+: 0.65 Å; Al3+: 0.53 Å), so the symmetrical octahedral hematite structure becomes distorted when Al substitution occurs, which enhances internal strain. Al-hematite particles are also smaller. These differences cause the magnetic properties of Al-substituted hematite to differ from unsubstituted hematite.
Non-magnetic Al ions are incorporated randomly into the hematite lattice, so Fe ions are diluted, which decreases the magnetism of hematite with increasing Al content; i.e., the magnetic susceptibility (χ, the magnetic moment of a sample in a low, Earth-like, magnetic field) and saturation remanent magnetization (Mrs, the maximum possible permanent magnetic moment after exposure to a high magnetic field) go down. In addition, with increasing Al content, the color of hematite becomes lighter red, so the characteristic peak position and amplitude of color reflectance spectra will change correspondingly.
These variable properties are crucial for identifying hematite and its cation-substituted counterparts on Earth and Mars.
What is remagnetization and how does it complicate the paleomagnetic record?
The natural remanent magnetization (NRM, a rock’s registration of the prevailing geomagnetic field at a given time) of a remagnetized rock represents a much younger age than that of the host rock. Thus, remagnetization complicates paleomagnetic data interpretation of rocks, soils, and sediments.
Typically, remagnetization mechanisms may involve several pathways: 1) magnetic mineral transformations associated with redox processes, i.e. formation of magnetite (Fe3O4), greigite (Fe3S4), or pyrrhotite (Fe7S8); 2) deformation-associated fluid migration and/or pressure solution; 3) chemical weathering in moist, tropical environments; or 4) the resetting of existing NRM by prolonged exposure to a stable geomagnetic field at moderately elevated temperatures (~100-250 °C), so called thermoviscous remanent magnetization acquisition.
Acquisition of secondary NRM through any of these mechanisms will obscure the primary paleomagnetic record and may lead to inaccurate paleomagnetic interpretations if not recognized. Therefore, discriminating between primary and secondary remanence is central to paleomagnetic studies. Unfortunately, widespread remagnetizations carried by hematite have been documented, especially in red beds.
How can terrestrial hematite be used to better understand the geology of Mars?
Understanding the origin of Martian hematite is essential to unravel its geologic history and to answer the question of whether liquid water was present on Mars. Possibly, hematite may even unveil its climatic history.
Martian samples are extremely scarce (essentially only from Martian meteorites). This makes it difficult to investigate hematite formation on Mars, so we must rely for now on relevant Earth analogs.
Hematite can form by several mechanisms, most of which involve water. Understanding hematite formation pathways is critical for interpreting information carried by hematite in terms of climate, environment, tectonics, and planetary evolution.
Although a fully defined hematite formation pathway has yet to be discerned for Mars, the origin of crystalline hematite can provide chemical clues about the early Martian environment, especially pertaining to the existence of (liquid) water, which is an indispensable requirement for evolution and sustenance of life.
What are some of the unresolved questions where additional research, data, or modeling is needed?
Although terrestrial hematite has been investigated systematically, the scarcity of Martian meteorite samples makes it difficult to investigate hematite from Mars. Systematic studies of the properties of terrestrial hematite are indispensable to provide an interpretive framework for Martian hematite. However, hematite properties are complex and depend critically on formation conditions.
Key questions are: how comparable is hematite on Earth and Mars? Is hematite on Mars cation-doped? If so, how can cation content be quantified? Can this be done with color reflectance measurements? A robust database of these hematite properties on Earth and Mars is required to inform such questions along with a full understanding of its formation mechanisms.
Integrated analysis should provide meaningful reference data for future studies of Martian soils or rocks. For example, by comparing with Al-substituted terrestrial reference hematite, it may be deduced whether hematite on Mars is also Al-substituted. This will provide necessary ground-truthing for spectral and other remote Martian surface observations.
—Zhaoxia Jiang
IIT JEE Maths Statistics and Probability
Statistics and Probability PDF Notes, Important Questions and Synopsis
- Statistics deals with the collection, presentation, analysis and interpretation of data.
- Data can be either ungrouped or grouped. Further, grouped data can be categorized into
- Discrete frequency distribution
- Continuous frequency distribution
Data can be represented in the form of tables or in the form of graphs.
Common graphical forms are bar charts, pie diagrams, histograms, frequency polygons, ogives etc.
First order of comparison for given data is the measures of central tendencies. Commonly used measures are (i) arithmetic mean, (ii) median and (iii) mode.
- Arithmetic mean or simply mean is the sum of all observations divided by the number of observations. It cannot be determined graphically. Arithmetic mean is not a suitable measure in case of extreme values in the data.
- Median is the measure which divides the data in two equal parts. Median is the middle term when the data is sorted.
In the case of an odd number of observations, the middle observation is the median. In the case of an even number of observations, the median is the average of the two middle observations.
The median can be determined graphically. It does not take into account all the observations.
- The mode is the most frequently occurring observation. For a frequency distribution, the mode may or may not be defined uniquely.
Variability or dispersion captures the spread of data. Dispersion helps us to differentiate the data when the measures of central tendency are the same.
The dispersion or scatter of a dataset can be measured from two perspectives:
Taking the order of the observations into consideration, the two measures are
- Quartile deviation
Taking the distance of each observation from the central position yields two measures:
- Mean deviation
- Variance and standard deviation
- Range is the difference between the highest and the lowest observation in the given data.
- There are three quartiles, Q1, Q2 and Q3 which divide the data into 4 equal parts. Here, Q2 is the median of the data.
- Mean of the absolute deviations about ‘a’ gives the ‘mean deviation about a’, where ‘a’ is the mean. It is denoted as MD(a).
MD(a) = (1/n) Σ |xi - a|, i.e. the sum of the absolute values of the deviations from the mean 'a' divided by the number of observations n.
Mean deviation can be calculated about the median or mode.
- Merits of mean deviation:
- It utilises all the observations of the set.
- It is the least affected by extreme values.
- It is simple to calculate and understand.
- Limitations of mean deviation:
- The foremost weakness of mean deviation is that in its calculations, negative differences are considered positive without any sound reasoning.
- It is not amenable to algebraic treatment.
- It cannot be calculated in the case of open end classes in the frequency distribution.
- Variance: Measure of variation based on taking the squares of the deviation.
- Variance is given by the mean of squared deviations. If the variance is small, then the data points cluster around the mean; otherwise, they are spread across.
- Standard deviation is simply expressed as the positive square root of variance of the given data set. Standard deviation of a set of observations does not change if a non-zero constant is added or subtracted from each observation.
- Merits of standard deviation:
- It is based on all the observations.
- It is suitable for further mathematical treatments.
- It is less affected by the fluctuations of sampling.
- A measure of variability which is independent of the units is called the coefficient of variation. It is denoted as CV.
- Coefficient of variation: A dimensionless quantity, the standard deviation expressed as a percentage of the mean, which helps compare the variability of two data sets with the same or different units. The distribution having a greater coefficient of variation has more variability around the central value than the distribution having a smaller value of the coefficient of variation (these measures are illustrated in the short sketch after this list).
- The theory of probability is a branch of mathematics which deals with uncertain or unpredictable events. Probability is a concept which gives a numerical measurement for the likelihood of occurrence of an event.
- The sample space S of an experiment is the set of all its outcomes. Thus, each outcome is also called a sample point of the experiment.
- An experiment is called random experiment if it satisfies the following two conditions:
- It has more than one possible outcome.
- It is not possible to predict the outcome in advance.
- Deterministic experiment: An experiment which results in a unique outcome.
- Sample space is a set consisting of all the outcomes; its cardinality is given by n(S). Any subset ‘E’ of a sample space for an experiment is called an event.
- The empty set ∅ and the sample space S describe events. In fact, ∅ is called an impossible event and S, i.e. the whole sample space, is called a sure event.
- If an event E has only one sample point of a sample space, it is called a simple (or elementary) event.
- A subset of the sample space which has more than one element is called a compound event.
- Events are said to be equally likely if we have no reason to believe that one is more likely to occur than the other. Both outcomes (head and tail) of tossing a coin are equally likely.
- The complement of an event A is the set of all outcomes which are not in (or not favourable to) A. It is denoted by A’.
- Certain event (sure event): If a random experiment occurs always, then the corresponding event is called a certain event.
- Impossible event: If a random experiment never occurs, then the corresponding event is called an impossible event.
- Mutually exclusive event: In a random experiment, if the occurrence of any one event prevents the occurrence of all the other events, then the corresponding events are said to be mutually exclusive.
- In other words, events A and B are said to be mutually exclusive if and only if they have no elements in common.
- Exhaustive event: In a random experiment, if the union of two or more events is the sample space, then the associated events are said to be exhaustive events.
- In other words, when every possible outcome of an experiment is considered, the events are called exhaustive events.
- Probability of an event E is the ratio of the number of elements in the event to the number of elements in the sample space.
i. P(E) = n(E)/n(S)
ii. 0 ≤ P(E) ≤ 1
- Independent events: Two or more events are said to be independent if the occurrence or non-occurrence of any of them does not affect the probability of occurrence or non-occurrence of the other events.
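As a worked illustration of the measures and the probability ratio above, here is a short Python sketch using only the standard library; the data set and the die-rolling example are invented for demonstration and are not part of the original notes.

```python
from statistics import mean, median, mode, pvariance, pstdev

data = [2, 4, 4, 4, 5, 5, 7, 9]          # made-up sample data

m = mean(data)                             # arithmetic mean
md = sum(abs(x - m) for x in data) / len(data)   # mean deviation about the mean, MD(a)
var = pvariance(data, m)                   # variance: mean of squared deviations
sd = pstdev(data, m)                       # standard deviation: positive square root of variance
cv = sd / m * 100                          # coefficient of variation, as a percentage

print(f"mean={m}, median={median(data)}, mode={mode(data)}")
print(f"MD(a)={md}, variance={var}, std dev={sd}, CV={cv:.1f}%")

# Probability: P(E) = n(E)/n(S) for the experiment of rolling one fair die.
S = {1, 2, 3, 4, 5, 6}                     # sample space
E = {x for x in S if x % 2 == 0}           # event "an even number turns up"
p = len(E) / len(S)
print(f"P(even) = {p}")                    # 0.5, and 0 <= P(E) <= 1 holds
```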
Mathematics represents more than something that has to be learned in school; it is a structure that helps your children understand a world of information. What follows are the 7 principles of building a solid math foundation for your children. The principles listed here are simple and powerful.
Understanding the Value First
Many young children understand how to count at least to a certain point, but knowing that adding 2 + 2 will equal 4 is not enough for them to fully understand its meaning unless they know what “2” and “4” are in the first place.
Math is the use of values, not just in how they interact, but in their meaning as well. Once a child understands the value of numbers, then concepts such as addition, subtraction, multiplication and division become more meaningful and relatable to them.
Repetition ingrains the lessons of mathematics into the child so they can be recalled instantly. This is why learning so often involves doing a series of similar problems over and over again, until the methods can be recalled and the solutions are close at hand.
Daily Math Problem Solving
Along with repetition within each lesson, provide them with lessons every day. Setting aside a set amount of time each day for math, whether using flash cards or working through homework assignments, helps build their math skills; even 10 to 15 minutes per day can drastically improve their performance.
Share Math in Real Life
One of the best ways to instill the importance of math is to show how it is used in real life: for example, working out how many gallons are needed to fill up the gas tank, using measuring cups when cooking to demonstrate volume, or counting the miles to the next destination. Every day there are situations where math arises, and for young children these can be very valuable lessons that show the importance of math.
Teach Math in Sequence
Going from Calculus back to Algebra may not seem that important at first, but Calculus uses a great deal of Algebra in solving equations. Therefore, it is vital that math is learned in sequence so that the child can fully understand the principles and applications before proceeding to a higher level.
Encourage Exploration of Math
A big factor in what inhibits many children from embracing math is their fear of the subject. Have your child use a calendar, globe, watch, measuring cup, milk jug and the many other items around the home that use forms of math as part of their function.
By encouraging them to explore how math applies to space, value, distance, volume and so forth, you can bolster their confidence and lower their anxiety about tackling new math subjects.
A math tutor San Diego can be brought in if the child is having difficulty understanding math problems. In many cases, the private instruction of a math tutor, done at a pace the child can keep up with, will help motivate them as they learn.
These 7 principles of building a solid math foundation can help your child learn and embrace mathematics, which will open up entirely new worlds for them.
If you live in San Diego and your child is struggling with Math in school, please contact us to find out how an expert Math Tutor San Diego can help you.
Coaxial cables
December 7, 2009
Introduction to coaxial cables
A coaxial cable is one that consists of two conductors that share a common axis. The inner conductor is typically a straight wire, either solid or stranded and the outer conductor is typically a shield that might be braided or a foil.
Coaxial cable is a cable type used to carry radio signals, video signals, measurement signals and data signals. Coaxial cables exist because we can't run open-wire line near metallic objects (such as ducting) or bury it; we trade signal loss for convenience and flexibility. Coaxial cable consists of an insulated center conductor which is covered with a shield. The signal is carried between the cable shield and the center conductor. This arrangement gives quite good shielding against noise from outside the cable, keeps the signal well inside the cable and keeps cable characteristics stable.
Coaxial cables and systems connected to them are not ideal. There is always some signal radiating from coaxial cable. Hence, the outer conductor also functions as a shield to reduce coupling of the signal into adjacent wiring. More shield coverage means less radiation of energy (but it does not necessarily mean less signal attenuation).
Coaxial cables are typically characterized by their impedance and cable loss. The length has nothing to do with a coaxial cable's impedance. Characteristic impedance is determined by the size and spacing of the conductors and the type of dielectric used between them. For ordinary coaxial cable used at reasonable frequencies, the characteristic impedance depends on the dimensions of the inner and outer conductors and on the dielectric. For an air dielectric, the characteristic impedance of a cable (Zo) is given by the formula Zo = 138 log(b/a), where b represents the inside diameter of the outer conductor (read: shield or braid), and a represents the outside diameter of the inner conductor; for a cable filled with a solid dielectric, this value is divided by the square root of the dielectric constant.
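As a quick illustration of the formula above, the sketch below evaluates Zo from the cable geometry. The 3.7 mm / 0.58 mm dimensions are roughly RG-59-like and the dielectric constant of 2.3 is a typical handbook value for solid polyethylene; both are assumptions for illustration, not figures taken from this article.

```python
import math

def coax_impedance(b_mm: float, a_mm: float, er: float = 1.0) -> float:
    """Characteristic impedance: b = inner diameter of the shield,
    a = outer diameter of the center conductor, er = dielectric constant."""
    return 138.0 / math.sqrt(er) * math.log10(b_mm / a_mm)

# RG-59-like geometry: 3.7 mm over the dielectric, 0.58 mm conductor, solid PE (er ~ 2.3)
print(round(coax_impedance(3.7, 0.58, er=2.3), 1))  # ~73 ohms, close to the nominal 75
```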
The most common coaxial cable impedances in use in various applications are 50 ohms and 75 ohms. 50 ohm cable is used in radio transmitter antenna connections, many measurement devices and in data communications (Ethernet). 75 ohm coaxial cable is used to carry video signals, TV antenna signals and digital audio signals. There are also other impedances in use in some special applications (for example 93 ohms). It is possible to build cables at other impedances, but those mentioned earlier are the standard ones that are easy to get. There is usually no point in trying to get something only slightly different for some marginal benefit, because standard cables are easy to get, cheap and generally very good. Different impedances have different characteristics. For maximum power handling, somewhere between 30 and 44 ohms is the optimum. Impedance somewhere around 77 ohms gives the lowest loss in a dielectric filled line. 93 ohm cable gives low capacitance per foot. It is practically very hard to find any coaxial cables with impedance much higher than that.
Here is a quick overview of common coaxial cable impedances and their main uses:
- 50 ohms: 50 ohm coaxial cable is very widely used in radio transmitter applications. It is used here because it matches nicely to many common transmitter antenna types, can quite easily handle high transmitter power and is traditionally used in this type of application (transmitters are generally matched to 50 ohms impedance). In addition to this, 50 ohm coaxial cable can be found in coaxial Ethernet networks, electronics laboratory interconnections (for example high frequency oscilloscope probe cables) and high frequency digital applications (for example, ECL and PECL logic matches nicely to 50 ohm cable). Commonly used 50 Ohm constructions include RG-8 and RG-58.
- 60 Ohms: Europe chose 60 ohms for radio applications around the 1950s. It was used in both transmitting applications and antenna networks. The use of this cable has been pretty much phased out, and nowadays RF systems in Europe use either 50 ohm or 75 ohm cable depending on the application.
- 75 ohms: The characteristic impedance 75 ohms is an international standard, based on optimizing the design of long distance coaxial cables. 75 ohm video cable is the coaxial cable type widely used in video, audio and telecommunications applications. Generally all baseband video applications that use coaxial cable (both analogue and digital) are matched for 75 ohm impedance cable. Also, RF video signal systems like antenna signal distribution networks in houses and cable TV systems are built from 75 ohm coaxial cable (those applications use very low loss cable types). In the audio world, digital audio (S/PDIF and coaxial AES/EBU) uses 75 ohm coaxial cable, as do radio receiver connections at home and in the car. In addition to this, some telecom applications (for example some E1 links) use 75 ohm coaxial cable. 75 Ohms is the telecommunications standard because, in a dielectric filled line, somewhere around 77 Ohms gives the lowest loss. For 75 Ohm use, common cables are RG-6, RG-11 and RG-59.
- 93 Ohms: This is not much used nowadays. 93 ohm cable was once used for short runs such as the connection between computers and their monitors because of its low capacitance per foot, which would reduce the loading on circuits and allow longer cable runs. In addition, it was used in some digital communication systems (IBM 3270 terminal networks) and some early LAN systems.
The characteristic impedance of a coaxial cable is determined by the ratio of the outer conductor diameter to the inner conductor diameter and by the dielectric constant of the insulation. The impedance of the coaxial cable changes somewhat with frequency. Impedance changes with frequency until resistance is a minor effect and until the dielectric constant is stable. Where it levels out is the "characteristic impedance". The frequency where the impedance settles to the characteristic impedance varies somewhat between different cables, but this generally happens at a frequency of around 100 kHz (it can vary).
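The low-frequency behaviour described above can be sketched with the general transmission-line expression Z0 = sqrt((R + jwL)/(G + jwC)). The R, L and G values below are rough, assumed figures for a generic 75 ohm cable (only the 67 pF/m matches the capacitance listed for the 75 ohm cables in the tables below), so treat the output as an illustration rather than a datasheet.

```python
import cmath, math

R = 0.05    # ohm/m, series resistance (assumed)
L = 370e-9  # H/m, series inductance (assumed)
C = 67e-12  # F/m, shunt capacitance (as listed for 75 ohm cables below)
G = 1e-9    # S/m, shunt conductance (assumed, nearly lossless dielectric)

for f in (1e3, 10e3, 100e3, 1e6, 10e6):
    w = 2 * math.pi * f
    z0 = cmath.sqrt((R + 1j * w * L) / (G + 1j * w * C))
    print(f"{f/1e3:>8.0f} kHz  |Z0| = {abs(z0):6.1f} ohm")
# |Z0| falls toward sqrt(L/C) (about 74 ohm for these values) as frequency increases,
# levelling out around 100 kHz, as described above.
```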
Essential properties of coaxial cables are their characteristic impedance and its regularity, their attenuation, as well as their behaviour concerning the electrical separation of cable and environment, i.e. their screening efficiency. In applications where the cable is used to supply voltage for active components in the cabling system, the DC resistance is also significant. The cable velocity information is needed in some applications as well. The coaxial cable's velocity of propagation is determined by the dielectric. It is expressed as a percentage of the speed of light. Here is some data on common coaxial cable insulation materials and their velocities:
- Polyethylene (PE): 66%
- Teflon: 70%
- Foam: 78-86%
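The listed velocities follow directly from the dielectric constant, since velocity factor = 1/sqrt(er). The sketch below uses typical handbook dielectric constants (assumed values, not measurements from this article) and also shows the resulting propagation delay per metre.

```python
import math

dielectrics = {"Polyethylene (PE)": 2.25, "Teflon (PTFE)": 2.1, "Foam PE (example)": 1.5}
c = 299_792_458  # speed of light, m/s

for name, er in dielectrics.items():
    vf = 1 / math.sqrt(er)                 # velocity factor
    delay_ns_per_m = 1e9 / (vf * c)        # propagation delay per metre
    print(f"{name:20s} VF = {vf:.0%}, delay = {delay_ns_per_m:.2f} ns/m")
# Solid PE comes out near 67%, PTFE near 69%, and foam PE higher, in line with the list above.
```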
Return loss is one number which shows cable performance, meaning how well it matches the nominal impedance. Poor cable return loss can indicate cable manufacturing defects and installation defects (cable damaged during installation). With a good quality coaxial cable in good condition you generally get better than -30 dB return loss, and you should generally not get much worse than -20 dB. Return loss is the same thing as the VSWR figure used in the radio world, only expressed differently (15 dB return loss = 1.43:1 VSWR, 23 dB return loss = 1.15:1 VSWR, etc.).
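Since return loss and VSWR are just two ways of expressing the same reflection, the conversion is a one-liner; this small helper (a sketch, not part of the original text) reproduces the example figures quoted above.

```python
def return_loss_to_vswr(rl_db: float) -> float:
    gamma = 10 ** (-abs(rl_db) / 20)      # magnitude of the reflection coefficient
    return (1 + gamma) / (1 - gamma)

for rl in (15, 20, 23, 30):
    print(f"{rl} dB return loss -> VSWR {return_loss_to_vswr(rl):.2f}:1")
# 15 dB -> 1.43:1 and 23 dB -> 1.15:1, matching the examples in the text.
```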
Often used coaxial cable types
General data on some commonly used coaxial cables compared (most data from http://dct.draka.com.sg/coaxial_cables.htm, http://www.drakausa.com/pdfsDSC/pCOAX.pdf and http://users.viawest.net/~aloomis/coaxdat.htm):
Cable type: RG-6 | RG-59 B/U | RG-11 | RG-11 A/U | RG-12 A/U | RG-58 C/U | RG-213U | RG-62 A/U
Impedance (ohms): 75 | 75 | 75 | 75 | 75 | 50 | 50 | 93
Conductor material: Bare copper | Copper-plated steel | Bare copper | Tinned copper | Tinned copper | Tinned copper | Bare copper | Copper-plated steel
Conductor strands: 1 | 1 | 1 | 7 | 7 | 19 | 7 | 1
Conductor area (mm2): 0.95 | 0.58 | 1.63 | 0.40 | 0.40 | 0.18 | 0.75 | 0.64
Conductor diameter: 0.028" 0.023" 0.048" 0.035" 0.089" 0.025" 21AWG 23AWG 18AWG 20AWG 13AWG 22AWG
Insulation material: Foam PE | PE | Foam PE | PE | PE | PE | PE | PE (semi-solid)
Insulation diameter: 4.6 mm | 3.7 mm | 7.24 mm | 7.25 mm | 9.25 mm | 2.95 mm | 7.25 mm | 3.7 mm
Outer conductor: Aluminium polyester tape and tin copper braid | Bare copper wire braid | Aluminium polyester tape and tin copper braid | Bare copper wire braid | Bare copper wire braid | Tinned copper wire braid | Bare copper wire braid | Bare copper wire braid
Coverage: Foil 100%, braid 61% | 95% | Foil 100%, braid 61% | 95% | 95% | 95% | 97% | 95%
Outer sheath: PVC | PVC | PVC | PVC | PE | PVC | PVC | PVC
Outside diameter: 6.90 mm | 6.15 mm | 10.3 mm | 10.3 mm | 14.1 mm | 4.95 mm | 10.3 mm | 6.15 mm
Capacitance per meter: 67 pF 67 pF 57 pF 67 pF 67 pF 100 pF 100 pF
Capacitance per foot: 18.6 | 20.5 | 16.9 | 20.6 | 20.6 pF | 28.3 pF | 30.8 | 13.5 pF
Velocity: 78% | 66% | 78% | 66% | 66% | 66% | 66% | 83%
Weight (g/m): 59 56 108 140 220 38
Attenuation (dB/100 m):
50 MHz: 5.3 8 3.3 4.6 4.6 6.3
100 MHz: 8.5 | 12 | 4.9 | 7 | 7 | 16 | 7 | 10
200 MHz: 10 | 18 | 7.2 | 10 | 10 | 23 | 9 | 13
400 MHz: 12.5 | 24 | 10.5 | 14 | 14 | 33 | 14 | 17
500 MHz: 16.2 27.5 12.1 16 16 20
900 MHz: 21 39.5 17.1 24 24 28.5
NOTE: The comparison table above is for information only. There is no guarantee of the correctness of the data presented. When selecting cable for a certain application, check the cable data supplied by the cable manufacturer. There can be some differences in the performance and specifications of different cables from different manufacturers. For example, the insulation ratings of cables vary. Many PE insulated coax cables can handle several kilovolts, while some foam insulated coax cables can handle only 200 volts or so.
NOTE: Several of the cables mentioned above are available with foam insulation material. This changes the capacitance to a somewhat lower value and gives a higher velocity (typically around 0.80).
Cable type: RG-6 | RG-59 B/U | RG-11 | RG-11 A/U | RG-12 A/U | TELLU 13 | Tasker RGB-75
Impedance (ohms): 75 | 75 | 75 | 75 | 75 | 75 | 75
Impedance accuracy: +-2 ohms +-3 ohms +-2 ohms +-3%
Conductor material: Bare copper | Copper-plated steel | Bare copper | Tinned copper | Tinned copper | Bare copper | Bare copper
Conductor strands: 1 | 1 | 1 | 7 | 7 | 1 | 10
Conductor strand (mm2): 0.95 | 0.58 | 1.63 | 0.40 | 0.40 | 1 mm diameter | 0.10 mm diameter
Conductor resistance (ohm/km): 44 159 21 21 22 210
Insulation material: Foam PE PE Foam PE PE PE PE Foam PE
Insulation diameter: 4.6 mm 3.7 mm 7.24 mm 7.25 mm 9.25 mm
Outer conductor: Aluminium polyester tape and tin copper braid | Bare copper wire braid | Aluminium polyester tape and tin copper braid | Bare copper wire braid | Bare copper wire braid | Copper foil under bare copper braid | Tinned copper braid
Coverage: Foil 100% 95 % Foil 100% 95% 95% Foil ~95% braid 61% Braid 61% Braid 66%
Outer conductor resistance (ohm/km): 6.5 8.5 4 4 12 ~40
Outer sheath: PVC | PVC | PVC | PVC | PE | PVC (white) | PVC
Outside diameter: 6.90 mm | 6.15 mm | 10.3 mm | 10.3 mm | 14.1 mm | 7.0 mm | 2.8 mm
Capacitance per meter: 67 pF | 67 pF | 57 pF | 67 pF | 67 pF | 55 pF | ~85 pF
Capacitance per foot: 18.6 20.5 16.9 20.6 20.6 pF
Velocity: 78% | 66% | 78% | 66% | 66% | 80% | 66%
Screening factor: 80 dB
Typical voltage (max): 2000V 5000V 1500V
Weight (g/m): 59 56 108 140 220 58
Attenuation (dB/100 m):
5 MHz: 2.5 1.5
50 MHz: 5.3 | 8 | 3.3 | 4.6 | 4.6 | 4.7 | 19.5
100 MHz: 8.5 | 12 | 4.9 | 7 | 7 | 6.2 | 28.5
200 MHz: 10 | 18 | 7.2 | 10 | 10 | 8.6 | 35.6
400 MHz: 12.5 | 24 | 10.5 | 14 | 14 | 12.6 | 60.0
500 MHz: 16.2 | 27.5 | 12.1 | 16 | 16 | ~14 | ~70
900 MHz: 21 | 39.5 | 17.1 | 24 | 24 | 19.2 | 90.0
2150 MHz: 31.6
3000 MHz: 37.4
NOTE: The numbers with a ~ mark in front of them are approximations calculated and/or measured from cables or cable data. Those numbers are not from manufacturer literature. NOTE2: Several of the cables mentioned above are available in special versions with foam insulation material. This changes the capacitance to a somewhat lower value and gives a higher velocity (typically around 0.80).
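To put attenuation figures like those in the tables above into practice, the sketch below scales a dB-per-100-m figure to an arbitrary run length and converts the result to the fraction of power that survives; the 12 dB/100 m and 30 m numbers are arbitrary examples, not values for any specific cable.

```python
def cable_loss_db(att_db_per_100m: float, length_m: float) -> float:
    """Total loss of a run, assuming loss scales linearly with length in dB."""
    return att_db_per_100m * length_m / 100.0

loss = cable_loss_db(12.0, 30.0)        # e.g. ~12 dB/100 m at UHF, 30 m run (assumed)
power_fraction = 10 ** (-loss / 10)     # fraction of input power reaching the far end
print(f"loss = {loss:.1f} dB, {power_fraction:.0%} of the power delivered")
```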
General coaxial cable details
The dielectric of a coaxial cable serves but one purpose – to maintain physical support and a constant spacing between the inner conductor and the outer shield. In terms of efficiency, there is no better dielectric material than air. In most practical cables cable companies use a variety of hydrocarbon-based materials such as polystyrene, polypropylenes, polyolefins and other synthetics to maintain structural integrity.
Sometimes coaxial cables are also used for carrying low frequency signals, like audio signals or measurement device signals. In audio applications especially, the coaxial cable impedance does not matter much (it is a high frequency property of the cable). Generally coaxial cable has a certain amount of capacitance (50 pF/foot is typical) and a certain amount of inductance, but it has very little resistance.
General characteristics of cables:
- A typical 50 ohm coaxial cable is pretty much 30 pF per foot (this doesn't apply to miniature cables or big transmitter cables; check a cable catalogue for more details). 50 ohm coaxial cables are used in most radio applications, in coaxial Ethernet and in many instrumentation applications.
- A typical 75 ohm coaxial cable is about 20 pF per foot (this doesn't apply to miniature cables or big transmitter cables; check a cable catalogue for more details). 75 ohm cable is used for all video applications (baseband video, monitor cables, antenna networks, cable TV, CCTV, etc.), for digital audio (S/PDIF, coaxial AES/EBU) and for telecommunication applications (for example E1 coaxial cabling).
- A typical 93 ohm cable is around 13 pF per foot (this does not apply to special cables). This cable type is used for some special applications.
Please note that these are general statements. A specific 75 ohm cable could be 20 pF/ft. Another 75 ohm cable could be 16 pF/ft. There is no exact correlation between characteristic impedance and capacitance.
In general, a constant impedance cable (including connectors), when terminated at both ends with the correct load, represents a pure resistive loss. Thus, cable capacitance is immaterial for video and digital applications.
Typical coaxial cable constructions are:
- Flexible (Braided) Coaxial Cable is by far the most common type of closed transmission line because of its flexibility. It is a coaxial cable, meaning that both the signal and the ground conductors are on the same center axis. The outer conductor is made from fine braided wire, hence the name "braided coaxial cable". This type of cable is used in practically all applications requiring complete shielding of the center conductor. The effectiveness of the shielding depends upon the weave of the braid and the number of braid layers. One of the drawbacks of braided cable is that the shielding is not 100% effective, especially at higher frequencies. This is because the braided construction can permit small amounts of short wavelength (high frequency) energy to radiate. Normally this does not present a problem; however, if a higher degree of shielding is required, semirigid coaxial cable is recommended. In some high frequency flexible coaxial cables the outer shield consists of a normal braid and an extra aluminium foil shield to give better high frequency shielding.
- Semirigid Coaxial Cable uses a solid tubular outer conductor, so that all the RF energy is contained within the cable. For applications using frequencies higher than 30 GHz a miniature semirigid cable is recommended.
- Ribbon coaxial cable combines the advantages of both ribbon cable and coaxial cable. It consists of many tiny coaxial cables placed side by side to form a flat cable. Each individual coaxial cable consists of the signal conductor, dielectric, a foil shield and a drain wire which is in continuous contact with the foil. The entire assembly is then covered with an outer insulating jacket. The major advantage of this cable is the speed and ease with which it can be mass terminated using the insulation-displacement technique.
Often you will hear the term shielded cable. This is very similar to coaxial cable except the spacing between center conductor and shield is not carefully controlled during manufacture, resulting in non-constant impedance.
If the cable impedance is critical enough to worry about correctly choosing between 50 and 75 Ohms, then the capacitance will not matter. The reason this is so is that the cable will be either load terminated or source terminated, or both, and the distributed capacitance of the cable combines with its distributed inductance to form its impedance.
A cable with a matched termination resistance at the far end appears in all respects resistive, no matter whether it is an inch long or a mile. The capacitance is not relevant except insofar as it affects the impedance, which is already accounted for. In fact, there is no electrical measurement you could make, at just the end of the cable, that could distinguish a 75 Ohm (ideal) cable with a 75 Ohm load on the far end from that same load without any intervening cable. Given that the line is terminated with a proper 75 ohm load (and if it’s not, it damn well should be!), the load is 75 ohms resistive, and the lumped capacitance of the cable is irrelevant. The same applies to cables of other impedances when they are terminated in their nominal impedance.
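As a rough illustration of this point, the sketch below evaluates the standard lossless transmission-line input-impedance formula for a terminated cable. The 12 m length, 100 MHz frequency and 0.66 velocity factor are arbitrary assumptions; the point is only that with a matched load the input impedance equals Z0 regardless of length, while a mismatched load makes it length- and frequency-dependent.

```python
# A minimal sketch of why a properly terminated cable "looks" purely resistive,
# using the standard lossless-line input-impedance formula:
#   Zin = Z0 * (ZL + j*Z0*tan(beta*l)) / (Z0 + j*ZL*tan(beta*l))
import math

def input_impedance(z0, z_load, length_m, freq_hz, velocity_factor=0.66):
    """Input impedance of a lossless line with the given termination."""
    v = velocity_factor * 299_792_458.0   # propagation speed, m/s
    beta = 2 * math.pi * freq_hz / v      # phase constant, rad/m
    t = math.tan(beta * length_m)
    return z0 * (z_load + 1j * z0 * t) / (z0 + 1j * z_load * t)

# Matched 75 ohm load: the cable "disappears", Zin is 75 ohms at any length.
print(input_impedance(75, 75, 12.0, 100e6))    # (75+0j)
# Mismatched load: Zin now depends on cable length and frequency.
print(input_impedance(75, 50, 12.0, 100e6))    # a complex value, not 50 ohms
```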
There is an effect whereby the characteristic impedance of a cable changes with frequency. If this frequency-dependent change in impedance is large enough, the cable will be impedance-matched to the load and source at some frequencies and mismatched at others. Characteristic impedance is not the only detail that matters, however: other effects can also cause loss of detail in fast-risetime signals. There are frequency-dependent losses in the cable, and controlled-impedance cables also exhibit dispersion, where different frequencies travel at slightly different velocities and with slightly different loss.
In some communications applications a pair of 50 ohm coaxial cables is used to transmit a differential signal on two non-interacting pieces of 50-ohm coax. The total voltage between the two coaxial conductors is double the single-ended voltage, but the net current in each is the same, so the differential impedance between two coax cables used in a differential configuration would be 100 ohms. As long as the signal paths don’t interact, the differential impedance is always precisely twice the single-ended impedance of either path.
RF coax(ial) connectors are a vital link in the system which uses coaxial cables and high frequency signals. Coax connectors are often used to interface two units such as the antenna to a transmission line, a receiver or a transmitter. The proper choice of a coax connector will facilitate this interface.
Coax connectors come in many impedances, sizes, shapes and finishes. There are also female and male versions of each. As a consequence, there are thousands of models and variations, each with its advantages and disadvantages. Coax connectors are usually referred to by series designations. Fortunately there are only about a dozen or so groupings or series designations, each with its own important characteristics. The most popular RF coax connector series, in no particular order, are UHF, N, BNC, TNC, SMA, 7-16 DIN and F. Here is a quick introduction to those connector types:
- “UHF” connector: The “UHF” connector is the old industry standby for frequencies above 50 MHz (during World War II, 100 MHz was considered UHF). The UHF connector is primarily an inexpensive all purpose screw on type that is not truly 50 Ohms. Therefore, it’s primarily used below 300 MHz. Power handling of this connector is 500 Watts through 300 MHz. The frequency range is 0-300 MHz.
- “N” connectors: “N” connectors were developed at Bell Labs soon after World War II so it is one of the oldest high performance coax connectors. It has good VSWR and low loss through 11 GHz. Power handling of this connector is 300 Watts through 1 GHz. The frequency range is 0-11 GHz.
- “BNC” connector: “BNC” connectors have a bayonet-lock interface which is suitable for uses where numerous quick connect/disconnect insertions are required. BNC connectors are for example used in various laboratory instruments and radio equipment. The BNC connector has a much lower cutoff frequency and higher loss than the N connector. BNC connectors are commonly available in 50 ohm and 75 ohm versions. Power handling of this connector is 80 Watts at 1 GHz. The frequency range is 0-4 GHz.
- “TNC” connectors are an improved version of the BNC with a threaded interface. Power handling of this connector is 100 Watts at 1 GHz. The frequency range is 0-11 GHz.
- “SMA” connector: “SMA” or miniature connectors became available in the mid 1960’s. They are primarily designed for semi-rigid small diameter (0.141″ OD and less) metal jacketed cable. Power handling of this connector is 100 Watts at 1 GHz. The frequency range is 0-18 GHz.
- “7-16 DIN” connector: “7-16 DIN” connectors are a relatively recent European development. The designation gives the size in millimetres (7 mm inner contact, 16 mm outer conductor) according to the DIN specification. This fairly expensive connector series was primarily designed for high power applications where many devices are co-located (such as cellular sites). Power handling of this connector is 2500 Watts at 1 GHz. The frequency range is 0-7.5 GHz.
- “F” connector: “F” connectors were primarily designed for very low cost, high volume 75 Ohm applications such as TV and CATV. In this connector the center wire of the coax becomes the center pin of the connector.
- “IEC antenna connector”: This is a very low-cost high volume 75 ohm connector used for TV and radio antenna connections around Europe.
There are also some special connectors and special variations of connectors used for particular applications. For example, the FCC has required that suppliers of RF LANs (local area networks) use an RF interface that cannot be mated with the standard RF connector series (the idea is to prevent connecting higher-gain antennas to those devices). As a result, several so-called “reverse polarity connectors” have been designed. The reverse polarity TNC is one of the most popular, where the threads are left-hand instead of the conventional right-hand type.
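For quick reference, the snippet below simply collects the frequency and power figures quoted above into a lookup table. These are the same ballpark numbers given in the text, not datasheet specifications, and the helper function is only a hypothetical convenience.

```python
# Ballpark connector figures restated from the text above (not datasheet specs):
# quoted upper frequency in GHz and approximate power handling in watts.
CONNECTORS = {
    "UHF":      {"max_ghz": 0.3,  "power_w": 500,  "power_at": "through 300 MHz"},
    "N":        {"max_ghz": 11.0, "power_w": 300,  "power_at": "through 1 GHz"},
    "BNC":      {"max_ghz": 4.0,  "power_w": 80,   "power_at": "at 1 GHz"},
    "TNC":      {"max_ghz": 11.0, "power_w": 100,  "power_at": "at 1 GHz"},
    "SMA":      {"max_ghz": 18.0, "power_w": 100,  "power_at": "at 1 GHz"},
    "7-16 DIN": {"max_ghz": 7.5,  "power_w": 2500, "power_at": "at 1 GHz"},
}

def connectors_for(freq_ghz):
    """Return connector series whose quoted upper frequency covers freq_ghz."""
    return [name for name, spec in CONNECTORS.items() if spec["max_ghz"] >= freq_ghz]

print(connectors_for(2.4))   # ['N', 'BNC', 'TNC', 'SMA', '7-16 DIN']
```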
Note 1: This section deals with linear momentum. Angular momentum (the momentum of turning or spinning objects) will be covered in another section.
Note 2: In discussions about momentum and collisions, the concept of kinetic energy often comes up, as it will on this page. You can still learn a lot about momentum without knowing anything about KE, but once you have learned about kinetic energy, you might want to revisit this section.
Momentum is very easily defined: it's mass × velocity. But why do we need it? Consider this:
Would you rather be hit in the face by a piano moving at 10 mi./hour or by a feather moving at the same speed?
Momentum couples velocity to mass to give us a better gauge of how much of an impact a moving object can deliver in a collision.
The momentum of an object traveling in a straight line (linear momentum) is given the symbol p, and the definition p = mv. The SI units of momentum are mass (Kg) × velocity (m/s) = Kg·m/s.
Kinetic energy, KE = ½mv², is also a function of mass and velocity, but it is a scalar quantity, while momentum is a vector. We'll see that this makes momentum quite useful as a property of motion.
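A quick numerical comparison makes the piano-versus-feather point. The masses below are assumptions chosen only for illustration (the text does not give them); the speed is the 10 mi/hr from the example.

```python
# Momentum and kinetic energy for a piano and a feather at the same speed.
# The masses are assumed values for illustration only.
speed = 10 * 1609.344 / 3600        # 10 mi/hr in m/s (~4.47 m/s)

piano_mass = 300.0                  # kg, assumed
feather_mass = 0.005                # kg, assumed

for name, m in [("piano", piano_mass), ("feather", feather_mass)]:
    p = m * speed                   # momentum, kg·m/s
    ke = 0.5 * m * speed**2         # kinetic energy, J
    print(f"{name}: p = {p:.3f} kg·m/s, KE = {ke:.3f} J")
```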
SI stands for Le Système International d'Unités (French), or International System of Units.
It is a standardized system of physical units based on the meter (m), kilogram (Kg), second (s), ampere (A), Kelvin (K), candela (cd), and mole (mol), along with a set of prefixes to indicate multiplication or division by a power of ten.
Momentum, p = mv is mass (a scalar) multiplied by the velocity, a vector, therefore it too is a vector. Remember that multiplying a vector by a scalar can change its length and units, but not its direction.
The only things that matter about any vector are its length and direction. The length, or magnitude, of a momentum vector is how much momentum there is, and its direction is the direction of the momentum, or its velocity component.
Remember that vectors, like the two-dimensional ones on the right, can be moved around at will without loss of meaning, and that we add them head-to-tail, as shown here.
You can think of a momentum vector as a velocity vector multiplied by a scalar mass.
A scalar is a number, which has no direction implied, like mass, temperature or speed. Velocity is a vector that has both speed and direction.
The total amount of momentum in a system is always conserved. That is, momentum is never lost or gained in a closed system.
For example, consider the drawing of billiard balls below. The system would be the balls and the table. The white ball is hit with momentum P into the stationary red balls, all packed together, touching, and initially at rest. We'll assume that the masses of all balls are the same, just to make things easy. Because they are not moving, the total momentum of all of the red balls is zero, therefore the momentum of the entire system is just the momentum of the white ball, which is moving.
Shortly after the collision of the white ball with the lead red ball, the picture might look something like this (below). The white ball has lost most of its momentum, and each red ball has picked up a part of it. Each has scattered in a different direction, but the vector sum of all 11 momentum vectors, p1, p2, ..., p11, is equal to the initial momentum vector P.
All of the momentum is still there after the collision, it's just been redistributed.
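A small sketch of that bookkeeping: the eleven post-collision momentum vectors below are invented numbers, constructed so that their vector sum equals the white ball's initial momentum P, which is exactly what conservation of momentum requires.

```python
# Illustration of "redistributed but conserved": build 11 post-collision momentum
# vectors whose vector sum equals the white ball's initial momentum P.
# The individual values are made up; only the vector bookkeeping matters.
import random

random.seed(1)
P = (4.0, 0.0)    # initial momentum of the white ball, kg·m/s

parts = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(10)]
# choose the 11th vector so that everything sums back to P
last = (P[0] - sum(px for px, _ in parts), P[1] - sum(py for _, py in parts))
parts.append(last)

total = (sum(px for px, _ in parts), sum(py for _, py in parts))
print(round(total[0], 6), round(total[1], 6))   # 4.0 0.0, same as before (up to rounding)
```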
In science, to say that a quantity is conserved means that, in a closed system, or in the universe, the amount of that quantity never changes, though it might get spread around in different ways.
Here's another illustration of what conservation of momentum means in terms of vectors. Consider the pink ball with momentum p1 as it simultaneously strikes the two green balls at rest (p = 0).
We denote the length of vector p1 with absolute-value bars, | |. In the context of vectors, these always mean "length of."
Conservation of momentum says that the vector sum of p2 and p3 after the collision must equal p1 before it: together the green balls carry off exactly the momentum that the pink ball brought in, no more and no less.
The green balls may end up with less kinetic energy than the pink ball had, because some of the collision's energy can be converted to other forms, like sound or heat, but the momentum balance always holds. In our ideal system,
p1 = p2 + p3 (and, when all three momenta lie along one line, |p1| = |p2| + |p3|)
There are two basic types of collisions between objects, elastic collisions (also known as ideal collisions) and inelastic collisions.
In an elastic collision, like the one illustrated below, two objects (the pink and green balls) approach each other with a certain momentum. We'll assume, for simplicity, identical masses and identical velocities, except for direction, so the momenta are the same: m1v1 = m2v2.
In all collisions, momentum is conserved. That is, the total amount of momentum present in the system (m1v1 + m2v2 here) is still present after the collision, except that it might be distributed a little differently.
In this collision, the balls collide and instantaneously reverse direction to head the other way. In an elastic collision, there is no deformation of the objects, so there's no perturbation of the atoms and molecules within, so there's no heat radiated away.
Collisions of certain real objects, such as billiard balls, are very nearly elastic. The atoms and molecules of certain gases collide pretty much elastically, too, which is a big help in calculating their properties and behavior using the ideal gas law.
In an inelastic collision, momentum is still conserved in just the way it was for an elastic collision, but kinetic energy is not. Consider the collision in the drawing:
The situation is the same, but now the balls may deform as they collide, which can in turn heat them up through atomic and molecular motion. Sound, heat and even light might be given off, carrying energy away to the surroundings. This inelasticity is typical of real collisions. While energy is always conserved in the universe, it is not conserved within this two-ball system; some is lost from the balls to the surroundings.
In an elastic collision, all kinetic energy remains with the colliding bodies.
In an inelastic collision, some kinetic energy is lost to the surroundings in other forms, such as heat and sound.
In a perfectly inelastic collision, two objects collide, stick together and move as one object thereafter.
Let's consider a ball rolling into an immovable object, like a wall. If we let the wall be very massive compared to the ball, then the collision won't cause it to move. The wall won't have any momentum, either before or after the collision.
We'll let capital letters M & V stand for the mass and velocity of the wall, and lower case, m, v1 & v2 stand for the mass and velocity of the ball.
The incoming velocity of the ball is v1, and the outgoing velocity is v2. Our job here is to find v2 in terms of v1.
We begin with the total momentum of the system, before and after. Remember that these must be equal. Before the collision, the wall is stationary, so it has no momentum, so all of the momentum of the system is on the left side of this equation. On the right is the momentum after the collision. We'll allow for movement of the wall, then look at that later:
mv1 = mv2 + MV
If we divide both sides by m, we get
v1 = v2 + (M/m)V
Now let's put both velocities together on the left:
v1 - v2 = (M/m)V
Now we're going to want to compare this momentum-balance equation to the kinetic energy one, so let's square both sides and save this equation with a ( * ) for now:
(v1 - v2)² = (M/m)²V²   ( * )
Now let's consider the kinetic energy of the system. It's the same process. The KE before the collision (left side of this equation) just involves the ball; the wall isn't moving. On the right, after the collision, we allow for movement of the wall.
½mv1² = ½mv2² + ½MV²
If we multiply through by 2 we get
mv1² = mv2² + MV²
and here again, we can divide by m:
v1² = v2² + (M/m)V²
Moving v2² to the left, we get:
v1² - v2² = (M/m)V²
Now let's multiply both sides of that equation by M/m so that we can line it up with equation ( * ):
(M/m)(v1² - v2²) = (M/m)²V²
Now we have two equations containing (M/m)²V², so we can hook those up using the transitive property:
(v1 - v2)² = (M/m)(v1² - v2²)
If we multiply both sides by m/M, we get
v1² - v2² = (m/M)(v1 - v2)²
Now let's do something interesting. Let's ask what happens as we make the mass of the wall, M, infinitely large. The limit ("lim") notation below is used for that. This statement says, "in the limit where M becomes infinitely large, the expression goes to zero."
lim as M → ∞ of (m/M)(v1 - v2)² = 0
That is, as M gets huge, m/M gets very small, and the term on the right side of our equation vanishes, so we have
v1² - v2² = 0
That gives us
v1² = v2²
If we take the square root of both sides we have
v1 = ±v2
Now we know that the ball isn't going through the wall, so v1 = - v2.
This means that in a collision of a moving object with an immovable object, all of the momentum remains in the moving object and its velocity is just reversed.
The scenario is just a bit more complicated if the ball hits the wall at a non 90˚ angle, of course, but not much more complicated. We'll tackle that later.
Of course, this is only true for an elastic collision, where all of the kinetic energy is conserved. In a real collision, we always lose some energy to the generation of sound or heat.
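A quick numerical check of that limit, using the standard closed-form result for a one-dimensional elastic collision with a stationary target. The ball's mass and speed below are arbitrary assumptions.

```python
# In an elastic collision with an initially stationary target of mass M,
# the standard result is
#   v1' = (m - M)/(m + M) * v1,    V' = 2m/(m + M) * v1.
# As M grows, v1' -> -v1 (the ball just bounces back) and V' -> 0.
def elastic_bounce(m, M, v1):
    v1_after = (m - M) / (m + M) * v1
    V_after = 2 * m / (m + M) * v1
    return v1_after, V_after

m, v1 = 0.5, 3.0                       # a 0.5 kg ball at 3 m/s (illustrative)
for M in (1.0, 100.0, 1e6, 1e12):      # heavier and heavier "walls"
    print(M, elastic_bounce(m, M, v1))
# v1_after approaches -3.0 m/s; the wall's velocity approaches 0.
```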
If a = c and b = c,
then a = b.
Consider the setup below. If we can contrive a way to place a small explosive between two balls of equal mass at the center of a track, with bumpers on the ends to ensure as close as possible to elastic collisions, the two balls should bounce off either end with the same momentum, meet back in the middle and stop there.
This experiment (and it can be done as an experiment on an air-track, a track with very little dynamic friction) is a very good one for investigating conservation of momentum.
Think about it for a minute. At the beginning, the momentum of the system is zero. The law of conservation of momentum tells us that the momentum of the system must remain zero. Therefore the velocities of the two balls after the explosion between them must be equal in magnitude but opposite in direction (the vector velocities add to zero). After the bounce, the two balls collide with equal momenta in opposite directions, so the momenta still add to zero; if the balls stick together in that final collision, everything stops dead in the middle, and the total momentum is zero throughout.
At all times, p_pink = -p_green
Fireworks are a great example of conservation of momentum. The symmetry of fireworks explosions shows that momentum is conserved.
To achieve a pattern like this one, the explosion of the colored fireworks charges is timed so that it occurs right at the top of the flight of the fireworks package, where the velocity of the shell is zero. At that point the momentum of the system is roughly zero, so the explosion has spherical symmetry, in which all of the 3-D momentum vectors must add to zero.
That means there must be as many colored streaks to the right as to the left, as many up as down, and so on.
In the early days of rocketry, many believed that a rocket couldn't move in space because there was nothing for the rocket exhaust to "push against" in the vacuum of space. But conservation of momentum won out, and it turns out that rockets do just fine with nothing to push against.
Consider the picture below. The top figure shows a stationary rocket. The velocity of the rocket is zero, so its momentum is zero. No hot gas molecules are being ejected from the nozzle at the back, so there is no momentum there.
Now let's ignite the fuel, which causes the ejection of hot gas molecules to the left at very high velocities.
While the momentum of each molecule or atom is very small, there are a very great number of them, adding up to the left-pushing momentum
p_engine = mv1 + mv2 + mv3 + ...
Now conservation of momentum says that the total momentum of this system (rocket plus gases) must remain zero, so there must be an equal momentum of the rocket toward the right, with
p_rocket = -p_engine
Here's an interesting thing: when it comes to real rockets, notice that as the fuel is ejected from the engine, the mass of the rocket actually decreases, so each additional bit of momentum transferred to the rocket produces a larger change in its velocity. So as the fuel tank empties, the forward speed of the rocket climbs even faster.
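Here is a toy bookkeeping sketch of that recoil. The parcel mass, exhaust speed and rocket mass are invented for illustration, and the calculation is a single-frame approximation; a real rocket, whose exhaust speed in the ground frame changes as the rocket accelerates, needs the full rocket equation.

```python
# Toy sketch of rocket recoil: each ejected gas "parcel" carries momentum to
# the left; conservation requires the rocket to carry the equal and opposite
# momentum to the right. All numbers below are assumed, not from the text.
parcel_mass = 0.1          # kg of exhaust per parcel (assumed)
exhaust_speed = -2000.0    # m/s, negative = leftward (assumed)
n_parcels = 50

p_exhaust = sum(parcel_mass * exhaust_speed for _ in range(n_parcels))
p_rocket = -p_exhaust                             # total momentum stays zero

rocket_mass = 1000.0 - n_parcels * parcel_mass    # rocket has lost the ejected mass
print(p_exhaust, p_rocket, p_rocket / rocket_mass)   # -10000.0, 10000.0, ~10.05 m/s
```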
In a perfectly inelastic collision, two objects collide along a line and they stick together, effectively forming one object. The classic example of this is the coupling of two train cars, like this:
Each train car has its own momentum, p1 = m1v1 and p2 = m2v2. The initial velocity of the second car is zero, so its momentum is zero. That means that all of the momentum of the system is in the first car.
After the collision, when the cars couple, they are effectively a single car with mass m1 + m2, and a new velocity. That velocity has to be smaller than v1 because the total momentum is conserved but the mass of the moving object has increased.
Let's pause here to derive a new way to define kinetic energy, in terms of momentum. We already have KE = ½mv². We'll begin with the definition of momentum:
p = mv
Now square both sides to get
p² = m²v²
The right side is looking like ½mv², so let's divide out one of the masses:
p²/m = mv²
And finally, if we divide both sides by 2, we have:
p²/2m = ½mv²
So we have a new formula for the kinetic energy, one that comes in handy from time to time:
KE = p²/2m
The kinetic energy of a moving object can be calculated in two ways: KE = ½mv² or KE = p²/2m.
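A one-line check that the two forms agree, with an arbitrary mass and velocity chosen only for illustration:

```python
# Check that the two kinetic-energy formulas agree: KE = 1/2 m v^2 = p^2 / (2m).
m, v = 3.0, 4.0                 # kg, m/s (arbitrary illustrative values)
p = m * v                       # momentum, kg·m/s
print(0.5 * m * v**2)           # 24.0 J
print(p**2 / (2 * m))           # 24.0 J
```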
The velocity vectors and masses are shown. The green ball has a negative velocity. It doesn't matter which direction we choose as negative, just that they're opposite. We can calculate momentum vectors:
Now the total momentum is the sum of these oppositely-signed vectors:
The law of conservation of momentum says that the momentum of this system must remain forever Ptot = -1300 Kg·m/s. We are given the momentum of the pink ball (cart) after the collision, P1 = (130 Kg)(-13.5 m/s) = -1755 Kg·m/s:
These two momentum vectors must sum to the total system momentum. Plugging in P1 = -1755 Kg·m/s and rearranging, we can find P2:
Now P2 = m2v2, so
from which we can solve for the velocity of the second cart.
The second cart rolls away much more slowly, but that's necessary for momentum to be conserved. In reality, the collisions aren't completely elastic, and the post-collision velocities would be lower.
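The momentum bookkeeping for this example can be reproduced in a few lines. The total momentum, the first cart's mass and its post-collision velocity come from the text above; the second cart's mass is not reproduced in the surviving text, so the value used below is a placeholder.

```python
# Reproducing the bookkeeping for the cart example above.
p_total = -1300.0            # kg·m/s, conserved before and after (from the text)
m1, v1_after = 130.0, -13.5  # first cart, from the text
p1_after = m1 * v1_after     # -1755 kg·m/s
p2_after = p_total - p1_after
print(p2_after)              # 455.0 kg·m/s

m2 = 50.0                    # kg -- placeholder, NOT given in the text
print(p2_after / m2)         # second cart's velocity under that assumption (~9.1 m/s)
```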
The momenta of each ball are calculated like this:
So the total momentum of the system before the collision (a vector sum) is
That momentum must remain constant in this elastic collision.
The kinetic energy of the first ball is
and the second ball:
The total KE of the system is a sum of positive scalars:
Now after the collision, it must be true that the momenta will sum to 400 Kg·m/s. I'll write the mass values in from here on:
And the total kinetic energy of this elastic collision must also remain the same, so we have:
This we can reduce a bit to
Now we have two equations, (1) and (2), and two unknowns, the velocities v1 and v2. We can solve for v2 in (1) and simplify to get
Now we can plug that value of v2 into (2):
To simplify and solve for v1, let's first square that denominator and move it outside the parentheses:
That fraction reduces easily:
Now let's multiply through by 3/5 because the coefficients are divisible by 5 and it will clear the 5/3:
Now expand the binomial (20-5v1)2 :
Gathering terms gives a solvable quadratic:
We can reduce again by dividing by 5:
Now let's just complete the square to solve this exactly. Dividing by 11 gives:
Adding the square of 1/2 the coefficient of v1 to both sides, we get
Identifying the perfect square on the left and getting a common denominator on the right gives
Now we can take the square root of both sides and move the 20/11 to the right to isolate v1 to find two possible solutions.
We'll rule out the first solution because it's physically impossible (see the diagram above!). So the final velocity of the pink ball (100 Kg) is -6.26 m/s. Plugging that into our earlier expression for v2 gives:
Now let's use those velocities to check whether we do indeed end up with the same momentum as before the collision:
... and we do. The final picture looks like this:
Whew! That's a lot of work, but very satisfying!
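For readers who want to re-run this kind of problem without the algebra, the sketch below solves a one-dimensional elastic collision directly from conservation of momentum and kinetic energy, using the standard closed-form result. The masses and initial velocities are illustrative assumptions, not the numbers from the worked example above.

```python
# Generic 1-D elastic collision: conservation of momentum and kinetic energy
# give the standard closed-form result below.
def elastic_1d(m1, u1, m2, u2):
    v1 = ((m1 - m2) * u1 + 2 * m2 * u2) / (m1 + m2)
    v2 = ((m2 - m1) * u2 + 2 * m1 * u1) / (m1 + m2)
    return v1, v2

m1, u1 = 3.0, 2.0      # kg, m/s (assumed)
m2, u2 = 1.0, -2.0     # kg, m/s (assumed)
v1, v2 = elastic_1d(m1, u1, m2, u2)

# sanity checks: total momentum and total kinetic energy are both unchanged
print(m1*u1 + m2*u2, m1*v1 + m2*v2)                      # 4.0 4.0
print(0.5*m1*u1**2 + 0.5*m2*u2**2,
      0.5*m1*v1**2 + 0.5*m2*v2**2)                       # 8.0 8.0
```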
The first thing to do is to calculate the x- and y-components of the velocity vector of the pink-ball. We'll call the up and right directions positive and the down and left directions negative. It's not crucial which is which, just that we remain consistent throughout our work:
The x-component of the momentum is:
and the y-component of the momentum is:
(Yup, easy because of the 1Kg mass). The green ball has no vertical velocity, and therefore no vertical momentum, so its x- and y-momenta are
Now we can calculate the total momentum in the x- and y-directions:
The next step is to look at what we're given about what happens after the collision, keeping in mind that the total momentum in the x- and y-directions must remain constant. Here's the picture:
First we calculate the x- and y-components of the green ball velocity after the collision:
... and convert those to momenta by multiplying by the mass of the green ball:
Finally, the x- and y-momenta of both balls must sum to the total momentum in each of those directions before the collision (conservation of momentum), so we have:
Rearrangement gives us the momentum of the pink ball in the x-direction:
Likewise we can rearrange
Now let's check everything for consistency. The total momentum in the x-direction after the collision is p1x + p2x = -0.744 - 0.39 = -1.134 Kg·m/s. The total momentum in the y-direction after the collision is p1y + p2y = 1.45 - 0.95 = 0.5 Kg·m/s. These are the same x- and y-momenta as before the collision.
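That consistency check is easy to verify numerically; the four component values below are the ones stated above.

```python
# Verify the component bookkeeping: the post-collision x- and y-momenta of the
# two balls must add back up to the pre-collision totals quoted in the text.
p1 = (-0.744, 1.45)    # pink ball after the collision, kg·m/s
p2 = (-0.39, -0.95)    # green ball after the collision, kg·m/s

total_x = p1[0] + p2[0]
total_y = p1[1] + p2[1]
print(round(total_x, 3), round(total_y, 3))   # -1.134 0.5, same as before the collision
```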
Here's a picture of the whole collision, drawn roughly to scale:
Now that's the momentum of the system after the collision, too.
The total mass of both cars together is m1 + m2 = 60,000 Kg, so the velocity of that double-car is calculated by rearranging the momentum formula,
and plugging in the numbers:
If you think about it, it makes sense that the velocity is halved in this case.
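A short sketch of that calculation. The 60,000 Kg total mass is from the text; the split into two equal 30,000 Kg cars and the 4 m/s initial speed are assumptions chosen to be consistent with the statement that the velocity is halved.

```python
# Perfectly inelastic (coupling) collision: the cars stick together, so
#   v_final = (m1*v1 + m2*v2) / (m1 + m2).
m1, v1 = 30_000.0, 4.0     # moving car (mass and speed assumed)
m2, v2 = 30_000.0, 0.0     # stationary car (mass assumed)

p_before = m1 * v1 + m2 * v2
v_final = p_before / (m1 + m2)
print(v_final)             # 2.0 m/s -- half of v1, as expected for equal masses
```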
Biodiversity is the diversity of life in all its forms across the planet. The National State of the Environment Report made the following statements in defining biodiversity (Slattery, et al., 2003):
“Biodiversity is the variety of all forms of life - the different plants, animals and microorganisms, the genes they contain and the ecosystems of which they form a part”.
It ranges from large scale ecosystems to the different species of flora and fauna, and to genetic differences between individuals of the same species. These three levels work together to create the complexity of life on earth. Accordingly, biodiversity is conventionally partitioned into three components: (1) genetic diversity, (2) species diversity and (3) diversity of ecosystems (Slattery, et al., 2003).
Genetic diversity is normally considered to be the range of genetic information present within a species. Genetic information is passed on to successive generations either directly in asexual reproduction or by mixing of genetic material from both parents in sexual reproduction. Genetic diversity increases the diversity of form and behavior within a species, which provides it with greater capacity to cope with changing environmental conditions or make use of completely different environments. Genetic diversity may be expressed in the form of genetic variation within individuals, within populations and/or between populations. Genetic diversity can decline if populations are lost or if the total population of the species is drastically reduced. If populations become fragmented into very small sub-populations, inbreeding depression may cause the genetic diversity to decline further (Smith, et. al., 2000).
The number of species present in a location depends on the type of ecosystem. Subtropical rainforests usually contain over 100 vascular plant species in a hectare, including 30-40 tree species, whereas a hectare of cool temperate rainforest may contain only 5-10 tree species. Grassy woodlands and heaths may also be diverse with more than 100 species in a hectare, although sometimes these vegetation types contain relatively few species. However, ecosystems with relatively few species, such as temperate rainforests, can be important for species diversity if those species are unique. The diversity of various animal species assemblages varies even more than that of plants, and some groups also show distinct seasonal variation (Smith, et. al., 2000) (Figure 1.).
Diversity of ecosystems
The species in a given area interact with each other and with their environment to form complex networks known as ecosystems. These differ from place to place, thus creating ecosystem diversity. Each ecosystem differs from all others because it contains a unique combination of species (and therefore genes) and because these species interact with each other and with each environment in distinctive ways. Biodiversity is not static but is constantly changing. It is increased by genetic change and evolutionary processes and reduced by processes such as habitat degradation, a decline in flora and fauna, and the extinction of species. Diversity in all its forms (genetic, species and ecosystem) is a critical factor in the resilience of an area and its ability to respond to significant changes such as fire, flood, climate change and human impacts. Diversity is the key to maintaining viable populations of our native flora and fauna (Environmental Protection Authority, 2000) (Figure 2).
Because ecosystem function depends on all three of these components, loss of biodiversity can result in reduction or loss of ecosystem function. Conservation of biodiversity is therefore very important. There are four main reasons for preserving biodiversity: maintaining ecosystem processes, ethics, aesthetics and culture, and economics (Environmental Protection Authority, 2000).
Biodiversity has two key aspects:
its functional value at the ecosystem level; and
its intrinsic value at the individual species, species assemblages and genetic levels.
The functional value is derived from the parts played by the species assemblages in supporting ecosystem processes and is expressed through the kinds of plant and animal assemblages occurring in various parts of the landscape on different soil types. In addressing this, matters requiring consideration include (Environmental Protection Authority, 2000):
differences in composition pre and post disturbance; and
the ecosystem processes, linkages and how they are supported.
The intrinsic values relate to the actual species and species associations. Two species assemblages may have different intrinsic values but may still have the same functional value in terms of the part they play in maintaining ecosystem/ecological processes (Environmental Protection Authority, 2000).
Biodiversity provides the critical processes that make life possible, and that are often taken for granted. Healthy, functioning ecosystems are necessary to maintain the quality of the atmosphere, and to maintain and regulate the climate, fresh water, soil formation, cycling of nutrients and disposal of wastes (often referred to as ecosystem services). Biodiversity is essential for controlling pest plants, animals and diseases, for pollinating crops and for providing food, clothing and many kinds of raw materials (Environmental Protection Authority, 2000).
Biodiversity comprises the living "pieces" of the earth that sustain us, and they should not be carelessly discarded. Experience suggests that the first rule of intelligent tinkering is to keep all of the pieces. Because of the interconnected nature of ecosystems, the loss or addition of one species has the potential to change an entire ecosystem.
High levels of biodiversity are associated with greater ecosystem stability. The more diverse a system is, the better able it is to cope with environmental stressors, such as floods or drought. Biodiversity gives us choices, options and flexibility to help us cope with variability, including long-term habitat changes.
When a system is simplified, such as having only one species of crop or type of grass, it increases the odds that environmental stressors will have a more pronounced impact or that a disease or pest will be able to spread rapidly. Animal and plant populations with low genetic diversity are much more susceptible to stress and vulnerable to extinction.
We all rely on the tremendous variety of species, genes and ecosystems in our world and the many benefits we receive from them - they deserve our respect and conservation.
The Importance of Biodiversity
Potential benefits of biodiversity include the health of ecosystems (their ability to maintain and regulate atmospheric quality, climate, fresh water, marine productivity, soil formation, nutrient cycling and waste disposal), the resilience of ecosystems (their ability to respond to and recover from external shocks such as drought, flood and climate change), and cultural values (Figure 4). Biodiversity is extremely important to people and the health of ecosystems. A few of the reasons are (National Wildlife Federation, 2013):
Biodiversity allows us to live healthy and happy lives. It provides us with an array of foods and materials and it contributes to the economy. Without a diversity of pollinators, plants, and soils, our supermarkets would have a lot less produce.
Most medical discoveries to cure diseases and lengthen life spans were made because of research into plant and animal biology and genetics. Every time a species goes extinct or genetic diversity is lost, we will never know whether research would have given us a new vaccine or drug.
Biodiversity is an important part of ecological services that make life livable on Earth. They include everything from cleaning water and absorbing chemicals, which wetlands do, to providing oxygen for us to breathe—one of the many things that plants do for people.
Biodiversity allows for ecosystems to adjust to disturbances like extreme fires and floods. If a reptile species goes extinct, a forest with 20 other reptiles is likely to adapt better than another forest with only one reptile.
Genetic diversity prevents diseases and helps species adjust to changes in their environment.
Simply for the wonder of it all. There are few things as beautiful and inspiring as the diversity of life that exists on Earth.
Threats to Biodiversity
Extinction is a natural part of life on Earth. Over the history of the planet, most of the species that ever existed evolved and then gradually went extinct. Species go extinct because of natural shifts in the environment that take place over long periods of time, such as ice ages.
Today, species are going extinct at an accelerated and dangerous rate because of environmental changes caused by human activities. Some of these activities have direct effects on species and ecosystems, such as (National Wildlife Federation, 2013):
Habitat loss/ degradation
When an ecosystem has been dramatically changed by human activities—such as agriculture, oil and gas exploration, commercial development or water diversion—it may no longer be able to provide the food, water, cover and places to raise young that wildlife need. Every day there are fewer places left that wildlife can call home (Figure 5). There are three major kinds of habitat loss (National Wildlife Federation, 2013):
Habitat destruction: A bulldozer pushing down trees is the iconic image of habitat destruction. Other ways that people are directly destroying habitat include filling in wetlands, dredging rivers, mowing fields, and cutting down trees.
Habitat fragmentation: Much of the remaining terrestrial wildlife habitat in the U.S. has been cut up into fragments by roads and development. Aquatic species’ habitat has been fragmented by dams and water diversions. These fragments of habitat may not be large or connected enough to support species that need a large territory in which to find mates and food. The loss and fragmentation of habitat make it difficult for migratory species to find places to rest and feed along their migration routes.
Habitat degradation: Pollution, invasive species and disruption of ecosystem processes (such as changing the intensity of fires in an ecosystem) are some of the ways habitats can become so degraded that they no longer support native wildlife.
Overexploitation (such as overfishing)
People have always depended on wildlife and plants for food, clothing, medicine, shelter and many other needs. But today we are taking more than the natural world can supply. The danger is that if we take too many individuals of a species from their native environment, the species may no longer be able to survive. The loss of one species can affect many other species in an ecosystem. Overexploitation is the overuse of wildlife and plant species by people for food, clothing, pets, medicine, sport and many other purposes. The hunting, trapping, collecting and fishing of wildlife at unsustainable levels is not something new. The passenger pigeon was hunted to extinction early in the last century, and overhunting nearly caused the extinction of the American bison and several species of whales. Today, the Endangered Species Act protects some U.S. species that were in danger from overexploitation, and the Convention on International Trade in Endangered Species of Wild Fauna and Flora (CITES) works to prevent the global trade of wildlife. But there are many species that are not protected from being illegally traded or overharvested (National Wildlife Federation, 2013).
Spread of Non-native Species/ Diseases
Human health and economies are also at risk from invasive species. The impacts of invasive species on our native ecosystems and economy cost billions of dollars each year. Many of our commercial, agricultural, and recreational activities depend on healthy native ecosystems.
Some human activities have indirect but wide-reaching effects on biodiversity, including:
All of these threats have put a serious strain on the diversity of species on Earth. According to the International Union for Conservation of Nature (IUCN), globally about one third of all known species are threatened with extinction. That includes 29% of all amphibians, 21% of all mammals and 12% of all birds. If we do not stop the threats to biodiversity, we could be facing another mass extinction with dire consequences for the environment and for human health and livelihoods (National Wildlife Federation, 2013).
Vegetation that has developed in an area entirely through natural processes, without anthropogenic influence, is called "native vegetation". Anthropogenic impacts are all of the changes imposed on vegetation by humans and their animals (cutting of plants, fires, grazing, irrigation and drainage of habitats, and broader changes such as climate change).
Over the last 50 years, with the intensification of economic activity around the world and especially with increasing environmental pollution, it has become difficult to speak of vegetation that is far removed from anthropogenic influence. Chemicals from pollution accumulate even in organisms living in the most secluded parts of the world, where no people live. For these reasons, even the vegetation of less developed regions today is hardly free from human influence.
Native vegetation describes plants that grow naturally only in certain areas, not plants that have been moved into those areas by people. Native plants evolved over geological time in response to the climate, soil, rainfall, drought, frost and other physical and biotic characteristics specific to a region, interacting with the other species found in the local plant communities. The conditions under which they evolved give native plants certain features that make them well adapted to local conditions, and this makes them extremely important as alternatives for landscaping, conservation and restoration projects.
Plant sociologists often distinguish the existing native vegetation from the "potential native vegetation": the plant cover that would result in an area if all external (anthropogenic) influences were removed. In countries such as Turkey, settled by ancient civilizations and long exploited for their natural resources and land, there are today big differences between the existing vegetation and the potential native vegetation. In hard-to-reach areas such as high mountains, however, where human influence is slight, the present vegetation is close to the potential native vegetation.
The plant communities of greatest importance for landscaping with native plants are the following: forests, savannahs, maquis, deserts, meadows, tundras, alpine vegetation and swamps. These are, in general, the most important plant communities on earth; in the ecological sense they are associations that living things have established among themselves, and they are considered here in terms of the conditions under which they form and can be used, rather than as plantings of cultivated species.
The development of vegetation in an area is continuously directed by the environmental conditions acting on that area. Environmental conditions generally comprise climatic conditions (temperature, humidity, rainfall, light, wind, etc.), edaphic conditions (soil, water), orographic conditions (aspect, slope, elevation, etc.) and biotic conditions (the effects of the surrounding organisms). The living environment ("biocoenosis") and the non-living environment ("ecotope") together form the ecosystem. There is a harmonious balance between the ecotope and its biocoenosis. The vegetation that develops in equilibrium with this environment at the end of many years of ecosystem evolution is called the "climax". If one or more of the environmental conditions changes, the plant community, and indeed the whole ecosystem, changes as well. For these reasons, any assessment of the vegetation of an area should begin with a thorough examination of the local environmental conditions and their effects on the vegetation.
Plants are the key elements of primary production, capturing and storing energy; all other living things depend on plants. Native plants occur naturally in the region in which they evolved. While non-native plants might provide some of the benefits described above, native plants have many additional advantages. Because native plants are adapted to local soils and climate conditions, they generally require less watering and fertilizing than non-natives. Natives are often more resistant to insects and disease as well, and so are less likely to need pesticides. Wildlife evolved alongside these plants and uses native plant communities for food, cover and rearing young. Using native plants helps preserve the balance and beauty of natural ecosystems (Figure 6.).
Native species are those that occur in the region in which they evolved. Plants evolve over geologic time in response to physical and biotic processes characteristic of a region: the climate, soils, timing of rainfall, drought, and frost; and interactions with the other species inhabiting the local community. Thus native plants possess certain traits that make them uniquely adapted to local conditions, providing a practical and ecologically valuable alternative for landscaping, conservation and restoration projects, and as livestock forage. In addition, native plants can match the finest cultivated plants in beauty, while often surpassing non-natives in ruggedness and resistance to drought, insects and disease (Virginia Department of Conservation & Recreation, 2012) (Figure 7.).
Native vegetation is defined differently in different jurisdictions but typically includes naturally occurring local vegetation (in some cases defined as vegetation that existed before a certain date), including in some jurisdictions native grasses and aquatic vegetation. The definition of terms such as ‘remnant’, ‘regrowth’ and ‘thickening’ is more contentious (Productivity Commission, 2004).
Potential benefits of Native vegetation
Fodder; food; seeds; wildflowers and plants; medicines; timber, including for fencing and firewood; shade; shelter; honey production; pollination and pest control services
Tourism, recreation and visual amenity
Habitat for native fauna (Figure 8)
Soil and water protection (eg prevention of salinity, soil erosion or acidification)
Carbon sinks and/or storage
‘Existence’ and ‘option’ values (Productivity Commission, 2004).
In other words, native vegetation:
produces oxygen, which is its largest single function;
is important for its microclimatic effects;
is important in relation to hydrological processes;
is important in relation to soil;
provides a home for wildlife;
has high and significant economic value; and
is important for recreation (Figure 9).
Native vegetation provides many benefits principally through the protection of the land surface, amelioration or modification of local climate, maintenance of critical ecosystem processes, conservation of biodiversity, enhancement and protection of cultural and aesthetic values, and the provision of economically important products such as timber and grazing forage. However, significant degradation and loss of native vegetation has taken place since European settlement, principally as a result of human activity (Smith, et. al., 2000).
The native vegetation is directly related to land use and environmental change and is the most easily visible and perceived part of the landscape, so it is important in relation to visual effects. However, its relationships with the other landscape elements cannot be fully understood without detailed studies. According to Peter et al. (2000), as well as providing essential habitat, native vegetation, including small isolated remnants and scattered trees, has an important role in providing connectivity across the landscape (Figure 10).
According to (Smith, et. al., 2000); Connectivity concerns how easily the landscape allows plant and animal species to disperse or move through it. Adequate connectivity in the landscape reduces the probability of small isolated populations occurring, allows mobile species to access essential but dispersed resources, and may be important for species migration. Connectivity needs to be considered on a whole-of-landscape basis. This is because species movement can occur from any patch or island that is either large enough or of sufficient habitat quality to support a breeding population or among a number of smaller patches that combine to provide suitable habitat for a population. Corridors are generally considered important for providing connectivity in highly cleared and fragmented landscapes (Fahrig and Merriam 1985, Downes et al. 1997). However, there is only limited proof of their efficacy in allowing species movement and they may indeed negatively affect individual species by promoting transmission of diseases and disturbances (Simberlof and Cox 1987, Hess 1994). Nevertheless, it is generally agreed that species response to fragmentation is individualistic and that corridors enhance landscape connectivity for many species (Saunders et al. 1991, Dawson 1994, Beier and Noss 1998).
According to (Smith, et. al., 2000); Native vegetation plays an important role in many ecosystem processes. These processes include nutrient retention and cycling, carbon storage, purification of water and the maintenance of viable and diverse populations of important components of biodiversity such as detritivores (organisms that break down organic matter) pollinators and parasites and predators of farm pests (Figure 11). For example the life cycle of some parasitic wasps and flies depend on nearby sources of food found in native vegetation. Some species of these parasites seldom travel more than 200 metres from such sources of food (Davidson and Davidson 1992).
Degradation and loss of native vegetation resulting from human activity has altered and disturbed many of these ecosystem processes. Broad-scale loss of vegetation cover has led to considerable land degradation by exposing the land surface to wind and rainfall, which greatly increases soil erosion. These problems are exacerbated by some agricultural management practices, which cause loss of soil organic matter and nutrient decline and, in some places, increasing soil salinity and acidity. For example, it has been estimated that some 120,000 ha of NSW are currently affected by salinity and that 7.5 million ha could potentially become salt affected (Smith, et. al., 2000).
Disturbance of ecosystem function, fragmentation of habitats, the introduction of foreign species and ecologically unsympathetic agricultural systems has been widespread. As a consequence, species decline and extinction has been marked, while altered community balance has frequently led to the unchecked and damaging spread of exotic plants and animals. Halting the decline in native vegetation cover and rectifying some of the damage that has been done is not an easy task, but is possible. Nature reserves have a vital role in this recovery but are only part of the solution. We also need sympathetic management of privately owned native vegetation, and it should be recognised that such management can offer production benefits by preserving land and water quality. Restoration of vegetation cover also provides considerable potential for improved ecosystem function, increased biodiversity and better health for the wider environment (Smith, et. al., 2000).
Benefits of native vegetation
Biodiversity conservation is an essential component of responsible environmental and natural resource management. It is fundamental to quality of life and supports our economy and productivity, both now and in the future. Native vegetation provides habitat for native animals. It delivers a range of ecosystem services that make the land more productive and that contribute to human wellbeing. The benefits provided by native vegetation can be separated into the following categories (Victorian Government Department of Sustainability and Environment, 2012).
Use values involve people physically using or experiencing native vegetation and the attributes it provides, and deriving value from this use. These use values comprise both direct use values and indirect use values.
Direct use values. These values include benefits to agricultural production, such as enriching soils, shade for animals, pollination of plants, and native vegetation as the provider of goods such as honey, timber and pasture for grazing. Other direct uses of native vegetation include recreation and cultural uses.
Indirect use values. These values include functional benefits derived from relying on native ecosystems for life support functions including providing clean air, water and other resources, along with the conservation of biodiversity. Other benefits include resilience to climate change, and reduced susceptibility to disease and extreme weather events.
There are a range of benefits that flow from native vegetation that are enjoyed without contact with the native vegetation. These are known as non-use values and include existence values, option values and bequest values (Victorian Government Department of Sustainability and Environment, 2012).
Existence values. This means the satisfaction that the community derives simply from knowing that native vegetation and biodiversity exist.
Option values. These are benefits derived from retaining the option to use native vegetation in the future without necessarily planning to do so. These benefits include the value of waiting until a time in the future when better information is available to inform decisions about the use of native vegetation.
Bequest values. These values derive from the knowledge that maintaining native vegetation and biodiversity will benefit future generations. (Victorian Government Department of Sustainability and Environment, 2012).
Identification and classification of native vegetation
The aim is to determine the extent and composition of the vegetation. Vegetation is commonly classified in three ways:
Floristic: classification by botanical identity, that is by individual species, genera, families and so on.
Form and structure: classification based on the dominant growth forms of the plant community (forest, pasture, etc.), giving most weight to the most abundant plants.
Ecological: classification of plants according to habitat and certain critical environmental parameters.
Turkey and Native Vegetation
Biodiversity is a great asset that is most appreciated by the countries that possess it. Turkey is one of the world's richest countries in terms of native vegetation. The main reason for this wealth is that the Mediterranean, Irano-Turanian and Euro-Siberian phytogeographical regions all meet and intertwine in Anatolia.
Meadows and pastures are among the most important sources of biological diversity in our country and are considered one of its largest renewable natural resources. In terms of plants, Turkey is one of the richest countries of the temperate climate zone. The main reasons for this wealth are climatic differences; topographical, geological and geomorphological diversity; a variety of aquatic environments such as seas, lakes and rivers; elevation differences ranging from 0 to 5,000 metres; the meeting of three plant-geographical regions; and the ecological and floristic differences between eastern and western Anatolia, all of which are reflected in this ecological diversity (Figure 12).
With around 9,000 species of ferns and seed plants, Turkey has a very rich flora; by comparison, the flora of the entire European continent contains about 12,000 species. The importance of Turkey's flora lies not only in its species richness but also in its high rate of endemism: there are about 2,750 endemic species in the European countries, while in Turkey the number is around 3,000 (Figure 13).
In our country, the factors that give rise to Anatolia's native vegetation regions depend on its particular natural structure. The first of these is the mountainous morphology, with very sharp changes in elevation. Together with the winds from the north, north-west, south and south-west, this creates distinct climates. As a result, the vegetation varies widely not only with elevation but also with aspect (Figure 14).
Native vegetation, supported by its complex structure and biological diversity, sustains the cycles of a healthy ecosystem. For this reason, native vegetation interacts with all of the physical and biotic factors of the areas in which it occurs: it is affected by the other living and non-living elements of its environment, and it is in turn one of the most important factors maintaining their diversity. Native plants are the species best suited to local environmental conditions, and large-scale planting of native plants makes a significant contribution to natural and biological communities. Besides their aesthetic and functional qualities, many native plants contribute to soil productivity, reduce erosion, and typically need fewer inputs such as fertilizers, pesticides and other chemicals, and less maintenance, than many exotic plants. For these reasons, planting with native plants is becoming increasingly popular in many countries.
The main reasons for this interest are:
Aesthetics: beauty, interesting or rare forms, and a sense of connection with the natural environment.
Environment: reduced water use, less pesticide and fertilizer use, and the creation of a suitable environment for wildlife.
Maintenance: low long-term maintenance costs and less work, owing to the greater durability of the plants.
The native vegetation provides virtually unlimited direct and indirect benefits to the country's economy. Local people, as well as industrial and scientific organizations, grow and use the rich variety of plant species in Turkey's flora for different purposes. Native vegetation improves a country's climatic conditions, prevents soil loss in rural areas, provides material for scientific research, and supplies the forest products, food and pharmaceutical industries with raw materials and fuel.
The protection of endemic species is particularly important. The threat categories to which endangered species belong are determined internationally, and priority is given to species that are under heavy pressure and in danger of extinction.
The main threats to plants in Turkey are listed below. These factors are:
Industrialization and urbanization,
Urbanization in rural areas is particularly damaging to vegetation
Agricultural expansion and excessive grazing
In native pasture areas, overgrazing suppresses the growth of the local herbaceous species and encourages the spread of cosmopolitan species. Animals eat the herbs down to the roots, which prevents regrowth and degrades the quality and yield of the pasture, while inedible prickly weeds spread.
Occupation of dunes and coastal areas by tourism facilities
Collecting plants from nature
Native plants are collected from nature for various purposes (medicinal plants, spices, ornamental plants, fuel, animal feed), bulbous plants in particular.
Reclamation of Halophytic Areas
Large halophytic areas are improved, especially for agricultural purposes.
Agricultural Control and Pollution
Indiscriminate use of agricultural pesticides; endemic and rare plants in particular are damaged.
Reforestation activities change the environmental conditions for existing plants. When such planting is carried out where endemic plants grow, those plants are likely to disappear.
Fires can damage local and rare endemic species
The Thrace Region, on the European continent, is floristically very rich owing to its varied climates, soil types and geographical characteristics. Although there are many different vegetation types in the Thrace Region, there are four main types:
2. Tekirdağ and Ganos mountains
Tekirdağ, in northwest Turkey on the Sea of Marmara, is one of the three provinces whose territory lies entirely within Thrace and one of the six provinces of Turkey with coasts on two seas. With a surface area of 6,313 km², it is bordered by Istanbul to the east, Kırklareli to the north, Edirne to the west, Çanakkale to the southwest and the Sea of Marmara to the south. In the northeast it has a 2.5 km coastline on the Black Sea.
The geological structure of Tekirdağ is relatively young. The area of the province was at first covered by the sea and acquired its present appearance in the Quaternary, when Anatolia and Thrace rose while the Aegean, Marmara and Black Sea basins subsided. The soils generally consist of cemented, clay-bearing sandstones.
Tekirdağ lies near the western end of the North Anatolian Fault (NAF), a fault system approximately 1,200 km long and 100-15,000 m wide, made up of many individual faults, which begins at Karlıova; the fault passes 15-25 km from the province. The fault that could produce an earthquake within the boundaries of Tekirdağ Province is the Saros-Gaziköy fault, part of the fault system bordering the trench in the Sea of Marmara. According to the "Seismic Zoning Map of Turkey" issued by the Ministry of Public Works and Settlement on 18 April 1996, Barbaros and Mürefte lie in the first-degree earthquake zone.
The Thrace region, located in the southeastern part of the Balkan Peninsula, contains distinct morphological units. The most important of these in the province of Tekirdağ are the Ganos and Koru Mountains (Figure 15).
Between these two mountainous areas lie the plains dissected by the Ergene River and its branches, with gentle to moderate and locally steep slopes; the southern and middle parts consist of high hills and sloping hillside land (Figure 16).
Ganos Mountain is located in the south of Thrace and extends in a northeast-southwest direction. It has a highly fragmented appearance owing to dissection by streams, and there are many hill and mountain villages in the region. Ganos Mountain has a cool, somewhat rainy Mediterranean climate.
It therefore hosts many taxa of Mediterranean origin. On Ganos Mountain, Quercus sp. and Carpinus sp., representatives of the Balkan, central European and Euxine floras, are among the dominant elements. In addition, pseudomaquis shrub communities are observed on the foothills of Ganos Mountain overlooking the Marmara Sea and in the lowlands.
The Tekir Mountains constitute the most important elevation of the province; they begin at Kumbağ, 12 km south of the city of Tekirdağ, and stretch in a continuous line for about 60 km toward Gelibolu. Their highest point is Ganos Mountain. The eastern part of the province is lower, and some of the ridges on the plains are gently undulating. One of them, around Çorlu, extends in an east-west direction and serves as part of the divide of the Ergene water basin; this ridge bounds the Tekirdağ lowland to the east, beyond which the land rises toward the foothills of the Istranca Mountains. The Istranca Mountains begin near Çerkezköy and rise gradually toward the north.
In the inland areas, broad-based river valleys open onto vast and fertile plains. The most important of these are the Ergene Plain, which widens steadily westward from Çerkezköy along the bed of the Ergene River, and the Hayrabolu and Çene (Beşiktepe) plains, formed by alluvial deposits along the creeks of the same names.
The small, narrow coastal plains along the shores of the Marmara result from the accumulation of material carried down by the streams along the coast. Although Tekirdağ lies within the Ergene basin, its drainage network is sparse and consists of small streams because of low rainfall, thin vegetation cover and the geological structure. The rivers have irregular flow regimes, proportional to the amount and pattern of rainfall: in summer the waters shrink or dry up, while in winter they swell, or even overflow, with precipitation and snowmelt. The streams of the province drain into the Gulf of Saros, the Sea of Marmara and the Black Sea. The most important watercourses of the province are the Ergene River and the Çorlu, Hayrabolu, Işıklar and Olukbaşı streams.
The 133 km coastline along the Sea of Marmara forms the southern boundary of Tekirdağ; there is also a 2.5 km coastline on the Black Sea. Apart from the small, narrow coastal plains, the Marmara coast is generally not high. Marmaraereğlisi, situated on a peninsula, is the only natural harbour on the Tekirdağ coast. The harbour east of Marmaraereğlisi forms a semicircle about 1,600 m in diameter; its mouth opens to the northeast but is closed to the other winds, making it a haven of refuge for boats against severe southwesterly and westerly winds.
The Black Sea coast of Tekirdağ province, from Kastro (Çamlıköy) Bay to Çilingoz Bay, presents a high, steep and rocky shoreline.
The depth of the Gulf of Tekirdağ does not exceed 100 m; it is what is called a shelf sea, rich in marine plants and animals. Southwest of a line drawn from Kumbağ, the water deepens to more than 1,000 m. Fishermen also work this deeper channel, through which the main currents carry shoals of fish.
Given its temperature averages, indices and general humidity, the climate of Tekirdağ province is characterized as temperate semi-humid. Moving inland from the coast, temperature and precipitation values show only small variations with distance from the sea and with elevation.
Along the coast of the Sea of Marmara, summers are hot and dry while winters are mild and rainy, the characteristics of the Mediterranean climate. However, the influence of the Black Sea climate softens the summer drought, and snowfalls are common during the winter season. Inland, a semi-continental climate with more arid summers and colder winters becomes more apparent.
On the Strandja (Istranca) massif extending to the north of Tekirdağ, the northern slopes receive more rainfall and are covered with beech forests, beneath which the understorey is formed by rhododendrons (Rhododendron). Further south and on the southern slopes, owing to reduced rainfall, Quercus sp. and Carpinus sp. take the place of Fagus sp.
In the Ergene basin, near the residential areas, the rare remaining groves of Quercus sp., Carpinus sp., Paliurus sp. and Ulmus sp. stand out. These small groups of trees are evidence that the inner sections of Thrace were not originally steppe. As a result of the destruction of forests to create farmland, the Thrace region today has the appearance of steppe land (anthropogenic steppe). In this section, poplar and willow species are common on the valley floors (Figure 17).
On the northern slopes of the Ganos Mountains in the south, a dense forest cover of Carpinus sp., Quercus sp. and Tilia sp. is evident, while the southern slopes, owing to reduced rainfall, carry dry forests and scrub communities. Quercus sp. and Pinus sylvestris forests are the dominant communities in the Ganos Mountains.
Research on Ganos Mountain has recorded 305 plant taxa belonging to 202 genera and 64 families. In terms of species richness, the Compositae is the most important family, followed by the Leguminosae. In the phytogeographic distribution of the Ganos Mountain flora, Euro-Siberian and Mediterranean elements come first. Tekirdağ can be considered poor in terms of forest: in the parts of the Istranca Mountains that fall within the province there are, in places, oak groves, and in some areas Alnus sp., Ulmus sp. and Pinus species are observed. Typical Mediterranean-climate vegetation includes maquis, vineyards, fruit orchards and olive groves. The plants identified in this study are listed below:
Acer campestre L. subsp. campestre, Alkanna tinctoria Tausch, Arbutus andrachne L., Asparagus acutifolius L., Briza maxima L., Calycotome villosa (Poir.) Link, Capparis spinosa L., Carpinus betulus L., Carpinus orientalis Miller, Cercis siliquastrum L., Cistus creticus L., Colutea cilicica Boiss. et Bal., Coronilla emerus subsp. emeroides, Clematis vitalba L., Colchicum autumnale L., Cornus mas L., Crataegus monogyna L., Cydonia oblonga Miller, Dittrichia viscosa (L.) Greuter, Doronicum orientale Hoffm., Emerus majus Mill., Euphorbia rigida Bieb., Euphorbia characias subsp. wulfenii, Ferula communis subsp. communis, Fraxinus ornus L. subsp. ornus, Glaucium flavum Crantz, Hymenocarpus circinnatus (L.) Savi, Hypericum perforatum L., Ilex aquifolium L., Jasminum fruticans L., Juncus acutus L., Juniperus oxycedrus L. subsp. oxycedrus, Muscari armeniacum Leichtlin ex Baker, Nasturtium officinale L., Paliurus spina-christi Mill., Parietaria officinalis L., Phillyrea latifolia L., Pistacia terebinthus L., Platanus orientalis L., Prunus spinosa L. subsp. dasyphylla, Quercus frainetto Ten., Quercus infectoria Olivier, Quercus petraea (Mattuschka) Liebl. subsp. petraea, Ruscus hypoglossum L., Salix viminalis L., Salvia triloba L., Sambucus ebulus L., Sarcopoterium spinosum (L.) Spach, Scorpiurus muricatus L., Seseli tortuosum L., Smilax excelsa L., Spartium junceum L., Tamus communis L., Thymelaea tartonraira L., Thymus atticus Celak., Tilia argentea Desf. (Korkut, 1987; Özyavuz, 2011)
Vikings were the seafaring Norse people from southern Scandinavia (present-day Denmark, Norway and Sweden) who from the late 8th to late 11th centuries raided, pirated, traded and settled throughout parts of Europe. They explored both westward to Iceland, Greenland, and Vinland as well as eastward through Russia to Constantinople, Iran, and Arabia. In the countries they raided and settled, the period is known as the Viking Age, and the term 'Viking' also commonly includes the inhabitants of the Norse homelands. The Vikings had a profound impact on the early medieval history of Scandinavia, the British Isles, France, Estonia, and Kievan Rus'.
Expert sailors and navigators aboard their characteristic longships, Vikings voyaged as far as the Mediterranean, North Africa, the Middle East, and were the first Europeans to reach North America, briefly settling in Newfoundland. Vikings established Norse settlements and governments in the British Isles, Ireland, the Faroe Islands, Iceland, Greenland, Normandy, the Baltic coast, and along the Dnieper and Volga trade routes in what is now European Russia, Belarus and Ukraine (where they were also known as Varangians). The Normans, Norse-Gaels, Rus' people, Faroese and Icelanders emerged from these Norse colonies. While spreading Norse culture to foreign lands, they simultaneously brought home slaves, concubines and foreign cultural influences to Scandinavia, profoundly influencing the genetic and historical development of both. During the Viking Age the Norse homelands were gradually consolidated from smaller kingdoms into three larger kingdoms: Denmark, Norway and Sweden.
The Vikings spoke Old Norse and made inscriptions in runes. For most of the period they followed the Old Norse religion, but later became Christians. The Vikings had their own laws, art and architecture. Most Vikings were also farmers, fishermen, craftsmen and traders. Popular conceptions of the Vikings often strongly differ from the complex, advanced civilisation of the Norsemen that emerges from archaeology and historical sources. A romanticised picture of Vikings as noble savages began to emerge in the 18th century; this developed and became widely propagated during the 19th-century Viking revival. Perceived views of the Vikings as violent, piratical heathens or as intrepid adventurers owe much to conflicting varieties of the modern Viking myth that had taken shape by the early 20th century. Current popular representations of the Vikings are typically based on cultural clichés and stereotypes, complicating modern appreciation of the Viking legacy. These representations are rarely accurate--for example, there is no evidence that they wore horned helmets, a costume element that first appeared in Wagnerian opera.
The form occurs as a personal name on some Swedish runestones. The stone of Tóki víking (Sm 10) was raised in memory of a local man named Tóki who got the name Tóki víking (Toki the Viking), presumably because of his activities as a Viking. The Gårdstånga Stone (DR 330) uses the phrase "Þeʀ drængaʀ waʀu wiða unesiʀ i wikingu" (These men were well known on viking raids), referring to the stone's dedicatees as Vikings. The Västra Strö 1 Runestone has an inscription in memory of a Björn, who was killed when "i vikingu". In Sweden there is a locality known since the Middle Ages as Vikingstad. The Bro Stone (U 617) was raised in memory of Assur who is said to have protected the land from Vikings (Saʀ vaʀ vikinga vorðr með Gæiti). There is little indication of any negative connotation in the term before the end of the Viking Age.
Another less popular theory is that víking derives from the feminine vík, meaning "creek, inlet, small bay". Various theories have also been offered that the word Viking may be derived from the name of the historical Norwegian district of Víkin, in which case it would mean "a person from Víkin".
However, there are a few major problems with this theory. People from the Viken area were not called "Viking" in Old Norse manuscripts, but are referred to as víkverir, ('Vík dwellers'). In addition, that explanation could explain only the masculine (víkingr) and not the feminine (víking), which is a serious problem because the masculine is easily derived from the feminine but hardly the other way around.
Another etymology that gained support in the early twenty-first century, derives Viking from the same root as Old Norse vika, f. 'sea mile', originally 'the distance between two shifts of rowers', from the root *weik or *wîk, as in the Proto-Germanic verb *wîkan, 'to recede'. This is found in the Proto-Nordic verb *wikan, 'to turn', similar to Old Icelandic víkja (ýkva, víkva) 'to move, to turn', with well-attested nautical usages. Linguistically, this theory is better attested, and the term most likely predates the use of the sail by the Germanic peoples of North-Western Europe, because the Old Frisian spelling Witsing or Wīsing shows that the word was pronounced with a palatal k and thus in all probability existed in North-Western Germanic before that palatalisation happened, that is, in the 5th century or before (in the western branch).
In that case, the idea behind it seems to be that the tired rower moves aside for the rested rower on the thwart when he relieves him. The Old Norse feminine víking (as in the phrase fara í víking) may originally have been a sea journey characterised by the shifting of rowers, i.e. a long-distance sea journey, because in the pre-sail era, the shifting of rowers would distinguish long-distance sea journeys. A víkingr (the masculine) would then originally have been a participant on a sea journey characterised by the shifting of rowers. In that case, the word Viking was not originally connected to Scandinavian seafarers but assumed this meaning when the Scandinavians begun to dominate the seas.
In Old English, the word wicing appears first in the Anglo-Saxon poem, Widsith, which probably dates from the 9th century. In Old English, and in the history of the archbishops of Hamburg-Bremen written by Adam of Bremen in about 1070, the term generally referred to Scandinavian pirates or raiders. As in the Old Norse usages, the term is not employed as a name for any people or culture in general. The word does not occur in any preserved Middle English texts. One theory made by the Icelander Örnolfur Kristjansson is that the key to the origins of the word is "wicinga cynn" in Widsith, referring to the people or the race living in Jórvík (York, in the ninth century under control by Norsemen), Jór-Wicings (note, however, that this is not the origin of Jórvík).
The word Viking was introduced into Modern English during the 18th-century Viking revival, at which point it acquired romanticised heroic overtones of "barbarian warrior" or noble savage. During the 20th century, the meaning of the term was expanded to refer to not only seaborne raiders from Scandinavia and other places settled by them (like Iceland and the Faroe Islands), but also any member of the culture that produced said raiders during the period from the late 8th to the mid-11th centuries, or more loosely from about 700 to as late as about 1100. As an adjective, the word is used to refer to ideas, phenomena, or artefacts connected with those people and their cultural life, producing expressions like Viking age, Viking culture, Viking art, Viking religion, Viking ship and so on.
The term "Viking" that appeared in Northwestern Germanic sources in the Viking Age denoted pirates. According to some researchers, the term back then had no geographic or ethnic connotations that limited it to Scandinavia only. The term was instead used about anyone who to the Norse peoples appeared as a pirate. Therefore, the term had been used about Israelites on the Red Sea; Muslims encountering Scandinavians in the Mediterranean; Caucasian pirates encountering the famous Swedish Ingvar-Expedition, and Estonian pirates on the Baltic Sea. Thus the term "Viking" was supposedly never limited to a single ethnicity as such, but rather an activity.
The Vikings were known as Ascomanni ("ashmen") by the Germans for the ash wood of their boats, Dubgail and Finngail ("dark and fair foreigners") by the Irish, Lochlannaich ("people from the land of lakes") by the Gaels, Dene (Dane) by the Anglo-Saxons and Northmonn by the Frisians.
The scholarly consensus is that the Rus' people originated in what is currently coastal eastern Sweden around the eighth century and that their name has the same origin as Roslagen in Sweden (with the older name being Roden). According to the prevalent theory, the name Rus, like the Proto-Finnic name for Sweden (*Ruotsi), is derived from an Old Norse term for "the men who row" (rods-) as rowing was the main method of navigating the rivers of Eastern Europe, and that it could be linked to the Swedish coastal area of Roslagen (Rus-law) or Roden, as it was known in earlier times. The name Rus would then have the same origin as the Finnish and Estonian names for Sweden: Ruotsi and Rootsi.
The Slavs and the Byzantines also called them Varangians (Russian: варяги, from Old Norse Væringjar 'sworn men', from vàr- "confidence, vow of fealty", related to Old English wær "agreement, treaty, promise", Old High German wara "faithfulness"). Scandinavian bodyguards of the Byzantine emperors were known as the Varangian Guard. The Rus' initially appeared in Serkland in the 9th century, traveling as merchants along the Volga trade route, selling furs, honey, and slaves, as well as luxury goods such as amber, Frankish swords, and walrus ivory. These goods were mostly exchanged for Arabian silver coins, called dirhams. Hoards of 9th century Baghdad-minted silver coins have been found in Sweden, particularly in Gotland.
During and after the Viking raid on Seville in 844 CE the Muslim chroniclers of al-Andalus referred to the Vikings as Magians (Arabic: al-Majūs), conflating them with fire-worshipping Zoroastrians from Persia. When Ibn Fadlan was taken captive by Vikings in the Volga, he referred to them as Rus.
Anglo-Scandinavian is an academic term referring to the people, and archaeological and historical periods during the 8th to 13th centuries in which there was migration to--and occupation of--the British Isles by Scandinavian peoples generally known in English as Vikings. It is used in distinction from Anglo-Saxon. Similar terms exist for other areas, such as Hiberno-Norse for Ireland and Scotland.
The Viking Age in Scandinavian history is taken to have been the period from the earliest recorded raids by Norsemen in 793 until the Norman conquest of England in 1066. Vikings used the Norwegian Sea and Baltic Sea for sea routes to the south.
The Normans were descendants of those Vikings who had been given feudal overlordship of areas in northern France, namely the Duchy of Normandy, in the 10th century. In that respect, descendants of the Vikings continued to have an influence in northern Europe. Likewise, King Harold Godwinson, the last Anglo-Saxon king of England, had Danish ancestors. Two Vikings even ascended to the throne of England, with Sweyn Forkbeard claiming the English throne from 1013 until 1014 and his son Cnut the Great being king of England between 1016 and 1035.
Geographically, the Viking Age covered Scandinavian lands (modern Denmark, Norway and Sweden), as well as territories under North Germanic dominance, mainly the Danelaw, including Scandinavian York, the administrative centre of the remains of the Kingdom of Northumbria, parts of Mercia, and East Anglia. Viking navigators opened the road to new lands to the north, west and east, resulting in the foundation of independent settlements in the Shetland, Orkney, and Faroe Islands; Iceland; Greenland; and L'Anse aux Meadows, a short-lived settlement in Newfoundland, circa 1000. The Greenland settlement was established around 980, during the Medieval Warm Period, and its demise by the mid-15th century may have been partly due to climate change. The Viking Rurik dynasty took control of territories in Slavic and Finno-Ugric-dominated areas of Eastern Europe; they annexed Kiev in 882 to serve as the capital of the Kievan Rus'.
As early as 839, when Swedish emissaries are first known to have visited Byzantium, Scandinavians served as mercenaries in the service of the Byzantine Empire. In the late 10th century, a new unit of the imperial bodyguard formed. Traditionally containing large numbers of Scandinavians, it was known as the Varangian Guard. The word Varangian may have originated in Old Norse, but in Slavic and Greek it could refer either to Scandinavians or Franks. In these years, Swedish men left to enlist in the Byzantine Varangian Guard in such numbers that a medieval Swedish law, Västgötalagen, from Västergötland declared no one could inherit while staying in "Greece"--the then Scandinavian term for the Byzantine Empire--to stop the emigration, especially as two other European courts simultaneously also recruited Scandinavians: Kievan Rus' c. 980-1060 and London 1018-1066 (the Þingalið).
There is archaeological evidence that Vikings reached Baghdad, the centre of the Islamic Empire. The Norse regularly plied the Volga with their trade goods: furs, tusks, seal fat for boat sealant, and slaves. Important trading ports during the period include Birka, Hedeby, Kaupang, Jorvik, Staraya Ladoga, Novgorod, and Kiev.
Scandinavian Norsemen explored Europe by its seas and rivers for trade, raids, colonization, and conquest. In this period, voyaging from their homelands in Denmark, Norway and Sweden the Norsemen settled in the present-day Faroe Islands, Iceland, Norse Greenland, Newfoundland, the Netherlands, Germany, Normandy, Italy, Scotland, England, Wales, Ireland, the Isle of Man, Estonia, Ukraine, Russia and Turkey, as well as initiating the consolidation that resulted in the formation of the present day Scandinavian countries.
In the Viking Age, the present-day nations of Norway, Sweden and Denmark did not exist; the Norse peoples who inhabited them were largely homogeneous and similar in culture and language, although somewhat distinct geographically. The names of Scandinavian kings are reliably known for only the later part of the Viking Age. After the end of the Viking Age the separate kingdoms gradually acquired distinct identities as nations, which went hand-in-hand with their Christianisation. Thus the end of the Viking Age for the Scandinavians also marks the start of their relatively brief Middle Ages.
The Vikings significantly intermixed with the Slavs. Slavic and Viking tribes were "closely linked, fighting one another, intermixing and trading". In the Middle Ages, a significant quantity of wares was transferred from Slavic areas to Scandinavia, and Denmark was "a melting pot of Slavic and Scandinavian elements". The presence of Slavs in Scandinavia is "more significant than previously thought" although "the Slavs and their interaction with Scandinavia have not been adequately investigated". A 10th-century grave of a warrior-woman in Denmark was long thought to belong to a Viking. However, new analyses suggest that the woman was a Slav from present-day Poland. The first king of the Swedes, Eric, was married to Gunhild, of the Polish House of Piast. Likewise, his son, Olof, fell in love with Edla, a Slavic woman, and took her as his frilla (concubine). She bore him a son and a daughter: Emund the Old, King of Sweden, and Astrid, Queen of Norway. Cnut the Great, King of Denmark, England and Norway, was the son of a daughter of Mieszko I of Poland, possibly the former Polish queen of Sweden, wife of Eric. Richeza of Poland, Queen of Sweden, married Magnus the Strong, and bore him several children, including Canute V, King of Denmark. Catherine Jagiellon, of the House of Jagiellon, was married to John III, King of Sweden. She was the mother of Sigismund III Vasa, King of Poland, King of Sweden, and Grand Duke of Finland.
Colonization of Iceland by Norwegian Vikings began in the ninth century. The first source mentioning Iceland and Greenland is a papal letter of 1053. Twenty years later, they appear in the Gesta of Adam of Bremen. It was not until after 1130, when the islands had become Christianized, that accounts of the history of the islands were written from the point of view of the inhabitants in sagas and chronicles. The Vikings explored the northern islands and coasts of the North Atlantic, ventured south to North Africa, east to Kievan Rus (in present-day Ukraine and Belarus), Constantinople, and the Middle East.
They raided and pillaged, traded, acted as mercenaries and settled colonies over a wide area. Early Vikings probably returned home after their raids. Later in their history, they began to settle in other lands. Vikings under Leif Erikson, heir to Erik the Red, reached North America and set up short-lived settlements in present-day L'Anse aux Meadows, Newfoundland, Canada. This expansion occurred during the Medieval Warm Period.
Viking expansion into continental Europe was limited. Their realm was bordered by powerful tribes to the south. Early on, it was the Saxons who occupied Old Saxony, located in what is now Northern Germany. The Saxons were a fierce and powerful people and were often in conflict with the Vikings. To counter the Saxon aggression and solidify their own presence, the Danes constructed the huge defence fortification of Danevirke in and around Hedeby.
The Vikings witnessed the violent subduing of the Saxons by Charlemagne, in the thirty-year Saxon Wars of 772-804. The Saxon defeat resulted in their forced christening and the absorption of Old Saxony into the Carolingian Empire. Fear of the Franks led the Vikings to further expand Danevirke, and the defence constructions remained in use throughout the Viking Age and even up until 1864.
The south coast of the Baltic Sea was ruled by the Obotrites, a federation of Slavic tribes loyal to the Carolingians and later the Frankish empire. The Vikings--led by King Gudfred--destroyed the Obotrite city of Reric on the southern Baltic coast in 808 AD and transferred the merchants and traders to Hedeby. This secured Viking supremacy in the Baltic Sea, which continued throughout the Viking Age.
Because of the expansion of the Vikings across Europe, a comparison of DNA and archeology undertaken by scientists at the University of Cambridge and University of Copenhagen suggested that the term "Viking" may have evolved to become "a job description, not a matter of heredity," at least in some Viking bands.
The motives driving the Viking expansion are a topic of much debate in Nordic history.
Researchers have suggested that Vikings may have originally started sailing and raiding due to a need to seek out women from foreign lands. The concept was expressed in the 11th century by historian Dudo of Saint-Quentin in his semi-imaginary History of The Normans. Rich and powerful Viking men tended to have many wives and concubines; these polygynous relationships may have led to a shortage of eligible women for the average Viking male. Due to this, the average Viking man could have been forced to perform riskier actions to gain wealth and power to be able to find suitable women. Viking men would often buy or capture women and make them into their wives or concubines. Polygynous marriage increases male-male competition in society because it creates a pool of unmarried men who are willing to engage in risky status-elevating and sex-seeking behaviors. The Annals of Ulster states that in 821 the Vikings plundered an Irish village and "carried off a great number of women into captivity".
One common theory posits that Charlemagne "used force and terror to Christianise all pagans", leading to baptism, conversion or execution, and as a result, Vikings and other pagans resisted and wanted revenge. Professor Rudolf Simek states that "it is not a coincidence if the early Viking activity occurred during the reign of Charlemagne". The ascendance of Christianity in Scandinavia led to serious conflict, dividing Norway for almost a century. However, this period of strife did not commence until the 10th century; Norway was never subject to aggression by Charlemagne, and the conflict was due to successive Norwegian kings embracing Christianity after encountering it overseas.
Another explanation is that the Vikings exploited a moment of weakness in the surrounding regions. Contrary to Simek's assertion, Viking raids occurred sporadically long before the reign of Charlemagne; but exploded in frequency and size after his death, when his empire fragmented into multiple much weaker entities. England suffered from internal divisions and was relatively easy prey given the proximity of many towns to the sea or to navigable rivers. Lack of organised naval opposition throughout Western Europe allowed Viking ships to travel freely, raiding or trading as opportunity permitted. The decline in the profitability of old trade routes could also have played a role. Trade between western Europe and the rest of Eurasia suffered a severe blow when the Western Roman Empire fell in the 5th century. The expansion of Islam in the 7th century had also affected trade with western Europe.
Raids in Europe, including raids and settlements from Scandinavia, were not unprecedented and had occurred long before the Vikings arrived. The Jutes invaded the British Isles three centuries earlier, pouring out from Jutland during the Age of Migrations, before the Danes settled there. The Saxons and the Angles did the same, embarking from mainland Europe. The Viking raids were, however, the first to be documented in writing by eyewitnesses, and they were much larger in scale and frequency than in previous times.
Vikings themselves were expanding; although their motives are unclear, historians believe that scarce resources or a lack of mating opportunities were a factor.
The "Highway of Slaves" was a term for a route that the Vikings found to have a direct pathway from Scandinavia to Constantinople and Baghdad while traveling on the Baltic Sea. With the advancements of their ships during the ninth century, the Vikings were able to sail to Kievan Rus and some northern parts of Europe.
Jomsborg was a semi-legendary Viking stronghold at the southern coast of the Baltic Sea (medieval Wendland, modern Pomerania), that existed between the 960s and 1043. Its inhabitants were known as Jomsvikings. Jomsborg's exact location, or its existence, has not yet been established, though it is often maintained that Jomsborg was somewhere on the islands of the Oder estuary.
While the Vikings were active beyond their Scandinavian homelands, Scandinavia was itself experiencing new influences and undergoing a variety of cultural changes.
By the late 11th century, royal dynasties legitimised by the Catholic Church (which had had little influence in Scandinavia 300 years earlier) were asserting their power with increasing authority and ambition, with the three kingdoms of Denmark, Norway, and Sweden taking shape. Towns appeared that functioned as secular and ecclesiastical administrative centres and market sites, and monetary economies began to emerge based on English and German models. By this time the influx of Islamic silver from the East had been absent for more than a century, and the flow of English silver had come to an end in the mid-11th century.
Christianity had taken root in Denmark and Norway with the establishment of dioceses in the 11th century, and the new religion was beginning to organise and assert itself more effectively in Sweden. Foreign churchmen and native elites were energetic in furthering the interests of Christianity, which was now no longer operating only on a missionary footing, and old ideologies and lifestyles were transforming. By 1103, the first archbishopric was founded in Scandinavia, at Lund, Scania, then part of Denmark.
The assimilation of the nascent Scandinavian kingdoms into the cultural mainstream of European Christendom altered the aspirations of Scandinavian rulers and of Scandinavians able to travel overseas, and changed their relations with their neighbours.
One of the primary sources of profit for the Vikings had been slave-taking from other European peoples. The medieval Church held that Christians should not own fellow Christians as slaves, so chattel slavery diminished as a practice throughout northern Europe. This took much of the economic incentive out of raiding, though sporadic slaving activity continued into the 11th century. Scandinavian predation in Christian lands around the North and Irish Seas diminished markedly.
The kings of Norway continued to assert power in parts of northern Britain and Ireland, and raids continued into the 12th century, but the military ambitions of Scandinavian rulers were now directed toward new paths. In 1107, Sigurd I of Norway sailed for the eastern Mediterranean with Norwegian crusaders to fight for the newly established Kingdom of Jerusalem, and Danes and Swedes participated energetically in the Baltic Crusades of the 12th and 13th centuries.
A variety of sources illuminate the culture, activities, and beliefs of the Vikings. Although they were generally a non-literate culture that produced no literary legacy, they had an alphabet and described themselves and their world on runestones. Most contemporary literary and written sources on the Vikings come from other cultures that were in contact with them. Since the mid-20th century, archaeological findings have built a more complete and balanced picture of the lives of the Vikings. The archaeological record is particularly rich and varied, providing knowledge of their rural and urban settlement, crafts and production, ships and military equipment, trading networks, as well as their pagan and Christian religious artefacts and practices.
The most important primary sources on the Vikings are contemporary texts from Scandinavia and regions where the Vikings were active. Writing in Latin letters was introduced to Scandinavia with Christianity, so there are few native documentary sources from Scandinavia before the late 11th and early 12th centuries. The Scandinavians did write inscriptions in runes, but these are usually very short and formulaic. Most contemporary documentary sources consist of texts written in Christian and Islamic communities outside Scandinavia, often by authors who had been negatively affected by Viking activity.
Later writings on the Vikings and the Viking Age can also be important for understanding them and their culture, although they need to be treated cautiously. After the consolidation of the church and the assimilation of Scandinavia and its colonies into the mainstream of medieval Christian culture in the 11th and 12th centuries, native written sources begin to appear in Latin and Old Norse. In the Viking colony of Iceland, an extraordinary vernacular literature blossomed in the 12th through 14th centuries, and many traditions connected with the Viking Age were written down for the first time in the Icelandic sagas. A literal interpretation of these medieval prose narratives about the Vikings and the Scandinavian past is doubtful, but many specific elements remain worthy of consideration, such as the great quantity of skaldic poetry attributed to court poets of the 10th and 11th centuries, and the family trees, self-images and ethical values contained in these literary writings.
Indirectly, the Vikings have also left a window open onto their language, culture and activities, through many Old Norse place names and words found in their former sphere of influence. Some of these place names and words are still in direct use today, almost unchanged, and shed light on where they settled and what specific places meant to them. Examples include place names like Egilsay (from Eigils ey meaning Eigil's Island), Ormskirk (from Ormr kirkja meaning Orm's Church or Church of the Worm), Meols (from melr meaning Sand Dunes), Snaefell (Snow Fell), Ravenscar (Ravens Rock), Vinland (Land of Wine or Land of Winberry), Kaupanger (Market Harbour), Tórshavn (Thor's Harbour), and the religious centre of Odense, meaning a place where Odin was worshipped. Viking influence is also evident in concepts like the present-day parliamentary body of the Tynwald on the Isle of Man.
Common words in everyday English language, such as the names of weekdays (Thursday means Thor's day, Friday means Freya's day, Wednesday means Woden, or Odin's day, Tuesday means Týr's day, Týr being the Norse god of single combat, law, and justice), axle, crook, raft, knife, plough, leather, window, berserk, bylaw, thorp, skerry, husband, heathen, Hell, Norman and ransack stem from the Old Norse of the Vikings and give us an opportunity to understand their interactions with the people and cultures of the British Isles. In the Northern Isles of Shetland and Orkney, Old Norse completely replaced the local languages and over time evolved into the now extinct Norn language. Some modern words and names only emerge and contribute to our understanding after a more intense research of linguistic sources from medieval or later records, such as York (Horse Bay), Swansea (Sveinn's Isle) or some of the place names in Normandy like Tocqueville (Toki's farm).
Linguistic and etymological studies continue to provide a vital source of information on the Viking culture, their social structure and history and how they interacted with the people and cultures they met, traded, attacked or lived with in overseas settlements. A lot of Old Norse connections are evident in the modern-day languages of Swedish, Norwegian, Danish, Faroese and Icelandic. Old Norse did not exert any great influence on the Slavic languages in the Viking settlements of Eastern Europe. It has been speculated that the reason for this was the great differences between the two languages, combined with the Rus' Vikings' more peaceful activities in these areas and the fact that they were outnumbered. The Norse named some of the rapids on the Dnieper, but this can hardly be seen from the modern names.
The Norse of the Viking Age could read and write and used a non-standardised alphabet, called runor, built upon sound values. While there are few remains of runic writing on paper from the Viking era, thousands of stones with runic inscriptions have been found where Vikings lived. They are usually in memory of the dead, though not necessarily placed at graves. The use of runor survived into the 15th century, used in parallel with the Latin alphabet.
The runestones are unevenly distributed in Scandinavia: Denmark has 250 runestones, Norway has 50 while Iceland has none. Sweden has as many as between 1,700 and 2,500 depending on definition. The Swedish district of Uppland has the highest concentration with as many as 1,196 inscriptions in stone, whereas Södermanland is second with 391.
The majority of runic inscriptions from the Viking period are found in Sweden. Many runestones in Scandinavia record the names of participants in Viking expeditions, such as the Kjula runestone that tells of extensive warfare in Western Europe and the Turinge Runestone, which tells of a war band in Eastern Europe.
Other runestones mention men who died on Viking expeditions. Among them are the England runestones (Swedish: Englandsstenarna), a group of about 30 runestones in Sweden that refer to Viking Age voyages to England. They constitute one of the largest groups of runestones that mention voyages to other countries, and they are comparable in number only to the approximately 30 Greece Runestones and the 26 Ingvar Runestones, the latter referring to a Viking expedition to the Middle East. They were engraved in Old Norse with the Younger Futhark.
The Jelling stones date from between 960 and 985. The older, smaller stone was raised by King Gorm the Old, the last pagan king of Denmark, as a memorial honouring Queen Thyre. The larger stone was raised by his son, Harald Bluetooth, to celebrate the conquest of Denmark and Norway and the conversion of the Danes to Christianity. It has three sides: one with an animal image, one with an image of the crucified Jesus Christ, and a third bearing the following inscription:
King Haraldr ordered this monument made in memory of Gormr, his father, and in memory of Thyrvé, his mother; that Haraldr who won for himself all of Denmark and Norway and made the Danes Christian.
Runestones attest to voyages to locations such as Bath, Greece (how the Vikings referred to the Byzantine territories generally), Khwaresm, Jerusalem, Italy (as Langobardland), Serkland (i.e. the Muslim world), England (including London), and various places in Eastern Europe. Viking Age inscriptions have also been discovered on the Manx runestones on the Isle of Man.
The last known people to use the Runic alphabet were an isolated group of people known as the Elfdalians, that lived in the locality of Älvdalen in the Swedish province of Dalarna. They spoke the language of Elfdalian, the language unique to Älvdalen. The Elfdalian language differentiates itself from the other Scandinavian languages as it evolved much closer to Old Norse. The people of Älvdalen stopped using runes as late as the 1920s. Usage of runes therefore survived longer in Älvdalen than anywhere else in the world. The last known record of the Elfdalian Runes is from 1929; they are a variant of the Dalecarlian runes, runic inscriptions that were also found in Dalarna.
Traditionally regarded as a Swedish dialect, but by several criteria more closely related to West Scandinavian dialects, Elfdalian is a separate language by the standard of mutual intelligibility. Although there is no mutual intelligibility with Swedish, due to schools and public administration in Älvdalen being conducted in Swedish, native speakers are bilingual and speak Swedish at a native level. Residents in the area who speak only Swedish as their sole native language, neither speaking nor understanding Elfdalian, are also common. Älvdalen can be said to have had its own alphabet during the 17th and 18th centuries. Today there are about 2,000-3,000 native speakers of Elfdalian.
There are numerous burial sites associated with Vikings throughout Europe and their sphere of influence--in Scandinavia, the British Isles, Ireland, Greenland, Iceland, Faeroe Islands, Germany, The Baltic, Russia, etc. The burial practices of the Vikings were quite varied, from dug graves in the ground, to tumuli, sometimes including so-called ship burials.
According to written sources, most of the funerals took place at sea. The funerals involved either burial or cremation, depending on local customs. In the area that is now Sweden, cremations were predominant; in Denmark burial was more common; and in Norway both were common. Viking barrows are one of the primary sources of evidence for circumstances in the Viking Age. The items buried with the dead give some indication as to what was considered important to possess in the afterlife. It is unknown what mortuary services were given to dead children by the Vikings. Some of the most important burial sites for understanding the Vikings include:
There have been several archaeological finds of Viking ships of all sizes, providing knowledge of the craftsmanship that went into building them. There were many types of Viking ships, built for various uses; the best-known type is probably the longship. Longships were intended for warfare and exploration, designed for speed and agility, and were equipped with oars to complement the sail, making navigation possible independently of the wind. The longship had a long, narrow hull and shallow draught to facilitate landings and troop deployments in shallow water. Longships were used extensively by the Leidang, the Scandinavian defence fleets. The longship allowed the Norse to go Viking, which might explain why this type of ship has become almost synonymous with the concept of Vikings.
The Vikings built many unique types of watercraft, often used for more peaceful tasks. The knarr was a dedicated merchant vessel designed to carry cargo in bulk. It had a broader hull, deeper draught, and a small number of oars (used primarily to manoeuvre in harbours and similar situations). One Viking innovation was the 'beitass', a spar mounted to the sail that allowed their ships to sail effectively against the wind. It was common for seafaring Viking ships to tow or carry a smaller boat to transfer crews and cargo from the ship to shore.
Ships were an integral part of the Viking culture. They facilitated everyday transportation across seas and waterways, exploration of new lands, raids, conquests, and trade with neighbouring cultures. They also held a major religious importance. People with high status were sometimes buried in a ship along with animal sacrifices, weapons, provisions and other items, as evidenced by the buried vessels at Gokstad and Oseberg in Norway and the excavated ship burial at Ladby in Denmark. Ship burials were also practised by Vikings abroad, as evidenced by the excavations of the Salme ships on the Estonian island of Saaremaa.
Well-preserved remains of five Viking ships were excavated from Roskilde Fjord in the late 1960s, representing both the longship and the knarr. The ships were scuttled there in the 11th century to block a navigation channel and thus protect Roskilde, then the Danish capital, from seaborne assault. The remains of these ships are on display at the Viking Ship Museum in Roskilde.
In 2019, archaeologists uncovered two Viking boat graves in Gamla Uppsala. They also discovered that one of the boats still holds the remains of a man, a dog, and a horse, along with other items. This has shed light on death rituals of Viking communities in the region.
Viking society was divided into the three socio-economic classes: Thralls, Karls and Jarls. This is described vividly in the Eddic poem of Rígsþula, which also explains that it was the God Ríg--father of mankind also known as Heimdallr--who created the three classes. Archaeology has confirmed this social structure.
Thralls were the lowest ranking class and were slaves. Slaves comprised as much as a quarter of the population. Slavery was of vital importance to Viking society, for everyday chores and large scale construction and also to trade and the economy. Thralls were servants and workers in the farms and larger households of the Karls and Jarls, and they were used for constructing fortifications, ramps, canals, mounds, roads and similar hard work projects. According to the Rigsthula, Thralls were despised and looked down upon. New thralls were supplied by either the sons and daughters of thralls or captured abroad. The Vikings often deliberately captured many people on their raids in Europe, to enslave them as thralls. The thralls were then brought back home to Scandinavia by boat, used on location or in newer settlements to build needed structures, or sold, often to the Arabs in exchange for silver. Other names for thrall were 'træl' and 'ty'.
Karls were free peasants. They owned farms, land and cattle and engaged in daily chores like ploughing the fields, milking the cattle, building houses and wagons, but used thralls to make ends meet. Other names for Karls were 'bonde' or simply free men.
The Jarls were the aristocracy of the Viking society. They were wealthy and owned large estates with huge longhouses, horses and many thralls. The thralls did most of the daily chores, while the Jarls did administration, politics, hunting, sports, visited other Jarls or went abroad on expeditions. When a Jarl died and was buried, his household thralls were sometimes sacrificially killed and buried next to him, as many excavations have revealed.
In daily life, there were many intermediate positions in the overall social structure and it is believed that there must have been some social mobility. These details are unclear, but titles and positions like hauldr, thegn, landmand, show mobility between the Karls and the Jarls.
Other social structures included the communities of félag in both the civil and the military spheres, to which its members (called félagi) were obliged. A félag could be centred around certain trades, a common ownership of a sea vessel or a military obligation under a specific leader. Members of the latter were referred to as drenge, one of the words for warrior. There were also official communities within towns and villages, the overall defence, religion, the legal system and the Things.
Like elsewhere in medieval Europe, most women in Viking society were subordinate to their husbands and fathers and had little political power. However, the written sources portray free Viking women as having independence and rights. Viking women generally appear to have had more freedom than women elsewhere, as illustrated in the Icelandic Grágás and the Norwegian Frostating laws and Gulating laws.
Most free Viking women were housewives, and the woman's standing in society was linked to that of her husband. Marriage gave a woman a degree of economic security and social standing encapsulated in the title húsfreyja (lady of the house). Norse laws assert the housewife's authority over the 'indoor household'. She had the important roles of managing the farm's resources, conducting business, as well as child-rearing, although some of this would be shared with her husband.
After the age of 20, an unmarried woman, referred to as maer and mey, reached legal majority and had the right to decide her place of residence and was regarded as her own person before the law. An exception to her independence was the right to choose a husband, as marriages were normally arranged by the family. The groom would pay a bride-price (mundr) to the bride's family, and the bride brought assets into the marriage, as a dowry. A married woman could divorce her husband and remarry.
Concubinage was also part of Viking society, whereby a woman could live with a man and have children with him without marrying; such a woman was called a frilla. Usually she would be the mistress of a wealthy and powerful man who also had a wife. The wife had authority over the mistresses if they lived in her household. Through her relationship to a man of higher social standing, a concubine and her family could advance socially, although her position was less secure than that of a wife. Children born inside and outside marriage were not sharply distinguished: both had the right to inherit property from their parents, and there were no "legitimate" or "illegitimate" children as such, although children born in wedlock had more inheritance rights than those born out of wedlock.
A woman had the right to inherit part of her husband's property upon his death, and widows enjoyed the same independent status as unmarried women. The paternal aunt, paternal niece and paternal granddaughter, referred to as odalkvinna, all had the right to inherit property from a deceased man. A woman with no husband, sons or male relatives could inherit not only property but also the position as head of the family when her father or brother died. Such a woman was referred to as Baugrygr, and she exercised all the rights afforded to the head of a family clan until she married, whereupon her rights were transferred to her new husband.
Women had religious authority and were active as priestesses (gydja) and oracles (sejdkvinna). They were active within art as poets (skalder) and rune masters, and as merchants and medicine women. There may also have been female entrepreneurs, who worked in textile production. Women may also have been active within military office: the tales about shieldmaidens are unconfirmed, but some archaeological finds such as the Birka female Viking warrior may indicate that at least some women in military authority existed. These liberties gradually disappeared after the introduction of Christianity, and from the late 13th-century, they are no longer mentioned.
Examinations of Viking Age burials suggest that women lived longer than in earlier times, with nearly all living well past the age of 35. Female graves from before the Viking Age in Scandinavia hold a proportionally large number of remains from women aged 20 to 35, presumably due to complications of childbirth.
Scandinavian Vikings were similar in appearance to modern Scandinavians; "their skin was fair and the hair color varied between blond, dark and reddish". Genetic studies show that people were mostly blond in what is now eastern Sweden, while red hair was mostly found in western Scandinavia. Most Viking men had shoulder-length hair and beards, and slaves (thralls) were usually the only men with short hair. The length varied according to personal preference and occupation. Men involved in warfare, for example, may have had slightly shorter hair and beards for practical reasons. Men in some regions bleached their hair a golden saffron color. Females also had long hair, with girls often wearing it loose or braided and married women often wearing it in a bun. The average height is estimated to have been 67 inches (5 ft 7 in) for men and 62 inches (5 ft 2 in) for women.
The three classes were easily recognisable by their appearances. Men and women of the Jarls were well groomed with neat hairstyles and expressed their wealth and status by wearing expensive clothes (often silk) and well crafted jewellery like brooches, belt buckles, necklaces and arm rings. Almost all of the jewellery was crafted in specific designs unique to the Norse (see Viking art). Finger rings were seldom used and earrings were not used at all, as they were seen as a Slavic phenomenon. Most Karls expressed similar tastes and hygiene, but in a more relaxed and inexpensive way.
Archaeological finds from Scandinavia and Viking settlements in the British Isles support the idea of the well groomed and hygienic Viking. Burial with grave goods was a common practice in the Scandinavian world, through the Viking Age and well past the Christianization of the Norse peoples. Within these burial sites and homesteads, combs, often made from antler, are a common find. The manufacturing of such antler combs was common, as at the Viking settlement at Dublin hundreds of examples of combs from the tenth century have survived, suggesting that grooming was a common practice. The manufacturing of such combs was also widespread throughout the Viking world, as examples of similar combs have been found at Viking settlements in Ireland, England, and Scotland. The combs share a common visual appearance as well, with the extant examples often decorated with linear, interlacing, and geometric motifs, or other forms of ornamentation depending on the comb's period and type, but stylistically similar to Viking Age art. The practice of grooming was a concern for all levels of Viking age society, as grooming products, such as combs, have been found in common graves as well as aristocratic ones.
The sagas tell about the diet and cuisine of the Vikings, but first hand evidence, like cesspits, kitchen middens and garbage dumps have proved to be of great value and importance. Undigested remains of plants from cesspits at Coppergate in York have provided much information in this respect. Overall, archaeo-botanical investigations have been undertaken increasingly in recent decades, as a collaboration between archaeologists and palaeoethno-botanists. This new approach sheds light on the agricultural and horticultural practices of the Vikings and their cuisine.
The combined information from various sources suggests a diverse cuisine and ingredients. Meat products of all kinds, such as cured, smoked and whey-preserved meat, sausages, and boiled or fried fresh meat cuts, were prepared and consumed. There were plenty of seafood, bread, porridges, dairy products, vegetables, fruits, berries and nuts. Alcoholic drinks like beer, mead, bjórr (a strong fruit wine) and, for the rich, imported wine, were served.
Certain livestock were typical and unique to the Vikings, including the Icelandic horse, Icelandic cattle, a plethora of sheep breeds, the Danish hen and the Danish goose. The Vikings in York mostly ate beef, mutton, and pork with small amounts of horse meat. Most of the beef and horse leg bones were found split lengthways, to extract the marrow. The mutton and swine were cut into leg and shoulder joints and chops. The frequent remains of pig skull and foot bones found on house floors indicate that brawn and trotters were also popular. Hens were kept for both their meat and eggs, and the bones of game birds such as black grouse, golden plover, wild ducks, and geese have also been found.
Seafood was important, in some places even more so than meat. Whales and walrus were hunted for food in Norway and the north-western parts of the North Atlantic region, and seals were hunted nearly everywhere. Oysters, mussels and shrimps were eaten in large quantities and cod and salmon were popular fish. In the southern regions, herring was also important.
Milk and buttermilk were popular, both as cooking ingredients and drinks, but were not always available, even at farms. Milk came from cows, goats and sheep, with priorities varying from location to location, and fermented milk products like skyr or surmjölk were produced as well as butter and cheese.
Food was often salted and enhanced with spices, some of which were imported like black pepper, while others were cultivated in herb gardens or harvested in the wild. Home grown spices included caraway, mustard and horseradish as evidenced from the Oseberg ship burial or dill, coriander, and wild celery, as found in cesspits at Coppergate in York. Thyme, juniper berry, sweet gale, yarrow, rue and peppercress were also used and cultivated in herb gardens.
Vikings collected and ate fruits, berries and nuts. Apples (wild crab apples), plums and cherries were part of the diet, as were rose hips and raspberry, wild strawberry, blackberry, elderberry, rowan, hawthorn and various wild berries, specific to the locations. Hazelnuts were an important part of the diet in general and large amounts of walnut shells have been found in cities like Hedeby. The shells were used for dyeing, and it is assumed that the nuts were consumed.
The invention and introduction of the mouldboard plough revolutionised agriculture in Scandinavia in the early Viking Age and made it possible to farm even poor soils. In Ribe, grains of rye, barley, oat and wheat dated to the 8th century have been found and examined, and are believed to have been cultivated locally. Grains and flour were used for making porridges, some cooked with milk, some cooked with fruit and sweetened with honey, and also various forms of bread. Remains of bread, primarily from Birka in Sweden, were made of barley and wheat. It is unclear if the Norse leavened their breads, but their ovens and baking utensils suggest that they did. Flax was a very important crop for the Vikings: it was used for oil extraction, food consumption and most importantly the production of linen. More than 40% of all known textile recoveries from the Viking Age can be traced as linen. The actual proportion was probably even higher, as linen is poorly preserved compared with wool, for example.
The quality of food for common people was not always particularly high. The research at Coppergate shows that the Vikings in York made bread from wholemeal flour--probably both wheat and rye--but with the seeds of cornfield weeds included. Corncockle (Agrostemma) would have made the bread dark-coloured, but the seeds are poisonous, and people who ate the bread might have become ill. Seeds of carrots, parsnip, and brassicas were also discovered, but they were poor specimens and tended to come from white carrots and bitter-tasting cabbages. The rotary querns often used in the Viking Age left tiny stone fragments (often from basalt rock) in the flour, which when eaten wore down the teeth. The effects of this can be seen on skeletal remains of that period.
Sports were widely practised and encouraged by the Vikings. Sports that involved weapons training and developing combat skills were popular. These included spear and stone throwing, building and testing physical strength through wrestling (see glima), fist fighting, and stone lifting. In areas with mountains, mountain climbing was practised as a sport. Agility and balance were built and tested by running and jumping for sport, and there is mention of a sport that involved jumping from oar to oar on the outside of a ship's railing as it was being rowed. Swimming was a popular sport and Snorri Sturluson describes three types: diving, long-distance swimming, and a contest in which two swimmers try to dunk one another. Children often participated in some of the sport disciplines and women have also been mentioned as swimmers, although it is unclear if they took part in competition. King Olaf Tryggvason was hailed as a master of both mountain climbing and oar-jumping, and was said to have excelled in the art of knife juggling as well.
Horse fighting was practised for sport, although the rules are unclear. It appears to have involved two stallions pitted against each other, within smell and sight of fenced-off mares. Whatever the rules were, the fights often resulted in the death of one of the stallions.
Icelandic sources refer to the sport of knattleik. A ball game akin to hockey, knattleik involved a bat and a small hard ball and was usually played on a smooth field of ice. The rules are unclear, but it was popular with both adults and children, even though it often led to injuries. Knattleik appears to have been played only in Iceland, where it attracted many spectators, as did horse fighting.
Hunting, as a sport, was limited to Denmark, where it was not regarded as an important occupation. Birds, deer, hares and foxes were hunted with bow and spear, and later with crossbows. The techniques were stalking, snare and traps and par force hunting with dog packs.
Board games and dice games were played as a popular pastime at all levels of society. Preserved gaming pieces and boards show game boards made of easily available materials like wood, with game pieces manufactured from stone, wood or bone, while other finds include elaborately carved boards and game pieces of glass, amber, antler or walrus tusk, together with materials of foreign origin, such as ivory. The Vikings played several types of tafl games; hnefatafl, nitavl (nine men's morris) and the less common kvatrutafl. Chess also appeared at the end of the Viking Age. Hnefatafl is a war game, in which the object is to capture the king piece--a large hostile army threatens and the king's men have to protect the king. It was played on a board with squares using black and white pieces, with moves made according to dice rolls. The Ockelbo Runestone shows two men engaged in Hnefatafl, and the sagas suggest that money or valuables could have been involved in some dice games.
On festive occasions storytelling, skaldic poetry, music and alcoholic drinks, like beer and mead, contributed to the atmosphere. Music was considered an art form and music proficiency as fitting for a cultivated man. The Vikings are known to have played instruments including harps, fiddles, lyres and lutes.
Experimental archaeology of the Viking Age is a flourishing branch and several places have been dedicated to this technique, such as Jorvik Viking Centre in the United Kingdom, Sagnlandet Lejre and Ribe Viking Center in Denmark, Foteviken Museum in Sweden or Lofotr Viking Museum in Norway. Viking-age reenactors have undertaken experimental activities such as iron smelting and forging using Norse techniques at Norstead in Newfoundland for example.
On 1 July 2007, the reconstructed Viking ship Skuldelev 2, renamed Sea Stallion, began a journey from Roskilde to Dublin. The remains of that ship and four others were discovered during a 1962 excavation in the Roskilde Fjord. Tree-ring analysis has shown the ship was built of oak in the vicinity of Dublin in about 1042. Seventy multi-national crew members sailed the ship back to its home, and Sea Stallion arrived outside Dublin's Custom House on 14 August 2007. The purpose of the voyage was to test and document the seaworthiness, speed, and manoeuvrability of the ship on the rough open sea and in coastal waters with treacherous currents. The crew tested how the long, narrow, flexible hull withstood the tough ocean waves. The expedition also provided valuable new information on Viking longships and society. The ship was built using Viking tools, materials, and much the same methods as the original ship.
Other vessels, often replicas of the Gokstad ship (full- or half-scale) or Skuldelev have been built and tested as well. The Snorri (a Skuldelev I Knarr), was sailed from Greenland to Newfoundland in 1998.
Elements of a Scandinavian identity and practices were maintained in settler societies, but they could be quite distinct as the groups assimilated into the neighboring societies. Assimilation to the Frankish culture in Normandy for example was rapid. Links to a Viking identity remained longer in the remote islands of Iceland and the Faroes.
Knowledge about the arms and armour of the Viking age is based on archaeological finds, pictorial representation, and to some extent on the accounts in the Norse sagas and Norse laws recorded in the 13th century. According to custom, all free Norse men were required to own weapons and were permitted to carry them at all times. These arms indicated a Viking's social status: a wealthy Viking had a complete ensemble of a helmet, shield, mail shirt, and sword. However, swords were rarely used in battle, probably not sturdy enough for combat and most likely only used as symbolic or decorative items.
A typical bóndi (freeman) was more likely to fight with a spear and shield, and most also carried a seax as a utility knife and side-arm. Bows were used in the opening stages of land battles and at sea, but they tended to be considered less "honourable" than melee weapons. Vikings were relatively unusual for the time in their use of axes as a main battle weapon. The Húscarls, the elite guard of King Cnut (and later of King Harold II) were armed with two-handed axes that could split shields or metal helmets with ease.
The warfare and violence of the Vikings were often motivated and fuelled by their beliefs in Norse religion, focusing on Thor and Odin, the gods of war and death. In combat, it is believed that the Vikings sometimes engaged in a disordered style of frenetic, furious fighting known as berserkergang, leading them to be termed berserkers. Such tactics may have been deployed intentionally by shock troops, and the berserk-state may have been induced through ingestion of materials with psychoactive properties, such as the hallucinogenic mushrooms, Amanita muscaria, or large amounts of alcohol.
Except for the major trading centres of Ribe, Hedeby and the like, the Viking world was unfamiliar with the use of coinage and was based on a so-called bullion economy, that is, on the weight of precious metals. Silver was the most common metal in the economy, although gold was also used to some extent. Silver circulated in the form of bars, or ingots, as well as in the form of jewellery and ornaments. A large number of silver hoards from the Viking Age have been uncovered, both in Scandinavia and the lands they settled. Traders carried small scales, enabling them to measure weight very accurately, so it was possible to have a very precise system of trade and exchange, even without a regular coinage.
Organized trade covered everything from ordinary items in bulk to exotic luxury products. The Viking ship designs, like that of the knarr, were an important factor in their success as merchants. Imported goods from other cultures included:
To counter these valuable imports, the Vikings exported a large variety of goods. These goods included:
Other exports included weapons, walrus ivory, wax, salt and cod. As one of the more exotic exports, hunting birds were sometimes provided from Norway to the European aristocracy, from the 10th century.
Many of these goods were also traded within the Viking world itself, as well as goods such as soapstone and whetstone. Soapstone was traded with the Norse on Iceland and in Jutland, who used it for pottery. Whetstones were traded and used for sharpening weapons, tools and knives. There are indications from Ribe and surrounding areas that the extensive medieval trade in oxen and cattle from Jutland (see Ox Road) reached as far back as c. 720 AD. This trade satisfied the Vikings' need for leather and meat to some extent, and perhaps hides for parchment production on the European mainland. Wool was also very important as a domestic product for the Vikings, to produce warm clothing for the cold Scandinavian and Nordic climate, and for sails. Sails for Viking ships required large amounts of wool, as evidenced by experimental archaeology. There are archaeological signs of organised textile production in Scandinavia, reaching as far back as the early Iron Age. Artisans and craftsmen in the larger towns were supplied with antlers from organised hunting with large-scale reindeer traps in the far north. They were used as raw material for making everyday utensils like combs.
In England the Viking Age began dramatically on 8 June 793 when Norsemen destroyed the abbey on the island of Lindisfarne. The devastation of Northumbria's Holy Island shocked and alerted the royal courts of Europe to the Viking presence. "Never before has such an atrocity been seen," declared the Northumbrian scholar Alcuin of York. Medieval Christians in Europe were totally unprepared for the Viking incursions and could find no explanation for their arrival and the accompanying suffering they experienced at their hands save the "Wrath of God". More than any other single event, the attack on Lindisfarne demonised perception of the Vikings for the next twelve centuries. Not until the 1890s did scholars outside Scandinavia begin to seriously reassess the achievements of the Vikings, recognizing their artistry, technological skills, and seamanship.
Norse Mythology, sagas, and literature tell of Scandinavian culture and religion through tales of heroic and mythological heroes. Early transmission of this information was primarily oral, and later texts relied on the writings and transcriptions of Christian scholars, including the Icelanders Snorri Sturluson and Sæmundur fróði. Many of these sagas were written in Iceland, and most of them, even if they had no Icelandic provenance, were preserved there after the Middle Ages due to the continued interest of Icelanders in Norse literature and law codes.
The 200-year Viking influence on European history is filled with tales of plunder and colonisation, and the majority of these chronicles came from western witnesses and their descendants. Less common, though equally relevant, are the Viking chronicles that originated in the east, including the Nestor chronicles, Novgorod chronicles, Ibn Fadlan chronicles, Ibn Rusta chronicles, and brief mentions by Photius, patriarch of Constantinople, regarding their first attack on the Byzantine Empire. Other chroniclers of Viking history include Adam of Bremen, who wrote, in the fourth volume of his Gesta Hammaburgensis Ecclesiae Pontificum, "[t]here is much gold here (in Zealand), accumulated by piracy. These pirates, which are called wichingi by their own people, and Ascomanni by our own people, pay tribute to the Danish king." In 991, the Battle of Maldon between Viking raiders and the inhabitants of Maldon in Essex was commemorated with a poem of the same name.
Early modern publications, dealing with what is now called Viking culture, appeared in the 16th century, e.g. Historia de gentibus septentrionalibus (History of the northern people) of Olaus Magnus (1555), and the first edition of the 13th-century Gesta Danorum (Deeds of the Danes), by Saxo Grammaticus, in 1514. The pace of publication increased during the 17th century with Latin translations of the Edda (notably Peder Resen's Edda Islandorum of 1665).
In Scandinavia, the 17th-century Danish scholars Thomas Bartholin and Ole Worm and the Swede Olaus Rudbeck used runic inscriptions and Icelandic sagas as historical sources. An important early British contributor to the study of the Vikings was George Hickes, who published his Linguarum vett. septentrionalium thesaurus (Dictionary of the Old Northern Languages) in 1703-05. During the 18th century, British interest and enthusiasm for Iceland and early Scandinavian culture grew dramatically, expressed in English translations of Old Norse texts and in original poems that extolled the supposed Viking virtues.
The word "viking" was first popularised at the beginning of the 19th century by Erik Gustaf Geijer in his poem, The Viking. Geijer's poem did much to propagate the new romanticised ideal of the Viking, which had little basis in historical fact. The renewed interest of Romanticism in the Old North had contemporary political implications. The Geatish Society, of which Geijer was a member, popularised this myth to a great extent. Another Swedish author who had great influence on the perception of the Vikings was Esaias Tegnér, a member of the Geatish Society, who wrote a modern version of Friðþjófs saga hins froekna, which became widely popular in the Nordic countries, the United Kingdom, and Germany.
Fascination with the Vikings reached a peak during the so-called Viking revival in the late 18th and 19th centuries as a branch of Romantic nationalism. In Britain this was called Septentrionalism, in Germany "Wagnerian" pathos, and in the Scandinavian countries Scandinavism. Pioneering 19th-century scholarly editions of the Viking Age began to reach a small readership in Britain, archaeologists began to dig up Britain's Viking past, and linguistic enthusiasts started to identify the Viking-Age origins of rural idioms and proverbs. The new dictionaries of the Old Norse language enabled the Victorians to grapple with the primary Icelandic sagas.
Until recently, the history of the Viking Age was largely based on Icelandic sagas, the history of the Danes written by Saxo Grammaticus, the Russian Primary Chronicle, and Cogad Gáedel re Gallaib. Few scholars still accept these texts as reliable sources, as historians now rely more on archaeology and numismatics, disciplines that have made valuable contributions toward understanding the period.
The romanticised idea of the Vikings constructed in scholarly and popular circles in northwestern Europe in the 19th and early 20th centuries was a potent one, and the figure of the Viking became a familiar and malleable symbol in different contexts in the politics and political ideologies of 20th-century Europe. In Normandy, which had been settled by Vikings, the Viking ship became an uncontroversial regional symbol. In Germany, awareness of Viking history in the 19th century had been stimulated by the border dispute with Denmark over Schleswig-Holstein and the use of Scandinavian mythology by Richard Wagner. The idealised view of the Vikings appealed to Germanic supremacists who transformed the figure of the Viking in accordance with the ideology of a Germanic master race. Building on the linguistic and cultural connections between Norse-speaking Scandinavians and other Germanic groups in the distant past, Scandinavian Vikings were portrayed in Nazi Germany as a pure Germanic type. The cultural phenomenon of Viking expansion was re-interpreted for use as propaganda to support the extreme militant nationalism of the Third Reich, and ideologically informed interpretations of Viking paganism and the Scandinavian use of runes were employed in the construction of Nazi mysticism. Other political organisations of the same ilk, such as the former Norwegian fascist party Nasjonal Samling, similarly appropriated elements of the modern Viking cultural myth in their symbolism and propaganda.
Soviet and earlier Slavophile historians emphasized a Slavic-rooted foundation in contrast to the Normanist theory of the Vikings conquering the Slavs and founding the Kievan Rus'. They accused Normanist theory proponents of distorting history by depicting the Slavs as undeveloped primitives. In contrast, Soviet historians stated that the Slavs laid the foundations of their statehood long before the Norman/Viking raids, while the Norman/Viking invasions only served to hinder the historical development of the Slavs. They argued that Rus' composition was Slavic and that Rurik and Oleg's success was rooted in their support from within the local Slavic aristocracy. After the dissolution of the USSR, Novgorod acknowledged its Viking history by incorporating a Viking ship into its logo.
Led by the operas of German composer Richard Wagner, such as Der Ring des Nibelungen, Vikings and the Romanticist Viking Revival have inspired many creative works. These have included novels directly based on historical events, such as Frans Gunnar Bengtsson's The Long Ships (which was also released as a 1963 film), and historical fantasies such as the film The Vikings, Michael Crichton's Eaters of the Dead (movie version called The 13th Warrior), and the comedy film Erik the Viking. The vampire Eric Northman, in the HBO TV series True Blood, was a Viking prince before being turned into a vampire. Vikings appear in several books by the Danish American writer Poul Anderson, while British explorer, historian, and writer Tim Severin authored a trilogy of novels in 2005 about a young Viking adventurer Thorgils Leifsson, who travels around the world.
In 1962, American comic book writer Stan Lee and his brother Larry Lieber, together with Jack Kirby, created the Marvel Comics superhero Thor, which they based on the Norse god of the same name. The character is featured in the 2011 Marvel Studios film Thor and its sequels Thor: The Dark World and Thor: Ragnarok. The character also appears in the 2012 film The Avengers and its associated animated series.
The appearance of Vikings within popular media and television has seen a resurgence in recent decades, especially with the History Channel's series Vikings (2013), created by Michael Hirst. The show has a loose grounding in historical facts and sources, but bases itself more on literary sources, such as the fornaldarsaga Ragnars saga loðbrókar, itself more legend than fact, and Old Norse Eddic and Skaldic poetry. The events of the show frequently make references to the Völuspá, an Eddic poem describing the creation of the world, often directly referencing specific lines of the poem in the dialogue. The show portrays some of the social realities of the medieval Scandinavian world, such as slavery and the greater role of women within Viking society. The show also addresses the topic of gender equity in Viking society with the inclusion of shield maidens through the character Lagertha, also based on a legendary figure. Recent archaeological interpretations and osteological analysis of previous excavations of Viking burials have given support to the idea of the Viking woman warrior, namely through the excavation and DNA study of the Birka female Viking warrior. However, the conclusions remain contentious.
Vikings have served as an inspiration for numerous video games, such as The Lost Vikings (1993), Age of Mythology (2002), and For Honor (2017). All three Vikings from The Lost Vikings series--Erik the Swift, Baleog the Fierce, and Olaf the Stout--appeared as a playable hero in the crossover title Heroes of the Storm (2015). The Elder Scrolls V: Skyrim (2011) is an action role-playing video game heavily inspired by Viking culture. Vikings are set to be the lead focus of the 2020 video game Assassin's Creed Valhalla, which is set in 873 AD, and recounts an alternative history of the Viking invasion of Britain.
Modern reconstructions of Viking mythology have shown a persistent influence in late 20th- and early 21st-century popular culture in some countries, inspiring comics, movies, television series, role-playing games, computer games, and music, including Viking metal, a subgenre of heavy metal music.
Since the 1960s, there has been rising enthusiasm for historical reenactment. While the earliest groups had little claim for historical accuracy, the seriousness and accuracy of reenactors has increased. The largest such groups include The Vikings and Regia Anglorum, though many smaller groups exist in Europe, North America, New Zealand, and Australia. Many reenactor groups participate in live-steel combat, and a few have Viking-style ships or boats.
Apart from two or three representations of (ritual) helmets--with protrusions that may be either stylised ravens, snakes, or horns--no depiction of the helmets of Viking warriors, and no preserved helmet, has horns. The formal, close-quarters style of Viking combat (either in shield walls or aboard "ship islands") would have made horned helmets cumbersome and hazardous to the warrior's own side.
Historians therefore believe that Viking warriors did not wear horned helmets; whether such helmets were used in Scandinavian culture for other, ritual purposes, remains unproven. The general misconception that Viking warriors wore horned helmets was partly promulgated by the 19th-century enthusiasts of Götiska Förbundet, founded in 1811 in Stockholm. They promoted the use of Norse mythology as the subject of high art and other ethnological and moral aims.
The Vikings were often depicted with winged helmets and in other clothing taken from Classical antiquity, especially in depictions of Norse gods. This was done to legitimise the Vikings and their mythology by associating it with the Classical world, which had long been idealised in European culture.
The latter-day mythos created by national romantic ideas blended the Viking Age with aspects of the Nordic Bronze Age some 2,000 years earlier. Horned helmets from the Bronze Age were shown in petroglyphs and appeared in archaeological finds (see Bohuslän and Vikso helmets). They were probably used for ceremonial purposes.
Viking helmets were conical, made from hard leather with wood and metallic reinforcement for regular troops. The iron helmet with mask and mail was for the chieftains, based on the previous Vendel-age helmets from central Sweden. The only original Viking helmet discovered is the Gjermundbu helmet, found in Norway. This helmet is made of iron and has been dated to the 10th century.
The image of wild-haired, dirty savages sometimes associated with the Vikings in popular culture is a distorted picture of reality. Viking tendencies were often misreported, and the work of Adam of Bremen, among others, told largely disputable tales of Viking savagery and uncleanliness.
There is no evidence that Vikings drank out of the skulls of vanquished enemies. This was a misconception based on a passage in the skaldic poem Krákumál speaking of heroes drinking from ór bjúgviðum hausa (branches of skulls). This was a reference to drinking horns, but was mistranslated in the 17th century as referring to the skulls of the slain.
Margaryan et al. 2020 analyzed 442 Viking world individuals from various archaeological sites in Europe. They were found to be closely related to modern Scandinavians. The Y-DNA composition of the individuals in the study was also similar to that of modern Scandinavians. The most common Y-DNA haplogroup was I1 (95 samples), followed by R1b (84 samples) and R1a, especially (but not exclusively) of the Scandinavian R1a-Z284 subclade (61 samples). The study showed what many historians have hypothesized, that it was common for Norse settlers to marry foreign women. Some individuals from the study, such as those found in Foggia, display typical Scandinavian Y-DNA haplogroups but also Southern European autosomal ancestry, suggesting that they were the descendants of Viking settler males and local women. The five individuals sampled from Foggia were likely Normans. The same pattern of a combination of Scandinavian Y-DNA and local autosomal ancestry is seen in other samples from the study, for example Varangians buried near Lake Ladoga and Vikings in England, suggesting that Viking men had married into local families in those places too.
Unsurprisingly, and very much consistent with historical records, the study found evidence of a major influx of Danish Viking ancestry into England, a Swedish influx into Estonia and Finland, and a Norwegian influx into Ireland, Iceland and Greenland during the Viking Age.
Margaryan et al. 2020 examined the skeletal remains of 42 individuals from the Salme ship burials in Estonia. The skeletal remains belonged to warriors killed in battle who were later buried together with numerous valuable weapons and armour. DNA testing and isotope analysis revealed that the men came from central Sweden.
Female descent studies show evidence of Norse descent in areas closest to Scandinavia, such as the Shetland and Orkney islands. Inhabitants of lands farther away show most Norse descent in the male Y-chromosome lines.
A specialised genetic and surname study in Liverpool showed marked Norse heritage: up to 50% of males of families that lived there before the years of industrialisation and population expansion. High percentages of Norse inheritance--tracked through the R-M420 haplotype--were also found among males in the Wirral and West Lancashire. This was similar to the percentage of Norse inheritance found among males in the Orkney Islands.
Recent research suggests that the Celtic warrior Somerled, who drove the Vikings out of western Scotland and was the progenitor of Clan Donald, may have been of Viking descent, a member of haplogroup R-M420.
Margaryan et al. 2020 examined an elite warrior burial from Bodzia (Poland) dated to 1010-1020 AD. The cemetery in Bodzia is exceptional in terms of Scandinavian and Kievan Rus links. The Bodzia man (sample VK157, or burial E864/I) was not a simple warrior from the princely retinue, but he belonged to the princely family himself. His burial is the richest one in the whole cemetery; moreover, strontium analysis of his tooth enamel shows he was not local. It is assumed that he came to Poland with the Prince of Kiev, Sviatopolk the Accursed, and met a violent death in combat. This corresponds to the events of 1018 AD when Sviatopolk himself disappeared after having retreated from Kiev to Poland. It cannot be excluded that the Bodzia man was Sviatopolk himself, as the genealogy of the Rurikids at this period is extremely sketchy and the dates of birth of many princes of this dynasty may be only approximate. The Bodzia man carried haplogroup I1-S2077 and had both Scandinavian ancestry and Russian admixture.
The term 'Viking'... came to be used more especially of those warriors who left their homes in Scandinavia and made raids on the chief European countries. This is the narrow, and technically the only correct use of the term 'Viking,' but in such expressions as 'Viking civilisation,' 'the Viking age,' 'the Viking movement,' 'Viking influence,' the word has come to have a wider significance and is used as a concise and convenient term for describing the whole of the civilisation, activity and influence of the Scandinavian peoples, at a particular period in their history, and to apply the term 'Viking' in its narrower sense to these movements would be as misleading as to write an account of the age of Elizabeth and label it 'The Buccaneers.'
Viking is not merely another way of referring to a medieval Scandinavian. Technically, the word has a more specific meaning, and it was used (only infrequently by contemporaries of the Vikings) to refer to those Scandinavians, usually men, who attacked their contemporaries...
Strictly speaking, therefore, the term Viking should only be applied to men actually engaged in these violent pursuits, and not to every contemporary Scandinavian...
The Viking appellation... refers to an activity, not to an ethnic group
The term "Viking" is applied today to Scandinavians who left their homes intent on raiding or conquest, and their descendants, during a period extending roughly from a.d. 800 to 1050.
The term Viking... is now commonly applied to those Norsemen, Danes and Swedes who harried Europe from the eighth to the eleventh centuries...
Viking... Scandinavian words used to describe the seafaring raiders from Norway, Sweden, and Denmark who ravaged the coasts of Europe from about 800 ad onwards.
Viking is an Old Norse term, of disputed derivation, which only came into common usage in the 19th cent. to describe peoples of Scandinavian origin who, as raiders, settlers, and traders, had major and long-lasting effects on northern Europe and the Atlantic seaboards between the late 8th and 11th cents.
Vikings: Any of the Scandinavian seafaring pirates and traders who raided and settled in many parts of NW Europe in the 8th-11th centuries...
Viking... Any of the Scandinavian pirates who plundered the coasts of Europe from the 8th to 10th centuries
The Vikings were people who sailed from Scandinavia and attacked villages in most parts of north-western Europe from the 8th to the 11th centuries
Viking... [A]ny of the Danes, Norwegians, and Swedes who raided by sea most of N and W Europe from the 8th to the 11th centuries, later often settling, as in parts of Britain.
Viking... [A]ny of the Scandinavian sea rovers and pirates who ravaged the coasts of Europe from the 8th to the 10th cent.
Viking... [A] person belonging to a race of Scandinavian people who travelled by sea and attacked parts of northern and southern Europe between the 8th and the 11th centuries, often staying to live in places they travelled to.
Viking, also called Norseman or Northman, member of the Scandinavian seafaring warriors who raided and colonized wide areas of Europe from the 9th to the 11th century and whose disruptive influence profoundly affected European history. These pagan Danish, Norwegian, and Swedish warriors were...
Viking society, which had developed by the 9th century, included the peoples that lived in what are now Denmark, Norway, Sweden, and, from the 10th century, Iceland
In many respects, Elfdalian takes up a middle position between East and West Nordic. However, it shares some innovations with West Nordic, but none with East Nordic. This invalidates the claim that Elfdalian split off from Old Swedish.
If you don’t already know: Yes, there is water on the moon. NASA suggests there’s as much as 600 million metric tons of water ice there, which could someday help lunar colonists survive. It could even be turned into an affordable form of rocket fuel (you just have to split water into oxygen and hydrogen, and presto—you have propulsion for spaceflight).
Unfortunately, we’ve never known how much water is actually on the moon, where exactly those reserves are stored, or how to access and harvest it. Nor have scientists ever really understood how water originated there.
We still don’t have answers to these questions, but two new studies published in Nature Astronomy today do suggest that water on the moon is not as hidden away as scientists once thought.
Through the looking glass
The first study reports the detection of water molecules on lunar surfaces exposed to sunlight near the 231-kilometer-wide Clavius crater, thanks to observations made by the Stratospheric Observatory for Infrared Astronomy (SOFIA) run by NASA and the German Aerospace Center. It has long been thought that water would have the best chance of remaining stable in regions of the moon, such as large craters, that are permanently covered in shadows. Such regions and any water they contained, researchers thought, would be protected from temperature disturbances induced by the sun’s rays.
As it turns out, there’s water sitting in broad daylight. “This is the first time we can say with certainty that the water molecule is present on the lunar surface,” says Casey Honniball, a researcher at NASA Goddard Space Flight Center and lead author of the SOFIA study.
The SOFIA observations point to water molecules incorporated into the structure of glass beads, which allows the molecules to withstand sunlight exposure. The amount of water contained in these glassy beads is small, comparable to a 12-ounce bottle of water dispersed through a cubic meter of soil spread across the lunar surface. “We expect the abundance of water to increase as we move closer to the poles,” says Honniball. “But what we observed with SOFIA is the opposite”—the beads were found in a latitudinal region that’s closer to the equator, though that’s not likely to be a global phenomenon.
SOFIA is an airborne observatory built out of a modified 747 that flies high through the atmosphere, so its nine-foot telescope can observe objects in space with minimal disturbance by Earth’s water-heavy atmosphere. This is especially useful for observing in infrared wavelengths, and in this case it helped researchers distinguish molecular water from hydroxyl compounds on the moon.
The glassy water features on the moon were previously found in an investigation on lunar mineralogy conducted in 1969 (thanks to observations made by a balloon observatory). But those observations were not reported and published. “Maybe they did not realize the big discovery they had actually made,” says Honniball.
The amount of water contained in the glassy beads is a bit low to be useful to humans, but it’s possible the concentration is much greater in other areas (the SOFIA study only focused on one area of the moon).
More important, the findings tease the possibility of a “lunar water cycle” that might replenish water reserves on the moon, something that seems barely comprehensible for a world long thought to be dry and dead. “It’s a new area we’ve not really looked at in any great detail before,” says Clive Neal, a planetary geologist at the University of Notre Dame, who was not involved in either study.
The smallest shadows
The second study, however, might be more relevant to NASA’s immediate plans for lunar exploration. The new findings suggest that the moon’s water ice reserves are sustained in what are called “micro cold traps” that are just a centimeter or less in diameter. New 3D models generated using thermal infrared and optical images taken by NASA’s Lunar Reconnaissance Orbiter show that the temperatures in these micro traps are low enough to keep water ice intact. They may be responsible for housing 10 to 20% of the water stored in all the moon’s permanent shadows, for a total area of about 40,000 square kilometers, mostly in regions closer to the poles.
“Instead of just a handful of large cold traps within ‘craters with names,’ there’s a whole galaxy of tiny cold traps spread out over the whole polar region,” says Paul Hayne, a planetary scientist at the University of Colorado, Boulder, the lead author of the study. “Micro cold traps are much more accessible than larger, permanently shadowed regions. Rather than designing missions to venture deep into the shadows, astronauts and rovers could remain in sunlight while extracting water from micro cold traps.” There might be hundreds of millions or even billions of these sites strewn across the lunar surface.
More data makes more mysteries
The studies aren’t perfect. There is no clear explanation yet for how these water-bearing glasses formed. Honniball says they likely originated from meteorites that either generated the water upon impact or delivered it as is. Or they could be the result of ancient volcanic activity. Neal points out the SOFIA study isn’t able to provide a complete picture of why the distribution of glass appears as a function of latitude, or how it might change over a full lunar cycle. Direct observations are needed to confirm what both studies suggest, and to answer the questions they raise.
We might not have to wait long for that kind of data. In the run-up to the Artemis missions intended to take astronauts back to the surface of the moon, NASA plans to launch a suite of robotic missions that would also help characterize the water ice content on the moon. The most high-profile of these missions is VIPER, a rover scheduled for launch in 2022 that’s supposed to prospect for subsurface water ice.
In light of the new findings, NASA might elect to change VIPER’s goal a bit to study surface water as well, and take a closer look at any glass features under the sun or examine how well the micro cold traps might work to preserve water ice. Other NASA payloads, as well as missions run by other countries, are likely to study the contents of surface water more closely. Neal suggests that a lunar exosphere monitoring system would be very useful in unraveling the history of water on the moon and figuring out how a possible lunar water cycle results in stable (or unstable) water on the surface.
“The more we look at the moon, the less we seem to understand,” says Neal. “Now we’ve got a few more reasons to go back and study it. We’ve got to get to the surface and get samples and set up monitoring stations to actually get definitive data to study this kind of cycle.”
The water found on the moon, like that on Earth, came from small meteorites called carbonaceous chondrites in the first 100 million years or so after the solar system formed, researchers from Brown and Case Western Reserve universities and Carnegie Institution of Washington have found.
Evidence discovered within samples of moon dust returned by lunar crews of Apollo 15 and 17 dispels the theory that comets delivered the molecules.
The research is published online in Science Express today.
The discovery's telltale sign is found in the ratio of an isotopic form of hydrogen, called deuterium, to standard hydrogen. The ratios in the Earth's water and in water from specks of volcanic glass trapped in crystals within moon dust match the ratio found in the chondrites. The proportions are far different from those in comet water.
The moon is thought to have formed from a disc of debris left when a giant object hit the Earth 4.5 billion years ago, very early in Earth's history. Scientists have long assumed that the heat from an impact of that size would cause hydrogen and other volatile elements to boil off into space, meaning the moon must have started off completely dry. But recently, NASA spacecraft and new research on samples from the Apollo missions have shown that the moon actually has water, both on and beneath its surface.
By showing that water on the moon and Earth came from the same source, this new study offers yet more evidence that the moon's water has been there all along, or nearly so.
"The simplest explanation for what we found is that there was water on the proto-Earth at the time of the giant impact," said Alberto Saal, a geochemist at Brown University and the study's lead author. "Some of that water survived the impact, and that's what we see in the moon."
Or, the proto-moon and proto-Earth were showered by the same family of carbonaceous chondrites soon after they separated, said James Van Orman, professor of earth, environmental and planetary sciences at Case Western Reserve, and a co-author.
The other authors are Erik Hauri, of the Carnegie Institution, and Malcolm Rutherford, from Brown.
To find the origin of the moon's water, the researchers looked at the trapped volcanic glass, referred to as a melt inclusion. The surrounding olivine crystals prevent water from escaping during an eruption, giving researchers an idea of what the inside of the moon is like.
Research from 2011, led by Hauri, found that the melt inclusions have plenty of water--as much water, in fact, as lavas forming on the Earth's ocean floor. This study aimed to find the origin of that water. To do that, Saal and his colleagues looked at the isotopic composition of the hydrogen trapped in the inclusions.
Using a Cameca NanoSIMS 50L multicollector ion microprobe at Carnegie, the researchers measured the amount of deuterium in the samples compared to the amount of regular hydrogen. Deuterium has an extra neutron.
Water molecules originating from different places in the solar system have different amounts of deuterium. In general, things formed closer to the sun have less deuterium than things formed further out.
The investigators found that the deuterium/hydrogen ratio in the melt inclusions was relatively low and matched the ratio found in carbonaceous chondrites. These meteorites originated in the asteroid belt between Mars and Jupiter and are thought to be among the oldest objects in the solar system. That means the source of the water on the moon is primitive meteorites.
Comets, like meteorites, are known to carry water and other volatiles. But most comets were formed in the icy Oort Cloud, more than 1,000 times more distant than Neptune. Because comets formed so far from the sun, they tend to have high deuterium/hydrogen ratios--much higher ratios than in the moon's interior, where the samples in this study originated.
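To make the logic of that comparison concrete, here is a toy sketch in Python. The reference ratios below are rough, order-of-magnitude placeholders that I am assuming for illustration; they are not the values measured in the study.

```python
# Rough, illustrative deuterium-to-hydrogen ratios (assumed for this sketch only)
REFERENCE_DH = {
    "carbonaceous chondrites": 1.4e-4,
    "Earth ocean water": 1.6e-4,
    "Oort-cloud comets": 3.0e-4,
}

def closest_reservoir(sample_dh):
    """Return the reference reservoir whose D/H ratio is nearest the sample's."""
    return min(REFERENCE_DH, key=lambda name: abs(REFERENCE_DH[name] - sample_dh))

# A sample with a low, chondrite-like ratio is classified accordingly.
print(closest_reservoir(1.4e-4))   # carbonaceous chondrites
```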
"The measurements themselves were very difficult," Hauri said, "but the new data provide the best evidence yet that the carbon-bearing chondrites were a common source for the volatiles in the Earth and moon, and perhaps the entire inner solar system."
To determine the ratios that would currently be found deep in the moon's interior, Van Orman and Saal modeled the loss of gases from inside melt inclusions and the influence of degassing on the deuterium. The researchers also had to take into account the impact of cosmic rays--high-energy charged particles from space--on the water trapped inside the inclusions. The interaction produces more deuterium than hydrogen. In total, the effects proved to be small for the melt inclusions, and the ratios remained consistent with those of the chondrites.
Recent research, Saal said, has found that as much as 98 percent of the water on Earth also comes from primitive meteorites, suggesting a common source for water on Earth and the moon. The easiest way to explain that, Saal said, is that the water was already present on the early Earth and was transferred to the moon.
The finding is not necessarily inconsistent with the idea that the moon was formed by a giant impact with the early Earth, but presents a problem. If the moon is made from material that came from the Earth, it makes sense that the water in both would share a common source, Saal said. However, there's still the question of how that water was able to survive such a violent collision.
"Our work suggests that even highly volatile elements may not be lost completely during a giant impact," said Van Orman. "We need to go back to the drawing board and discover more about what giant impacts do, and we also need a better handle on volatile inventories in the moon."
Funding for the research came from NASA's Cosmochemistry and LASER programs and the NASA Lunar Science Institute.
Pi Derivative
The derivative of π with respect to x is 0.
This only works if you have a constant; it doesn't apply once a variable enters the expression. For example, the rule alone doesn't give the derivative of πx, π²x or π·sin(x).
There are some other interesting relationships with derivatives involving π, especially the one involving the derivative of the area of a circle.
Pi Derivative in Other Functions
Example #1: What is the derivative with respect to x of
f(x) = π²x?
Step 1: Place the constant in front:
= π² d/dx(x).
Step 2: Use the common derivative d/dx(x) = 1:
= π² · 1.
Step 3: Simplify: the derivative is π².
Example #2: What is the derivative with respect to x of
f(x) = x^π?
Step 1: Use the power rule d/dx(x^a) = a · x^(a − 1).
Step 2: We have a = π, so the derivative is π · x^(π − 1).
Example #3: What is the derivative with respect to x of f(x) = π^x?
Step 1: Use the common derivative rule for exponential functions d/dx(a^x) = a^x · ln(a).
Step 2: Plugging our function into the formula, we have a = π, so the derivative is π^x · ln(π).
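The three examples above (and the basic constant rule) are easy to confirm with a computer algebra system. Here is a minimal sketch using Python's sympy library, which is my choice of tool rather than something the article specifies.

```python
import sympy as sp

x = sp.symbols('x')

print(sp.diff(sp.pi, x))           # 0: the derivative of the constant pi
print(sp.diff(sp.pi**2 * x, x))    # pi**2, matching Example #1
print(sp.diff(x**sp.pi, x))        # pi*x**(pi - 1), matching Example #2
print(sp.diff(sp.pi**x, x))        # pi**x*log(pi), matching Example #3
```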
The derivative of a circle's area (πr²) with respect to its radius r is its circumference (2πr).
This relationship also holds for a semicircle, and it can be extended to a sphere: the derivative of the volume function of a sphere equals its surface area. This interesting relationship does not hold for all shapes, though; it fails for squares and rectangles, for example.
The math behind this fact is used in the cylindrical shell method for finding volumes of shapes. The logic is as follows: a small change in the radius of the sphere produces a small change in the sphere's volume, which is equal to the volume of a thin spherical shell of radius R and thickness δR. This shell's volume is approximately:
V ≈ (surface area of the sphere) · δR.
The derivative is the ratio (change in volume) / δR in the limit of small δR, which is just the surface area of the sphere.
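Both relationships can be checked symbolically and numerically. Again, a minimal sketch with sympy (my own choice of tool, not the article's):

```python
import sympy as sp

r = sp.symbols('r', positive=True)

circle_area = sp.pi * r**2
print(sp.diff(circle_area, r))                  # 2*pi*r, the circumference

sphere_volume = sp.Rational(4, 3) * sp.pi * r**3
print(sp.diff(sphere_volume, r))                # 4*pi*r**2, the surface area

# Thin-shell picture: growing the radius by a small dr adds a shell whose
# volume is approximately (surface area) * dr.
R, dr = sp.Rational(2), sp.Rational(1, 10**6)
shell = sp.Rational(4, 3) * sp.pi * ((R + dr)**3 - R**3)
print(sp.N(shell / dr), sp.N(4 * sp.pi * R**2))  # both close to 50.27
```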
By the end of this section, you will be able to:
- Describe the effects of gravity on objects in motion.
- Describe the motion of objects that are in free fall.
- Calculate the position and velocity of objects in free fall.
The information presented in this section supports the following AP® learning objectives and science practices:
- 3.A.1.1 The student is able to express the motion of an object using narrative, mathematical, or graphical representations. (S.P. 1.5, 2.1, 2.2)
- 3.A.1.2 The student is able to design an experimental investigation of the motion of an object. (S.P. 4.2)
- 3.A.1.3 The student is able to analyze experimental data describing the motion of an object and is able to express the results of the analysis using narrative, mathematical, and graphical representations. (S.P. 5.1)
Falling objects form an interesting class of motion problems. For example, we can estimate the depth of a vertical mine shaft by dropping a rock into it and listening for the rock to hit the bottom. By applying the kinematics developed so far to falling objects, we can examine some interesting situations and learn much about gravity in the process.
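As a quick illustration of that mine-shaft estimate, here is a minimal sketch in Python. The 3.0 s timing and the 343 m/s speed of sound are assumed example values, not figures from the text.

```python
g = 9.80          # m/s^2, acceleration due to gravity
v_sound = 343.0   # m/s, approximate speed of sound in air (assumed)
T = 3.0           # s, time from releasing the rock to hearing it hit (assumed)

# First pass: ignore the time the sound needs to travel back up the shaft.
depth = 0.5 * g * T**2

# Refine by iteration: part of T is the sound's return trip, not the fall.
for _ in range(20):
    t_fall = T - depth / v_sound
    depth = 0.5 * g * t_fall**2

print(f"estimated depth: {depth:.0f} m")   # roughly 41 m for these numbers
```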
The most remarkable and unexpected fact about falling objects is that, if air resistance and friction are negligible, then in a given location all objects fall toward the center of Earth with the same constant acceleration, independent of their mass. This experimentally determined fact is unexpected, because we are so accustomed to the effects of air resistance and friction that we expect light objects to fall slower than heavy ones.
In the real world, air resistance can cause a lighter object to fall slower than a heavier object of the same size. A tennis ball will reach the ground after a hard baseball dropped at the same time. (It might be difficult to observe the difference if the height is not large.) Air resistance opposes the motion of an object through the air, while friction between objects—such as between clothes and a laundry chute or between a stone and a pool into which it is dropped—also opposes motion between them. For the ideal situations of these first few chapters, an object falling without air resistance or friction is defined to be in free-fall.
The force of gravity causes objects to fall toward the center of Earth. The acceleration of free-falling objects is therefore called the acceleration due to gravity. The acceleration due to gravity is constant, which means we can apply the kinematics equations to any falling object where air resistance and friction are negligible. This opens a broad class of interesting situations to us. The acceleration due to gravity is so important that its magnitude is given its own symbol, g. It is constant at any given location on Earth and has the average value g = 9.80 m/s².
Although g varies from 9.78 m/s² to 9.83 m/s², depending on latitude, altitude, underlying geological formations, and local topography, the average value of 9.80 m/s² will be used in this text unless otherwise specified. The direction of the acceleration due to gravity is downward (towards the center of Earth). In fact, its direction defines what we call vertical. Note that whether the acceleration a in the kinematic equations has the value +g or -g depends on how we define our coordinate system. If we define the upward direction as positive, then a = -g = -9.80 m/s², and if we define the downward direction as positive, then a = g = 9.80 m/s².
One-Dimensional Motion Involving Gravity
The best way to see the basic features of motion involving gravity is to start with the simplest situations and then progress toward more complex ones. So we start by considering straight up and down motion with no air resistance or friction. These assumptions mean that the velocity (if there is any) is vertical. If the object is dropped, we know the initial velocity is zero. Once the object has left contact with whatever held or threw it, the object is in free-fall. Under these circumstances, the motion is one-dimensional and has constant acceleration of magnitude g. We will also represent vertical displacement with the symbol y and use x for horizontal displacement.
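The two constant-acceleration relations used throughout the examples below can be written as a couple of small helper functions. This is a minimal sketch under the sign convention just described (up positive, so a = -g); the function names are mine, not the textbook's.

```python
G = 9.80  # m/s^2, magnitude of the acceleration due to gravity

def position(t, y0=0.0, v0=0.0, a=-G):
    """y = y0 + v0*t + (1/2)*a*t**2 for constant acceleration."""
    return y0 + v0 * t + 0.5 * a * t**2

def velocity(t, v0=0.0, a=-G):
    """v = v0 + a*t for constant acceleration."""
    return v0 + a * t

# A dropped object (v0 = 0) one second after release:
print(position(1.0), velocity(1.0))   # 4.9 m below the start, moving 9.8 m/s downward
```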
Calculating Position and Velocity of a Falling Object: A Rock Thrown Upward
A person standing on the edge of a high cliff throws a rock straight up with an initial velocity of 13.0 m/s. The rock misses the edge of the cliff as it falls back to Earth. Calculate the position and velocity of the rock 1.00 s, 2.00 s, and 3.00 s after it is thrown, neglecting the effects of air resistance.
Draw a sketch.
We are asked to determine the position at various times. It is reasonable to take the initial position to be zero. This problem involves one-dimensional motion in the vertical direction. We use plus and minus signs to indicate direction, with up being positive and down negative. Since up is positive, and the rock is thrown upward, the initial velocity must be positive too. The acceleration due to gravity is downward, so a is negative. It is crucial that the initial velocity and the acceleration due to gravity have opposite signs. Opposite signs indicate that the acceleration due to gravity opposes the initial motion and will slow and eventually reverse it.
Since we are asked for values of position and velocity at three times, we will refer to these as y₁ and v₁; y₂ and v₂; and y₃ and v₃.
Solution for Position
1. Identify the knowns. We know that y₀ = 0; v₀ = 13.0 m/s; a = -g = -9.80 m/s²; and t = 1.00 s.
2. Identify the best equation to use. We will use y = y₀ + v₀t + (1/2)at² because it includes only one unknown, y (or y₁, here), which is the value we want to find.
3. Plug in the known values and solve for y₁: y₁ = 0 + (13.0 m/s)(1.00 s) + (1/2)(-9.80 m/s²)(1.00 s)² = 8.10 m.
The rock is 8.10 m above its starting point at t = 1.00 s, since y₁ > y₀. It could be moving up or down; the only way to tell is to calculate v₁ and find out if it is positive or negative.
Solution for Velocity
1. Identify the knowns. We know that y₀ = 0; v₀ = 13.0 m/s; a = -g = -9.80 m/s²; and t = 1.00 s. We also know from the solution above that y₁ = 8.10 m.
2. Identify the best equation to use. The most straightforward is v = v₀ - gt (from v = v₀ + at, where a = -g).
3. Plug in the knowns and solve: v₁ = v₀ - gt = 13.0 m/s - (9.80 m/s²)(1.00 s) = 3.20 m/s.
The positive value for v₁ means that the rock is still heading upward at t = 1.00 s. However, it has slowed from its original 13.0 m/s, as expected.
Solution for Remaining Times
|Time, t|Position, y|Velocity, v|Acceleration, a|
|1.00 s|8.10 m|3.20 m/s|-9.80 m/s²|
|2.00 s|6.40 m|-6.60 m/s|-9.80 m/s²|
|3.00 s|-5.10 m|-16.4 m/s|-9.80 m/s²|
Graphing the data helps us understand it more clearly.
The interpretation of these results is important. At 1.00 s the rock is above its starting point and heading upward, since y₁ and v₁ are both positive. At 2.00 s, the rock is still above its starting point, but the negative velocity means it is moving downward. At 3.00 s, both y₃ and v₃ are negative, meaning the rock is below its starting point and continuing to move downward. Notice that when the rock is at its highest point (at 1.5 s), its velocity is zero, but its acceleration is still -9.80 m/s². Its acceleration is -9.80 m/s² for the whole trip—while it is moving up and while it is moving down. Note that the values for y are the positions (or displacements) of the rock, not the total distances traveled. Finally, note that free-fall applies to upward motion as well as downward. Both have the same acceleration—the acceleration due to gravity, which remains constant the entire time. Astronauts training in the famous Vomit Comet, for example, experience free-fall while arcing up as well as down, as we will discuss in more detail later.
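The table values above are easy to reproduce with a few lines of Python; a minimal sketch (variable names are mine):

```python
g = 9.80    # m/s^2
v0 = 13.0   # m/s, initial velocity, upward taken as positive; y0 = 0

for t in (1.00, 2.00, 3.00):
    y = v0 * t - 0.5 * g * t**2   # position relative to the starting point
    v = v0 - g * t                # velocity (negative means moving downward)
    print(f"t = {t:.2f} s: y = {y:6.2f} m, v = {v:6.2f} m/s, a = {-g:.2f} m/s^2")
# t = 1.00 s: y =   8.10 m, v =   3.20 m/s, a = -9.80 m/s^2
# t = 2.00 s: y =   6.40 m, v =  -6.60 m/s, a = -9.80 m/s^2
# t = 3.00 s: y =  -5.10 m, v = -16.40 m/s, a = -9.80 m/s^2
```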
A simple experiment can be done to determine your reaction time. Have a friend hold a ruler between your thumb and index finger, separated by about 1 cm. Note the mark on the ruler that is right between your fingers. Have your friend drop the ruler unexpectedly, and try to catch it between your two fingers. Note the new reading on the ruler. Assuming acceleration is that due to gravity, calculate your reaction time. How far would you travel in a car (moving at 30 m/s) if the time it took your foot to go from the gas pedal to the brake was twice this reaction time?
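If you want to check your numbers for this experiment, only two relations are involved: d = ½gt² (solved for t) and distance = speed × time. The following Python sketch is an illustration added here, not part of the original text; the 0.15 m drop distance is an assumed sample value.

```python
import math

g = 9.80  # m/s^2, magnitude of the acceleration due to gravity

def reaction_time(drop_distance_m):
    """Time for the ruler to fall a given distance from rest: d = (1/2) g t^2."""
    return math.sqrt(2 * drop_distance_m / g)

# Assumed sample values: ruler caught after falling 0.15 m, car moving at 30 m/s.
t = reaction_time(0.15)
car_distance = 30.0 * (2 * t)   # distance covered during twice the reaction time
print(f"reaction time = {t:.3f} s, car travels = {car_distance:.1f} m")
```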
Calculating Velocity of a Falling Object: A Rock Thrown Down
What happens if the person on the cliff throws the rock straight down, instead of straight up? To explore this question, calculate the velocity of the rock when it is 5.10 m below the starting point, and has been thrown downward with an initial speed of 13.0 m/s.
Draw a sketch.
Since up is positive, the final position of the rock will be negative because it finishes below the starting point at y = −5.10 m. Similarly, the initial velocity is downward and therefore negative, as is the acceleration due to gravity. We expect the final velocity to be negative since the rock will continue to move downward.
1. Identify the knowns. y0 = 0; y1 = −5.10 m; v0 = −13.0 m/s; a = −g = −9.80 m/s².
2. Choose the kinematic equation that makes it easiest to solve the problem. The equation v² = v0² + 2a(y − y0) works well because the only unknown in it is v. (We will plug y1 in for y.)
3. Enter the known values: v² = (−13.0 m/s)² + 2(−9.80 m/s²)(−5.10 m − 0 m) = 268.96 m²/s²,
where we have retained extra significant figures because this is an intermediate result.
Taking the square root, and noting that a square root can be positive or negative, gives v = ±16.4 m/s.
The negative root is chosen to indicate that the rock is still heading down. Thus, v = −16.4 m/s.
Note that this is exactly the same velocity the rock had at this position when it was thrown straight upward with the same initial speed. (See Example 2.14 and Figure 2.54(a).) This is not a coincidental result. Because we only consider the acceleration due to gravity in this problem, the speed of a falling object depends only on its initial speed and its vertical position relative to the starting point. For example, if the velocity of the rock is calculated at a height of 8.10 m above the starting point (using the method from Example 2.14) when the initial velocity is 13.0 m/s straight up, a result of v = ±3.20 m/s is obtained. Here both signs are meaningful; the positive value occurs when the rock is at 8.10 m and heading up, and the negative value occurs when the rock is at 8.10 m and heading back down. It has the same speed but the opposite direction.
Another way to look at it is this: In Example 2.14, the rock is thrown up with an initial velocity of 13.0 m/s. It rises and then falls back down. When its position is y = 0 on its way back down, its velocity is −13.0 m/s. That is, it has the same speed on its way down as on its way up. We would then expect its velocity at a position of y = −5.10 m to be the same whether we have thrown it upwards at +13.0 m/s or thrown it downwards at −13.0 m/s. The velocity of the rock on its way down from y = 0 is the same whether we have thrown it up or down to start with, as long as the speed with which it was initially thrown is the same.
Find g from Data on a Falling Object
The acceleration due to gravity on Earth differs slightly from place to place, depending on topography (e.g., whether you are on a hill or in a valley) and subsurface geology (whether there is dense rock like iron ore as opposed to light rock like salt beneath you.) The precise acceleration due to gravity can be calculated from data taken in an introductory physics laboratory course. An object, usually a metal ball for which air resistance is negligible, is dropped and the time it takes to fall a known distance is measured. See, for example, Figure 2.55. Very precise results can be produced with this method if sufficient care is taken in measuring the distance fallen and the elapsed time.
Suppose the ball falls 1.0000 m in 0.45173 s. Assuming the ball is not affected by air resistance, what is the precise acceleration due to gravity at this location?
Draw a sketch.
We need to solve for acceleration a. Note that in this case, displacement is downward and therefore negative, as is acceleration.
1. Identify the knowns. y0 = 0; y = −1.0000 m; t = 0.45173 s; v0 = 0.
2. Choose the equation that allows you to solve for a using the known values: y = y0 + v0t + ½at².
3. Substitute 0 for v0 and rearrange the equation to solve for a. Substituting 0 for v0 yields y = y0 + ½at².
Solving for a gives a = 2(y − y0)/t².
4. Substituting known values yields a = 2(−1.0000 m − 0)/(0.45173 s)² = −9.8010 m/s²,
so, because a = −g with the directions we have chosen, g = 9.8010 m/s².
The negative value for a indicates that the gravitational acceleration is downward, as expected. We expect the value to be somewhere around the average value of 9.80 m/s², so 9.8010 m/s² makes sense. Since the data going into the calculation are relatively precise, this value for g is more precise than the average value of 9.80 m/s²; it represents the local value for the acceleration due to gravity.
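As a quick cross-check of this result, the rearranged equation a = 2(y − y0)/t² can be evaluated directly. This short Python sketch is illustrative only and simply reuses the numbers from the example above.

```python
# Values from the example: a metal ball falls 1.0000 m in 0.45173 s.
y0, y, t = 0.0, -1.0000, 0.45173   # positions in m (down taken as negative), time in s

a = 2 * (y - y0) / t**2            # from y = y0 + (1/2) a t^2 with v0 = 0
print(f"a = {a:.4f} m/s^2")        # about -9.8010 m/s^2, so g is about 9.8010 m/s^2
```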
While it is well established that the acceleration due to gravity is quite nearly 9.8 m/s² at all locations on Earth, you can verify this for yourself with some basic materials.
Your task is to find the acceleration due to gravity at your location. Achieving an acceleration of precisely 9.8 m/s² will be difficult. However, with good preparation and attention to detail, you should be able to get close. Before you begin working, consider the following questions.
What measurements will you need to take in order to find the acceleration due to gravity?
What relationships and equations found in this chapter may be useful in calculating the acceleration?
What variables will you need to hold constant?
What materials will you use to record your measurements?
Upon completing these four questions, record your procedure. Once recorded, you may carry out the experiment. If you find that your experiment cannot be carried out, you may revise your procedure.
Once you have found your experimental acceleration, compare it to the assumed value of 9.8 m/s2. If error exists, what were the likely sources of this error? How could you change your procedure in order to improve the accuracy of your findings?
A chunk of ice breaks off a glacier and falls 30.0 meters before it hits the water. Assuming it falls freely (there is no air resistance), how long does it take to hit the water?
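To check your answer, rearrange y = ½gt² for t. The two-line Python check below is illustrative only.

```python
import math
t = math.sqrt(2 * 30.0 / 9.80)   # free-fall time for a 30.0 m drop
print(f"{t:.2f} s")              # about 2.47 s
```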
Learn about graphing polynomials. The shape of the curve changes as the constants are adjusted. View the curves for the individual terms (e.g. y = bx) to see how they add to generate the polynomial curve.
Many people think that deflation and disinflation are synonymous and use them interchangeably, since both are associated with a weakening of the general price level. Nevertheless, the two terms differ: deflation is a situation in which the prices of goods and services actually go down, while disinflation is a gradual decrease in the rate of inflation. Sustained disinflation may eventually lead to deflation.
Deflation takes place when the inflation rate is below 0%, that is, a negative inflation rate. Conversely, disinflation is a deceleration of the rate of inflation. Read this article carefully to learn the important differences between deflation and disinflation.
Content: Deflation Vs Disinflation
| Basis for Comparison | Deflation | Disinflation |
| --- | --- | --- |
| Meaning | When there is a fall in the general price level across the whole economy, the situation is known as deflation. | Disinflation is a situation in which the rate of inflation falls over time but remains positive. |
| Cause | Shifts in the demand and supply curves. | Deliberate policy of the government. |
| Occurs | Prior to full employment. | Subsequent to full employment. |
| Prices | No limit to the fall in prices. | Can be brought down to a normal level. |
Definition of Deflation
Deflation is described as a period in which the prices of economic output fall across the economy due to decreases in the money supply, consumer demand, investment, and government spending. It occurs when the rate of inflation is less than 0%, i.e., negative. It results in a rise in the real value of money: in such a situation, the purchasing power of the people goes up, and they can buy more goods with the same amount of money.
In deflation, there is a steep decline in the general price level, which indicates an unhealthy condition of the economy. It can cause high unemployment, increased layoffs, falling wage rates, lower profits, weak demand, reduced incomes, and a restricted credit supply in the economy. Deflation often leads the economy into depression. To counter deflation, the central bank injects credit into the economy.
Definition of Disinflation
Disinflation is a state in which the rate of inflation is diminishing over time but is still positive; if it continues, it ends when the rate reaches zero. It is a deceleration in the rate of increase of the overall price level in the economy, i.e., the prices of goods and services are not rising as fast as they used to. The general price level still rises during disinflation, but the rate of inflation decreases over the period.
Key Differences Between Deflation and Disinflation
The difference between deflation and disinflation can be drawn clearly on the following grounds:
- Deflation is described as a condition in which the general price level declines across the entire economy. Disinflation is a state in which the inflation rate falls over time.
- A situation when the rate of inflation is positive but reducing with time is disinflation. On the other hand, when the inflation rate is negative, this situation is called deflation.
- Deflation is the opposite of inflation, whereas disinflation is the opposite of reflation.
- The main cause of deflation is a shift in the demand for and supply of economic output. Conversely, disinflation results from a deliberate policy of the government.
- In terms of the employment level, deflation occurs before full employment is reached, while disinflation occurs after full employment has been reached.
- In deflation, prices can fall below the normal level, as there is no limit to how far they may fall. Disinflation, by contrast, helps bring prices down to a normal level.
To understand the terms deflation and disinflation, it is necessary to know the meaning of inflation, which is a situation in which the prices of economic output rise. When the rate of inflation slows down, that is disinflation, and it can continue until the rate reaches zero; when the rate falls below zero, it is deflation. The basic difference between the two is that deflation is the result of a fall in the overall price level, while disinflation is the outcome of a fall in the inflation rate.
Elementary arithmetic is the simplified portion of arithmetic that includes the operations of addition, subtraction, multiplication, and division. It should not be confused with elementary function arithmetic.
Elementary arithmetic starts with the natural numbers and the written symbols (digits) that represent them. The process for combining a pair of these numbers with the four basic operations traditionally relies on memorized results for small values of numbers, including the contents of a multiplication table to assist with multiplication and division.
- 1 The digits
- 2 Addition
- 3 Successorship and size
- 4 Counting
- 5 Subtraction
- 6 Multiplication
- 7 Division
- 8 Educational standards
- 9 Tools
- 10 See also
- 11 External links
Digits are the entire set of symbols used to represent numbers. In a particular numeral system, a single digit represents a different amount than any other digit, although the symbols in the same numeral system might vary between cultures.
In modern usage, the Arabic numerals are the most common set of symbols, and the most frequently used form of these digits is the Western style. Each single digit, if used as a standalone number, matches the following amounts:
0, zero. Used in the absence of objects to be counted. For example, a different way of saying "there are no sticks here", is to say "the number of sticks here is 0".
1, one. Applied to a single item. For example, here is one stick: I
2, two. Applied to a pair of items. Here are two sticks: I I
3, three. Applied to three items. Here are three sticks: I I I
4, four. Applied to four items. Here are four sticks: I I I I
5, five. Applied to five items. Here are five sticks: I I I I I
6, six. Applied to six items. Here are six sticks: I I I I I I
7, seven. Applied to seven items. Here are seven sticks: I I I I I I I
8, eight. Applied to eight items. Here are eight sticks: I I I I I I I I
9, nine. Applied to nine items. Here are nine sticks: I I I I I I I I I
Any numeral system defines the value of all numbers that contain more than one digit, most often by addition of the value for adjacent digits. The Hindu–Arabic numeral system includes positional notation to determine the value for any numeral. In this type of system, the increase in value for an additional digit includes one or more multiplications with the radix value and the result is added to the value of an adjacent digit. With Arabic numerals, the radix value of ten produces a value of twenty-one (equal to 2×10 + 1) for the numeral "21". An additional multiplication with the radix value occurs for each additional digit, so the numeral "201" represents a value of two-hundred-and-one (equal to 2×10×10 + 0×10 + 1).
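The same positional rule is easy to state in code. The short Python sketch below is an illustration added here (it is not part of the original article); it evaluates a numeral one digit at a time for any radix.

```python
def numeral_value(digits, radix=10):
    """Value of a numeral given as a string of digits in the stated radix."""
    value = 0
    for d in digits:
        value = value * radix + int(d)  # one multiplication by the radix per additional digit
    return value

print(numeral_value("21"))   # 2*10 + 1 = 21
print(numeral_value("201"))  # 2*10*10 + 0*10 + 1 = 201
```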
The elementary level of study typically includes understanding the value of individual whole numbers using Arabic numerals with a maximum of seven digits, and performing the four basic operations using Arabic numerals with a maximum of four digits each.
When two numbers are added together, the result is called a sum. The two numbers being added together are called addends.
What does it mean to add two natural numbers?
Suppose you have two bags, one bag holding five apples and a second bag holding three apples. Grabbing a third, empty bag, move all the apples from the first and second bags into the third bag. The third bag now holds eight apples. This illustrates the combination of three apples and five apples is eight apples; or more generally: "three plus five is eight" or "three plus five equals eight" or "eight is the sum of three and five". Numbers are abstract, and the addition of a group of three things to a group of five things will yield a group of eight things. Addition is a regrouping: two sets of objects that were counted separately are put into a single group and counted together: the count of the new group is the "sum" of the separate counts of the two original groups.
This operation of combining is only one of several possible meanings that the mathematical operation of addition can have. Other meanings for addition include:
- comparing ("Tom has 5 apples. Jane has 3 more apples than Tom. How many apples does Jane have?"),
- joining ("Tom has 5 apples. Jane gives him 3 more apples. How many apples does Tom have now?"),
- measuring ("Tom's desk is 3 feet wide. Jane's is also 3 feet wide. How wide will their desks be when put together?"),
- and even sometimes separating ("Tom had some apples. He gave 3 to Jane. Now he has 5. How many did he start with?").
Symbolically, addition is represented by the "plus sign": +. So the statement "three plus five equals eight" can be written symbolically as 3 + 5 = 8. The order in which two numbers are added does not matter, so 3 + 5 = 5 + 3 = 8. This is the commutative property of addition.
To add a pair of digits using the table, find the intersection of the row of the first digit with the column of the second digit: the row and the column intersect at a square containing the sum of the two digits. Some pairs of digits add up to two-digit numbers, with the tens-digit always being a 1. In the addition algorithm the tens-digit of the sum of a pair of digits is called the "carry digit".
For simplicity, consider only numbers with three digits or fewer. To add a pair of numbers (written in Arabic numerals), write the second number under the first one, so that digits line up in columns: the rightmost column will contain the ones-digit of the second number under the ones-digit of the first number. This rightmost column is the ones-column. The column immediately to its left is the tens-column. The tens-column will have the tens-digit of the second number (if it has one) under the tens-digit of the first number (if it has one). The column immediately to the left of the tens-column is the hundreds-column. The hundreds-column will line up the hundreds-digit of the second number (if there is one) under the hundreds-digit of the first number (if there is one).
After the second number has been written down under the first one so that digits line up in their correct columns, draw a line under the second (bottom) number. Start with the ones-column: the ones-column should contain a pair of digits: the ones-digit of the first number and, under it, the ones-digit of the second number. Find the sum of these two digits: write this sum under the line and in the ones-column. If the sum has two digits, then write down only the ones-digit of the sum. Write the "carry digit" above the top digit of the next column: in this case the next column is the tens-column, so write a 1 above the tens-digit of the first number.
If both first and second number each have only one digit then their sum is given in the addition table, and the addition algorithm is unnecessary.
Then comes the tens-column. The tens-column might contain two digits: the tens-digit of the first number and the tens-digit of the second number. If one of the numbers has a missing tens-digit then the tens-digit for this number can be considered to be a 0. Add the tens-digits of the two numbers. Then, if there is a carry digit, add it to this sum. If the sum was 18 then adding the carry digit to it will yield 19. If the sum of the tens-digits (plus carry digit, if there is one) is less than ten then write it in the tens-column under the line. If the sum has two digits then write its last digit in the tens-column under the line, and carry its first digit (which should be a 1) over to the next column: in this case the hundreds-column.
If none of the two numbers has a hundreds-digit then if there is no carry digit then the addition algorithm has finished. If there is a carry digit (carried over from the tens-column) then write it in the hundreds-column under the line, and the algorithm is finished. When the algorithm finishes, the number under the line is the sum of the two numbers.
If at least one of the numbers has a hundreds-digit then if one of the numbers has a missing hundreds-digit then write a 0 digit in its place. Add the two hundreds-digits, and to their sum add the carry digit if there is one. Then write the sum of the hundreds-column under the line, also in the hundreds column. If the sum has two digits then write down the last digit of the sum in the hundreds-column and write the carry digit to its left: on the thousands-column.
Say one wants to find the sum of the numbers 653 and 274. Write the second number under the first one, with digits aligned in columns, like so:
Then draw a line under the second number and put a plus sign. The addition starts with the ones-column. The ones-digit of the first number is 3 and of the second number is 4. The sum of three and four is seven, so write a 7 in the ones-column under the line:
Next, the tens-column. The tens-digit of the first number is 5, and the tens-digit of the second number is 7, and five plus seven is twelve: 12, which has two digits, so write its last digit, 2, in the tens-column under the line, and write the carry digit on the hundreds-column above the first number:
Next, the hundreds-column. The hundreds-digit of the first number is 6, while the hundreds-digit of the second number is 2. The sum of six and two is eight, but there is a carry digit, which added to eight is equal to nine. Write the 9 under the line in the hundreds-column:
No digits (and no columns) have been left unadded, so the algorithm finishes, and
- 653 + 274 = 927.
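For readers who find code easier to follow than prose, here is a minimal Python sketch of the column-addition algorithm described above, working from the ones-column leftward with a carry digit. It is an illustrative reimplementation, not part of the original article.

```python
def add_by_columns(a: str, b: str) -> str:
    """Add two numerals written with Arabic digits, column by column with carries."""
    width = max(len(a), len(b))
    a, b = a.zfill(width), b.zfill(width)          # treat missing digits as 0
    carry, result = 0, []
    for da, db in zip(reversed(a), reversed(b)):   # ones-column first
        column_sum = int(da) + int(db) + carry
        result.append(str(column_sum % 10))        # digit written under the line
        carry = column_sum // 10                   # carry digit for the next column
    if carry:
        result.append(str(carry))                  # final carry, if any
    return "".join(reversed(result))

print(add_by_columns("653", "274"))  # 927
```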
Successorship and size
The result of the addition of one to a number is the successor of that number. Examples:
the successor of zero is one,
the successor of one is two,
the successor of two is three,
the successor of ten is eleven.
Every natural number has a successor.
The predecessor of the successor of a number is the number itself. For example, five is the successor of four therefore four is the predecessor of five. Every natural number except zero has a predecessor.
If a number is the successor of another number, then the first number is said to be larger than the other number. If a number is larger than another number, and if the other number is larger than a third number, then the first number is also larger than the third number. Example: five is larger than four, and four is larger than three, therefore five is larger than three. But six is larger than five, therefore six is also larger than three. But seven is larger than six, therefore seven is also larger than three ... therefore eight is larger than three ... therefore nine is larger than three, etc.
If two non-zero natural numbers are added together, then their sum is larger than either one of them. Example: three plus five equals eight, therefore eight is larger than three (8 > 3) and eight is larger than five (8 > 5). The symbol for "larger than" is >.
If a number is larger than another one, then the other is smaller than the first one. Examples: three is smaller than eight (3 < 8) and five is smaller than eight (5 < 8). The symbol for smaller than is <. A number cannot be at the same time larger and smaller than another number. Neither can a number be at the same time larger than and equal to another number. Given a pair of natural numbers, one and only one of the following cases must be true:
- the first number is larger than the second one,
- the first number is equal to the second one,
- the first number is smaller than the second one.
To count a group of objects means to assign a natural number to each one of the objects, as if it were a label for that object, such that a natural number is never assigned to an object unless its predecessor was already assigned to another object, with the exception that zero is not assigned to any object: the smallest natural number to be assigned is one, and the largest natural number assigned depends on the size of the group. It is called the count and it is equal to the number of objects in that group.
The process of counting a group is the following:
- Let "the count" be equal to zero. "The count" is a variable quantity, which though beginning with a value of zero, will soon have its value changed several times.
- Find at least one object in the group which has not been labeled with a natural number. If no such object can be found (if they have all been labeled) then the counting is finished. Otherwise choose one of the unlabeled objects.
- Increase the count by one. That is, replace the value of the count by its successor.
- Assign the new value of the count, as a label, to the unlabeled object chosen in Step 2.
- Go back to Step 2.
When the counting is finished, the last value of the count will be the final count. This count is equal to the number of objects in the group.
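The counting procedure above translates almost line for line into code. The Python sketch below is illustrative only; it labels each object with successive natural numbers and returns the final count.

```python
def count_group(objects):
    """Label each object with successive natural numbers; the last label is the count."""
    count = 0                        # Step 1: let the count be equal to zero
    labels = []
    for obj in objects:              # Steps 2 and 5: take an unlabeled object until none remain
        count += 1                   # Step 3: replace the count by its successor
        labels.append((count, obj))  # Step 4: assign the new count, as a label, to the object
    return count, labels

total, labels = count_group(["apple", "pear", "plum"])
print(total)   # 3
```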
Often, when counting objects, one does not keep track of what numerical label corresponds to which object: one only keeps track of the subgroup of objects which have already been labeled, so as to be able to identify unlabeled objects necessary for Step 2. However, if one is counting persons, then one can ask the persons who are being counted to each keep track of the number which the person's self has been assigned. After the count has finished it is possible to ask the group of persons to file up in a line, in order of increasing numerical label. What the persons would do during the process of lining up would be something like this: each pair of persons who are unsure of their positions in the line ask each other what their numbers are: the person whose number is smaller should stand on the left side and the one with the larger number on the right side of the other person. Thus, pairs of persons compare their numbers and their positions, and commute their positions as necessary, and through repetition of such conditional commutations they become ordered.
Subtraction is the mathematical operation which describes a reduced quantity. The result of this operation is the difference between two numbers, the minuend and the subtrahend. As with addition, subtraction can have a number of interpretations, such as:
- separating ("Tom has 8 apples. He gives away 3 apples. How many does he have left?")
- comparing ("Tom has 8 apples. Jane has 3 fewer apples than Tom. How many does Jane have?")
- combining ("Tom has 8 apples. Three of the apples are green and the rest are red. How many are red?")
- and sometimes joining ("Tom had some apples. Jane gave him 3 more apples, so now he has 8 apples. How many did he start with?").
As with addition, there are other possible interpretations, such as motion.
Symbolically, the minus sign ("−") represents the subtraction operation. So the statement "five minus three equals two" is also written as 5 − 3 = 2. In elementary arithmetic, subtraction uses smaller positive numbers for all values to produce simpler solutions.
Unlike addition, subtraction is not commutative, so the order of numbers in the operation will change the result. Therefore, each number is provided a different distinguishing name. The first number (5 in the previous example) is formally defined as the minuend and the second number (3 in the previous example) as the subtrahend. The value of the minuend is larger than the value of the subtrahend so that the result is a positive number, but a smaller value of the minuend will result in negative numbers.
There are several methods to accomplish subtraction. The method which is in the United States of America referred to as traditional mathematics taught elementary school students to subtract using methods suitable for hand calculation. The particular method used varies from country to country, and within a country, different methods are in fashion at different times. Reform mathematics is distinguished generally by the lack of preference for any specific technique, replaced by guiding 2nd-grade students to invent their own methods of computation, such as using properties of negative numbers in the case of TERC.
American schools currently teach a method of subtraction using borrowing and a system of markings called crutches. Although a method of borrowing had been known and published in textbooks previously, the crutches are apparently the invention of William A. Brownell, who used them in a study published in November 1937. This system caught on rapidly, displacing the other methods of subtraction in use in America at that time.
Students in some European countries are taught, and some older Americans employ, a method of subtraction called the Austrian method, also known as the additions method. There is no borrowing in this method. There are also crutches (markings to aid the memory) which [probably] vary according to country.
In the method of borrowing, a subtraction such as 86 − 39 will accomplish the ones-place subtraction of 9 from 6 by borrowing a 10 from 80 and adding it to the 6. The problem is thus transformed into (70 + 16) − 39, effectively. This is indicated by striking through the 8, writing a small 7 above it, and writing a small 1 above the 6. These markings are called crutches. The 9 is then subtracted from 16, leaving 7, and the 30 from the 70, leaving 40, or 47 as the result.
In the additions method, a 10 is borrowed to make the 6 into 16, in preparation for the subtraction of 9, just as in the borrowing method. However, the 10 is not taken by reducing the minuend, rather one augments the subtrahend. Effectively, the problem is transformed into (80 + 16) − ( 39 + 10). Typically a crutch of a small one is marked just below the subtrahend digit as a reminder. Then the operations proceed: 9 from 16 is 7; and 40 (that is, 30 + 10) from 80 is 40, or 47 as the result.
The additions method seems to be taught in two variations, which differ only in psychology. Continuing the example of 86 − 39, the first variation attempts to subtract 9 from 6, and then 9 from 16, borrowing a 10 by marking near the digit of the subtrahend in the next column. The second variation attempts to find a digit which, when added to 9, gives 6, and recognizing that is not possible, gives 16, carrying the 10 of the 16 as a one marking near the same digit as in the first method. The markings are the same; it is just a matter of preference as to how one explains their appearance.
As a final caution, the borrowing method gets a bit complicated in cases such as 100 − 87, where a borrow cannot be made immediately, and must be obtained by reaching across several columns. In this case, the minuend is effectively rewritten as 90 + 10, by taking a 100 from the hundreds, making ten 10s from it, and immediately borrowing that down to nine 10s in the tens column and finally placing a 10 in the ones column.
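The borrowing method can be expressed compactly in code. The Python sketch below is an illustration (not taken from the article) that subtracts column by column, borrowing a 10 from the next column whenever the minuend digit is too small; it assumes the minuend is at least as large as the subtrahend.

```python
def subtract_with_borrowing(minuend: str, subtrahend: str) -> str:
    """Column subtraction with borrowing; assumes minuend >= subtrahend >= 0."""
    width = max(len(minuend), len(subtrahend))
    m = [int(d) for d in minuend.zfill(width)]
    s = [int(d) for d in subtrahend.zfill(width)]
    result = []
    for i in range(width - 1, -1, -1):   # ones-column first
        if m[i] < s[i]:                  # not enough in this column: borrow a 10
            m[i] += 10
            m[i - 1] -= 1                # the "crutch": reduce the next column by one
        result.append(str(m[i] - s[i]))
    return "".join(reversed(result)).lstrip("0") or "0"

print(subtract_with_borrowing("86", "39"))    # 47
print(subtract_with_borrowing("100", "87"))   # 13 (borrowing reaches across columns)
```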
When two numbers are multiplied together, the result is called a product. The two numbers being multiplied together are called factors, with multiplicand and multiplier also used.
What does it mean to multiply two natural numbers?
Suppose there are five red bags, each one containing three apples. Now grabbing an empty green bag, move all the apples from all five red bags into the green bag. Now the green bag will have fifteen apples.
Thus the product of five and three is fifteen.
This can also be stated as "five times three is fifteen" or "five times three equals fifteen" or "fifteen is the product of five and three". Multiplication can be seen to be a form of repeated addition: the first factor indicates how many times the second factor occurs in repeated addition; the final sum being the product.
Symbolically, multiplication is represented by the multiplication sign: ×. So the statement "five times three equals fifteen" can be written symbolically as 5 × 3 = 15.
In some countries, and in more advanced arithmetic, other multiplication signs are used, e.g. 5 ⋅ 3. In some situations, especially in algebra, where numbers can be symbolized with letters, the multiplication symbol may be omitted; e.g. xy means x × y. The order in which two numbers are multiplied does not matter, so that, for example, three times four equals four times three. This is the commutative property of multiplication.
To multiply a pair of digits using the table, find the intersection of the row of the first digit with the column of the second digit: the row and the column intersect at a square containing the product of the two digits. Most pairs of digits produce two-digit numbers. In the multiplication algorithm the tens-digit of the product of a pair of digits is called the "carry digit".
Multiplication algorithm for a single-digit factor
Consider a multiplication where one of the factors has multiple digits, whereas the other factor has only one digit. Write down the multi-digit factor, then write the single-digit factor under the last digit of the multi-digit factor. Draw a horizontal line under the single-digit factor. Henceforth, the multi-digit factor will be called the multiplicand, and the single-digit factor will be called the multiplier.
Suppose for simplicity that the multiplicand has three digits. The first digit is the hundreds-digit, the middle digit is the tens-digit, and the last, rightmost, digit is the ones-digit. The multiplier only has a ones-digit. The ones-digits of the multiplicand and multiplier form a column: the ones-column.
Start with the ones-column: the ones-column should contain a pair of digits: the ones-digit of the multiplicand and, under it, the ones-digit of the multiplier. Find the product of these two digits: write this product under the line and in the ones-column. If the product has two digits, then write down only the ones-digit of the product. Write the "carry digit" as a superscript of the yet-unwritten digit in the next column and under the line: in this case the next column is the tens-column, so write the carry digit as the superscript of the yet-unwritten tens-digit of the product (under the line).
If both first and second number each have only one digit then their product is given in the multiplication table, and the multiplication algorithm is unnecessary.
Then comes the tens-column. The tens-column so far contains only one digit: the tens-digit of the multiplicand (though it might contain a carry digit under the line). Find the product of the multiplier and the tens-digits of the multiplicand. Then, if there is a carry digit (superscripted, under the line and in the tens-column), add it to this product. If the resulting sum is less than ten then write it in the tens-column under the line. If the sum has two digits then write its last digit in the tens-column under the line, and carry its first digit over to the next column: in this case the hundreds column.
If the multiplicand does not have a hundreds-digit then if there is no carry digit then the multiplication algorithm has finished. If there is a carry digit (carried over from the tens-column) then write it in the hundreds-column under the line, and the algorithm is finished. When the algorithm finishes, the number under the line is the product of the two numbers.
If the multiplicand has a hundreds-digit, find the product of the multiplier and the hundreds-digit of the multiplicand, and to this product add the carry digit if there is one. Then write the resulting sum of the hundreds-column under the line, also in the hundreds column. If the sum has two digits then write down the last digit of the sum in the hundreds-column and write the carry digit to its left: on the thousands-column.
Say one wants to find the product of the numbers 3 and 729. Write the single-digit multiplier under the multi-digit multiplicand, with the multiplier under the ones-digit of the multiplicand, like so:
Then draw a line under the multiplier and put a multiplication symbol. Multiplication starts with the ones-column. The ones-digit of the multiplicand is 9 and the multiplier is 3. The product of 3 and 9 is 27, so write a 7 in the ones-column under the line, and write the carry-digit 2 as a superscript of the yet-unwritten tens-digit of the product under the line:
Next, the tens-column. The tens-digit of the multiplicand is 2, the multiplier is 3, and three times two is six. Add the carry-digit, 2, to the product, 6, to obtain 8. Eight has only one digit: no carry-digit, so write the 8 in the tens-column under the line. You can erase the two now.
Next, the hundreds-column. The hundreds-digit of the multiplicand is 7, while the multiplier is 3. The product of 3 and 7 is 21, and there is no previous carry-digit (carried over from the tens-column). The product 21 has two digits: write its last digit in the hundreds-column under the line, then carry its first digit over to the thousands-column. Since the multiplicand has no thousands-digit, then write this carry-digit in the thousands-column under the line (not superscripted):
No digits of the multiplicand have been left unmultiplied, so the algorithm finishes, and 3 × 729 = 2187.
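The single-digit-multiplier algorithm is short enough to mirror directly in code. This Python sketch (an illustration added here, not part of the original article) multiplies a multi-digit multiplicand by a one-digit multiplier, carrying the tens-digit of each column product.

```python
def multiply_by_digit(multiplicand: str, multiplier: int) -> str:
    """Multiply a numeral by a single digit, column by column with carries."""
    carry, result = 0, []
    for d in reversed(multiplicand):      # ones-column first
        product = int(d) * multiplier + carry
        result.append(str(product % 10))  # digit written under the line
        carry = product // 10             # carry digit for the next column
    if carry:
        result.append(str(carry))
    return "".join(reversed(result))

print(multiply_by_digit("729", 3))  # 2187
```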
Multiplication algorithm for multi-digit factors
Given a pair of factors, each one having two or more digits, write both factors down, one under the other one, so that digits line up in columns.
For simplicity consider a pair of three-digits numbers. Write the last digit of the second number under the last digit of the first number, forming the ones-column. Immediately to the left of the ones-column will be the tens-column: the top of this column will have the second digit of the first number, and below it will be the second digit of the second number. Immediately to the left of the tens-column will be the hundreds-column: the top of this column will have the first digit of the first number and below it will be the first digit of the second number. After having written down both factors, draw a line under the second factor.
The multiplication will consist of two parts. The first part will consist of several multiplications involving one-digit multipliers. The operation of each one of such multiplications was already described in the previous multiplication algorithm, so this algorithm will not describe each one individually, but will only describe how the several multiplications with one-digit multipliers shall be coordinated. The second part will add up all the subproducts of the first part, and the resulting sum will be the product.
First part. Let the first factor be called the multiplicand. Let each digit of the second factor be called a multiplier. Let the ones-digit of the second factor be called the "ones-multiplier". Let the tens-digit of the second factor be called the "tens-multiplier". Let the hundreds-digit of the second factor be called the "hundreds-multiplier".
Start with the ones-column. Find the product of the ones-multiplier and the multiplicand and write it down in a row under the line, aligning the digits of the product in the previously-defined columns. If the product has four digits, then the first digit will be the beginning of the thousands-column. Let this product be called the "ones-row".
Then the tens-column. Find the product of the tens-multiplier and the multiplicand and write it down in a row—call it the "tens-row"—under the ones-row, but shifted one column to the left. That is, the ones-digit of the tens-row will be in the tens-column of the ones-row; the tens-digit of the tens-row will be under the hundreds-digit of the ones-row; the hundreds-digit of the tens-row will be under the thousands-digit of the ones-row. If the tens-row has four digits, then the first digit will be the beginning of the ten-thousands-column.
Next, the hundreds-column. Find the product of the hundreds-multiplier and the multiplicand and write it down in a row—call it the "hundreds-row"—under the tens-row, but shifted one more column to the left. That is, the ones-digit of the hundreds-row will be in the hundreds-column; the tens-digit of the hundreds-row will be in the thousands-column; the hundreds-digit of the hundreds-row will be in the ten-thousands-column. If the hundreds-row has four digits, then the first digit will be the beginning of the hundred-thousands-column.
After having written down the ones-row, tens-row, and hundreds-row, draw a horizontal line under the hundreds-row. The multiplications are over.
Second part. Now the multiplication has a pair of lines. The first one under the pair of factors, and the second one under the three rows of subproducts. Under the second line there will be six columns, which from right to left are the following: ones-column, tens-column, hundreds-column, thousands-column, ten-thousands-column, and hundred-thousands-column.
Between the first and second lines, the ones-column will contain only one digit, located in the ones-row: it is the ones-digit of the ones-row. Copy this digit by rewriting it in the ones-column under the second line.
Between the first and second lines, the tens-column will contain a pair of digits located in the ones-row and the tens-row: the tens-digit of the ones-row and the ones-digit of the tens-row. Add these digits up and if the sum has just one digit then write this digit in the tens-column under the second line. If the sum has two digits then the first digit is a carry-digit: write the last digit down in the tens-column under the second line and carry the first digit over to the hundreds-column, writing it as a superscript to the yet-unwritten hundreds-digit under the second line.
Between the first and second lines, the hundreds-column will contain three digits: the hundreds-digit of the ones-row, the tens-digit of the tens-row, and the ones-digit of the hundreds-row. Find the sum of these three digits, then if there is a carry-digit from the tens-column (written in superscript under the second line in the hundreds-column) then add this carry-digit as well. If the resulting sum has one digit then write it down under the second line in the hundreds-column; if it has two digits then write the last digit down under the line in the hundreds-column, and carry the first digit over to the thousands-column, writing it as a superscript to the yet-unwritten thousands-digit under the line.
Between the first and second lines, the thousands-column will contain either two or three digits: the hundreds-digit of the tens-row, the tens-digit of the hundreds-row, and (possibly) the thousands-digit of the ones-row. Find the sum of these digits, then if there is a carry-digit from the hundreds-column (written in superscript under the second line in the thousands-column) then add this carry-digit as well. If the resulting sum has one digit then write it down under the second line in the thousands-column; if it has two digits then write the last digit down under the line in the thousands-column, and carry the first digit over to the ten-thousands-column, writing it as a superscript to the yet-unwritten ten-thousands-digit under the line.
Between the first and second lines, the ten-thousands-column will contain either one or two digits: the hundreds-digit of the hundreds-row and (possibly) the thousands-digit of the tens-row. Find the sum of these digits (if the one in the tens-row is missing, think of it as a 0), and if there is a carry-digit from the thousands-column (written in superscript under the second line in the ten-thousands-column) then add this carry-digit as well. If the resulting sum has one digit then write it down under the second line in the ten-thousands-column; if it has two digits then write the last digit down under the line in the ten-thousands-column, and carry the first digit over to the hundred-thousands-column, writing it as a superscript to the yet-unwritten hundred-thousands digit under the line. However, if the hundreds-row has no thousands-digit then do not write this carry-digit as a superscript, but in normal size, in the position of the hundred-thousands-digit under the second line, and the multiplication algorithm is over.
If the hundreds-row does have a thousands-digit, then add to it the carry-digit from the previous row (if there is no carry-digit then think of it as a 0) and write the single-digit sum in the hundred-thousands-column under the second line.
The number under the second line is the sought-after product of the pair of factors above the first line.
Let our objective be to find the product of 789 and 345. Write the 345 under the 789 in three columns, and draw a horizontal line under them:
First part. Start with the ones-column. The multiplicand is 789 and the ones-multiplier is 5. Perform the multiplication in a row under the line:
Then the tens-column. The multiplicand is 789 and the tens-multiplier is 4. Perform the multiplication in the tens-row, under the previous subproduct in the ones-row, but shifted one column to the left:
Next, the hundreds-column. The multiplicand is once again 789, and the hundreds-multiplier is 3. Perform the multiplication in the hundreds-row, under the previous subproduct in the tens-row, but shifted one (more) column to the left. Then draw a horizontal line under the hundreds-row:
Second part. Now add the subproducts between the first and second lines, but ignoring any superscripted carry-digits located between the first and second lines.
The answer is 789 × 345 = 272,205.
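The whole two-part procedure (one shifted sub-product row per digit of the second factor, followed by adding the rows) can be sketched as follows in Python. This is an illustration added here; the final column-wise sum is replaced by Python's built-in addition for brevity.

```python
def multiply_multidigit(a: str, b: str) -> str:
    """Multiply two numerals: one shifted sub-product row per digit of b, then add the rows."""
    rows = []
    for shift, digit in enumerate(reversed(b)):            # ones-, tens-, hundreds-multiplier, ...
        carry, row = 0, []
        for d in reversed(a):                              # single-digit multiplication of a
            product = int(d) * int(digit) + carry
            row.append(str(product % 10))
            carry = product // 10
        if carry:
            row.append(str(carry))
        rows.append("".join(reversed(row)) + "0" * shift)  # shift one column left per row
    return str(sum(int(r) for r in rows))                  # second part: add the sub-product rows

print(multiply_multidigit("789", "345"))  # 272205
```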
Division is the arithmetic operation that is the inverse of multiplication. Specifically, if c times b equals a, written c × b = a, where b is not zero, then a divided by b equals c, written a ÷ b = c.
Here, a is called the dividend, b the divisor and c the quotient.
Division by zero (i.e. where the divisor is zero) is not defined.
Division is most often shown by placing the dividend over the divisor with a horizontal line, also called a vinculum, between them; this can be read out loud as "a divided by b" or "a over b". A way to express division all on one line is to write the dividend, then a slash, then the divisor, like this: a/b.
This is the usual way to specify division in most computer programming languages since it can easily be typed as a simple sequence of characters.
A handwritten or typographical variation, which is halfway between these two forms, uses a solidus (fraction slash) but elevates the dividend, and lowers the divisor:
- a⁄b .
Any of these forms can be used to display a fraction. A common fraction is a division expression where both dividend and divisor are integers (although typically called the numerator and denominator), and there is no implication that the division needs to be evaluated further.
A more basic way to show division is to use the obelus (or division sign) in this manner: a ÷ b.
This form is infrequent except in basic arithmetic. The obelus is also used alone to represent the division operation itself, for instance, as a label on a key of a calculator.
With a knowledge of multiplication tables, two integers can be divided on paper using the method of long division. If the dividend has a fractional part (expressed as a decimal fraction), one can continue the algorithm past the ones place as far as desired. If the divisor has a decimal fractional part, one can restate the problem by moving the decimal to the right in both numbers until the divisor has no fraction.
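Long division of integers can likewise be sketched in a few lines of code. The Python version below is an illustration added here rather than a transcription of the article's method; it brings down one digit of the dividend at a time and records one quotient digit per step.

```python
def long_division(dividend: str, divisor: int):
    """Digit-by-digit long division of a non-negative integer by a positive integer."""
    if divisor == 0:
        raise ZeroDivisionError("division by zero is not defined")
    remainder, quotient_digits = 0, []
    for d in dividend:                                     # bring down one digit at a time
        remainder = remainder * 10 + int(d)
        quotient_digits.append(str(remainder // divisor))  # largest multiple that fits
        remainder %= divisor
    quotient = "".join(quotient_digits).lstrip("0") or "0"
    return quotient, remainder

print(long_division("927", 3))   # ('309', 0)
print(long_division("100", 7))   # ('14', 2)
```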
To divide by a fraction, multiply by the reciprocal (reversing the position of the top and bottom parts) of that fraction.
Local standards usually define the educational methods and content included in the elementary level of instruction. In the United States and Canada, controversial subjects include the amount of calculator usage compared to manual computation and the broader debate between traditional mathematics and reform mathematics.
In the United States, the 1989 NCTM standards led to curricula which de-emphasized or omitted much of what was considered to be elementary arithmetic in elementary school, and replaced it with emphasis on topics traditionally studied in college such as algebra, statistics and problem solving, and non-standard computation methods unfamiliar to most adults.
The abacus is an early mechanical device for performing elementary arithmetic, which is still used in many parts of Asia. Modern calculating tools that perform elementary arithmetic operations include cash registers, electronic calculators, and computers.
- binary arithmetic
- equals sign
- number line
- long division
- plus and minus signs
- Subtraction without borrowing
- unary numeral system
- Early numeracy
- "A Friendly Gift on the Science of Arithmetic" is an Arabic document from the 15th century that talks about basic arithmetic. |
Effective Critical Writing
1: Write a rigorous & thorough argument
- Always ensure that your argument deals with the terms and scope of the question posed to you.
- When you posit an argument, you need to illustrate it with strong and sustained textual evidence (and, often, secondary sources), linked together with your articulate and thoughtful analysis. Your personal judgments are only the beginning of an academic argument; they move from opinions to arguments when they are grounded in a sound knowledge of the text and its nuances.
- Literary criticism is neither wholly objective, nor wholly subjective. There are no right or wrong analyses, only well- and poorly-argued ones.
- A strong argument begins with an effective thesis statement.
- A paragraph should be like an essay in miniature, with a discrete (unique) purpose, and a beginning, middle, and end. It should begin and end with analytical statements, rather than with descriptions, paraphrases, or quotations.
- A strong argument uses clear and confident language to present its ideas and evidence, avoiding non-committal phrases like “perhaps” or “this might be interpreted to mean”. If you’re uncertain about your interpretation, buttress it with more (or better) textual evidence–or do some thinking about whether what you’re writing is what you truly think.
- When responding to a text, don’t simply retell the story or argument in your own words. Assume that the reader shares your well-earned mastery of the text. Weave the text’s narrative into the pattern of your argument and analysis. This usually means that you don’t need (necessarily) to stick to the author’s chronology of events: move laterally through the material, according to your own priorities. Make the plot serve your analysis, not the other way round.
- In sum, show the reader that you’re truly engaged both with the text and with the question(s) at hand.
- The thesis statement tells your reader (1) what you will argue, and (2) how you will argue it (what categories of evidence you will use).
- It is more than just a topic sentence or starting point.
- Here is a weak thesis:
- Shakespeare uses metaphors to express Romeo and Juliet’s feelings.
- Here’s a more effective version:
- Shakespeare uses the metaphors of beholding and reading, in Romeo and Juliet’s conversations, to express their desires and judgements.
- This is more effective because it’s explicit about which feelings you’ll discuss, and it tells the reader what evidence you’ll examine.
- Strong writers argue with the confidence that comes from resourceful use of evidence. Weak writers state and restate arguments to make up for faulty or absent evidence.
- One sure sign of weak argumentation is an abrupt shift at the end of a paragraph or essay from bland analysis and excessive quotation to over-confident assertions: “thus we see…” “therefore it is clear…”. This is too glib to be trusted: your job is to make it clear, transitioning smoothly between close readings of your chosen (few) quotations and natural conclusions.
2: Use clear, concise, & natural language
- Be active, engaging, and clear in your style. Critical writing is a highly self-conscious, if not paranoid, act: you write with an imaginary (and acutely critical) reader looking over your shoulder. Ensure that every word in a sentence needs to be there; superfluous words clutter the page and give the impression that you’re being evasive.
- The greatest challenge in critical writing is to present ideas you have spent many hours crafting and clarifying as if they occurred to you naturally, without excessive strain or effort. The same applies to the structure of your argument: while you need a logical argument, you must conceal the blueprints after you build the house. Avoid phrases which simply turn your point-form notes into prose, like “In this essay I will argue…” or “In conclusion…” or “The idea of appearance versus reality also appears in Act II, Scene 3.” Essays should show the product, not the gestation, of your ideas.
Analysis / Description / Paraphrase / Quotation
- Think of these four modes of writing on a spectrum: at one end, there’s a purely analytical statement (eg. “Hamlet is a play about epistemological uncertainty”).
- At the other end of the spectrum, there’s pure quotation: words quoted directly from the text (“To be or not to be”).
- Description and paraphrase are modes between these two extremes: they tell the reader about the text in your own words–describing events and dialogue, paraphrasing speeches, and so on.
- You can see how these two modes allow for analytical leeway: you inform the reader what’s happening, and add your own analytical twist (eg. “Hamlet’s musings on Claudius’ guilt reveal his epistemological uncertainty”).
- What should you do with this information? Think of paraphrase and description as ways to balance analysis with quotation. All four modes have a place in your writing, and the two ends of the spectrum need each other: you need description to tell the reader what’s happening, and analysis to tell the reader what to think (or what you’re thinking). Use paraphrase and description to move between them, to make the distinction less overt.
Particular matters of style
- Be direct: rather than saying, “In Hamlet, revenge is depicted as…”, say it outright: “Hamlet depicts revenge as…”.
- Be concise: ensure that every word in every sentence needs to be there. If you can remove it without changing the meaning, do so. The same goes for sentences within paragraphs.
- The most elegant writing is also the most economical. Unnecessarily long words and wordy phrases strain the reader’s sense that you truly believe what you are writing, rather than cloaking it in unnecessary verbiage.
- Be confident: rather than writing, “It seems that Horatio is a loyal friend”, try the more direct, “Horatio is a loyal friend.” Avoid non-committal phrases like “perhaps” or “this might be interpreted to mean.” If you can’t strengthen an assertion with textual evidence, don’t bother making it.
- Audience responses, which are notoriously unpredictable, are an exception to this rule. (eg. You can argue that a speech is designed to elicit sympathy, not that it will evoke sympathy from an audience.)
- Avoid the passive voice, where possible: don’t say, “Tybalt is killed by Romeo” when it’s more direct to say, “Romeo kills Tybalt.”
- There are exceptions to this rule, as to most: when you truly don’t know who did an action. It’s appropriate, even necessary, to ask (for example), “Where were these peaches grown?” If you don’t know where they grew, it very likely follows that you don’t know who grew them.
- Don’t shy away from your critical responsibilities, or pretend that you know something you don’t. Don’t tell your reader that “it is clear” or “it can be argued that” when your task is to make the argument or idea clear.
- To improve the flow of your argument and the cadences of your sentences, try reading your early drafts aloud as you revise them. This technique also helps you avoid run-on sentences, in which you lose the original idea by the end of the sentence, a phenomenon I’m demonstrating in this very sentence, which is, if you are reading it aloud, already incoherent.
- My assignments have maximum word-limits because it is far more difficult, and more important, to be concise than to be verbose.
- The length of this web page is all the evidence you need: most writers love to fill space with words, and many of those words are unnecessary verbiage, or words that merely take up space without contributing to your argument. Here are some egregious examples:
- “For many centuries, Romeo and Juliet has been read by countless numbers of people.”
- Sure, and many of them have been reluctant undergraduates eager to impress their professors with claims of Shakespeare’s timeless genius or statements of self-evident literary history.
- “This play is an example of how society can dictate rigid lives and teaches us to step back and look at our lives apart from outside expectations.”
- Well, maybe. I appreciate the sentiment that literature can teach us to experience the world differently. But bland therapeutic pronouncements about text X teaching us moral lesson Y usually replace rigorous evidence-based arguments about that text.
Specific terms & phrases to avoid
“This piece by Sir Philip Sidney…”
- Be precise and direct. In modern usage, ‘piece’ means a short article (“Your piece in the New York Times”) or a brief artistic/literary composition (a piece of music, of rhyme).
“Through this text society learns valuable lessons.”
- There’s no such thing as society, as Margaret Thatcher once said–one of her rare statements that most literary critics would agree with. Individual readers read texts. They might together alter the social fabric, but society is not a hive mind: it cannot read or learn anything.
And this is Fowler’s advice on diction (word choice): “Prefer the familiar word to the far-fetched. Prefer the concrete word to the abstract. Prefer the single word to the circumlocution. Prefer the short word to the long.” Enough said.
3: Follow the rules of grammar
Grammatical errors are the easiest to fix, and are therefore the most exasperating to your readers.
Specific matters of grammar:
- Avoid singular/plural errors when you’re trying to avoid gender-specific language. Rather than saying, “The reader understands that they are not given the full story,” say “Readers understand that they are not given the full story.”
- Write in the ‘timeless present’ tense, not mixing past and present verb-tenses. Don’t write “This was one of the recurring themes in the book. There are numerous moments when it appears.” Instead, use the present tense throughout: “This is one of the recurring themes…” etc.
- Don’t make semicolon errors; they are to be used only between complete sentences, or between articles in a list which consist of multiple words (eg. “Daniel Price addresses his sermon to three classes of readers: his dedicatee Charles; former members of Henry’s household; and young men who neglect their spiritual duties”).
- “Never” use quotation marks for “emphasis.” It’s not only an absurd misuse of this punctuation (whose purpose is to quote something from a source), it also suggests an ironic meaning for the highlighted word, which is the purpose of single quotations (eg. Shall we ‘deconstruct’ the zoo after our field trip?). The error of quotation marks for emphasis occasionally results in amusing ironies itself (eg. I “appreciate” your grammatical smugness.).
For many other bemusing examples, visit The “Blog” of “Unnecessary” Quotation Marks.
- Use two long dashes or two double-hyphens–not single-hyphens–when interjecting a parenthetical word or statement.
- Finally, join the campaign to end the two most pervasive errors in English grammar:
- it’s = it is (or it has), while its= the possessive form of the pronoun ‘it’
- For example: “It’s in its infancy.”
- plural forms of words very rarely require an apostrophe
Remember that you are citing authors, not editors: don’t list Romeo and Juliet under its modern editor, but under Shakespeare.
Here are some of those idiosyncratic guidelines I mentioned in my note at the top of this page. Following these will make me happier when I draw your essay from the groaning piles on my desk.
- Do proofread carefully, especially a text you’re quoting from any source.
- Do include the following at the top of page one (not in a separate title-page): name, student number, date, course, professor, due date, word count, essay title, and question number.
- Do double-space your essay, with margins of at least an inch on all sides.
- Don’t justify the right-hand margin, which makes your paper look like a magazine article.
- Do include page numbers.
- Writing with style can also mean writing in style. Many students use Times New Roman font, without considering the aesthetic appeal of other fonts like Garamond or Georgia or Lucida. Anything but Courier, or one of those Baroque script-like fonts, makes for a more pleasant reading experience. But please: nothing smaller than 12 point, and no coloured text or paper.
- Finally, do give your essay an insightful title, rather than parroting the words in the question. (E.g. If the question asks you to analyze Falstaff’s sense of humour, don’t call your essay “An Analysis of Falstaff’s Sense of Humour.”)
Quoting Primary Texts
Try at all times to integrate quotations into your own prose. This effectively reassures your readers that they are reading a reliable account of the text, one that is almost interchangeable with it. I’m overstating the case somewhat, but when you adroitly interweave an author’s words and phrases with your own, you exude a proficiency and adeptness that rightly suggests your trustworthiness.
Integrating quotations begins with ensuring that if we removed the quotation-marks from your sentence, it would still be grammatically correct (more or less) and would still make sense. It’s jarring for a reader to switch syntactical gears mid-sentence: ease the transition.
Always proofread your quotations very carefully. If your transcription is wrong, can your interpretation be any more trustworthy?
When you quote a text, you assume the responsibility to discuss particular details like its tone, syntax, and diction. When you quote verse (rather than prose) you must acknowledge the ends of lines, either by inserting line breaks ( / ) or by presenting the verse as it’s laid out in the quoted text. How do you decide which one to do? If you’re quoting within your own prose, use line breaks. If you’re using the block-quotation (indented) format, lay it out as it appears. When introducing a quotation, use the proper punctuation. You should also not use “quotation marks” for block quotations.
Here’s what I mean. In these two examples, the first instance uses the correct format. The wrong format follows below, with errors in bold type.
Responding to Caesar’s critique of Anthony’s revelry, Lepidus has a more indulgent view: “I must not think there are / Evils enough to darken all his goodness” (1.4.10-11).
Responding to Caesar’s critique of Anthony’s revelry, Lepidus has a more indulgent view, “I must not think there are evils enough to darken all his goodness” (1.4.10-11).
Lepidus has a more indulgent view:
I must not think there are
Evils enough to darken all his goodness.
His faults in him seem as the spots of heaven,
More fiery by night’s blackness … (1.4.10-13)
Lepidus has a more indulgent view,
“I must not think there are / Evils enough to darken all his goodness. / His faults in him seem as the spots of heaven, / More fiery by night’s blackness” (1.4.10-13)
Note that in the first example I quote Lepidus within my own prose, because the speech is only two lines. For quotations longer than three lines, use the block format shown in the second example.
Quoting Critical Texts
Your writing must weave the ideas of other critics (quoted from articles and books) into your own argument. That is, it must use those critics to add complexity--not just support--to your argument.
A common problem in undergraduate essays is that critics feel slighted: students quote them briefly and then move ahead with their arguments, as if they were merely filling a quota. Think of them as fellow readers whose interpretations you are testing, not as talking-heads in a documentary you’re editing. Engage with each of the ideas you introduce; agree or disagree with them, develop them or depart from them--use any rhetorical method that suggests you are in conversation with them.
That’s why, for instance, you should always begin a paragraph with your own ideas, not with the words of Critic X. Your own ideas should always be at the forefront: they are structuring the argument and determining the subjects of each paragraph.
Another common problem is the use of broad interpretive or ‘background’ ideas without attribution (e.g. “pilgrims were often poor,” or “Una represents Christian truth”). These are fine when you cite the source or critic you got them from (even, say, “English 408 Lecture on November 17th, 2008”), and then when you pull them apart and question their truth. But you should never accept them at face value, or (worse) base your argument on them. You have clearly done some extra research, which is good, but you can’t let that research go unattributed. If it’s from Wikipedia and you’d rather not say so, stop using Wikipedia and find the information from a more reputable source: usually a printed book, journal, or encyclopedia in the library. This means that yes, you have to go to the library to research your subject.
4: More general advice
“nothing is more hurtfull than studying in the night. …Wherefore to watch and to be occupied in minde or bodie in the day time, is agreeable to the motions of the humours and spirites: but to watch and studie in the night, is to strive against nature, and by contrarie motions to impaire both the bodie and minde” (from The haven of health: chiefly gathered for the comfort of students).
- No writer’s desk should be without a good dictionary and thesaurus. Any acknowledged dictionary will do, but the Oxford Paperback Canadian Dictionary should be useful to English students in Canada. A thesaurus, when used judiciously, is another excellent resource for finding the words that will do justice to your ideas.
- The Penguin Dictionary of Literary Terms and Literary Theory, 4th edition. (1998). Unparalleled.
- Fowler’s Modern English Usage, the longstanding standard authority on English usage, has been thoroughly revised by R. W. Burchfield (3rd edition, 1996). It’s an excellent guide to common and particular problems, like ‘that’ vs. ‘which’, or ‘who’ vs. ‘whom’. And it offers such entertaining definitions as the entry for “pedantry”: “the saying of things in language so learned or so demonstratively accurate as to imply a slur upon the generality, who are not capable or not desirous of such displays”. Classic. For more like these, see John Ralston Saul’s The Doubter’s Companion: A Dictionary of Aggressive Common Sense (1994).
- Forgive me if any of the following resources go offline–and please let me know:
- Dan White and Jeannine DeLombard’s “Papers: Expectations, Guidelines, Advice, and Grading” is a comprehensive guide covering all it promises to cover.
- Jack Lynch’s Guide to Writing and Style, and his Resources for Writers, at Rutgers University.
- Writery Resources [sic] is a site hosted by the University of Missouri, with online guides to style and grammar, dictionaries and thesauri. It also has links to a number of other online resources.
- Online edition of Strunk & White’s Elements of Style.
It’s rare to see creative information directly placed into HTML code. Colours, fonts, and sizes of HTML elements are normally defined in style sheets, such as CSS. The more complex a website becomes, the more the range and amount of required CSS files increase. The extra burden can have a considerable effect on your website’s loading time, but this can be avoided by compressing your CSS.
At the end of the 1980s, the British computer scientist Tim Berners-Lee developed the basic components of the World Wide Web. As an employee at the European Organization for Nuclear Research (CERN), he initially devoted himself to an internal project, which intended to enable cross-country information exchange between CERN laboratories, partly in France and partly on Swiss soil. As the basis for the planned network infrastructure, Berners-Lee used hypertext, a text form that is conveyed through cross-references (hyperlinks) and written using a markup language. He co-developed this markup language, known as Hypertext Markup Language (HTML).
Together with many other components, such as the HTTP transfer protocol, the URL, browsers, and web servers, HTML is still the foundation of global digital networking. This means that it’s compulsory for developers to learn this web language. To help you get to know the principle of the markup language and to make it easier to get started, we have summarised the most important principles and tips for beginners in this HTML tutorial.
- What is HTML?
- Which software do you need to write HTML code?
- Creating the first HTML pages
- HTML: basics of the text structure
- Basic HTML framework: this is the basic structure of webpages
- How to integrate images, photos, and graphics onto your web pages
- Linking pages and content – the important role of hyperlinks
- On the home stretch – how to put your HTML page online
What is HTML?
HTML is one of the machine-readable languages, also known as 'computer languages' that enable interaction between computers and humans. It allows you to define and structure the typical elements of text-oriented documents, such as headings, text paragraphs, lists, tables, or graphics, by distinguishing them accordingly. The visual representation can be achieved using any web browser that interprets the code lines and therefore knows how the individual elements should be displayed. In addition, the HTML code may contain data in the form of meta information e.g. about the author. As a markup language, HTML is now mostly only used in its descriptive function, while the design is defined using stylesheet languages such as CSS (Cascading Style Sheets). In the beginning of the web era, it was quite common to make visual adjustments with HTML.
HTML has evolved from the now largely disappeared meta-language SGML (Standard Generalized Markup Language), a recognised ISO Standard (8879:1986). The same core notation of the SGML elements is also found in HTML. They are usually marked by a tag pair consisting of the start tag <> and the end tag </>. For some elements, the end tag isn’t required; furthermore, there are some empty elements like the line break <br>. In addition to the tags, the following HTML features are also inherited from SGML:
- Document type declaration: information about the HTML version in use e.g. <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
- Use of character entities: entities stand in for reserved or recurring characters, for example, &lt; for “<” or &amp; for “&”.
- Comment labelling: comments are added in HTML according to the <!--Comment--> pattern.
- Attributes: supplementary properties of tags according to the <tag attribute="value"> pattern.
Which software do you need to write HTML code?
At the beginning of our HTML course, the question is: which software is best for writing HTML code? There is no general answer for this. On the one hand, there are so few requirements for the programme that a simple text editor (found on every operating system) is sufficient. On the other hand, special HTML applications offer clear simplifications in code writing. How suitable are the various options for learning HTML?
Simple text editors
You do not need sophisticated software in order to write clean HTML code. A simple editor like the Windows editor, also known as Notepad, or the Mac equivalent, TextEdit (in plain text mode) are sufficient basic options. You don’t have the option of changing the layout of the text, but this is the task of HTML formatting anyway. You can also theoretically use word processing programmes such as Microsoft Word or OpenOffice Writer, but you will not benefit from the added features that can help when learning HTML. In some cases, the superfluous features even slow down the learning process so you’re on the right track when using a simple text editor to acquire the HTML basics. These editors are pre-installed on every standard operating system.
In addition to simple editors and complex word processing programmes, there are also special HTML editors that provide assistance: for example, these applications offer syntax highlighting, which gives you an excellent overview of the written code and makes any syntax errors easier to spot. Another standard feature is the auto-completion function that provides you with suggestions for extending or completing the code when writing HTML tags. This feature is also able to automatically complete end tags. Many HTML editors have a preview function that allows you to check the preliminary results of your code lines at any time, just by pressing a button. A highly-recommended editor for Windows users is the free, GPL-licensed Notepad++. A free solution for Unix operating systems is Vim.
There’s another option that has its own charm and is integrated into almost all website building kits and content management systems. These are HTML editors with real-image representation, better known as WYSIWYG editors. The acronym stands for the basic idea of these programmes 'What You See Is What You Get'. These editors have been developed specifically for generating HTML code and you really don’t need very much expertise at all in the markup language to use them. Just like in a word processing programme, you can structure your text using pre-made menu buttons without needing to place a single HTML tag on the page. The WYSIWYG editor generates that in the background, simultaneously, which has its clear advantages. For learning HTML, editors such as BlueGriffon aren’t very suitable – even if you are able to look at the generated code anytime you want.
Creating the first HTML pages
In the first step of the HTML tutorial, you will create a simple page that you can display using your browser. However, this isn’t a valid HTML page that has been built according to certain standards yet, but rather a pure test page. In order to create this page as well as other HTML examples in this tutorial, we have decided to use the Notepad++ editor mentioned earlier. If you are using a different programme, the procedure may differ slightly from the following.
First, open the editor and save the new file under the name test. Choose 'Hypertext Markup Language File' as the file format, so that the browser knows that it’s an HTML page later on. If you are using a simple editor, you may have to select the file type 'All Files' (encoding: UTF-8) and define the HTML label directly in the 'File name' field by saving the file under the name test.html.
The generated file should now be displayed with your web browser’s icon. By double-clicking, you can open the page, but since all content is missing, you will see only a white page. So, in the next step, add the small sample text 'This is my first webpage!', save the document, and reopen the test.html file. The result should look something like this:
HTML: basics of the text structure
You have now successfully created your first web page – even though you have not yet used any HTML markup. However, if you insert a structured text with headings and paragraphs in the same way, you will find that you don’t get very far without tags. The formatting that you have added with a word processing programme, for example, disappears in the browser view: breaks are automatically removed, blank spaces are grouped together, etc. The solution is to label the different text modules as such using the appropriate structural elements – in other words, to take a step into the world of HTML.
Define paragraphs using the <p> tag
To indicate paragraphs, you need the <p> tag. The start tag marks the beginning of the paragraph and the end tag marks when the paragraph is over. The text is then placed between these two markers. In all HTML versions (except XHTML), the closing tag is optional, but it is good practice to include it when learning HTML. You can directly test the accuracy of the paragraph’s definitions on the newly-created test page by adding another section of text and marking them both using this tag:
<p>This is my first webpage!</p> <p>This is the second paragraph of my first webpage.</p>
Placing the headings: the <h> tag
In order to structure your website’s text sections properly, it’s important to use headings. With HTML, you not only have the general ability to label these, but you can also set a clear hierarchy for all headlines you want to use. The tags <h1> to <h6> can be used, with <h1> being the main heading of the website. You should use this tag only once per page, unlike <h2> and the other heading tags. It is important to keep to a correct hierarchical order and not to jump between the different levels so that both readers and search engines can understand the text structure from the headings. We will add a main heading and a first sub-heading as an example on our test page:
<h1>Main heading: my first webpage</h1> <p>This is my first webpage!</p> <h2>Second heading</h2> <p>This is the second paragraph of my first webpage.</p>
Emphasising passages and words using italic or bold features: <i>, <em>, <b>, and <strong>
One of the most important HTML basics is the ability to emphasise individual text passages, which directs the reader’s attention to the most important elements. For example, you can use the <i> and <em> tags to italicise phrases, technical expressions, or thoughts. However, italic text generally slows down the reading flow, which is why you should use these tags sparingly. More important are the elements <b> and <strong>, which make words and text excerpts bold. Use <b> for content that should simply stand out visually; the <strong> tag, by contrast, marks words as genuinely important, signalling to browsers and assistive software that the content itself carries weight.
To illustrate the tags, we will extend our HTML code a bit:
<h1>Main heading: <i>my first webpage</i></h1> <p>This is my <strong>first</strong> webpage!</p> <h2>Second heading</h2> <p>This is the second paragraph of my <em>first webpage</em>.</p> <p><b>Note</b>: a typical example for the b-tag.</p>
Creating lists: tailored lists using the <ul>, <ol>, and <li> tags
Lists aren’t only helpful when it comes to shopping: when designing texts, they can come in handy for loosening individual paragraphs and therefore optimising the reader’s experience. With HTML, you can create both unordered and ordered lists for your web project. Use the <ul> tag to create unordered lists and <ol> to create ordered lists. The individual list points can be defined with the <li> tag, which only works in combination with one of the two types of lists. Test how HTML lists work by using the following code:
<ul> <li>first unordered-list item</li> <li>second unordered-list item</li> <li>third unordered-list item</li> </ul>
If you want to turn your list into a numbered list, simply swap the list type tag:
<ol> <li>first ordered-list item</li> <li>second ordered-list item </li> <li>third ordered-list item </li> </ol>
Presenting structured data using tables: <table>, <tr>, and <td>
For many years, it has been customary to use HTML tables, not only for presenting complex data in a practical way but also for structuring the complete layout of a web page or text consisting of several columns. With the rise of CSS, however, this additional visual role has fallen into the background more and more, which means that tables are today used mainly for their basic function – presenting data. Each table consists of at least three components:
- <table>: the <table> start or end tag identifies the start or end of an HTML table. The browser can’t really do much with this markup alone since the tag doesn’t show the number of rows or the number of columns.
- <tr>: use the <tr> ('table row') element to add a row to the table. There’s no limit on the number.
- <td>: only once you’ve added columns have you completed the basic structure of your table. The tag <td> ('table data') is logically subordinate to a <tr> tag because one or more data cells are to be generated within one line. The content of a data field is between the opening <td> and the closing </td> element.
In order to understand the somewhat complex structure, we will now create a simple HTML table consisting of only one row and two columns:
<table> <tr> <td>first data field</td> <td>second data field</td> </tr> </table>
The preview of the generated HTML code makes it appear that an error may have occurred and that the table isn’t working as planned. The two columns haven’t been defined and it doesn’t look like a table at all. There is, however, a simple explanation for this: by default, HTML table cells do not have any visual borders. For this typical table identification, you must extend the <table> tag using the border attribute, including the value 1:
<table border="1"> <tr> <td>first data field</td> <td>second data field</td> </tr> </table>
HTML offers the possibility of highlighting column headings. For this, it is necessary to enclose the header row in the <thead> tag and to replace the <td> labelling of its data cells with <th> tags. To create an example table with four rows and three columns, including the heading row, use the following HTML code:
<table border="1"> <thead> <tr> <th>Column heading 1</th> <th>Column heading 2</th> <th>Column heading 3</th> </tr> </thead> <tr> <td>1</td> <td>2</td> <td>3</td> </tr> <tr> <td>4</td> <td>5</td> <td>6</td> </tr> <tr> <td>7</td> <td>8</td> <td>9</td> </tr> </table>
Basic HTML framework: this is the basic structure of webpages
This section of our HTML tutorial is about the general structure of a website. HTML documents contain not only text, links, and other integrated content such as images and videos, but also the aforementioned meta information, which tells the browser, as well as search engine crawlers, how they should read the pages. When a visitor accesses a web page, they don’t see many of these additional details, only some in the title bar of the browser window, in the tab, in the history, or as a headline for search engine entries.
Keeping the code to a minimum while still including all necessary components means that the HTML page will look like this:
<!DOCTYPE html> <html lang="en"> <head> <meta charset="UTF-8"> <meta name="description" content="Here you can find all information about the HTML basic framework"> <title>Learn HTML: The basic framework</title> </head> <body> </body> </html>
The file is therefore composed of the three areas DOCTYPE, head, and body, whereby the first component, the document type declaration, is the only part that appears before the <html> tag. This is where you let the interpreting applications know which standard you used when creating the document – in this case, HTML5. Every browser recognises this document type; in addition, it allows you to use both current HTML5 and older HTML code, which is why you should use it by default, especially when learning HTML.
In the following <head> section, write down the header data of your HTML file. This includes, for example, the character encoding to be used by the browser (meta charset), the meta description (meta name="description"), and the title of the web page (title) that appears in the browser header. In addition, you can make countless other meta statements; even the information included in our example is optional, but you are recommended to leave it in for a good search engine evaluation. One exception is the <title> information, which, in addition to the document type declaration, is the only mandatory element of an HTML page. In the header, you can later add the link to your CSS file, which structures the website’s design. The <body> section contains everything that is to be displayed to the user in their browser.
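Such a stylesheet reference could, for example, look like the following line placed inside the <head> section (the file name style.css is simply a placeholder for whatever you call your own stylesheet):
<link rel="stylesheet" href="style.css">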
Note: the tags for the HTML basic structure are optional and can theoretically be omitted. In this case, the browser automatically creates the tags <html>, <head>, and <body> and assigns the appropriate elements to them. It is, however, customary to write this information yourself. In addition, the breakdown makes the document easier to read, which is especially beneficial for HTML beginners.
How to integrate images, photos, and graphics onto your web pages
Texts are indisputably the most important part of general HTML pages. Visual stimuli in the form of images, photographs, or graphics, however, influence user experience greatly, which is why they are just as indispensable for a successful web presence. The following three formats are supported by all browsers:
- JPG: for photos or photo-like graphics with strong contrast and colourful diversity, you usually use the JPG format. JPG images support 16 million colours and heavily compress files although this may result in some loss of quality.
- PNG: graphics and logos are best saved in PNG format, which can display 256 (PNG8) to 16.7 million colours (PNG24). Unlike JPG, PNG compresses without reducing quality, but the file is also larger.
- GIF: GIF files can only display 256 colours, but are still required in web developments because they can be used to display small animations, navigational elements, or simple graphics.
Regardless of the format, you can include an image with the <img> (image) tag in the desired web page. In addition, it is necessary to specify the storage location of this image, otherwise the browser can’t find it and therefore can’t display it. For this, you need the src (source) attribute and the relative path name of the image file. Simply create a subfolder named 'Images' in your website’s project folder (which also contains the HTML document) and store all relevant images there. Our HTML tutorial sample file has the file name graphic1.png and is located in the folder entitled 'Images'. The code used to integrate this graphic looks like this:
<img src="images/graphic1.png" />
However, there are other attributes for images that are recommended. You can specify the width and the height of the image. These values enable the browser to reserve a placeholder of the appropriate size on the page until the image has completely loaded. The browser can then display the surrounding content without waiting for the image file to finish loading, which in turn speeds up the website’s general loading time. There is also the alt attribute, which can be used to define an alternative text for the image. You should include it in your HTML basic repertoire for a variety of reasons, since it…
- Contributes to the page’s accessibility by offering an alternative to visually impaired users or when the page won’t load.
- Helps the search engine crawlers to classify images and also counts as additional content.
- Is required by the HTML specification (with only a few exceptions).
To extend these attributes, the HTML code looks something like this:
<img src="images/graphic1.png" width="960" height="274" alt="Learn HTML: this is how the embedded sample graphic 'click here' appears:" />
Note: the values for width (960) and height (274) used here are the original dimensions of the sample graphic. The browser interprets these values as pixels.
Linking pages and content – the important role of hyperlinks
Hyperlinks, better known under the abbreviation 'links', are the main reason for the internet’s incomparable success. Without these electronic links, which lead the user to a different website or encourage them to carry out an action such as downloading a product, networking such as the internet wouldn’t be possible. There are three types of links:
- Internal links: internal links are used to structure the entire website and show visitors around. There are different structures that you can use. For a linear structure, for example, the user follows a certain path from page to page. Whereas for a tree structure, the user navigates from a home page to various subordinate pages. You can also place internal links within a single page, which allow the user to jump directly from the bottom of the page to the top (a small sketch of such an in-page jump link follows this list).
- External links: external links are those that lead the user to different web projects. You use this type of link to offer your visitors extra value by directing them to another website. However, you should make sure that you don’t place too many links on a page, and also make sure that the content you’re linking to is trustworthy. Otherwise, you could be penalised by the search engine.
- Other links: not all links direct to HTML documents. Depending on the link target, clicking a link can also trigger a download, open an e-mail client, or activate a PDF viewer.
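As a sketch of an in-page jump link: give the target element an id of your choice and refer to it with a leading hash character in the href attribute (the id value 'top' used here is just an example name):
<h1 id="top">Main heading: my first webpage</h1>
<!-- ... further page content ... -->
<a href="#top">Back to the top of the page</a>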
Internal links: how to link individual pages of your web presence
While you will likely have to design and develop a complex link structure for your web presence at a later stage in your HTML studies, this HTML crash course will show you how to internally link two pages. In addition to the test.html you already created, you need another HTML document. Be sure to give this second file a different name e.g. targetpage.html, and be sure that it can be found in the same directory as the test page.
In order to create a link, you require the HTML tag <a> (anchor). On its own, it only indicates that a link is present; it therefore cannot stand alone and needs the href (hypertext reference) attribute to specify the link destination. The link text, which the browser displays in blue and underlined by default, should be written between the opening and closing <a> tags. Place the first internal link by adding the following code line to test.html:
<a href="targetpage.html">Jump to target page</a>
If you have set up the link correctly, clicking on it should open up an empty page since the targetpage.html is still unprocessed. For this reason, we will add a new internal link to this document in the next step, which will take you back to the source page when clicked on:
<a href="test.html">Back to previous page</a>
External links: how to link to content on other websites
If you want to add an external link to your page, you don’t require a different tag to the one used in internal linking and you do not need to know the directory where the page is saved. Links to third-party content only require the full URL – this contains all the required information. Since the linked content is not on your own web server, you have no influence on how an external link works, so it’s recommendable to check it regularly. Try to formulate an informative anchor text since meaningless placeholders such as 'here' do not give the visitor any information about where the link leads. Try using the following code when linking externally, which creates a link to our Digital Guide:
<p>HTML tutorial and numerous guides on the topic of websites, hosting, and much more at the <a href="https://www.ionos.com/digitalguide">1&1 IONOS Digital Guide</a></p>
When linking externally, you’re leading visitors away from your own web project. Theoretically, they can come back using the 'back' button, but many people don’t realise they have this option. There is a way to make the new page automatically open in a separate tab, which means that they won’t have to leave your website. The attribute target describes where a linked document should be opened. With the value _blank, you specify that you want the page to open in a new window or tab. The link code looks like this:
<a href="https://www.ionos.com/digitalguide" target="_blank">1&1 IONOS Digital Guide</a>
On the home stretch – how to put your HTML page online
The sample pages that you have created in the HTML tutorial can be opened normally on your computer. However, if you send the corresponding page URLs to other people to show them the results, they won’t be able to do much. This is because the HTML documents and any embedded images, etc. are only stored locally on your PC and therefore can’t be sent to the requesting browsers. So that anyone can view your creation, you must first put your web project online and find the right hosting setup.
The first step is to find a suitable domain (web address) for your web project and to register it. You can register with any internet provider – at 1&1 IONOS we offer many options for domain registration. The second step is to create the appropriate basis for your web project by either setting up and configuring your own web server or renting it from a web hosting provider. If you’re an HTML beginner, we recommend the latter option: you don’t have to deal with selecting, setting up, and maintaining the server software. You simply choose the desired web space package, which gives you the necessary storage space for the documents of your project.
For the last step, you need to upload your pages onto the rented web space. You usually need an FTP programme for this. Using this client software, you can exchange data with the provider’s FTP server using the File Transfer Protocol. We have some excellent programs for you in the following guide. Detailed instructions and login data for accessing the FTP server are available directly from the respective hosting provider.
Note: when uploading to the FTP server, the directory structure remains, therefore it is worth investing in structuring from the outset.
In the course of the tutorial, we emphasised several times that although HTML is the foundation of every website, the design work in modern web development is handled by an almost completely different language: which colours the individual elements have, the layout of a page, or which font and size are used for text passages, headings, and other text elements. You can define all these points using the stylesheet language Cascading Style Sheets (CSS). The strict separation of content and design makes analysing and maintaining major web projects much easier. After learning HTML, it’s recommended that you familiarise yourself with CSS so that you can give your HTML pages the desired appearance.
Multiplication (often denoted by the cross symbol "×") is the mathematical operation of scaling one number by another. It is one of the four basic operations in elementary arithmetic (the others being addition, subtraction and division).
Because the result of scaling by whole numbers can be thought of as consisting of some number of copies of the original, whole-number products greater than 1 can be computed by repeated addition; for example, 3 multiplied by 4 (often said as "3 times 4") can be calculated by adding 4 copies of 3 together:
3 × 4 = 3 + 3 + 3 + 3 = 12
Here 3 and 4 are the "factors" and 12 is the "product".
Educators differ as to which number should normally be considered as the number of copies, and whether multiplication should even be introduced as repeated addition. For example, 3 multiplied by 4 can also be calculated by adding 3 copies of 4 together:
3 × 4 = 4 + 4 + 4 = 12
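For readers who like to see the idea in code, here is a minimal sketch in Python of multiplication as repeated addition (for non-negative whole numbers only; the function name is just an illustration):

def multiply_by_repeated_addition(a, b):
    # Add b copies of a together; only meaningful for non-negative whole numbers.
    total = 0
    for _ in range(b):
        total += a
    return total

print(multiply_by_repeated_addition(3, 4))  # prints 12, the same as 3 * 4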
Multiplication of rational numbers (fractions) and real numbers is defined by systematic generalization of this basic idea.
Multiplication can also be visualized as counting objects arranged in a rectangle (for whole numbers) or as finding the area of a rectangle whose sides have given lengths (for numbers generally). The area of a rectangle does not depend on which side you measure first, which illustrates that the order in which numbers are multiplied does not matter (the commutative property).
In general, multiplying two measurements gives a result of a new type, depending on the measurements. For instance, multiplying a speed by a time gives a distance: travelling at 50 kilometres per hour for 3 hours covers 150 kilometres.
The inverse operation of multiplication is division. For example, 4 multiplied by 3 equals 12. Then 12 divided by 3 equals 4. Multiplication by 3, followed by division by 3, yields the original number.
Multiplication is also defined for other types of numbers (such as complex numbers), and for more abstract constructs such as matrices. For these more abstract constructs, the order that the operands are multiplied in sometimes does matter.
The word galaxy is derived from the ancient Greek term for our own galaxy, galaxias, which means milky circle. According to Greek legend, the Milky Way is so named because the dusty band of stars spreading across the night sky was thought to be milk sprayed across the heavens by Zeus' wife Hera as she breastfed. Today, the basis for how we classify galaxies is still rooted in morphology, or how the galaxies appear. Astronomers group galaxies by shape, and although there are many different types of galaxies, most fall into one of three categories: spiral, elliptical or irregular.
While a solar system consists of all of the objects that orbit a particular star, a galaxy is a larger unit of astronomical assemblage. A galaxy is a collection of solar systems, stars, nebulae, dust, planets and gas bound together by gravity. Galaxies are separated from one another by vast stretches of space. Galaxies can be large or small, containing as few as a million or more than 1 trillion stars. Astronomers estimate that there may be more than 100 billion galaxies in the universe.
Our own galaxy, the Milky Way, is a spiral galaxy. Spiral galaxies resemble pinwheels or flat disks of stars with nuclei (bright spots) in the center. Spirals wrap around these bright spots. The spirals themselves are made from "density waves" which move through space like a wave through water. The waves disrupt matter as they pass and squeeze interstellar gases, forming new stars.
Elliptical galaxies are football shaped, fat in the center and tapered toward the ends. Stars in an elliptical galaxy spread out evenly from the galaxy's center. The largest galaxies in the universe are giant elliptical galaxies which may have more than 1 trillion stars. Some elliptical galaxies are as much as 20 times bigger than the Milky Way. Elliptical galaxies appear reddish, indicating that they are formed by stars which are older and cooler than our own sun.
Unlike elliptical or spiral galaxies, irregular galaxies have no visible pattern. These are the smallest of the galaxies and may contain as few as 1 million stars. Some astronomers believe that irregular galaxies may act as building blocks from which other galaxies are formed.
How did galaxies originate? Astronomers believe that after the big bang, the explosion which began the universe 10 billion to 20 billion years ago, gravity began to compress masses of free-floating gas. Two main theories, bottom-up and top-down, explain what happened next. According to bottom-up theories, clusters began to form and assembled together into the larger units we know as galaxies. Top-down theories suggest that galaxies formed first, and the stars and other objects within them were subsequently produced.
Nitric acid is a strong acid: it dissociates almost completely in water, so its solutions have a low pH. It is a ‘sticky’ molecule that readily adsorbs onto surfaces, especially when water is present on the surface. Pure nitric acid is a colorless liquid, but older samples often acquire a yellowish tint due to decomposition into nitrogen oxides and water.
The chemical formula of nitric acid is HNO3, and it is also known as the spirit of niter and aqua fortis, which is a Latin term for ‘strong water.’
It is a highly corrosive and toxic substance that can cause severe skin damage if used without safety precautions. The acid reacts with oxides, hydroxides, and metals such as silver, copper, and iron, to form nitrate salts.
Usually, the nitric acid available in shops is a 68 percent aqueous solution. When its concentration (in water) is over 86 percent, it is called fuming nitric acid. It is stored in a tightly-closed container in a dry, cool, and well-ventilated area.
Below, we explain how this acid is produced, what it looks like on a molecular scale, what its chemical and physical properties are, and where it is mostly used.
Molar mass: 63.012 g/mol
Appearance: Colorless, or yellow/red fuming liquid
Odor: Unpleasantly bitter or pungent, suffocating
Conjugate base: Nitrate
Acidity (pKa): -1.4
Melting point: 231 K or -42 °C
Boiling point: 356 K or 83 °C (of pure acid)
Density: 1.51 g/cm3 (pure acid); 1.41 g/cm3 (68% aqueous solution)
HNO3 has one nitrogen atom (blue), one hydrogen atom (white), and three oxygen atoms (red). The nitrogen atom is bonded to all three oxygen atoms and carries a charge +1. One oxygen atom carries a charge -1, one is bonded to hydrogen, and the other one forms a double bond with nitrogen.
Since oxygen has more tendency to attract shared electrons to itself than nitrogen, it carries a negative charge while nitrogen atom carries a positive charge. The overall structure of the nitric acid is flat or planar.
To draw a lewis structure of the nitric acid, we need to count the total number of valence electrons in the HNO3 molecule.
- Valence electron in a single nitrogen atom = 5
- Valence electron in a single hydrogen atom = 1
- Valence electron in three oxygen atoms = 18 (6*3)
This gives us the total number of valence electrons in a single HNO3 molecule: 5 + 1 + 18 = 24. Since nitrogen is less electronegative than oxygen, we place the nitrogen atom at the center of the structure.
The next step is to form the bonds and mark the lone pairs on the atoms. Then come the formal charges on each atom: the nitrogen atom gets a +2 charge, while two of the oxygen atoms each get a -1 charge.
Finally, we need to minimize charges on atoms to make the structure stable. This can be done by converting a lone pair on one oxygen atom into a bond. The final structure consists of two single bonds between the nitrogen atom and two oxygen atoms, and a double bond between the nitrogen atom and the remaining oxygen atom.
There are two correct ways to draw the lewis structure of HNO3. Thus, it has two major resonance forms. The double-headed arrow in the above image indicates that there is more than one way to draw the nitric acid structure.
How Is It Produced?
Two methods are used to produce HNO3. The first one utilizes oxidation, condensation, and absorption to synthesize weak HNO3 with concentrations between 30 and 70 percent. The second method produces strong HNO3 (with 90 percent concentration) from weak HNO3 by combining dehydration, bleaching, condensation, and absorption processes.
Production of weak nitric acid
In the United States, most of the nitric acid is created by the high-temperature catalytic oxidation of ammonia. This is called the Ostwald process. It involves three steps:
1) Ammonia oxidation
4 NH3 + 5 O2 → 4 NO + 6 H2O
The ammonia/air mixture (1:9) is oxidized at a high temperature (750-800 ℃) as it passes through a catalytic converter. The catalyst is usually made of 90% platinum and 10% rhodium gauze. This (exothermic) reaction produces nitric oxide and water as steam.
2) Nitric oxide oxidation
2 NO + O2 → 2 NO2
The nitric oxide formed in the previous reaction is oxidized: it reacts non-catalytically with residual oxygen to form nitrogen dioxide. It’s a slow, homogeneous reaction that highly depends on pressure and temperature. At high pressure and low temperatures, this reaction produces the maximum amount of nitrogen dioxide in very little time.
3 NO2 + H2O → 2 HNO3 + NO
In the final reaction, the nitrogen dioxide is absorbed by water. This yields the desired product (nitric acid in dilute form) along with nitric oxide. The concentration of HNO3 depends on the pressure, the temperature, the number of absorption stages, as well as the concentration of nitrogen oxides entering the absorber.
Production of strong nitric acid
High-strength HNO3 is obtained by concentrating the weak HNO3 through extractive distillation. The distillation is carried out in the presence of a dehydrating agent, such as sulphuric acid of about 60% concentration.
A flow diagram of high-strength HNO3 production
This is how the process goes: the strong sulfuric acid and the weak nitric acid enter a packed dehydrating column at atmospheric pressure. The concentrated HNO3 leaves from the top of the column as roughly 99% vapor, which also contains small amounts of oxygen and nitrogen oxides from nitric acid dissociation.
The acid passes through a bleacher and enters a condenser system that separates it from nitric oxide and oxygen. An absorption column takes these byproducts and combines nitric oxide with auxiliary air to produce nitrogen dioxide. This nitrogen dioxide gas is then recovered as weak HNO3, and minor unreacted and inert gases are ejected in the atmosphere.
Production in laboratory
In the lab, HNO3 is usually synthesized via thermal decomposition of copper nitrate. This yields copper oxide, nitrogen dioxide, and oxygen. The latter two are passed through water to produce nitric acid.
2 Cu(NO3)2 → 2 CuO + 4 NO2 + O2
The nitrogen dioxide is then absorbed in water:
2 NO2 + H2O → HNO2 + HNO3
In the past couple of decades, researchers have developed electrochemical means to make anhydrous acid from concentrated HNO3. This process is carried out by regulating the electrolysis current until the required products are obtained.
The 68% solution of HNO3 has a boiling point of 120.5 °C at 1 atmospheric pressure. The pure HNO3, on the other hand, boils at 83 °C. At room temperature, this concentrated form looks like a colorless liquid.
Since nitric acid has the tendency to decompose in an open environment, it is kept in glass bottles.
4 HNO3 → 2 H2O + 4 NO2 + O2
The nitrogen oxides generated in the decomposition reaction either completely or partially dissolve in the acid, producing small variations in the vapor pressure above the liquid. When they remain dissolved, they give the acid a yellow color, or red at higher temperatures.
The concentrated nitric acid gives off white fumes when it comes in contact with air, while acid dissolved with nitrogen dioxide produces reddish-brown vapors.
Based on concentration, strong HNO3 can be further categorized into two groups: red and white fuming nitric acid. The former contains 84% nitric acid, 13% dinitrogen tetroxide, and 1-2% water. In contrast, white fuming nitric acid contains no more than 2% water and only a very small amount of dissolved nitrogen dioxide (0.5%).
Fuming HNO3 with dissolved nitrogen oxide
Among the several important reactions of HNO3 are –
- Neutralization with ammonia to form ammonium nitrate.
- Nitration of toluene and glycerol to form explosive trinitrotoluene (TNT) and nitroglycerin, respectively.
- Oxidation of metals to the corresponding nitrates or oxides.
- Preparation of nitrocellulose.
And since it is a strong oxidizing agent, it reacts violently with various non-metallic substances. Products of such explosive reactions depend on temperature, acid concentration, and the reducing agent involved.
The chemical and physical properties of nitric acid make it a valuable substance. It has several different applications in various fields, especially in the chemical and pharmaceutical industries.
Fertilizers: Almost 80% of the manufactured nitric acid is used to make fertilizers. More specifically, it is used for producing ammonium nitrate (NH4NO3) and calcium ammonium nitrate, which find applications as fertilizers.
HNO3 + NH3 → NH4NO3
Explosives: Ammonium nitrate is also used as an explosive or blasting agent in mining, civil construction, quarrying, and other applications. Examples of explosives containing ammonium nitrate include ANFO, Amatol, and DBX.
Dyes and Plastics: Calcium ammonium nitrate is used in some ice/gel packs as an alternative to ammonium nitrate. Nitric acid itself is also used for producing chemicals and solutions that go into the manufacturing of dyes, plastics, and fibers.
Rocket propellants: Red and white fuming nitric acid is used in liquid-fueled rockets as an oxidizer. During World War II, the German military used red fuming nitric acid in a few rockets.
Woodworks: Very weak HNO3 (with 10% concentration) is used for artificially aging pine and maple woods. It gives a vintage oil-finished wood look.
Other Uses: A slightly concentrated solution named Nital is used to etch metal to reveal its structure at the microscale. Refluxing nitric acid is used in the purification processes of carbon nanotubes. In electrochemistry, HNO3 is used as a chemical doping agent for organic semiconductors.
Read: Phosphoric Acid [H3PO4]: Structure | Properties | Uses
In 2019, the worldwide nitric acid market size was valued at about $24 billion. It is projected to reach over $30 billion by 2027, with a compounded annual growth rate of 3.3%. The key factor driving the market is the growing demand for adipic acid, which is used to produce nylon resins and fibers for automotive interiors.
The rapid development of the construction, furniture, and agriculture industries will further drive this growth. And since China and the US have a large number of chemical manufacturers, both countries are projected to witness a significant increase in the nitric acid market over the forecast period.
Does HNO3 conduct electricity?
Like other strong acids, nitric acid is a good conductor of electricity. Studies show that treating certain materials with this acid can improve their electrical conductivity by up to 200 times.
Does HNO3 dissolve gold?
Nitric acid does not react with a few precious metals, such as pure gold and the platinum-group metals. However, it can attack some gold alloys containing less noble metals such as silver and copper. Colored gold alloys, for example, are attacked by nitric acid, which changes their surface color.
Although pure gold shows no effect when it comes in contact with nitric acid, it does react with aqua regia, a mixture of nitric acid and hydrochloric acid, optimally in a molar ratio of 1:3. Some jewelry shops use nitric acid as a cheap means to rapidly detect low-gold alloys (less than 14 karats).
Read: 8 Strongest Acids Ever Known To Us
How is HNO3 neutralized?
At higher concentrations, the outgassing of nitric acid can be quite significant, and thus decent ventilation is essential. It can be neutralized with any inorganic base like sodium hydroxide or lime.
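For example, neutralization with sodium hydroxide proceeds as:
HNO3 + NaOH → NaNO3 + H2O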
Such neutralization reactions emit a lot of heat. For example, neutralizing a 10% solution of nitric acid will yield a 20 °C temperature rise, while neutralizing a 70% solution will yield a 120 °C temperature rise, which is hot enough to cause steam explosions.
If this is your first time hearing about the OLS assumptions, don’t worry. If this is your first time hearing about linear regressions though, you should probably get a proper introduction. In the linked article, we go over the whole process of creating a regression. Furthermore, we show several examples so that you can get a better understanding of what’s going on. One of these is the SAT-GPA example.
Linear Regression Example
To sum up, we created a regression that predicts the GPA of a student based on their SAT score.
Below, you can see the OLS regression results table, provided by statsmodels.
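For context, such a table can be produced with just a few lines of Python. The following is only a sketch: the file name sat_gpa.csv and the column names SAT and GPA are assumptions, not the exact code from the original example.

import pandas as pd
import statsmodels.api as sm

data = pd.read_csv("sat_gpa.csv")    # hypothetical data set with SAT and GPA columns
y = data["GPA"]                      # dependent variable
x = sm.add_constant(data["SAT"])     # independent variable plus the intercept term
results = sm.OLS(y, x).fit()         # ordinary least squares fit
print(results.summary())             # prints the full regression table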
Some of the entries are self-explanatory, others are more advanced. One of them is the R-squared, which we have already covered.
Now, however, we will focus on the other important ones.
First, we have the dependent variable, or in other words, the variable we are trying to predict.
As you can tell from the picture above, it is the GPA. Especially in the beginning, it’s good to double check if we coded the regression properly through this cell.
After that, we have the model, which is OLS, or ordinary least squares.
The method is closely related – least squares.
In this case, there is no difference but sometimes there may be discrepancies.
What Is the OLS
OLS, or the ordinary least squares, is the most common method to estimate the linear regression equation. Least squares refers to minimizing the sum of squared errors, or SSE.
You may know that a lower error results in a better explanatory power of the regression model. So, this method aims to find the line, which minimizes the sum of the squared errors.
Let’s clarify things with the following graph.
You can tell that there are many lines that fit the data. The OLS determines the one with the smallest error. Graphically, it is the one closest to all points simultaneously.
The expression used to do this is the sum of squared errors, written out below.
But how is this formula applied? Well, this is a minimization problem that uses calculus and linear algebra to determine the slope and intercept of the line. After you crunch the numbers, you’ll find the intercept is b0 and the slope is b1.
Knowing the coefficients, we can write down the regression equation, shown below together with the minimization problem it comes from.
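In standard notation (the original shows these expressions as images), the minimization problem, the fitted line, and the resulting closed-form coefficients for a simple regression are usually written as:

$$\min_{b_0,\,b_1}\ \mathrm{SSE} = \sum_{i=1}^{n}\bigl(y_i - b_0 - b_1 x_i\bigr)^2, \qquad \hat{y} = b_0 + b_1 x$$

$$b_1 = \frac{\sum_{i}(x_i-\bar{x})(y_i-\bar{y})}{\sum_{i}(x_i-\bar{x})^2}, \qquad b_0 = \bar{y} - b_1\bar{x}$$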
Minimizing the SSE
We can try minimizing the sum of squared errors on paper, but with datasets comprising thousands of values, this is almost impossible.
Nowadays, regression analysis is performed through software. Beginner statisticians prefer Excel, SPSS, SAS, and Stata for calculations. Data analysts and data scientists, however, favor programming languages, like R and Python, as they offer limitless capabilities and unmatched speed. And that’s what we are aiming for here!
Alternative Methods to the OLS
Finally, we must note there are other methods for determining the regression line. They are preferred in different contexts.
However, the ordinary least squares method is simple, yet powerful enough for many, if not most linear problems.
The OLS Assumptions
So, the time has come to introduce the OLS assumptions. In this tutorial, we divide them into 5 assumptions. You should know all of them and consider them before you perform regression analysis.
The First OLS Assumption
The first one is linearity. It is called a linear regression. As you may know, there are other types of regressions with more sophisticated models. The linear regression is the simplest one and assumes linearity. Each independent variable is multiplied by a coefficient and summed up to predict the value.
The Second OLS Assumption
The second one is no endogeneity of the regressors. Mathematically, this is expressed as the covariance between the error and each of the Xs being 0 for any error and any x.
The Third OLS Assumption
The third OLS assumption is normality and homoscedasticity of the error term. Normality means the error term is normally distributed. The expected value of the error is 0, as we expect to have no errors on average. Homoscedasticity, in plain English, means constant variance.
The Fourth OLS Assumption
The fourth one is no autocorrelation. Mathematically, the covariance of any two error terms is 0. That’s the assumption that would usually stop you from using a linear regression in your analysis.
The Fifth OLS Assumption
And the last OLS assumption is no multicollinearity. Multicollinearity is observed when two or more variables have a high correlation between each other.
These are the main OLS assumptions. They are crucial for regression analysis. So, let’s dig deeper into each and every one of them.
OLS Assumption 1: Linearity
The first OLS assumption we will discuss is linearity.
As you probably know, a linear regression is the simplest non-trivial relationship. It is called linear, because the equation is linear.
Each independent variable is multiplied by a coefficient and summed up to predict the value of the dependent variable.
How can you verify if the relationship between two variables is linear? The easiest way is to choose an independent variable X1 and plot it against the dependent Y on a scatter plot. If the data points form a pattern that looks like a straight line, then a linear regression model is suitable.
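A quick way to do this check in Python; the file and column names are hypothetical, matching the earlier sketch:

```python
import pandas as pd
import matplotlib.pyplot as plt

data = pd.read_csv("sat_gpa.csv")      # hypothetical file with the SAT-GPA sample
plt.scatter(data["SAT"], data["GPA"], alpha=0.6)
plt.xlabel("SAT (independent variable)")
plt.ylabel("GPA (dependent variable)")
plt.title("Linearity check: does the point cloud look roughly like a line?")
plt.show()
```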
An Example Where There is No Linearity
Let’s see a case where this OLS assumption is violated. We can plot another variable X2 against Y on a scatter plot.
As you can see in the picture above, there is no straight line that fits the data well.
Actually, a curved line would be a very good fit. Using a linear regression would not be appropriate.
Fixes for Linearity
Linearity seems restrictive, but there are easy fixes for it. You can run a non-linear regression or transform your relationship. There are exponential and logarithmic transformations that help with that. The quadratic relationship we saw before could be easily transformed into a straight line with the appropriate methods.
Important: The takeaway is, if the relationship is nonlinear, you should not use the data before transforming it appropriately.
OLS Assumption 2: No Endogeneity
The second OLS assumption is the so-called no endogeneity of regressors. It refers to the prohibition of a link between the independent variables and the errors, mathematically expressed as the covariance between the errors and each of the regressors being zero.
Think about it. The error is the difference between the observed values and the predicted values. When the assumption is violated, the error is correlated with our independent variables.
Omitted Variable Bias
This is a problem referred to as omitted variable bias. Omitted variable bias is introduced to the model when you forget to include a relevant variable.
As each independent variable explains y, they move together and are somewhat correlated. Similarly, y is also explained by the omitted variable, so they are also correlated. Chances are, the omitted variable is also correlated with at least one independent x. However, you forgot to include it as a regressor.
Everything that you don’t explain with your model goes into the error. So, actually, the error becomes correlated with everything else.
A Case in Point
Before you become too confused, consider the following. Imagine we are trying to predict the price of an apartment building in London, based on its size. This is a rigid model that will have high explanatory power.
However, from our sample, it seems that the smaller the size of the houses, the higher the price.
This is extremely counter-intuitive. We look for remedies and it seems that the covariance of the independent variables and the error terms is not 0. We are missing something crucial.
Fixing the Problem
Omitted variable bias is hard to fix. Think of all the things you may have missed that led to this poor result. We have only one variable but when your model is exhaustive with 10 variables or more, you may feel disheartened.
Critical thinking time. Where did we draw the sample from? Can we get a better sample? Why is bigger real estate cheaper?
Getting More Information
The sample comprises apartment buildings in Central London and is large. So, the problem is not with the sample. What is it about the smaller size that is making it so expensive? Where are the small houses? There is rarely construction of new apartment buildings in Central London. And then you realize the City of London was in the sample.
This is the place where most buildings are skyscrapers holding some of the most valuable real estate in the world. Because we had treated Central London as a single homogeneous area, we omitted the exact location as a variable. In almost any other city, this would not be a factor. In our particular example, though, the million-dollar suites in the City of London turned things around.
Let’s include a variable that measures whether the property is in the City of London. As you can see in the picture below, everything falls into place.
Larger properties are more expensive and vice versa.
Important: The incorrect exclusion of a variable, like in this case, leads to biased and counterintuitive estimates that are toxic to our regression analysis. The incorrect inclusion of a variable, as we saw in our adjusted R-squared tutorial, leads to inefficient estimates. Such variables don’t bias the regression, so you can simply drop them afterwards. When in doubt, just include the variables and try your luck.
Dealing with Omitted Variable Bias
What’s the bottom line? Omitted variable bias is a pain in the neck.
- It is always different
- Always sneaky
- Only experience and advanced knowledge on the subject can help.
- Always check for it and if you can’t think of anything, ask a colleague for assistance!
OLS Assumption 3: Normality and Homoscedasticity
So far, we’ve seen assumptions one and two. Here’s the third one. It comprises three parts:
- normality
- zero mean
- and homoscedasticity
of the error term.
The first one is easy. We assume the error term is normally distributed.
Normal distribution is not required for creating the regression but for making inferences. All regression tables are full of t-statistics and F-statistics.
These things work because we assume normality of the error term. What should we do if the error term is not normally distributed? The central limit theorem will do the job. For large samples, the central limit theorem applies for the error terms too. Therefore, we can consider normality as a given for us.
What about a zero mean of error terms? Well, if the mean is not expected to be zero, then the line is not the best fitting one. However, having an intercept solves that problem, so in real-life it is unusual to violate this part of the assumption.
Homoscedasticity means to have equal variance. So, the error terms should all have the same variance.
What if there was a pattern in the variance?
Well, an example of a dataset, where errors have a different variance, looks like this:
It starts close to the regression line and goes further away. This would imply that, for smaller values of the independent and dependent variables, we would have a better prediction than for bigger values. And as you might have guessed, we really don’t like this uncertainty.
A Real-Life Example
Most examples related to income are heteroscedastic: the variance of spending grows with income. If a person is poor, he or she spends a roughly constant amount of money on food, entertainment, clothes, etc. The wealthier an individual is, the higher the variability of his expenditure.
For instance, a poor person may be forced to eat eggs or potatoes every day. Both meals cost a similar amount of money. A wealthy person, however, may go to a fancy gourmet restaurant, where truffles are served with expensive champagne, one day. And on the next day, he might stay home and boil eggs. The variability of his spending habits is tremendous; therefore, we expect heteroscedasticity.
There is a way to circumvent heteroscedasticity.
- First, we should check for omitted variable bias – that’s always an idea.
- After that, we can look for outliers and try to remove them.
- Finally, we shouldn’t forget about a statistician’s best friend – the log transformation.
Naturally, log stands for a logarithm. You can change the scale of the graph to a log scale. For each observation in the dependent variable, calculate its natural log and then create a regression between the log of y and the independent Xs.
Alternatively, you can take the independent X that is causing you trouble and do the same.
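Here is a minimal sketch of both variants, assuming a hypothetical dataset with an "income" regressor and a "spending" dependent variable:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("income_spending.csv")        # hypothetical heteroscedastic data

# Semi-log model on y: regress log(y) on x.
X = sm.add_constant(df["income"])
model_log_y = sm.OLS(np.log(df["spending"]), X).fit()

# Semi-log model on x: regress y on log(x).
X_log = sm.add_constant(np.log(df["income"]))
model_log_x = sm.OLS(df["spending"], X_log).fit()

print(model_log_y.params, model_log_x.params)
```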
Let’s see an example. Below, you can see a scatter plot that represents a high level of heteroscedasticity.
On the left-hand side of the chart, the variance of the error is small. Whereas, on the right, it is high.
Here’s the model: as X increases by 1 unit, Y grows by b1 units.
Let’s transform the x variable to a new variable, called log of x, and plot the data. This is the new result.
Changing the scale of x would reduce the width of the graph. You can see how the points came closer to each other from left to right. The new model is called a semi-log model.
Transforming the Y Scale
What if we transformed the y scale, instead? Analogously to what happened previously, we would expect the height of the graph to be reduced.
The result is the following:
This looks like good linear regression material. The heteroscedasticity we observed earlier is almost gone. This new model is also called a semi-log model. Its interpretation is: as X increases by 1 unit, Y changes by approximately b1 × 100 percent. This is a very common transformation.
The Log-Log Model
Sometimes, we want or need to change both scales to log. The result is a log-log model. We shrink the graph in height and in width.
You can see the result in the picture below.
The improvement is noticeable, but not game-changing. However, we may be sure the assumption is no longer violated. The interpretation is: for each 1 percent change in x, y changes by b1 percent. If you’ve done economics, you would recognize such a relationship as an elasticity.
OLS Assumption 4: No Autocorrelation
The penultimate OLS assumption is the no autocorrelation assumption. It is also known as no serial correlation. Unfortunately, it cannot be relaxed.
Mathematically, it looks like this: the covariance of any two different error terms is 0, that is, Cov(εi, εj) = 0 for any i ≠ j. In other words, the errors are assumed to be uncorrelated.
Where can we observe serial correlation between errors? It is highly unlikely to find it in data taken at one moment of time, known as cross-sectional data. However, it is very common in time series data.
Think about stock prices – every day, you have a new quote for the same stock. These new numbers you see have the same underlying asset. We won’t go too much into the finance. But basically, we want them to be random or predicted by macro factors, such as GDP, tax rate, political events, and so on.
Unfortunately, it is common in underdeveloped markets to see patterns in the stock prices.
There is a well-known phenomenon, called the day-of-the-week effect. It consists in disproportionately high returns on Fridays and low returns on Mondays. There is no consensus on the true nature of the day of the week effect.
One possible explanation, proposed by Nobel prize winner Merton Miller, is that investors don’t have time to read all the news immediately. So, they do it over the weekend. The first day to respond to negative information is on Mondays. Then, during the week, their advisors give them new positive information, and they start buying on Thursdays and Fridays.
Another famous explanation is given by the distinguished financier Kenneth French, who suggested firms delay bad news for the weekends, so markets react on Mondays.
Correlation of the Errors
Whatever the reason, there is a correlation of the errors when building regressions about stock prices. The first observation, the sixth, the eleventh, and every fifth onwards would be Mondays. The fifth, tenth, and so on would be Fridays. Errors on Mondays would be biased downwards, and errors for Fridays would be biased upwards.
The mathematics of the linear regression does not consider this. It assumes errors should be randomly spread around the regression line.
How to Detect Autocorrelation
A common way is to plot all the residuals on a graph and look for patterns. If you can’t find any, you’re safe.
Another is the Durbin-Watson test, which you can find in the summary table provided by statsmodels. Generally, its value falls between 0 and 4. A value of 2 indicates no autocorrelation, whereas values below 1 or above 3 are a cause for alarm.
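As a sketch, statsmodels also exposes the statistic directly; the data file and column names below are hypothetical time-series returns:

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.stattools import durbin_watson

df = pd.read_csv("daily_returns.csv")                      # hypothetical time-series data
model = sm.OLS(df["stock"], sm.add_constant(df["market"])).fit()

print(durbin_watson(model.resid))   # ~2: no autocorrelation; near 0 or 4: cause for alarm
```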
But, what’s the remedy you may ask? Unfortunately, there is no remedy. As we mentioned before, we cannot relax this OLS assumption. The only thing we can do is avoid using a linear regression in such a setting.
There are other types of regressions that deal with time series data. It is possible to use an autoregressive model, a moving average model, or even an autoregressive moving average model. There’s also an autoregressive integrated moving average model.
Make your choice as you will, but don’t use the linear regression model when error terms are autocorrelated.
OLS Assumption 5: No Multicollinearity
The last OLS assumption is no multicollinearity.
We observe multicollinearity when two or more variables have a high correlation.
Let’s exemplify this point with an equation.
a and b are two variables linked by an exact linear relationship: a can be represented using b, and b can be represented using a. In a model containing both a and b, we would have perfect multicollinearity.
This poses a big problem for our regression model, as the coefficients will be wrongly estimated. The reasoning is that, if a can be represented using b, there is no point in using both; we can just keep one of them.
Providing a Case in Point
Another example would be two variables c and d with a correlation of 90%. If we had a regression model using c and d, we would also have multicollinearity, although not perfect. Here, the assumption is still violated and poses a problem to our model.
A Real-Life Example
Usually, real-life examples are helpful, so let’s provide one.
There are two bars in the neighborhood – Bonkers and the Shakespeare bar. We want to predict the market share of Bonkers. Most people living in the neighborhood drink only beer in the bars. So, a good approximation would be a model with three variables: the price of half a pint of beer at Bonkers, the price of a pint of beer at Bonkers, and the price of a pint of beer at Shakespeare’s.
This should make sense. If one bar raises prices, people would simply switch bars.
So, the price in one bar is a predictor of the market share of the other bar.
Well, what could be the problem? Half a pint of beer at Bonkers costs around 1 dollar, and one pint costs 1.90. Bonkers tries to gain market share by cutting its price to 90 cents. It cannot keep the price of one pint at 1.90, because people would just buy 2 times half a pint for 1 dollar 80 cents. Bonkers management lowers the price of the pint of beer to 1.70.
Running a Regression
Let’s see what happens when we run a regression based on these three variables. Take a look at the p-value for the pint of beer at Bonkers and half a pint at Bonkers. They are insignificant!
This is because the underlying logic behind our model was so rigid! Well, no multicollinearity is an OLS assumption of the calculations behind the regression. The price of half a pint and a full pint at Bonkers definitely move together.
This messed up the calculations of the computer, and it provided us with wrong estimates and wrong p-values.
How to Fix it
There are three types of fixes:
- The first one is to drop one of the two variables.
- The second is to transform them into one variable.
- The third possibility is tricky. If you are super confident in your skills, you can keep them both, while treating them with extreme caution.
The correct approach depends on the research at hand.
Multicollinearity is a big problem, but it is also the easiest to notice. Before creating the regression, find the correlation between each pair of independent variables. After doing that, you will know whether a multicollinearity problem may arise.
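A minimal sketch of that check, assuming a DataFrame that holds only the independent variables (the file and column names are hypothetical):

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

X = pd.read_csv("bar_prices.csv")     # hypothetical regressors: half_pint, pint, shakespeare_pint
print(X.corr())                       # pairwise correlations between the regressors

# Variance inflation factors (computed with an intercept added);
# values well above roughly 5-10 usually signal multicollinearity.
exog = sm.add_constant(X)
for i, name in enumerate(X.columns, start=1):
    print(name, variance_inflation_factor(exog.values, i))
```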
Summary of the 5 OLS Assumptions and Their Fixes
Let’s conclude by going over all OLS assumptions one last time.
- The first OLS assumption is linearity. It basically tells us that a linear regression model is appropriate. There are various fixes when linearity is not present. We can run a non-linear regression or perform a mathematical transformation.
- The second one is no endogeneity. It means that there should be no relationship between the errors and the independent variables. The usual culprit is omitted variable bias, which is not easy to fix, nor even to recognize. The best thing you can do is to develop further expertise in the domain or ask someone to help you with fresh eyes.
- Normality and homoscedasticity are next. Normality suggests that the error term is normally distributed. Homoscedasticity, on the other hand, proposes that the error terms should have equal variance. The way to circumvent heteroscedasticity consists of the following 3 steps: looking for omitted variable bias, removing outliers, and performing a transformation – usually a log transformation works well.
- Another OLS assumption is no autocorrelation. Here, the idea is that errors are assumed to be uncorrelated. The twist is that, although you may spot it by plotting residuals on a graph and looking for patterns, you cannot actually fix it. Your only option is to avoid using a linear regression in that case.
- The last OLS assumption is called no multicollinearity. It means that there shouldn’t be high linear dependence between 2 or more variables. There are 3 common ways to deal with it: you can drop one of the 2 variables or transform them into one variable. You can also keep both of them, but you should be very careful.
The Next Challenge: Representing Categorical Data via Regressions
So, if you understood the whole article, you may be thinking that anything related to linear regressions is a piece of cake. Yes, and no. There are some peculiarities.
Like: how about representing categorical data via regressions? How can it be done? Find the answers to all of those questions in the following tutorial.
Interested in learning more? You can take your skills from good to great with our statistics course!
Next Tutorial: How to Include Dummy Variables into a Regression
Area of a Circle Teacher Resources
Find Area of a Circle educational ideas and activities
A second installment in a series on circles, this production demonstrates the process of finding the area of a circle. As an introductory tool, it is effective. However, it does not explain the reasoning behind the formula because that information will be covered in a later lecture.
There are basically two things you need to know to find the area of a circle. First, you need to use the correct formula. Second, you have to know the value of the radius. If you know these two things then all you have to do is plug the given value into the formula and do the math.
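For example, here is the whole computation in a few lines of Python (the radius value is made up):

```python
import math

radius = 4.0                       # the given radius
area = math.pi * radius ** 2       # A = pi * r^2
print(round(area, 2))              # 50.27
```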
Students investigate how to input data into a TI. In this geometry lesson, students calculate the area of a circle and interpret the area through graphing. They identify quadratic regression as part of the experiment.
When mathematical errors happen, part of the learning is to figure out how they affect the rest of your calculations. The activity has your mathematicians solving for the area of a circular pipe and taking into consideration any errors that may happen with measuring. The problem may be challenging for some learners to do on their own, so a group discussion would be beneficial, as there are multiple areas of measurement error. The answer key includes a detailed commentary that can be used as teacher notes for a lesson and to help guide the discussion.
Use this lesson plan to help your geometers develop informal proofs of the circumference and area of a circle. Working in small groups, students rotate between three stations to complete hands-on activities that illustrate the relationships in the circumference and area formulas. The station activities include dissecting a paper plate, rolling a can to measure its circumference, and an exploration of relationships using an online circle tool applet. A reflection sheet is provided for each learner to record observations and information about each station's activity. The lesson concludes with learners using their notes to individually write an informal argument for the circumference and area of a circle.
Learners discover the area of circles. In this circles lesson, students work to find the circumference and diameter of a circle. Learners compare the relation of the area of a circle to a square.
Pupils develop techniques for estimating the area of a circle and use ideas about area and perimeter to solve practical problems. In this area and perimeter instructional activity, students apply the concepts of perimeter and area to the solution of problems. Pupils apply formulas where appropriate to identify, measure, and describe circles and the relationships of the radius, diameter, circumference, and area.
Students explore the area of a circle. In this area of a circle lesson, students construct a circle and find the length of its radius. Students plot the radius versus the area for circles with varying radius lengths. Students find a quadratic function to model the scatter plot of radius versus area. Students determine the domain and range of their graph.
Young geometers explore the concept of circumference and area of circles. They discuss what information is needed to find circumference and area. The resource employs several instructional methods: Frayer Model for Vocabulary, literature connections, Think-Team-Share, Mix-Freeze-Pair, a game of Red Rover, and more. Several supporting materials are attached.
Students calculate the area of a circle. In this geometry lesson plan, students discuss the area and relationship of a circle to the unit circle. They derive trigonometric values using the unit circle.
In this area of a circle instructional activity, students are given the radius of a circle and the middle line of a triangle and they are to find the area. Students complete 12 problems.
Five problems provide practice for learners to find the area of a circle given the distance of the radius. Answers on the key use pi as a variable and do not include fully computed numeric answers. To reinforce basic multiplication facts for middle schoolers, I'd have them multiply the results completely.
Bring your math class around with this task. Learners simply identify parts of a given circle, compute its radius, and estimate the circumference and area. It is a strong scaffolding exercise in preparation for applying the formulas for the area and circumference of circles.
Students discover the area formula for circles from their knowledge of parallelograms.
Sixth graders discover what circumference is. In this measurement lesson, 6th graders identify the radius and diameter of different circles. Students discover how to find the area of a circle using the radius and diameter.
Fifth graders experience a lesson to investigate the methods for finding the area of a circle. They use the drawing of segments in order to visualize how a circle can be composed of many parts. Then students brainstorm in order to derive the formula.
Farming is full of mathematics, and it provides numerous real-world examples for young mathematicians to study. Here, we look at a cylinder-shaped storage silo that has one flat side. Given certain dimensions, students need to determine the current storage capacity and design a new storage facility to use for an anticipated increase in production. The activity uses knowledge of the Pythagorean Theorem, area of a circle, properties of triangles, understanding of volume, unit analysis, and percentage increase.
Cut up a circle and make a parallelogram! What? No way! Yes way! Watch the instructor illustrate just how to cut up the circle and get that parallelogram to then get the formula for the area of a circle. It really works! Base, circumference, height, radius, pi, put these all together to find the formula of the area of a circle.
Do you know the formula to find the area of a circle? Do you know the value of pi? Do you know what the relationship is between diameter and radius? Yes? Then you can solve this problem. If your scholars don't know this formula then this is a good way to introduce them to all the variables in the formula.
According to the U.S. Global Change Research Program in 2000, the average national temperature has risen by 1 °F and precipitation has increased 5-10%. Although these trends have been more apparent in recent years, the projected warming for the 21st century is significantly higher. The increased temperatures are also very likely to be accompanied by “more extreme precipitation and faster evaporation of water”. Today we see evidence of global warming via the shrinkage of glaciers, thawing of permafrost, earlier melting and later freezing of ice on lakes and rivers, and shifts in plant and animal systems.
Models showing only the temperature fluctuations of the last 150 years or so appear to suggest that the current increase in temperature is due to natural trends. However, data from the last 1,000 years (from tree rings, corals, ice cores, and historical records) show a tremendous spike in temperature, starting around the time of the Industrial Revolution (that is, around the rise of fossil-fuel burning). While all these facts are true, the main evidence linking humans to global warming is in the models.
The exponential rise in surface temperatures is not a natural trend, despite what some models may suggest: of all climate factors, the only way to produce the rate of warming we are seeing now is through unnatural causes. A rise in anthropogenic CO2 (that is, carbon dioxide produced by human activities) over the years is the only plausible explanation for the high concentration of greenhouse gases today. And although we could stop burning fossil fuels today, the carbon dioxide in the atmosphere would not decrease for decades, because the CO2 molecules linger in the atmosphere.
There is growing evidence supporting global warming and its [potential] negative effects on vulnerable human and natural resource systems. Natural systems are vulnerable to climate change, and some systems may be irreversibly damaged. Change in climate in some natural systems, such as coral reefs, tropical forests, wetlands, and polar ecosystems, will lead to their extinction and, ultimately, a loss of biodiversity in the natural world. And this is not due to climate change only; improper land use and pollution also affect these fragile systems.
According to the U.S. National Assessment, alterations in natural systems due to climate change could possibly result in negative consequences for our economy, which, in part, depends on our nation’s bountiful lands, waters, and native plant and animal communities. One system affected by global warming in the U.S. is the coastal wetlands, including the recently devastated Gulf Coast region. The Southeast is home to more than half of the nation’s remaining wetlands. Salt water intrusion due to rising sea levels and increases in violent tropical storms, along with human destruction, are major causes of the loss of wetlands.
These wetlands not only play a vital role in helping protect coastal cities from storm surges, but also provide habitats and nurseries for many fish species. Therefore, agricultural systems such as fisheries, which make up a large part of the region’s economy, are vulnerable to climate change. Also to be affected by global warming are the nation’s glacial and mountain regions (such as Alaska and Colorado). Reduction in snowpack, which affects the timing and flow of water in these regions, due to climate change could easily lead to water conflicts and shortages.
Furthermore, many of the cities’ economies in Colorado (and other states in the Rocky Mountain region) depend entirely on the area’s snowfall to attract tourists. It is estimated that the United States has already used half of its own oil resources. Many of the Middle Eastern countries we buy our oil from will be approaching their peak production soon. It’s not that the world is running out of all oil, but that the world is running out of the cheap oil that we use for fuel today. The discovery of new oil fields and the drilling of reserves would only delay this peak by about 20 years or so.
For the U.S., the total drilling of all Alaskan oil reserves would only sustain our country for a little over one year. Reports that the U.S. has enough oil and coal to last us hundreds of years do not take into account our country’s historical growth rate of 4%. Therefore, Mr. President, there is a rising energy crisis that cannot be ignored. Concerns about the effects of global warming, along with the energy crisis, call for a change in U.S. energy policy. My first suggested plan is to inform the public immediately about global warming and the energy crisis.
Any action by the government to educate the public on how to conserve energy and protect the resources we have would be well worth the effort. I would suggest making resources available for citizens to learn about conserving energy on an individual level. Also, incorporating environmental science as a required high school curriculum class might encourage the younger generation to take better care of our environment. Action needs to start at the local level. I would next suggest placing regulations on the energy efficiency of new buildings, both residential and commercial.
One of the best ways to cut back energy use immediately is to install efficient lighting in buildings. All government buildings should be required to use compact fluorescent or LED lighting systems. Furthermore, all cities should replace their traffic lights and signs with LED lights. Although the initial cost of purchasing and installing these lighting systems is high, they last at least 10 times longer than traditional incandescent bulbs and use approximately 60% less energy.
One helpful statistic to keep in mind: if every household in America replaced just one incandescent light bulb with an Energy Star certified compact fluorescent light bulb, it would save enough energy and reduce enough pollution to be equivalent to taking a large number of cars off the road. Next, I would set emission standards on all existing commercial vehicles. Businesses are not willing to spend the money themselves, however. The government is going to have to spend money in order to save fuel and, in the long run, help the economy. By providing the money and labor to repair and upgrade currently used commercial vehicles, we can make sure the standards are met. And although this would take a certain amount of money to begin with, the money these companies and individuals save by not consuming and wasting as much fuel would go back into the economy, which, in the long run, would pay for itself. Furthermore, the government should mandate efficiency standards on all new motorized vehicles. America won’t quit using oil, but making today’s vehicles more efficient would decrease the wasteful use of oil.
The government could also create initiatives for the public to buy energy-efficient hybrid cars (such as the Toyota Prius). The main reason people today are not buying these hybrid cars is that they are not aware of an energy crisis. My suggested plan of action concerning new cars is this: increase public awareness of the energy crisis, help fund the advertisement of more efficient cars, and set up tax rebates after the purchase of designated efficient cars. With this help from the government, we can get more efficient cars on America’s roads and conserve more energy.
Finally, the government should spend much more money on research toward the development of new energy sources. Although some may believe spending money on new energy sources would be bad for the economy, I think just the opposite. Lamar, Colorado, is a perfect example of how incorporating a wind farm as a source of energy for a small town helped boost the economy in the area. During the construction of a 162-megawatt wind farm in 2003, nearly 400 workers filled the town’s restaurants and motels, providing a substantial economic boost to the town of Lamar and its business community.
The revenues from this project increased the county tax base 26 percent in 2004, providing funding for schools, hospitals, social services, and other functions. Also, the project left 14 permanent, full-time jobs in the community and many other part-time jobs. Therefore, I would certainly suggest putting money into renewable energy efforts such as these. I would also suggest the development of a few more nuclear power plants. Nuclear power was once a matter of high public concern (and perhaps still is). However, the technology we have nowadays for the construction and management of power plants far surpasses that of the past.
Table of Contents
- 1 What are the two units for measuring angles?
- 2 What are the three basic units of measure for angles and how they are related for each other?
- 3 What are measure angles?
- 4 Why do we measure angles?
- 5 What are the three measurements of an angle?
- 6 Which is the standard unit of measuring temperature?
- 7 What is the measure of right angle?
- 8 What is measuring angle?
What are the two units for measuring angles?
Throughout history, angles have been measured in many different units. These are known as angular units, with the most contemporary units being the degree ( ° ), the radian (rad), and the gradian (grad), though many others have been used throughout history.
There are three units of measure for angles: revolutions, degrees, and radians. In trigonometry, radians are used most often, but it is important to be able to convert between any of the three units.
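A short sketch of the conversions (the starting value of 90 degrees is arbitrary):

```python
import math

degrees = 90.0
radians = math.radians(degrees)        # 1.5707963... (pi / 2)
revolutions = degrees / 360.0          # 0.25 of a full turn

print(radians, revolutions)
print(math.degrees(1))                 # one radian is about 57.29578 degrees
```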
What’s a unit angle?
Unit, Angle. A factor or datum in angular velocity, q. v. It is the angle subtended by a portion of the circumference equal in length to the radius of the circle. It is equal very nearly to 57.29578° or 57° 17′ 44.8″.
What are measure angles?
In geometry, an angle measure can be defined as the measure of the angle formed by the two rays or arms at a common vertex. Angles are measured in degrees ( °), using a protractor.
Why do we measure angles?
In the field of astronomy, the ability to measure angles accurately and precisely enables us to calculate the position and relative movement of the stars and galaxies in relation to each other, to determine how far distant they are from us, and even to estimate their relative size.
What type of angle is a 31 angle?
In the section above, we read that an angle that measures less than 90 degrees, i.e. less than a right angle, is an acute angle. Examples of acute angle degrees are 63°, 31°, 44°, 68°, 83°, and 85°. Hence, an acute angle lies within the range from 0° to less than 90°, and a 31° angle is an acute angle.
What are the three measurements of an angle?
As noted above, angles can be measured in three units: revolutions, degrees, and radians.
Which is the standard unit of measuring temperature?
The kelvin is the SI unit of thermodynamic temperature, and one of the seven SI base units. Unusually in the SI, we also define another unit of temperature, called the degree Celsius (°C). Temperature in degrees Celsius is obtained by subtracting 273.15 from the numerical value of the temperature expressed in kelvin.
What are the different units of angle?
There are three units of measure for angles: revolutions, degrees, and radians.
What is the measure of right angle?
A right angle is 90 degrees. An acute angle is less than 90 degrees.
In physics, spacetime is any mathematical model that combines space and time into a single interwoven continuum. Since 300 BCE, the spacetime of our universe has historically been interpreted from a Euclidean space perspective, which regards space as consisting of three dimensions, and time as consisting of one dimension, the "fourth dimension". By combining space and time into a single manifold called Minkowski space in 1908, physicists have significantly simplified a large number of physical theories, as well as described in a more uniform way the workings of the universe at both the supergalactic and subatomic levels.
- 1 Explanation
- 2 Spacetime in literature
- 3 Basic concepts
- 4 Mathematics of spacetimes
- 5 Spacetime in special relativity
- 6 Spacetime in general relativity
- 7 Quantized spacetime
- 8 See also
- 9 References
- 10 Further reading
- 11 External links
In non-relativistic classical mechanics, the use of Euclidean space instead of spacetime is appropriate, because time is treated as universal with a constant rate of passage that is independent of the state of motion of an observer. In relativistic contexts, time cannot be separated from the three dimensions of space, because the observed rate at which time passes for an object depends on the object's velocity relative to the observer and also on the strength of gravitational fields, which can slow the passage of time for an object as seen by an observer outside the field.
In cosmology, the concept of spacetime combines space and time to a single abstract universe. Mathematically it is a manifold whose points correspond to physical events. In a local coordinate system whose domain is an open set of the spacetime manifold, three spacelike coordinates and one timelike coordinate typically emerge. Dimensions are independent components of a coordinate grid needed to locate a point in a certain defined "space". For example, on the globe the latitude and longitude are two independent coordinates which together uniquely determine a location. In spacetime, a coordinate grid that spans the 3+1 dimensions locates events (rather than just points in space), i.e., time is added as another dimension to the coordinate grid. This way the coordinates specify where and when events occur. However, the unified nature of spacetime and the freedom of coordinate choice it allows, imply that to express the temporal coordinate in one coordinate system requires both temporal and spatial coordinates in another coordinate system. Unlike in normal spatial coordinates, there are still restrictions for how measurements can be made spatially and temporally (see Spacetime intervals). These restrictions correspond roughly to a particular mathematical model which differs from Euclidean space in its manifest symmetry.
Until the beginning of the 20th century, time was believed to be independent of motion, progressing at a fixed rate in all reference frames; however, following its prediction by special relativity, later experiments confirmed that time slows at higher speeds of the reference frame relative to another reference frame. Such slowing, called time dilation, is explained in special relativity theory. Many experiments have confirmed time dilation, such as the relativistic decay of muons from cosmic ray showers and the slowing of atomic clocks aboard a Space Shuttle relative to synchronized Earth-bound inertial clocks. The duration of time can therefore vary according to events and reference frames.
When dimensions are understood as mere components of the grid system, rather than physical attributes of space, it is easier to understand the alternate dimensional views as being simply the result of coordinate transformations.
The term spacetime has taken on a generalized meaning beyond treating spacetime events with the normal 3+1 dimensions. It is really the combination of space and time. Other proposed spacetime theories include additional dimensions—normally spatial but there exist some speculative theories that include additional temporal dimensions and even some that include dimensions that are neither temporal nor spatial (e.g., superspace). How many dimensions are needed to describe the universe is still an open question. Speculative theories such as string theory predict 10 or 26 dimensions (with M-theory predicting 11 dimensions: 10 spatial and 1 temporal), but the existence of more than four dimensions would only appear to make a difference at the subatomic level.
Spacetime in literature
The idea of a unified spacetime is stated by Edgar Allan Poe in his essay on cosmology titled Eureka (1848) that "Space and duration are one". In 1895, in his novel The Time Machine, H. G. Wells wrote, "There is no difference between time and any of the three dimensions of space except that our consciousness moves along it", and that "any real body must have extension in four directions: it must have Length, Breadth, Thickness, and Duration".
Marcel Proust, in his novel Swann's Way (published 1913), describes the village church of his childhood's Combray as "a building which occupied, so to speak, four dimensions of space—the name of the fourth being Time".
Another early venture was by Joseph Louis Lagrange in his Theory of Analytic Functions (1797, 1813). He said, "One may view mechanics as a geometry of four dimensions, and mechanical analysis as an extension of geometric analysis".
The ancient idea of the cosmos gradually was described mathematically with differential equations, differential geometry, and abstract algebra. These mathematical articulations blossomed in the nineteenth century as electrical technology stimulated men like Michael Faraday and James Clerk Maxwell to describe the reciprocal relations of electric and magnetic fields. Daniel Siegel phrased Maxwell's role in relativity as follows:
[...] the idea of the propagation of forces at the velocity of light through the electromagnetic field as described by Maxwell's equations—rather than instantaneously at a distance—formed the necessary basis for relativity theory.
[Maxwell] was not able to create the theory that he envisaged except by giving up the use of any model, and by extending by means of analogy the abstract system of electrodynamics to displacement currents.
In Siegel's estimation, "this very abstract view of the electromagnetic fields, involving no visualizable picture of what is going on out there in the field, is Maxwell's legacy." Describing the behaviour of electric fields and magnetic fields led Maxwell to view the combination as an electromagnetic field. These fields have a value at every point of spacetime. It is the intermingling of electric and magnetic manifestations, described by Maxwell's equations, that give spacetime its structure. In particular, the rate of motion of an observer determines the electric and magnetic profiles of the electromagnetic field. The propagation of the field is determined by the electromagnetic wave equation, which requires spacetime for description.
Spacetime was described as an affine space with quadratic form in Minkowski space of 1908. In his 1914 textbook The Theory of Relativity, Ludwik Silberstein used biquaternions to represent events in Minkowski space. He also exhibited the Lorentz transformations between observers of differing velocities as biquaternion mappings. Biquaternions were described in 1853 by W. R. Hamilton, so while the physical interpretation was new, the mathematics was well known in English literature, making relativity an instance of applied mathematics.
The first inkling of general relativity in spacetime was articulated by W. K. Clifford. Description of the effect of gravitation on space and time was found to be most easily visualized as a "warp" or stretching in the geometrical fabric of space and time, in a smooth and continuous way that changed smoothly from point-to-point along the spacetime fabric. In 1947 James Jeans provided a concise summary of the development of spacetime theory in his book The Growth of Physical Science.
The basic elements of spacetime are events. In any given spacetime, an event is a unique position at a unique time. Because events are spacetime points, an example of an event in classical relativistic physics is (x, y, z, t), the location of an elementary (point-like) particle at a particular time. A spacetime itself can be viewed as the union of all events in the same way that a line is the union of all of its points, formally organized into a manifold, a space which can be described at small scales using coordinate systems.
Spacetime is independent of any observer. However, in describing physical phenomena (which occur at certain moments of time in a given region of space), each observer chooses a convenient metrical coordinate system. Events are specified by four real numbers in any such coordinate system. The trajectories of elementary (point-like) particles through space and time are thus a continuum of events called the world line of the particle. Extended or composite objects (consisting of many elementary particles) are thus a union of many world lines twisted together by virtue of their interactions through spacetime into a "world-braid".
However, in physics, it is common to treat an extended object as a "particle" or "field" with its own unique (e.g., center of mass) position at any given time, so that the world line of a particle or light beam is the path that this particle or beam takes in the spacetime and represents the history of the particle or beam. The world line of the orbit of the Earth (in such a description) is depicted in two spatial dimensions x and y (the plane of the Earth's orbit) and a time dimension orthogonal to x and y. The orbit of the Earth is an ellipse in space alone, but its world line is a helix in spacetime.
The unification of space and time is exemplified by the common practice of selecting a metric (the measure that specifies the interval between two events in spacetime) such that all four dimensions are measured in terms of units of distance: representing an event as (ct, x, y, z) (in the Lorentz metric) or (x, y, z, ict) (in the original Minkowski metric), where c is the speed of light. The metrical descriptions of Minkowski space and spacelike, lightlike, and timelike intervals given below follow this convention, as do the conventional formulations of the Lorentz transformation.
Spacetime intervals in flat space
In a Euclidean space, the separation between two points is measured by the distance between the two points. The distance is purely spatial, and is always positive. In spacetime, the displacement four-vector ΔR is given by the space displacement vector Δr and the time difference Δt between the events. The spacetime interval, also called the invariant interval, between the two events, s², is defined as:
- s² = Δr² − c²Δt²   (spacetime interval),
where c is the speed of light. The choice of signs for s² above follows the space-like convention (−+++). Spacetime intervals may be classified into three distinct types, based on whether the temporal separation (c²Δt²) is greater than, equal to, or smaller than the spatial separation (Δr²), corresponding respectively to time-like, light-like, or space-like separated intervals.
Certain types of world lines are called geodesics of the spacetime – straight lines in the case of Minkowski space and their closest equivalent in the curved spacetime of general relativity. In the case of purely time-like paths, geodesics are (locally) the paths of greatest separation (spacetime interval) as measured along the path between two events, whereas in Euclidean space and Riemannian manifolds, geodesics are paths of shortest distance between two points. The concept of geodesics becomes central in general relativity, since geodesic motion may be thought of as "pure motion" (inertial motion) in spacetime, that is, free from any external influences.
For two events separated by a time-like interval, enough time passes between them that there could be a cause–effect relationship between the two events. For a particle traveling through space at less than the speed of light, any two events which occur to or by the particle must be separated by a time-like interval. Event pairs with time-like separation define a negative spacetime interval (s² < 0) and may be said to occur in each other's future or past. There exists a reference frame such that some pairs of events are observed to occur in the same spatial location, but there is no reference frame in which the two events can occur at the same time.
The measure of a time-like spacetime interval is described by the proper time interval, Δτ:
- Δτ = √(−s²) / c = √(Δt² − Δr²/c²)   (proper time interval).
The proper time interval would be measured by an observer with a clock traveling between the two events in an inertial reference frame, when the observer's path intersects each event as that event occurs. (The proper time interval defines a real number, since the interior of the square root is positive.)
In a light-like interval, the spatial distance between two events is exactly balanced by the time between the two events. The events define a spacetime interval of zero (s² = 0). Light-like intervals are also known as "null" intervals.
Events which occur to or are initiated by a photon along its path (i.e., while traveling at c, the speed of light) all have light-like separation. Given one event, all those events which follow at light-like intervals define the propagation of a light cone, and all the events which preceded from a light-like interval define a second (graphically inverted, which is to say "pastward") light cone.
When a space-like interval separates two events, not enough time passes between their occurrences for there to exist a causal relationship crossing the spatial distance between the two events at the speed of light or slower. Generally, the events are considered not to occur in each other's future or past. There exists a reference frame such that the two events are observed to occur at the same time, but there is no reference frame in which the two events can occur in the same spatial location.
For these space-like event pairs with a positive spacetime interval (s² > 0), the measurement of space-like separation is the proper distance, Δσ:
- Δσ = √(s²) = √(Δr² − c²Δt²)   (proper distance).
Like the proper time of time-like intervals, the proper distance of space-like spacetime intervals is a real number value.
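As an illustrative sketch (not from the source text), these definitions translate directly into a few lines of Python; the event separations used below are made-up values, and the sign convention is the space-like one (−+++) used above.

```python
import math

C = 299_792_458.0   # speed of light in m/s

def interval_squared(dt, dx, dy, dz):
    """s^2 = dr^2 - (c*dt)^2, space-like convention (-+++)."""
    dr2 = dx**2 + dy**2 + dz**2
    return dr2 - (C * dt) ** 2

def classify(dt, dx, dy, dz):
    s2 = interval_squared(dt, dx, dy, dz)
    if s2 < 0:
        # time-like: the second value is the proper time a moving clock would measure
        return "time-like", math.sqrt(-s2) / C
    if s2 > 0:
        # space-like: the second value is the proper distance
        return "space-like", math.sqrt(s2)
    return "light-like", 0.0

print(classify(dt=1.0, dx=1_000.0, dy=0.0, dz=0.0))     # time-like (1 s apart, 1 km apart)
print(classify(dt=1e-6, dx=10_000.0, dy=0.0, dz=0.0))   # space-like (1 microsecond, 10 km apart)
```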
Interval as area
The interval has been presented as the area of an oriented rectangle formed by two events and isotropic lines through them. Time-like or space-like separations correspond to oppositely oriented rectangles, one type considered to have rectangles of negative area. The case of two events separated by light corresponds to the rectangle degenerating to the segment between the events and zero area. The transformations leaving interval-length invariant are the area-preserving squeeze mappings.
The parameters traditionally used rely on quadrature of the hyperbola, which is the natural logarithm. This transcendental function is essential in mathematical analysis, as its inverse unites circular functions and hyperbolic functions: the exponential function e^t, for t a real number, used in the hyperbola (e^t, e^-t), generates hyperbolic sectors and the hyperbolic angle parameter. The functions cosh and sinh, used with rapidity as the hyperbolic angle, provide the common representation of the squeeze mapping.
Mathematics of spacetimes
For physical reasons, a spacetime continuum is mathematically defined as a four-dimensional, smooth, connected Lorentzian manifold (M, g). This means the smooth Lorentz metric g has a Lorentzian signature, with one time-like and three space-like directions. The metric determines the geometry of spacetime, as well as determining the geodesics of particles and light beams. About each point (event) on this manifold, coordinate charts are used to represent observers in reference frames. Usually, Cartesian coordinates (x, y, z, t) are used. Moreover, for simplicity's sake, units of measurement are usually chosen such that the speed of light c is equal to 1.
A reference frame (observer) can be identified with one of these coordinate charts; any such observer can describe any event p. Another reference frame may be identified by a second coordinate chart about p. Two observers (one in each reference frame) may describe the same event p but obtain different descriptions.
Usually, many overlapping coordinate charts are needed to cover a manifold. Given two coordinate charts, one containing p (representing an observer) and another containing p (representing another observer), the intersection of the charts represents the region of spacetime in which both observers can measure physical quantities and hence compare results. The relation between the two sets of measurements is given by a non-singular coordinate transformation on this intersection. The idea of coordinate charts as local observers who can perform measurements in their vicinity also makes good physical sense, as this is how one actually collects physical data—locally.
For example, two observers, one of whom is on Earth and the other on a fast rocket to Jupiter, may observe a comet crashing into Jupiter (this is the event p). In general, they will disagree about the exact location and timing of this impact, i.e., they will have different 4-tuples (as they are using different coordinate systems). Although their kinematic descriptions will differ, dynamical (physical) laws, such as momentum conservation and the first law of thermodynamics, will still hold. In fact, relativity theory requires more than this in the sense that it stipulates these (and all other physical) laws must take the same form in all coordinate systems. This introduces tensors into relativity, by which all physical quantities are represented.
Geodesics are said to be time-like, null, or space-like if the tangent vector to one point of the geodesic is of this nature. Paths of particles and light beams in spacetime are represented by time-like and null (light-like) geodesics, respectively.
The assumptions contained in the definition of a spacetime are usually justified by the following considerations.
The connectedness assumption serves two main purposes. First, different observers making measurements (represented by coordinate charts) should be able to compare their observations on the non-empty intersection of the charts. If the connectedness assumption were dropped, this would not be possible. Second, for a manifold, the properties of connectedness and path-connectedness are equivalent, and one requires the existence of paths (in particular, geodesics) in the spacetime to represent the motion of particles and radiation.
Every spacetime is paracompact. This property, allied with the smoothness of the spacetime, gives rise to a smooth linear connection, an important structure in general relativity. Some important theorems on constructing spacetimes from compact and non-compact manifolds include the following:
- A compact manifold can be turned into a spacetime if, and only if, its Euler characteristic is 0. (Proof idea: the existence of a Lorentzian metric is shown to be equivalent to the existence of a nonvanishing vector field.)
- Any non-compact 4-manifold can be turned into a spacetime.
Often in relativity, spacetimes that have some form of symmetry are studied. As well as helping to classify spacetimes, these symmetries usually serve as a simplifying assumption in specialized work. Some of the most popular ones include:
The causal structure of a spacetime describes causal relationships between pairs of points in the spacetime based on the existence of certain types of curves joining the points.
Spacetime in special relativity
The geometry of spacetime in special relativity is described by the Minkowski metric on R4. This spacetime is called Minkowski space. The Minkowski metric is usually denoted by η and can be written as a four-by-four matrix:
- η = diag(1, −1, −1, −1),
where the Landau–Lifshitz time-like convention is being used. A basic assumption of relativity is that coordinate transformations must leave spacetime intervals invariant. Intervals are invariant under Lorentz transformations. This invariance property leads to the use of four-vectors (and other tensors) in describing physics.
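As a numerical illustration (not from the source text), one can check that a Lorentz boost L preserves this metric in the sense that L-transpose times η times L equals η; the boost velocity below is an arbitrary made-up value.

```python
import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])      # Minkowski metric, time-like convention

beta = 0.6                                  # boost velocity as a fraction of c (made up)
gamma = 1.0 / np.sqrt(1.0 - beta ** 2)
L = np.array([[ gamma,        -gamma * beta, 0.0, 0.0],
              [-gamma * beta,  gamma,        0.0, 0.0],
              [ 0.0,           0.0,          1.0, 0.0],
              [ 0.0,           0.0,          0.0, 1.0]])

print(np.allclose(L.T @ eta @ L, eta))      # True: spacetime intervals are invariant
```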
Strictly speaking, one can also consider events in Newtonian physics as a single spacetime. This is Galilean–Newtonian relativity, and the coordinate systems are related by Galilean transformations. However, since these preserve spatial and temporal distances independently, such a spacetime can always be decomposed into spatial coordinates plus temporal coordinates, which is not possible for general spacetimes.
Spacetime in general relativity
In general relativity, it is assumed that spacetime is curved by the presence of matter (energy), this curvature being represented by the Riemann tensor. In special relativity, the Riemann tensor is identically zero, and so this concept of "non-curvedness" is sometimes expressed by the statement Minkowski spacetime is flat.
The earlier discussed notions of time-like, light-like and space-like intervals in special relativity can similarly be used to classify one-dimensional curves through curved spacetime. A time-like curve can be understood as one where the interval between any two infinitesimally close events on the curve is time-like, and likewise for light-like and space-like curves. Technically the three types of curves are usually defined in terms of whether the tangent vector at each point on the curve is time-like, light-like or space-like. The world line of a slower-than-light object will always be a time-like curve, the world line of a massless particle such as a photon will be a light-like curve, and a space-like curve could be the world line of a hypothetical tachyon. In the local neighborhood of any event, time-like curves that pass through the event will remain inside that event's past and future light cones, light-like curves that pass through the event will be on the surface of the light cones, and space-like curves that pass through the event will be outside the light cones. One can also define the notion of a three-dimensional "spacelike hypersurface", a continuous three-dimensional "slice" through the four-dimensional spacetime with the property that every curve that is contained entirely within this hypersurface is a space-like curve.
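The tangent-vector classification can likewise be illustrated with a short sketch (again an illustrative addition, not from the article, using the same flat-space metric and c = 1 as above): the sign of the Minkowski norm of a curve's tangent vector decides whether the curve is time-like, light-like, or space-like at that point.

```python
# Classify a tangent vector by the sign of its Minkowski norm,
# assuming the (+, -, -, -) convention and c = 1.
import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])

def classify(tangent, tol=1e-12):
    norm = tangent @ eta @ tangent
    if norm > tol:
        return "time-like"        # worldline of a slower-than-light object
    if norm < -tol:
        return "space-like"       # e.g. a hypothetical tachyon worldline
    return "light-like"           # e.g. a photon

print(classify(np.array([1.0, 0.3, 0.0, 0.0])))   # time-like
print(classify(np.array([1.0, 1.0, 0.0, 0.0])))   # light-like
print(classify(np.array([1.0, 2.0, 0.0, 0.0])))   # space-like
```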
Many spacetime continua have physical interpretations which most physicists would consider bizarre or unsettling. For example, a compact spacetime has closed timelike curves, which violate our usual ideas of causality (that is, future events could affect past ones). For this reason, mathematical physicists usually consider only restricted subsets of all the possible spacetimes. One way to do this is to study "realistic" solutions of the equations of general relativity. Another way is to add some additional "physically reasonable" but still fairly general geometric restrictions and try to prove interesting things about the resulting spacetimes. The latter approach has led to some important results, most notably the Penrose–Hawking singularity theorems.
In general relativity, spacetime is assumed to be smooth and continuous—and not just in the mathematical sense. In the theory of quantum mechanics, there is an inherent discreteness present in physics. In attempting to reconcile these two theories, it is sometimes postulated that spacetime should be quantized at the very smallest scales. Current theory is focused on the nature of spacetime at the Planck scale. Causal sets, loop quantum gravity, string theory, causal dynamical triangulation, and black hole thermodynamics all predict a quantized spacetime with agreement on the order of magnitude. Loop quantum gravity makes precise predictions about the geometry of spacetime at the Planck scale.
Spin networks provide a language to describe the quantum geometry of space. Spin foams do the same job for spacetime. A spin network is a one-dimensional graph, together with labels on its vertices and edges, that encodes aspects of a spatial geometry.
- Anthropic principle § Applications of the principle § Spacetime
- Basic introduction to the mathematics of curved spacetime
- Global spacetime structure
- Hole argument
- List of mathematical topics in relativity
- Local spacetime structure
- Lorentz invariance
- Mathematics of general relativity
- Metric space
- Philosophy of space and time
- Relativity of simultaneity
- Strip photography
- World manifold
- Ashby, Neil (2003). "Relativity in the Global Positioning System" (PDF). Living Reviews in Relativity. 6: 16. Bibcode:2003LRR.....6....1A. doi:10.12942/lrr-2003-1.
- Kopeikin, Sergei; Efroimsky, Michael; Kaplan, George (2011). Relativistic Celestial Mechanics of the Solar System. John Wiley & Sons. p. 157. ISBN 3527634576. Retrieved 2016-02-28. Extract of page 157
- Atuq Eusebio Manga Qespi, Instituto de lingüística y Cultura Amerindia de la Universidad de Valencia. Pacha: un concepto andino de espacio y tiempo. Revísta española de Antropología Americana, 24, p. 155–189. Edit. Complutense, Madrid. 1994
- Paul Richard Steele, Catherine J. Allen, Handbook of Inca mythology, p. 86, (ISBN 1-57607-354-8)
- Shirley Ardener, University of Oxford, Women and space: ground rules and social maps, p. 36 (ISBN 0-85496-728-1)
- Jean d'Alembert (1754) Dimension from ARTFL Encyclopedie project
- R.C. Archibald (1914) Time as a fourth dimension Bulletin of the American Mathematical Society 20:409.
- Daniel M. Siegel (2014) "Maxwell's contributions to electricity and magnetism", chapter 10 in James Clerk Maxwell: Perspectives on his Life and Work, Raymond Flood, Mark McCartney, Andrew Whitaker, editors, Oxford University Press ISBN 978-0-19-966437-5
- Pierre Duhem (1954) The Aim and Structure of Physical Theory, page 98, Princeton University Press
- Siegel 2014 p 191
- Minkowski, Hermann (1909), "Raum und Zeit", Physikalische Zeitschrift, 10: 75–88
- Various English translations on Wikisource: Space and Time.
- James Jeans (1947) The Growth of Physical Science, "Space-time", pp. 205–301, link from Internet Archive
- Matolcsi, Tamás (1994). Spacetime Without Reference Frames. Budapest: Akadémiai Kiadó.
- Ellis, G. F. R.; Williams, Ruth M. (2000). Flat and curved space–times (2nd ed.). Oxford University Press. p. 9. ISBN 0-19-850657-0.
- Petkov, Vesselin (2010). Minkowski Spacetime: A Hundred Years Later. Springer. p. 70. ISBN 90-481-3474-9. Retrieved 2016-02-28., Section 3.4, p. 70
- Note that the term spacetime interval is applied by several authors to the quantity s2 and not to s. The reason that the quantity s2 is used and not s is that s2 can be positive, zero or negative, and is a more generally convenient and useful quantity than the Minkowski norm with a timelike/null/spacelike distinguisher: the pair (√|s2|, sgn(s2)). Despite the notation, it should not be regarded as the square of a number, but as a symbol. The cost for this convenience is that this "interval" is quadratic in linear separation along a straight line.
- More generally the spacetime interval in flat space can be written as s2 = gμν Δxμ Δxν, with metric tensor g independent of spacetime position.
- This characterization is not universal: both the arcs between two points of a great circle on a sphere are geodesics.
- Berry, Michael V. (1989). Principles of Cosmology and Gravitation. CRC Press. p. 58. ISBN 0-85274-037-9. Retrieved 2016-02-28. Extract of page 58, caption of Fig. 25
- I. M. Yaglom (1979) A Simple Non-Euclidean Geometry and its Physical Basis, page 178, Springer, ISBN 0387-90332-1, MR 520230
- Geroch, Robert; Horowitz, Gary T. (1979). "Chapter 5. Global structure of spacetimes". In Hawking, S.W.; Israel, W. General Relativity: An Einstein Centenary Survey. Cambridge University Press. p. 219. ISBN 0521299284.
- See "Quantum Spacetime and the Problem of Time in Quantum Gravity" by Leszek M. Sokolowski, where on this page he writes "Each of these hypersurfaces is spacelike, in the sense that every curve, which entirely lies on one of such hypersurfaces, is a spacelike curve." More commonly a space-like hypersurface is defined technically as a surface such that the normal vector at every point is time-like, but the definition above may be somewhat more intuitive.
- Barrow, John D.; Tipler, Frank J. (1988). The Anthropic Cosmological Principle. Oxford University Press. ISBN 978-0-19-282147-8. LCCN 87028148.
- Ehrenfest, Paul (1920) "How do the fundamental laws of physics make manifest that Space has 3 dimensions?" Annalen der Physik 366: 440.
- George F. Ellis and Ruth M. Williams (1992) Flat and curved space–times. Oxford Univ. Press. ISBN 0-19-851164-7
- Isenberg, J. A. (1981). "Wheeler–Einstein–Mach spacetimes". Phys. Rev. D. 24 (2): 251–256. Bibcode:1981PhRvD..24..251I. doi:10.1103/PhysRevD.24.251.
- Kant, Immanuel (1929) "Thoughts on the true estimation of living forces" in J. Handyside, trans., Kant's Inaugural Dissertation and Early Writings on Space. Univ. of Chicago Press.
- Lorentz, H. A., Einstein, Albert, Minkowski, Hermann, and Weyl, Hermann (1952) The Principle of Relativity: A Collection of Original Memoirs. Dover.
- Lucas, John Randolph (1973) A Treatise on Time and Space. London: Methuen.
- Penrose, Roger (2004). The Road to Reality. Oxford: Oxford University Press. ISBN 0-679-45443-8. Chpts. 17–18.
- Poe, Edgar A. (1848). Eureka; An Essay on the Material and Spiritual Universe. Hesperus Press Limited. ISBN 1-84391-009-8.
- Robb, A. A. (1936). Geometry of Time and Space. University Press.
- Erwin Schrödinger (1950) Space–time structure. Cambridge Univ. Press.
- Schutz, J. W. (1997). Independent axioms for Minkowski Space–time. Addison-Wesley Longman. ISBN 0-582-31760-6.
- Tangherlini, F. R. (1963). "Schwarzschild Field in n Dimensions and the Dimensionality of Space Problem". Nuovo Cimento. 14 (27): 636.
- Taylor, E. F.; Wheeler, John A. (1963). Spacetime Physics. W. H. Freeman. ISBN 0-7167-2327-1.
- Wells, H.G. (2004). The Time Machine. New York: Pocket Books. ISBN 0-671-57554-6. (pp. 5–6)
Tutor profile: Jenna F.
What is an asymptote?
An asymptote is best explained as a line that a function approaches but never reaches. The best example is to think about a water bottle and a group of people stranded in the desert. If there are only 2 stranded people, then each person can get one half of the bottle, or 1 out of 2 equal parts of the bottle. However, if there are 50 stranded people, then each person would get one-fiftieth of the bottle. That is such a small amount of water, but for people stranded in a desert, it is still something. So as the number of people increases, the amount each one of them gets becomes infinitely smaller, until you reach a point where the amount each person gets, although not quite nothing, is basically nothing. This is what an asymptote represents: it shows where an equation heads toward a value forever without ever actually reaching it.
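To make the water-bottle idea concrete (this snippet is an illustrative addition, not part of the original answer), the function f(n) = 1/n shrinks toward 0 as n grows but never reaches it, so the line y = 0 is a horizontal asymptote.

```python
# Each person's share of one bottle, 1/n, gets closer and closer to 0
# as the number of people n grows, but it never actually equals 0.
for n in [2, 50, 1_000, 1_000_000]:
    print(n, 1 / n)
# 2 0.5
# 50 0.02
# 1000 0.001
# 1000000 1e-06
```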
Sally is playing outside in the evening. She notices that her shadow is about 10 feet long. If Sally is 5 feet tall, what is the measure of the angle formed at the tip of her shadow, between the ground and the line from the shadow's tip to her head?
We will assume that Sally is standing perfectly straight. This lets us represent the situation with a right triangle. The angle in question is opposite the side representing her height and adjacent to the side representing her shadow. This means we can set tan θ (θ representing the angle we care about) equal to the opposite side length divided by the adjacent side length. To solve for the angle, we find arctan(5/10), which equals about 26.56505118, or approximately 27 degrees.
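For anyone who wants to verify the arithmetic, here is a quick check (an illustrative addition using Python's standard math module, not part of the original answer):

```python
# Check: the angle whose tangent is height/shadow, converted to degrees.
import math

height, shadow = 5, 10                    # feet
angle = math.degrees(math.atan(height / shadow))
print(round(angle, 8))                    # 26.56505118, roughly 27 degrees
```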
What are the steps to solve a related rates question?
1. Draw a picture based on what they tell you. Drawing a picture is very beneficial for understanding what the question is asking. Creating a picture while reading the question helps you take words on a screen or piece of paper and imagine the real-life situation they are asking about.
2. Organize the given information. I tend to organize it in a little homemade 2x3 table, but whatever works best for you is all that is needed. Given information includes not only the things they tell you, but also the things they are asking for. It is very important to identify what the question is asking you to find; if you can't see the end of the path before you start, it becomes much harder to get there.
3. Find an equation that represents a relationship in the problem. This can be anything from how the radius and height of a cone are related to using the Pythagorean theorem. If the question asks about surface area, you need to write an equation that represents the surface area using the given information.
4. Take the derivative of the equation. I include this as its own step because students usually get here and then forget what to do next. This is a calculus problem after all, and much of the time, if you can't figure out your next step once you already have an equation, it's time to take the derivative.
5. Using the derivative as a tool, see if you can plug things in to find the information you don't already know. Doing this helps if you're not sure what order to do things in. If you take the derivative of the equation you found and discover that you need the rate of a different variable, you can now go find that and are one step closer to the answer.
6. Repeat steps 3-5 until you can solve for what the question is asking for. Depending on the problem, you might only need to do steps 3-5 once, but more than likely you will have to find multiple derivatives to solve the question. (A short worked sketch of steps 3-5 is shown below.) Done! Don't forget units :)
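Here is a short, hypothetical worked example of steps 3-5 (the ripple problem and the use of the sympy library are illustrative choices, not taken from the text): a circular ripple's radius grows at 3 ft/s, and we want to know how fast the area grows when the radius is 10 ft.

```python
# Hypothetical related-rates example worked with sympy.
import sympy as sp

t = sp.symbols('t')
r = sp.Function('r')(t)          # the radius is a function of time

A = sp.pi * r**2                 # step 3: relate the quantities (area of a circle)
dA_dt = sp.diff(A, t)            # step 4: differentiate with respect to time

# step 5: plug in what we know, dr/dt = 3 ft/s and r = 10 ft
answer = dA_dt.subs(sp.Derivative(r, t), 3).subs(r, 10)
print(answer)                    # 60*pi  (square feet per second)
```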
Function Notation Teacher Resources
Find Function Notation educational ideas and activities
A simple definition of f(x), function notation.
Your young algebra engineers brainstorm a list of machines that are then related to algebra functions to begin their exploration of the mighty invention called function notation. A discussion follows using a box to represent the machine (which is named F) that takes in the raw material and alters it. The young engineers are then catapulted into an activity to soar higher in their understanding of a function. When they land, they are able to model a situation using function notation, evaluate function values given domain inputs, interpret the meaning of a function value in context, and leap tall buildings in a single bound.
Show learners that function notation and multiplication notation are not the same. In the example, Katie is given a function, C(x), which is the cost of producing x amount of DVDs. Ask learners if Katie can divide the function notation, C(x), by x when finding the average.
Burning calories is a function of the time an athlete runs, and modeling this scenario with function notation will make your algebra learners mathematically healthier. Math athletes begin their trek by determining a definition for function by analyzing a functions vs. non-functions worksheet with tables and graphs. Developing further, an all-class discussion explores a real-life example of functions using, ironically, a vending machine scenario. Athletes then use the included video on calories to practice writing in function notation and computing the value of a function given various inputs.
Using a TI-Nspire calculator, learners will work to better understand function notation and input/output functions. They write equations using function symbols, identify what makes an equation a function, and graph lines in order to classify them as functions or not.
Young scholars differentiate between functions and relations by analyzing graphs and data to determine whether a set of coordinate pairs represents a function. They rewrite functions using the correct notation. This lesson is broken down into timed segments, which makes for easy planning.
A simple definition of f(x). Function notation.
Tables, functions, and stories with real-life scenarios are used in this cooperative learning activity. Young algebra engineers are catapulted into the activity "Flying T-Shirts" to soar higher in their understanding of a function. When they land, they are able to model a situation using function notation, evaluate function values given domain inputs, interpret the meaning of a function value in context, and leap tall buildings in a single bound.
Students solve quadratic equations by graphing. In this algebra lesson, students apply the correct function notation. They solve parabolas both by graphing and by using algebraic expressions and equations.
Learners investigate trigonometric functions. In this Algebra II instructional activity, students explore function notation and transformational graphing of trigonometric functions.
In this graphing worksheet, students complete 13 problem-solving questions regarding quadratic functions and function notation. The emphasis is on understanding parent functions and transformations.
Here is an unexpected resource: chapter 1 of an Algebra textbook. You can use all or some of its contents to teach your middle schoolers all about algebraic expressions, domain, function notation, linear equations, order of operations, input/output, ordered pairs, and variable expressions. This would be great for a substitute or a newer teacher looking for reliable tools.
Learners identify the different stretches and compressions of their polynomial equations. They define when a graph will stretch vertically and when it will stretch horizontally. They also determine when it will compress vertically and horizontally.
Students explore functions with nomographs on their TI-nspire calculators. In this function notation instructional activity, students find the function rule on three problems. They drag the point to observe a disappearing arrow, and tell why this happens. Students solve two compound and three inverse function problems.
In this variables and functions worksheet, students solve 7 different types of problems related to variables and functions and a vending machine. First, they determine which machine represents a function and why. Then, students determine which one does not represent a function and why. They also use function notation to represent the relationship between each code and the type of beverage the machine delivers.
Students apply the concept of functions to the real world. In this algebra lesson, students define inverse functions and practice using the correct function notation. They graph and define functions as relations.
Students solve problems and answer questions on Gateway. In this algebra instructional activity, students evaluate and solve problems using the correct function notation. They represent functions in the real world.
Students investigate reflections and symmetry about a line. In this algebra lesson, students apply function notation correctly to solve problems. They differentiate between even and odd functions and discuss the reason for their names.
Students, after completing a warm-up exercise on a proportion word problem, distinguish between functions and non-functions. They identify the domain and range of a function as well as recognize and implement function notation. They solve problems on the board in groups.
Learners can use this online Algebra 1 sheet to practice completing function tables. They use function notation to determine the functional value of a domain element. The one-page sheet includes 5 multiple choice problems, hint options, and can be self-grading so kids can check their own answers.
GREAT DEPRESSION, the longest, deepest, and most pervasive depression in American history, lasted from 1929 to 1939. Its effects were felt in virtually all corners of the world, and it is one of the great economic calamities in history.
In previous depressions, such as those of the 1870s and 1890s, real per capita gross domestic product (GDP)—the sum of all goods and services produced, weighted by market prices and adjusted for inflation—had returned to its original level within five years. In the Great Depression, real per capita GDP was still below its 1929 level a decade later.
Economic activity began to decline in the summer of 1929, and by 1933 real GDP fell more than 25 percent, erasing all of the economic growth of the previous quarter century. Industrial production was especially hard hit, falling some 50 percent. By comparison, industrial production had fallen 7 percent in the 1870s and 13 percent in the 1890s.
From the depths of depression in 1933, the economy recovered until 1937. This expansion was followed by a brief but severe recession, and then another period of economic growth. It was not until the 1940s that previous levels of output were surpassed. This led some to wonder how long the depression would have continued without the advent of World War II.
In the absence of government statistics, scholars have had to estimate unemployment rates for the 1930s. The sharp drop in GDP and the anecdotal evidence of millions of people standing in soup lines or wandering the land as hoboes suggest that these rates were unusually high. It is widely accepted that the unemployment rate peaked above 25 percent in 1933 and remained above 14 percent into the 1940s. Yet these figures may underestimate the true hardship of the times: those who became too discouraged to seek work would not have been counted as unemployed. Likewise, those who moved from the cities to the countryside in order to feed their families would not have been counted. Even those who had jobs tended to see their hours of work fall: the average work week, 47 to 49 hours in the 1920s, fell to 41.7 hours in 1934 and stayed between 42 and 45 until 1942.
The banking system witnessed a number of "panics" during which depositors rushed to take their money out of banks rumored to be in trouble. Many banks failed under this pressure, while others were forced to merge: the number of banks in the United States fell 35 percent between 1929 and 1933.
While the Great Depression affected some sectors of the economy more than others, and thus some regions of the country more than others, all sectors and regions experienced a serious decline in output and a sharp rise in unemployment. The hardship of unemployment, though concentrated in the working class, affected millions in the middle class as well. Farmers suffered too, as the average price of their output fell by half (whereas the aggregate price level fell by only a third).
The Great Depression followed almost a decade of spectacular economic growth. Between 1921 and 1929, output per worker grew about 5.9 percent per year, roughly double the average in the twentieth century. Unemployment and inflation were both very low throughout this period as well. One troublesome characteristic of the 1920s, however, was that income distribution became significantly less equal. Also, a boom in housing construction, associated in part with an automobile-induced rush to the suburbs, collapsed in the late 1920s. And automakers themselves worried throughout the late 1920s that they had saturated their market as they fought for market share; auto sales began to slide in the spring of 1929.
Technological advances in production processes (notably electrification, the assembly line, and continuous processing of homogeneous goods such as chemicals) were largely responsible for the advances in productivity in the 1920s. These advances induced the vast bulk of firms to invest in new plants and equipment. In the early 1920s, there were also innovative new products, such as radio, but the decade after 1925 was the worst in the twentieth century for new product innovation.
Causes of the Great Depression
In 1929 the standard economic theory suggested that a calamity such as the Great Depression could not happen: the economy possessed equilibrating mechanisms that would quickly move it toward full employment. For example, high levels of unemployment should put downward pressure on wages, thereby encouraging firms to increase employment. Before the Great Depression, most economists urged governments to concentrate on maintaining a balanced budget. Since tax receipts inevitably fell during a downturn, governments often increased tax rates and reduced spending. By taking money out of the economy, such policies tended to accelerate the downturn, though the effect was likely small.
As the depression continued, many economists advised the federal government to increase spending, in order to provide employment. Economists also searched for theoretical justifications for such policies. Some thought the depression was caused by overproduction: consumers did not wish to consume all that was produced. These analysts often attributed overproduction to the increased disparity in income that developed in the 1920s, for the poor spend a greater percentage of their income than do the rich. Others worried about a drop in the number of profitable investment opportunities. Often, these arguments were couched in apocalyptic terms: the Great Depression was thought to be the final crisis of capitalism, a crisis that required major institutional restructuring. Others, notably Joseph Schumpeter, pointed the finger at technology and suggested that the Great Depression reflected the failure of entrepreneurs to bring forth new products. He felt the depression was only temporary and a recovery would eventually occur.
The stock market crash of 1929 and the bank panics of the early 1930s were dramatic events. Many commentators emphasized the effect these had in decreasing the spending power of those who lost money. Some went further and blamed the Federal Reserve System for allowing the money supply, and thus average prices, to decline.
John Maynard Keynes in 1936 put forward a theory arguing that the amount individuals desired to save might exceed the amount they wanted to invest. In such an event, they would necessarily consume less than was produced (since, if we ignore foreign trade, total income must be either consumed or saved, while total output is the sum of consumption goods and investment goods). Keynes was skeptical of the strength of equilibrating mechanisms and shocked many economists who clung to a faith in the ability of the market system to govern itself. Yet within a decade the profession had largely embraced his approach, in large part because it allowed them to analyze deficient consumption and investment demand without reference to a crisis of capitalism. Moreover, Keynes argued that, because a portion of income was used for taxes and output included government services, governments might be able to correct a situation of deficient demand by spending more than they tax.
In the early postwar period, Keynesian theory dominated economic thinking. Economists advised governments to spend more than they taxed during recessions and tax more than spend during expansions. Although governments were not always diligent in following this prescription, the limited severity of early postwar business cycles was seen as a vindication of Keynesian theory. Yet little attention was paid to the question of how well it could explain the Great Depression.
In 1963, Milton Friedman and Anna Schwartz proposed a different view of the depression. They argued that, contrary to Keynesian theory, the deflationary actions of the Federal Reserve were primarily at fault. In the ensuing decades, Keynesians and "monetarists" argued for the supremacy of their favored theory. The result was a recognition that both explanations had limitations. Keynesians struggled to comprehend why either consumption or investment demand would have fallen so precipitously as to trigger the depression (though saturation in the housing and automobile markets, among others, may have been important). Monetarists struggled to explain how smallish decreases in the money supply could trigger such a massive downturn, especially since the price level fell as fast as the supply of money, and thus real (inflation-adjusted) aggregate demand need not have fallen.
In the 1980s and 1990s, some economists argued that the actions of the Federal Reserve had caused banks to decrease their willingness to loan money, leading to a severe decrease in consumption and, especially, investment. Others argued that the Federal Reserve and central banks in other countries were constrained by the gold standard, under which the value of a particular currency is fixed to the price of gold.
Some economists today speak of a consensus that holds the Federal Reserve, the gold standard, or both, largely responsible for the Great Depression. Others suggest that a combination of several theoretical approaches is needed to understand this calamity.
Most economists have analyzed the depression from a macroeconomic perspective. This perspective, spawned by the depression and by Keynes's theories, focuses on the interaction of aggregate economic variables, including consumption, investment, and the money supply. Only fairly recently have some macroeconomists begun to consider how other factors, such as technological innovation, would influence the level of economic activity.
Beginning in the 1930s, however, some students of the Great Depression have examined the unusually high level of process innovation in the 1920s and the lack of product innovation in the decade after 1925. The introduction of new production processes requires investment but may well cause firms to let some of their workforce go; by reducing prices, new processes may also reduce the amount consumers spend. The introduction of new products almost always requires investment and more employees; they also often increase the propensity of individuals to consume. The time path of technological innovation may thus explain much of the observed movements in consumption, investment, and employment during the interwar period. There may also be important interactions with the monetary variables discussed above: in particular, firms are especially dependent on bank finance in the early stages of developing a new product.
Effects of the Great Depression
The psychological, cultural, and political repercussions of the Great Depression were felt around the world, but it had a significantly different impact in different countries. In particular, it is widely agreed that the rise of the Nazi Party in Germany was associated with the economic turmoil of the 1930s. No similar threat emerged in the United States. While President Franklin Roosevelt did introduce a variety of new programs, he was initially elected on a traditional platform that pledged to balance the budget. Why did the depression cause less political change in the United States than elsewhere? A much longer experience with democracy may have been important. In addition, a faith in the "American dream," whereby anyone who worked hard could succeed, was apparently retained and limited the agitation for political change.
Effects on individuals. Much of the unemployment experience of the depression can be accounted for by workers who moved in and out of periods of employment and unemployment that lasted for weeks or months. These individuals suffered financially, to be sure, but they were generally able to save, borrow, or beg enough to avoid the severest hardships. Their intermittent periods of employment helped to stave off a psychological sense of failure. Yet there were also numerous workers who were unemployed for years at a time. Among this group were those with the least skills or the poorest attitudes. Others found that having been unemployed for a long period of time made them less attractive to employers. Long-term unemployment appears to have been concentrated among people in their late teens and early twenties and those older than fifty-five. For many that came of age during the depression, World War II would provide their first experience of full-time employment.
With unemployment rates exceeding 25 percent, it was obvious that most of the unemployed were not responsible for their plight. Yet the ideal that success came to those who worked hard remained in place, and thus those who were unemployed generally felt a severe sense of failure. The incidence of mental health problems rose, as did problems of family violence. For both psychological and economic reasons, decisions to marry and to have children were delayed. Although the United States provided more relief to the unemployed than many other countries (including Canada), coverage was still spotty. In particular, recent immigrants to the United States were often denied relief. Severe malnutrition afflicted many, and the palpable fear of it, many more.
Effects by gender and race. Federal, state, and local governments, as well as many private firms, introduced explicit policies in the 1930s to favor men over women for jobs. Married women were often the first to be laid off. At a time of widespread unemployment, it was felt that jobs should be allocated only to male "breadwinners." Nevertheless, unemployment rates among women were lower than for men during the 1930s, in large part because the labor market was highly segmented by gender, and the service sector jobs in which women predominated were less affected by the depression. The female labor force participation rate—the proportion of women seeking or possessing paid work—had been rising for decades; the 1930s saw only a slight increase; thus, the depression acted to slow this societal change (which would greatly accelerate during World War II, and then again in the postwar period).
Many surveys found unemployment rates among blacks to be 30 to 50 percent higher than among whites. Discrimination was undoubtedly one factor: examples abound of black workers being laid off to make room for white workers. Yet another important factor was the preponderance of black workers in industries (such as automobiles) that experienced the greatest reductions in employment. And the migration of blacks to northern industrial centers during the 1920s may have left them especially prone to seniority-based layoffs.
Cultural effects. One might expect the Great Depression to have induced great skepticism about the economic system and the cultural attitudes favoring hard work and consumption associated with it. As noted, the ideal of hard work was reinforced during the depression, and those who lived through it would place great value in work after the war. Those who experienced the depression were disposed to thrift, but they were also driven to value their consumption opportunities. Recall that through the 1930s it was commonly thought that one cause of the depression was that people did not wish to consume enough: an obvious response was to value consumption more.
The New Deal. The nonmilitary spending of the federal government accounted for 1.5 percent of GDP in 1929 but 7.5 percent in 1939. Not only did the government take on new responsibilities, providing temporary relief and temporary public works employment, but it established an ongoing federal presence in social security (both pensions and unemployment insurance), welfare, financial regulation and deposit insurance, and a host of other areas. The size of the federal government would grow even more in the postwar period. Whether the size of government today is larger than it would have been without the depression is an open question. Some scholars argue for a "ratchet effect," whereby government expenditures increase during crises, but do not return to the original level thereafter. Others argue that the increase in government brought on by the depression would have eventually happened anyhow.
In the case of unemployment insurance, at least, the United States might today have a more extensive system if not for the depression. Both Congress and the Supreme Court were more oriented toward states' rights in the 1930s than in the early postwar period. The social security system thus gave substantial influence to states. Some have argued that this has encouraged a "race to the bottom," whereby states try to attract employers with lower unemployment insurance levies. The United States spends only a fraction of what countries such as Canada spend per capita on unemployment insurance.
Some economists have suggested that public works programs exacerbated the unemployment experience of the depression. They argue that many of those on relief would have otherwise worked elsewhere. However, there were more workers seeking employment than there were job openings; thus, even if those on relief did find work elsewhere, they would likely be taking the jobs of other people.
The introduction of securities regulation in the 1930s has arguably done much to improve the efficiency, fairness, and thus stability of American stock markets. Enhanced bank supervision, and especially the introduction of deposit insurance from 1934, ended the scourge of bank panics: most depositors no longer had an incentive to rush to their bank at the first rumor of trouble. But deposit insurance was not an unmixed blessing; in the wake of the failure of hundreds of small savings and loan institutions decades later, many noted that deposit insurance allowed banks to engage in overly risky activities without being penalized by depositors. The Roosevelt administration also attempted to stem the decline in wages and prices by establishing "industry codes," whereby firms and unions in an industry agreed to maintain set prices and wages. Firms seized the opportunity to collude and agreed in many cases to restrict output in order to inflate prices; this particular element of the New Deal likely served to slow the recovery. Similar attempts to enhance agricultural prices were more successful, at least in the goal of raising farm incomes (but thus increased the cost of food to others).
It was long argued that the Great Depression began in the United States and spread to the rest of the world. Many countries, including Canada and Germany, experienced similar levels of economic hardship. In the case of Europe, it was recognized that World War I and the treaties ending it (which required large reparation payments from those countries that started and lost the war) had created weaknesses in the European economy, especially in its financial system. Thus, despite the fact that trade and capital flows were much smaller than today, the American downturn could trigger downturns throughout Europe. As economists have come to emphasize the role the international gold standard played in, at least, exacerbating the depression, the argument that the depression started in the United States has become less central.
With respect to the rest of the world, there can be little doubt that the downturn in economic activity in North America and Europe had a serious impact. Many Third World countries were heavily dependent on exports and suffered economic contractions as these markets dried up. At the same time, they were hit by a decrease in foreign investment flows, especially from the United States, which was a reflection of the monetary contraction in the United States. Many Third World countries, especially in Latin America, responded by introducing high tariffs and striving to become self-sufficient. This may have helped them recover from the depression, but probably served to seriously slow economic growth in the postwar period.
Developed countries also introduced high tariffs during the 1930s. In the United States, the major one was the Smoot-Hawley Tariff of 1930, which arguably encouraged other countries to retaliate with tariffs of their own. Governments hoped that the money previously spent on imports would be spent locally and enhance employment. In return, however, countries lost access to foreign markets, and therefore employment in export-oriented sectors. The likely effect of the increase in tariffs was to decrease incomes around the world by reducing the efficiency of the global economy; the effect the tariffs had on employment is less clear.
Barnard, Rita. The Great Depression and the Culture of Abundance: Kenneth Fearing, Nathanael West, and Mass Culture in the 1930s. New York: Cambridge University Press, 1995. Explores the impact of the depression on cultural attitudes and literature.
Bernanke, Ben S. Essays on the Great Depression. Princeton, N.J.: Princeton University Press, 2000. Emphasizes bank panics and the gold standard.
Bernstein, Michael A. The Great Depression: Delayed Recovery and Economic Change in America, 1929–1939. New York: Cambridge University Press, 1987. Argues for the interaction of technological and monetary forces and explores the experience of several industries.
Bordo, Michael D., Claudia Goldin, and Eugene N. White, eds. The Defining Moment: The Great Depression and the American Economy in the Twentieth Century. Chicago: University of Chicago Press, 1998. Evaluates the impact of a range of New Deal policies and international agreements.
Friedman, Milton, and Anna J. Schwartz. A Monetary History of the United States, 1867–1960. Princeton, N.J.: Princeton University Press, 1963.
Hall, Thomas E., and J. David Ferguson. The Great Depression: An International Disaster of Perverse Economic Policies. Ann Arbor: University of Michigan Press, 1998.
Keynes, John M. The General Theory Of Employment, Interest, and Money. New York: St. Martin's Press, 1964. Original edition published in 1936.
Margo, Robert A. "Employment and Unemployment in the 1930s." Journal of Economic Perspectives 7, no. 2 (spring 1993): 41–59.
Rosenbloom, Joshua, and William Sundstrom. "The Sources of Regional Variation in the Severity of the Great Depression: Evidence from U.S. Manufacturing 1919–1937." Journal of Economic History 59 (1999): 714–747.
Rosenof, Theodore. Economics in the Long Run: New Deal Theorists and Their Legacies, 1933–1993. Chapel Hill: University of North Carolina Press, 1997. Looks at how Keynes, Schumpeter, and others influenced later economic analysis.
Rothermund, Dietmar. The Global Impact of the Great Depression, 1929–1939. London: Routledge, 1996. Extensive treatment of the Third World.
Schumpeter, Joseph A. Business Cycles: A Theoretical, Historical, and Statistical Analysis of the Capitalist Process. New York: McGraw-Hill, 1939.
Szostak, Rick. Technological Innovation and the Great Depression. Boulder, Colo.: Westview Press, 1995. Explores the causes and effects of the unusual course that technological innovation took between the wars.
Temin, Peter. Did Monetary Forces Cause the Great Depression? New York: Norton, 1976. Classic early defense of Keynesian explanation.
———. Lessons from the Great Depression. Cambridge, Mass.: MIT Press, 1989. Emphasizes the role of the gold standard.
See also Agricultural Price Supports; Banking: Bank Failures, Banking Crisis of 1933; Business Cycles; Keynesianism; New Deal; and vol. 9: Advice to the Unemployed in the Great Depression, June 11, 1932.
Personal Effects of the Depression
The study of the human cost of unemployment reveals that a new class of poor and dependents is rapidly rising among the ranks of young sturdy ambitious laborers, artisans, mechanics, and professionals, who until recently maintained a relatively high standard of living and were the stable self-respecting citizens and taxpayers of the state. Unemployment and loss of income have ravaged numerous homes. It has broken the spirit of their members, undermined their health, robbed them of self-respect, and destroyed their efficiency and employability. Many households have been dissolved, little children parcelled out to friends, relatives, or charitable homes; husbands and wives, parents and children separated, temporarily or permanently.…Men young and old have taken to the road. Day after day the country over they stand in the breadlines for food. … The law must step in and brand as criminals those who have neither desire nor inclination to violate accepted standards of society.… Physical privation undermines body and heart.… Idleness destroys not only purchasing power, lowering the standards of living, but also destroys efficiency and finally breaks the spirit.
SOURCE: From the 1932 Report of the California Unemployment Commission.
"Great Depression." Dictionary of American History. . Encyclopedia.com. (December 12, 2017). http://www.encyclopedia.com/history/dictionaries-thesauruses-pictures-and-press-releases/great-depression
"Great Depression." Dictionary of American History. . Retrieved December 12, 2017 from Encyclopedia.com: http://www.encyclopedia.com/history/dictionaries-thesauruses-pictures-and-press-releases/great-depression
Between 1929 and 1933 the world economy collapsed. In country after country, although not in all, prices fell, output shrank, and unemployment soared. In the United States the rate of unemployment reached 25 percent of the labor force, in the United Kingdom 16 percent, and in Germany a staggering 30 percent. These rates are only roughly comparable across countries and with twenty-first century unemployment rates because of different definitions of unemployment and methods of collection; nevertheless, they show the extremity of the crisis. The recovery, moreover, was slow and in some countries incomplete. In 1938 the rate of unemployment was still at double-digit levels in the United States and the United Kingdom, although thanks to rearmament it was considerably lower in Germany. A number of previous depressions were extremely painful, but none was as deep or lasted as long. There were many recessions that came after, but none could begin to compare in terms of prolonged industrial stagnation and high unemployment. The consequences stemming from the Great Depression for economies and polities throughout the world were profound. The early appearance of depression in the United States and the crucial role of the United States in world trade make it important to consider the U.S. case in some detail.
THE GREAT DEPRESSION IN THE UNITED STATES
There had been severe depressions in the United States before the 1930s. The most similar occurred in the 1890s. Indeed, the sequence of events in the 1890s foreshadowed what was to happen in the 1930s in some detail. Prices of stocks began to decline in January 1893, and a crash came in May and June after the failure of several well-regarded firms. The market continued to decline, and at its low point in 1896 had lost 30 percent of its value. The decline in the stock market was associated with a sharp contraction in economic activity. A banking panic intensified the contraction. There seem to have been two sources for the panic. First, fears that the United States would leave the gold standard, prompted by the growing strength of the free silver movement, led to withdrawals of gold from New York. In addition, a wave of bank failures in the South and West produced by low agricultural prices also undermined confidence in the banking system. Runs on banks spread throughout the country and the crisis ended with a general restriction of the conversion of bank notes and deposits into gold. The money supply fell, the economy slowed, bank and business failures multiplied, and unemployment rose. Although a recovery began in June 1894, the recovery was slow and uneven. By 1897 one-third of the railroad mileage in the United States was in receivership. It took until 1898 for the stock market to match its 1893 peak, and for annual real gross domestic product (GDP) per capita to match its 1892 level.
During the early 1930s events unfolded in a similar fashion. There were few signs in 1929, however, that a Great Depression was on the horizon. There had been a severe contraction in 1920–1921, but the economy had recovered quickly. There were minor contractions in 1923–1924 and 1926–1927, and the agricultural sector had struggled during the 1920s, but overall the economy prospered after the 1920–1921 recession. In 1929 unemployment in the United States was just 3.2 percent of the labor force; in many ways it was a vintage year.
The stock market boomed in the late 1920s and reached a peak in 1929; prices rose nearly 2.5 times between 1927 and the 1929 peak. Economic historians have long debated whether there was a bubble in the market in the late 1920s, meaning that prices of shares had risen more rapidly than “fundamentals.” Research conducted in 1993 by Peter Rappoport and Eugene White, along with other late-twentieth-century studies, has strengthened the case for a bubble. They have shown that many well-informed investors doubted the long-run viability of prevailing prices. There were undoubtedly, however, many other investors who believed that the economy had entered a so-called New Age, as was said at the time, in which scientific and technical research would produce rising real incomes, rising profits, and an eventual end to poverty.
The crash of the stock market in the fall of 1929 was partly a reflection of the state of the economy—a recession was already under way—but the crash also intensified the slowdown by undermining confidence in the economic future. The major impact of the crash, as shown by Christina Romer in her 1990 work, was to slow the sale of consumer durables. The crash may also have influenced markets around the world by forcing investors to reassess their optimistic view of the future. In any case, the stock markets in most other industrial countries after having risen in the 1920s also fell to very low levels in the first half of the 1930s. The U.S. market lost two-thirds of its value by 1933, the German market (which had peaked before the American market) lost one half, and the British market, which did somewhat better, lost one-fifth.
The collapse of the American banking system then intensified the contraction. There were repeated waves of bank failures between 1930 and 1933 produced by the economic contraction, by the decline in prices, especially in the agricultural sector, and perhaps by a contagion of fear. As people withdrew their cash from banks to protect their wealth, and as banks increased their reserves to prepare for runs, the stock of money shrank. The collapse of the American banking system reflected a number of unique circumstances. First, laws that prevented banks based in one state from establishing branches in other states, and sometimes from establishing additional branches within a state, had created a system characterized by thousands of small independent banks. In contrast, most other countries had systems dominated by a few large banks with branches. In Canada, where the system consisted of a small number of banks with head offices in Toronto or Montreal and branches throughout the country, there were no bank failures. In addition, the young and inexperienced Federal Reserve System (it was established in 1913) proved incapable of taking the bold actions needed to end the crisis.
Many explanations have been put forward for the failure of the Federal Reserve to stem the tide of bank runs and closures. Milton Friedman and Anna J. Schwartz in their classic Monetary History of the United States (1963) stressed an internal political conflict between the Federal Reserve Board in Washington and the New York Federal Reserve Bank that paralyzed the system. A 2003 study by Allan Meltzer stresses adherence to economic doctrines that led the Federal Reserve to misinterpret the fall in nominal interest rates during the contraction. The Treasury bill rate fell from about 5 percent in May 1929 to 0.10 percent in September 1933. The Federal Reserve viewed low rates as proof that it had made liquidity abundant and that there was little more it could do to combat the depression. The Federal Reserve regarded the bank failures, which were concentrated among smaller banks in rural areas or, in some cases, larger banks that had engaged in questionable activities, as a benign process that would result in a stronger banking system. From 1930 to 1933 about 9,000 banks in the United States suspended operation and the money supply fell by one-third.
During the interregnum between the election of President Franklin Roosevelt in November 1932 and his taking office in March 1933 the banking system underwent further turmoil. In state after state governors proclaimed “bank holidays” that prohibited or limited withdrawals from banks and brought the banking and economic system to a standstill. The purpose of the holidays was to protect the banks from panicky withdrawals, but the result was to disrupt commerce and increase fears that the system was collapsing. By the time Roosevelt took office virtually all of the banks in the United States were closed and perhaps one-quarter of the labor force was unemployed. Roosevelt addressed the situation boldly. Part of his response was to rally the spirits of the nation. In his famous first inaugural address he told the people that “the only thing we have to fear is fear itself.” His address also promised work for the unemployed and reforms of the banking system. The administration soon followed through. Public works programs, which focused on conservation in national parks and building infrastructure, were created to hire the unemployed. In the peak year of 1936 approximately 7 percent of the labor force was working in emergency relief programs.
The banking crisis was addressed in several ways. Banks were inspected and only "sound" banks were allowed to reopen. The process of inspection and phased reopening was largely cosmetic, but it appears to have calmed fears about the safety of the system. Deposit insurance was also instituted. In 1963 Milton Friedman and Anna Jacobson Schwartz argued that deposit insurance was important in ending the banking crisis and preventing a new eruption of bank failures by removing the fears that produced bank runs. Once depositors were insured by a federal agency they had no reason to withdraw their funds in cash when there was a rumor that the bank was in trouble. The number of bank failures in the United States dropped drastically after the introduction of deposit insurance.
The recovery that began in 1933, although not without setbacks, was vigorous and prolonged. By the middle of 1937 industrial production was close to the 1929 average. Still, there was considerable concern about the pace of recovery and the level of the economy. After all, with normal economic growth the levels of industrial production and real output would have been above their 1929 levels in 1937. Unemployment, moreover, remained stubbornly high. With a few more years of continued growth the economy might well have recovered fully. However, another recession, the “recession within the depression,” hit the economy in 1937. By the trough in 1938 industrial production had fallen almost 60 percent and unemployment had risen once more. Mistakes in both fiscal and monetary policy contributed to the severity of the contraction, although the amounts contributed are disputed. The new Social Security system financed by a tax on wages was instituted in 1935, and the taxes were now put in place. The Federal Reserve, moreover, chose at this time to double the required reserve ratios of the banks. The main purpose of the increase was to prevent the reserves from being a factor in the future, to tie them down. The banks, however, were now accustomed to having a large margin of reserves above the required level and they appear to have cut their lending in order to rebuild this margin. The economic expansion that began in the summer of 1938, however, would last throughout the war and pull the economy completely out of the depression. Indeed, even before the United States entered the war as an active participant at the end of 1941, fiscal and monetary stimuli had done much to cure the depression.
Most market-oriented countries, especially those that adhered to the gold standard, were affected by the Great Depression. One reason was the downward spiral of world trade. The economic decline in the United States hit hard at firms throughout the world that produced for the American market. As the depression spread from country to country, imports declined further.
The gold standard, to which most industrial countries adhered, provided another channel for the transmission of the Great Depression. The reputation of the gold standard had reached unchallenged heights during the period of expanding world trade before World War I. Most countries, with the exception of the United States, had abandoned the gold standard during the war to be free to print money to finance wartime expenditures. After the war, the gold standard had been reconstructed, but in a way that left it fragile. Most countries decided not to deflate their price levels back to prewar levels. Hence the nominal value of world trade relative to the amount of gold in reserve was much higher after the war than before. Under the gold standard orthodoxy of the day central banks were supposed to place maintenance of the gold standard above other priorities. If a country was losing gold because its exports had fallen faster than its imports, the central bank was supposed to raise interest rates to protect its gold reserve, even if this policy exacerbated the economic contraction. Countries that gained gold might have lowered their rates, but they were reluctant to do so because lower rates would put their gold reserves at risk.
The global transmission of information and opinion provided a third, hard to measure, but potentially important channel. The severe slide on the U.S. stock market and other stock markets focused attention throughout the rest of the world on factors that might produce a decline in local markets. Waves of bank failures in the United States and central Europe forced depositors throughout the rest of the world to raise questions about the safety of their own funds. Panic, in other words, did not respect international borders.
Although these transmission channels assured that the whole world was affected to some degree by the depression, the experience varied markedly from country to country, as even a few examples will illustrate. In Britain output fell from 1929 to 1932, but the fall was less than 6 percent. The recovery, moreover, seems to have started sooner in Britain than in the United States, and the growth of output from 1932 to 1937 was extremely rapid. Unemployment, however, soared from 1929 to 1931 and remained stubbornly high for the remainder of the decade. Although Britain was becoming less dependent on exports, exports were still about 15 percent of national product. The fall in exports produced by the economic decline in the United States and other countries therefore probably explains a good deal of the decline in economic activity in Britain. In September 1931 Britain devalued the pound and left the gold standard. The recovery in Britain began soon after. Export growth produced by a cheaper pound does not seem to have played a prominent part in the recovery, but a more expansionary monetary policy permitted by leaving gold does seem to have played a role. On the whole the British economy displayed surprising resiliency in the face of the loss of its export markets.
Germany, on the other hand, suffered one of the most catastrophic declines. A severe banking crisis hit Germany in July 1931, punctuated by the failure of the Darmstädter und Nationalbank on July 13. The German crisis may have been provoked by the failure of the Credit Anstalt bank in Austria in May 1931 and the subsequent run on the Austrian schilling, although economists have debated these factors. Germany soon closed its banks in an effort to stem the runs, and abandoned the gold standard. Germany, however, did not use the monetary freedom won by abandoning the commitment to gold to introduce expansionary policies. Between June 1930 and June 1933 the stock of money in Germany fell by nearly 40 percent. Prices and industrial production fell, and unemployment soared. Under the Nazis, government spending, much of it for rearmament, and monetary expansion produced an extended economic boom that restored industrial production and full employment.
The experience of Japan, where the depression was unusually mild, has stimulated considerable interest. Unemployment rose only mildly by Western standards between 1929 and 1933 and fell to 3.8 percent by 1938. Other indicators, such as the stock market, also rose between 1933 and 1938. Many observers have attributed this performance to the actions of Finance Minister Korekiyo Takahashi. In 1931 Takahashi introduced a stimulus package that included a major devaluation of the yen, interest rate cuts, and increases in government spending. The latter element of his package has led some observers to refer to Takahashi as a “Keynesian before Keynes.” Late twentieth-century research has challenged the notion that Takahashi was able to break completely free of the economic orthodoxies of the day, but the strong performance of the Japanese economy remains an important signpost for scholars attempting to understand the factors that determined the course of events in the 1930s.
The factors stressed above, the collapse of the banking system in the early 1930s and the policy mistakes by the Federal Reserve and other central banks, are most relevant to what has come to be called the monetarist interpretation of the Great Depression. Some economists writing in the 1930s, such as Jacob Viner and Lauchlin Currie, developed this view, concluding that much of the trouble could have been avoided if the Federal Reserve and other central banks had acted wisely.
In the aftermath of the publication of John Maynard Keynes’ General Theory (1936), however, an alternative interpretation held sway. The Keynesians argued that the breakdown of the banking system, although disturbing, was mainly a consequence of the collapse of aggregate demand. The behavior of the Federal Reserve was at most a secondary problem. The Keynesians blamed the fall in aggregate demand on the failure of one or more categories of autonomous spending. At first, attention focused on investment; later attention shifted to consumption. The answer to the Great Depression was public works financed, if necessary, by borrowing. The New Deal in the United States had spent a great deal of money and run up highly controversial deficits; calculations published by E. Cary Brown in 1956, however, showed that a number of factors, including cuts in spending at the state and local level, had offset the effects of New Deal spending. Fiscal policy had failed to return the economy to full employment, according to Brown, “not because it did not work, but because it was not tried” (1956, pp. 863–866).
Friedman and Schwartz’s Monetary History, which provided an extraordinarily detailed account of the effects of monetary policies during the 1930s and put the Great Depression into the broader context of American monetary history, returned the collapse of the banking system to center stage. Their interpretation was challenged in turn by Peter Temin in Did Monetary Forces Cause the Great Depression? (1976), who defended the Keynesian interpretation. Subsequent work, however, continued to emphasize the banking crisis. The 1983 research of Ben Bernanke, who later became chair of the U.S. Federal Reserve, was particularly influential. Bernanke argued that the banking and financial crises had disrupted the ability of the banking system to act as an efficient financial intermediary. Even sound businesses found it hard to borrow when their customary lender had closed its doors and the assets they could offer as collateral to another lender had lost value. The Bernanke thesis not only explained why the contraction was severe, but also why it took so long for the economy to recover: It took time for financial markets to rebuild the relationships that had been sundered in the early 1930s.
Research that took a more global view of the Great Depression, such as Peter Temin’s 1989 work, reinforced the case for viewing monetary forces as decisive. Barry Eichengreen’s Golden Fetters (1992), one of the most influential statements of this view, stressed the role of the gold standard in transmitting the Depression and inhibiting recovery. Countries faced with balance of trade deficits because of declining exports should have maintained their stocks of money and aimed for internal price stability. Instead they often adopted contractionary policies aimed at stemming the outflow of gold. Those countries that abandoned the gold standard and undertook expansionary monetary policies recovered more rapidly than those that clung to gold. The examples provided by countries such as Japan, which avoided trouble because they had never been on gold or had quickly abandoned it, were particularly telling.
In the twenty-first century economists have turned to formal models, such as dynamic computable general equilibrium models, to address macroeconomic questions, and have used these models to formulate and test ideas about the Great Depression. The 2002 and 2005 work of Harold Cole and Lee Ohanian has received considerable attention in both academic and mainstream circles. It is too early to say whether this work will serve to reinforce traditional interpretations of the Great Depression reached by other methods or produce entirely new interpretations. It is not too soon to predict, however, that the Great Depression will continue to attract the interest of scholars attempting to understand basic macroeconomic processes.
One cannot say for certain that another Great Depression is impossible, but important lessons have been learned and important changes made in the financial system that make a repetition highly unlikely. For example, it seems improbable that any modern central bank would allow a massive collapse of the banking system and deflation to proceed unabated as happened in a number of countries in the early 1930s.
SEE ALSO Aggregate Demand; Banking; Bull and Bear Markets; Business Cycles, Real; Central Banks; Depression, Economic; Economic Crises; Economics, Keynesian; Federal Reserve System, U.S.; Finance; Financial Markets; Fisher, Irving; Friedman, Milton; Gold Standard; Interest Rates; Investment; Keynes, John Maynard; Kindleberger, Charles Poor; Long Waves; Monetarism; Policy, Fiscal; Policy, Monetary; Recession; Stagnation; Unemployment
Bernanke, Ben S. 1983. Nonmonetary Effects of the Financial Crisis in the Propagation of the Great Depression. American Economic Review 73 (3): 257–276.
Bernanke, Ben S. 1995. The Macroeconomics of the Great Depression: A Comparative Approach. Journal of Money, Credit and Banking 27 (1): 1–28.
Brown, E. Cary. 1956. Fiscal Policy in the Thirties: A Reappraisal. American Economic Review 46 (5): 857–879.
Cole, Harold L., and Lee E. Ohanian. 2002. The U.S. and U.K. Great Depressions through the Lens of Neoclassical Growth Theory. American Economic Review 92 (2): 28–32.
Cole, Harold L., Lee E. Ohanian, and Ron Leung. 2005. Deflation and the International Great Depression: A Productivity Puzzle. Minneapolis, MN: Federal Reserve Bank of Minneapolis.
Eichengreen, Barry J. 1992. Golden Fetters: The Gold Standard and the Great Depression, 1919–1939. New York: Oxford University Press.
Friedman, Milton, and Anna Jacobson Schwartz. 1963. A Monetary History of the United States, 1867–1960. Princeton, NJ: Princeton University Press.
James, Harold. 1984. The Causes of the German Banking Crisis of 1931. Economic History Review 37 (1): 68–87.
Kindleberger, Charles Poor. 1973. The World in Depression, 1929–1939. Berkeley: University of California Press.
Meltzer, Allan H. 2003. A History of the Federal Reserve. Chicago: University of Chicago Press.
Rappoport, Peter, and Eugene N. White. 1993. Was There a Bubble in the 1929 Stock Market? Journal of Economic History 53 (3): 549–574.
Romer, Christina D. 1990. The Great Crash and the Onset of the Great Depression. Quarterly Journal of Economics 105 (3): 597–624.
Romer, Christina D. 1993. The Nation in Depression. Journal of Economic Perspectives 7 (2): 19–39.
Sicsic, Pierre. 1992. Was the Franc Poincaré Deliberately Undervalued? Explorations in Economic History 29: 69–92.
Temin, Peter. 1976. Did Monetary Forces Cause the Great Depression? New York: Norton.
Temin, Peter. 1989. Lessons from the Great Depression. Cambridge, MA: MIT Press.
Temin, Peter. 1993. Transmission of the Great Depression. Journal of Economic Perspectives 7 (2): 87–102.
"Great Depression." International Encyclopedia of the Social Sciences. . Encyclopedia.com. (December 12, 2017). http://www.encyclopedia.com/social-sciences/applied-and-social-sciences-magazines/great-depression
"Great Depression." International Encyclopedia of the Social Sciences. . Retrieved December 12, 2017 from Encyclopedia.com: http://www.encyclopedia.com/social-sciences/applied-and-social-sciences-magazines/great-depression
The stock market crash on October 29, 1929, sent the United States careening into the longest and darkest economic depression in American history. Between 1929 and 1933, all major economic indexes told the same story. The gross national product (GNP), the total of all goods and services produced each year, fell from $104.4 billion in 1929 to $74.2 billion in 1933, setting back the GNP per capita rate by twenty years. Industrial production declined 51 percent before reviving slightly in 1932. Unemployment statistics revealed the impact of the Depression on Americans. In 1929, the U.S. Labor Department reported that there were nearly 1.5 million persons without jobs in the country. After the crash, the figure soared. At its peak in 1933, unemployment stood at more than 12.6 million, although some estimates placed it as high as 16 million. By 1933, the annual national combined income had shrunk from $87.8 billion to $40.2 billion. Farmers, perhaps the hardest hit economic group, saw their total combined income drop from $11.9 billion to $5.3 billion.
For the first two years of the Depression, which spread worldwide, President Herbert Hoover (1929–1933) relied on the voluntary cooperation of business and labor to maintain payrolls and production. When the crisis deepened, he took positive steps to stop the spread of economic collapse. Hoover's most important achievement was the creation of the Reconstruction Finance Corporation (RFC), a loan agency designed to aid large business concerns, including banks, railroads, and insurance companies. The RFC later became an essential agency of the New Deal. In addition, Hoover obtained new funds from Congress to cut down the number of farm foreclosures. The Home Loan Bank Act helped prevent the foreclosure of home mortgages. On the relief issue, the President and Congress fought an ongoing battle that lasted for months. The Democrats wanted the federal government to assume responsibility for direct relief and to spend heavily on public works. Hoover, however, insisted that unemployment relief was a problem for local, not federal, governments. At first, he did little more than appoint two committees to mobilize public and private agencies against distress. Yet after a partisan fight, Hoover signed a relief bill unmatched in American history. The Emergency Relief and Construction Act provided $300 million for local relief loans and $1.5 billion for self-liquidating public works. Tragically, the Depression only worsened. By the time Hoover's term in office expired, the nation's banking system had virtually collapsed and the economic machinery of the nation was grinding to a halt. Hoover left office with the reputation of a do-nothing President. The judgment was rather unfair. He had done much, including establishing many precedents for the New Deal; but it was not enough.
What happened to the economy after the stock market crash of 1929 left most people baffled. The physical structure of business and industry was still intact, undamaged by war or natural disaster, but businesses closed. Men wanted to go to work, but plants stood dark and idle. Prolonged unemployment created a new class of people. The jobless sold apples on street corners. They stood in breadlines and outside soup kitchens. Many lived in "Hoovervilles," shantytowns on the outskirts of large cities. Thousands of unemployed men and boys took to the road in search of work, and the gas station became a meeting place for men "on the bum." In 1932, a crowd of 50 men fought for a barrel of garbage outside the back door of a Chicago restaurant. In northern Alabama, poor families exchanged a dozen eggs, which they sorely needed, for a box of matches. Despite such mass suffering, for the most part there was little violence. The angriest Americans were those in rural areas, where cotton was bringing only five cents a pound and wheat only 35 cents a bushel. In August 1932, Iowa farmers began dumping milk bound for Sioux City. To dramatize their plight, Milo Reno, former president of the Iowa Farmers Union, organized a farm strike on the northern plains, aiming to cut off all agricultural products from urban markets until prices rose. During the same summer, 25,000 World War I (1914–1918) veterans, led by former sergeant Walter W. Waters, staged the Bonus March on Washington, DC, to demand immediate payment of a bonus due to them in 1945. They stood passively on the Capitol steps while Congress voted it down. After a riot with police, Hoover ordered the U.S. Army to clear the veterans out of their shantytown, for fear they would breed a revolution.
The Great Depression was a crisis of the American mind. Many people believed that the country had reached all its frontiers and faced a future of limited opportunity. The slowdown of marriage and birth rates expressed this pessimism. The Depression smashed the old beliefs of rugged individualism, the sanctity of business, and a limited government. Utopian movements found an eager following. The Townsend Plan, initiated by retired California physician Francis E. Townsend, demanded a monthly pension for people over age 65. Charles E. Coughlin (1891–1979), a radio priest in Royal Oak, Michigan, advocated the nationalization of banks, utilities, and natural resources. Senator Huey P. Long (1893–1935), Governor of Louisiana, led a movement that recommended a redistribution of the wealth. All the programs tapped a broad sense of resentment among those who felt they had been left out of President Franklin Roosevelt's (1933–1945) New Deal. Americans did gradually regain their sense of optimism. The progress of the New Deal revived the old faith that the nation could meet any challenge and control its own destiny. Even many intellectuals who had "debunked" American life in the 1920s began to revise their opinions for the better.
By early 1937, there were signs of recovery in the American economy. Business indexes were up—some near pre-crash levels. The New Deal had eased much of the acute distress, although unemployment remained around 7.5 million. The economy again went into a sharp recession that was almost as bad as 1929. Although conditions improved by mid-1938, the Depression did not truly end until the government launched massive defense spending in preparation for World War II (1939–1945).
See also: Great Depression (Causes of), Hoovervilles, New Deal, Recession, Reconstruction Finance Corporation, Franklin D. Roosevelt, Stock Market Crash of 1929, Unemployment
Phillips, Cabell. From the Crash to the Blitz, 1929– 1939. New York: The Macmillan Co., 1969.
Schlesinger, Arthur M., Jr. The Age of Roosevelt. Boston: Houghton Mifflin Co., 1957.
Shannon, David A. The Great Depression. Englewood Cliffs, NJ: Prentice-Hall, 1960.
Terkel, Studs. Hard Times: An Oral History of the Great Depression in America. New York: Pantheon Books Inc., 1970.
Wecter, Dixon. The Age of the Great Depression, 1929–1941. New York: The Macmillan Company, 1948.
"Great Depression." Gale Encyclopedia of U.S. Economic History. . Encyclopedia.com. (December 12, 2017). http://www.encyclopedia.com/history/encyclopedias-almanacs-transcripts-and-maps/great-depression
"Great Depression." Gale Encyclopedia of U.S. Economic History. . Retrieved December 12, 2017 from Encyclopedia.com: http://www.encyclopedia.com/history/encyclopedias-almanacs-transcripts-and-maps/great-depression
Great Depression, in U.S. history, the severe economic crisis generally considered to have been precipitated by the U.S. stock-market crash of 1929. Although it shared the basic characteristics of other such crises (see depression), the Great Depression was unprecedented in its length and in the wholesale poverty and tragedy it inflicted on society. Economists have disagreed over its causes, but certain causative factors are generally accepted. The prosperity of the 1920s was unevenly distributed among the various parts of the American economy—farmers and unskilled workers were notably excluded—with the result that the nation's productive capacity was greater than its capacity to consume. In addition, the tariff and war-debt policies of the Republican administrations of the 1920s had cut down the foreign market for American goods. Finally, easy-money policies led to an inordinate expansion of credit and installment buying and fantastic speculation in the stock market.
The American depression produced severe effects abroad, especially in Europe, where many countries had not fully recovered from the aftermath of World War I; in Germany, the economic disaster and resulting social dislocation contributed to the rise of Adolf Hitler. In the United States, at the depth (1932–33) of the depression, there were 16 million unemployed—about one third of the available labor force. The gross national product declined from the 1929 figure of $103,828,000,000 to $55,760,000,000 in 1933, and in two years more than 5,000 banks failed. As a social consequence of the depression, the birthrate fell precipitously, for the first time in American history falling below the replacement rate. The economic, agricultural, and relief policies of the New Deal administration under President Franklin Delano Roosevelt did a great deal to mitigate the effects of the depression and, most importantly, to restore a sense of confidence to the American people. Yet it is generally agreed that complete business recovery was not achieved, and unemployment did not end, until the early 1940s, when as a result of World War II the government began to spend heavily for defense.
See R. R. and H. M. Lynd, Middletown in Transition (1937, repr. 1982); F. L. Allen, Since Yesterday: The 1930s in America (1940); D. Wecter, The Age of the Great Depression (1948, repr. 1956); A. M. Schlesinger, Jr., The Crisis of the Old Order (1957); D. A. Shannon, ed., The Great Depression (1960); C. Bird, The Invisible Scar: The Great Depression, and What It Did to American Life … (1966); A. U. Romasco, The Poverty of Abundance (1965); G. Rees, The Great Slump (1970); S. Terkel, Hard Times: An Oral History of the Great Depression (1970, repr. 2000); C. P. Kindleberger, The World in Depression (1973); G. H. Elder, Jr., Children of the Great Depression (1974, upd. ed. 1998); D. M. Kennedy, Freedom from Fear (1999); T. H. Watkins, The Hungry Years (1999); L. Ahamed, Lords of Finance: The Bankers Who Broke the World (2009); M. Dickstein, Dancing in the Dark: A Cultural History of the Great Depression (2009).
"Great Depression." The Columbia Encyclopedia, 6th ed.. . Encyclopedia.com. (December 12, 2017). http://www.encyclopedia.com/reference/encyclopedias-almanacs-transcripts-and-maps/great-depression
"Great Depression." The Columbia Encyclopedia, 6th ed.. . Retrieved December 12, 2017 from Encyclopedia.com: http://www.encyclopedia.com/reference/encyclopedias-almanacs-transcripts-and-maps/great-depression
"Great Depression." World Encyclopedia. . Encyclopedia.com. (December 12, 2017). http://www.encyclopedia.com/environment/encyclopedias-almanacs-transcripts-and-maps/great-depression
"Great Depression." World Encyclopedia. . Retrieved December 12, 2017 from Encyclopedia.com: http://www.encyclopedia.com/environment/encyclopedias-almanacs-transcripts-and-maps/great-depression |
Volcanism has played a major part in shaping not only planet Earth but also other worlds in our universe. Though other planets show signs of volcanic eruptions, most seem to have erupted in the distant past and are inactive now. Both Mars and Venus have volcanoes much larger than any on Earth, and they have erupted huge amounts of lava onto their surfaces in the past.
The Earth's Moon has no large volcanoes and no current volcanic activity, although recent evidence suggests it may still possess a partially molten core. However, the Moon does have many volcanic features such as maria (the darker patches seen on the moon), rilles and domes.
The planet Venus has a surface that is 90% basalt, indicating that volcanism played a major role in shaping its surface. The planet may have had a major global resurfacing event about 500 million years ago, from what scientists can tell from the density of impact craters on the surface. Lava flows are widespread and forms of volcanism not present on Earth occur as well. Changes in the planet's atmosphere and observations of lightning have been attributed to ongoing volcanic eruptions, although there is no confirmation of whether or not Venus is still volcanically active. However, radar sounding by the Magellan probe revealed evidence for comparatively recent volcanic activity at Venus's highest volcano Maat Mons, in the form of ash flows near the summit and on the northern flank.
There are several extinct volcanoes on Mars, the largest of which are vast shield volcanoes far bigger than any on Earth. They include Arsia Mons, Ascraeus Mons, Hecates Tholus, Olympus Mons, and Pavonis Mons. These volcanoes have been extinct for many millions of years, but the European Mars Express spacecraft has found evidence that volcanic activity may have occurred on Mars in the recent past as well.
Jupiter's moon Io is the most volcanically active object in the solar system because of tidal interaction with Jupiter. It is covered with volcanoes that erupt sulfur, sulfur dioxide and silicate rock, and as a result, Io is constantly being resurfaced. Its lavas are the hottest known anywhere in the solar system, with temperatures exceeding 1,800 K (1,500 °C). In February 2001, the largest recorded volcanic eruptions in the solar system occurred on Io. Europa, the smallest of Jupiter's Galilean moons, also appears to have an active volcanic system, except that its volcanic activity is entirely in the form of water, which freezes into ice on the frigid surface. This process is known as cryovolcanism, and is apparently most common on the moons of the outer planets of the solar system.
In 1989 the Voyager 2 spacecraft observed cryovolcanoes (ice volcanoes) on Triton, a moon of Neptune, and in 2005 the Cassini–Huygens probe photographed fountains of frozen particles erupting from Enceladus, a moon of Saturn. The ejecta may be composed of water, liquid nitrogen, ammonia, dust, or methane compounds. Cassini–Huygens also found evidence of a methane-spewing cryovolcano on the Saturnian moon Titan, which is believed to be a significant source of the methane found in its atmosphere. It is theorized that cryovolcanism may also be present on the Kuiper Belt Object Quaoar. A 2010 study of the exoplanet COROT-7b, which was detected by transit in 2009, suggested that tidal heating from the host star very close to the planet and neighboring planets could generate intense volcanic activity similar to that found on Io.
Weird Volcanoes Are Erupting Across the Solar System Live Science - August 12, 2018
NASA's Juno spacecraft recently spotted a possible new volcano at the south pole of Jupiter's most lava-licious moon, Io. But this volcanically active moon is not alone in the solar system, where sizzling-hot rocks explode and ooze onto the surface of several worlds. So how do Earthly volcanoes differ from those erupting across the rest of the solar system? Let's start with Io. The moon is famous for its hundreds of volcanoes, including fountains that sometimes spurt lava dozens of miles above the surface, according to NASA. This Jupiter moon is constantly re-forming its surface through volcanic eruptions, even to this day. Io's volcanism results from the strong tidal pull of Jupiter, maintained by gravitational interactions with two of Jupiter's other large moons, Europa and Ganymede, which keep Io's orbit eccentric and shake up its insides.
The robotic NASA spacecraft Dawn entered orbit around Ceres on March 6, 2015. Pictures with a resolution previously unattained were taken during imaging sessions starting in January 2015 as Dawn approached Ceres, showing a cratered surface. Two distinct bright spots (or high-albedo features) inside a crater (different from the bright spots observed in earlier Hubble images) were seen in a February 19, 2015 image, leading to speculation about a possible cryovolcanic origin or outgassing.
On March 3, 2015, a NASA spokesperson said the spots are consistent with highly reflective materials containing ice or salts, but that cryovolcanism is unlikely, however on 2 September 2016, published alongside six other studies, NASA scientists released a paper in Science that claims that a massive ice volcano called Ahuna Mons is the strongest evidence yet for the existence of these mysterious ice volcanoes.
On May 11, 2015, NASA released a higher-resolution image showing that, instead of one or two spots, there are actually several. On December 9, 2015, NASA scientists reported that the bright spots on Ceres may be related to a type of salt, particularly a form of brine containing magnesium sulfate hexahydrite (MgSO4·6H2O); the spots were also found to be associated with ammonia-rich clays. In June 2016, near-infrared spectra of these bright areas were found to be consistent with a large amount of sodium carbonate (Na2CO3), implying that recent geologic activity was probably involved in the creation of the bright spots.
Lonely Ice Volcano On Ceres May Have Once Had Company Live Science - February 6, 2017
The mystery of the dwarf planet Ceres' lonely ice volcano may have just been solved. NASA's Dawn probe discovered the 2.5-mile-high (4 kilometers) cryovolcano, named Ahuna Mons, in 2015. There's nothing else remotely like it on the 590-mile-wide (950 km) Ceres - a fact that has had scientists scratching their heads. The dwarf planet Ceres may once have harbored many ice volcanoes, all but one of which have flattened out and vanished into the dwarf planet's surface, a new study suggests.
The Earth's Moon has no large volcanoes. However, vast plains of basaltic lavas cover much of the lunar surface. The earliest astronomers thought, wrongly, that these plains were seas of lunar water, so they were named "maria" (singular "mare," pronounced "MAHR-ay"), Latin for "seas." Other volcanic features also occur within the lunar maria. The most important are sinuous rilles, dark mantling deposits, and small volcanic domes and cones. Most of these features are fairly small, however, and they form only a tiny fraction of the lunar volcanic record.
Astronomers May Have Found Volcanoes 40 Light-Years From Earth National Geographic - May 6, 2015
Of all the recent discoveries reported of late from worlds outside our solar system, a new find may be the most extraordinary yet: A world known as 55 Cancri e, just 40 light-years from Earth, might be volcanically active. When astronomers first began discovering exoplanets back in the 1990s, they figured it would take a bigger, more powerful successor to the Hubble to study them in any detail. They were wrong. Even with the launch of the James Webb Space Telescope still several years in the future, exoplanet experts have managed to probe the atmospheres of these distant worlds; determine whether they're gaseous, like Jupiter, or rocky, like Earth; and find what appears to be gigantic storms whipping across them.
Volcanism on Io, a moon of Jupiter, produces lava flows, volcanic pits, and plumes of sulfur and sulfur dioxide hundreds of kilometres high. This volcanic activity was discovered in 1979 by Voyager 1 imaging scientists.
Observations of Io by passing spacecraft (the Voyagers, Galileo, Cassini, and New Horizons) and Earth-based astronomers have revealed more than 150 active volcanoes. Up to 400 such volcanoes are predicted to exist based on these observations. Io's volcanism makes the satellite one of only five known currently volcanically active worlds in the solar system (the other four being Earth, Venus, Saturn's moon Enceladus, and Neptune's moon Triton).
First predicted shortly before the Voyager 1 flyby, the heat source for Io's volcanism comes from tidal heating produced by its forced orbital eccentricity. This differs from Earth's internal heating, which is derived primarily from radioactive isotope decay.
Io's eccentric orbit leads to a slight difference in Jupiter's gravitational pull on the satellite between its closest and farthest points on its orbit, causing a varying tidal bulge. This variation in the shape of Io causes frictional heating in its interior. Without this tidal heating, Io might have been similar to the Earth's moon, a world of similar size and mass, geologically dead and covered with numerous impact craters.
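For readers who want a rough quantitative handle on this mechanism, a commonly quoted approximation for the tidal heating rate of a synchronously rotating satellite on a slightly eccentric orbit is sketched below. This formula is standard in the planetary-science literature rather than something stated in the text above, and it is only an order-of-magnitude estimate.

```latex
% Approximate tidal dissipation rate; k_2 is the satellite's tidal Love number,
% Q its tidal quality factor, M_p the planet's mass, R the satellite's radius,
% n its orbital mean motion, e its eccentricity, and a its semi-major axis.
\dot{E}_{\mathrm{tidal}} \;\approx\; \frac{21}{2}\,\frac{k_2}{Q}\,
  \frac{G\,M_p^{2}\,R^{5}\,n\,e^{2}}{a^{6}}
```

The key point is the factor of e squared: without the forced eccentricity maintained by Europa and Ganymede, the heating would essentially vanish, and the steep 1/a^6 dependence helps explain why Io, the innermost large moon, is heated so much more strongly than its neighbors.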
Io's volcanism has led to the formation of hundreds of volcanic centres and extensive lava formations, making the moon the most volcanically active body in the Solar System. Three different types of volcanic eruptions have been identified, differing in duration, intensity, lava effusion rate, and whether the eruption occurs within a volcanic pit (known as a patera). Lava flows on Io, tens or hundreds of kilometres long, have primarily basaltic composition, similar to lavas seen on Earth at shield volcanoes such as Kilauea in Hawaii.
While most lavas on Io are made of basalt, a few lava flows consisting of sulfur and sulfur dioxide have been seen. In addition, eruption temperatures as high as 1,600 K (1,300 °C; 2,400 °F) were detected, which can be explained by the eruption of high-temperature ultramafic silicate lavas.
As a result of the presence of significant quantities of sulfurous materials in Io's crust and on its surface, some eruptions propel sulfur, sulfur dioxide gas, and pyroclastic material up to 500 kilometres (310 mi) into space, producing large, umbrella-shaped volcanic plumes.
This material paints the surrounding terrain in red, black, and/or white, and provides material for Io's patchy atmosphere and Jupiter's extensive magnetosphere. Spacecraft that have flown by Io since 1979 have observed numerous surface changes as a result of Io's volcanic activity.
Volcanoes on Io, a moon of the planet Jupiter, are believed to eject sulfur or possibly sulfur dioxide.
Scientists have never recorded an active volcanic eruption on the surface of Mars; however, the European Space Agency's Mars Express orbiter has photographed lava flows that must have occurred within the past two million years, suggesting relatively recent geologic activity.
Volcanic activity, or volcanism, has played a significant role in the geologic evolution of Mars. Scientists have known since the Mariner 9 mission in 1972 that volcanic features cover large portions of the Martian surface. These features include extensive lava flows, vast volcanic plains, and the largest known volcanoes in the Solar System. Martian volcanic features range in age from Noachian (>3.7 billion years) to late Amazonian (< 500 million years), indicating that the planet has been volcanically active throughout its history and probably still is so today.
Volcanic plains are widespread on Mars. Two types of plains are commonly recognized: those where lava flow features are common, and those where flow features are generally absent but a volcanic origin is inferred by other characteristics.
Plains with abundant lava flow features occur in and around the large volcanic provinces of Tharsis and Elysium.
Flow features include both sheet flow and tube- and channel-fed flow morphologies. Sheet flows show complex, overlapping flow lobes and may extend for many hundreds of kilometers from their source areas.
Lava flows can form a lava tube when the exposed upper layers of lava cool and solidify to form a roof while the lava underneath continues flowing. Often, when all the remaining lava leaves the tube, the roof collapses to make a channel or line of pit craters (catena).
An unusual type of flow feature occurs in the Cerberus plains south of Elysium and in Amazonis. These flows have a broken platey texture, consisting of dark, kilometer-scale slabs embedded in a light-toned matrix. They have been attributed to rafted slabs of solidified lava floating on a still-molten subsurface. Others have claimed the broken slabs represent pack ice that froze over a sea that pooled in the area after massive releases of groundwater from the Cerberus Fossae area.
Both Earth and Mars are large, differentiated planets built from similar chondritic materials. Many of the same magmatic processes that occur on Earth also occur on Mars, and both planets are similar enough compositionally that the same names can be applied to their igneous rocks and minerals.
Volcanism is a process in which magma from a planet's interior rises through the crust and erupts on the surface. The erupted materials consist of molten rock (lava), hot fragmental debris (tephra or ash), and gases. Volcanism is a principal way that planets release their internal heat. Volcanic eruptions produce distinctive landforms, rock types, and terrains that provide a window on the chemical composition, thermal state, and history of a planet's interior.
Magma is a complex, high-temperature mixture of molten silicates, suspended crystals, and dissolved gases. Magma on Mars likely ascends in a similar manner to that on Earth. It rises through the lower crust in diapiric bodies that are less dense than the surrounding material. As the magma rises, it eventually reaches regions of lower density.
When the magma density matches that of the host rock, buoyancy is neutralized and the magma body stalls. At this point, it may form a magma chamber and spread out laterally into a network of dikes and sills. Subsequently, the magma may cool and solidify to form intrusive igneous bodies (plutons). Geologists estimate that about 80% of the magma generated on Earth stalls in the crust and never reaches the surface.
As magma rises and cools, it undergoes many complex and dynamic compositional changes. Heavier minerals may crystallize and settle to the bottom of the magma chamber. The magma may also assimilate portions of host rock or mix with other batches of magma. These processes alter the composition of the remaining melt, so that any magma reaching the surface may be chemically quite different from its parent melt.
Magmas that have been so altered are said to be "evolved" to distinguish them from "primitive" magmas that more closely resemble the composition of their mantle source. (See igneous differentiation and fractional crystallization.) More highly evolved magmas are usually felsic, that is enriched in silica, volatiles, and other light elements compared to iron- and magnesium-rich (mafic) primitive magmas.
The degree and extent to which magmas evolve over time is an indication of a planet's level of internal heat and tectonic activity. The Earth's continental crust is made up of evolved granitic rocks that developed through many episodes of magmatic reprocessing. Evolved igneous rocks are much less common on cold, dead bodies such as the Moon. Mars, being intermediate in size between the Earth and the Moon, is thought to be intermediate in its level of magmatic activity.
At shallower depths in the crust, the lithostatic pressure on the magma body decreases. The reduced pressure can cause gases (volatiles), such as carbon dioxide and water vapor, to exsolve from the melt into a froth of gas bubbles. The nucleation of bubbles causes a rapid expansion and cooling of the surrounding melt, producing glassy shards that may erupt explosively as tephra (also called pyroclastics).
Fine-grained tephra is commonly referred to as volcanic ash. Whether a volcano erupts explosively or effusively as fluid lava depends on the composition of the melt. Felsic magmas of andesitic and rhyolitic composition tend to erupt explosively. They are very viscous (thick and sticky) and rich in dissolved gases. Mafic magmas, on the other hand, are low in volatiles and commonly erupt effusively as basaltic lava flows.
However, these are only generalizations. For example, magma that comes into sudden contact with groundwater or surface water may erupt violently in steam explosions called hydromagmatic (phreatomagmatic or phreatic) eruptions. Also, erupting magmas may behave differently on planets with different interior compositions, atmospheres, and gravity fields.
The most common form of volcanism on the Earth is basaltic. Basalts are extrusive igneous rocks derived from the partial melting of the upper mantle. They are rich in iron and magnesium (mafic) minerals and commonly dark gray in color. The principal type of volcanism on Mars is almost certainly basaltic too.
On Earth, basaltic magmas commonly erupt as highly fluid flows, which either emerge directly from vents or form by the coalescence of molten clots at the base of fire fountains (Hawaiian eruption). These styles are also common on Mars, but the lower gravity and atmospheric pressure on Mars allow nucleation of gas bubbles to occur more readily and at greater depths than on Earth.
As a consequence, Martian basaltic volcanoes are also capable of erupting large quantities of ash in Plinian-style eruptions. In a Plinian eruption, hot ash is incorporated into the atmosphere, forming a huge convective column (cloud). If insufficient atmosphere is incorporated, the column may collapse to form pyroclastic flows. Plinian eruptions are rare in basaltic volcanoes on Earth where such eruptions are most commonly associated with silica-rich andesitic or rhyolitic magmas (e.g., Mount St. Helens).
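To illustrate the point made above about gas bubbles nucleating at greater depth on Mars, here is a minimal sketch. The magma density and saturation pressure are round-number assumptions chosen only for illustration; what matters is the qualitative conclusion that weaker gravity and a thinner atmosphere let bubbles begin forming several times deeper than on Earth.

```python
# Illustrative only: assume bubbles start to form once the total pressure
# (surface atmospheric pressure plus rho * g * depth) falls below an assumed
# volatile saturation pressure P_SAT. All numbers are rough guesses.

RHO_MAGMA = 2700.0   # kg/m^3, assumed basaltic magma density
P_SAT = 10.0e6       # Pa, assumed saturation pressure of dissolved volatiles

def nucleation_depth_m(gravity_ms2: float, p_atm_pa: float) -> float:
    """Depth at which pressure drops to P_SAT, so gas bubbles can nucleate."""
    return (P_SAT - p_atm_pa) / (RHO_MAGMA * gravity_ms2)

earth = nucleation_depth_m(gravity_ms2=9.81, p_atm_pa=101_325.0)
mars = nucleation_depth_m(gravity_ms2=3.71, p_atm_pa=600.0)
print(f"Earth: ~{earth:.0f} m   Mars: ~{mars:.0f} m   ratio: ~{mars / earth:.1f}x")
```

With these assumed values the nucleation depth on Mars comes out roughly two to three times greater than on Earth, consistent with the qualitative statement above.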
Because the lower gravity of Mars generates weaker buoyancy forces on magma rising through the crust, the magma chambers that feed volcanoes on Mars are thought to be deeper and much larger than those on Earth. If a magma body on Mars is to reach close enough to the surface to erupt before solidifying, it must be big. Consequently, eruptions on Mars are less frequent than on Earth, but they are of enormous scale and eruptive rate when they do occur. Somewhat paradoxically, the lower gravity of Mars also allows for longer and more widespread lava flows. Lava eruptions on Mars may be unimaginably huge. A vast lava flow the size of the state of Oregon has recently been described in western Elysium Planitia. The flow is believed to have been emplaced turbulently over the span of several weeks and is thought to be one of the youngest lava flows on Mars.
The tectonic settings of volcanoes on Earth and Mars are very different. Most active volcanoes on Earth occur in long, linear chains along plate boundaries, either in zones where the lithosphere is spreading apart (divergent boundaries) or being subducted back into the mantle (convergent boundaries).
Because Mars currently lacks plate tectonics, volcanoes there do not show the same global pattern as on Earth. Martian volcanoes are more analogous to terrestrial mid-plate volcanoes, such as those in the Hawaiian Islands, which are thought to have formed over a stationary mantle plume.
The largest and most conspicuous volcanoes on Mars occur in the Tharsis and Elysium regions. These volcanoes are strikingly similar to shield volcanoes on Earth. Both have shallow-sloping flanks and summit calderas. The main difference between Martian shield volcanoes and those on Earth is size: Martian shield volcanoes are truly colossal. For example, the tallest volcano on Mars, Olympus Mons, is 550 km across and 21 km high.
It is nearly 100 times greater in volume than Mauna Loa in Hawaii, the largest shield volcano on Earth. Geologists think one of the reasons that volcanoes on Mars are able to grow so large is because Mars lacks plate tectonics. The Martian lithosphere does not slide over the upper mantle (asthenosphere) as on Earth, so lava from a stationary hot spot is able to accumulate at one location on the surface for a billion years or longer.
Olympus Mons is the youngest and tallest large volcano on Mars. It is located 1200 km northwest of the Tharsis Montes, just off the western edge of the Tharsis bulge. Its summit stands 21 km above datum (Mars "sea" level), and its central caldera complex consists of six nested calderas that together form a depression 72 x 91 km wide and 3.2 km deep.
As a shield volcano, it has an extremely low profile with shallow slopes averaging 4 to 5 degrees. The volcano was built up by many thousands of individual flows of highly fluid lava. An irregular escarpment, in places up to 8 km tall, lies at the base of the volcano, forming a kind of pedestal on which the volcano sits. At various locations around the volcano, immense lava flows can be seen extending into the adjacent plains, burying the escarpment. In medium-resolution images (100 m/pixel), the surface of the volcano has a fine radial texture due to the innumerable flows and leveed lava channels that line its flanks.
Mars volcano, Earth's dinosaurs went extinct about the same time PhysOrg - March 20, 2017
Around the same time that the dinosaurs became extinct on Earth, a volcano on Mars went dormant, NASA researchers have learned. Arsia Mons is the southernmost volcano in a group of three massive Martian volcanoes known collectively as Tharsis Montes. Until now, the volcano's history has remained a mystery. But thanks to a new computer model, scientists were finally able to figure out when Arsia Mons stopped spewing out lava. According to the model, volcanic activity at Arsia Mons came to a halt about 50 million years ago. Around that same time, Earth experienced the Cretaceous-Paleogene extinction event, which wiped out three-quarters of its animal and plant species, including the dinosaurs.
Neighboring volcanoes on Mars PhysOrg - April 4, 2011
ESA's Mars Express has returned images of mist-capped volcanoes located in the northern hemisphere of the red planet. Long after volcanic activity ceased, the area was transformed by meteor impacts that deposited ejected material over the lower flanks of the volcanoes.
Many of Mercury's basins contain smooth plains, like the lunar maria, that are believed likely to be filled with lava flows. Collapse structures possibly indicative of volcanism have been found in some craters. Eleven volcanic domes were identified in Mariner 10 images, including a large dome near the centre of Odin Planitia. Odin Planitia is a large basin on Mercury located in the Tolstoj quadrangle at 23.3° N, 171.6° W; it was named after the Norse god Odin in 1976 by the IAU. The volcanic dome near the center of Odin is about 7 km in diameter and 1.4 km high.
Did Volcano on Mercury Erupt for a Billion Years? Live Science - December 8, 2013
Extra-terrestrial volcanism is every bit as stellar as it sounds. The Earth puts on its fair share of spectacular eruptions - but it's Earth's distant cousins who win the awards. Lava-scarred Venus has more volcanoes than any other planet we know; Olympus Mons, a treble Everest soaring above Mars' northern hemisphere, is the largest volcano in the solar system; while Saturn's frozen moon, Enceladus, where cryovolcanoes shoot towering streams of water through a crust of solid ice, must surely rank as the strangest. But what about the one place where you'd expect the ground to melt? Sitting just 36 million miles from our star, sun-baked Mercury receives a colossal dose of solar radiation with almost no atmosphere to soften the blast. It's perhaps not surprising, then, that alongside its thick coating of meteor scars, the gray, scorched crust also shows signs of damage from within. Since Mariner 10 first revealed its surface in the 1970s, conspicuously smooth plains - reminiscent of the lunar maria - suggested that in places the impact craters had once been resurfaced by giant lava flows.
Saturn's moon Enceladus has geysers that spew water; they have been photographed erupting by NASA's Cassini-Huygens spacecraft.
Another moon of Saturn, Titan, may also be volcanic (which would explain the satellite's dense atmosphere), as may be Neptune's moon, Triton.
'Ice volcano' identified on Saturn's moon Titan BBC - December 15, 2010
Scientists think they now have the best evidence yet for an ice volcano on Titan, the largest moon of Saturn. The Cassini probe has spotted a 1,500m-high mountain with a deep pit in it, and what looks like a flow of material on the surrounding surface. The new feature, which has been dubbed "The Rose", was seen with the probe's radar and infrared instruments. Titan has long been speculated to have cryovolcanoes but its hazy atmosphere makes all observations very difficult.
Ice Volcano Found on Saturn Moon Titan Space.com - April 1, 2011
For the first time, scientists now have solid evidence for an ice volcano on Saturn's moon Titan, according to a new study. Instead of regular lava, the volcano may spew water ice, hydrocarbons or a variety of other materials into Titan's thick atmosphere, scientists said. Such an ice volcano's existence could help solve some mysteries about Titan's carbon cycle, researchers said, and it could increase the likelihood that life exists on the huge moon. This image shows the location of an area known as Sotra Facula on Saturn's moon Titan. The black and white swaths show data obtained by the radar instrument on NASA's Cassini spacecraft. The swaths were laid on top of a global composite image from Cassini's visual and infrared mapping spectrometer. Scientists believe the Sotra Facula region makes the best case for a cryovolcanic – or ice volcano – region on Titan.
The surface of Venus is dominated by volcanism, and the planet has more volcanoes than any other in the solar system. Its surface is 90% basalt, and about 80% of the planet consists of a mosaic of volcanic lava plains, indicating that volcanism played a major role in shaping its surface.
The planet may have had a major global resurfacing event about 500 million years ago, from what scientists can tell from the density of impact craters on the surface. Even though there are over 1,600 major volcanoes on Venus, none is known to be erupting at present and most are probably long extinct. However, radar sounding by the Magellan probe revealed evidence for comparatively recent volcanic activity at Venus's highest volcano Maat Mons, in the form of ash flows near the summit and on the northern flank.
Although many lines of evidence suggest that Venus is likely to be volcanically active, present-day eruptions at Maat Mons have not been confirmed.
In April 2010, Suzanne E. Smrekar et al. announced the discovery of three active volcanoes, which suggests that Venus is periodically resurfaced by lava flows.
Venus contains shield volcanoes, widespread lava flows and some unusual volcanoes called pancake domes and "tick-like" structures which are not present on Earth. Pancake dome volcanoes are up to 15 km (9.3 mi) in diameter and less than 1 km (0.62 mi) in height and are 100 times larger than those formed on Earth. They are usually associated with coronae and tesserae (large regions of highly deformed terrain, folded and fractured in two or three dimensions, believed to be unique to Venus). The pancakes are thought to be formed by highly viscous, silica-rich lava erupting under Venus's high atmospheric pressure.
The "tick-like" structures are called scalloped margin domes. They are commonly called ticks because they appear as domes with numerous legs. They are thought to have undergone mass wasting events such as landslides on their margins. Sometimes deposits of debris can be seen scattered around them.
On Earth, volcanoes are mainly of two types: shield volcanoes and composite or stratovolcanoes. Shield volcanoes, for example those in Hawaii, eject magma from the depths of the Earth in zones called hot spots. The lava from these volcanoes is relatively fluid and permits the escape of gases. Composite volcanoes, such as Mount Saint Helens and Mount Pinatubo, are associated with tectonic plates. In this type of volcano, the oceanic crust of one plate slides underneath the other in a subduction zone, dragging seawater down with it and producing a stickier lava that restricts the exit of the gases; for that reason, composite volcanoes tend to erupt more violently.
On Venus, where there are no tectonic plates or seawater, volcanoes are mostly of the shield type. Nevertheless, the morphology of volcanoes on Venus is different: on Earth, shield volcanoes can be a few tens of kilometres wide and up to 10 km (6.2 mi) high in the case of Mauna Kea, measured from the sea floor. On Venus, these volcanoes can cover hundreds of kilometres in area, but they are relatively flat, with an average height of 1.5 km (0.93 mi).
Other unique features of Venus's surface are novae (radial networks of dikes or grabens) and arachnoids. A nova is formed when large quantities of magma are extruded onto the surface to form radiating ridges and trenches which are highly reflective to radar. These dikes form a symmetrical network around the central point where the lava emerged, where there may also be a depression caused by the collapse of the magma chamber.
Arachnoids are so named because they resemble a spider's web, featuring several concentric ovals surrounded by a complex network of radial fractures similar to those of a nova. It is not known whether the 250 or so features identified as arachnoids actually share a common origin, or are the result of different geological processes.
ESA said on September 11, 2019, that its X-ray space telescope XMM-Newton has detected never-before-seen periodic flares of X-ray radiation from a giant black hole in a distant galaxy. These scientists said in a statement that the flares:
… could help explain some enigmatic behaviors of active black holes
Active black holes are those that are still actively swallowing material – stars, gas, dust – supplied by their home galaxies. We tend to see active supermassive black holes – like the one observed to flare in a galaxy known as GSN 069, located about 250 million light-years away – in distant galaxies, in contrast to the more quiescent supermassive black hole at the center of our own Milky Way (although even the Milky Way’s central black hole appeared to gobble something up earlier this year). Of the flares seen coming from the central black hole in GSN 069, astronomers said:
On December 24, 2018, the source was seen to suddenly increase its brightness by up to a factor of 100, then dim back to its normal levels within one hour and light up again nine hours later.
It was completely unexpected. Giant black holes regularly flicker like a candle but the rapid, repeating changes seen in GSN 069 from December onwards are something completely new.
After word got around that this black hole was flaring, other telescopes were turned in its direction. XMM-Newton performed follow-up observations in the following couple of months, as did NASA’s Chandra X-ray observatory. These telescopes confirmed:
… that the distant black hole was still keeping the tempo, emitting nearly periodic bursts of X-rays every nine hours.
The researchers are calling the new phenomenon “quasi-periodic eruptions”, or QPEs.
You’ve probably heard that black holes are black because no light can escape them. So what is doing the flaring? The flares are coming from the process of accretion. It happens just before the gas, dust or stellar debris falls past the point of no return, known as the event horizon. Prior to that ultimate plunge over the event horizon, the material forms a flattened ring of spinning matter, known as an accretion disk. Giovanni Miniutti, who led the new study, explained that the X-ray flares:
… come from material that is being accreted into the black hole and heats up in the process.
There are various mechanisms in the accretion disk that could give rise to this type of quasi-periodic signal, potentially linked to instabilities in the accretion flow close to the central black hole.
Alternatively, the eruptions could be due to the interaction of the disk material with a second body – another black hole or perhaps the remnant of a star previously disrupted by the black hole.
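As rough context for why accreting material shines so brightly, the standard textbook estimate (not something stated in the article itself) is that the radiated power is a fraction of the rest-mass energy of the infalling gas:

```latex
% L is the accretion luminosity, \dot{M} the mass accretion rate, c the speed
% of light, and \eta the radiative efficiency of the disk (of order 10 percent
% for a thin disk around a black hole).
L_{\mathrm{acc}} \;\approx\; \eta\,\dot{M}\,c^{2}, \qquad \eta \sim 0.1
```

Even a modest accretion rate therefore releases enormous amounts of energy, which is why changes in the accretion flow can show up as dramatic X-ray flares.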
Miniutti and colleagues said that – although no one has observed this phenomenon before – periodic flares like those just seen might be common in the universe. Their statement explained:
It is possible that the phenomenon had not been identified before because most black holes at the cores of distant galaxies, with masses millions to billions of times the mass of our sun, are much larger than the one in GSN 069, which is only about 400,000 times more massive than our sun.
The bigger and more massive the black hole, the slower the fluctuations in brightness it can display, so a typical supermassive black hole would erupt not every nine hours, but every few months or years. This would make detection unlikely as observations rarely span such long periods of time.
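For a sense of scale, here is a minimal sketch of that argument, under the assumption (ours, not the article's) that the recurrence time simply scales in proportion to the black hole's mass, as characteristic timescales near the event horizon do:

```python
# Rough, illustrative scaling only: if the eruption recurrence time scales
# roughly linearly with black hole mass, bigger black holes should repeat
# far less often than GSN 069's nine hours.

GSN069_MASS_MSUN = 4.0e5      # ~400,000 solar masses, from the article
GSN069_PERIOD_HOURS = 9.0     # observed recurrence time

def scaled_recurrence_hours(mass_msun: float) -> float:
    """Recurrence time under the assumed linear scaling with black hole mass."""
    return GSN069_PERIOD_HOURS * (mass_msun / GSN069_MASS_MSUN)

# Hypothetical, more typical supermassive black hole masses:
for mass in (4.0e6, 4.0e7, 4.0e8):
    hours = scaled_recurrence_hours(mass)
    print(f"{mass:.0e} Msun -> {hours:,.0f} h (~{hours / 24:,.0f} days)")
```

Under that assumption, a black hole a thousand times more massive than GSN 069's would erupt only about once a year, which would be very hard to catch in typical observing campaigns.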
They said that quasi-periodic eruptions like those found in GSN 069 could provide a natural framework to interpret some puzzling patterns observed in a significant fraction of active black holes, whose brightness seems to vary too fast to be easily explained by current theoretical models. Miniutti said:
We know of many massive black holes whose brightness rises or decays by very large factors within days or months, while we would expect them to vary at a much slower pace.
But if some of this variability corresponds to the rise or decay phases of eruptions similar to those discovered in GSN 069, then the fast variability of these systems, which appears currently unfeasible, could naturally be accounted for.
New data and further studies will tell if this analogy really holds.
The quasi-periodic eruptions spotted in GSN 069 could also explain other intriguing properties observed in the X-ray emission from nearly all bright, accreting supermassive black holes, which you can read about at ESA Space News.
For now, the team is continuing its study of the distant galaxy GSN 069 – for example, trying to pinpoint the galaxy’s defining properties at the time the periodic eruptions were first detected. Norbert Schartel, ESA’s XMM-Newton project scientist, said:
GSN 069 is an extremely fascinating source, with the potential to become a reference in the field of black hole accretion.
Knowing what to look for, they said, will also help them more efficiently use X-ray telescopes like XMM-Newton and Chandra to search for more quasi-periodic eruptions from other supermassive black holes in other distant galaxies. Margherita Giustini of Madrid’s Centro de Astrobiología – a study co-author – said the goal is:
… to further understand the physical origin of this new phenomenon.
Bottom line: In late 2018, the supermassive black hole in the galaxy GSN 069 – about 250 million light-years away – was seen to suddenly increase its brightness by up to a factor 100, then dim back to its normal levels within one hour, and then light up again nine hours later. Since then it has continued these quasi-periodic flares. Scientists want to know what’s causing them.
Deborah Byrd created the EarthSky radio series in 1991 and founded EarthSky.org in 1994. Today, she serves as Editor-in-Chief of this website. She has won a galaxy of awards from the broadcasting and science communities, including having an asteroid named 3505 Byrd in her honor. A science communicator and educator since 1976, Byrd believes in science as a force for good in the world and a vital tool for the 21st century. "Being an EarthSky editor is like hosting a big global party for cool nature-lovers," she says.
UNIT 1 – Basic Economic Problem: choice and the allocation of resources
1 – Define the nature of the economic problem (finite resources and unlimited wants)
The resources we have (land, labour, capital and enterprise) are in limited quantities
However, the needs and wants we have are unlimited in nature
This leads to a scarcity of resources
So, not all goods and services demanded can be produced, and the need for choice arises, since resources can be used in alternative ways
2 – Define the factors of production (land, labour, capital and enterprise)
Land – all the natural resources used in the production process, such as soil, farmland, coal, oil etc.
Labour – all human contribution, both mental and physical, to the production process, such as miner, mason, carpenter, clerk, accountant etc.
Capital – all the man-made resources that go into the production process, such as machinery, tools, vehicles etc.
Enterprise – the risk-taking ability of an entrepreneur who brings all the other factors of production together to produce goods and services
3 – Define opportunity cost and analyse particular circumstances to illustrate the concept
Opportunity cost is the next-best alternative foregone when a choice is made
If one chooses to do ABC with a resource (s)he cannot do DEF with it
Time/money is scarce in day-to-day life
Buying an expensive pen may be your decision, but in the process you might have sacrificed the opportunity to buy the MP3 player you always wanted; money is a scarce resource, so we have to choose how to spend it
Opportunity cost is always there whenever an economic decision/choice is made
4 – Demonstrate how production possibility curves can be used to illustrate choice and resource allocation
A, B and C are points on the PPC which show the different possible combinations in which Product A and product B can be produced simultaneously by the resources currently present in the economy
X is a point inside the curve and it shows a state in which there is inefficiency in the economy, since it is not operating at its maximum production potential (on the curve)
Y is a point outside the curve and it shows a point currently unachievable (with the current state of resources in the economy)
The PPC can be shifted outwards when economic growth takes place, due to technological advancements or an increase in the quality or quantity of resources in the economy
Once it has shifted outwards, the point Y becomes achievable (and thus the total output of the economy increases, the reason for economic growth)
If an economy decides to move from point A to point B, an opportunity cost is involved
While more of Product B is being produced, the economy is ‘sacrificing’ the opportunity to produce some units of Product A – that sacrifice is the opportunity cost
Thus, the PPC is also sometimes called an opportunity cost curve
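As a concrete illustration, the short sketch below uses a made-up production possibility schedule (the numbers are hypothetical, not from the syllabus) to show the opportunity cost of moving along a PPC:

```python
# Hypothetical production possibility schedule: for each chosen output of
# Product B, the maximum output of Product A the economy can still produce.
ppc = {0: 50, 10: 45, 20: 38, 30: 28, 40: 15, 50: 0}

def opportunity_cost_of_extra_b(b_from, b_to):
    """Units of Product A given up when output of Product B rises from b_from to b_to."""
    return ppc[b_from] - ppc[b_to]

# Moving from point A (20 of B, 38 of A) to point B (30 of B, 28 of A)
print(opportunity_cost_of_extra_b(20, 30), "units of Product A are sacrificed")
```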
5 – Evaluate the implications of particular courses of action in terms of opportunity cost
Look at it from these perspectives –
An individual might decide to purchase a car, but the opportunity cost of this decision could be the long holiday with his family that he had been wanting for quite a while
A jam producer might decide to move a few of its resources from producing apricot jam to the production of mixed fruit jam, in which case, while the output of mixed fruit jam will increase, the opportunity cost will be the loss in the production of apricot jam
A government might decide to move some of its revenue from subsidising university students to healthcare, in which case, the healthcare facilities will probably become better, but the opportunity cost to the government of this choice will be the loss in subsidy to university students
UNIT 2 – The allocation of resources: how the market works; market failure
1 – Describe the allocation of resources in market and mixed economic systems
There are three questions which every economy needs to answer –
What to produce?
How to produce?
For whom to produce?
In a market economic system, goods and services are freely exchanged
The three questions are answered by the buyers and sellers when they trade in different goods and services
All the resources are privately owned by firms or people
All these private firms aim for profit
What to produce – Since firms aim to make profits, they will move the scarce resources from the production of goods and services which consumers do not demand, into the production of those goods and services which they do demand
Therefore, only those goods and services which consumers demand will be produced.
How to produce – Goods and services will be produced in the cheapest possible method, either labour-intensive (in countries with large populations) or capital intensive (in countries where capital is easily available)
For whom to produce – Goods and services will be produced for those who can afford to pay for them
In a mixed economic system, there are two sectors – the private sector, consisting of privately owned firms (as in a pure market system), and the public sector which is run by the government
The private sector looks to make the greatest possible profits while the public sector utilises the scarce resources to benefit society in general
Resources are owned both by the government and by private individuals and organisations
Conceptual animation showing how a slant cut torus reveals a pair of circles, known as Villarceau circles
In geometry, Villarceau circles (pron.: /viːlɑrˈsoʊ/) are a pair of circles produced by cutting a torus diagonally through the center at the correct angle. Given an arbitrary point on a torus, four circles can be drawn through it. One is in the plane (containing the point) parallel to the equatorial plane of the torus. Another is perpendicular to it. The other two are Villarceau circles. They are named after the French astronomer and mathematician Yvon Villarceau (1813–1883). Mannheim (1903) showed that the Villarceau circles meet all of the parallel circular cross-sections of the torus at the same angle, a result that he said a Colonel Schoelcher had presented at a congress in 1891.
For example, let the torus be given implicitly as the set of points on circles of radius three around points on a circle of radius five in the xy plane, that is, the points satisfying (x² + y² + z² + 16)² = 100(x² + y²).
Slicing with the z = 0 plane produces two concentric circles, x² + y² = 2² and x² + y² = 8². Slicing with the x = 0 plane produces two side-by-side circles, (y − 5)² + z² = 3² and (y + 5)² + z² = 3².
Two example Villarceau circles can be produced by slicing with the plane 3x = 4z. One is centered at (0, +3, 0) and the other at (0, −3, 0); both have radius five. They can be written in parametric form as (x, y, z) = (4 cos t, 3 + 5 sin t, 3 cos t) and (x, y, z) = (4 cos t, −3 + 5 sin t, 3 cos t), for 0 ≤ t < 2π.
The slicing plane is chosen to be tangent to the torus while passing through its center. Here it is tangent at (16⁄5, 0, 12⁄5) and at (−16⁄5, 0, −12⁄5). The angle of slicing is uniquely determined by the dimensions of the chosen torus, and rotating any one such plane around the vertical gives all of them for that torus.
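The example can be checked numerically; the following sketch (an illustration added here, not part of the original article) samples points on one of the parametric circles above and confirms they lie on the torus and in the slicing plane:

```python
import math

R, r = 5.0, 3.0  # circle of tube centres has radius 5, tube radius 3

def on_torus(x, y, z, tol=1e-9):
    """True if (x, y, z) satisfies (sqrt(x^2 + y^2) - R)^2 + z^2 = r^2."""
    return abs((math.hypot(x, y) - R) ** 2 + z ** 2 - r ** 2) < tol

# Villarceau circle of radius 5 centred at (0, +3, 0), lying in the plane 3x = 4z
for k in range(360):
    t = math.radians(k)
    x, y, z = 4 * math.cos(t), 3 + 5 * math.sin(t), 3 * math.cos(t)
    assert on_torus(x, y, z) and abs(3 * x - 4 * z) < 1e-9

print("all sampled points lie on the torus and in the slicing plane")
```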
Existence and equations
A proof of the circles’ existence can be constructed from the fact that the slicing plane is tangent to the torus at two points. One characterization of a torus is that it is a surface of revolution. Without loss of generality, choose a coordinate system so that the axis of revolution is the z axis. Begin with a circle of radius r in the xz plane, centered at (R, 0, 0).
Sweeping replaces x by √(x² + y²), and clearing the square root produces a quartic equation.
The cross-section of the swept surface in the xz plane now includes a second circle.
This pair of circles has two common internal tangent lines, with slope at the origin found from the right triangle with hypotenuse R and opposite side r (which has its right angle at the point of tangency). Thus z/x equals ±r / √(R² − r²), and choosing the plus sign produces the equation of a plane bitangent to the torus.
By symmetry, rotations of this plane around the z axis give all the bitangent planes through the center. (There are also horizontal planes tangent to the top and bottom of the torus, each of which gives a “double circle”, but not Villarceau circles.)
We can calculate the intersection of the plane(s) with the torus analytically, and thus show that the result is a symmetric pair of circles, one of which is a circle of radius R centered at (0, r, 0) and the other a congruent circle centered at (0, −r, 0).
A treatment along these lines can be found in Coxeter (1969).
A more abstract — and more flexible — approach was described by Hirsch (2002), using algebraic geometry in a projective setting. In the homogeneous quartic equation for the torus, (x² + y² + z² + (R² − r²)w²)² = 4R²w²(x² + y²),
setting w to zero gives the intersection with the “plane at infinity”, and reduces the equation to (x² + y² + z²)² = 0.
This intersection is the conic x² + y² + z² = 0 at infinity, counted twice. Because that conic appears squared, the line at infinity of any bitangent plane meets it in two double points of the plane’s intersection with the torus. The two points of tangency are also double points. Thus the intersection curve, which theory says must be a quartic, contains four double points. But we also know that a quartic with more than three double points must factor (it cannot be irreducible), and by symmetry the factors must be two congruent conics. Hirsch extends this argument to any surface of revolution generated by a conic, and shows that intersection with a bitangent plane must produce two conics of the same type as the generator when the intersection curve is real.
Filling space
The torus plays a central role in the Hopf fibration of the 3-sphere, S3, over the ordinary sphere, S2, which has circles, S1, as fibers. When the 3-sphere is mapped to Euclidean 3-space by stereographic projection, the inverse image of a circle of latitude on S2 under the fiber map is a torus, and the fibers themselves are Villarceau circles. Banchoff (1990) has explored such a torus with computer graphics imagery. One of the unusual facts about the circles is that each links through all the others, not just in its own torus but in the collection filling all of space; Berger (1987) has a discussion and drawing.
References
- Banchoff, Thomas F. (1990). Beyond the Third Dimension. Scientific American Library. ISBN 978-0-7167-5025-3.
- Berger, Marcel (1987). "§18.9: Villarceau circles and parataxy". Geometry II. Springer. pp. 304–305. ISBN 978-3-540-17015-0.
- Coxeter, H. S. M. (1969). Introduction to Geometry (2/e ed.). Wiley. pp. 132–133. ISBN 978-0-471-50458-0.
- Hirsch, Anton (2002). "Extension of the ‘Villarceau-Section’ to Surfaces of Revolution with a Generating Conic". Journal for Geometry and Graphics (Lemgo, Germany: Heldermann Verlag) 6 (2): 121–132. ISSN 1433-8157.
- Mannheim, M. A. (1903). "Sur le théorème de Schoelcher". Nouvelles Annales de Mathématiques (Paris: Carilian-Gœury et Vor. Dalmont). 4th series, volume 3: 105–107.
- Stachel, Hellmuth (2002). "Remarks on A. Hirsch's Paper concerning Villarceau Sections". Journal for Geometry and Graphics (Lemgo, Germany: Heldermann Verlag) 6 (2): 133–139. ISSN 1433-8157.
- Yvon Villarceau, Antoine Joseph François (1848). "Théorème sur le tore". Nouvelles Annales de Mathématiques. Série 1 (Paris: Gauthier-Villars) 7: 345–347. OCLC: 2449182.
In this lecture we will learn about electric potential, and electric potential difference, which is also known as voltage.
You can watch the following video or read the written tutorial below the video.
In the previous lecture, we’ve talked about electric potential energy which is dependent upon the charge of the object experiencing the electric field. Now, we’re going to learn about electric potential, which is only dependent upon the position of the object.
Electric potential (or just potential), is simply a measure of the electric potential energy per unit of charge.
Electric Potential Formula
The basic equation for calculating the electric potential is V = U / q: the electric potential V is equal to the electric potential energy U divided by the charge q that would be placed at a point some distance away from the main charge.
The electric potential energy U is equal to Coulomb’s constant k, multiplied by the charge that creates the potential (big Q), times the charge that would be placed at a point some distance away from the main charge (small q), and divided by that distance r: U = kQq / r.
In order to calculate the electric potential, we just need to divide the potential energy by small q.
We can notice that small q appears twice in the expression, so we can cancel it out.
Now we have this simple equation: V = kQ / r.
The equation shows that the potential is directly proportional to the amount of charge Q – as the charge increases, the potential increases, and opposite, as the charge decreases, the potential decreases.
On the other side, it is inversely proportional to the distance r, because as you move away from the charge, the potential is going to decrease, and as you move closer to the charge, the potential is going to increase.
Finally, we would get an amount of electric potential energy that each unit of charge would have at that point.
Related: Coulomb’s Law
Unit of Electric Potential
Now let’s go back to the basic equation.
We know that electric potential energy is measured in Joules, and the unit of charge is Coulomb. So, the unit of measurement for electric potential is Joules per Coulomb, or in one word Volts.
Example for Electric Potential
There’s a point charge equal to +2 µC, and we want to find the electric potential 15 cm (0.15 m) away from that charge.
Now, we can use the equation in order to calculate the electric potential.
We got a positive electric potential of +1.2×10⁵ V.
In case we had a negative charge, let’s say −2 µC, the electric potential at that same point would be −1.2×10⁵ V. We would get the same value, but with the minus sign.
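The same calculation can be done in a couple of lines of code; this is just a sketch of the V = kQ/r formula above, using k ≈ 8.99×10⁹ N·m²/C²:

```python
K = 8.99e9  # Coulomb's constant in N·m²/C²

def electric_potential(q_coulombs, r_meters):
    """Electric potential V = kQ/r of a point charge Q at distance r."""
    return K * q_coulombs / r_meters

print(electric_potential(+2e-6, 0.15))   # ~ +1.2e5 V for the +2 uC charge
print(electric_potential(-2e-6, 0.15))   # ~ -1.2e5 V for the -2 uC charge
```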
Graph of Electric Potential
Let’s take a look at this graph of electric potential. The X axis shows the distance away from the charge, while the Y axis shows the electric potential at a certain point.
Here we have a positive charge, and the potential around a positive charge is always positive. As you move away from the charge, as the distance from the charge increases, the potential becomes less positive, and decreases getting closer and closer to zero.
On the other side, we have a negative charge, and the potential around a negative charge is always negative. As you move away from the charge, as the distance from the charge increases, the potential becomes less negative, and actually increases, also getting closer and closer to zero.
If you’re infinitely far away from the charge, the potential is going to be zero for both positive and negative charges.
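If you want to reproduce a graph like this yourself, here is a minimal plotting sketch (it assumes numpy and matplotlib are available; the ±1 µC charges are arbitrary example values):

```python
import numpy as np
import matplotlib.pyplot as plt

K = 8.99e9                        # Coulomb's constant, N·m²/C²
r = np.linspace(0.05, 1.0, 200)   # distances in metres (avoid r = 0)

plt.plot(r, K * (+1e-6) / r, label="+1 uC charge: V positive, falls toward zero")
plt.plot(r, K * (-1e-6) / r, label="-1 uC charge: V negative, rises toward zero")
plt.xlabel("distance from the charge (m)")
plt.ylabel("electric potential (V)")
plt.legend()
plt.show()
```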
Electric Potential Difference (Voltage)
Now we can move on to the electric potential difference, or voltage.
By definition, the electric potential difference, or voltage, is the difference in electric potential between the final and the initial position when work is done upon a charge to change its potential energy.
Example for Electric Potential Difference (Voltage)
Now, let’s look at an example which will help us easily understand the term voltage.
We have a positive charge of +1.6×10⁻¹⁹ C. It is the main charge that is creating the potential.
The first circle is the first energy level, at a distance of 2.5×10⁻¹¹ m away from the charge. The second circle is the second energy level, at a distance of 4.2×10⁻¹¹ m away from the charge.
In order to find the electric potential difference, or voltage, we need to find the potential at the point A and the potential at the point B.
The potential at the point A, which is the first energy level, is going to be 57.6 V.
The potential at the point B, which is at a greater distance, is going to be 34.2 V.
First, we’re going to calculate the voltage as we move from A to B, and then from B to A.
In the first case, the potential at A is our initial potential, and the potential at B is our final potential. So, the potential difference is going to be final minus initial potential, or 34.2 − 57.6 = −23.4 V. We got a negative potential difference, which means that as we go from A to B the potential decreases.
In the second case, the potential at B is our initial potential, and the potential at A is our final potential. So, the potential difference is going to be 57.6 − 34.2 = +23.4 V. We have a positive potential difference, meaning that as we go from B to A the potential increases.
What does that mean?
As we go from A to B, the electric potential decreases because the main charge is positive and its electric field lines point outwards. A positive test charge placed at A, closer to the main charge, has a larger electric potential energy: the field lines are denser there and the repulsive force on it is stronger. At B the field lines are more spread out, the field is weaker, and the electric potential energy is smaller.
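The whole worked example can be reproduced with the same V = kQ/r sketch as before (the tutorial rounds k to 9×10⁹, so the printed differences land within about 0.1 V of the ±23.4 V above):

```python
K = 9e9                       # Coulomb's constant, rounded as in the worked example
Q = 1.6e-19                   # main charge in coulombs
r_A, r_B = 2.5e-11, 4.2e-11   # distances of points A and B in metres

V_A = K * Q / r_A             # ~ 57.6 V
V_B = K * Q / r_B             # ~ 34.3 V

print("A -> B:", round(V_B - V_A, 1), "V  (negative: potential decreases)")
print("B -> A:", round(V_A - V_B, 1), "V  (positive: potential increases)")
```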
That’s all for electric potential and electric potential difference. I hope it was helpful and you learned something new.
Geology of the Death Valley area
The exposed geology of the Death Valley area presents a diverse and complex set of at least 23 formations of sedimentary units, two major gaps in the geologic record called unconformities, and at least one distinct set of related formations geologists call a group. The oldest rocks in the area that now includes Death Valley National Park are extensively metamorphosed by intense heat and pressure and are at least 1700 million years old. These rocks were intruded by a mass of granite 1400 Ma (million years ago) and later uplifted and exposed to nearly 500 million years of erosion.
Marine deposition occurred 1200 to 800 Ma, creating thick sequences of conglomerate, mudstone, and carbonate rock topped by stromatolites, and possibly glacial deposits from the hypothesized Snowball Earth event. Rifting thinned huge roughly linear parts of the supercontinent Rodinia enough to allow sea water to invade and divide its landmass into component continents separated by narrow straits. A passive margin developed on the edges of these new seas in the Death Valley region. Carbonate banks formed on this part of the two margins, only to subside as the continental crust thinned until it broke, giving birth to a new ocean basin. An accretion wedge of clastic sediment then started to accumulate at the base of the submerged precipice, entombing the region's first known fossils of complex life. These sandy mudflats gave way about 550 Ma to a carbonate platform which lasted for the next 300 million years of Paleozoic time.
The passive margin switched to active margin in the early-to-mid Mesozoic when the Farallon Plate under the Pacific Ocean started to dive below the North American Plate, creating a subduction zone; volcanoes and uplifting mountains were created as a result. Erosion over many millions of years created a relatively featureless plain. Stretching of the crust under western North America started around 16 Ma and is thought to be caused by upwelling from the subducted spreading-zone of the Farallon Plate. This process continues into the present and is thought to be responsible for creating the Basin and Range province. By 2 to 3 million years ago this province had spread to the Death Valley area, ripping it apart and creating Death Valley, Panamint Valley and surrounding ranges. These valleys partially filled with sediment and, during colder periods of the current ice age, with lakes. Lake Manly was the largest of these lakes; it filled Death Valley during each glacial period from 240,000 years ago to 10,000 years ago. By 10,500 years ago these lakes were increasingly cut off from glacial melt from the Sierra Nevada, starving them of water and concentrating salts and minerals. The desert environment seen today developed after these lakes dried up.
Little is known about the history of the oldest exposed rocks in the area due to extensive metamorphism; the rock has been pressure-cooked. This somber, gray, almost featureless crystalline complex is composed of originally sedimentary and igneous rocks with large quantities of quartz and feldspar mixed in. The original rocks were transformed to contorted schist and gneiss, making their original parentage almost unrecognizable. Radiometric dating gives an age of 1700 million years for the metamorphism, placing it in the early part of the Proterozoic eon.
A mass of granite now in the Panamint Mountains intruded this complex 1400 mya. Pegmatite dikes and other widely spaced plutons of granite are also in the complex (a pluton is a large body of magma emplaced deep underground; dikes are thin, sheet-like offshoots of such magma). Outcrops can be seen along the front of the Black Mountains in Death Valley and in the Talc and Ibex Hills. When the granite was being intruded, the west coast of North America ran through eastern California and through an embayment that spread toward the Las Vegas Valley. This embayment, called the Amargosa aulacogen, had highlands north and south of it and was the result of a failed rift. Many thousands of feet of sediment filled the slowly subsiding basin.
Next, the metamorphosed Precambrian basement rocks were uplifted and a nearly 500-million-year-long gap in the geologic record, a major unconformity, affected the region. Geologists do not know what happened to the eroded sediment that must have overlain the complex, but they do know that regional uplift was responsible; the area was originally below the surface of a shallow sea.
The Pahrump Group of formations was deposited from 1200 to 800 mya in the Amargosa aulacogen. This was after uplift-associated erosion removed whatever rocks covered the Proterozoic complex. The Pahrump Group is composed of, from oldest to youngest:
- Crystal Spring Formation,
- Beck Spring Dolomite, and the
- Kingston Peak Formation.
Outcrops of this group can be seen in a highly metamorphosed belt that extends from the Panamint Mountains to the eastern part of the Kingston Range, including an area near the Ashford Mill site.
Uplift eventually exposed the crystalline complex to erosion. Arkose conglomerate and mudstone of the lower Crystal Spring Formation were created from muddy debris derived from stream erosion of these uplands. A warm shallow sea spread over the area as the Amargosa aulacogen slowly subsided; thick sequences of lime-rich ooze with abundant colonies of algae called stromatolites were then laid down. Dolomite and limestone resulted, forming the middle part of the Crystal Spring Formation. The upper part was formed after silt and sand destroyed the algal mat, forming siltstone and sandstone. Laterally extensive diabase sills of molten rock later intruded above and below the carbonate rock layers; commercial grade talc was formed from thermal decay of carbonate rock at its contact with the lowest sill, which covers hundreds of square miles (many hundreds of km²). Today the formation is 3,000 feet (910 m) thick.
The Death Valley region once again rose above sea level, resulting in erosion. The Amargosa aulacogen then slowly sank beneath the seas; a sequence of carbonate banks that were topped by algal mats of stromatolites were laid on top of its eroded surface. Eventually these sediments and fossils became the Beck Spring Formation, which is 1,000 feet (300 m) thick.
Another round of uplift exposed the Beck Spring rocks and the underlying Crystal Spring to erosion; subsequent faster subsidence of the Amargosa aulacogen broke these formations into islands in later Proterozoic time. The resulting large sequence of thick conglomerate beds of pebbles and boulders in a sandy and muddy matrix that blanketed basins between higher areas is known as the Kingston Peak Formation. This formation is prominent near Wildrose, Harrisburg Flats, and Butte Valley and is 7,000 feet (2,100 m) thick.
Part of the Kingston Peak resembles glacial till by being poorly sorted and other parts have large boulder-sized dropstones resting in a fine-grained matrix of sandstone and siltstone. Similar deposits are found over North America during the same period, some 700 to 800 mya. Geologists therefore hypothesize that the world at that time was affected by a very severe glaciation, perhaps the most severe in geologic history (see Snowball Earth). The youngest rocks in the Pahrump Group are from basaltic lava flows.
Crustal thinning and rifting
A new rift opened that started to break apart the supercontinent Rodinia, which North America was then a part of. A shoreline similar to the present Atlantic Ocean margin of the United States, with coastal lowlands and a wide, shallow shelf but no volcanoes, lay to the east near where Las Vegas now resides.
The first formation to be deposited in this setting was the Noonday Dolomite, which was formed from an algal mat-covered carbonate bank. Today it is up to 1,000 feet (300 m) thick and is a pale yellowish-gray cliff-former. The area subsided as the continental crust thinned and the new ocean widened; the carbonate bank soon became covered by thin beds of silt and layers of lime-rich ooze. These sediments in time hardened to become the siltstone and limestone of the Ibex Formation. A good outcrop of both the Noonday and overlying Ibex formations can be seen just east of the Ashford Mill Site.
An angular unconformity truncates progressively older (lower) parts of the underlying Pahrump Group starting in the southern part of the area and moving north. At its northernmost extent, the unconformity in fact removed all of the Pahrump, and the Noonday rests directly on the Proterozoic Complex. An ancient period of erosion removed that part of the Pahrump due to its being higher (and thus more exposed) than the rest of the formation.
Passive margin formed
As the incipient ocean widened in the Late Proterozoic and Early Paleozoic, it broke the continental crust in two and a true ocean basin developed to the west. All the earlier formations were thus dissected along a steep front on the two halves of the previous continent. A wedge of clastic sediment then started to accumulate at the base of the two underwater precipices, starting the formation of opposing continental shelves.
Four formations developed from sediment that accumulated on the wedge. They are, from oldest to youngest:
- Johnnie Formation (varicolored shaly),
- Stirling Quartzite,
- Wood Canyon Formation, and the
- Zabriskie Quartzite.
Together the Stirling, Wood Canyon, and Zabriskie units are about 6,000 feet (1,800 m) thick and are made of well-cemented sandstones and conglomerates. They also contain the region's first known fossils of complex life: Ediacara fauna, trilobites, archaeocyathas, and primitive echinoderm burrows have been found in the Wood Canyon Formation. The very earliest animals are exceedingly rare, occurring well west of Death Valley in lime-rich offshore muds contemporary to the Stirling Quartzite. Good outcrops of these formations are exposed on the north face of Tucki Mountain in the northern Panamint Mountains.
The side road to Aguereberry Point successively traverses the shaly Johnnie Formation, the white Stirling Quartzite, and dark quartzites of the Wood Canyon Formation; at the Point itself is the great light-colored band of Zabriskie Quartzite dipping away toward Death Valley. Prominent outcrops are located between Death Valley Buttes and Daylight Pass, in upper Echo Canyon, and just west of Mare Spring in Titus Canyon. Before tilting to their present orientation, these four formations were a continuous pile of mud and sand 3 miles (4.8 km) deep that accumulated slowly on the nearshore ocean bottom.
A carbonate shelf forms
A carbonate shelf started to develop over the sandy mudflats early in Paleozoic time. Sediment accumulated on the new but slowly subsiding continental shelf all through the Paleozoic and into the Early Mesozoic. Erosion had so subdued nearby parts of the continent that rivers ran clear, no longer supplying abundant sand and silt to the continental shelf. At the time, the Death Valley area was within ten or twenty degrees of the Paleozoic equator. So the combination of a warm sunlit climate and clear mud-free waters promoted prolific production of biotic (from life) carbonates. Thick beds of carbonate-rich sediments were periodically interrupted by periods of emergence, creating (in order of deposition) the:
- Carrara Formation,
- Bonanza King Formation,
- Nopah Formation, and the
- Pogonip Group.
These sediments were lithified into limestone and dolomite after they were buried and compacted by yet more sediment. Thickest of these units is the dolomitic Bonanza King Formation, which forms the dark and light banded lower slopes of Pyramid Peak and the gorges of Titus and Grotto Canyons.
An intervening period occurred in the Mid Ordovician (about 450 Ma) when a sheet of quartz-rich sand blanketed a large part of the continent after the above-mentioned units were laid down. The sand later hardened into sandstone and later still metamorphosed into the 400-foot (100 m) thick Eureka Quartzite. This great white band of Ordovician rock stands out on the summit of Pyramid Peak, near the Racetrack, and high on the east shoulder of Tucki Mountain. No American source is known for the Eureka sand, which once blanketed a 150,000 square miles (390,000 km2) belt from California to Alberta. It may have been swept southward by longshore currents from an eroding sandstone terrain in Canada.
Deposition of carbonate sediments resumed and continued into the Triassic. Four formations were deposited during this time (from oldest to youngest):
- Ely Springs Dolomite,
- Hidden Valley Dolomite,
- Lost Burro Formation, and the
- Tin Mountain Limestone.
Although details of geography varied during this immense interval of time, a north-northeasterly trending coastline generally ran from Arizona up through Utah. A marine carbonate platform only tens of feet deep but more than 100 miles (160 km) wide stretched westward to a fringing rim of offshore reefs. Lime-rich mud and sand eroded by storm waves from the reefs and the platform collected on the quieter ocean floor at depths of 100 feet (30 m) or so. The Death Valley area's carbonates appear to represent all three environments (down-slope basin, reef, and back-reef platform) owing to movement through time of the reef-line itself.
All told these eight formations and one group are 20,000 feet (6,100 m) thick and are buried below much of the Cottonwood, Funeral, Grapevine, and Panamint ranges. Good outcrops can be seen in the southern Funeral Mountains outside the park and in Butte Valley within park borders. The Eureka Quartzite appears as a relatively thin, nearly white band with the grayish Pogonip Group below and the almost black Ely Springs Dolomite above. All strata are often vertically displaced by normal faulting.
Change to active margin and uplift
The western edge of the North American continent was later pushed against the oceanic plate under the adjacent ocean. An area of great compression called a subduction zone was formed in the early-to-mid Mesozoic, which replaced the quiet, sea-covered continental margin with erupting volcanoes and uplifting mountains. A chain of volcanoes pushed through the continental crust parallel to the deep trench, fed by magma rising from the subducting oceanic plate as it entered the Earth's hot interior. Thousands of feet (hundreds of meters) of lavas erupted, pushing the ocean over 200 miles (320 km) to the west.
Compressive forces built up along the entire length of the broad continental shelf. The Sierran Arc, also called the Cordilleran Mesozoic magmatic arc, started to form from heat and pressure generated from the subduction. Compressive forces caused thrust faults to develop and granitic blobs of magma called plutons to rise in the Death Valley region and beyond, most notably creating the Sierra Nevada Batholith to the west. Thrust faulting was so severe that the continental shelf was shortened and some parts of older formations were moved on top of younger rock units.
The plutons in the park are Jurassic and Cretaceous aged and are located toward the park's western margin where they can be seen from unimproved roads. One of these relatively small granitic plutons was emplaced 67–87 Ma and spawned one of the more profitable precious metal deposits in the Death Valley area, giving rise to the town and mines of Skidoo. In the Death Valley area these solidified blobs of magma are located under much of the Owlshead Mountains and are found in the western end of the Panamint Mountains. Thrusted areas can be seen at Schwaub Peak in the southern part of the Funeral Mountains.
A long period of uplift and erosion was concurrent with and followed the above events, creating a major unconformity. Sediments worn off the Death Valley region were shed both east and west and carried by wind and water; the eastern sediments ended up in Colorado and are now famous for their dinosaur fossils. No Jurassic to Eocene sedimentary formations exist in the area except for some possibly Jurassic-age volcanic rock around Butte Valley. Large parts of previously deposited formations were removed, probably by streams that washed the sediment into the Cretaceous Seaway that longitudinally divided North America to the east.
Development of a plain
After 150 million years of volcanism, plutonism, metamorphism, and thrust faulting had run their course, the early part of the Cenozoic era (early Tertiary, 65–30 Ma) was a time of repose; neither igneous nor sedimentary rocks of this age are known here. A relatively featureless plain was created from erosion over many millions of years. Deposition resumed some 35 Ma in the Oligocene epoch on a flood plain that developed in the area; sluggish streams migrated laterally over the surface, laying down cobbles, sand, and mud. Outcrops of the resulting conglomerates, sandstone, and mudstone of the Titus Canyon Formation can be observed in road cuts at Daylight Pass on Daylight Pass Road, which becomes State Route 374 a short distance from the pass. Several other similar formations were also laid down.
Large volcanic eruptions, originating near the Nevada Test Site, covered the Death Valley area and much of Nevada in thick sequences of silica-rich ash 27 million years ago. The ash has a rhyolitic composition, which is the volcanic equivalent of the plutonic rock granite; it covered what would later become the Grapevine Mountains in 1,200 feet (370 m) of ash. This ash filled in valleys and depressions; by 20 million years ago, the region from the Death Valley area across Nevada was a volcanic plain.
Extension creates the Basin and Range
Starting around 16 Ma in Miocene time and continuing into the present, a large part of the North American Plate in the region has been under extension by literally being pulled apart. Debate still surrounds the cause of this crustal stretching, but an increasingly popular idea among geologists called the slab gap hypothesis states that the spreading zone of the subducted Farallon Plate is pushing the continent apart. Whatever the cause, the result has been the creation of a large and still-growing region of relatively thin crust; the region grew an average of 1 inch (2.5 cm) per year initially and then slowed to 0.3 inches (0.76 cm) per year in the last 5 million years. Geologists call this region the Basin and Range Province.
Extensional forces cause rock at depth to stretch like Silly Putty and rock closer to the surface to break along normal faults into down-dropped basins called grabens; small mountain ranges known as horsts run parallel to each other on either side of the grabens. Normally the number of horsts and grabens is limited, but in the Basin and Range region there are dozens of horst/graben structures, each roughly north–south trending. A succession of these extends from immediately east of the Sierra Nevada, through almost all of Nevada, and into western Utah and southern Idaho. The crust in the Death Valley region between Lake Mead and the southern Sierra Nevada has been extended by as much as 150 miles (240 km).
The Furnace Creek Fault system, located in what is now the northern part of Death Valley, started to move about 14 Ma, and the Southern Death Valley Fault system likely began to move by 12 million years ago. Both fault systems move with a right-lateral offset along strike-slip faults; blocks on either side of such a fault slide past each other so that an observer standing on one side, facing the other, sees it move to the right. Both fault systems run parallel to and at the base of the ranges. Very often the same faults move laterally and vertically at the same time, making them both strike-slip and normal (i.e. oblique-slip). These two systems are also offset from each other; the area between the offset is thus put under enormous oblique tension, which intensifies subsidence there; Furnace Creek Basin opened in this area and the rest of Death Valley followed in stages. One of the last stages was the formation of Badwater Basin, which occurred by about 4 Ma. Data from gravimeters show that Death Valley's bedrock floor tilts down toward the east and is deepest under Badwater Basin; there is 9,000 feet (2,700 m) of fill under Badwater. By about 2 Ma Death Valley, Panamint Valley and their associated ranges were formed.
Much of the extra local stretching in Death Valley that is responsible for its lower depth and wider valley floor is caused by left lateral strike-slip movement along the Garlock Fault south of the park (the Garlock Fault separates the Sierra Nevada range from the Mojave Desert). This particular fault is pulling the Panamint Range westward, causing the Death Valley graben to slip downward along the Furnace Creek Fault system at the foot of the Black Mountains. The rocks that would become the Panamint Range may have been stacked on top of the rocks that would become the Black Mountains and the Cottonwood Mountains. Under this interpretation, as the Black Mountains began to rise, the Panamint/Cottonwood Mountains slid westward off of them along low-angle normal faults, and starting around 6 Ma, the Cottonwood Mountains slid northwest off the top of the Panamint Range. There is also some evidence that the Grapevine Mountains may have slid off the Funeral Mountains. Another interpretation of the evidence is that the Black and Panamint Mountains were once side-by-side and were pulled apart along normal faults. These normal faults, in this view, are steep near the surface but become low angle at depth; the mountain blocks rotated as they slid to create the tilted mountains seen today.
Total movement of the Panamint block between the Garlock and Furnace Creek Faults is 50 miles (80 km) to the northwest, creating Death Valley in the process. A few of the 20 to 25 degree-sloped surfaces along which this mass of 20,000 to 30,000 feet (6,100 to 9,100 m) of rock slipped are exposed in Death Valley. These features are called "turtlebacks" due to their turtle shell-like appearance.
Volcanism and valley-fill sedimentation
Igneous activity associated with the extension occurred from 12 to 4 Ma. Both intrusive (plutonic/solidified underground) and extrusive (volcanic/solidified above ground) igneous rocks were created. Basaltic magma followed fault lines to the surface and erupted as cinder cones and lava flows. Some volcanic rocks were re-worked by hydrothermal systems to form colorful rocks and concentrated mineral formations, such as boron-rich minerals like borax; a Pliocene-aged example is the 4,000-foot (1,200 m)-thick Artist Drive Formation. Gold and silver ores were also concentrated by mineralizing fluids from igneous intrusions. Other times, heat from magma migrating close to the surface would superheat overlying groundwater until it exploded, not unlike an exploding pressure-cooker, creating blowout craters and tuff rings. One example of such a feature is the roughly 2000-year-old and 800-foot (240 m)-deep Ubehebe Crater in the northern part of the park; nearby smaller craters may be less than 200 to 300 years old.
Sediment filled the subsiding Furnace Creek Basin as the area was pulled apart by Basin and Range extension. The resulting 7,000-foot (2,100 m)-thick Furnace Creek Formation is made of lakebed sediments that consist of saline muds, gravels from nearby mountains and ash from the then-active Black Mountain volcanic field. Boron, which is abundant in this formation, is dissolved by ground water and flows out onto the northern end of the Death Valley playa. Today this formation is most-prominently exposed in the badlands at Zabriskie Point. Additional subsidence of the Furnace Creek Basin was filled by the four-million-year-old Funeral Formation, which consists of 2,000 feet (610 m) of conglomerates, sand, mud and volcanic material. Another smaller basin to the south was filled by the Copper Canyon Formation around the same time. Footprints and fossils of camels, horses, and mammoths are in all three of these Pliocene formations.
About 2–3 Ma, in the Pleistocene, continental ice sheets expanded from the polar regions of the globe to cover lower latitudes far north of the region, starting a series of cold glacial periods that were interrupted by warmer interglacial periods. Snowmelt from alpine glaciers on the nearby Sierra Nevada during glacial periods fed rivers that flowed into the valleys of the region year round. Since the topography of the Basin and Range region was largely formed by faulting, not by river erosion, many of the basins have no outlets, meaning they will fill up with water like a bathtub until they overflow into the next basin. So during the cooler and wetter pluvial climates of the glacial periods, much of eastern California, all of Nevada, and western Utah were covered by large lakes separated by linear islands (the present day ranges).
Lake Manly was the lake that filled Death Valley during each glacial period from at least 240,000 years ago to as late as 10,500 years ago; the lake typically dried up during each interglacial period, such as the current one. Lake Manly was the last in a chain of lakes that were fed by the Amargosa and Mojave Rivers, and possibly also the Owens River; it was also the lowest point in the Great Basin drainage system. At its height during the Last glacial period some 22,000 years ago, water filled Lake Manly to form a body of water that may have been 585 feet (178 m) deep and 90 miles (140 km) long. Much smaller lakes filled parts of Death Valley during interglacials; the largest of these was 30 feet (9.1 m) deep and lasted from 5000 to 2000 years ago. Panamint Lake filled Panamint Valley to a maximum depth of 900 feet (270 m); when it was full, Panamint Lake overflowed into Lake Manly somewhere around the southern end of the Panamint Mountains.
Lake Manly and its sister lakes started to dry up about 10,000 years ago as the alpine glaciers that fed the rivers that filled the lakes disappeared and the region became increasingly arid. Fish that had migrated into the lake system from the Colorado River started to die off; the only survivors are the minnow-sized Death Valley pupfish and related species that adapted to living in springs. Ancient shorelines from Lake Manly, called strandlines, can easily be seen on a former island in the lake called Shoreline Butte.
Stream gradients increased on flanking mountain ranges as they were uplifted. These swifter moving streams are dry most of the year but have nevertheless cut true river valleys, canyons, and gorges that face Death and Panamint valleys. In this arid environment, alluvial fans form at the mouth of these streams. Very large alluvial fans merged to form continuous alluvial slopes called bajadas along the Panamint Range. The faster uplift along the Black Mountains formed much smaller alluvial fans because older fans are buried under playa sediments before they can grow too large. Slot canyons are often found at the mouths of the streams that feed the fans, and the slot canyons in turn are topped by V-shaped gorges. This forms what looks like a wineglass shape to some people, thus giving them their names, "wineglass canyons".
Table of formations
|System||Series||Formation||Lithology and thickness||Characteristic fossils|
|Quaternary||Holocene||Fan gravel; silt and salt on floor of playa, less than 100 feet (30 m) thick.||None|
|Pleistocene||Fan gravel; silt and salt buried under floor of playa; perhaps 2,000 feet (600 m) thick.|
|Funeral fanglomerate||Cemented fan gravel with interbedded basaltic lavas, gravels cut by veins of calcite (Mexican onyx); perhaps 1,000 feet (300 m) thick.||Diatoms, pollen.|
|Tertiary||Pliocene||Furnace Creek Formation||Cemented gravel, silty and saliferous playa deposits; various salts, especially borates, more than 5,000 feet (1,500 m) thick.||Scarce.|
|Miocene||Artist Drive Formation||Cemented gravel; playa deposits, much volcanic debris, perhaps 5,000 feet (1,500 m) thick.||Scarce.|
|Oligocene||Titus Canyon Formation||Cemented gravel; mostly stream deposits; 3,000 feet (900 m) thick.||Vertebrates, titanotheres, etc.|
|Eocene and Paleocene||Granitic intrusions and volcanics, not known to be represented by sedimentary deposits.|
|Cretaceous and Jurassic||Not represented, area was being eroded.|
|Triassic||Butte Valley Formation of Johnson (1957)||Exposed in Butte Valley 1 mile (1.6 km) south of this area; 8,000 feet (2,400 m) of metasediments and volcanics.||Ammonites, smooth-shelled brachiopods, belemnites, and hexacorals.|
|Pennsylvanian and Permian||Formations at east foot of Tucki Mountain||Conglomerate, limestone, and some shale. Conglomerate contains cobbles of limestone of Mississippian, Pennsylvanian, and Permian age. Limestone and shale contain spherical chert nodules. Abundant fusulinids. Thickness uncertain on account of faulting; estimate 3,000 feet (900 m), top eroded.||Beds with fusulinids, especially Fusulinella|
|Carboniferous||Mississippian and Pennsylvanian||Rest Spring Shale||Mostly shale, some limestone, abundant spherical chert nodules. Thickness uncertain because of faulting; estimate 750 feet (230 m).||None.|
|Mississippian||Tin Mountain Limestone and younger limestone||Mapped as 1 unit. Tin Mountain Limestone 1,000 feet (300 m) thick, is black with thin-bedded lower member and thick-bedded upper member. Unnamed limestone formation, 725 feet (221 m) thick, consists of interbedded chert and limestone in thin beds and in about equal proportions.||Mixed brachiopods, corals, and crinoid stems. Syringopora (open-spaced colonies) Caninia cf. C. cornicula.|
|Devonian||Middle and Upper Devonian||Lost Burro Formation||Limestone in light and dark beds 1 to 10 feet (0.30 to 3.05 m) thick give striped effect on mountainsides. Two quartzite beds, each about 3 feet (0.91 m) thick, near base, numerous sandstone beds 800 to 1,000 feet (240 to 300 m) above base. Top 200 feet (60 m) is well-bedded limestone and quartzite. Total thickness uncertain because of faulting; estimated 2,000 feet (600 m).||Brachiopods abundant, especially Spirifer, Cyrtospirifer, Productilla, Carmarotoechia, Atrypa. Stromatoporoids. Syringopora (closely spaced colonies).|
|Silurian and Devonian||Silurian and Lower Devonian||Hidden Valley Dolomite||Thick-bedded, fine-grained, and even-grained dolomite, mostly light color. Thickness 300 to 1,400 feet (90 to 430 m).||Crinoid stems abundant, Including large types. Favosites.|
|Ordovician||Upper Ordovician||Ely Springs Dolomite||Massive black dolomite, 400 to 800 feet (120 to 240 m) thick.||Streptelasmatid corals: Grewingkia, Bighornia. Brachiopods.|
|Middle and Upper (?) Ordovician||Eureka Quartzite||Massive quartzite, with thin-bedded quartzite at base and top, 350 feet (110 m) thick.||None|
|Lower and Middle Ordovician||Pogonip Group||Dolomite, with some limestone, at base, shale unit in middle, massive dolomite at top. Thickness, 1,500 feet (460 m).||Abundant large gastropods in massive dolomite at top: Palliseria and Maclurites, associated with Receptaculites. In lower beds: Protopliomerops, Kirkella, Orthid brachiopods.|
|Cambrian||Upper Cambrian||Nopah Formation||Highly fossiliferous shale member 100 feet (30 m) thick at base, upper 1,200 feet (370 m) is dolomite in thick alternating black and light bands about 100 feet (30 m) thick. Total thickness of formation 1,200 to 1,500 feet (370 to 460 m).||In upper part, gastropods. In basal 100 feet (30 m), trilobite trash beds containing Elburgis, Pseudagnostus, Horriagnostris, Elvinia, Apsotreta.|
|Middle and Upper Cambrian||Bonanza King Formation||Mostly thick-bedded and massive dark-colored dolomite, thin-bedded limestone member 500 feet (150 m) thick 1,000 feet (300 m) below top of formation, 2 brown-weathering shaly units, upper one fossiliferous. Total thickness uncertain because of faulting; estimated about 3,000 feet (900 m) in Panamint Range, 2,000 feet (600 m) in Funeral Mountains.||The only fossiliferous bed is shale below limestone member near middle of formation. This shale contains linguloid brachiopods and trilobite trash beds with fragments of "Ehmaniella."|
|Lower and Middle Cambrian||Carrara Formation||An alternation of shaly and silty members with limestone members transitional between underlying clastic formations and overlying carbonate ones. Thickness about 1,000 feet (300 m) but variable because of shearing.||Numerous trilobite trash beds in lower part yield fragments of olenellid trilobites.|
|Lower Cambrian||Zabriskie Quartzite||Quartzite, mostly massive and granulated due to shearing, locally in beds 6 inches (15 cm) to 2 feet (0.61 m) thick. Thickness more than 150 feet (46 m), variable because of shearing.||No fossils.|
|Lower Cambrian and Lower Cambrian (?)||Wood Canyon Formation||Basal unit is well-bedded quartzite about 1,650 feet (500 m) thick; shaly unit above this, 520 feet (160 m) thick, contains lowest olenellids in section; top unit of dolomite and quartzite 400 feet (120 m) thick.||A few scattered olenellid trilobites and archaeocyathids in upper part of formation. Scolithus? tubes.|
|Stirling Quartzite||Well-bedded quartzite in beds 1 to 5 feet (0.30 to 1.52 m) thick comprising thick members of quartzite 700 to 800 feet (210 to 240 m) thick separated by 500 feet (150 m) of purple shale, crossbedding conspicuous in quartzite. Maximum thickness about 2,000 feet (600 m).||None.|
|Johnnie Formation||Mostly shale, in part olive brown, in part purple. Basal member 400 feet (120 m) thick is interbedded dolomite and quartzite with pebble conglomerate. Locally, fair dolomite near middle and at top. Thickness more than 4,000 feet (1,200 m).||None.|
|Precambrian||Noonday Dolomite||In southern Panamint Range, dolomite in indistinct beds; lower part cream colored, upper part gray. Thickness 800 feet (240 m). Farther north, where mapped as Noonday(?) Dolomite, contains much limestone, tan and white, and some limestone conglomerate. Thickness about 1,000 feet (300 m).||Scolithus? tubes.|
|Kingston Peak(?) Formation||Mostly diamictite, sandstone, and shale; some limestone and dolomite olistoliths near middle. At least 3,000 feet (900 m) thick. Although tentatively assigned to Kingston Peak Formation, similar rocks along west side of Panamint Range have been identified as Kingston Peak.||None.|
|Beck Spring Dolomite||Not mapped; outcrops are to the west. Blue-gray cherty dolomite, thickness estimated about 500 feet (150 m). Identification uncertain.||None.|
|Pahrump Series||Crystal Spring Formation||Recognized only in Galena Canyon and south. Total thickness about 2,000 feet (600 m). Consists of basal conglomerate overlain by quartzite that grades upward into purple shale and thinly bedded dolomite; upper part, thick-bedded dolomite, diabase, and chert. Talc deposits where diabase intrudes dolomite.||None.|
|Rocks of the crystalline basement||Metasedimentary rocks with granitic intrusions.||None.|
Table of salts
|Mineral||Composition||Known or probable occurrence|
|Halite||NaCl||Principal constituent of chloride zone and of salt-impregnated sulfate and carbonate deposits.|
|Trona||Na3H(CO3)2·2H2O||Carbonate zone of Cottonball Basin, especially in marshes.|
|Thermonatrite||Na2CO3·H2O||Questionably present on floodplain in Badwater Basin, would be expected in marshes of carbonate zone in Cottonball Basin.|
|Gaylussite||Na2Ca(CO3)2·5H2O||Carbonate zone and floodplain in Badwater Basin.|
|Calcite||CaCO3||Occurs as clastic grains in sediments underlying salt pan and as sharply terminated crystals in clay fraction of carbonate zone and in sediments underlying sulfate zone.|
|Magnesite||MgCO3||Obtained in artificially evaporated brines from Death Valley; not yet identified in salt pan; may be expected in carbonate zone of Cottonball Basin.|
|Dolomite||CaMg(CO3)2||identified only as a detrital mineral; may be expected in carbonate zone.|
|Northupite and/or tychite||Na3Mg(CO3)2Cl and/or Na6Mg2(SO4)(CO3)4||An isotropic mineral, having index of refraction in the range of northupite and tychite, has been observed in saline facies of sulfate zone in Cottonball Basin.|
|Burkeite||Na6(CO3)(SO4)2||Sulfate zone in Cottonball Basin.|
|Thenardite||Na2SO4||Common in all zones in Cottonball Basin and in sulfate marshes in Middle and Badwater basins.|
|Mirabilite||Na2SO4·10H2O||Occurs on floodplains in Cottonball Basin immediately following winter storms.|
|Glauberite||Na2Ca(SO4)2||Common on floodplains except in central part of Badwater Basin; sulfate zone in Cottonball Basin.|
|Anhydrite||CaSO4||As layer capping massive gypsum 1 mile (2 km) north of Badwater. Possibly also as dry-period efflorescence on floodplains.|
|Bassanite||2CaSO4·H2O||As layer capping massive gypsum along west side of Badwater Basin and as dry-period efflorescence in floodplains.|
|Gypsum||CaSO4·2H2O||In sulfate caliche, layer in carbonate zone, particularly in Middle and Badwater basins, in sulfate marshes and as massive deposits in sulfate zone.|
|Bloedite||Na2Mg(SO4)2·4H2O||Questionably present in efflorescence on floodplain in chloride zone.|
|Polyhalite||K2Ca2Mg(SO4)4·2H2O||Questionably present on floodplain in chloride zone.|
|Celestine||SrSO4||Found with massive gypsum.|
|Kernite||Na2B4O7·4H2O||Possibly present in Middle Basin in surface layer of layered sulfate and chloride salts.|
|Tincalconite||Na2B4O7·5H2O||Probably occurs as dehydration product of borax.|
|Borax||Na2B4O7·10H2O||Floodplains and marshes in Cottonball Basin.|
|Inyoite||Ca2B6O11·13H2O||Questionably present (X-ray determination but unsatisfactory) in floodplain in Badwater Basin.|
|Meyerhofferite||Ca2B6O11·7H2O||Found in all zones in Badwater Basin and in rough silty rock salt in Cottonball Basin|
|Colemanite||Ca2B6O11·5H2O||Questionably present (X-ray determination but unsatisfactory) in floodplain in Badwater Basin.|
|Ulexite||NaCaB5O9·8H2O||Common in floodplain in Cottonball Basin; known as "cottonball"|
|Proberite||NaCaB5O9·5H2O||A fibrous borate with index of refraction higher than ulexite occurs on dry areas in Cottonball Basin following hot dry spells and in surface layer of smooth silty rock salt.|
|Soda niter||NaNO3||Weak, but positive chemical tests obtained locally.|
- Harris 1997, p. 630.
- Harris 1997, p. 631.
- Collier 1990, p. 44.
- "Saratoga Springs". Death Valley geology field trip. USGS. Retrieved 2010-11-25.
- Harris 1997, p. 611.
- Harris 1997, p. 632.
- Collier 1990, p. 45.
- "Glaciers in the Tropics?: Late Precambrian time". Death Valley National Park through time. United States Geological Survey. Retrieved 2010-12-05.
- This article incorporates public domain material from the United States Geological Survey document: "A Mudflat to Remember: Latest Precambrian and Early Cambrian time". Retrieved 2010-12-05.
- Harris 1997, p. 634.
- This article incorporates public domain material from the United States Geological Survey document: "The Earliest Animal: Latest Precambrian and Early Cambrian time". Retrieved 2010-12-05.
- This article incorporates public domain material from the United States Geological Survey document: "Death Valley, Caribbean-style: Middle Cambrian to Permian time". Retrieved 2010-12-05.
- Harris 1997, p. 635.
- This article incorporates public domain material from the United States Geological Survey document: "The Earth Shook, The Sea Withdrew: Mesozoic time". Retrieved 2010-12-05.
- This article incorporates public domain material from the United States Geological Survey document: "Quiet to Chaos: Cenozoic Time". Retrieved 2010-12-05.
- Collier 1990, p. 48.
- Collier 1990, p. 55.
- Collier 1990, pp. 11, 55.
- Collier 1990, p. 53.
- Collier 1990, p. 54.
- Collier 1990, p. 24.
- Kiver 1999, p. 278.
- Kiver 1999, p. 279.
- Sharp 1997, p. 87.
- "Split Cinder Cone". Death Valley geology field trip. USGS. Retrieved 2011-05-05.
- Harris 1997, p. 616.
- Collier 1990, p. 49.
- "Ubehebe Crater". Death Valley geology field trip. USGS. Retrieved 2010-11-25.
- Kiver 1999, p. 280.
- Collier 1990, p. 20.
- "Zabriskie Point". Death Valley geology field trip. USGS. Retrieved 2010-11-25.
- Sharp 1997, p. 41.
- Kiver 1999, p. 281.
- Sharp 1997, pp. 43, 49.
- USGS contributors. "Rock Formations exposed in the Death Valley area". United States Geological Survey. Archived from the original on 2011-08-08. Retrieved 2011-05-05. (adapted public domain table)
- Le Heron, Daniel P. (May 28, 2014). "Neoproterozoic ice sheets and olistoliths: multiple glacial cycles in the Kingston Peak Formation, California". Journal of the Geological Society, London. 171 (4): 525–538. doi:10.1144/jgs2013-130.
- Hunt, C.B., and Mabey, D.R., 1966, General geology of Death Valley, California, U.S. Geological Survey Professional Paper 494. (adapted public domain table)
- Collier, Michael (1990). An Introduction to the Geology of Death Valley. Death Valley, California: Death Valley Natural History Association. LCN 90-081612.
- Harris, Ann G.; Tuttle, Esther; Tuttle, Sherwood D. (1997). Geology of National Parks (5th ed.). Iowa: Kendall/Hunt Publishing. ISBN 978-0-7872-5353-0.
- Kiver, Eugene P.; Harris, David V. (1999). Geology of U.S. Parklands (5th ed.). New York: John Wiley & Sons. ISBN 978-0-471-33218-3.
- Sharp, Robert P.; Allen F. Glazner (1997). Geology Underfoot in Death Valley and Owens Valley. Missoula, MT: Mountain Press Publishing. pp. 41–53. ISBN 978-0-87842-362-0.
The circle, the square, the rectangle, the quadrilateral and the triangle are examples of plane figures; the cube, the cuboid, the sphere, the cylinder, the cone and the pyramid are examples of solid shapes. Plane figures are two-dimensional (2-D) and solid shapes are three-dimensional (3-D). A common feature of three-dimensional objects is that they all have length, breadth and height (or depth); that is, they all occupy space. In this chapter, we learn about various three-dimensional objects.
Match the shape with the name
Sol : The shapes matched with their names are as follows :
Match the 2 dimensional figures with the names
Sol : The figures matched with their names are as follows :
Perspectives (Views) of 3-D Shapes
Let us view a brick from the front, the side and the top. What do they look like ?
They look like this :
For each of the given solid shapes, the three views are given :
10.2. MAPPING SPACE AROUND US
In Geography, you have been asked to locate a particular State, a particular river, a mountain etc., on a map. In History, you might have been asked to locate a particular place where some event had occurred long back. You have traced routes of rivers, roads, railway lines, traders and many others. Look at the map of a house whose picture is given alongside.
What can we conclude from the above illustration? When we draw a picture, we attempt to represent reality as it is seen with all its details, whereas, a map depicts only the location of an object, in relation to other objects. Secondly, different persons can give descriptions of pictures completely different from one another, depending upon the position from which they are looking at the house. But, this is not true in the case of a map. The map of the house remains the same irrespective of the position of the observer. In other words, perspective is very important for drawing a picture but it is not relevant for a map.
Now, look at the adjacent map, which has been drawn by seven year old Raghav, as the route from his house to his school :
From this map, can you tell –
(i) how far is Raghav’s school from his house ?
(ii) would every circle in the map depict a round about ?
(iii) whose school is nearer to the house, Raghav’s or his sister’s ?
It is very difficult to answer the above questions on the basis of the given map. Can you tell why ?
The reason is that we do not know if the distances have been drawn properly or whether the circles drawn are roundabouts or represent something else. Now look at another map drawn by his sister, ten year old Meena, to show the route from her house to her school. This map is different from the earlier maps. Here, Meena has used different symbols for different landmarks. Secondly, longer line segments have been drawn for longer distances and shorter line segments have been drawn for shorter distances, i.e., she has drawn the map to a scale.
Now, you can answer the following questions :
1. How far is Raghav’s school from his residence ?
2. Whose school is nearer to the house, Raghav’s or Meena’s ?
3. Which are the important landmarks on the route ?
Thus we realise that, use of certain symbols and mentioning of distances has helped us read the map easily. Observe that the distances shown on the map are proportional to the actual distances on the ground. This is done by considering a proper scale. While drawing (or reading) a map, one must know, to what scale it has to be drawn (or has been drawn), i.e., how much of actual distance is denoted by 1mm or 1cm in the map. This means, that if one draws a map, he/she has to decide that 1cm of space in that map shows a certain fixed distance of say 1 km or 10 km. This scale can vary from map to map but not within a map.
For instance, look at the map of India alongside the map of Delhi.
You will find that even when the maps are drawn to the same size, the scales and the distances in the two maps will vary. That is, 1 cm of space in the map of Delhi will represent smaller distances as compared to the distances in the map of India.
The larger the place and smaller the size of the map drawn, the greater is the distance represented by 1 cm.
Thus, we can summarise that :
1. A map depicts the location of a particular object/place in relation to other objects/places.
2. Symbols are used to depict the different objects/places.
3. There is no reference or perspective in a map, i.e., objects that are closer to the observer are shown to be of the same size as those that are farther away. For example, look at the following illustration.
4. Maps use a scale which is fixed for a particular map. It reduces the real distances proportionately to distances on the paper.
Example 1. A map is given showing some landmarks of a city. The landmarks are replaced by symbols, whose meanings are given on right side of the map.
Answer the following questions.
(1) Ankit lives in the house H. How far is Ankit’s house from the school ?
(2) Which is nearby from Ankit’s house, stadium or hospital ?
(1) School is at a distance of 2 + 1 = 3 km from Ankit’s house.
(2) Distance of stadium from Ankit’s house = 1 + 2 + 1 + 2 = 6 km
Distance of hospital from Ankit’s house = 1 + 2 + 1 + 1 = 5 km
Therefore, hospital is nearer than stadium from Ankit’s house.
The given figure shows the map of a compound of a boarding school. Answer the following questions. (i) Which of the following landmarks is nearest to the girls dormitory ?
A. Boys dormitory
B. Academic block
(ii) Which landmark is situated at the northwest corner of the school compound ?
(i) It can be clearly seen in the map that boys dormitory is nearer to girls dormitory as compared to academic block, auditorium, and power house.
(ii) The North West corner of the school compound in the given map is the dining hall.
The given figure shows the map of a locality which is drawn on a centimetre grid paper.
What is the shortest distance between the hospital and the post office ?
Solution. It can be seen in the map that there are two ways of reaching the post office from the hospital and the shortest is the one which is shown by blue arrows.
The scale used in the map is 1 cm = 500 m
Therefore, the shortest distance between hospital and post office
= 5 × 500 m = 2500 m
We know that 1000 m = 1 km
2500 m = 2500/1000 km = 2.5 km
Thus, the shortest distance between the hospital and the post office is 2.5 km.
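For readers who like to verify such calculations with a little programming, the same scale arithmetic can be sketched in Python (this snippet is only an illustration; the scale of 1 cm = 500 m and the measured length of 5 cm come from the example above):

scale_m_per_cm = 500              # map scale: 1 cm on the map represents 500 m on the ground
measured_cm = 5                   # length of the shortest route measured on the map
distance_m = measured_cm * scale_m_per_cm
print(distance_m, "m")            # 2500 m
print(distance_m / 1000, "km")    # 2.5 km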
10.3. FACES, EDGES AND VERTICES
The corners of a solid shape are called its vertices; the line segments of its skeleton are its edges; and its flat surfaces are its faces.
The 8 corners of the cube are its vertices. The 12 line segments that form the skeleton of the cube are its edges. The 6 flat square surfaces that are the skin of the cube are its faces.
Example 4. Complete the following table
Solution. The completed table is as follows :
Polyhedron Let us look at the following solid figures.
All of the above solid figures are made up of polygonal regions, lines and points. There is no curved surface in the given figures. Such solids are called polyhedrons.
The polygonal regions in polyhedrons are called the faces. The faces meet to form line segments which are known as edges. The edges meet at the points which are known as vertices.
|Hence, a polyhedron can be defined as a geometric object with flat faces and straight edges.|
Polyhedra are named according to the number of faces. For example: tetrahedron (4 faces), pentahedron (5 faces), hexahedron (6 faces) and so on. The first figure is the figure of a tetrahedron. The second figure is an octahedron. Now, what can we say about the solids like cylinders, cones, spheres etc.?
These solids have curved surfaces as well as curved edges. It means that these solids are not formed strictly from flat faces and straight edges. Therefore, we can say that solids like cylinders, cones and spheres are not polyhedrons.
We can classify polyhedrons into different categories. Let us discuss them one by one.
A polyhedron may be a regular or an irregular polyhedron.
A polyhedron is said to be regular if it satisfies two conditions which are given as follows.
(a) Its faces are made up of regular polygons.
(b) The same number of faces meets at each vertex.
If the polyhedron does not satisfy any one or both of the above conditions, then we can say that the polyhedron is irregular. To understand this concept, let us consider two solids, i.e. a cube and a hexahedron, as shown below.
Here, we can see that the faces of the cube are congruent regular polygons (i.e. all the faces are squares of same dimension) and each vertex is formed by the same number of faces i.e. 3 faces. Therefore, a cube is a regular polyhedron.
For the hexahedron, the faces are triangular in shape and they are congruent to each other. It means that the faces of the hexahedron are congruent regular polygons. If we look at the vertex A, we will notice that 3 faces meet at A. On the other hand, at point B, 4 faces meet. Thus, the vertices are not all formed by the same number of faces. Therefore, the hexahedron is an irregular polyhedron.
The polyhedron may be a concave or a convex polyhedron.
A polyhedron is said to be convex, if the line segment joining any two points of the polyhedron is contained in the interior and surface of the polyhedron. A polyhedron is said to be concave, if the line segment joining any two points of the polyhedron is not contained in the interior and surface of the polyhedron.
It can be understood easily by taking two solids, i.e. a cube and the star shaped polyhedron, as shown below.
For the cube, the line segment AB joining the two points A and B of the polyhedron is contained either in the polyhedron or on the surface which is clearly shown in the figure. Thus, a cube is a convex polyhedron. For the star shaped polyhedron, the line segment AB joining the two points A and B of the polyhedron is neither contained in the polyhedron nor on the surface.
Thus, the star shaped polyhedron is a concave polyhedron.
In this way, we can easily identify a polyhedron and classify it as concave or convex and regular or irregular.
Let us discuss one more example using the concept of polyhedron.
Example 5. How many faces are at least required to make a polyhedron ?
Solution. At least 4 triangular faces are required to make a polyhedron. For the base of the polyhedron, we require at least a three sided closed figure or a triangle (at least three sides are required to form a closed figure). Let us take another point which is not on the previous triangle. If we join the line segments from that point to each of the vertices of the base triangle, then we will have three triangles. In this way, we require 4 triangular faces to make a polyhedron as shown below.
Thus, at least four faces are required to make a polyhedron.
Prism and Pyramid
Two important members of the polyhedron family around us are prisms and pyramids.
We say that a prism is a polyhedron whose base and top are congruent polygons and whose other faces, i.e., lateral faces are parallelograms in shape.
On the other hand, a pyramid is a polyhedron whose base is a polygon (of any number of sides) and whose lateral faces are triangles with a common vertex. (If you join all the corners of a polygon to a point not in its plane, you get a model for pyramid).
A prism or a pyramid is named after its base. Thus a hexagonal prism has a hexagon as its base; and a triangular pyramid has a triangle as its base. What, then, is a rectangular prism? What is a square pyramid? Clearly their bases are rectangle and square respectively.
Example 6. Identify whether the polyhedron shown in the following figure is a prism or a pyramid and name it accordingly.
Solution. The polyhedron shown in the given figure has two identical hexagonal bases and the lateral surfaces are parallelogram in shape. Therefore, the given figure is a hexagonal prism.
Example 7. Is a rectangular prism a cuboid ?
Solution. In a rectangular prism, the top and base surfaces are rectangles. If the lateral surfaces of that rectangular prism are strictly rectangles, then we can say that the rectangular prism is a cuboid, otherwise not. However, the lateral surfaces may not be rectangles as shown below.
Therefore, a rectangular prism is not always a cuboid.
Euler's Formula
Every polyhedron has a specific number of faces, edges, and vertices (depending upon the type of polyhedron it is). However, is there any relation that can be applied to the number of faces, edges, and vertices of any polyhedron irrespective of the type of polyhedron?
Tabulate the number of faces, edges and vertices for the following polyhedrons:
(Here ‘V’ stands for number of vertices, ‘F’ stands for number of faces and ‘E’ stands for number of edges).
|Solid||F||V||E||F + V||E + 2|
|Cuboid||6||8||12||14||12 + 2|
|Triangular pyramid||4||4||6||8||6 + 2|
|Triangular prism||5||6||9||11||9 + 2|
|Pyramid with square base||5||5||8||10||8 + 2|
|Prism with square base||5||6||9||11||9 + 2|
What do you infer from the last two columns ? In each case, do you find F + V = E + 2,
i.e., F + V – E = 2 ? This relationship is called Euler’s formula.
In fact, this formula is true for any polyhedron.
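As a quick, purely illustrative check (it is not part of the textbook exercise), the counts tabulated above can be fed into a few lines of Python; F + V - E comes out as 2 for every solid:

solids = {
    "Cuboid": (6, 8, 12),                  # (F, V, E)
    "Triangular pyramid": (4, 4, 6),
    "Triangular prism": (5, 6, 9),
    "Pyramid with square base": (5, 5, 8),
    "Prism with square base": (5, 6, 9),
}
for name, (F, V, E) in solids.items():
    print(name, "gives F + V - E =", F + V - E)   # prints 2 in every case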
In this post, we are going to learn about some of the basic concepts of Python which, more or less, are also found in other programming languages. We'll start from the installation of Python and cover mathematical operations, strings, user input, string operations, variables, and in-place operators.
If you don't know, this blog post and all other future posts in the Python series are part of this Udemy course. Do check it out.
Before starting, if you don't have Python installed on your computer, install the latest version of Python 3 from the official website along with an IDE that we'll use for writing bigger programs; you can go with either VS Code or PyCharm.
We'll start with the basic Hello World program. So open your Python console and follow the article -
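print("Hello World")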
Type the above code in the console and hit enter. By now you would have seen the Hello World printed on your console. That's how easy it is to print something on the console in Python. But let's just move ahead because not everything is going to be that easy in the future.
Python console can also be directly used as a calculator and we can perform most of the common operations which we do on a calculator.
120 + 80
Type the above code and hit enter and you will see the right answer 200. You can also perform subtraction, multiplication, and division in the console. Copy the code below in your console and see the results.
50 - 20
30 * 2
12 / 2
If you look closely, you can see that when we divide numbers in Python, we get the result as a decimal, just like the 6.0 we got from the division above. This is called a float.
Decimal numbers are known as floats in programming. A float is a number with a decimal point.
Using a float in any mathematical expression will always result in a float. When dividing two numbers, you can avoid getting a float as a result by using a double slash (//).
12 // 2
The above code will give 6 as a result instead of 6.0. This happens because when we use a double slash in a division, Python gives us only the quotient of the division.
Dividing any number by zero in Python gives a division-by-zero error (a ZeroDivisionError). So avoid dividing any number by zero or performing any other calculation that involves division by zero.
Exponentiation means raising a number to the power of another number. In Python, you can do this by using **
3 ** 2
The above expression will raise 3 to the power of 2 and you'll get 9 as the result.
We already discussed how using a double slash you can avoid getting a float as a result of division. This happens because the double slash gives the quotient of a division as a result.
For Ex: 10//3 = 3
But what if instead of getting quotient, you want the remainder of the division? We have something called Modulus for this case. The symbol of modulus is percent(%). Let's see it in action below -
11 % 3
The above operation will result in 2 as the answer because 2 is the remainder when 11 is divided by 3. This operation is incredibly useful when you want to know whether a certain number is odd or even, as shown below.
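For instance, here is a tiny illustrative check (the numbers are arbitrary): a number is even exactly when its remainder on division by 2 is 0.

print(14 % 2)         # 0, so 14 is even
print(15 % 2)         # 1, so 15 is odd
print(14 % 2 == 0)    # True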
Any text written in Python is a string. In other words, anything written within quotes, whether single ('') or double (""), is a string. If you type an integer within quotes, it will also be considered a string, not an integer.
For Ex: 5 is an integer, '5' is a string.
NOTE: While writing a string if you want to put an apostrophe somewhere in between, use a backslash(\)
- Instead of writing, 'He's a good boy', write,
'He\'s a good boy'.
- The latter version of the string will save you from an error, because in the first version you have already finished the string in the first two letters. According to Python, you started your string at H and finished it at e, since you have already used 2 quotes up to that point.
- If you want your string to be printed on multiple lines instead of one, you can use
\n at the point after which you want to start a new line.
User inputs are very common when building large applications. Consider the example of a Contacts app where the user has to enter the phone number and name to save a contact or a Chat app where users can input words, numbers, emojis, and whatnot.
In Python, we have a function called input() that allows users to input numbers, strings, etc. in our program.
input('Please enter a value: ')
The string inside input() will be displayed to the user as a prompt when they are asked for input. You can modify it according to your needs. Whatever value the user enters will be echoed back on the console, as sketched below.
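As a small illustrative sketch (the prompt text and the variable name are just examples, not something fixed by Python), the value returned by input() can be stored in a variable and used later:

name = input('Please enter your name: ')
print('Hello ' + name)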
Concatenation is the action of joining 2 or more strings together. Suppose there is a string called 'Hello' and there is another string called 'World'. When you concatenate these 2 strings, it becomes 'HelloWorld'.
'Hello' + 'World'
The plus sign above is the concatenation operator which is used to concatenate strings. If you enter the code right and hit enter, you'll see
'HelloWorld' printed on your console.
- You cannot concatenate a string with a number.
- However, you can multiply a string by a certain number to repeat its occurrence. For Ex -
'Hello'*3 will result in 'HelloHelloHello'.
- Again, you cannot multiply a string by a string; it will produce an error.
If you are familiar with Python or any other programming language, you might already know what a variable is. But in case you don't, you can think of it as a container to store data. Variables are common in every programming language and they let us store data types supported in that programming language. In Python, you can store all the supported data types in the variables.
a = 100
In the above code, a is the variable that we've used to store the value of 100. The equal(=) sign is called an assignment operator whose job is to assign values. Type
print(a) to print the value of variable a in the console.
Suppose you're recording your age in a variable and want to update it again this birthday.
age = 21
age = age + 1
In the above code, we first stored our age in a variable and then updated it. This works, but it isn't considered the best practice, because writing the variable name twice in a line just to add a single value to it is unnecessarily repetitive.
Instead, we have in-place operators which help us change the value without repeating the variable name twice in the line. See the code below -
age += 1 has the same effect as age = age + 1.
+= removes the need to repeat the variable name twice in the line.
We can also use the subtraction and multiplication operators instead of the addition operator to update a variable, for example age -= 2 or age *= 4. The former will reduce the value of the age variable by 2 and the latter will multiply it by 4, as sketched below.
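Here is a short illustrative run of these in-place operators (the starting value of 21 is just an example):

age = 21
age += 1      # age is now 22
age -= 2      # age is now 20
age *= 4      # age is now 80
print(age)    # prints 80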
In this article, we learned about math operations, strings, variables, user input, in-place operators, etc., and that marks the end of our basic Python concepts.
There are thousands of things in a programming language, and I don't claim that this article has covered every basic concept of Python, but we've surely learned enough to move ahead and learn other topics. In the next article of the series, we'll learn about control structures in Python.
Discovery of a Volcanic Landscape
Venus is the closest planet to Earth. However, the surface of Venus is obscured by several layers of thick cloud cover.
These clouds are so thick and so persistent that optical telescope observations from Earth are unable to produce clear images of the planet's surface features.
The first detailed information about the surface of Venus was not obtained until the early 1990s, when the Magellan
spacecraft (also known as the Venus Radar Mapper) used radar imaging to produce detailed topography data for most
of the planet's surface.
That data was used to create images of Venus such as the one shown at the top of the right column.
Researchers expected the topography data to reveal volcanic features on Venus but they were surprised to learn that at least 90% of the
planet's surface was covered by lava flows and broad shield volcanoes. They were also surprised that these
volcanic features on Venus were enormous in size when compared to similar features on Earth.
Enormous Shield Volcanoes
The Hawaiian Islands are often used as examples of large shield volcanoes on Earth. These volcanoes are on the order
of 120 kilometers wide at the base and about 8 kilometers in height. They would be among the tallest volcanoes on
Venus; however, they would not be competitive in width. Large shield volcanoes on Venus are an impressive 700 kilometers
wide at the base but are only about 5.5 kilometers in height.
In summary, the large shield volcanoes on Venus are several times as wide as those on Earth and they have a much gentler
slope. A relative size comparison of volcanoes on the two planets is given in the graphic below - which has a
vertical exaggeration of about 25x.
|This graphic compares the geometry of a large shield volcano from Venus with a large shield volcano from Earth. Shield volcanoes on Venus are usually very broad at the base and have gentler slopes than the shield volcanoes found on Earth. VE=~25
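To get a rough feel for how much gentler the Venusian slopes are, here is an illustrative back-of-the-envelope Python calculation. It treats each volcano as a simple cone and uses the approximate dimensions quoted above (120 km wide and 8 km high for a large Hawaiian shield, 700 km wide and 5.5 km high for a large Venusian shield); the resulting angles are rough estimates, not published figures.

import math

def flank_slope_deg(width_km, height_km):
    # average slope from the base edge to the summit of an idealized cone
    return math.degrees(math.atan(height_km / (width_km / 2)))

print(round(flank_slope_deg(120, 8), 1))    # Earth example: about 7.6 degrees
print(round(flank_slope_deg(700, 5.5), 1))  # Venus example: about 0.9 degrees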
Extensive Lava Flows
Lava flows on Venus are thought to be composed of rocks that are similar to the basalts found on Earth. Many of the lava
flows on Venus have lengths of several hundred kilometers. The lava's mobility might be enhanced by the planet's average
surface temperature of about 470 degrees Celsius.
The image of Sapas Mons volcano, in the right column of this page, contains many excellent examples of long lava flows on
Venus. The radial appearance of the volcano is produced by long lava flows extending from the two vents at the peak and from numerous flank eruptions.
Venus has a large number of features that have been called "pancake domes". These are similar to lava domes found on Earth,
but on Venus they are up to 100 times as large. Pancake domes are very broad, with a very flat top and are usually less
than 1000 meters in height. They are thought to form by the extrusion of viscous lava.
|Radar image of three pancake domes on the left and a geologic map of the same area on the right. Anyone interested in learning about the surface features of Venus can obtain radar images from NASA and compare them with geologic maps prepared by USGS.
When Did the Volcanoes on Venus Form?
Most of the surface of Venus is covered by lava flows that have a very low impact cratering density. This low impact density reveals
that the planet's surface is mostly less than 500,000,000 years old. Volcanic activity on Venus cannot be detected from Earth, but
enhanced radar imaging from the Magellan spacecraft suggests that volcanic activity on Venus still occurs. (see image below)
|Radar images of Idunn Mons Volcano in the Imdr Regio region of Venus. The image on the left is a radar topography image with a vertical exaggeration of about 30x. The image on the right is color-enhanced based upon thermal imaging spectrometer data. The red areas are warmer and thought to be evidence of recent lava flows. Image by NASA.
Other Processes that Shape the Surface of Venus
Asteroid impacts have produced many craters on the surface of Venus. Although these features are numerous, they
do not cover more than a few percent of the planet's surface. The resurfacing of Venus with lava flows, thought
to have occurred about 500,000,000 years ago, took place after impact cratering of planets in our solar
system had fallen to a very low level.
EROSION AND SEDIMENTATION
The surface temperature of Venus is about 470 degrees Celsius -- much too high for liquid water. Without water, stream
erosion and sedimentation are unable to make significant modifications to the surface of the planet. The only erosional
features observed on the planet have been attributed to flowing lava.
WIND EROSION AND DUNE FORMATION
The atmosphere of Venus is thought to be about 90 times as dense as Earth's. Although this limits wind activity, some
dune-shaped features have been identified on Venus. However, the available images do not show wind-modified landscapes covering a significant portion of the planet's surface.
Plate tectonic activity on Venus has not been clearly identified. Plate boundaries have not been recognized. Radar images
and geologic maps produced for the planet do not show linear volcano chains, spreading ridges, subduction zones and transform
faults that provide evidence of plate tectonics on Earth.
Volcanic activity is the dominant process for shaping the landscape of Venus with over 90% of the planet's surface being
covered by lava flows and shield volcanoes.
The shield volcanoes and lava flows on Venus are very large in size when compared to similar features on Earth.
Contributor: Hobart King
|A simulated color image of the surface of Venus created by NASA using radar topography data acquired by the Magellan spacecraft. Enlarged views at 900 x 900 pixels or 4000 x 4000 pixels.|
|A simulated color image of Sapas Mons volcano, located on the Atla Regio rise near the equator of Venus. The volcano is about 400 kilometers across and about 1.5 kilometers high. The radial appearance of the volcano at this scale is caused by hundreds of overlapping lava flows - some originating from one of the two summit vents but most originating from flank eruptions. Image created by NASA using radar topography data acquired by the Magellan spacecraft. Enlarged views at 900 x 900 pixels or 3000 x 3000 pixels.
|An oblique view of Sapas Mons volcano, the same volcano shown in the vertical view above. This image views the volcano from the northwest. Features visible in this image can easily be matched to the vertical view above. Lava flows several hundred kilometers in length appear as narrow channels on the flanks of the volcano and spread into broad flows on the plain that surrounds the volcano. Image by NASA. Enlarge image.
|USGS has produced detailed geologic maps for many areas of Venus. These maps have descriptions and correlation charts for the mapped units. They also include symbols for faults, lineaments, domes, craters, lava flow directions, ridges, grabens and many other features. These can be paired with NASA radar images to learn about volcanoes and other surface features of Venus.|
|Information for Volcanoes on Venus|
NASA Image Gallery of Venus, a searchable collection of images that can be downloaded, NASA, accessed January 2013.
USGS Geologic Maps of Venus, a collection of maps in .pdf format, USGS, accessed January 2013.
Volcano Sapas Mons, images and information about the volcano from the Magellan spacecraft program, NASA, 1996.
Venus Global View, computer simulated global view of Venus from the Magellan spacecraft program, NASA, 1996.
NASA-Funded Research Suggests Venus is Geologically Alive, article about recent volcanism on Venus, NASA, 2010.
Volcanoes on Venus, overview article from the Volcano World collection, Oregon State University, 2005.
Dr David Smawfield, August 2007
SWOT ANALYSIS: An Important Tool
for Strategic Planning
A SWOT Analysis Explained
Examples of SWOT workshop activity
The word “SWOT” stands (in English) for four words:
S = Strengths (strong points)
W = Weaknesses (weak points)
O = Opportunities
T = Threats
A SWOT Analysis uses a grid of four squares set out like this:
For workshop purposes, the grid should be drawn large: e.g. filling the whole of a
sheet of a flip chart.
To help clarify the differences between “Strengths” and “Opportunities” and
“Weaknesses” and “Threats”, the following observations might be helpful:
• Strengths and Weaknesses tend to describe the PRESENT situation.
• Strengths and Weaknesses are typically INTERNAL to whatever is being analysed.
• Opportunities and Threats tend to describe the immediate FUTURE.
• Opportunities and Threats are typically EXTERNAL to whatever is being
analysed (but they can also include internal factors).
• Strengths and Opportunities are POSITIVE factors.
• Weaknesses and Threats are NEGATIVE factors.
These characteristics are summarised in the following diagram:
The grid is used to analyse a chosen topic.
The following are examples of suitable topics. A SWOT Analysis could be made of one of the following:
School and Community Relationships
School Management Practices
The Use of School Grounds
Teaching Quality in the Classroom
Of course, choose another topic if this is of more interest and relevance!
Size of Groups:
The size of groups used for SWOT analysis can be varied to suit particular
circumstances. Usually, however, groups of about 5 or 6 people work best.
Everyone will be involved and it is easy to generate lively discussion. One of the
strengths of a SWOT Analysis, especially if it is a workshop activity, is that it
does encourage everyone to participate.
In other words, if you are conducting a SWOT Analysis in a workshop of 30
people, it will probably best to divide the participants into 5 or 6 groups. Each
group can do its own SWOT Analysis. Later, if wished, their results can be
compared and similarities and differences discussed.
SWOT Stage 1:
Workshop participants are provided with “post-its”. They brainstorm and try to
identify the “strengths”, “weaknesses”, “opportunities” and “threats” – for
example, for “Meeting the Learning Needs of All Students”.
Individual participants write down on “post-its” the “weaknesses”, “strengths”,
“opportunities” and “threats” they can think of, one at a time. One “post-it” is
used for each “strength”, “weakness”, “opportunity” and “threat” identified.
“Post-its” can be placed in the relevant squares in any order.
One of the reasons this kind of brainstorming is effective is that something one
person says or suggests will often stimulate or remind someone else of
something additional. The workshop facilitator, therefore, should encourage
participants to take note of each other’s “post-its” as they are placed on the grid.
When people begin to run out of ideas and suggestions, the first stage of the
SWOT Analysis has been completed.
SWOT Stage 2: “Clustering”
Firstly, think about cause-and-effect relationships. For example, which
weakness is caused by, or causes, another weakness? Later on, this will help us
to prioritise solutions. One problem may need to be solved before another can be addressed.
If appropriate, readjust the position of the “post-its” to show cause-and-effect
relationships. For example, place the “cause” immediately below the “effect”.
The process of thinking about cause and effect relationships will probably
result in the need to add more “post-its”. This is to be encouraged!
Secondly, try to group some of the “post-its” together in clusters, according to
the type of “strength”, “weakness”, “opportunity”, or “threat” that they
describe. Perhaps, for example, there will be one cluster of “post-its” to do
with resource issues. There might be another cluster of “post-its” to do with
management issues. And so on. At this stage, some “post-its” could be removed
if it is decided that they say the same thing, or the wording could be refined to
consolidate two almost similar points or ideas together.
This stage of the analysis helps us to clarify and categorise different types of
issues. You may wish to make a record of this stage of the analysis, before
proceeding to Stage 3.
SWOT Stage 3:
For project design and activity planning purposes, the “weaknesses” square of
the grid is especially important. It is here that we are likely to get ideas for
appropriate activities and strategies to address weaknesses in the system.
Often the “weaknesses” square of the grid will fill up with more “post-its” than
any of the other three squares. In a perfect world, it would be nice to be able
to solve all weaknesses and problems. Unfortunately, in the real world, this is
not possible. No single project or action plan can address all issues.
A very effective analysis that combines consideration of “how important” a
weakness is, with “how practical” it is to do something about it, can be
conducted with reference to an additional grid: Grid 2.
The “weakness” “post-its” on the original grid can be moved across onto this new
grid. Exactly where they are placed has an important new meaning. A “post-it”
placed in the extreme top-left corner of this grid (i.e. “post-it” number 1 in the
example) can be interpreted as being a weakness that is very important, but is
also easy to solve. “Post-it” number 2 in the example shows a weakness that is
just as important, but is considered slightly more difficult to address. The
weakness depicted by “post-it” number 3 in the example is “very important, but
also very difficult (perhaps impossible) to address”.
The vertical dotted line, for practical purposes, can divide the grid into
“weaknesses within the power of the project/management to address” (on the
left of the line) and “weaknesses outside of the power of the
project/management to address” (on the right hand side of the line).
“Post-it” number 4, in the example, is of “fairly high importance”, but
participants are not sure whether it is inside or outside the power of the
project/management to do something about it. Therefore, they have placed the
“post-it” over the vertical dividing line.
The purpose of a SWOT Analysis is to help us to analyse (evaluate) a
situation, and then identify an action plan to do something to improve it.
One of the reasons Stage 3 of the SWOT Analysis is so useful is that it helps us
to identify a “way forward”.
It is often good to develop an action plan by focussing on the “top-left-hand
corner” of the second grid: in other words with weaknesses that are very
important, but are not too difficult to address. By doing this, we are starting
with, and agreeing upon, things we believe can be done! We are identifying a
“way forward”. We are not getting bogged down with problems that are too difficult to solve.
If wished, Stage 3 of the SWOT Analysis can also be used for further analysis
of “strengths”, “opportunities” and “threats”. For example, this would allow us
to identify the “most important and practical strengths” that we can draw upon
in mapping a way forward.
Again, you may wish to keep a record of the results of Stage 3, before moving
on to Stage 4.
SWOT Analysis Stage 4
To bring us closer to developing an action plan, SWOT Analysis Stage 4 is a very
simple, but very important, stage.
It involves taking a “weakness statement” (a negative statement) and
reformulating it as an “objective statement” (a positive statement).
Use this process to reformulate the weaknesses that you believe it is within the
power of the project/management to address.
You now have the basis for an action plan! If there are too many objectives to
address, select the ones with the greatest importance!
SWOT Analysis Stage 5
This Stage of the SWOT Analysis concentrates on the “Threats” identified.
SWOT Analysis Stage 5 involves a “Risk Analysis”.
Another Grid is required, as shown below: Grid 3.
The grid has been colour coded (like traffic lights). For “threats” in the green
area of the grid, “go ahead”: the threats can be ignored. They are not
important enough to worry about.
For threats in the two yellow areas of the grid: “proceed with caution”. These
threats are important enough to demand further attention. Monitor or manage
these threats and, if possible, adjust activities and objectives to remove the
threats or reduce the risks associated with them.
Threats that fall in the red area of the grid are known as “killer threats”. You
may need to “Stop!” and think again. Consider redesigning your action plan to
remove the threat or substantially reduce its importance or probability of occurrence.
SWOT Analysis Stage 6:
There is an underlying “logic” to the four squares of the “SWOT” grid, which
can be summarised as follows:
• The “weaknesses” identified help us to develop possible activities and
strategies suitable for a project or action plan.
• We should then try to consider how we can build on the “strengths”
and “opportunities” we have identified to increase our chances of success.
• We also need to take important note of the “threats” we have
identified. We need to consider how we might be able to design
activities to avoid these threats or minimise the risks associated with
them. Another strategy might be to design activities that address
these threats directly: to remove or limit them.
What remains, therefore, is for us to go back to the “strengths” and
“opportunities” we have identified, and see if we can come up with practical
suggestions for building on these. This represents Stage 6. As a result of
Stage 6 of the Analysis, we may see the need to modify the activities that we
have provisionally identified to take full advantage of strengths and opportunities.
• The purpose of a SWOT Analysis is to help us to analyse (evaluate) a
situation, and then identify an action plan to do something to improve it.
• A SWOT Analysis helps us to focus on what is possible, rather than on
what is impossible.
• Because a SWOT analysis is a participative activity, it is good for building
consensus: where everyone is agreed upon, and committed to, a practical
way forward for making things better.
The last few years of the 1850s paved the way for the sectional breakdown that resulted in a civil war. Following the Mexican-American War, disunion seemed like an unlikely prospect even though North and South disagreed on the future of slavery. In the past, national leaders had managed to compromise on divisive issues like the tariff and the bank; most people expected them to do so when it came to slavery. Unfortunately, by the time James Buchanan took office in 1857, few people wanted to compromise. The new president also seemed unwilling or unable to bring the North and the South together. Southerners, who worried about Buchanan’s northern sympathies, found him disposed to accept their demands for federal support of the extension of slavery. Then a financial panic, the Dred Scott decision, and John Brown’s raid on Harper’s Ferry made tensions between proslavery and antislavery advocates worse. Finally, Abraham Lincoln emerged as a forceful speaker for the Republican Party as Buchanan tilted the Democratic Party further to the South.
15.4.1: Northern and Southern Perspectives
Northerners and southerners in the 1850s increasingly felt the need to defend their position on slavery, whether they opposed it or they favored it. Slavery drove the two sides apart, but not because either side had many moral concerns about the peculiar institution. Both sides saw their freedom at stake, namely, their freedom to the political and economic liberties they believed the Constitution guaranteed. Both sides saw themselves as fighting for liberty and for what they perceived to be the legacy of the American Revolution. They simply had very different viewpoints about what the Revolution had meant.
Northerners believed a vast slave power conspiracy dominated national politics. Meanwhile, southerners saw an influential abolitionist element trying to eliminate slavery all over the country. Few people on either side fell into these extremist categories. But, northern and southern spokesmen felt compelled to criticize the other side and defend their position. As tensions mounted toward the end of the decade, people began to wonder if they could ever mend their differences. In 1858, William H. Seward outlined the notion of irrepressible conflict, in which the nation would have to choose to be all slave or all free. Northerners and southerners nonetheless did not necessarily think their differences would lead to a war.
The Northern Perspective
Northerners increasingly turned to ideas about free labor to explain the benefits of their society. A free labor system in which employers paid workers wages led to economic growth. New Yorker William Evarts suggested that labor was “the source of all our wealth, of all our progress, of all our dignity and value.” The system also provided opportunity for social mobility. The goal for most northerners was not great wealth, but economic independence. If they worked hard enough, they could improve their lives and enter the ranks of the middle class. Pennsylvanian Thaddeus Stevens recorded how “the middling classes who own the soil, and work it with their hands are the main support of every free government.” In the nineteenth century, most northerners also believed progress came from developing the economy, increasing social mobility, and spreading democratic institutions.
To the proponents of free labor, slavery robbed labor, both slave and free, of its dignity. Slavery denied workers social mobility. Since workers had no incentive, they became less productive. Economically speaking, they believed slavery led to mass poverty. However, northerners worried more about the effect a slave-based economy had on non-slaveholders than on slaves. They frequently commented on the lack of opportunity for poor whites to improve their social and economic standing. From the northern perspective, people born poor in the South remained poor. Northerners believed all the best qualities about a free labor society, such as hard work, frugality, and a spirit of industry, were lacking in the South. Many northerners, especially the Republicans, sought to create a free labor system in the South. They looked for government action to promote free labor; however, southern dominance of national political institutions, referred to sometimes as slave power, prevented that option.
The Southern Perspective
Southerners found the criticism of their lifestyle unwarranted. They believed courtesy, hospitality, and chivalry were the hallmarks of their way of life. When antislavery advocates became more vocal in the 1830s, southerners began to highlight the positive nature of slavery. Thomas R. Dew, a professor at William and Mary, relied on biblical and historical evidence to suggest how slavery benefited the master and the slave. To justify why only blacks became slaves in the South, Dew suggested the institution helped Africans become more civilized. Moreover, enslaving blacks brought greater liberty and equality to whites. By the 1850s, southern theorists like George Fitzhugh focused even more on racial inferiority to justify slavery. Fitzhugh argued in favor of the paternalistic nature of slavery, noting that “He [the Negro] is but a grown up child, and must be governed as a child, not as a lunatic or criminal. The master occupies toward him the place of parent or guardian.”
To the proponents of slavery, free labor did not benefit anyone. Alluding to the paternalistic nature of slavery, Virginian Edmond Ruffin suggested northern employers held their workers “under a much more stringent and cruel bondage, and in conditions of far greater…suffering than our negro slaves.” Slaves, moreover, did not have to worry about securing food, clothing, or shelter, since their masters provided those commodities. James Henry Hammond, basing his justification for slavery on the so-called mudsill theory, further suggested the benefits of slavery for southern whites. All societies had, he noted, a “mudsill class” or working class. In the South, slaves performed the menial and thankless tasks, leaving whites to pursue the fruits of civilization. In the North, the wage labor system meant whites performed the tasks of slaves and therefore had no real opportunity for advancement.
The Panic of 1857
The debate between the North and the South intensified after a financial panic hit the nation in 1857. American exports of grain increased between 1854 and 1856 because of the Crimean War in Europe. When the war ended, the market slumped. The war also pushed investors in Europe to sell off their American stocks and bonds. Both developments hurt the American economy. For much of the decade, economic growth caused a rise in western land prices, the overextension of the railroads, and risky loans by banks. When grain exports declined and European investment stopped, American banks began to fail. By the end of the year, hundreds of thousands of northern workers lost their jobs. Relief efforts helped the jobless to survive the winter and prevent a much-feared class war. By spring, the economy was on its way to recovery.
Southerners for the most part escaped the economic downturn. So, they boasted about the superiority of the plantation economy. Many even suggested cotton saved the North from financial ruin. Frustrated northerners blamed the South, with its constant demand for low tariffs, for the crisis. After the panic, a coalition of northern Republicans and Democrats pushed for an increase in the tariff, as well as land grant measures for farmers, the railroads, and colleges, to help prevent future economic problems. Southern obstruction of these efforts only made the sectional tensions worse. Southerners saw the measures as a way to promote a federally backed antislavery agenda; northerners, on the other hand, saw the slave power conspiracy at work.
15.4.2: The Crisis Continues
As northerners and southerners staked their claim to the Revolution’s legacy, the dispute about the future of slavery in the United States continued. The Supreme Court, under the leadership of Roger B. Taney, decided to step into the debate on the rights of slaves and slaveholders. Moreover, questions about Kansas’s proposed statehood continued to affect territorial authorities and national leaders. The sectional tensions also provided politicians with new challenges and opportunities, as evidenced by Abraham Lincoln’s reentry into politics as a Republican after the Kansas-Nebraska Act. In 1858, Lincoln challenged Stephen Douglas to a series of debates before the fall elections. He hoped to win a Republican majority in the state legislature in order to secure a position in the U.S. Senate.
The Dred Scott Decision
In 1846, Dred Scott sued for his freedom after his master Dr. John Emerson died. White friends encouraged Scott to file the suit because his master had taken him to live for a significant period in the free state of Illinois and the free territory of Wisconsin in the 1830s before returning to Missouri. Scott, his wife Harriet, and their daughter claimed residing in free territory made them free. Scott initially won freedom for his family in the Missouri courts. But on appeal, the Missouri Supreme Court reversed the decision. The court had previously awarded slaves their freedom in similar cases. Scott’s lawyers therefore took his suit to the federal courts. In 1854, the Missouri district court agreed to hear the case and subsequently upheld the decision to return the family to slavery.
The U.S. Supreme Court agreed to hear the case in 1856. Chief Justice Roger B. Taney hoped their decision in the case would be the final word on the constitutionality of the institution of slavery. The justices decided to delay their ruling until after the presidential election. According to James McPherson, the Court had three questions to answer in their decision. One, did Scott have the right to sue in federal court; in other words, was he a U.S. citizen? Two, did residence in a free territory for almost four years make him free? Three, did Congress have the authority to bar slavery in any territory; in other words, was the Missouri Compromise constitutional? Before James Buchanan’s inauguration, a majority of the Court seemed inclined to rule that Missouri law determined Scott’s status as a slave and to say nothing more.
However, Roger B. Taney encouraged his fellow southerners to issue a decision in order to put the matter of slavery in the territories to rest. Taney, a native of Maryland, had long wanted to write this decision; he had waited for years for the right opportunity to protect the southern way of life. The chief justice also knew the southern majority on the Court would need one northerner to go along as well. So, one of the southern justices asked the president-elect to put pressure on one of the northern justices. Whatever Buchanan felt about the impropriety of such a move, he shared with Taney a desire to settle the issue. He knew how poisonous the debate about slavery could be to his administration. Buchanan, in his inaugural address, suggested that the issue of the extension of slavery belonged with the Supreme Court, not Congress.
Two days after the inauguration, the Court issued its ruling in Dred Scott v. Sandford. Speaking for the majority, Taney declared Scott had no standing to sue in federal court because blacks could not be citizens of the United States. Technically, the decision should have ended there since, as once he declared Scott a non-citizen, nothing else mattered. However, Taney decided to address the remaining issues before the court in order to settle portions of the ongoing slavery debate. The chief justice said that residence in free territory did not make a slave free once he or she returned to slave territory. He further indicated that the Constitution upheld slavery because it protected private property and slaves were a form of property. Finally, he said Congress had no authority to bar slavery in the territories, making the Missouri Compromise unconstitutional.
According to Vernon Burton, “The Dred Scott ruling was pure joy for southerners.” Not only did the decision grant them protection for their human property, but also it confirmed their right to take slaves anywhere in the country. In other words, slavery was a national institution; the distinction between slave and free states no longer existed. After the decision, northerners could only destroy slavery through a constitutional amendment, and no southerner expected that to happen. The South also delighted in the idea that the decision would crush the hated Republican Party. Republicans, however, refused to accept Taney’s decision.
Republican papers lambasted the ruling. The Cleveland Leader called it “villainously false,” and the New York Tribune said it had “as much moral weight…the majority of those congregated in any Washington bar-room.” Moreover, Republicans argued the decision was not binding because it addressed matters not before the court, a practice known as obiter dictum. Northern legislatures with Republican majorities responded by passing laws reaffirming the citizenship of their black residents. The decision additionally gave many northern Democrats pause. It occurred to them that Taney also undermined popular sovereignty because the chief justice indicated voters could not exclude slavery from a territory. The decision hurt the Democrats more than the Republicans, especially in light of what happened in Kansas.
Whatever Roger B. Taney hoped to accomplish with his ruling, he certainly did not remove the question of slavery from politics. The decision in Dred Scott v. Sandford only made the sectional divide greater. From the northern perspective, everything they feared about southern slave power seemed to be coming true. From the southern perspective, the decision secured them from the onslaught of northern abolitionists and preserved the institution of slavery.
Before the presidential election of 1856, Franklin Pierce sent John W. Geary to Kansas as the new governor, since Wilson Shannon proved unable to end the conflict. Geary managed to quell the violence before the election, but the peace did not last. Looking at the election returns of 1856, southerners believed they needed more slave territory in order to prevent a Republican victory in 1860. They set their sights on Kansas, where the proslavery legislature still controlled the territory, even though the Free Soilers had a commanding majority in population. To maintain the peace, Geary asked the proslavery legislature to revise the antislavery acts. In response, the legislature made plans to revise the state constitution but indicated they would not seek a statewide referendum on the changes. Geary, shocked by their audacity, resigned his position.
After the Dred Scott decision, James Buchanan persuaded Mississippian Robert J. Walker to become governor of Kansas. The president asked him to oversee an orderly drafting of a constitution, which the people had an opportunity to vote on. Surprisingly, Walker had no real desire to see Kansas become a slave state. He encouraged the slaveholders to submit the Lecompton Constitution to the people for a vote, but they refused and sent the constitution to Congress, along with their petition for statehood. Walker then journeyed to Washington to consult with Buchanan and explain the situation, especially since the president told him to secure a referendum. Buchanan, facing pressure from his proslavery advisers, refused to accept that the majority of people in Kansas wanted to become a free state. Instead of rejecting the Lecompton Constitution, Buchanan asked Congress to admit Kansas as a slave state based on the provisions of the Dred Scott decision. At the time, the president firmly believed opposing the South would lead to secession.
Southerners who wanted a victory in Kansas believed they could win approval of the Lecompton Constitution, since the Democrats controlled Congress and they controlled the Democratic Party. At the same time, enough recognized the risk of their plan and encouraged the Kansas legislature to put the constitution to vote. What seemed like a major concession proved nothing more than a face-saving device. Voters could choose from a constitution with slavery or a constitution with no slavery that protected slave property in Kansas forever. Free Soil residents called it the “great swindle,” and criticism of the South’s malfeasance mounted in the North. Walker resigned when he realized that Buchanan no longer supported a fair referendum in Kansas.
Many northern Democrats opposed admitting Kansas as a slave state because it was not what the people wanted. Stephen Douglas met with Buchanan in December and pled with him not to support the Lecompton Constitution; otherwise, he would have to oppose the president in Congress. Buchanan apparently told Douglas to “remember that no Democrat ever yet differed from an administration of his own choice without being crushed.” In spite of the threat, Douglas knew he had to stand up to Buchanan over Kansas. If he did not, his future political career would be quite short since he staked his political reputation on the validity of popular sovereignty. Douglas worked with Republicans to defeat the Lecompton Constitution. Then the Kansans held two separate elections; one where only the proslavery forces voted, and one where only the antislavery forces voted. These elections made it apparent that the Free Soilers held a two-to-one majority and northerners could not accept Kansas as a slave state. In the wake of the vote, Kansas once again descended into violence.
The Lincoln-Douglas Debates
Into the 1850s, Illinois was one of the most southern-like northern states because so many southerners migrated there early in the century. Southern folkways pervaded the lower part of the state. Moreover, it had been a stronghold for the Democratic Party. Most residents, especially in the more rural regions of the state, loathed the idea of an active government. From the 1830s to the 1850s, the Democrats usually held a majority in the state legislature, and the state consistently voted Democratic for president. However, the debates over slavery by mid-decade allowed the newly formed Republican Party to gain some ground among Illinois voters. In 1858, the Republicans very much wanted to secure a seat in the U.S. Senate. If they could win a majority in the state legislature, then they could replace Stephen Douglas with someone opposed to slavery. Abraham Lincoln hoped the Republicans would choose him. Douglas, of course, looked for ways to prevent that outcome.
Kentucky-born Abraham Lincoln moved to Indiana as a boy and to central Illinois as a young man. Lincoln decided not to become a farmer like his father. He wanted to find work more in tune with the modern capitalist world, so he worked as a storekeeper, surveyor, and lawyer. By the 1840s, Lincoln was prosperous and respectable. Given his views about the market economy, Lincoln found his political beliefs more in line with the Whigs than the Democrats. Eric Foner asserts that Lincoln “saw government as an active force in promoting opportunity and advancement.” Although the Democrats dominated Illinois, Lincoln served four terms in the state legislature and one term in the U.S. House of Representatives. In the early 1850s, he returned to his law practice. However, the Kansas-Nebraska Act reinvigorated his desire to run for office.
With the Whigs in decline, Lincoln eventually found a home in the Republican Party. In a series of speeches in late 1854, Lincoln called slavery a “monstrous injustice” and suggested that slavery undermined “the very fundamental principles of civil liberty.” While he condemned slavery, Lincoln was no abolitionist. Like many Republicans, he had moderate racial views. He opposed human bondage, but he also opposed political or social equality for blacks. To Lincoln, slavery threatened the human ability to succeed; it robbed individuals of the freedom to better their condition. Thus, like other Republicans, he believed in free labor principles. His public pronouncements against slavery helped him win a seat in the state legislature in 1854. However, he resigned that seat so he could seek election to the U.S. Senate. The state legislature did not award Lincoln the position. His failure pushed him more toward the Republican Party as he cast his eye on Stephen Douglas’s seat in 1858.
As Douglas looked toward the elections in Illinois in 1858, he knew that, in order to retain his spot in the Senate, he needed to stand up to the president’s policy on the Lecompton Constitution. He purposely broke with Buchanan and precipitated a sectional divide in the Democratic Party because he needed to come across as anti-southern to Illinois voters. He also tried to reach out to Republican voters, but he failed to win the Republicans over. Rather, when party leaders met in June, they criticized popular sovereignty and Dred Scott. Moreover, they publicly supported Lincoln for the U.S. Senate seat, which parties did not normally do until after the state elections. In support of his campaign, Lincoln noted, “A house divided against itself cannot stand…this government cannot endure, permanently, half slave and half free. I do not expect the Union to be dissolved…but I do expect it will cease to be divided.” In other words, Lincoln asked the voters of Illinois to decide whether to support freedom or to support slavery.
Lincoln also challenged Douglas to a series of debates so he could expose the failings of his opponent’s position on slavery. Douglas agreed to seven meetings so he could do likewise. Lincoln focused his attention on how, during his career, Douglas had undermined the intentions of the Founding Fathers by supporting an extension of slavery into the territories. He forced Douglas to reconcile popular sovereignty with Dred Scott. In the Freeport Doctrine, named for the town where the second debate occurred, Douglas suggested residents of a territory could bar slavery by enacting “local police regulations,” a position he had made public several times before. Contemporaries argued the Freeport Doctrine helped drive a wedge in the Democratic Party. However, both James McPherson and Eric Foner point out that Douglas’s position on the Lecompton Constitution already caused a rift.
Meanwhile, Douglas exploited the race issue by labeling Lincoln a “Black Republican” and by telling voters about how free blacks such as Frederick Douglass were campaigning on Lincoln’s behalf. He further argued it was a “monstrous heresy” to suggest the Founding Fathers intended to make blacks citizens with equal rights. Finally, he claimed that only those who believed in black equality would vote for Lincoln. Countering the race issue became of major importance for Lincoln. In the fourth debate he said, “I will say then that I am not…in favor of bringing about in any way the social and political equality of the white and black races…I am as much as any other man in favor of having the superior position assigned to the white race.” At the same time, he continued to argue against the dehumanization of blacks.
Douglas managed to retain his seat in the Senate. However, Republicans did quite well in the elections. Had the state apportionment actually reflected the growth of the northern districts, Lincoln might have won. Nevertheless, Douglas reinforced his position as the leader of the northern Democrats. Still, Lincoln gained a great deal from the 1858 campaign. The debates highlighted the differences between Democrats and Republicans in the North. They also catapulted Lincoln into the national spotlight. Finally, they showed that Lincoln was more than up to the challenge of taking on Douglas in the presidential election of 1860.
15.4.3: The Road to Secession
By 1859, James Buchanan knew the issue of slavery had ruined his administration. Although he had hoped a Supreme Court ruling could quiet concerns about slavery, the Dred Scott decision poisoned the political atmosphere and ensured the next presidential election would focus on the future of slavery. The Lincoln-Douglas debates deepened the national division over slavery. But nothing proved more inflammatory than John Brown’s attempt to foment a widespread southern slave rebellion with his attack on Harper’s Ferry. As the election of 1860 approached, the Democratic Party stood as one of the few remaining national institutions. It too proved unable to maintain unity in the face of the slavery debate as it split into three factions. This division presented an opportunity for the Republican Party to win the presidency, which they did with the nomination of Lincoln. The election of a purely sectional party prompted South Carolina and six other states from the Lower South to secede from the Union.
John Brown’s Raid on Harper’s Ferry
In the years following his attack on proslavery forces at Pottawatomie Creek, John Brown’s devotion to the antislavery cause grew. While traveling around the North to raise funds for the Free Soil effort in Kansas, Brown developed a scheme to launch a guerilla attack against slavery. With a small band of men, both black and white, he planned to attack the federal arsenal at Harper’s Ferry, Virginia, where the Potomac and the Shenandoah Rivers meet. With the arsenal secure, Brown’s forces would move southward to incite slaves to rebel against their masters with the weapons from the arsenal. In 1858, he approached several abolitionists for financial support for the raid. The “Secret Six” agreed to help him purchase weapons.
Meanwhile, Brown looked for recruits, especially free blacks, to join his mission. In August, he approached Frederick Douglass about participating in the raid. Douglass, like many other black abolitionists, had concluded that slaves would only truly be free if they fought for their own emancipation. Brown reportedly told Douglass, “When I strike, the bees will begin to swarm, and I shall need you to help hive them.” Whatever Douglass thought about the use of violence, he said no because the plan seemed suicidal. Although many of his recruits never showed up, Brown decided to proceed anyway. He had twenty-two men: five blacks and seventeen whites, including three of his sons; with these men, he would launch his war against slavery.
On October 16, 1859, Brown and his raiders crossed from Maryland into Virginia. They quickly captured the arsenal. Then, however, things began to fall apart. Brown sent several men into the countryside to inform the slaves the time for a rebellion had come and to kidnap some prominent whites. The expected slave uprising never occurred. Local slaves might have wanted to rebel against their masters, but they would have been suspicious of any stranger supporting an insurrection. For all they knew, their owners could have been testing their loyalty. Moreover, word spread quickly to the white community of the impending attack. Local militia units converged on Harper’s Ferry; several raiders and locals died in the exchange of fire. On October 18, 1859, the U.S. Marines, under the command of Colonel Robert E. Lee and Lieutenant J.E.B. Stuart, arrived on the scene. They stormed the firehouse where Brown and his men had retreated during the confrontation with the locals. The Marines killed two of the raiders and captured the rest, including Brown.
While Brown accomplished nothing he set out to do, his attack inflamed passions in both the South and the North. Southerners called for Brown’s blood. Even though the attack happened on federal property, he stood trial for treason, murder, and incitement of a slave insurrection before the end of the month in Virginia. The judge sentenced him to death after the jury returned a guilty verdict. Brown was executed in early December. Southerners also wanted an investigation into the rumors that prominent northerners funded the raid. They saw the attack as a clear sign of the lengths abolitionists would go to undermine the southern way of life. For some time after the incident, anyone in the South who did not support the maintenance of slavery faced a real risk of coming to a violent end. Southerners did take comfort in several things after the raid. One, no slave flocked to Brown’s cause. Two, slaveholders and non-slaveholders united to fight off the invaders. Three, the federal government defended slavery.
The majority of northerners criticized John Brown’s raid, but his composure during his trial and when facing execution transformed public opinion. Brown, according to James McPherson, “understood his martyr role and cultivated it.” He refused to plead insanity and suggested he would forfeit his life to help end slavery. On the day of his execution, church bells tolled and guns fired salutes in his honor. Preachers gave eulogies emphasizing his martyrdom. Most people did not condone his tactics; rather, they agreed the time had come to do something about southern power, if not about slavery itself.
Democrats in the North condemned the incident in order to rebuild their ties with the South and to undermine support for the Republicans. They realized the distinction between thought and action did not impress most southerners; Stephen Douglas and others implied that Brown’s actions stemmed directly from Republican ideology. In response, leading Republicans, including William H. Seward and Abraham Lincoln, condemned Brown’s actions. Lincoln suggested that “John Brown was no Republican.” Without a doubt, Harper’s Ferry furthered the hostility between the North and the South. It also set the stage for the presidential election.
The Election of 1860
In April 1860, the Democratic Party met in Charleston, South Carolina, home of the “fire-eaters,” or those who claimed they would die defending slavery. John Brown’s raid had convinced many southerners the time had come to draw a line in the burgeoning conflict; they no longer saw northern Democrats as their allies. In fact, a few southern delegates hoped for a Republican victory because then southerners would have to choose submission or secession. Meanwhile, northern delegates felt constantly under attack as proslavery speakers extolled the virtue of slavery throughout the city. Given these feelings, the gathering got off to an inauspicious start.
Before choosing a candidate, party members had to agree on a party platform. Speaking for many southerners, Alabama’s William L. Yancey presented a proslavery platform to the convention delegates. It called for the nomination of a proslavery candidate. Furthermore, it demanded the adoption of a congressional slave code to protect slaveholders’ constitutional right to take their property to the territories. Speaking for many northerners, Stephen Douglas introduced an alternative platform. His platform supported the principle of popular sovereignty as well as respect for the Dred Scott decision. The platform committee leaned toward a proslavery platform; however, the delegates still had to vote. When Yancey linked the platform to the defense of southern honor, many delegates heartily cheered his assertion. Douglas’s supporters refused to yield.
In the end, the party delegates adopted the northern platform. Northerners outnumbered southerners in the polling because the party based state delegations on population. At that point, many of the southerners walked out of the convention. The meeting adjourned because there were not enough members present to nominate a presidential candidate. Two months later, northern Democrats met in Baltimore, Maryland; southern Democrats met in Richmond, Virginia. The two groups conferred with each other but were unable to resolve their differences. The northern Democrats nominated Stephen Douglas. The southern Democrats nominated Kentucky’s John C. Breckinridge, who was the vice president at the time. A third group of Democrats, along with some former Whigs, formed the Constitutional Union Party in an attempt to throw the election to the House of Representatives. They nominated Tennessee’s John Bell.
The split in the Democratic Party presented an excellent opportunity for the Republican Party to secure victory. They met in Chicago, Illinois. To win, however, the party needed to build on their showing in 1856. Somewhat expecting to lose California, Oregon, and possibly New Jersey, they directed the most attention to Pennsylvania, Illinois, and Indiana. Therefore, party leaders worked to develop a platform that dealt with more than just slavery. They also set out to choose a nominee who could reach the widest range of northern voters. Few Republicans expected to have any presence in the South. With respect to the platform, the party retained their stance against the expansion of slavery but condemned John Brown’s raid. They also promoted free homesteads in the West, a protective tariff, and a transcontinental railroad. Moreover, they supported immigrant political rights in order to ward off any lingering concerns about their ties to the nativist movement.
Most delegates knew the selection of a candidate was more important than the platform. The Republicans had a tough choice to make because they needed to find someone who could appeal to conservative and radical voters. Leading contenders for the nomination included Illinois’s Abraham Lincoln, Missouri’s Edward Bates, New York’s William H. Seward, Ohio’s Salmon P. Chase, and Pennsylvania’s Simon Cameron. Seward appeared strong going into the voting. Nevertheless, some leaders hoped to nominate a candidate who could help the party in its weaker states. They knew the Republicans would carry New York regardless of whether the party nominated the state’s favorite son. Moreover, many voters linked Seward with radical abolitionist sentiment because of his “Higher Law” speech. On the third ballot, Lincoln defeated Seward. Three things worked in Lincoln’s favor: party members saw him as a moderate, his humble origins made him an appealing candidate, and he came from the crucial state of Illinois.
The election disintegrated into two separate contests: Lincoln versus Douglas in the North and Breckinridge versus Bell in the South. Lincoln focused all of his efforts on the North; he did not even appear on the ballot in most southern states. Breckinridge, likewise, focused all of his attention on the South. Bell attempted to reach out to other unionists. Douglas broke with tradition and campaigned on his own behalf. He traveled all over the eastern part of the country before the election. In speech after speech, Douglas claimed only he could prevent disunion. Douglas’s effort, however, could not overcome the split in the Democratic Party, which guaranteed a Republican victory. Lincoln took all the free states except New Jersey, which he split with Douglas. Lincoln won just under 40 percent of the popular vote, only a plurality; yet even with their votes combined, the opposition could not have stopped him from winning the Electoral College.
The Secession Crisis
Before the 1860 election, southern leaders proclaimed disunion would follow if Lincoln won. William Yancey even toured the North in October. At his speaking engagements, he described how an end to slavery would destroy the southern way of life, even if the Republicans did not intend to abolish slavery where it already existed. Kentucky’s John J. Crittenden, a longtime unionist, echoed this sentiment. He noted many southerners concluded they had no choice but to secede if the Republicans triumphed. Many northerners, who had heard the threats before, discounted the possibility. Heeding them in the past only made the South more demanding. Buchanan won in 1856 because northern Democrats feared secession; his presidency led to the Dred Scott decision and the Lecompton Constitution. Some Republicans asked Lincoln to issue a statement to calm southern fears, but he chose not to. He reasoned little he might say would placate them.
South Carolina voted to secede from the Union in December. For years, secessionists in the state had waited for the right moment to leave the Union. Lincoln’s victory allowed the separatists to triumph at the state’s secession convention. Mississippi, Florida, Alabama, Georgia, Louisiana, and Texas soon followed suit. In each of these states, the debate over secession hinged on when and how, as opposed to whether they should. The southerners who left the Union believed they had the legal right to do so. Secessionists, as Jefferson Davis put it, sought to defend the liberty their fathers and grandfathers fought for during the Revolution. They championed the idea of states’ rights, noting the federal government should never infringe on their right to own property or to take that property anywhere in the country. To encourage non-slaveholders to support secession, they also used the ideas of white supremacy. Slavery made all whites, even poor whites, superior to blacks.
In February 1861, the seven seceded states met in Montgomery, Alabama to form the Confederate States of America. Four additional southern states, Virginia, North Carolina, Tennessee, and Arkansas, gave a warning to the federal government that if the government used force against the seceded states, then they too would leave the Union. Meanwhile, James Buchanan denied the southern states had the right to secede. He noted that “the Union shall be perpetual” and further suggested that preservation of the alliance trumped states’ rights. Nevertheless, he declared that the federal government had no authority to coerce a sovereign state. The president apparently hoped to encourage the two sides to compromise before he left office, since most northerners remained unsure as to the appropriate response to the southerners’ move.
Before Lincoln’s inauguration, various individuals and groups worked on some form of compromise to end the crisis. Senator John J. Crittenden led one of the most important efforts. His plan called for a constitutional amendment, which would recognize slavery as existing in all territories south of the Missouri Compromise line, the 36°30’ line. The amendment would also guarantee that the federal government would not attempt to tamper with the institution of slavery in the future. However, the compromise required the support of the president-elect. Lincoln refused to support the plan because it contradicted one of the main principles of the Republican Party, which was to stop the further spread of slavery into the territories. The Crittenden Compromise went nowhere, nor did any of the other proposals to avoid disunion. Every suggestion required the North, or the Republicans, to make all the concessions. In early 1861, the Republicans would not submit. Thus, the nation waited for Lincoln’s inauguration on March 4, 1861 to see whether secession would lead to war.
15.4.4: Before You Move On...
After James Buchanan took office, the United States continued down the road to disunion. While the country dealt with a financial crisis and the ongoing question of Kansas, the Supreme Court weighed in on the matter of slavery in the Dred Scott v. Sandford (1857) decision. Much to the delight of southerners, the Court asserted the right of slave owners to transport their slaves anywhere within the territories, whether that territory was free or permitted slavery. Conversely, the decision created a storm of protest in the northern states. The famous debates between Republican Abraham Lincoln and Democrat Stephen Douglas in 1858 as they vied for a position in the U.S. Senate deepened the national division over slavery. John Brown and his cohorts riveted national attention upon Harper’s Ferry with their failed attempt to foment a widespread southern slave rebellion in 1859.
As the critical presidential election of 1860 approached, the Democratic Party stood as one of the few remaining national institutions. It too proved unable to maintain unity in the face of the slavery debate as it split into three factions after its convention in Charleston, South Carolina. This three-way division among Stephen Douglas, John Breckinridge, and John Bell presented the Republican Party an opportunity to win the presidency, which they did with the nomination of Abraham Lincoln. After Lincoln’s election, South Carolina, followed by six other southern states, seceded from the Union. In February 1861, these states met in Montgomery, Alabama, and formed the Confederate States of America, setting the stage for a civil war.
In the Dred Scott v. Sandford decision, the Supreme Court
a. ruled that slaves who were taken to free states were free.
b. ruled that slaves who escaped must be returned to their owners.
c. stated that blacks did not have federal citizenship and could not bring suit in federal courts.
d. declared the Missouri Compromise constitutional.
In the Kansas territory, the proposed Lecompton Constitution showed the dominance of the Free Soilers.
What significant event occurred at the 1860 Democratic Convention in Charleston?
a. Southern delegates walked out.
b. Northern delegates walked out.
c. Delegates nominated Abraham Lincoln for the presidency.
d. Delegates nominated Jefferson Davis for the presidency.
Comparing Decimals Teacher Resources
In this math worksheet, students will work with a coach to compare and order numbers with decimals to the hundredths place. Students will follow a step-by-step process and notes are provided for the coach.
Learners practice converting fractions, decimals, and percentages to communicate equal amounts. They fill in a table, determine the percentage of a total area that is shaded in diagrams, complete equations with decimals to the hundredths place, order decimals and fractions, and complete four story problems. Links to online drills for conversions like the ones practiced in this exercise are included.
Create your own fraction kits by folding and labeling paper using fraction vocabulary. Learners then work in groups to use these in comparing and sequencing both whole numbers and fractions. They also create unit cubes and develop an understanding that one unit cube is equivalent to a fractional amount.
Students can practice sorting and rounding decimals in this review activity, which features several examples of each skill as well as twenty-two practice exercises. This worksheet will also work well in an electronic format, as each question allows students to check their answers.
In this comparing and ordering decimals worksheet, students compare pairs of decimals and order numbers from smallest to greatest. Students solve fifteen problems.
For this algebra worksheet, students are asked to convert between percents, fractions, and decimals. They calculate interest as it relates to a real-world scenario. There are 5 multi-part (a, b, c) questions.
Sixth graders review fractions and decimals. In this algebra lesson, 6th graders convert between fractions and decimals correctly. They show understanding of addition and subtraction of fractions using like terms.
Students examine the categories of the Dewey Decimal System. They categorize various grocery items into food categories, discuss a poster of the Dewey Decimal Classification system, and in small groups categorize twelve books.
In this ordering decimal numbers learning exercise, students first read a one page explanation about comparing and ordering decimal numbers. Students then solve ten problems in which a series of 5 decimal numbers is ordered from least to greatest.
Fourth graders complete a journal page to review decimal concepts. In this decimal lesson plan, 4th graders use manipulatives to represent decimal amounts. Students complete a homework sheet related to decimal and fractional parts.
In this math worksheet, learners will work with a coach to add and subtract numbers with decimals to the hundredths. Students will follow a step-by-step process and notes are provided for the coach.
Small groups work together to brainstorm ways we communicate decimals. They practice using word form, expanded form, and standard form, as well as fractions, money, and percentages. They sort word cards, matching decimals expressed in standard, word, and expanded form. Includes links to interactive games, word cards and charts for sorting, an online self-check quiz, exit tickets, and more.
Fifth and sixth graders read the information on how to compare decimal numbers to the thousandths place. They solve seven problems with comparing and ordering.
Knowing both the Dewey Decimal and Library of Congress Classification systems, and how to read their call numbers, facilitates information literacy, a critical skill for 21st century life. High schoolers, who may be familiar with Dewey, learn to navigate the LCC system, compare the two in a Venn diagram, and create call numbers for books and subjects in their class schedules. Links in the resource are missing; we've added 3 so everything you need is a click away.
Pupils complete a multiple choice pre-test based on ordering whole numbers and comparing decimals. This is an interactive worksheet.
Students explore word problems involving decimals. In this math lesson, students discuss the steps in solving these problems. Additionally, students write in the steps of solving a problem in a chart.
Third graders create rain gauges and take them home. Individually, they record the rainfall amounts at all of their homes over a 2-week period and then bring the data back to class so they can compare the different amounts of rainfall that occurred over a relatively small geographic area.
In this comparing numbers worksheet, students observe sets of two negative numbers and write the symbols for greater than, less than, or equal. Students solve 21 problems.
In this decimal comparison activity, students fill in the place holder with the correct less than, greater than, or equal to symbol. There are 21 problems for students to complete.
Fourth graders use fraction strips to compare and order fractions. They identify various ways a figure can be divided, find equivalent fractions, and recognize and order fractions with the denominators 2, 3, 4, 5, 6, 8, 10, and 12.
The cranial nerves contain the sensory and motor nerve fibers that innervate the head. The cell bodies of the sensory neurons lie either in receptor organs (e.g., the nose for smell, or the eye for vision) or within cranial sensory ganglia, which lie along some cranial nerves (V, VII–X) just external to the brain. The cranial sensory ganglia are directly comparable to the dorsal root ganglia on the spinal nerves. The cell bodies of most cranial motor neurons occur in cranial nerve nuclei in the ventral gray matter of the brain stem—just as cell bodies of spinal motor neurons occur in the ventral gray matter of the spinal cord.
The twelve cranial nerves and the origins of their names are briefly described below.
- I. Olfactory. This is the sensory nerve of smell.
- II. Optic. Because it develops as an outgrowth of the brain, this sensory nerve of vision is not a true nerve at all. It is more correctly called a brain tract.
- III. Oculomotor. The name oculomotor means “eye mover.” This nerve innervates four of the extrinsic eye muscles—muscles that move the eyeball in the orbit.
- IV. Trochlear. The name trochlear means “pulley.” This nerve innervates an extrinsic eye muscle that hooks through a pulley-shaped ligament in the orbit.
- V. Trigeminal. The name trigeminal means “threefold,” which refers to this nerve’s three major branches. The trigeminal nerve provides general sensory innervation to the face and motor innervation to the chewing muscles.
- VI. Abducens. This nerve was so named because it innervates the muscle that abducts the eyeball (turns the eye laterally).
- VII. Facial. This nerve innervates the muscles of facial expression as well as other structures.
- VIII. Vestibulocochlear. This sensory nerve of hearing and equilibrium was once called the auditory nerve.
- IX. Glossopharyngeal. The name glossopharyngeal means “tongue and pharynx,” structures that this nerve helps to innervate.
- X. Vagus. The name vagus means “wanderer.” This nerve “wanders” beyond the head into the thorax and abdomen.
- XI. Accessory. This nerve was once called the spinal accessory nerve. It originates from the cervical region of the spinal cord, enters the skull through the foramen magnum, and exits the skull with the vagus nerve. The accessory nerve carries motor innervation to the trapezius and sternocleidomastoid muscles.
- XII. Hypoglossal. The name hypoglossal means “below the tongue.” This nerve runs inferior to the tongue and innervates the tongue muscles.
Based on the types of fibers they contain, the 12 cranial nerves can be classified into three functional groups:
1. Primarily or exclusively sensory nerves (I, II, VIII) that contain special sensory fibers for smell (I), vision (II), and hearing and equilibrium (VIII).
2. Primarily motor nerves (III, IV, VI, XI, XII) that contain somatic motor fibers to skeletal muscles of the eye, neck, and tongue.
3. Mixed (motor and sensory) nerves (V, VII, IX, X). These mixed nerves supply sensory innervation to the face (through general somatic sensory fibers) and to the mouth and viscera (general visceral sensory), including the taste buds for the sense of taste (special visceral sensory). These nerves also innervate pharyngeal arch muscles (somatic motor), such as the chewing muscles (V) and the muscles of facial expression (VII).
Additionally, four of the cranial nerves (III, VII, IX, X) contain visceral motor fibers that regulate visceral muscle and glands throughout much of the body. These motor fibers belong to the parasympathetic division of the autonomic nervous system. The autonomic nervous system innervates body structures through chains of two motor neurons. The cell bodies of the second neurons occupy autonomic motor ganglia in the peripheral nervous system. The locations of these peripheral autonomic ganglia are described in the pathways of these four nerves.
Cranial nerves are traditionally classified as sensory (I, II, VIII), motor (III, IV, VI, XI, XII), or mixed (V, VII, IX, X). In reality, only cranial nerves I and II (for smell and vision) are purely sensory, whereas all of the rest contain both afferent and efferent fibers and are therefore mixed nerves. Those traditionally classified as motor not only stimulate muscle contractions but also contain sensory fibers of proprioception, which provide the brain with feedback for controlling muscle action and make one aware of such things as the position of the tongue and orientation of the head. Cranial nerve VIII, concerned with hearing and equilibrium, is traditionally classified as sensory, but it also has motor fibers that return signals to the inner ear and tune it to sharpen the sense of hearing. The nerves traditionally classified as mixed have sensory functions quite unrelated to their motor functions. For example, the facial nerve (VII) has a sensory role in taste and a motor role in controlling facial expressions.
Figure 1. 12 Cranial nerves
Cranial nerves mnemonic
The following mnemonic phrase can help you remember the first letters of the names of the 12 cranial nerves in their proper order:
“Oh, Oh, Oh, To Touch And Feel Very Good Velvet, AH!”
Cranial nerves function
I. Olfactory Nerve
This is the nerve for the sense of smell. It consists of several separate fascicles that pass independently through the cribriform plate in the roof of the nasal cavity. It is not visible on brains removed from the skull because these fascicles are severed by removal of the brain.
Sensory function: Special visceral sensory, sense of smell.
Effect of Damage: Impaired sense of smell. Fracture of the ethmoid bone or lesions of olfactory fibers may result in partial or total loss of smell, a condition known as anosmia.
Origin Olfactory receptor cells (bipolar neurons) in the olfactory epithelium of the nasal cavity.
Pathway: Pass through the cribriform foramina of the ethmoid bone to synapse in the olfactory bulb. Fibers of olfactory bulb neurons extend posteriorly beneath the frontal lobe as the olfactory tract. Terminate in the primary olfactory cortex of the cerebrum.
Figure 1. Olfactory nerve (Cranial nerve 1)
II. Optic Nerve
This is the nerve for vision.
Sensory function: Special somatic sensory, vision.
Effect of Damage: Damage to an optic nerve results in blindness in the eye served by the nerve; damage to the visual pathway distal to the optic chiasma results in partial visual losses; visual defects are called anopsias.
Origin: Retina of the eye.
Pathway: Pass through the optic canal of the sphenoid bone. Optic nerves converge to form the optic chiasma, where fibers partially cross over, then continue as the optic tracts to synapse in the thalamus. Thalamic fibers project to and terminate in the primary visual cortex in the occipital lobe.
Figure 2. Optic nerve (Cranial nerve 2)
III. Oculomotor Nerve
This is chiefly a motor nerve for eye movement.
Somatic motor function: Innervate four extrinsic eye muscles that direct the eyeball: superior rectus, medial rectus, inferior rectus, inferior oblique muscles. Innervate levator palpebrae superioris muscle that elevates the upper eyelid. Afferent proprioceptor fibers return from the extrinsic eye muscles.
Visceral motor function (parasympathetic): Constrictor muscles of the iris constrict the pupil. Ciliary muscle controls lens shape.
Effect of Damage: Because the actions of the two extrinsic eye muscles not served by cranial nerve III are unopposed, the eye cannot be moved up or inward, and at rest the eye turns laterally (external strabismus). The upper eyelid droops (ptosis), and the person has double vision.
Origin: Oculomotor nuclei in the ventral midbrain.
Pathway: Pass through the superior orbital fissure to enter the orbit. Parasympathetic fibers from the brain stem synapse with postganglionic neurons in the ciliary ganglion that innervate the iris and ciliary muscle.
Figure 3. Oculomotor nerve (Cranial nerve 3)
IV. Trochlear Nerve
This is a motor nerve for eye movement.
Somatic motor function: Innervate the superior oblique muscle. This muscle passes through a ligamentous pulley at the roof of the orbit, the trochlea, from which its name is derived. Afferent proprioceptor fibers return from the superior oblique.
Effect of Damage: Damage to a trochlear nerve results in double vision and reduced ability to rotate the eye inferolaterally.
Origin: Trochlear nuclei in the dorsal midbrain.
Pathway: Pass ventrally around the midbrain; pass through the superior orbital fissure to enter the orbit.
Figure 4. Trochlear nerve (Cranial nerve 4)
V. Trigeminal Nerve
The large trigeminal nerve forms three divisions (trigeminal = threefold): ophthalmic (V1), maxillary (V2), and mandibular (V3) divisions.
This mixed nerve is the general somatic sensory nerve of the face for touch, temperature, and pain. The mandibular division supplies somatic motor innervation to the chewing muscles.
- V1 General somatic sensation from skin of anterior scalp and forehead, upper eyelid and nose, nasal cavity mucosa, cornea, and lacrimal gland.
- V2 General somatic sensation from skin of cheek, upper lip, and lower eyelid, nasal cavity mucosa, palate, upper teeth.
- V3 General somatic sensation from skin of chin and temporal region of scalp, anterior tongue and lower teeth.
Somatic motor function: V3 Innervate the muscles of mastication: temporalis, masseter, pterygoids, anterior belly of digastric. Afferent proprioceptor fibers return from these muscles.
Clinical significance: Anesthesia for Upper and Lower Jaws. Dentists desensitize upper and lower jaws by injecting local anesthetic (such as Novocain) into alveolar branches of the maxillary and mandibular divisions of the trigeminal nerve, respectively. This blocks pain-transmitting fibers from the teeth, and the surrounding tissues become numb.
Origin: Sensory receptors in skin and mucosa of face. Motor fibers from trigeminal motor nucleus in pons.
Pathway: Cell bodies of sensory neurons of all three divisions located in the large trigeminal ganglion. Fibers extend to trigeminal nuclei in the pons.
Through the Skull: V1 Superior orbital fissure. Cutaneous Branch: Supraorbital foramen.
Through the Skull: V2 Foramen rotundum. Cutaneous Branch: Infraorbital foramen.
Through the Skull: V3 Foramen ovale and Mandibular foramen. Cutaneous Branch: Mental foramen.
Figure 5. Trigeminal nerve (Cranial nerve 5)
VI. Abducens Nerve
Somatic motor function: Innervate the lateral rectus muscle. This muscle abducts the eye. Afferent proprioceptor fibers return from the lateral rectus.
Effect of Damage: In abducens nerve paralysis, the eye cannot be moved laterally; at rest, affected eyeball turns medially (internal strabismus).
Origin: Abducens nuclei in the inferior pons.
Pathway: Pass through the superior orbital fissure to enter the orbit.
Figure 6. Abducens nerve (Cranial nerve 6)
VII. Facial Nerve
A mixed nerve: Chief somatic motor nerve to the facial muscles; parasympathetic innervation to glands; special sensory taste from the tongue.
Sensory function: Special visceral sensory from taste buds on anterior two-thirds of tongue. General somatic sensory from small patch of skin on the ear.
Somatic motor function: Five major branches on face: temporal, zygomatic, buccal, mandibular, and cervical, to innervate the facial muscles. Also innervates the posterior belly of digastric. Afferent proprioceptor fibers return from these muscles.
Visceral motor function (parasympathetic): Innervate the lacrimal (tear) glands, nasal and palatine glands, and the submandibular and sublingual salivary glands.
Effect of Damage: Bell’s palsy, characterized by paralysis of facial muscles on affected side and partial loss of taste sensation, may develop rapidly (often overnight). It is caused by herpes simplex (viral) infection, which produces inflammation and swelling of the facial nerve. The lower eyelid droops, the corner of the mouth sags (making it difficult to eat or speak normally), and the eye constantly drips tears and cannot be completely closed. The condition may disappear spontaneously without treatment.
Origin: Fibers emerge from the pons, just lateral to abducens.
Pathway: Fibers enter the temporal bone via the internal acoustic meatus. Chorda tympani branches off to innervate the two salivary glands and tongue. Branch to facial muscles emerges from the temporal bone through the stylomastoid foramen and courses to lateral aspect of face. Cell bodies of sensory neurons are in geniculate ganglion. Cell bodies of postganglionic parasympathetic neurons are in pterygopalatine and submandibular ganglia on the trigeminal nerve.
Figure 7. Facial nerve (Cranial nerve 7)
VIII. Vestibulocochlear Nerve
Sensory function: Vestibular branch: Special somatic sensory, equilibrium. Cochlear branch: Special somatic sensory, hearing. Small motor component adjusts the sensitivity of the sensory receptors.
Effect of Damage: Lesions of cochlear nerve or cochlear receptors result in central or nerve deafness, whereas damage to vestibular division produces dizziness, rapid involuntary eye movements, loss of balance, nausea, and vomiting.
Origin: Sensory receptors in the inner ear for hearing (within the cochlea) and for equilibrium (within the semicircular canals and vestibule).
Pathway: From the inner ear cavity within the temporal bone, fibers pass through the internal acoustic meatus, merge to form the vestibulocochlear nerve and enter the brain stem at the pons. Sensory nerve cell bodies for vestibular branch located in vestibular ganglia; for the cochlear branch, in the spiral ganglia within the cochlea.
Figure 8. Vestibulocochlear nerve (Cranial nerve 8)
IX. Glossopharyngeal Nerve
Mixed nerve innervating the tongue (general and special sensory), the pharynx, and the parotid salivary gland.
Sensory function: Special visceral sensory from taste buds on posterior third of tongue. General visceral sensory from posterior third of tongue, pharyngeal mucosa, chemoreceptors in the carotid body (which monitor O2 and CO2 in the blood and regulate respiratory rate and depth), and baroreceptors of carotid sinus (regulate blood pressure). General somatic sensory from small area of skin on external ear.
Somatic motor function: Innervate a pharyngeal muscle, stylopharyngeus, which elevates the pharynx during swallowing. Afferent proprioceptor fibers return from this muscle.
Effect of Damage: Injury or inflammation of glossopharyngeal nerves impairs swallowing and taste on the posterior third of the tongue.
Origin: Fibers emerge from the medulla oblongata.
Pathway: Fibers pass through the jugular foramen and travel to the pharynx. Cell bodies of sensory neurons are located in the superior and inferior ganglia. Cell bodies of postganglionic parasympathetic neurons are in otic ganglion on the trigeminal nerve.
Figure 9. Glossopharyngeal nerve (Cranial nerve 9)
X. Vagus Nerve
A mixed nerve; its major function is parasympathetic innervation of the thoracic and abdominal viscera.
Sensory function: General visceral sensory from the thoracic and abdominal viscera, mucosa of larynx and pharynx, carotid sinus (baroreceptor for blood pressure), and carotid and aortic bodies (chemoreceptors for respiration). Special visceral sensory from taste buds on the epiglottis. General somatic sensory from small area of skin on external ear.
Somatic motor function: Innervates skeletal muscles of the pharynx and larynx involved in swallowing and vocalization. Afferent proprioceptor fibers return from the muscles of the larynx and pharynx.
Visceral motor function (parasympathetic): Innervates the heart, lungs, and abdominal viscera through the transverse colon. Regulates heart rate, breathing, and digestive system activity.
Effect of Damage: Vagal nerve paralysis can lead to hoarseness or loss of voice, difficulty swallowing and impaired digestive system motility. Total destruction of both vagus nerves is incompatible with life, because these parasympathetic nerves are crucial in maintaining the normal state of visceral organ activity; without their influence, the activity of the sympathetic nerves, which mobilize and accelerate vital body processes (and shut down digestion), would be unopposed.
Origin: Fibers emerge from medulla oblongata.
Pathway: Fibers exit the skull through the jugular foramen and descend through the neck into the thorax and abdomen.
Figure 10. Vagus nerve (Cranial nerve 10)
XI. Accessory Nerve
Somatic motor function: Innervate the trapezius and sternocleidomastoid muscles that move the head and neck. Afferent proprioceptor fibers return from these muscles.
Effect of Damage: Injury to the spinal root of one accessory nerve causes the head to turn toward the side of the injury as result of sternocleidomastoid muscle paralysis; shrugging of that shoulder (role of trapezius muscle) becomes difficult.
Origin: Forms from ventral rootlets arising from C1–C5 of the spinal cord. The accessory nerve was long considered to have both a cranial and a spinal portion, but the cranial rootlets have been shown to be part of the vagus nerve.
Pathway: Upon emerging from the spinal cord, spinal rootlets merge to form the accessory nerves, pass into the skull through the foramen magnum, and then exit the skull through the jugular foramen.
Figure 11. Accessory nerve (Cranial nerve 11)
XII. Hypoglossal Nerve
Somatic motor function: Innervate the intrinsic and extrinsic muscles of the tongue. Aid tongue movements during feeding, swallowing, and speech. Afferent proprioceptor fibers return from these muscles.
Effect of Damage: Damage to hypoglossal nerves causes difficulties in speech and swallowing. If both nerves are impaired, the person cannot protrude the tongue; if only one side is affected, the tongue deviates (leans) toward affected side. Eventually the paralyzed side begins to atrophy.
Origin: From a series of roots from the hypoglossal nuclei in the ventral medulla oblongata.
Pathway: Exit the skull through the hypoglossal canal and travel to the tongue.
Figure 12. Hypoglossal nerve (Cranial nerve 12)
Starting with the number 180, take away 9 again and again, joining up the dots as you go. Watch out - don't join all the dots!
Can you see why 2 by 2 could be 5? Can you predict what 2 by 10
Choose a symbol to put into the number sentence.
Can you make a cycle of pairs that add to make a square number using all the numbers in the box below, once and once only?
Place the numbers from 1 to 9 in the squares below so that the difference between joined squares is odd. How many different ways can you do this?
Start by putting one million (1 000 000) into the display of your calculator. Can you reduce this to 7 using just the 7 key and add, subtract, multiply, divide and equals as many times as you like?
This task, written for the National Young Mathematicians' Award 2016, involves open-topped boxes made with interlocking cubes. Explore the number of units of paint that are needed to cover the boxes. . . .
If you have only four weights, where could you place them in order to balance this equaliser?
Imagine a pyramid which is built in square layers of small cubes. If we number the cubes from the top, starting with 1, can you picture which cubes are directly below this first cube?
Make your own double-sided magic square. But can you complete both sides once you've made the pieces?
Can you put the numbers 1 to 8 into the circles so that the four calculations are correct?
Place the numbers 1 to 10 in the circles so that each number is the difference between the two numbers just below it.
The idea of this game is to add or subtract the two numbers on the dice and cover the result on the grid, trying to get a line of three. Are there some numbers that are good to aim for?
This article gives you a few ideas for understanding the Got It! game and how you might find a winning strategy.
First Connect Three game for an adult and child. Use the dice numbers and either addition or subtraction to get three numbers in a straight line.
Find the values of the nine letters in the sum: FOOT + BALL = GAME
Different combinations of the weights available allow you to make different totals. Which totals can you make?
This challenge extends the Plants investigation so now four or more children are involved.
Here you see the front and back views of a dodecahedron. Each vertex has been numbered so that the numbers around each pentagonal face add up to 65. Can you find all the missing numbers?
Place six toy ladybirds into the box so that there are two ladybirds in every column and every row.
This challenging activity involves finding different ways to distribute fifteen items among four sets, when the sets must include three, four, five and six items.
How have the numbers been placed in this Carroll diagram? Which labels would you put on each row and column?
In a square in which the houses are evenly spaced, numbers 3 and 10 are opposite each other. What is the smallest and what is the largest possible number of houses in the square?
Try entering different sets of numbers in the number pyramids. How does the total at the top change?
The letters in the following addition sum represent the digits 1 ... 9. If A=3 and D=2, what number is represented by "CAYLEY"?
Can you each work out the number on your card? What do you notice? How could you sort the cards?
If you take a three by three square on a 1-10 addition square and multiply the diagonally opposite numbers together, what is the difference between these products? Why?
Can you explain the strategy for winning this game with any target?
We start with one yellow cube and build around it to make a 3x3x3 cube with red cubes. Then we build around that red cube with blue cubes and so on. How many cubes of each colour have we used?
This problem is based on a code using two different prime numbers less than 10. You'll need to multiply them together and shift the alphabet forwards by the result. Can you decipher the code?
Here is a chance to play a version of the classic Countdown Game.
Delight your friends with this cunning trick! Can you explain how
Choose four different digits from 1-9 and put one in each box so that the resulting four two-digit numbers add to a total of 100.
Five numbers added together in pairs produce: 0, 2, 4, 4, 6, 8, 9, 11, 13, 15. What are the five numbers?
This is an adding game for two players.
An environment which simulates working with Cuisenaire rods.
There are 4 jugs which hold 9 litres, 7 litres, 4 litres and 2 litres. Find a way to pour 9 litres of drink from one jug to another until you are left with exactly 3 litres in three of the jugs.
A game for 2 people using a pack of cards. Turn over 2 cards and try to make an odd number or a multiple of 3.
There are 44 people coming to a dinner party. There are 15 square tables that seat 4 people. Find a way to seat the 44 people using all 15 tables, with no empty places.
What do the digits in the number fifteen add up to? How many other numbers have digits with the same total but no zeros?
Can you find which shapes you need to put into the grid to make the totals at the end of each row and the bottom of each column?
Sweets are given out to party-goers in a particular way. Investigate the total number of sweets received by people sitting in different positions.
What do you notice about the date 03.06.09? Or 08.01.09? This challenge invites you to investigate some interesting dates.
How could you put eight beanbags in the hoops so that there are four in the blue hoop, five in the red and six in the yellow? Can you find all the ways of doing this?
A game for 2 people. Use your skills of addition, subtraction, multiplication and division to blast the asteroids.
A game for 2 or more players with a pack of cards. Practise your skills of addition, subtraction, multiplication and division to hit the target score.
Exactly 195 digits have been used to number the pages in a book. How many pages does the book have?
Tim had nine cards each with a different number from 1 to 9 on it. How could he have put them into three piles so that the total in each pile was 15?
Do you notice anything about the solutions when you add and/or subtract consecutive negative numbers?
Is it possible to rearrange the numbers 1, 2, ..., 12 around a clock face in such a way that every two numbers in adjacent positions differ by any of 3, 4 or 5 hours?
The Ultimate Guide to ANOVA
Get all of your ANOVA questions answered here.
ANOVA is the go-to analysis tool for classical experimental design, which forms the backbone of scientific research.
In this article, we’ll guide you through what ANOVA is, how to determine which version to use to evaluate your particular experiment, and provide detailed examples for the most common forms of ANOVA.
This includes a (brief) discussion of crossed, nested, fixed and random factors, and covers the majority of ANOVA models that a scientist would encounter before requiring the assistance of a statistician or modeling expert.
What is ANOVA used for?
ANOVA, or (Fisher’s) analysis of variance, is a critical analytical technique for evaluating differences between three or more sample means from an experiment. As the name implies, it partitions out the variance in the response variable based on one or more explanatory factors.
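To make the idea of partitioning variance concrete, here is a minimal sketch in Python; the group names and measurements are invented for illustration (they are not the NET data described in the next paragraph). It simply verifies that the between-group and within-group sums of squares add back up to the total variation.

```python
import numpy as np

# Invented measurements for three hypothetical patient groups
groups = {
    "influenza": np.array([4.1, 5.0, 4.6, 5.3]),
    "rsv":       np.array([6.2, 5.8, 6.9, 6.4]),
    "covid":     np.array([7.5, 8.1, 7.0, 7.8]),
}

all_values = np.concatenate(list(groups.values()))
grand_mean = all_values.mean()

# Between-group sum of squares: spread of the group means around the grand mean
ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups.values())
# Within-group sum of squares: spread of observations around their own group mean
ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups.values())
# Total sum of squares: partitions exactly into the two pieces above
ss_total = ((all_values - grand_mean) ** 2).sum()

print(ss_between + ss_within, ss_total)  # equal, up to floating-point error
```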
As you will see, there are many types of ANOVA, such as one-, two-, and three-way ANOVA, as well as nested and repeated measures ANOVA. The graphic below shows a simple example of an experiment that requires ANOVA, in which researchers measured the levels of neutrophil extracellular traps (NETs) in plasma across patients with different viral respiratory infections.
Many researchers may not realize that, for the majority of experiments, the characteristics of the experiment that you run dictate the ANOVA that you need to use to test the results. While it’s a massive topic (with professional training needed for some of the advanced techniques), this is a practical guide covering what most researchers need to know about ANOVA.
When should I use ANOVA?
If your response variable is numeric, and you’re looking for how that number differs across several categorical groups, then ANOVA is an ideal place to start. After running an experiment, ANOVA is used to analyze whether there are differences between the mean response of one or more of these grouping factors.
ANOVA can handle a large variety of experimental factors such as repeated measures on the same experimental unit (e.g., before/during/after).
If, instead of evaluating treatment differences, you want to develop a model that uses a set of numeric variables to predict a numeric response variable, see linear regression instead.
What is the difference between one-way, two-way and three-way ANOVA?
The number of “ways” in ANOVA (e.g., one-way, two-way, …) is simply the number of factors in your experiment.
Although the difference in names sounds trivial, the complexity of ANOVA increases greatly with each added factor. To use an example from agriculture, let’s say we have designed an experiment to research how different factors influence the yield of a crop.
An experiment with a single factor
In the most basic version, we want to evaluate three different fertilizers. Because we have more than two groups, we have to use ANOVA. Since there is only one factor (fertilizer), this is a one-way ANOVA. One-way ANOVA is the easiest to analyze and understand, but probably not that useful in practice, because having only one factor is a pretty simplistic experiment.
What happens when you add a second factor?
If we have two different fields, we might want to add a second factor to see if the field itself influences growth. Within each field, we apply all three fertilizers (which is still the main interest). This is called a crossed design. In this case we have two factors, field and fertilizer, and would need a two-way ANOVA.
As you might imagine, this makes interpretation more complicated (although still very manageable) simply because more factors are involved. There is now a fertilizer effect, as well as a field effect, and there could be an interaction effect, where the fertilizer behaves differently on each field.
How about adding a third factor?
Finally, it is possible to have more than two factors in an ANOVA. In our example, perhaps you also wanted to test out different irrigation systems. You could have a three-way ANOVA due to the presence of fertilizer, field, and irrigation factors. This greatly increases the complication.
Now in addition to the three main effects (fertilizer, field and irrigation), there are three two-way interaction effects (fertilizer by field, fertilizer by irrigation, and field by irrigation), and one three-way interaction effect.
If any of the interaction effects are statistically significant, then presenting the results gets quite complicated. “Fertilizer A works better on Field B with Irrigation Method C ….”
In practice, two-way ANOVA is often as complex as many researchers want to get before consulting with a statistician. That being said, three-way ANOVAs are cumbersome, but manageable when each factor only has two levels.
What are crossed and nested factors?
In addition to increasing the difficulty with interpretation, experiments (or the resulting ANOVA) with more than one factor add another level of complexity, which is determining whether the factors are crossed or nested.
With crossed factors, every combination of levels among each factor is observed. For example, each fertilizer is applied to each field (so the fields are subdivided into three sections in this case).
With nested factors, different levels of a factor appear within another factor. An example is applying different fertilizers to each field, such as fertilizers A and B to field 1 and fertilizers C and D to field 2. See more about nested ANOVA here.
What are fixed and random factors?
Another challenging concept with two or more factors is determining whether to treat the factors as fixed or random.
Fixed factors are used when all levels of a factor (e.g., Fertilizer A, Fertilizer B, Fertilizer C) are specified and you want to determine the effect that factor has on the mean response.
Random factors are used when you observe only some levels of a factor (e.g., Field 1, Field 2, Field 3) out of a large or effectively infinite number of possible levels (e.g., all fields). Because you haven’t observed every possible level, you can’t estimate each level’s effect directly; instead, you quantify the variability that the factor contributes (the variability added within each field).
Many introductory courses on ANOVA only discuss fixed factors, and we will largely follow suit other than with two specific scenarios (nested factors and repeated measures).
What are the (practical) assumptions of ANOVA?
These are the assumptions for one-way ANOVA, but they also carry over to more complicated two-way or repeated measures ANOVA.
- Categorical treatment or factor variables - ANOVA evaluates mean differences between one or more categorical variables (such as treatment groups), which are referred to as factors or “ways.”
- Three or more groups - There must be at least three distinct groups (or levels of a categorical variable) across all factors in an ANOVA. The possibilities are endless: one factor of three different groups, two factors of two groups each (2x2), and so on. If you have fewer than three groups, you can probably get away with a simple t-test.
- Numeric Response - While the groups are categorical, the data measured in each group (i.e., the response variable) still needs to be numeric. ANOVA is fundamentally a quantitative method for measuring the differences in a numeric response between groups. If your response variable isn’t continuous, then you need a more specialized modelling framework such as logistic regression or chi-square contingency table analysis to name a few.
- Random assignment - The makeup of each experimental group should be determined by random selection.
- Normality - The distribution within each factor combination should be approximately normal, although ANOVA is fairly robust to this assumption as the sample size increases due to the central limit theorem. (A quick way to check this, along with the equal-variance assumption discussed later, is sketched just below this list.)
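As a practical note, here is a minimal sketch (with made-up measurements) of how the normality and equal-variance assumptions can be checked before running the analysis; Shapiro-Wilk and Levene's tests are common, though not the only, diagnostics:

```python
# A minimal sketch of pre-ANOVA diagnostic checks on hypothetical data.
# Shapiro-Wilk tests approximate normality within each group;
# Levene's test checks whether the group variances are roughly equal.
from scipy import stats

groups = {
    "control":   [4.2, 4.8, 5.1, 4.9, 5.3],
    "formula_a": [6.1, 6.5, 6.8, 6.0, 6.7],
    "formula_b": [5.0, 5.4, 5.2, 5.6, 5.1],
}

for name, values in groups.items():
    _, p = stats.shapiro(values)            # H0: the data are normally distributed
    print(f"{name}: Shapiro-Wilk p = {p:.3f}")

_, p = stats.levene(*groups.values())       # H0: the group variances are equal
print(f"Levene's test p = {p:.3f}")
```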
What is the formula for ANOVA?
The formula to calculate ANOVA varies depending on the number of factors, assumptions about how the factors influence the model (blocking variables, fixed or random effects, nested factors, etc.), and any potential overlap or correlation between observed values (e.g., subsampling, repeated measures).
The good news about running ANOVA in the 21st century is that statistical software handles the majority of the tedious calculations. The main thing that a researcher needs to do is select the appropriate ANOVA.
An example formula for a two-factor crossed ANOVA is:
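y_{ijk} = \mu + \alpha_i + \beta_j + (\alpha\beta)_{ij} + \varepsilon_{ijk}

Here y_{ijk} is the k-th observation at level i of the first factor and level j of the second, \mu is the overall mean, \alpha_i and \beta_j are the main effects of the two factors, (\alpha\beta)_{ij} is their interaction, and \varepsilon_{ijk} is random error. This is the standard textbook (fixed-effects) form; the exact notation and parameterization vary from source to source.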
How do I know which ANOVA to use?
As statisticians, we like to imagine that you’re reading this before you’ve run your experiment. You can save a lot of headache by simplifying an experiment into a standard format (when possible) to make the analysis straightforward.
Regardless, we’ll walk you through picking the right ANOVA for your experiment and provide examples for the most popular cases. The first question is:
Do you only have a single factor of interest?
If you have only measured a single factor (e.g., fertilizer A, fertilizer B, etc.), then use one-way ANOVA. If you have more than one, then you need to consider the following:
Are you measuring the same observational unit (e.g., subject) multiple times?
This is where repeated measures come into play and can be a really confusing question for researchers, but if this sounds like it might describe your experiment, see repeated measures ANOVA. Otherwise:
Are any of the factors nested, where the levels are different depending on the levels of another factor?
In this case, you have a nested ANOVA design. If you don’t have nested factors or repeated measures, then it becomes simple:
Do you have two categorical factors?
Then use two-way ANOVA.
Do you have three categorical factors?
Use three-way ANOVA.
Do you have variables that you recorded that aren’t categorical (such as age, weight, etc.)?
Although these are outside the scope of this guide, if you have a single continuous variable, you might be able to use ANCOVA, which allows for a continuous covariate. With multiple continuous covariates, you probably want to use a mixed model or possibly multiple linear regression.
Prism does offer multiple linear regression but assumes that all factors are fixed. A full “mixed model” analysis is not yet available in Prism, but is offered as options within the one- and two-way ANOVA parameters.
How do I perform ANOVA?
Once you’ve determined which ANOVA is appropriate for your experiment, use statistical software to run the calculations. Below, we provide detailed examples of one, two and three-way ANOVA models.
How do I read and interpret an ANOVA table?
Interpreting any kind of ANOVA should start with the ANOVA table in the output. These tables are what give ANOVA its name, since they partition out the variance in the response into the various factors and interaction terms. This is done by calculating the sum of squares (SS) and mean squares (MS), which can be used to determine the variance in the response that is explained by each factor.
If you have predetermined your level of significance, interpretation mostly comes down to the p-values that come from the F-tests. The null hypothesis for each factor is that there is no significant difference between groups of that factor; a factor with a very small p-value is therefore statistically significant.
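As a concrete illustration, for a one-way ANOVA with k groups, group sizes n_i, group means \bar{y}_i, grand mean \bar{y}, and N total observations, the table entries are built as

SS_{between} = \sum_i n_i (\bar{y}_i - \bar{y})^2, \qquad SS_{within} = \sum_i \sum_j (y_{ij} - \bar{y}_i)^2

MS_{between} = \frac{SS_{between}}{k - 1}, \qquad MS_{within} = \frac{SS_{within}}{N - k}, \qquad F = \frac{MS_{between}}{MS_{within}}

and the p-value is the tail probability of that F statistic under an F distribution with (k - 1, N - k) degrees of freedom.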
One-way ANOVA Example
An example of one-way ANOVA is an experiment of cell growth in petri dishes. The response variable is a measure of their growth, and the variable of interest is treatment, which has three levels: formula A, formula B, and a control.
Classic one-way ANOVA assumes equal variances within each sample group. If that isn’t a valid assumption for your data, you have a number of alternatives, such as Welch’s or Brown-Forsythe ANOVA.
Calculating a one-way ANOVA
Using Prism to do the analysis, we will run a one-way ANOVA with a significance threshold of 0.05 (95% confidence). Since we are interested in the differences between each of the three groups, we will evaluate each and correct for multiple comparisons (more on this later!).
For the following, we’ll assume equal variances within the treatment groups.
The first test to look at is the overall (or omnibus) F-test, with the null hypothesis that there is no significant difference between any of the treatment groups. In this case, there is a significant difference between the three groups (p<0.0001), which tells us that at least one of the group means differs from the others.
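For readers who want to reproduce the omnibus test outside of Prism, here is a minimal sketch in Python; the growth numbers are invented stand-ins for the three groups:

```python
# A minimal sketch of the omnibus one-way ANOVA F-test on hypothetical cell-growth data.
from scipy.stats import f_oneway

control   = [10.1, 9.8, 10.4, 10.0, 9.9]
formula_a = [12.3, 12.9, 13.1, 12.7, 12.5]
formula_b = [11.0, 11.4, 10.9, 11.2, 11.1]

f_stat, p_value = f_oneway(control, formula_a, formula_b)
print(f"F = {f_stat:.2f}, p = {p_value:.4g}")
# A small p-value (e.g., below 0.05) says at least one group mean differs from the others.
```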
Now we can move to the heart of the issue, which is to determine which group means are statistically different. To learn more, we should graph the data and test the differences (using a multiple comparison correction).
Graphing one-way ANOVA
The easiest way to visualize the results from an ANOVA is to use a simple chart that shows all of the individual points. Rather than a bar chart, it’s best to use a plot that shows all of the data points (and means) for each group such as a scatter or violin plot.
As an example, you could graph the cell growth levels for each data point in each treatment group, along with a line to represent the group means. This can help give credence to any significant differences found, as well as show how closely groups overlap.
Determining statistical significance between groups
In addition to the graphic, what we really want to know is which treatment means are statistically different from each other. Because we are performing multiple tests, we’ll use a multiple comparison correction. For our example, we’ll use Tukey’s correction (although if we were only interested in the difference between each formula to the control, we could use Dunnett’s correction instead).
In this case, the mean cell growth for Formula A is significantly higher than the control (p<.0001) and Formula B (p=0.002), but there’s no significant difference between Formula B and the control.
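A minimal sketch of the same kind of Tukey-corrected follow-up in Python (the data are hypothetical; statsmodels reports an adjusted p-value and confidence interval for every pair of groups):

```python
# A minimal sketch of Tukey's multiple comparison correction on hypothetical data.
from statsmodels.stats.multicomp import pairwise_tukeyhsd

control   = [10.1, 9.8, 10.4, 10.0, 9.9]
formula_a = [12.3, 12.9, 13.1, 12.7, 12.5]
formula_b = [11.0, 11.4, 10.9, 11.2, 11.1]

values = control + formula_a + formula_b
groups = ["control"] * 5 + ["formula_a"] * 5 + ["formula_b"] * 5

result = pairwise_tukeyhsd(endog=values, groups=groups, alpha=0.05)
print(result.summary())   # one row per pair, with the Tukey-adjusted p-value
```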
Two-way ANOVA example
For two-way ANOVA, there are two factors involved. Our example will focus on a case of cell lines. Suppose we have a 2x2 design (four total groupings). There are two different treatments (serum-starved and normal culture) and two different fields. There are 19 total cell line “experimental units” being evaluated, up to 5 in each group (note that with 4 groups and 19 observational units, this study isn’t balanced). Although there are multiple units in each group, they are all completely different replicates and therefore not repeated measures of the same unit.
As with one-way ANOVA, it’s a good idea to graph the data as well as look at the ANOVA table for results.
Graphing two-way ANOVA
There are many options here. Like our one-way example, we recommend a similar graphing approach that shows all the data points themselves along with the means.
Determining statistical significance between groups in two-way ANOVA
Let’s use a two-way ANOVA with a significance threshold of 0.05 to evaluate both factors’ effects on the response, a measure of growth.
Feel free to use our two-way ANOVA checklist as often as you need for your own analysis.
First, notice there are three sources of variation included in the model, which are interaction, treatment, and field.
The first effect to look at is the interaction term, because if it’s significant, it changes how you interpret the main effects (e.g., treatment and field). The interaction effect calculates if the effect of a factor depends on the other factor. In this case, the significant interaction term (p<.0001) indicates that the treatment effect depends on the field type.
A significant interaction term muddies the interpretation, so that you no longer have the simple conclusion that “Treatment A outperforms Treatment B.” In this case, the graphic is particularly useful. It suggests that while there may be some difference between three of the groups, the precise combination of serum starved in field 2 outperformed the rest.
To confirm whether there is a statistically significant result, we would run pairwise comparisons (comparing each factor level combination with every other one) and account for multiple comparisons.
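For readers working outside of Prism, here is a minimal sketch of fitting a two-way crossed ANOVA with an interaction term in Python; the data frame, its column names, and the numbers are invented for illustration only:

```python
# A minimal sketch of a two-way (crossed) ANOVA with interaction on hypothetical data.
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

df = pd.DataFrame({
    "growth":    [3.1, 3.4, 2.9, 3.3,   5.8, 6.1, 5.9, 6.3,
                  3.0, 3.2, 3.1, 2.8,   4.0, 4.2, 3.9, 4.1],
    "treatment": ["starved"] * 8 + ["normal"] * 8,
    "field":     (["field1"] * 4 + ["field2"] * 4) * 2,
})

# C(...) marks a column as categorical; '*' expands to both main effects plus the interaction.
model = smf.ols("growth ~ C(treatment) * C(field)", data=df).fit()
print(anova_lm(model, typ=2))   # ANOVA table with sums of squares, F values, and p-values
```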
Do I need to correct for multiple comparisons for two-way ANOVA?
If you’re comparing the means for more than one combination of treatment groups, then absolutely! Here’s more information about multiple comparisons for two-way ANOVA.
Repeated measures ANOVA
So far we have focused almost exclusively on “ordinary” ANOVA and its differences depending on how many factors are involved. In all of these cases, each observation is completely unrelated to the others. Other than the combination of factors that may be the same across replicates, each replicate on its own is independent.
There is a second common branch of ANOVA known as repeated measures. In these cases, the units are related in that they are matched up in some way. Repeated measures are used to model correlation between measurements within an individual or subject. Repeated measures ANOVA is useful (and increases statistical power) when the variability within individuals is large relative to the variability among individuals.
It’s important that all levels of your repeated measures factor (usually time) are consistent. If they aren’t, you’ll need to consider running a mixed model, which is a more advanced statistical technique.
There are two common forms of repeated measures:
- You observe the same individual or subject at different time points. If you’re familiar with paired t-tests, this is an extension to that. (You can also have the same individual receive all of the treatments, which adds another level of repeated measures.)
- You have a randomized block design, where matched elements receive each treatment. For example, you split a large sample of blood taken from one person into 3 (or more) smaller samples, and each of those smaller samples gets exactly one treatment.
Repeated measures ANOVA can have any number of factors. See analysis checklists for one-way repeated measures ANOVA and two-way repeated measures ANOVA.
What does it mean to assume sphericity with repeated measures ANOVA?
Repeated measures are almost always treated as random factors, which means that the correlation structure between levels of the repeated measures needs to be defined. The assumption of sphericity means that you assume that each level of the repeated measures has the same correlation with every other level.
This is almost never the case with repeated measures over time (e.g., baseline, at treatment, 1 hour after treatment), and in those cases, we recommend not assuming sphericity. However, if you used a randomized block design, then sphericity is usually appropriate.
Example two-way ANOVA with repeated measures
Say we have two treatments (control and treatment) to evaluate using test animals. We’ll apply both treatments to each of two animals (replicates), with sufficient time in between the treatments so there isn’t a crossover (or carry-over) effect. Also, we’ll measure five different time points for each treatment (baseline, at time of injection, one hour after, …). This is repeated measures because we will need to measure matching samples from the same animal under each treatment as we track how its stimulation level changes over time.
The output shows the test results from the main and interaction effects. Due to the interaction between time and treatment being significant (p<.0001), the fact that the treatment main effect isn’t significant (p=.154) isn’t noteworthy.
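Here is a minimal sketch of this kind of two-way repeated measures analysis in Python using statsmodels; the animals, time points, and responses are simulated for illustration, both factors are treated as within-subject, and sphericity is assumed (no Greenhouse-Geisser-style correction is applied):

```python
# A minimal sketch of a two-way repeated measures ANOVA on simulated data.
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(0)
animals    = ["a1", "a2", "a3", "a4"]
treatments = ["control", "treated"]
times      = ["baseline", "injection", "1h", "6h", "12h"]
baseline_level = {"baseline": 1.0, "injection": 1.2, "1h": 2.0, "6h": 1.8, "12h": 1.5}

rows = []
for animal in animals:
    for trt in treatments:
        for t in times:
            bump = 1.5 if (trt == "treated" and t in ("1h", "6h", "12h")) else 0.0
            rows.append({"animal": animal, "treatment": trt, "time": t,
                         "response": baseline_level[t] + bump + rng.normal(0, 0.1)})
df = pd.DataFrame(rows)

# Each animal contributes exactly one measurement per treatment-by-time combination.
result = AnovaRM(data=df, depvar="response", subject="animal",
                 within=["treatment", "time"]).fit()
print(result)   # F and p for treatment, time, and their interaction
```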
Graphing repeated measures ANOVA
As we’ve been saying, graphing the data is useful, and this is particularly true when the interaction term is significant. Here we get an explanation of why the interaction between treatment and time was significant, but treatment on its own was not. As soon as one hour after injection (and all time points after), treated units show a higher response level than the control even as it decreases over those 12 hours. Thus the effect of time depends on treatment. At the earlier time points, there is no difference between treatment and control.
Graphing repeated measures data is an art, but a good graphic helps you understand and communicate the results. For example, it’s a completely different experiment, but here’s a great plot of another repeated measures experiment with before and after values that are measured on three different animal types.
What if I have three or more factors?
Interpreting three or more factors is very challenging and usually requires advanced training and experience.
Just as two-way ANOVA is more complex than one-way, three-way ANOVA adds much more potential for confusion. Not only are you dealing with three different factors, you will now be testing seven hypotheses at the same time. Two-way interactions still exist here, and you may even run into a significant three-way interaction term.
It takes careful planning and advanced experimental design to be able to untangle the combinations that will be involved (see more details here).
Non-parametric ANOVA alternatives
As with t-tests (or virtually any statistical method), there are alternatives to ANOVA for testing differences between three or more groups. ANOVA is means-focused and evaluated in comparison to an F-distribution.
The two main non-parametric cousins to ANOVA are the Kruskal-Wallis and Friedman’s tests. Just as is true with everything else in ANOVA, it is likely that one of the two options is more appropriate for your experiment.
Kruskal-Wallis tests the difference between medians (rather than means) for 3 or more groups. It is only useful as an “ordinary ANOVA” alternative, without matched subjects like you have in repeated measures. Here are some tips for interpreting Kruskal-Wallis test results.
Friedman’s Test is the opposite, designed as an alternative to repeated measures ANOVA with matched subjects. Here are some tips for interpreting Friedman's Test.
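A minimal sketch of both tests in Python, on hypothetical data:

```python
# Non-parametric alternatives to ANOVA, sketched on made-up data.
from scipy.stats import kruskal, friedmanchisquare

# Kruskal-Wallis: three or more independent groups (the "ordinary ANOVA" alternative).
control   = [10.1, 9.8, 10.4, 10.0, 9.9]
formula_a = [12.3, 12.9, 13.1, 12.7, 12.5]
formula_b = [11.0, 11.4, 10.9, 11.2, 11.1]
h_stat, p_kw = kruskal(control, formula_a, formula_b)
print(f"Kruskal-Wallis: H = {h_stat:.2f}, p = {p_kw:.4g}")

# Friedman: matched subjects measured under each condition (the repeated measures alternative).
# Position k in each list is the same subject under a different condition.
cond_1 = [4.0, 5.1, 3.8, 4.4, 4.9]
cond_2 = [5.2, 6.0, 4.9, 5.5, 6.1]
cond_3 = [4.1, 5.0, 4.0, 4.3, 5.0]
chi2, p_fr = friedmanchisquare(cond_1, cond_2, cond_3)
print(f"Friedman: chi-square = {chi2:.2f}, p = {p_fr:.4g}")
```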
What are simple, main, and interaction effects in ANOVA?
Consider the two-way ANOVA model setup that contains two different kinds of effects to evaluate:
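y_{ijk} = \mu + \alpha_i + \beta_j + (\alpha\beta)_{ij} + \varepsilon_{ijk}

(This is the same standard fixed-effects, crossed form written out earlier in this guide; the exact notation varies by textbook and by software.)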
The 𝛼 and 𝛽 terms are the “main” effects, the isolated effect of a given factor averaged across the levels of the other factor. (A related term, the “simple effect,” refers to the effect of one factor at a single fixed level of the other factor; the two are sometimes conflated in textbooks, but they are not interchangeable.)
The interaction term is denoted as “𝛼𝛽”, and it allows for the effect of a factor to depend on the level of another factor. It can only be tested when you have replicates in your study. Otherwise, the error term is assumed to be the interaction term.
What are multiple comparisons?
When you’re doing multiple statistical tests on the same set of data, there’s a greater propensity to discover statistically significant differences that aren’t true differences. Multiple comparison corrections attempt to control for this, and in general control what is called the familywise error rate. There are a number of multiple comparison testing methods, which all have pros and cons depending on your particular experimental design and research questions.
What does the word “way” mean in one-way vs two-way ANOVA?
In statistics overall, it can be hard to keep track of factors, groups, and tails. To the untrained eye “two-way ANOVA” could mean any of these things.
The best way to think about ANOVA is in terms of factors or variables in your experiment. Suppose you have one factor in your analysis (perhaps “treatment”). You will likely see that written as a one-way ANOVA. Even if that factor has several different treatment groups, there is only one factor, and that’s what drives the name.
Also, “way” has absolutely nothing to do with “tails” like a t-test. ANOVA relies on F tests, which can only test for equal vs unequal because they rely on squared terms. So ANOVA does not have the “one-or-two tails” question.
What is the difference between ANOVA and a t-test?
ANOVA is an extension of the t-test. If you only have two group means to compare, use a t-test. Anything more requires ANOVA.
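One quick way to see the connection: with exactly two groups, the one-way ANOVA F statistic is the square of the pooled two-sample t statistic, and the p-values are identical. A minimal sketch with made-up numbers:

```python
# With two groups, a pooled t-test and a one-way ANOVA agree exactly (F = t**2).
from scipy.stats import ttest_ind, f_oneway

group_1 = [10.1, 9.8, 10.4, 10.0, 9.9]
group_2 = [11.0, 11.4, 10.9, 11.2, 11.1]

t_stat, p_t = ttest_ind(group_1, group_2)   # equal_var=True by default (pooled t-test)
f_stat, p_f = f_oneway(group_1, group_2)

print(f"t = {t_stat:.3f}, t^2 = {t_stat**2:.3f}, F = {f_stat:.3f}")
print(f"p (t-test) = {p_t:.6f}, p (ANOVA) = {p_f:.6f}")   # the two p-values match
```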
What is the difference between ANOVA and chi-square?
Chi-square is designed for contingency tables, or counts of items within groups (e.g., type of animal). The goal is to see whether the counts in a particular sample match the counts you would expect by random chance.
ANOVA separates subjects into groups for evaluation, but there is some numeric response variable of interest (e.g., glucose level).
Can ANOVA evaluate effects on multiple response variables at the same time?
Multiple response variables make things much more complicated than multiple factors. ANOVA (as we’ve discussed it here) can obviously handle multiple factors, but it isn’t designed for tracking more than one response at a time.
Technically, there is an expansion approach designed for this called Multivariate (or Multiple) ANOVA, more commonly written as MANOVA. Things get complicated quickly, and in general it requires advanced training.
Can ANOVA evaluate numeric factors in addition to the usual categorical factors?
It sounds like you are looking for ANCOVA (analysis of covariance). You can treat a continuous (numeric) factor as categorical, in which case you could use ANOVA, but this is a common point of confusion.
What is the definition of ANOVA?
ANOVA stands for analysis of variance, and, true to its name, it is a statistical technique that analyzes how experimental factors influence the variance in the response variable from an experiment.
What is blocking in ANOVA?
Blocking is an incredibly powerful and useful strategy in experimental design when you have a factor that you think will heavily influence the outcome, so you want to control for it in your experiment. Blocking affects how the randomization is done with the experiment. Usually blocking variables are nuisance variables that are important to control for but are not inherently of interest.
A simple example is an experiment evaluating the efficacy of a medical drug and blocking by age of the subject. To do blocking, you must first gather the ages of all of the participants in the study, appropriately bin them into groups (e.g., 10-30, 30-50, etc.), and then randomly assign an equal number of treatments to the subjects within each group.
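As a minimal sketch (with invented subject IDs and age bins), within-block randomization can be as simple as shuffling the members of each block and dealing the treatments out evenly:

```python
# A minimal sketch of randomizing treatments within blocks (hypothetical subjects).
import random
from itertools import cycle

random.seed(1)
subjects = {   # subject id -> age bin (the blocking variable)
    "s01": "10-30", "s02": "10-30", "s03": "10-30", "s04": "10-30",
    "s05": "30-50", "s06": "30-50", "s07": "30-50", "s08": "30-50",
}
treatments = ["drug", "placebo"]

assignment = {}
for block in set(subjects.values()):
    members = [s for s, b in subjects.items() if b == block]
    random.shuffle(members)                        # randomize order within the block
    for subject, treatment in zip(members, cycle(treatments)):
        assignment[subject] = treatment            # equal split of treatments per block

print(assignment)
```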
There’s an entire field of study around blocking. Some examples include having multiple blocking variables, incomplete block designs where not all treatments appear in all blocks, and balanced (or unbalanced) blocking designs where equal (or unequal) numbers of replicates appear in each block and treatment combination.
What is ANOVA in statistics?
For a one-way ANOVA test, the overall ANOVA null hypothesis is that the mean responses are equal for all treatments. The ANOVA p-value comes from an F-test.
Can I do ANOVA in R?
While Prism makes ANOVA much more straightforward, you can use open-source languages like R as well. Here are some examples of R code for ANOVA, including one-way ANOVA in R, two-way ANOVA in R, and repeated measures ANOVA in R.
Perform your own ANOVA
Are you ready for your own Analysis of variance? Prism makes choosing the correct ANOVA model simple and transparent.
Start your 30 day free trial of Prism and get access to:
- A step by step guide on how to perform ANOVA
- Sample data to save you time
- More tips on how Prism can help your research
With Prism, in a matter of minutes you learn how to go from entering data to performing statistical analyses and generating high-quality graphs.
Solving Systems Of Equations By Graphing Worksheet Multiple Choice. Chapter 7 (366c), “Solving Systems of Linear Equations and Inequalities: Mathematical Connections and Background,” notes that a solution of a system of equations is the set of points that satisfy each equation in the system. 50 systems of equations worksheet in 2020: graphing inequalities, systems of equations, linear inequalities.
8th grade math worksheets solving equations. Bell work solving systems by graphing free.
Carefully graph each equation on the same coordinate plane. Chalkdoc lets algebra teachers make perfectly customized systems of equations worksheets, activities, and assessments in 60 seconds.
Graphing quadratic equations multiple choice. Graphing systems of equations worksheet answers. Here, the student finds a line that has it on the right side and a y value on the left side. It may be printed, downloaded or saved and used in your classroom, home school or other educational environment to help someone learn math.
Multi step equations worksheet 8th grade solving equations with. Multiple choice questions on system of linear equations. Notice that it would make sure the square by value of solving systems equations using the coefficients of linear equation of equations contain fractions and perpendicular. One of the best ways to use solving systems is by working with graphing worksheets.
One system contains parallel lines. Sat math multiple choice question 290 answer and explanation net. Simply graph each equation and determine where the lines intersect on the graph. Solve a system of equations by graphing.
Solve a system of equations by the elimination method. Solve a system of equations by the substitution method. Solve systems of equations by graphing pre algebra and functions mathplanet. Solving and graphing inequalities worksheet answer key also systems linear inequalities multiple choice worksheet best.
Solving equations and inequalities multiple choice test tessshlo. Solving systems equations graphing practice worksheet. Solving systems fun, solving systems using substitution answers, solving systems multiple choice, solving systems of equations java, solving systems linear inequalities graphing calculator, systems of equations maze. Solving systems of equations by any method multiple choice.
Solving systems of equations by graphing is a method to solve a system of two linear equations. Solving systems of equations by graphing follows a specific process in order to simplify the solutions. The first thing you must do when solving systems of equations by graphing is to graph each equation. Solving systems of equations by graphing worksheet answer key. Solving systems of equations by graphing worksheet answers also solving equations involving absolute value worksheet. Some are based off the quadratic equation, while others use the graphs.
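Where the graphs cross is exactly the algebraic solution of the system. As a minimal worked example with a hypothetical pair of lines, y = 2x + 1 and y = -x + 4, the intersection point can also be computed directly:

```python
# Hypothetical system: y = 2x + 1 and y = -x + 4.
# Rewritten in the form a*x + b*y = c, the two equations become
#   -2x + y = 1
#    1x + y = 4
import numpy as np

A = np.array([[-2.0, 1.0],
              [ 1.0, 1.0]])
c = np.array([1.0, 4.0])

x, y = np.linalg.solve(A, c)
print(f"The lines intersect at ({x:g}, {y:g})")   # (1, 3), the solution of the system
```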
Some of the worksheets for this concept are algebra 1 name date block unit 6 test solve each system, chapter 9 systems of equations unit test, unit 7 systems of linear equations algebra i essential, solving a system of linear equations. Some of the worksheets for this concept are graphing lines, ws3, graphing linear equations using intercepts date period, graphing lines, graphing linear equations t1s1, graphing linear equations, graphing linear equations using a table work, systems of equations. Some of the worksheets for this concept are name class date preassessment quadratic unit, unit 10 quadratic equations chapter test part 1 multiple, multiple choice l1s1, quadratic functions equations multiple, quadratic functions vocabulary, math 120. Some of the worksheets for this concept are practice solving systems of equations 3 different, solving systems of equations by graphing andor, systems of equations substitution, systems of equations elimination, homework, systems of equations real world graphing, central bucks school.
Start by browsing the selection below to get word problems, projects, and more. Students will match each system of equations with its solution (UFO target); one minor transformation required: instruct students to do only one column or print out two copies per student. Systems of equations graphical method solutions examples.
Systems of equations graphing multiple choice tessshlo algebra 1 name date block unit 6 test solve each system by the method indicated q 4 graphin sat math question 290 answer and explanation net questions warrayat instructional with article khan academy graphical solutions examples s linear two. Systems of equations graphing multiple choice. Systems of equations homework 3. Systems of equations maze slope intercept form solve by from solving systems by graphing worksheet.
Systems of equations teks aligned: Systems of equations test multiple choice tessshlo. Systems of equations with graphing (practice) | khan academy #371374. The first 3 questions ask the student to solve by graphing.
The methods for solving systems vary greatly. There is only one solution if the graphs of the two lines intersect at exactly one point. This is a solving systems of equations by graphing worksheet. This worksheet features a system of equations with no solution and slopes written in decimal and fraction form.
Topics you need to know include identifying a solution for a given problem. Worksheet by Kuta Software LLC: worksheet name_____ solving systems of equations by graphing date_____ solve each system by graphing (find the point of intersection of the two lines). You will plug values into each system to determine which values make the equations true.
Developing Core Literacy Proficiencies for Grade 9 fully meet the expectations of alignment to the standards. The materials provide appropriate texts and associated tasks and activities for students to build literacy proficiency and advance comprehension over the course of the school year. Students engage in writing, speaking and listening, and language tasks to build critical thinking as they grow knowledge and build skills to transfer to other rigorous texts and tasks.
Text Quality and Alignment to the Standards
Overall, the Grade 9 materials meet the expectations for Gateway 1. A variety of high quality, complex texts support students’ growing literacy skills over the course of the year. However, some text types/genres called for in the standards are not fully represented.
Materials support students’ growth in writing skills over the course of the year using high-quality, text-dependent questions and tasks, though some writing types called for in the standards are not present. Students may need additional support with speaking and listening activities. Materials do not include explicit instruction targeted for grammar and convention standards.
- 16/16
Texts are worthy of students' time and attention: texts are of quality and are rigorous, meeting the text complexity criteria for each grade. Materials support students' advancing toward independent reading.
The Grade 9 materials meet the expectations for Text Quality and Complexity. Students engage with rich texts that support their growing literacy skills as they read closely, attend to content in multiple genres and types (including multimedia platforms). Texts are organized to support students' close reading and writing, and guidance around quantitative, qualitative, and placement considerations is provided for teachers should they introduce other texts into the materials.
NOTE: Indicator 1b is non-scored and provides information about text types and genres in the program.
Anchor/core texts are of publishable quality and worthy of especially careful reading.
The materials reviewed for Grade 9 meet the criteria of anchor texts being of publishable quality, worthy of especially careful reading, and considering a range of student interests.
Throughout the year, students have the opportunity to read about a broad range of subjects of interest, such as education in America, issues of terrorism, and more. Students are also exposed to highly engaging, theme-rich fiction pieces, such as Ernest Hemingway’s “The Short Happy Life of Francis Macomber.”
Anchor texts in the majority of chapters/units and across the yearlong curriculum are of publishable quality. Examples include, but are not limited to:
- Unit 1 contains multiple texts of publishable quality from reputable publishers. Unit 1, Part 1, incorporates the first four texts to introduce the focus, Reading Closely for Textual Details. Helen Keller’s personal narrative, The Story of My Life, is a commercially published text. The remaining texts serve as an introduction to the content of the main texts; each is a different text type, including a photograph, the previously mentioned personal narrative, multimedia video, and website.
- Unit 2's anchor text, Plato’s Apology of Socrates, supports the purpose of the unit of making evidence-based claims. Apology is Plato’s account of the defense Socrates gave at his trial in Athens in 399 B.C..
- Unit 5 texts offer many perspectives and positions on the topic of terrorism and allow students to study the issue from a variety of angles. Since terrorism is currently an issue of global concern, this may be a topic of interest for Grade 9 students.
Anchor texts are well-crafted, content-rich, and include a range of student interests, engaging students at the grade level for which they are placed. Examples include, but are not limited to:
- In Unit 1, Part 4, Activity 3, students are asked to read one of three challenging texts in preparation for the culminating task: Eleanor Roosevelt’s "Good Citizenship: The Purpose of Education," Thomas Jefferson’s Notes on the State of Virginia, and Arne Duncan’s "The Vision of Education Reform in the United States." These texts are grade-level appropriate, challenging, and require close reading.
- Unit 3’s anchor text is Ernest Hemingway’s short story, ”The Short Happy Life of Francis Macomber.” This piece is well-crafted and appropriate for the grade level. It encourages close and multiple readings because it explores several themes such as courage, violence, gender roles, and marriage; these could be used to discuss similar themes in other stories. This story also has an interesting plot, engaging characters, and unusual shifts in perspective that Grade 9 students will find engaging.
- Unit 4 explores the theme “Music: What Role Does it Play in Our Lives” and consists of five text sets. Texts are selected not only to appeal to students’ interests, but also to “provide many ideas about how music plays an essential role in our lives, including its impact on leisure, self-expression, and culture.” Texts are chosen for their ability to introduce various subtopics within the general topic area and include texts such as “What is Online Piracy?”, “Why Your Brain Craves Music," and “The Evolution of Music: How Genres Rise and Fall Over Time.”
- Unit 5 contains multiple texts of publishable quality from established publishers. For example, students read Major Terrorism Cases: Past and Present from FBI.gov and Events of 9/11. To increase student engagement and understanding, the curriculum also provides information about terrorism via timelines, political cartoons, and videos.
Materials reflect the distribution of text types and genres required by the standards at each grade level.
*Indicator 1b is non-scored (in grades 9-12) and provides information about text types and genres in the program.
The materials reviewed for Grade 9 partially reflect a distribution of text types and genres required by the standards for Grade 9. While this curriculum provides an abundance of informational text, including literary nonfiction, it does not include poetry such as narrative poems, sonnets, ballads or dramas. Examples of text types and genres that are provided include, but are not limited to:
- Unit 1, “Education is the New Currency,” is centered around numerous texts related to how education in the United States is changing. The curriculum provides the teacher with a list of texts used in the unit via the Reading Closely For Textual Details Unit Texts chart (80-81). Texts provided include personal narratives by Helen Keller and Eleanor Roosevelt, speeches by Colin Powell, Arne Duncan, and Horace Mann, and other nonfiction pieces such as TED Talks, websites, government documents, and videos.
- Unit 2 uses Plato’s Apology of Socrates as the anchor text. This nonfiction piece is used throughout the unit and serves as students’ main source when making evidence-based claims.
- In Unit 3, the instruction is centered on the analysis of the short story, “The Short Happy Life of Francis Macomber,” by Ernest Hemingway; this fictional text is the sole text for this unit.
- Unit 4 offers Common Source Sets “that model and briefly explain a text sequence focused on a particular Area of Investigation” (316). A list of these Common Sources can be found at the end of the unit. Source 1 is a YouTube video entitled, “Imagine Life Without Music." Source 2 is the Internet-based article, “A Brief History of the Music Industry.” Source 3 consists of three internet-based sources: “What is Online Piracy?”, “Why Your Brain Craves Music,” and “The 25 Most Important Civil Rights Moments in Music History.” Sources 4 and 5 also consist of non-fiction, internet-based articles about music (403-406).
- Unit 5 provides a comprehensive list of texts in the chart, Building Evidence-Based Arguments Unit Texts. Text Sets 1 and 2 consist of informational texts such as “Militant Extremists in the United States” by Jonathan Masters and “A Brief History of Terrorism in the United States” by Brian Resnick. Text Set 3 consists of a political cartoon. Text Set 4 consists of seminal arguments such as Public Law 107-40 “Authorization for Use of Military Force” and Osama bin Laden’s Declaration of Jihad against Americans. Text Set 5 includes additional nonfiction arguments such as “Obama’s Speech on Drone Policy” and “Terrorism Can Only Be Defeated by Education, Tony Blair Tells the UN” (535).
Texts have the appropriate level of complexity for the grade level (according to quantitative analysis and qualitative analysis).
The instructional materials reviewed for Grade 9 meet the criteria for texts having the appropriate level of complexity for the grade according to quantitative analysis, qualitative analysis, and relationship to their associated student task. Most texts fall within either the Current Lexile Band or the Stretch Lexile Band for grades 9-10. Some texts exceed the band for grades 9-10, but are structured in a way that make them accessible to Grade 9 students. The few texts that do not have Lexiles provided qualitatively meet the requirements for this grade level because they serve as introductory pieces for a unit, provide for the exploration of several themes or multiple meanings, allow for the analysis of narrative structure, or are easily accessible sources that offer different perspectives on an issue.
Most anchor texts have the appropriate level of complexity for the grade according to quantitative and qualitative analysis and relationship to their associated student task. While many of the texts are challenging, texts are chosen to engage student interest and promote inquiry, which make them worthy of students’ time and attention. Texts support students’ advancement toward independent reading. Examples include, but are not limited to:
- Unit 1 contains an extensive set of texts for students to practice close reading for details and more than half are accompanied with a Lexile score. Of the identified texts, only one falls far below grade-level, an excerpted transcript of Colin Powell’s TED Talk, “Kids Needs Structure.” The speech was given a 900L, putting it in the 4th-5th grade Stretch Lexile Band. Although this text measures only 900L, it is appropriate for the grade level because it provides strong description and narration. Powell's ideas and supporting details also allow students to “explore his perspective, which developed during his days in the military.” Unit 1, Part 1 incorporates the first four texts to introduce the focus, Reading Closely for Textual Details. Helen Keller’s personal narrative, The Story of My Life, has an identified Lexile score of 1250. This set of texts includes a photograph that connects to the previously mentioned personal narrative, multimedia video, and website. In Unit 1, Part 4, Activity 3, students are asked to read one of three challenging texts in preparation for the culminating task. Eleanor Roosevelt’s “Good Citizenship: The Purpose of Education” measures at 1250L, Thomas Jefferson’s Notes on the State of Virginia measures at 1410L, and Arne Duncan’s “The Vision of Education Reform in the United States” measures at 1200L which falls within the Stretch Lexile Band for grades 9-10. These texts are challenging and allow for close reading, questioning, analysis and summary. While Jefferson's text is above grade level, it provides teachers the opportunity to assign a more complex text based on individual student's reading comprehension levels. This one would be reserved for more capable students.
- The core text for Unit 2 is Plato’s Apology with a 980L, which falls within the 9th grade Current Lexile Band of 960L to 1120L. Students use a question-based approach to read and analyze the text. Qualitatively, this text is challenging since it requires familiarity with Greek leaders such as Chaerephon, Anytus, and Lycon, as well as Greek mythology allusions such as Minos and Rhadamanthus. Other vocabulary, such as “impetuous” and “odious,” will also challenge students. At the end of Unit 1, the teacher’s edition provides “media supports” with various editions of Apology on Audiobooks, YouTube, ebooks, and PDFs. Each edition gives a description. For example, “2. Socratic Citizenship: Plato’s Apology” is described as “a lecture from Yale University on the political and philosophical contexts of Socrates’ trial. Although this is an advanced analysis of Plato’s Apology, students can benefit from watching how an expert discusses an important text in Western civilization.”
- In Unit 5, the curriculum provides a variety of texts in the form of text sets. Lexile levels are provided for texts within the Text Notes sections of the teacher’s edition. For example, in Unit 5, Part 1, Activity 2, students read “What is Terrorism?”; this text measures at 1200L, which exceeds the Current Lexile Band for grades 9-10 but falls within the Stretch Lexile Band. Students also read “Terrorists or Freedom Fighters: What’s the Difference?”, which measures at 1070L and falls within both the Current and Stretch Lexile Bands for grades 9-10. The final text in this activity is “Militant Extremists in the United States,” which measures at 1470L. This text is very complex and its Lexile level is higher than the top measurement for the Stretch Lexile Band for 11-CCR. The text does state that “the headings and subheadings help organize the information into sections” (459), making it more accessible to 9th grade students. The other texts within the unit are timelines, political cartoons, and videos. The curriculum indicates that these texts are readily accessible to 9th grade students. Text 4.1, “Authorization for Use of Military Force” (Public Law 107-40), has an estimated Lexile level of 1270L, which falls in the Stretch Lexile Band for grades 9-10. All the texts in Unit 5 were appropriately chosen as resources for the unit’s final assignment where students develop a supported position on the issue of terrorism. These texts offer many perspectives and positions on the topic and allow students to study the issue from a variety of angles.
Materials support students' literacy skills (understanding and comprehension) over the course of the school year through increasingly complex text to develop independence of grade level skills (Series of texts should be at a variety of complexity levels).
The instructional materials reviewed for Grade 9 meet the criteria for materials supporting students’ increasing literacy skills over the course of the school year. Series of texts are at a variety of complexity levels appropriate for the grade band.
As the year progresses, students read increasingly difficult texts. In the Grade 9 curriculum, the writing skills build on one another, as does the complexity of the texts that support thinking and literacy skills. In the units with text sets, there is a breadth and depth of choices across the full range of the Lexile stretch band, providing opportunities to challenge students with complex texts while also offering more reachable texts as they work on analysis and synthesis skills in writing.
The complexity of anchor texts students read provides an opportunity for students’ literacy skills to increase across the year, encompassing an entire year’s worth of growth. Examples include, but are not limited to:
- In the overview for Unit 3, the Teacher’s Edition states that “this unit extends students’ abilities to make evidence-based claims into the realm of literary analysis.” All reading, discussion, and literary analysis focuses on Ernest Hemingway’s “The Short Happy Life of Francis Macomber.”
- In Unit 4, Part 1, Activity 2, students read “A Brief History of the Music Industry,” which measures at 1500L; this measurement places it outside even the Stretch Lexile Band for Grade 11-CCR. However, the text is written in a student-friendly way with short paragraphs making it easier to read which allows students to access this much more complex text.
The complexity of anchor texts support students’ proficiency in reading independently at grade level at the end of the school year as required by grade level standards. Examples include, but are not limited to:
- Unit 1, Part 2, Activity 2, includes a speech by Colin Powell which has a Lexile of only 900 but is credited for the strong description and narration from Powell which provides the opportunity for students to explore his perspective. Unit 1, Part 3, Activity 1, introduces a passage by Maria Montessori and measures at 1270L which “should be challenging but accessible for most students with the scaffolding and support of the close reading process”. The final three texts in Unit 1, Part 4, Activity 2, all focus on the purpose and value of education in society. They range from a piece by Thomas Jefferson (1410 L) to FDR (1250L) to Arne Duncan (1200L).
- In Unit 4, Part 1, Activity 3, students read “Why Your Brain Craves Music,” which measures at 1350L; this measurement places it within the Stretch Lexile Band for Grade 9-10. In Unit 4, Part 3, Activity 2, students read “Why I Pirate,” which measures at 1200L; the curriculum suggests that, due to the complexity of the text, it be used for teacher modeling. Students also read the article, “Are Musicians Going Up a Music Stream without a Fair Payout?” This article measures at 1180L, which falls within the Stretch Lexile Band for Grade 9-10.
Anchor texts and series of texts connected to them are accompanied by a text complexity analysis and rationale for purpose and placement in the grade level.
The instructional materials meet the expectations that texts and lesson materials are accompanied by a text complexity analysis and rationale for purpose and placement in the grade level. Additionally, tools and metrics are included to assist teachers in making their own text placements should they need to introduce a new text or text set into the materials. The curriculum provides quantitative information for both anchor texts and text sets, excluding photographs, videos, and websites. The Teacher Edition explains the purpose and value of the texts in the Text Notes. For example, some texts are chosen for their value in reinforcing literary techniques while others were chosen as appropriate introductions to a particular time period. All texts were chosen because they were appropriate for 9th grade students while still allowing some flexibility for a variety of reading levels.
Examples of how the materials explain how texts are placed in the program include, but are not limited to:
- In Unit 1, Part 1, Activity 3, students read an excerpt from Helen Keller’s autobiography. Its Lexile Level is 1250L. The curriculum states, “This is a good first text for close reading because it is vivid and challenging, but it is also relatively short and accessible for most students” (17). It goes on to provide rationale for its purpose stating that the text can be used to show students how writers can use similar literary techniques in nonfiction that are found in fiction pieces with a focus on figurative language and characterization.
- In Unit 5, the curriculum provides the following rationale for text selection: “The texts...are offered in the form of text sets, in which texts are grouped together for instructional and content purposes” (443). Since students are not required to read every text, the curriculum also provides flexibility for teachers to make decisions about text selection based on student reading levels as the selections have different complexities. For this unit, Lexile levels are provided within the Text Notes for each text set (excluding photos, videos, and websites). For example for Unit 5, Part 1, Activity 2, students read “What is Terrorism?”; the curriculum describes this piece of writing as, “The text measures at 1200L and should be accessible to most ninth grade students” (455).
Anchor and supporting texts provide opportunities for students to engage in a range and volume of reading to achieve grade level reading proficiency.
The instructional materials reviewed for Grade 9 meet the criteria that anchor and supporting texts provide opportunities for students to engage in a range and volume of reading to achieve grade level reading proficiency. Students read a variety of texts including nonfiction personal narratives, fictional short stories, and nonfiction articles. Texts are accompanied by a Questioning Path Tool which provides both text-dependent and text-specific questions that guide them into a deeper reading of the text. Finally, each unit provides various student checklists and teacher rubrics that can be used to monitor progress throughout the year.
Instructional materials clearly identify opportunities and supports for students to engage in reading a variety of text types and disciplines and also to experience a volume of reading as they grow toward reading independence at the grade level. For example:
- Unit 1 is based on numerous non-fiction texts related to education in the United States and how it is changing. In Unit 1, Part 1, Activity 3, students read Helen Keller’s The Story of My Life. The curriculum provides support for this reading via the Questioning Path Tool. This tool provides four levels of both text-dependent and text-specific questioning which include questioning, analyzing, deepening, and extending (page 18 of the Teacher’s Edition).
- Unit 3 is based on the close reading of Ernest Hemingway’s short story, “The Short Happy Life of Francis Macomber.” In Unit 3, Part 1, Activity 1, students are provided a Questioning Path Tool that not only guides them through the close reading, but it is a question from this tool that will serve as the basis for the writing of an Evidence-Based Claim.
- Unit 4’s reading includes a Common Source Set (virtual texts) that focuses on a particular area of investigation, “Music: What Role Does it Play in Our Lives?” This is given as a model of how a teacher can create source sets and recognizes the ever-changing nature of websites. Within the source sets are YouTube videos, internet-based articles, as well as Gale Reference Library articles.
Materials also include checklists, rubrics, and student conference suggestions to assist in evaluating the development of literacy proficiency.
Materials provide opportunities for rich and rigorous evidence-based discussions and writing about texts to build strong literacy skills.
Overall, the instructional materials reviewed for Grade 9 partially meet the expectations of indicators 1g through 1n. The materials support students as they grow their writing skills over the course of the year. High-quality, text-dependent questions and task support students as they grapple with materials, participate in discussions of content, engage in a variety of writing types, and demonstrate their learning with evidence-supported arguments. However, speaking and listening protocols are not fully outlined throughout the materials to support teachers and students. Teachers may also need to add additional instruction to cover the full range of writing standards required for narrative writing. Materials do not include explicit instruction targeted for grammar and convention standards.
Most questions, tasks, and assignments are text dependent/specific, requiring students to engage with the text directly (drawing on textual evidence to support both what is explicit as well as valid inferences from the text).
The instructional materials reviewed for Grade 9 meet the criteria that most questions, tasks, and assignments are text dependent/specific and consistently support students’ literacy growth over the course of a school year.
The instructional materials include questions and tasks that require careful reading over the course of a school year, during which students are asked to produce evidence from texts to support claims. Materials introduce the text-dependent inquiry basis called the Questioning Path Tool, which provides opportunities for students to ask and use questions to guide their close examination of the text. The Questioning Path Tool progresses from intensive practice and support in developing text-specific questions to gradual release of responsibility as students learn to develop high-quality questions on their own, deepening their understanding of the text. These questions require students to return to the text for evidence to support their answers to questions about the roles of specific details, the meaning of specific phrases, character development, and vocabulary analysis. The process supports a text-centric curriculum and approach to multiple literacy skills.
Students work independently and collaboratively to respond to and generate text-specific questions. Also, writing tasks provide the opportunity for students to conduct more text-dependent work. Models can be modified for existing content (i.e., novels) owned by a district.
The tasks and assignments asked of students are appropriately sequenced and follow a consistent routine. The materials require students to closely read the text, drawing on textual evidence to support both what is explicit as well as valid inferences from the text. Examples include, but are not limited to:
- In Unit 1, Part 1, Activity 2, the Questioning Path Tool shows initial text-dependent questions that engage the surface-level details, identifying the what: "What details stand out…?" and "What do I think this image is mainly about?" Students are then given an opportunity to deepen their understanding by moving toward text-specific questions that analyze the how: "How do specific details help me understand what is being depicted in the image?"
- The Questioning Path Tool templates and Reading Closely: Guiding Question handouts are provided with the materials to encourage students to create their own questions in four categories: questioning, analyzing, deepening, and extending. These tools are included in each unit.
- The materials also include text-specific questions.
- In Unit 1, Part 2, Activity 1, the Questioning Path Tool for the text, “The Story of My Life,” provides text-specific questions. One example is, “What does the figurative language phrase ‘a little mass of possibilities’ in the first paragraph suggest about how Keller at first saw herself as a student?”
- In Unit 2, Part 1, Activity 2, there are text-specific questions to accompany Plato’s Apology: “In Paragraph 3, Socrates says he is on trial because of a ‘certain kind of wisdom’. According to Socrates, what kind of wisdom does he not have? What does this suggest about the wisdom he does have?”
- In Unit 4, Part 3, Activity 2, students are asked to analyze a source's perspective and bias as they develop sources for their research portfolio. The materials direct students to use the Guiding Questions handout from previous units for guidance. Specifically, the materials point to the "P" of the "LIPS" domains--Language, Ideas, Perspective, and Structure--in the Guiding Questions handout. The Perspective domain presents text-dependent, evidence-based questions for students. The question, "What details or words suggest the author's perspective?" is a strong example of a stand-alone text-dependent question. Other questions, such as, "How does the author's perspective influence my reading of the text?", rely on the provided follow-up instructions in the Deepening section of the Guiding Questions handout. This section asks students to support and "explain why and cite [...] evidence" from the text.
Students are supported in their literacy growth over the course of a school year. Examples include, but are not limited to:
- Unit 1 culminates in a text-centered discussion in the Reading Closely for Textual Details: Developing Core Literacy Proficiencies Literacy Toolbox. The Instructional Notes explicitly ask students to “point out key words that indicate the author’s perspective,” as well as “ask the other participants to reference the texts in their comments.”
- In Unit 5, Part 1, Activity 2, students are asked to question the text without the typical modeling by the teachers, thus placing more of the responsibility on the student. The teacher Instructional Notes for this task indicate: “(The students) should be bringing useful questions from such handouts as the Guiding Questions and Assessing Sources handouts into their reading process, and they should not require prescriptive scaffolding. However, students will also be reading, analyzing, and evaluating complex arguments in this unit, perhaps for the first time. They may need the support of text-dependent questions that help them attend to the elements and reasoning within arguments.” The Instructional Notes include further information for the teacher to assist with “abbreviated versions of the model Questioning Tools found in other units.”
Teacher materials provide support for planning and implementation of text-dependent questions, tasks, and assignments. Examples include, but are not limited to:
- Teacher Instructional Notes across the units include reminders to teachers to consistently direct students to use the text to support responses. These notes also encourage teachers to generate questions to model for the students.
- In Unit 3, Part 5, Activity 5, the Instructional Notes provide the following teacher instructions for a task so that students are provided with proper modeling. The notes read: “Return to the literacy skills criteria students have been using in Part 4: Forming Claims and Using Evidence. Talk through how you might apply these criteria in reviewing a draft evidence-based claims essay—beginning with general Guiding Review Questions, such as “What is the claim and how clearly is it expressed and explained? Is there enough, well-chosen evidence to support the claim?” Then model how a writer might develop a more specific text-based review question to guide a second review of the draft.”
- In Unit 5, Part 1, Activity 2, the Instructional Notes reference the EBA (Evidence-based Argument) Toolbox, which includes “a few model text-specific questions for deepening students’ understanding of specific aspects of the arguments they will read closely.”
Materials contain sets of sequences of text-dependent/text-specific questions with activities that build to a culminating task which integrates skills to demonstrate understanding.
The instructional materials reviewed for Grade 9 meet the criteria for materials containing sets of high-quality sequences of text-dependent and text-specific questions and activities that build to a culminating task which integrates skills to demonstrate understanding.
The materials include quality culminating tasks which are supported with coherent sequences of text-dependent questions and tasks and are present across a year’s worth of material. Examples include, but are not limited to:
- In Unit 1, the tasks focus on Reading Closely for Textual Details, which is a prerequisite skill and foundation for the work done in the following units. Each unit introduces a skill, such as making evidence-based claims and researching to deepen understanding, which is necessary for Unit 5, Building Evidence-Based Arguments: "What is the Virtue of Proportional Response?"
- In Unit 2, Part 5, Making Evidence-Based Claims (EBC), the culminating activity is a writing task in which students compose "a rough draft of an evidence-based claim essay on their global claim." The materials suggest that this task is "used as evidence of Literacy Skills associated with close reading…" To this point, close reading activities have been driven by the Questioning Path Tool, a handout used throughout the curricular materials that moves from general text-dependent questions to text-specific questions. To describe the relevance of the tasks, the teacher's edition uses the analogy that "practitioners in various fields are able to analyze and understand [works] because their training focuses them on details… [and this] often involves using questions to direct their attention." This analogy is included to help students understand the process of building an EBC and completing the Forming EBC Tool. This is just one step in the process of completing the culminating task.
- In Unit 5, Part 2, Activity 7, students are tasked to use notes and annotations taken from previous activities to compose a culminating essay to demonstrate mastery of analyzing an argument. Student work is text-dependent; they use notes and annotations collected on the Delineating Arguments Tool practiced with the included text set or other materials as determined by the teacher. The Delineating Arguments Tool is preceded by the use of the Guiding Questions handout, remaining consistent with the text-dependent and text-specific basis of the entire curriculum.
Evidence that sequences of text-dependent questions and tasks throughout each unit prepare students for success on the culminating tasks includes, but is not limited to:
- In Unit 1, students are asked to read nine texts focusing on the theme, "Education is the New Currency." In Part 5, Activity 3 of this unit, students are asked to participate in a text-centered discussion about how education is the great equalizer of the conditions of humans. To prepare for the culminating activity, students work collaboratively to review each other's text-based explanations, compare their assigned text to other texts within the unit, and write a comparative text-specific question for the discussion.
- In Unit 4, Part 5, students are asked to create a culminating research portfolio or produce an alternative research-based product. The activities leading up to this culminating task include the consistent Questioning Path Tool to explore a topic using various texts and to develop an Inquiry Question. The student- or class-generated Inquiry Question determines the texts used with the text-dependent Research Evaluation Tool. When using the Research Evaluation Tool, which consists of three checklists with guiding questions, students present materials gathered and studied to include in their portfolio and receive feedback from the teacher and peers about the credibility, relevance, and sufficiency of the evidence gathered.
The culminating tasks are varied and rich, providing opportunities for students to demonstrate what they know and are able to do in speaking and/or writing. The following are examples of this evidence:
- In Unit 3, the Unit Overview includes an Outline describing the five parts of the unit. The Outline describes Part 5: Developing Evidence-Based Writing: “Students develop the ability to express global evidence-based claims in writing through a rereading of the texts in the unit and a review of their previous work.” Students are expected to reread the texts in the unit and discuss with the class the development of Evidence-Based Claims (EBCs). They are also expected to create their own EBCs about literary technique and work collaboratively throughout the writing process. Finally, students participate in a class discussion of final evidence-based essays.
- In Unit 5, the culminating task consists of an argumentative essay. In preparation for this final task, students write short essays analyzing an argument. In Unit 5, Part 2, Activity 7: Writing to Analyze Arguments, the teacher’s Instructional Notes read: “Students use their notes, annotations, and tools (such as the Questioning Path Tool) to write paragraphs analyzing one of the arguments they have read thus far in the unit.”
Materials provide frequent opportunities and protocols to engage students in speaking and listening activities and discussions (small group, peer-to-peer, whole class) which encourage the modeling and use of academic vocabulary and syntax.
The instructional materials reviewed for Grade 9 partially meet the criteria for materials providing frequent opportunities and protocols for evidence-based discussions (small groups, peer-to-peer, whole class) that encourage the modeling and use of academic vocabulary and syntax.
The materials promote twelve Academic Habits and twenty standards-aligned Literacy Skills. The materials intend for students "to develop, apply, and extend" Academic Habits "as they progress through the sequence of instruction." Academic Habits include mental processes and communication skill sets such as, but not limited to, Preparing, Collaborating, Completing Tasks, Understanding Purpose And Process, and Remaining Open. Each Academic Habit is accompanied by general descriptors, and most units include rubrics designed for teachers to conduct observational assessments of Academic Habits, thus providing another opportunity for assessment. By comparison, the Literacy Skills articulated by the materials are focused on reading and writing skills; Academic Habits are mental and communication-based processes.
In the Teacher’s Edition, Grade 9 Developing Core Literacy Proficiencies: User Guide, the publisher includes a table that “lists the anchor Common Core State Standards that are targeted within the five Developing Core Literacy Proficiencies units and indicates the Literacy Skills and Academic Habits that are derived from or are components of those standards.” The instructional materials focus on “SL.1: Prepare for and participate effectively in a range of conversations and collaborations with diverse partners, building on others’ ideas and expressing their own clearly and persuasively.” Other speaking and listening standards within the strand are not targeted within the Developing Literacy Proficiencies units.
Throughout the curriculum, students are provided frequent opportunities to participate in evidence-based discussions. Many activities start with teacher-led, whole-class discussions to establish students' first impressions, with teachers modeling how to use evidence from the text to support observations. When students dive deeper into the texts, they are often assigned to work in pairs to discuss claims and organize evidence. In other activities, the curriculum suggests that students work in teams to become experts, then jigsaw into new groups to share what they have learned. These discussions are text-specific and ask students to refer to this textual evidence while presenting claims and validating observations. While discussions are evidence-based, teachers and students are not provided with protocols or models for conversation. Conversation is a tool used throughout the curriculum, but it is never explicitly taught or assessed.
The consistent design of the curriculum provides a focus on using textual evidence and contains sequenced tasks for most discussions to support the demonstration of academic vocabulary and analysis of syntax. This is maintained by the consistent use of a questioning path system and explicit modeling instructions for teachers to follow with students. At times, the Questioning Path Tool guides whole-class discussions or discussions between students in pairs and small groups for specific purposes. For example, Unit 4 is focused on research skills, and the questioning-based instruction promoted by the materials becomes inquiry-based discussion used by students grouped into research teams.
Materials provide multiple opportunities and questions for evidence-based discussions across the whole year’s scope of instructional materials. Examples include, but are not limited to:
- In Unit 3, Part 4, Activity 6, “The class discusses their new EBC’s (Evidence-Based Claims) from Activity 5 and students listen actively to portions of the text being read or presented.” Students discuss the text and review the self-developed Questioning Paths. In pairs, students discuss their claims and organize evidence. Students also participate in a whole-class read aloud to help each other analyze claims and selected evidence.
- In Unit 4, the core question, “Music: What role does it play in our lives?” might increase student engagement. In Unit 4, Part 1, Activity 1, the opening discussion in the Instructional Notes provides an opportunity for a discussion/question for “inquiry and research.” In Unit 4, Part 1, Activity 2, specific instructions are given after viewing a video on how to help students think about topic areas that might be interesting to research. This is done through full-class discussion.
The opportunities provided do not always adequately address and promote students’ ability to master grade-level speaking and listening standards. Examples include, but are not limited to:
- In Unit 1, Part 1, Activity 3, during small group work utilizing Academic Habits, the teacher’s edition shares that students “might reflect on how well they have demonstrated the skills of preparing for discussions by reading and annotating the text and considering the questions that have framed discussion.” The Discussion Habits Checklist is available for teachers and students to access using the RC (Reading Closely) Literacy Toolbox. Similarly, in Unit 1, the Student Edition highlights skills and habits, such as questioning, collaboration, and clear communication; notably, the students are reminded of the following: “These skills and habits are also listed on the Student Literacy Skills and Discussion Habits Checklist, which you can use to assess your work and the work of other students.” The skills and habits address core standards specifically targeted by the publisher in Developing Core Literacy Proficiencies Units.
- In Unit 4, Part 1, Activity 2, in the teacher’s edition, students in small groups “practice the thinking process by using the topic area and questions previously developed by the class and filling in the Tool’s second Area of Investigation section.” However, all the questions students are given after these instructions are questions that they are working on independently. The tie between the small group goal/organization and this activity is weak.
- In Unit 5, Part 1, Activity 4, “Students work in reading teams to develop a set of more focused text-specific questions…” Students can also work individually to review one of the background texts to find additional details. The note section suggests that students work in teams to become experts, then jigsaw into new groups to share what they have learned. Speaking and listening skills are not assessed. Students participate in a sharing out of ideas. In the activity, peers do not set rules for collegial discussions and decision-making or challenge ideas and conclusions of their peers as included in the CCSS.
The materials include grade-level-appropriate opportunities for evidence-based discussions that encourage the modeling and use of academic vocabulary and syntax, but the curriculum does not take advantage of these opportunities. Examples include, but are not limited to:
- On page xxxiii, the curriculum states that many decisions about the teaching of vocabulary are left up to the teacher. It also states that "activities and tools use vocabulary related to reading skills that students can apply while reading and writing." Additionally, keywords in the Reading Closely and Making Evidence-Based Claims units' texts "are highlighted and defined so that students and teachers can focus on them as needed." This shows that the curriculum does not provide guidance on how to intentionally incorporate academic or text-specific vocabulary into instruction. (This wording is the same in all grade levels.) The instructional materials do not provide the guidance necessary to ensure students will demonstrate independence in gathering vocabulary knowledge when considering a word or phrase important to comprehension or expression.
- In the Introduction for Unit 3, the teacher's edition states a "skillful reader of a literary work pays attention to what authors do--the language, elements, devices, and techniques they use, and the choices they make that influence a reader's experience with and understanding of the literary work." While the development of students' EBCs is grounded in this skillful reading, the same isn't true for speaking and listening. For example, in Unit 4, Part 1, Activity 4, a reflection question asks about communicating ideas with specific evidence, but no focus is included on modeling and use of academic vocabulary and syntax, which could easily be incorporated into this rhetorical analysis.
- In Unit 5, Part 5, Activity 1, the activity concerns the editing and revision stages of the writing process. After allowing students time to process aspects of their writing, the activity provides reflection questions to guide discussions about the students' drafts. The Text-Centered Discussion occurs in peer-editing pairs. Editing partners read, looking for "textual evidence that expresses the writer's understanding of the issue" about which they are writing. At this point in the curriculum, and with modeling provided by the teacher, students could appropriately use academic vocabulary when providing feedback. The materials provide a description of the task, but not a protocol with explicit steps for completing the task successfully. The opportunity is there, but the materials do not explicitly require students to use the vocabulary in their feedback. It could happen naturally, but it is not directly stated as an expectation by the materials and, therefore, the teacher.
Materials support students' listening and speaking (and discussions) about what they are reading and researching (shared projects) with relevant follow-up questions and supports.
The instructional materials reviewed for Grade 9 meet the criteria for the materials supporting students’ listening and speaking about what they are reading and researching (including presentation opportunities) with relevant follow-up questions and evidence.
Materials embed evidence-based academic discussions focused on listening and speaking skills in reading and writing processes. Students are often asked to engage in discussions about texts through activities such as note taking, annotating texts, and capturing what their peers say. Students then transfer the practice to their own writing through collaborative revision workshops with peers.
Speaking and listening instruction is applied frequently over the course of the year and includes facilitation, monitoring, and instructional supports for teachers. Examples include, but are not limited to:
- In Unit 1, Part 3, Activity 2, students discuss how the author’s use of language reflects his or her perspective on the subject. Students have to present evidence and connect their comments to the ideas of others. Students practice listening skills as they take notes, annotate their texts, and capture what peers say. The curriculum supports the development of listening and speaking skills with the formal and less formal versions of the Reading Closely Literacy Skills and Discussion Habits Rubrics.
- In Unit 2, Part 1, Activity 2, the teacher guides students to share ideas during a “brief discussion in which students volunteer something they learned about the speaker (Socrates).” Further guidance is offered in the Instructional Notes to assist the teacher in leading the discussion with follow-up questions to the students’ independent reading and to prepare them for the next step. An example of one such question is: “What words or sentences in the paragraph tell you this information?” Then, students transition into “the second guiding question: ‘What details or words suggest the author's perspective?’”
- In Unit 3, Part 2, Activity 2, students read aloud "The Short Happy Life of Francis Macomber" and continue to engage with the same guiding questions to elicit evidence of the author's language, ideas, and use of supporting details. Support is provided by the Making EBC Tool and text-specific Questioning Path Tool.
- In Unit 4, Part 1, Activity 2, students explore a research topic. Students watch a video and read a common text to stimulate thinking about what makes the topic interesting and to open up possible areas to investigate. After watching the video, small groups, and later the entire class, summarize what they have learned from the video. Students compare notes and note-taking strategies. Students then work in small teams to read an internet-based text and discuss what they already know about the text before they read it. Students listen and participate as the class discusses how the source they just read relates to the topic and the video they previously viewed.
- In addition, students are expected to exhibit the following behaviors during the Text-Centered Discussion:
- “Listening: Listen fully to what readers have observed; Consider their ideas thoughtfully; Wait momentarily before responding verbally.
- "Remaining Open: Avoid explanations or justifications for what they as writers have tried to do (no 'yes, but . . .' responses); Frame additional informal, text-based questions to further probe their readers’ observations.”
Materials include a mix of on-demand and process writing (e.g., grade-appropriate revision and editing) and short, focused projects.
The instructional materials reviewed for Grade 9 meet the criteria for materials including a mix of on-demand and process writing (e.g., multiple drafts, revisions over time) and short, focused projects, incorporating digital resources where appropriate. Throughout the units, the instructional materials require students to produce short, informal on-demand writings and longer, independent process writing tasks and essays.
On-demand writing tasks can consist of completing the worksheets/handouts/tools from the Literacy Toolboxes and evolve into students composing sentence-length, evidence-based claims and paragraphs. Students compose initial on-demand writing in pairs to become accustomed to the materials' Academic Habits, and the approach to revision progresses from informal collaborative small-group and class discussions in Unit 2 to more formal Research Teams in Unit 4. Examples of on-demand writings include, but are not limited to:
- Unit 1, Part 1, Activity 4 is focused on increasing students' close reading skills. In Part 1, students demonstrate their proficiency through discussions closely guided by their on-demand writing, which occurs in the Questioning Path Tool handouts included in the Literacy Toolbox. For example, Activity 4 tasks students with following the Guiding Questions provided in the tool to analyze details in a multimedia text, a TED Talk by Sir Ken Robinson. There are multiple text-centered questions, and the materials assign students to "write a few sentences explaining something they have learned."
- In Unit 2, Part 4, Activity 2, students consider text-based review questions, and “articulate and share their text-based responses and constructive reviewers claims” that they have generated based on the reading.
- In Unit 4, Part 1, Activity 2, students summarize their observations and understandings of a topic discussed in class and brainstorm potential areas of investigation as well as details about the topic to expand or increase understanding.
Opportunities for process writing tasks include, but are not limited to:
- In Unit 2, Part 5, Activity 5, “students used a criteria-based checklist of feedback from peers in a collaborative review process to revise and improve their evidence-based claim essays.” Once students have completed the first drafts of their essays, they work in writing groups to complete two review and revision cycles. The first cycle focuses on the essay’s content or on evaluating and improving the content or quality of claims and evidence; the second cycle focuses on improving organization and expression and clarity of their writing.
- In Unit 4, Part 2, Activity 3, students are directed to "consult a librarian or media specialist and conduct web- or library-based searches" for additional sources to complete the culminating research writing assignment. The process writing for this is sequenced out beforehand with the Common Sources and Literacy Toolbox handouts specific to this unit. The peer-feedback and revision processes practiced in previous units are implemented again to develop and improve the students' research essays. Part 5 focuses on research skills for the summative purpose of creating a research portfolio, written research narrative, or research-based product to present.
Materials provide opportunities for students to address different types/modes/genres of writing that reflect the distribution required by the standards. Writing opportunities incorporate digital resources/multimodal literacy materials where appropriate. Opportunities may include blended writing styles that reflect the distribution required by the standards.
The instructional materials reviewed for Grade 9 partially meet the criteria for materials providing opportunities for students to address different text types of writing that reflect the distribution required by the standards. Writing is embedded throughout the curriculum; however, the writing instruction does not fully reflect the distribution of the standards, in particular the various elements of narrative writing, even though narrative writing is at times included as a follow-up reflection to longer research projects. The 9-12 CCSS state within narrative writing that students write narratives to develop real or imagined experiences or events using effective technique, well-chosen details, and well-structured event sequences. In particular, students are to use narrative techniques, such as dialogue, pacing, description, reflection, and multiple plot lines, to develop experiences, events, and/or characters. Students are not provided opportunities to engage in narrative writing tasks that allow sufficient practice of specific narrative techniques as required by the standards.
The curriculum provides a variety of unit-specific checklists and rubrics so that students and teachers can monitor progress in literacy skills (including writing) and academic habits such as collaborating and clearly communicating. This curriculum is based on reading grade-appropriate texts and responding to these texts in both formal and informal writing.
Materials provide multiple opportunities across the school year for students to learn, practice, and apply different genres/modes of writing that reflect the distribution required by the standards. While students encounter a variety of texts—speeches, essays, historical texts, TED talks—the writing that they are asked to do is not varied. Although they write expository and argumentative pieces, they are not asked to write outside these genres. Examples include, but are not limited to:
- In Unit 3, Part 5, Activity 1, students are considering the piece, “The Short Happy Life of Francis Macomber,” by Ernest Hemingway with respect to its content, as well as use of literary techniques. While they discuss these techniques in writing a global claim about literary techniques, there are no opportunities to also practice/write/develop narrative techniques such as dialogue, pacing, description, plot lines, or develop experiences, events, and/or characters as stated in the standards.
- In Unit 4, Part 5, Activity 2, the Teacher’s Edition states, “students write a reflective research narrative explaining how they came to their understanding of the topic, the steps they took to reach that understanding, and what they have learned about the inquiry process.” Instructional Notes are included to assist teachers in guiding students through the Activity Sequence as students “tell the ‘story’ of their search,” including the following reflective points:
- Their initial understanding of the topic of Music and its importance in our lives.
- Their culminating understanding or view of the topic.
- The steps they took to reach their evidence-based perspective.
- Their personal experience learning about and using the inquiry process to research the issues connected to the topics they have investigated.
In this type of writing, students will connect their ideas to the sources used during their research; however, due to the nature of this assignment, students are not able to use narrative techniques such as multiple plot lines or sensory language to convey a vivid picture of characters. An example of this follows:
- In Unit 4, Part 5, Activity 2, the Instructional Notes relating to the narrative state, “Because this may be the first time in the Developing Core Proficiencies program sequence that students have written a narrative, they may want to consider the specific Expectations of CCSS W.3 at ninth grade…” and list these standards for the teacher. There is no additional guidance to assist teachers and ensure students have practiced and reached proficiency of all narrative techniques for the grade level.
Materials provide opportunities for students/teachers to monitor progress in writing skills. Examples include, but are not limited to:
- In Unit 2, Part 3, Activity 1, the teacher’s edition states that students read “paragraphs 11-18 of Plato’s Apology, guided by a Guiding Question(s) from the model Questioning Path Tool and use the Forming EBC Tool to make an evidence based claim.” Following the reading, “students record key details, connections, and an initial evidence-based claim on the tool.” The Instructional Notes provide teachers with reminders in Part 3: Formative Assessment Opportunities: “Students should now be beginning to develop more complex claims about challenging portions of the text. Their Forming EBC Tool should demonstrate a solid grasp of the claim-evidence relationship, but do not expect precision in the wording of their claims.” Tools are provided to both teachers and students to assess Academic Habits.
- Each Part of Unit 3 ends with Formative Assessment Opportunities which incorporate writing skills ranging from Attending to Details and Identifying Relationships to claims and textual evidence and claims-evidence relationship writings.
- In Unit 3, Part 3, Activity 2, students compare the draft claims they have written and teachers facilitate a short comparative discussion that helps students reflect on the strengths and/or weaknesses of their claims.
- Unit 5 focuses on the argumentative mode of writing and addresses the standards’ persuasive appeals by continuing to focus on EBCs. It follows the same processes for multi-stage collaborative reviews for development and revisions in writing and opportunities for teachers to track students’ progress: formative assessments and rubric/checklists to observe demonstrated Literacy Skills and Academic Habits.
Where appropriate, writing opportunities are connected to texts and/or text sets (either as prompts, models, anchors, or supports).
- The Questioning Path Tool questions written by the students are always connected to a specific text.
- The Forming Evidence-Based Claims Tool asks students to respond to various texts to capture details and their thinking, as well as to identify how texts connect with one another.
- As part of Unit 1’s final assignments, students are asked to write a multi-paragraph “explanation of something you have come to understand by reading and examining your text.” Students are expected to use one of three final texts and present and explain the central idea, use quotations and paraphrases to support the central idea, explain how the central idea is connected to the author’s purpose, and explain a new understanding.
Materials include a number of writing opportunities that span the entire year. Each final writing task includes formal, usually multi-paragraph essay writing. Students also write throughout each unit in preparation for these final writing tasks. These shorter, informal writing tasks can be found in the form of independent writing, writing text-based explanations, and writing EBCs in pairs.
- In Unit 3, the curriculum provides a Student Making Evidence-based Claims Literacy Skills Checklist. This checklist allows the student to assess skills in three areas: Reading, Thinking, and Writing. Various checklists also appear in the other units and are modified to the skills being assessed in that unit.
- Unit 4 is focused on building students' proficiency with research skills. The use of a common source set supports a teacher's ability to track students' progress while modeling research skills. Students practice research skills throughout the unit with guidance from the teacher and from peer reviews. The writing conducted fits into two products: (1) a research portfolio of sources and findings and (2) a reflective research narrative. As a third, optional writing product, the materials recommend students maintain a reflective journal throughout the process.
Materials include frequent opportunities for evidence-based writing to support sophisticated analysis, argumentation, and synthesis.
The instructional materials reviewed for Grade 9 meet the criteria for materials including frequent opportunities for research-based and evidence-based writing to support analysis, argument, synthesis and/or evaluation of information, supports, claims.
Instructional materials include frequent opportunities for students to write evidence-based claims relating to various topics and in response to text sets organized around the topic. Students are asked to analyze text, develop claims, and support those claims with evidence from the text. Tools, such as Questioning Path Tools, Approaching Text Tools, and Analyzing Details Tools, are provided to help students analyze and organize text to be used in later writing. The checklists and rubrics also include criteria for Using Evidence, which ask students to support explanations/claims with evidence from the text by using accurate quotations, paraphrases, and references. Opportunities for writing to sources include informal writing within the units and formal writing in the form of culminating tasks.
All units provide students the opportunity to engage with texts to compose evidence-based writing for the purpose of research, argumentative, or explanatory/informational writing. Optional activities are provided for the teacher to expand and include more writing opportunities and modes.
Texts include a variety of sources (print and digital). Materials meet the grade-level demands of the standards listed for this indicator.
Materials provide frequent opportunities across the school year for students to learn, practice, and apply writing using evidence. Examples include, but are not limited to:
- In Unit 1, Part 4, Activity 4, students are tasked with writing a multi-paragraph essay with a central idea and explaining how it is developed, using text-based evidence gathered in previous activities with the Analyzing Details Tool.
- In Unit 2, Questioning Path Tools connect the reading to the writing students will do and the Deepening portion of this tool is strongly text dependent.
- In Unit 2, the Evidence-Based Claims (EBC) Tool helps students find evidence in the text to support the claims they are creating by providing a place for students to record the details they find. This tool is tied to the targeted Literacy Skill of Using Evidence. Part 1, Activity 4 tasks the teacher with modeling the Forming Evidence-Based Claims handout from the Literacy Toolbox. It is designed for students to “first note details that stand out” and then relate details to each other, explaining any connection between the text and another text, or the text and a reader’s experience. This activity leads students to begin developing claims that are supported by evidence. It is modeled first by teachers using the Supporting Evidence-Based Claims Tool.
- Unit 4 connects reading with writing by asking students to draw connections among key details and ideas within and across texts.
Writing opportunities are focused around students' analyses and claims developed from reading closely and working with texts and sources to provide supporting evidence. Instructional notes are very specific with regard to helping students develop clear claims, beginning with the definition of what a claim is and the explanation that a claim is only as strong as the evidence that supports it. Examples include, but are not limited to:
- In Unit 1, Part 2, Activity 5, “students write a short paragraph explaining their analysis of the text and reference (or list) specific textual details.” Students are asked to write a short paragraph of several clear, coherent, and complete sentences. Students are to explain their analysis of Text 5.
- In Unit 2, Part 5, Activity 5, “Students use a criteria-based checklist of feedback from peers in a collaborative review process to revise and improve their evidence-based claim essays.” Once students have completed the first drafts of their essays, they will work in writing groups to complete two review and revision cycles. The first cycle focuses on the essay’s content or on evaluating and improving the content or quality of claims and evidence; the second cycle focuses on improving organization and expression and clarity of their writing.
Materials provide opportunities that build students’ writing skills over the course of the school year. The teacher edition shares the Unit Design and Instructional Sequence: students are presented with a topic and “begin learning to read closely by first encountering visual images, which they scan for details, and then multimedia texts that reinforce the skills of identifying details and making text-based observations from those details.” (xxxviii). Therefore, students are provided an opportunity to learn about the topic before exposure to the more complex grade-level texts and then move forward to more challenging texts. Writing opportunities are varied over the course of the year. Examples of varied writing opportunities that build students' skills over the course of the year include, but are not limited to:
- Unit 1 focuses on increasing students’ abilities to read for detail and increase understanding of the text. Most of the frequent writing opportunities occur in completing the Literacy Toolbox handouts. The guiding questions provided by the handouts are text-dependent and require students to reference details from the text to support their understanding and explanation about the central idea of the text. This is not necessarily evidence-based; students are merely focused on explanatory or informational writing in this unit. In Unit 1, Part 1, Activity 1, students are introduced to the topic through an analogy from another field.
- All the activities in Unit 1 build to a two-stage culminating activity. Students will do the following: 1) Analyze one of three related texts and draft a multi-paragraph explanation of their text, and 2) Lead and participate in a comparative discussion about the three texts. Students are writing informative/explanatory texts to examine and convey complex ideas, concepts, and information clearly and accurately through the effective selection, organization, and analysis of content. In addition, students are drawing evidence from literary or informational texts to support analysis, reflection, and research.
- Unit 4, Part 5, Activity 4 is an optional activity for students to compose a multimedia presentation or formal essay to communicate a perspective. The materials suggest various opportunities for writing modes to be addressed using EBCs--an informational presentation, a research-based explanation, a thesis-driven argument, or an Op-Ed piece. The summative writing assignment for the unit is a reflective research narrative. This unit addresses many of the expectations for this indicator.
Materials include instruction and practice of the grammar and conventions/language standards for grade level as applied in increasingly sophisticated contexts, with opportunities for application in context.
The instructional materials reviewed for Grade 9 partially meet the criteria for materials including instruction of the grammar and conventions/language standards for the grade level as applied in increasingly sophisticated contexts, with opportunities for application in context.
The materials present tables in the initial overview of each unit and its sub-sections outlining the alignment to Common Core State Standards. The materials focus on select reading, writing, and speaking and listening standards and do not state a direct alignment to the language standards. However, the materials do provide opportunities for students to demonstrate some, but not all, language standards. This occurs in the form of reading and demonstrating understanding of the text and the intentions behind the authors' word choices. The provided rubrics direct students and teachers to expect standard English language conventions and punctuation to be demonstrated in writing assignments. However, the materials are not as specific about these expectations as the Common Core State Standards for language conventions. The materials do not clearly provide opportunities for students to practice all language and grammar expectations outlined by national college- and career-readiness standards.
The materials promote and build students' ability to apply conventions and other aspects of language within their own writing. Instructional materials provide opportunities for students to grow their fluency with the language standards through practice and application. The materials do not include explicit instruction of all grammar and conventions standards for Grade 9 and do not include Conventions of Standard English, Knowledge of Language, or Vocabulary Acquisition and Use as specific CCSS Anchor Standards Targeted in the Developing Core Literacy Proficiencies Units. Evidence to support this rationale includes, but is not limited to:
- In the Developing Core Literacy Proficiencies: User Guide, the instructional materials provide documentation for the Alignment of Targeted CCSS with OE Skills and Habits. Reading standards 1-10, writing standards 1-9, and SL.1 are included. No language standards are listed.
- In the Literacy Skills, "Using Conventions" is described as using "effective sentence structure, grammar, punctuation, and spelling to express ideas and achieve writing and speaking purpose," including "writing and speaking clearly so others can understand claims and ideas." In Unit 1, the Targeted Literacy Skills state that students will "learn about, practice, develop, and demonstrate foundational skills necessary to read closely, to participate actively in text-centered questioning and discussion, and to write text-based explanations." They align the unit goals with CCSS ELA Literacy W.4—produce clear and coherent writing.
- There are no opportunities for direct instruction of grammar and conventions/language standards. For example, in Unit 1, Part 4, Activity 4, instructions in the teacher's edition state, "students' writing can be reviewed in relationship to the specific grade-level expectations for writing standard 2 (explanatory writing), especially if students have been working on writing explanations in previous units and are ready for more formal feedback." Within that standard, the teacher's edition lists items a-f, of which d states "use precise language and domain-specific vocabulary to manage the complexity of the topic" and e states "establish and maintain a formal style and objective tone while attending to the norms and conventions of the discipline in which they are writing." No specific instruction for these skills has been attended to in the materials.
- On the Attending to Details handout, under "analyzing details," there are some specific prompts under "Language and Structure" for students to keep in mind, such as "Authors use language or tone to establish a mood" and "Authors use a specific organization to enhance a point or add meaning." While this tool may make students more aware of those moves/rhetorical choices within the writing they are reading, the prompts are not linked to instruction of grammar and conventions.
- Unit 2 does not provide the instruction or opportunities necessary for students to master the use of parallel structure or the use of various types of phrases and clauses to convey specific meaning or add variety to writing or presentation. CCSS.ELA.Literacy.L.9-10.2 requires that students demonstrate command of the conventions of standard English capitalization, punctuation, and spelling when writing. Unit 2 does not provide the instruction or opportunities necessary for students to master the use of a semicolon to link independent clauses or the use of a colon to introduce a list. Spelling is also not addressed.
- Unit 3 states that it provides “several opportunities for students to apply and develop literacy skills,” including using conventions. However, instruction does not directly support this. For example, in Unit 3, Part 3, Activity 3, the work with developing an evidence-based claim includes five steps:
- Reflecting on how one has arrived at the claim,
- Breaking the claim into parts,
- Organizing supporting evidence in a logical sequence,
- Anticipating what an audience will need to know in order to understand the claim,
- Planning a line of reasoning that will substantiate the claim.
All instruction and accompanying tools are in support of practicing, developing, and writing EBCs. For example, in Unit 3, Part 4, Activity 8, "Using Peer Feedback to Revise a Written EBC," peers give feedback on clarity of the claim, the defensibility of the claim, the use of evidence, and the organization. No feedback is listed for conventions.
- For Unit 4, the instructional materials provide guidance on How This Unit Aligns with CCSS for ELA and Literacy; primary alignments include CCSS.ELA-Literacy.W.9-10.7-9, CCSS.ELA-Literacy.SL.1, CCSS.ELA-Literacy.W.2-5. Supporting alignments include CCSS.ELA-Literacy.RI.1-4, 6, and 9. No language standards are included. Unit 4 does not provide the instruction or opportunities necessary for students to master the use of parallel structure or the use of various types of phrases and clauses to convey specific meaning or add variety to writing or presentation (CCSS.ELA.Literacy.L.9-10.1.A-B). Unit 4 does not provide the instruction or opportunities necessary for students to master the use of a semicolon to link independent clauses or the use of a colon to introduce a list. Spelling is also not addressed (CCSS.ELA.Literacy.L.9-10.2.A-C).
- Unit 5 culminates in writing an argumentative essay, and listed in the targeted literacy skills is "using effective sentence structure, grammar, punctuation, and spelling to express ideas and achieve writing and speaking purposes." In Unit 5, Part 1, Activity 5, there is a formative assessment that serves as a building block for students' final argument, where they write a 1-3 paragraph explanation of their multi-part claim. It is supposed to "represent their best thinking and clearest writing," but beyond that indicator, there is no built-in instruction or sense of how that looks with regard to grammar and conventions.
- In Unit 5, Part 5, Activity 1, students work on strengthening writing collaboratively. The teacher's edition references Writing with Style: Conversations on the Art of Writing by John R. Trimble. One of his four essentials is "Use confident language—vigorous verbs, strong nouns, and assertive phrasing." In the remaining Activities 2-5, the focus is on the following areas:
- Content: Ideas and Information,
- Organization: Unity, Coherence, and logical sequence,
- Support: Integrating and citing evidence,
- Additional Rounds of Focused review and revision.
While grammar and convention mistakes and missteps could be picked up in these rounds of revision, the materials do not include any direct lessons or instructions.
Building Knowledge with Texts, Vocabulary, and Tasks
The instructional materials meet the expectations of Gateway 2. Texts and tasks are organized around topics and themes that support students' acquisition of academic vocabulary. Comprehension of topics and concepts grows through text-connected writing and research instruction. The vocabulary and independent reading plans may need additional support to engage students over a whole school year.
The instructional materials meet the expectations of the building knowledge criteria. Texts and tasks are organized around topics and themes that support students' growing academic vocabulary and understanding and comprehension of topics and concepts. The materials partially support a comprehensive vocabulary plan and independent reading plan over the course of the year. The materials include cohesive writing and research instruction that is interconnected with texts to grow students' literacy skills by the end of the school year.
Texts are organized around a topic/topics or themes to build students' knowledge and their ability to comprehend and analyze complex texts proficiently.
The instructional materials reviewed for Grade 9 meet the criteria that texts are organized around a topic(s) or themes to build students' knowledge and their ability to read and comprehend complex texts proficiently. Grade 9 materials are grouped around topics such as Unit 1's focus on the changing dynamic of education in the United States, Unit 4's focus on the role of music in our lives, and Unit 5's focus on terrorism. This intense focus builds not only literacy skills but also students' content knowledge. Since the texts are appropriately complex, they help increase students' ability to read and comprehend complex texts. Also, the instructional materials allow students to develop a range of reading and writing skills. Texts are sequenced to increase in complexity, both in terms of reading difficulty and the complexity of the writing tasks.
Evidence that the materials meet the criteria include, but are not limited to:
- The overview for Unit 1 “Develops students’ abilities to read closely for textual details” and “lays out a process for approaching, questioning and analyzing texts that help readers focus on key textual characteristics and ideas.” The theme to engage students in the program and skills is “Education is the new currency”; it “presents students with a series of texts related to the changing dynamic of education in the United States.” Students are offered a variety of texts in this unit ranging from photographs to an excerpt of Helen Keller’s autobiography to a TED talk.
- The sole text for Unit 2 is Plato's Apology. The entire unit focuses on the topic of Socrates's trial, in which he is charged with corrupting the youth and being impious toward the gods.
- The texts for Unit 3 are in the pursuit of making evidence-based claims (EBCs) about literary technique and use Ernest Hemingway's short story, "The Short Happy Life of Francis Macomber." Three key sets of questions are introduced in Part 1, Activity 1:
- What specific aspects of the author’s craft am I attending to? (Through what lenses will I focus my reading?)
- What choices do I notice the author making and what techniques do I see the author using? What textual details do I find as evidence of those choices and techniques?
- How do the author’s choices and techniques influence my reading of the work and the meaning that emerges for me? How can I ground my claims about meaning in specific textual evidence?
- The texts for Unit 4 all relate to the topic “Music: What Role Does It Play in Our Lives?” While most of the unit texts are in the form of online articles, the variety of perspectives and subtopics increases student engagement. Unit 4 texts include, but are not limited to:
- “Imagine Life Without Music” - Video
- “A Brief History of the Music Industry” - Article
- “What is Online Piracy?” - Article
- “Why Your Brain Craves Music” - Article
- “25 Most Important Civil Rights Moments in Music History” - History
- “Why I Pirate” - Article
- The topic area and texts for Unit 5 focus on the theme of terrorism and what is meant by terrorism. The Unit Overview states, "the texts in this unit are offered in the form of texts sets, in which texts are grouped together for instructional and content purposes." Part 1 of Unit 5 introduces students to the concept of evidence-based argumentation, and students read and write about a variety of informational texts to build an understanding of terrorism as a definition and concept. In Part 2, students analyze arguments through close-reading skills and the terminology used in delineating argumentation. Students read and analyze arguments associated with terrorism, responses to terrorism, and terrorism policy. Part 3 deepens students' abilities to read and think about arguments.
Materials contain sets of coherently sequenced higher order thinking questions and tasks that require students to analyze the language (words/phrases), key ideas, details, craft, and structure of individual texts in order to make meaning and build understanding of texts and topics.
The instructional materials reviewed for Grade 9 meet the criteria that materials contain sets of coherently sequenced higher order thinking questions and tasks that require students to analyze the language (words/phrases), key ideas, details, craft, and structure of individual texts in order to make meaning and build understanding of texts and topics.
In the User Guide for the Grade 9 instructional materials, the teacher's edition states that "at the heart of the Odell Education approach to teaching close reading is an iterative process for questioning texts that frames students' initial reading and then guides them as they dig deeper to analyze and make meaning. This questioning process differs from traditional text questioning in that its goal is not to 'find the answer,' but rather to focus student attention on the author's ideas, supporting details, use of language, text structure, and perspective—to examine a text more closely and develop a deeper understanding" (xxii). The tools included in the Odell curriculum are the Reading Closely Guide, the Guiding Questions Handout, the Questioning Path Tool, the Approaching the Text Tool, the Analyzing Details Tool, and the Forming Evidence-Based Claims Tool. Consistently throughout the Grade 9 instructional materials, higher order thinking questions are provided in the form of both text-dependent and text-specific questions. These questions are embedded into Questioning Path Tools that students use as guides when analyzing texts. These questions help students make meaning of what they are reading and build understanding of multiple, related texts as they prepare for each unit's culminating task. The use of this array of tools, questions, and tasks not only provides evidence of student understanding of definitions and concepts but also helps students make meaning and build understanding of texts.
Evidence that supports this rationale includes, but is not limited to:
- The curriculum provides the Reading Closely: Guiding Questions Handout which serves as guidance when analyzing texts that do not have provided, text-specific Questioning Path Tools. The Reading Closely: Guiding Questions Handout divides questions into four categories: approaching, questioning, analyzing, and deepening. Questions are provided for each section to address language, ideas, perspective, and structure.
- In Unit 2, Part 3, Activity 2, the curriculum provides questions that require students to analyze language (words/phrases), key ideas, details, craft, and structure of Plato’s Apology. Questions include:
- "In paragraph 13, Socrates says he is 'convinced that I never deliberately harmed anyone.'... Socrates claims that his accusers have been found guilty of the truth. What does this language reveal about Socrates’s perspective of himself and his audience?
- In paragraph 11, Socrates states that he will give his defense, 'not for my own sake...but for your sake.' What details does Socrates give to support this stance? How does Socrates arrive at such a conclusion?
- In what ways are ideas, events, and claims linked together in the text?"
- Unit 3 develops students’ abilities to make evidence-based claims (EBCs) about literary techniques through activities based on a close reading of Ernest Hemingway’s short story, “The Short Happy Life of Francis Macomber.” The introduction to the unit emphasizes that students come to understand that in a great literary work, “all aspects are significant and have some bearing on the total significance of the work.” The close reading of the text is guided by these broad questions:
- "What specific aspect(s) of the author’s craft am I attending to? (Through what lens(es) will I focus my reading?
- What choices do I notice the author making, and what techniques do I see the author using? What textual details do I find as evidence of those choices and techniques?
- How do the author’s choices and techniques influence my reading of the work and the meaning that emerges for me? How can I ground my claims about meaning in specific textual evidence?"
- In Unit 3, Part 3, Activity 4, students use text-specific questions to discuss a section of the text and produce a second EBC. Using both their Questioning Path Tool and the Forming EBC Tool, students reflect on how Hemingway’s use of techniques affects the reader’s experience of the story. In Unit 3, Part 5, Activity 4, students independently draft their final EBC essay which will be evaluated for their demonstration of three key expectations and criteria:
- Demonstrate an accurate reading and insightful analysis of the text.
- Develop a supported claim that is clearly connected to the content of the text.
- Successfully accomplish the five key elements of a written EBC.
- Unit 4 focuses on student research. For this unit, the curriculum recommends that the Guiding Questions Handout be used in conjunction with the blank Questioning Path Tool. The Guiding Questions Handout provides questions that require students to analyze language (words/phrases), key ideas, details, craft, and structure of individual texts so that they can make meaning of the texts. Questions include:
- "What do the author’s words and phrases cause me to see, feel, or think?
- How might I summarize the main ideas of the text and the key supporting details?
- In what ways are ideas, events, and claims linked together in the text?
- What do I notice about the structure of specific elements (paragraphs, sentences, stanzas, lines, or scenes)?"
- Unit 5’s focus, pedagogy, and instructional sequence “are based on the idea that students (and citizens) must develop a mental model of what effective—and reasoned—argumentation entails.” The unit focuses on learning about and applying academic concepts related to argumentation: issue, perspective, position, premise, evidence, and reasoning. The topic area of the unit and the texts focus on terrorism. New tools specific to argumentation are introduced to support students: the Evidence-Based Arguments Terms Handout; the Delineating Arguments Tool; Model Arguments; and the Evaluating Arguments Tool. In Unit 5, Part 1, Activity 4’s Instructional Notes, teachers are told to have students use questions from their Reading Closely for Textual Details and Researching to Deepen Understanding tools to frame their own, more focused questions about the issue and texts. They use these questions to “drive a deeper reading of the previous texts or of additional texts providing more background and perspectives on the topic.” Unit 5, Part 2, Activity 5 presents students with different perspectives, positions, and arguments for them to read and analyze. “Students will use these texts to move from guided to independent practice of the close-reading skills associated with analyzing an argument” (485).
Materials contain a coherently sequenced set of text-dependent and text-specific questions and tasks that require students to build knowledge and integrate ideas across both individual and multiple texts.
The instructional materials reviewed for Grade 9 meet the criteria that materials contain a coherently sequenced set of text-dependent and text-specific questions and tasks that require students to build knowledge and integrate ideas across both individual and multiple texts. The curriculum provides both text-dependent and text-specific questions to support students’ analysis as they read texts. These questions are provided through Questioning Path Tools and the Guiding Questions Handout. These questions guide teachers as they support student growth in analyzing language, determining main ideas and supporting evidence, identifying author’s purpose and point of view, and analyzing the structure of texts.
Both the student work with individual and multiple texts and the teacher materials provide support in growing students’ analytical skills. Examples include, but are not limited to:
- Unit 2 develops students’ abilities to make evidence-based claims (EBCs) based on a close reading of Plato’s Apology of Socrates. Students use a question-based approach to read and analyze the text, building on and applying learning from the Reading Closely unit.
- In the Questioning Path Tool (Part 1, Activity 2) over paragraphs 1-3, both text-dependent questions (“What does Socrates’s use of the word slandered reveal about his position? How does Socrates make it clear he is innocent?”) and text-specific questions (“In paragraph 2, why does Socrates ask a question to himself as if the audience asked him? How does this paragraph relate to the first and third paragraphs? Why would Socrates pretend the audience is asking him questions?”) are included.
- In Part 2, Activity 2, instructional notes guide teachers through the reading of paragraphs 4-10 of Plato’s piece. “Considering the question and the claim, students should search first for literal details about what the oracle says and how Socrates responds. The questions in the Analyzing and Deepening stages of the model Questioning Path Tool should then help them read and annotate the text looking for additional details, words, and images that further reveal Socrates’s understanding of the oracle’s claim."
- The Instructional Notes in Part 4, Activity 2 invite teachers to model with a draft paragraph to help students work with reading a written draft of an EBC.
- Unit 4 focuses on student research. While the instructional materials do not provide a specific Questioning Path Tool for each recommended text, they do provide a blank Questioning Path Tool that students and/or teachers could design for each source; the materials recommend that the Guiding Questions Handout be used as a guide when using this blank tool (page 420 of the Teacher’s Edition); the Guiding Questions Handout provides sample text-dependent questions such as, “What do you think the text is mainly about - what is discussed in detail?” and “What evidence supports the claims in the text, and what is left uncertain or unsupported?”
- Unit 4 includes a common source set to help students explore the question/theme “Music: What Role Does it Play in Our Lives?” In Part 1, Activity 2, while the teacher leads a class exploration of a topic, students independently explore the research topic. Using the Guiding Questions Handout, students reflect on the video “Imagine Life Without Music” by answering three questions: "What new ideas or information do I find in the text? What ideas stand out to me as significant or interesting? How do the text’s main ideas relate to what I already know, think, or have read?" As students build their own source sets in Part 2, Activity 4 asks them to assess the sources by considering three key factors:
- Accessibility and interest: How readable and understandable is the source for the researcher and how interesting or useful does it seem to be?
- Credibility: How trustworthy and believable is the source, based on what the researcher knows about its publisher, date of publication, author (and author’s perspective), and purpose?
- Relevance and richness: How closely connected is the source to the topic, Area of Investigation, and Inquiry Path(s)? How extensive and valuable is the information in the source?
- Part 5 provides the opportunity to analyze across multiple texts by asking students to communicate an evidence-based perspective. Activity 2 asks students to write a reflective research narrative explaining how they came to their understanding of the topic, the steps they took to reach that understanding, and what they have learned about the inquiry process.
The questions and tasks support students' ability to complete culminating tasks in which they demonstrate their knowledge of a topic through integrated skills (e.g. combination of reading, writing, speaking, listening).
The instructional materials reviewed for Grade 9 meet the criteria that the questions and tasks support students’ ability to complete culminating tasks in which they demonstrate their knowledge of a topic through integrated skills (e.g., a combination of reading, writing, speaking, and listening). The overall curriculum, as well as each unit within it, systematically builds on reading, writing, listening, and speaking skills to support students in achieving the tasks included. Questions and tasks, specifically designed to lead up to the culminating task for each unit, support students in building towards independence in their work and demonstrating knowledge of a topic. While reading and writing tend to be the focus of these tasks, speaking and listening are incorporated into not only the culminating tasks but also the activities leading up to them. Students are provided multiple tools, such as the Approaching Texts Tool and the Organizing Evidence-Based Claims Tool, that provide guidance as students read texts and begin writing about those texts. These tools serve as formative assessments that help teachers determine whether or not students have the skills necessary to complete the culminating tasks.
Examples include, but are not limited to:
- Unit 1’s culminating task asks students to become text experts, write a text-based explanation, and lead/participate in a text-centered discussion. In the first part of the culminating task, students are required to “become an expert about one of three final texts in the unit”; in this section, students build and demonstrate their knowledge through reading. In the second part of the culminating task, students are required “to plan and draft a multi-paragraph explanation” of something they came to understand by reading and examining their texts; this section focuses on using writing to demonstrate their understanding of the topic. The final part of the culminating task requires students to “prepare for and participate in a final discussion;” this section allows students to demonstrate comprehension and knowledge through speaking and listening. The instructional materials also provide questions and tasks throughout the unit that serve as formative assessment opportunities. For Unit 1, Part 2, the instructional materials suggest that the Approaching Texts Tools for Texts 2 and 5 be used as formative assessments to gauge students’ use of questioning to focus reading, ability to annotate effectively, and ability to select details.
- Unit 2’s activities focus on a close reading of Plato’s Apology of Socrates. The teacher’s edition describes the sequence of learning activities as supporting “the progressive development of the critical reading and thinking skills involved in making evidence-based claims (EBCs).” Parts 1 and 2 focus on close reading and forming and supporting EBCs as readers, using Questioning Path Tools for additional support in this work. In Part 2, Activities 3-5, students work in pairs, as well as have class discussions, a process the teacher’s edition describes as helping to “develop a class culture of supporting all claims, including oral critiques, with evidence” (153). Part 3 focuses on preparing to express written EBCs by organizing evidence and thinking. Finally, Parts 4 and 5 task students with communicating EBCs in paragraphs and essays. This process begins with modeling of an EBC in Part 4, Activities 1 and 2, with students continuing this work in Activities 3-8 in pairs, class discussion and peer feedback. In Part 5, students work more independently to craft their EBC essays and work through a two-stage collaborative review and revision process.
- Unit 3’s culminating task asks students to read the final section of text independently, develop an EBC, and draft a multi-paragraph essay. This final task’s main focus is reading and writing. In the first part of the final assignment, students read and annotate the final section of Hemingway’s short story and then compare notes with other students. With the aid of the Forming EBC Tool and the Organizing EBC Tool, students then write a one- or two-paragraph draft of their claim using the Writing EBC Handout. These tools are used as guides for students during the process; they also allow teachers to gauge student readiness and provide assistance if students are not “on track” before they begin drafting their multi-paragraph essays. In the second part of this final assignment, students write a multi-paragraph essay about the cumulative effects of a literary technique that Hemingway uses. Students then use a Forming EBC Tool and an Organizing EBC Tool to begin organizing ideas and evidence. These Tools can also be used as formative assessments to ensure that students are ready to begin their final essays. After drafting the essays, students review and improve their drafts through a collaborative process.
- Unit 4 “develops explorative proficiency: researching to deepen understanding.” Using the question/theme “Music: What Role Does it Play in Our Lives?”, students collaboratively explore a topic, reading to gain background knowledge and choose an area of investigation. From there, in Part 2, they focus on the “essential skills for assessing, annotating, and making notes on sources to answer inquiry questions.” In Part 3, students make an EBC and analyze key sources. In Part 4 they review and evaluate their materials and analysis, and in Part 5 they organize their research and synthesize their analysis to create a research-based product.
- Students can choose to write a reflective research narrative or do a multimedia presentation. Students keep a research portfolio along the way and “these products can be used as evidence for the development of the full range of targeted Literacy Skills and Academic Habits” (401).
- Unit 5’s culminating task asks students to read a collection of informational texts, develop a supported position on an issue, and write a multi-paragraph essay making a case for that position. Like Unit 3, Unit 5’s culminating task’s main focus is reading and writing. Students are asked to review previously read texts and the claims they formed earlier in the unit, along with evidence to support those claims. Students then use a Delineating Arguments Tool to plan their essays. This tool serves as a formative assessment and can be reviewed by the teacher to determine if students are ready to move on to drafting their argumentative essays. Before final publication, students are encouraged to “use a collaborative process with other students to review and improve” their drafts.
Materials include a cohesive, consistent approach for students to regularly interact with word relationships and build academic vocabulary/ language in context.
The materials for Grade 9 partially meet the criteria that materials include a cohesive, year-long plan for students to interact with and build key academic vocabulary words in and across texts.
While the curriculum provides opportunities for students to increase their vocabulary, materials do not provide teacher guidance outlining a cohesive, year-long vocabulary development component. The curriculum states, “Although leaving many decisions about the teaching of vocabulary to the teacher, the program provides opportunities for students to increase their vocabulary in areas related to specific content and fundamental to overall literacy” (xxxiii). Examples include, but are not limited to:
- Unit 1, Part 2, Activity 1 asks students to read "The Story of My Life" by Helen Keller. The curriculum identifies and defines a number of vocabulary words that might be unfamiliar to students. However, the only vocabulary instruction provided comes in the form of questions such as, “How do specific words or phrases influence the meaning of the text?” or “What language does she use to describe the brook and the river and how do the words help me think about the differences between the two?”
- Unit 3’s sole text is “The Short Happy Life of Francis Macomber.” Teachers are directed to find this source on the internet; therefore, unfamiliar vocabulary words are not identified or defined. In Unit 3, Part 3, Activity 1, the only vocabulary instruction is provided via questions such as, “What details and words suggest the narrator’s perspective?”
- Unit 5, Part 1, Activity 2 asks students to read “Terrorists or Freedom Fighters: What’s the Difference?” Teachers are directed to find this source on the internet; therefore, unfamiliar words are not identified or defined. The only vocabulary instruction for this text is provided by the question, “The author uses the word perception to explain the difference between a terrorist and a freedom fighter. What does he mean by 'perception' and how does this contrast with a 'metaphysical difference'?”
Materials contain a year-long, cohesive plan of writing instruction and practice which supports students in building and communicating substantive understanding of topics and texts.
The materials reviewed for Grade 9 meet the criteria that materials contain a yearlong, cohesive plan of writing instruction and tasks which support students in building and communicating substantive understanding of topics and texts.
Within every unit, students practice writing and speaking from sources. The mode of writing they practice, the process they use, and the independence they are given varies based on the focus of the unit and where the unit is placed in the year. Students use graphic organizers to develop short sentences and paragraphs that communicate their thinking as they read texts. Students write formal paragraphs and short expository essays. Students then break claims into component premises and develop arguments. By the end of the year, students plan, write, and publish thesis-driven academic arguments, making the case for a position related to texts and their content.
The collaboration workshop is a question-based approach for developing writing. Students work through a process that is collaborative, question-based, and criteria-driven. Students are taught to think of essays as a process rather than a product and that conversation, contemplation, consideration, and revision are part of the process.
The following learning principles are used to facilitate student writing development:
- Independence: Students are encouraged to be reflective and develop their own writing process rather than following the writing process in a rote and mechanical way.
- Collaboration: Students are encouraged to seek and use constructive feedback from others.
- Clear Criteria: Criteria are provided to describe the essential characteristics of a desired writing product.
- Guiding Questions: Students are expected to use guiding and text-based questions to promote close reading and the development of their drafts.
- Evidence: Students use and integrate evidence through references, quotations, or paraphrasing.
Each writing activity includes a teacher demonstration lesson and class time is dedicated for students to free write, experiment, draft, revise, and edit their writings. Students engage in discussions surrounding their writings and ask and answer questions about their writing. Students are also provided multiple opportunities to read aloud and share their writings throughout the process to receive feedback. The writing process moves through an increasingly focused sequence of activities, such as getting started, thinking, organization, evidence, connecting ideas, expression, final editing, and publication.
Materials include a progression of focused, shared research and writing projects to encourage students to develop and synthesize knowledge and understanding of a topic using texts and other source materials.
The instructional materials reviewed for Grade 9 meet the criteria that materials include a progression of focused, shared research and writing projects to encourage students to synthesize knowledge and understanding of a topic using texts and other source materials. Grade 9 provides research opportunities throughout the year’s instructional materials. Research skills are built into several contexts and culminating tasks, representing both short and long projects. In preparation for these final tasks, students read and write about texts and participate as both speakers and listeners in class discussions. Units 1, 4, and 5 provide multiple texts that give students access to a variety of sources about a topic. Many resources are available for students and teachers to learn, practice, apply, and transfer skills as they gain proficiency in the skills necessary for research.
Each unit ends with a culminating writing task. Unit 1 asks students to write a text-based explanation. In Units 2 and 3, students write global evidence-based claims (EBC) essays. In Unit 4, students write a reflective research narrative. In Unit 5, students write an evidence-based argumentative essay. These writing assignments, all requiring evidence from text, increase in difficulty as the year progresses. The expectations for student independence also increase as the school year progresses. Specific details of writing tasks include:
- Throughout Unit 1, students read a variety of texts centered around the topic of education in the United States. For example, in Unit 1, Part 1, Activity 4, students analyze the TED Talk “Changing Education Paradigms.” This video helps students view education from a different perspective than they are used to. They learn about education reform and the barriers of traditional education. During this activity, students are conducting mini-research as they watch the video, write about the video in small groups, and analyze the video during a class discussion. This text and the work they accomplished will serve as a resource for the culminating writing task.
- Unit 2 is centered on research skills such as making EBCs and close reading “not simply to report information expected by their teachers” but instead learning to approach texts “with their own authority and the confidence to support their analysis” (126). The primary CCSS for Unit 2 are RI.1 and W.9b—“cite evidence to support analysis of explicit and inferential textual meaning”—both crucial to research. The Learning Progression of Unit 2 supports the progressive development of critical reading and thinking skills in making EBCs and culminates in an evidence-based writing task. In Part 5, Activity 1, students return to the end of Socrates’s speech and do a closer rereading of these lines from the end of the text: “When my sons grow up, punish them by getting in their face as I’ve gotten in yours. If you think they care more about money or anything else than they do about virtue; and if they take themselves to be very important when they aren’t, rebuke them, the way I’ve rebuked you, for not paying attention to what they should and for thinking they’re important when they’re worthless.” From there they consider these lines within the context of the theme “the unexamined life is not worth living.” Activities 2-6 move from framing global EBCs to a class discussion of final EBC essays.
- Unit 3 is focused on the text, “The Short Happy Life of Francis Macomber.” The unit’s activities are designed to prepare students for the culminating writing task, which is a global evidence-based essay on literary technique. For example, in Unit 3, Part 2, Activity 1, “students independently read paragraphs 18 through 106 and use the Supporting EBC Tool to look for evidence to support a claim…” Teachers are provided lesson guidance through the Instructional Notes and are encouraged to model the use of the tools students will use throughout the instructional materials as they conduct research. Students are already beginning to organize the evidence for their writing as they use the Supporting EBC Tool and the Forming EBC Tool. These tools help students find and record evidence that will be used later in the unit’s final writing. This activity is followed by a read aloud and class discussion of the text.
- Unit 4 is completely grounded in research and is based on four components: choosing a topical area of interest to research, conducting a research process, compiling a research portfolio, and communicating a researched perspective. The Parts of Unit 4 are sequenced to facilitate a progression of research skills. Part 1 initiates inquiry with an introductory discussion of research using the question/focus, “Music: What Role Does it Play in Our Lives?” Part 2 is focused on gathering information and teaches students to conduct searches and assess and annotate sources. Resources available to students in Part 2 include the Research Frame Tool and the Taking Notes Tool. Part 3 focuses on deepening understanding, helping students draw personal conclusions about their Area of Investigation. The tools available for students in the Student Edition are the Forming Evidence-Based Claims Research Tool, the Analyzing Details Tool, and the Research Frame Tool. Finally, Part 4 focuses on “Finalizing Inquiry,” where students evaluate research and, in Activity 3, “review and discuss their Research Frames and researched materials to determine relevance, coherence, and sufficiency” (383).
- Unit 5 is a research-based unit where students learn about terrorism. The instructional materials are designed so that students learn that terrorism is “a complex topic with many perspectives and positions - not a simple pro and con arena for debate - which enables the teacher and students to approach and study the issue from many possible angles” (443). Unit 5 consists of five parts that serve as short research-based assignments that build toward the final evidence-based argumentative essay. In Unit 5, Part 2, Activity 3, students, in teams, read and describe arguments and write EBCs. Questioning Path Tools serve as support for students, and Text Notes are provided to support the teacher as he/she helps students become more independent during the research process (481-483).
Materials provide a design, including accountability, for how students will regularly engage in a volume of independent reading either in or outside of class.
The instructional materials reviewed for Grade 9 partially meet the criteria that materials provide a design, including accountability, for how students will regularly engage in a volume of independent reading either inside or outside of class. Students regularly engage in independent reading after the teacher models Academic Habits and processes guided by the materials.
Independent reading, as noted in the evidence, includes opportunities for reading time outside of class and shorter periods of independent reading to provide an initial understanding or a focused analysis of specific literary techniques. Students independently practice Literacy Skills while reading and analyzing texts. This includes a range of text types, from visual-based texts to printed texts of multiple genres. Students do read portions of text independently as close reading activities at various Lexile levels. However, there is no detailed schedule for independent reading to occur, in or outside of class time, only general approximations for specific purposes. The majority of independent reading occurs during class. Student accountability occurs during class discussions, and the materials provide an Academic Habits checklist to support the student and teacher during text-centered discussions. The materials provide Academic Habits checklists for students to self- and peer-assess during academic discussions following independent reading tasks, but the materials do not include direct guidance for students to track their progress and growth as independent readers. At times, the materials leave the option for outside-of-class independent reading to take place, but scheduling and tracking of this is left to the discretion of the teacher.
Evidence that supports this rationale includes, but is not limited to:
- In Unit 1, Part 4, Activity 3, students select one of three texts that they have read independently in a previous lesson to discuss with a small group. Students will then analyze their chosen text independently. Questioning Path Tools provide built-in support as they help students focus on certain aspects of the text to foster understanding and analysis. The instructional materials suggest that this reading can be done as homework or in class, which allows the teacher to appropriately balance both in-class and outside-of-class reading. While the instructional materials provide supports/scaffolds that foster independence, they do not include procedures for independent reading, a proposed schedule for independent reading, or an accountability or tracking system.
- In Unit 3, Part 1, Activity 1, “students independently read paragraphs 18 through 106 and use the Supporting EBC Tool to look for evidence to support a claim made by the teacher.” While the text does not provide procedures for independent reading, it does suggest that students complete this activity for homework to help students build the habit of perseverance in reading. In Unit 3, Part 1, Activity 2, students are provided with a series of guiding questions via the Questioning Path Tool to help guide them through the text. The instructional materials use independent reading throughout this unit and provide guiding questions to help students move from a literal understanding of the text to a deeper analysis; however, the instructional materials do not provide a schedule, an accountability system, or in this unit, any suggested independent reading outside the anchor text.
- In Unit 5, Part 1, Activity 2, “students read and analyze background text to develop an initial understanding of the topic.” While previous units have provided “comprehensive sets of text-dependent questions” to guide them through their reading and analysis, the instructional materials suggest that by this point in the school year “students should have begun to develop independence as readers…and should not require prescriptive scaffolding.” Instead, the instructional materials provide text-dependent questions to help them analyze the elements and reasoning in arguments. Throughout this unit, students will be reading a variety of texts suggested by the instructional materials; since not all students will read the same texts, much of this reading and research will be done independently. While a wide variety of texts at different Lexile levels are provided for student use via Text Sets, the instructional materials do not provide a proposed schedule or an accountability/tracking system for independent reading.
Instructional Supports and Usability Indicators
The materials provide a clear, useful, standards-aligned Teacher Edition, including information to bolster the teacher’s understanding of both the content and pedagogy. Additional information outlines the program’s instructional approaches, philosophy, and the research that undergirds the program.
The materials provide information for students about the program, but no information or protocols are provided for communicating with families about the goals and structure of the program.
Routines and guidance within the program assist teachers in progress monitoring, though the connections between the assessments and the standards they are measuring are not clear. Sufficient guidance is provided for interpreting student performance, though specific strategies or guidance for remediation for students who are not proficient is not offered.
The materials do not outline a consistent plan for holding students accountable for independent reading. Student choice is often limited within the independent reading options.
Digital materials are web-based, compatible with multiple Internet browsers, “platform neutral”; they follow universal programming style and allow the use of tablets and mobile devices.
The included technology enhances student learning, including differentiation for the needs of all learners. The program does not provide technology for collaboration. The materials can be easily customized for local use.
Materials are designed with great consideration for effective lesson structure and pacing and can be completed within a school year, including some flexibility for local academic goals and content. Ample review and practice resources are provided, and all materials are clearly labeled and accompanied by documentation that delineates their alignment to the standards. The design of the materials is minimalistic (orange, black, and white color scheme) and may not be engaging for students.
Materials are well-designed (i.e., allow for ease of readability and are effectively organized for planning) and take into account effective lesson structure (e.g., introduction and lesson objectives, teacher modeling, student practice, closure) and short-term and long-term pacing.
The materials reviewed for Grade 9 meet the criteria that materials are well-designed and take into account effective lesson structure and pacing. Each unit is divided into five parts, and each part is divided into activities. Not only does each part within a unit build in complexity, but the units also become more complex as the year progresses. This intentional design helps students develop necessary skills before advancing to the next activity or unit. Also, by dividing each part into activities, the instructional materials are able to provide a realistic estimated time frame for completion.
In Unit 1, the instructional materials provide an overview of the activities for Part 1:
- Introduction to the Unit
- Attending to Details
- Reading Closely for Details
- Attending to Details in Multimedia
- Independent Reading and Research
This lesson structure moves students from a teacher-directed introduction and guided analysis of text to an independent reading and research activity. The materials suggest that Unit 1, Part 1 should take three to four days to complete.
In Unit 3, Part 3, the materials outline the following activities:
- Independent Reading and Forming Evidence-Based Claims (EBCs)
- Comparing EBCs
- Model the Organizing of EBCs
- Deepening Understanding
- Organizing EBCs in Pairs
- Class Discussion of Student EBCs
This lesson structure moves students through the process of developing and explaining EBCs by providing opportunities for independent reading with the support of teacher modeling and a cooperative feedback process. Unit 3, Part 3 should take two to three days to complete.
In Unit 5, the materials provide an overview of the activities for Part 3:
- Evaluating Arguments
- Developing a Perspective and Position
- Deepening Understanding
- Using Others’ Arguments to Support a Position
- Responding to Opposing Arguments
This lesson structure is designed to help students through the process of evaluating arguments and synthesizing information to establish their own positions, which is a vital step in the research process as students prepare to write an evidence-based argumentative essay.
The teacher and student can reasonably complete the content within a regular school year, and the pacing allows for maximum student understanding.
The materials reviewed for Grade 9 meet the criteria that the teacher and student can reasonably complete the content within a regular school year, and the pacing allows for maximum student understanding. The materials provide effective guidance and flexibility for teachers to address all the content and supplement with local academic goals and curricula. The materials address intertwined essential skills delineated in five units. Each unit focuses on a Core Proficiency for literacy that builds skills applicable beyond the English language arts classroom. The materials are vertically aligned, consistently addressing the same Core Proficiencies in five units in each subsequent grade.
Evidence that supports this rationale includes, but is not limited to:
- The materials consist of five units focused on four essential proficiencies that are designed to intertwine the building of knowledge. Each unit delineates standards-aligned Academic Habits into five parts with a varying number of activities that range from 1 to 3 instructional days, as determined by the teacher.
- The materials recursively focus on 20 essential Literacy Skills and 12 Academic Habits applied to text-centered analysis tasks in order to maximize student understanding of skills. Tasks include reading, writing, speaking, and listening.
- The materials bundle multiple standards and literacy skills into four Core Proficiencies. Each unit focuses on a different proficiency for students to master. The Core Proficiencies include: Reading Closely for Textual Details, Making Evidence-Based Claims, Researching to Deepen Understanding, and Building Evidence-Based Arguments.
- The materials provide guidance for structuring yearlong instruction and supplementing with local curricular content based on students’ needs as determined by the teacher.
- The materials are vertically aligned, following the same formula and addressing the same Core Proficiencies from grade to grade with increasingly complex texts and opportunities for independent work.
The student resources include ample review and practice resources, clear directions and explanations, and correct labeling of reference aids (e.g., visuals, maps, etc.).
The materials reviewed for Grade 9 meet the criteria that the student resources include ample review and practice resources, clear directions and explanations, and correct labeling of reference aids (e.g., visuals, maps, etc.). Student materials at Grade 9 include a variety of tools for students to practice the targeted skills in the instructional materials. The Reading Closely for Textual Details Literacy Toolbox includes, but is not limited to, the following handouts: Reading Closely Graphic, Guiding Questions Handout, Attending to Details Handout, and Reading Closely Final Writing and Discussion Task Handout. In addition to the handouts, students are provided with a variety of tools to practice targeted Core Literacy Proficiency Skills, such as the Approaching the Text Tool, Analyzing Details Tool, Questioning Path Tool, and Model Questioning Path Tools. Checklists are provided to support peer- and self-review. Texts are included in the Student Edition, and Additional Resources in the Topic Area are included in the Student Edition with guidance regarding where to locate online resources. Images are labeled appropriately.
Evidence that supports this rationale includes, but is not limited to:
- In Unit 1, eight texts are provided in the Student Edition as well as Extended Reading opportunities including Lectures and Biographical Sketches by Ralph Waldo Emerson and Education and National Welfare by Horace Mann. These texts are located prior to Part 1 in the Student Edition. Text 1 consists of two images: Classroom Pictures, 1950s and 2012. Each image is printed with a label on the right to differentiate the 1950s classroom image from the 2012 classroom image.
- In Unit 4, Additional Resources in the Topic Area are included in the Student Edition prior to the Literacy Toolbox, including Music on the Web and Music and Therapy. Guidance is provided for students to access these resources through the appropriate website. For example, “Music is medicine, music is sanity,” Robert Gupta, TED Talk, February 2010. Available through the Ted.com website.
- In Unit 5, Part 2, Analyzing Arguments, students are provided with Questioning Path Tools to assist them in approaching the text. Clear instructions are included directly on the Questioning Path Tool, including the following: “I determine my reading purposes and take note of key information about the text. I identify the LIPS domain(s) that will guide my initial reading.” Prompts are provided on the side to remind students to identify Purpose, Key Information, and LIPS domain(s).
Materials include publisher-produced alignment documentation of the standards addressed by specific questions, tasks, and assessment items.
The materials reviewed for Grade 9 meet the criteria that materials include publisher-produced alignment documentation of the standards addressed by specific questions, tasks, and assessment items. The materials include publisher-produced alignment documentation of both primary and supporting standards at the following levels: year, unit, and part. Both the Reading Closely: Guiding Questions Handout and the Questioning Path Tools, which are used extensively throughout the instructional materials, are aligned to specific reading and writing standards.
Evidence that supports this rationale includes, but is not limited to:
- In the Developing Core Literacy Proficiencies: User Guide, the materials provide an Alignment of Targeted CCSS with OE Skills and Habits chart. This chart provides the CCSS Anchor Standards and the aligned Literacy Skills and Academic Habits.
- For each Unit, the materials provide the CCSS alignment and divide the standards into primary targeted skills and related reading and writing skills from supporting CCSS; in addition, the instructional materials provide the targeted and supporting standards for each part of each unit.
- Throughout the materials, students use the Reading Closely: Guiding Questions Handout. This handout organizes questions into four areas: Language, Ideas, Perspective, and Structure. The Language questions address Common Core State Standards R.4, L.3, L.4, and L.5. The Ideas questions address Common Core State Standards R.2, R.3, R.8, and R.9. The Perspective questions address Common Core State Standard R.6. The Structure questions address Common Core State Standard R.5.
The visual design (whether in print or digital) is not distracting or chaotic, but supports students in engaging thoughtfully with the subject.
The materials reviewed for Grade 9 partially meet the criteria that the visual design (whether in print or digital) is not distracting or chaotic, but supports students in engaging thoughtfully with the subject. The visual design, while not distracting or chaotic, does not help students engage with the subject. Materials are printed in black and white with orange headings, very few graphics or pictures are provided, and the graphic organizers do not allow much room for student response. There is no color-coding to help convey structure and speed up visual searching. The materials are not visually engaging.
Evidence that supports this rationale includes, but is not limited to:
- In the Unit 1 materials, the only visuals provided serve as Text 1. These consist of two pictures of classrooms; one is from the 1950s, and the other is from 2012. Both are in black and white. In Unit 1, Part 4, Activity 3, the Questioning Path Tool for Text 9 provides eight questions with subquestions, but does not provide any room for students to record notes/answers.
- In the Unit 3 materials, no visuals are provided. Many tools are provided in this unit including the Forming Evidence-Based Claims Tool and the Organizing Evidence-Based Claims Tool. These graphic organizers, which are designed to help students prepare for writing, do not provide adequate space for students to record evidence, details, or reflections.
- In the Unit 5 materials, no visuals are provided. All texts are accessible via the Internet. In Unit 5, Part 2, Activity 3, the Questioning Path Tool for text 4.1 provides seven questions, but does not provide any room for students to record notes/answers.
Materials support teacher learning and understanding of the Standards.
The materials provide a Teacher Edition with strong support, clear guidance, and abundant useful instructional notes. Advanced literary concepts are supported with additional information to bolster the teacher’s understanding of both the content and the pedagogy. The standards alignment within the materials is clearly delineated within unit overviews. Materials meet the criteria that materials contain a teacher’s edition that explains the role of the specific ELA/literacy standards in the context of the overall curriculum. The instructional approaches and program philosophy are described within the materials, as is the program’s focus on research-based strategies. The materials provide information for students about the program, but there are neither instructions nor protocols for communicating with families about the goals and structure of the program.
Materials contain a teacher's edition with ample and useful annotations and suggestions on how to present the content in the student edition and in the ancillary materials. Where applicable, materials include teacher guidance for the use of embedded technology to support and enhance student learning.
The materials reviewed for Grade 9 meet the criteria that materials contain a teacher’s edition with ample and useful annotations and suggestions on how to present the content in the student edition and in the ancillary materials. Where applicable, materials include teacher guidance for the use of embedded technology to support and enhance student learning. Because of the tool-based organization, the teacher’s edition includes ample and useful instructional notes which offer suggestions on how to present the content in the student edition and in the ancillary materials. Also included is teacher guidance for the places where technology is used to support and enhance student learning.
The teacher’s edition begins with a User Guide for Developing Core Literacy Proficiencies that spells out a proficiency-based approach to developing literacy. It also lays out the Literacy Skills and Academic Habits that will be referred to in the student edition and the language used throughout the program. It specifically refers to the Literacy Toolbox, which is made up of three types of materials: handouts, tools, and checklists/rubrics; these materials make up most of the student edition. At the end of the User Guide is a section entitled “Media Supports” which specifically addresses multimedia to support teaching and learning.
In each of the 5 units, there are specific “Instructional Notes” that give teachers guidance and refer directly to the materials in the student edition. For example, in Unit 1, Part 1, Activity 1, the Instructional Notes explain how to introduce the Reading Closely Graphic and Guiding Questions Handout in the student edition. Instructional Notes also help to differentiate between students’ experience levels and provide for students who may be more sophisticated in their skill sets. For example, in Unit 3, Part 3, Activity 3, there is an additional set of questions to pose so students can think more deeply about the claims they are developing. Finally, Instructional Notes give specific instructions on how to use the materials within the Student Edition. In Unit 4, Part 5, Activity 1, the Taking Notes Tool, Forming EBC Research Tools, and Organizing EBC Research Tools contained in the student edition are explained as ways to arrive at and develop the evidence-based perspective, as well as help tell a story about the research process.
Materials contain a teacher's edition that contains full, adult-level explanations and examples of the more advanced literacy concepts so that teachers can improve their own knowledge of the subject, as necessary.
The materials reviewed for Grade 9 meet the criteria that materials contain a teacher’s edition that contains full, adult-level explanations and examples of the more advanced literacy concepts so that teachers can improve their own knowledge of the subject, as necessary. Teacher editions provide adequate guidance for preparing each unit of study in a year-long course. The materials provide clear and multiple examples and explanations to support a teacher’s understanding of the texts and literacy skills for effective modeling to occur during class time.
Evidence that supports this rationale includes, but is not limited to:
- Teacher editions of rubrics and Academic Habits include guidance to use as classroom formative assessments.
- The Literacy Toolbox includes teacher and student editions. Teacher editions are accompanied by more details and examples for teachers to use during instruction, helping them know what to look for when observing student discussions for formative assessment.
- Each unit includes extensive preparatory details for the teacher to schedule instruction with suggestions for differentiation and optional tasks.
- Units include extensive Text Notes to support teachers to deliver instruction in a coherent and consistent approach. Text Notes include details about the content and examples for the teacher to use when modeling skills or for teachers to observe students.
- Teacher editions include guidance and justification for the text choices of the materials. For example, justifications note why a particular work is an ideal introduction to Core Proficiencies such as Making Evidence-Based Claims and pinpoint text-specific examples for teachers to understand and acknowledge when modeling this skill. In addition, the materials provide an explanation justifying a companion text choice and why it is appropriately sequenced.
Materials contain a teacher's edition that explains the role of the specific ELA/literacy standards in the context of the overall curriculum.
The materials reviewed for Grade 9 meet the criteria that materials contain a teacher’s edition that explains the role of the specific ELA/literacy standards in the context of the overall curriculum. The teacher’s edition includes a Developing Core Literacy Proficiencies: User Guide which includes a table listing the anchor Common Core State Standards that are targeted throughout Grade 9. The instructional materials also include a Unit Overview for each unit, including an explanation of the learning progression. In addition, a Common Core State Standards Alignment is included in the teacher’s edition in the Unit Overview for each unit and the description is specific to the instructional focus of the unit.
Evidence that supports this rationale includes, but is not limited to:
- The Developing Core Literacy Proficiencies: User Guide includes the following guidance for the teacher, “The following table lists the anchor Common Core State Standards that are targeted within the five Developing Core Literacy Proficiencies units and indicates the Literacy Skills and Academic Habits that are derived from or are components of those standards. This chart can be used to walk backward from the OE [Odell Education] criteria used in assessments and rubrics to the CCSS, especially if students are also trying to track student performance specific to the standards.” Specifically, R.1 - R.10, W.1 - W.9, and SL.1 are included in the table with aligned Literacy Skills and Academic Habits.
- In Unit 2, the Unit Overview includes the Learning Progression for the unit activities which are organized into five parts. The teacher’s edition states, “The sequence of learning activities supports the progressive development of the critical reading and thinking skills involved in making evidence-based claims.”
- In Unit 4, Part 1, the teacher’s edition includes Alignment to CCSS that lists targeted standards and supporting standards specific to the instructional focus of the unit. For example, a targeted standard is in relation to “CCSS.ELA-Literacy.W.9-10.7: Conduct short as well as more sustained research projects to answer a question…” and a supporting standard is as follows: “CCSS.ELA-Literacy.W.9-10.4: Produce clear and coherent writing in which development, organization, and style are appropriate to task, purpose, and audience.”
Materials contain explanations of the instructional approaches of the program and identification of the research-based strategies.
The materials reviewed for Grade 9 meet the criteria that materials contain explanations of the instructional approaches of the program and identification of the research-based strategies. The Grade 9 materials contain a clear explanation of the instructional approaches and philosophy of the program and a clear identification of and focus on research-based strategies.
Evidence that supports this rationale includes, but is not limited to:
- Each of the instructional materials begins with the Developing Core Literacy Proficiencies User Guide, which breaks down the Proficiency-Based Approach to Developing Literacy into five units:
- Reading Closely for Textual Details
- Making Evidence-Based Claims
- Making Evidence-Based Claims about Literary Technique
- Researching to Deepen Understanding
- Building Evidence-Based Arguments
Also included is a list of Literacy Skills and Academic Habits, in both teacher and student versions. As another component of the User Guide, it is explained that at the heart of the Odell Education approach is an iterative process for questioning texts; the guide lays out the essential tools for this process:
- Reading Closely Graphic
- Guiding Questions Handout
- Questioning Path Tool
- Approaching the Text Tool
- Analyzing Details Tool
- Forming Evidence-Based Claims Tool
Research-based strategies are aligned with CCSS W.7--”Conduct short as well as more sustained research projects based on focused questions, demonstrating understanding of the subject under investigation”; W.8--”Gather relevant information from multiple print and digital sources, assess the credibility and accuracy of each source, and integrate the information while avoiding plagiarism” and W.9--”Draw evidence from literary or informational texts to support analysis, reflection, and research.”
Materials contain strategies for informing all stakeholders, including students, parents, or caregivers about the ELA/literacy program and suggestions for how they can help support student progress and achievement.
The materials reviewed for Grade 9 partially meet the criteria that materials contain strategies for informing all stakeholders, including students, parents, or caregivers about the ELA/literacy program and suggestions for how they can help support student progress and achievement.
While the instructional materials contain strategies for informing students about the ELA/literacy program, there is no evidence that this program is shared with stakeholders, nor are there any suggestions included as to how parents or caregivers can support their student’s progress and/or achievement.
Within the Grade 9 instructional materials, there are checklists and rubrics that give feedback to both teachers and students. Evidence that supports this rationale includes, but is not limited to:
- In Unit 1, Part 4, Activity 4, students can use an informal skills-based checklist to self- and peer-assess the literacy skills of Attending to Details, Summarizing, Identifying Relationships, Recognizing Perspective, and Using Evidence. Another checklist is found at the end of Unit 2 that is broken down into Reading Skills, Thinking Skills, Writing Skills, and Essay Content. It ranges from Emerging (Needs Improvement) to Excelling (Exceeds Expectations) and leaves room for comments by the teacher as to the strengths and areas of growth observed in the work, as well as areas for improvement in future work. However, while there are many checklists included for student reflection and teacher feedback, there are no strategies for including other stakeholders.
Materials offer teachers resources and tools to collect ongoing data about student progress on the Standards.
Materials partially meet the criteria for 3K to 3n. Routines and guidance within the program assist teachers in monitoring student progress. Regular opportunities to assess student progress are included within the materials; however, the assessments do not make strong connections between what is being assessed and the standards emphasized in that assessment. Sufficient guidance is provided to support teachers in interpreting student performance, though specific strategies or guidance for remediating students who are not proficient are not offered. The materials do not outline a consistent plan for holding students accountable for independent reading, and student choice is often not an option for the independent reading that is required, though where choice does exist the materials hold students accountable for their selections in order to build stamina and confidence.
Materials regularly and systematically offer assessment opportunities that genuinely measure student progress.
The materials reviewed for Grade 9 meet the criteria that materials regularly and systematically offer assessment opportunities that genuinely measure student progress. Throughout the instructional materials, both formative and summative assessments are used to measure student progress. Formative assessments are intentionally placed at the beginning of each unit so that teachers can ensure that students are prepared for the activities leading up to the culminating writing activity.
Each unit consists of five parts; each part ends with either a formative assessment or a summative assessment. Formative assessments consist of work samples including Approaching Text Tools, Analyzing Details Tools, annotations of texts, answers for Questioning Path Tools, written explanations of text analysis, and group/class discussions. Formative Assessments can also include completed Forming Evidence-Based Claims (EBCs) Tools, Supporting EBCs Tools, and Organizing EBCs Tools. Summative Assessments are more formal and consist of multi-paragraph rough drafts and culminating writing tasks.
The purpose and use of each assessment are clear.
Assessments clearly denote which standards are being emphasized.
The materials reviewed for Grade 9 do not meet the criteria that assessments clearly denote which standards are being emphasized. While the instructional materials do make connections between the assessments and the development of Academic Habits/Literacy Skills, such as Attending to Details and Communicating Clearly, and provide checklists for students to use to self-assess these habits and skills, the assessments do not clearly denote which standards are being emphasized. The instructional materials provide alignment for the year, unit, and part, but do not provide alignment at the activity or assessment level.
Evidence that supports this rationale includes, but is not limited to:
- Each unit is divided into five parts and each part has either a formative or summative assessment. The instructional materials do provide targeted and supported standards for each part, but alignment is not clearly provided for assessments. It is not possible to easily determine which standards apply to each part of an assessment.
- Only the Questioning Path Tools, which can be used as formative assessments, are aligned to specific reading and writing standards, but the instructional materials do not identify which standards are aligned to which questions.
Assessments provide sufficient guidance to teachers for interpreting student performance and suggestions for follow-up.
The materials reviewed for Grade 9 partially meet the criteria that assessments provide sufficient guidance to teachers for interpreting student performance and suggestions for follow up. Students are assessed often, via formative and summative assessments, and teachers are provided many tools, such as unit-specific rubrics, to help them interpret student performance; however, the instructional materials do not provide strategies or suggestions for how to remediate students who did not master the skills/habits.
Throughout the instructional materials, unit-specific rubrics are provided as tools to assess Literacy Skills and Academic Habits. Each rubric uses a four-point scale to help teachers and students identify areas of strength, weakness, and growth. Teachers are prompted to consider evidence of the skills/habits and rate accordingly. This system of rubrics allows teachers to compare student performance as the year progresses. The instructional materials do not provide follow-up suggestions for students who do not master the skills/habits.
Materials should include routines and guidance that point out opportunities to monitor student progress.
The materials reviewed for Grade 9 meet the criteria that materials should include routines and guidance that point out opportunities to monitor student progress. There are routines and guidance in place throughout grade 9, as well as the 9-12 curriculum, which allow for opportunities to monitor student progress.
Each grade level is divided into five units:
- Unit 1--Reading Closely for Textual Details
- Unit 2--Making Evidence Based Claims
- Unit 3--Making Evidence-Based Claims about Literary Technique
- Unit 4--Researching to Deepen Understanding
- Unit 5--Building Evidence-Based Arguments
Each part within each unit culminates in a formative assessment opportunity and Part 5 in a summative assessment opportunity, embedding many opportunities within each unit to monitor student progress. Beyond these assessment opportunities are tools, such as the Questioning Path Tool as one example, that allow teachers to guide and monitor students’ progress.
Materials indicate how students are accountable for independent reading based on student choice and interest to build stamina, confidence, and motivation.
The materials reviewed for Grade 9 partially meet the criteria that materials indicate how students are accountable for independent reading based on student choice and interest to build stamina, confidence, and motivation. There is very little student choice in the Grade 9 instructional materials for independent reading. In the few occasions where there is choice, materials do hold students accountable for their selections and may contribute to their stamina and confidence.
Student independent reading choice is built into only Unit 4 and Unit 5. Unit 4 explores Music and the Role it Plays in Our Lives, and Unit 5 has students reflect on what is meant by terrorism. Within each unit is a common source set, and while students read many of the same texts as their peers, there is some choice, depending on the inquiry path they wish to follow. Within the student edition, there are many materials that hold students accountable for this reading--the Exploring a Topic Tool, Potential Sources Tool, Taking Notes Tool, Research Frame Tool, and Research Evaluation Tool. Since Unit 5 is focused on Building Evidence-Based Arguments, the tools to hold students accountable include the Questioning Path Tool, Forming Evidence-Based Claims Tool, Organizing Evidence-Based Claims Tool, Delineating Arguments Tool, and Evaluating Arguments Tool. These tools can support students in building the notes and skills necessary to write the summative assessments at the end of each unit.
Materials provide teachers with strategies for meeting the needs of a range of learners so that they demonstrate independent ability with grade-level standards.
Materials offer teachers the ability to personalize the materials for all learners. The program provides the opportunity for all learners to work within grade-level text, including those whose skills may be above or below grade level, or whose English proficiencies may provide additional challenges as they engage with the content. All students have extensive opportunities to read, write, speak, and listen to grade level text and meet or exceed grade level standards. Lessons provide whole class, small group, and independent learning opportunities throughout the school year.
Materials provide teachers with strategies for meeting the needs of a range of learners so the content is accessible to all learners and supports them in meeting or exceeding the grade-level standards.
The materials reviewed for Grade 9 meet the criteria that materials provide teachers with strategies for meeting the needs of a range of learners so the content is accessible to all learners and supports them in meeting or exceeding the grade-level standards. Teachers determine whether students need increased scaffolding and time, or less. Differentiation support is integrated into the scaffolding and design of the instructional materials. At times, teachers are reminded to determine whether students need more or less time to develop a Core Proficiency. Most units include supplemental texts, which teachers can use to give students additional opportunities to develop skills; these supplemental texts are categorized as "Extended Reading." In addition, the materials claim to be designed so schools can use local curricular materials. This flexibility allows teachers to determine the text complexity appropriate for students.
Evidence that supports this rationale includes, but is not limited to:
- Instructional supports for English Language Learners and students reading below grade level are integrated and scaffolded into the explicit instructions for each activity. Each activity follows a progression moving from scaffolding and support to independent application.
- The sequence of instruction and supporting tools are the same for all students. However, the materials note that the tools and activities can be applied to alternative or supplemental texts not included in the materials.
- In order to help students understand the content, the materials suggest making analogies or allotting more time to tasks. For example, the materials suggest comparing the process of close reading to the analytical processes used by experts in other fields, such as scientists and detectives. The materials also suggest that teachers skip the Introductory Analogy if students are sufficiently familiar with the close reading skill.
- “Extended Reading” refers to supplemental, optional texts teachers can incorporate if students need more opportunities to develop literacy skills.
- Text choices are bundled in order to effectively increase in complexity over the course of a unit. In each unit, the first text is a visual and is followed by a text with a Lexile measurement below grade level to allow access for all students. By the end of the unit, students are reading texts at or above grade level independently and in small groups. The small group discussions intend for students to self- and peer-assess understanding.
Materials regularly provide all students, including those who read, write, speak, or listen below grade level, or in a language other than English, with extensive opportunities to work with grade level text and meet or exceed grade-level standards.
The materials reviewed for Grade 9 meet the criteria that materials regularly provide all students, including those who read, write, speak, or listen below grade level, or in a language other than English, with extensive opportunities to work with grade level text and meet or exceed grade-level standards. By design, the materials provide all students with the opportunity to interact with grade-level texts. The materials allow for teachers to determine when to incorporate texts above grade level. In units where students engage with multiple texts, the materials do not require all students to read every text. The materials provide suggestions for organizing small groups to support English Language Learners and students reading below grade level.
Evidence that supports this rationale includes, but is not limited to:
- The materials include a section dedicated to helping teachers understand the support structures integrated in the sequence of activities. This section describes the seven routines designed to support all students, including English Learners and below-grade-level readers. Following this progression, according to the materials, provides all students with the opportunity to interact with texts at grade-level complexity. The seven supports are as follows:
- Intentional Unit Design and Instructional Sequence
- Short Texts, Focused Reading
- Read-Alouds and Modeling
- Guiding Question Framework
- Graphic Organizers
- Reading Teams
- Academic Vocabulary
- The Unit Design and Instructional Sequence includes visual texts for students to practice Core Proficiency skills before transferring the skill to grade-level printed texts.
- When presented with a series of texts or common source sets of multiple texts to analyze, the materials state that students should not be required to read all texts. This allows the teacher to provide text choices at a student's current reading level. Additionally, the activity includes a small group discussion and suggests that students be grouped by reading level and assigned texts at their current level.
Materials regularly include extensions and/or more advanced opportunities for students who read, write, speak, or listen above grade level.
The materials reviewed for Grade 9 meet the criteria that materials regularly include extensions and/or more advanced opportunities for students who read, write, speak, or listen above grade level. Materials contain integrated suggestions, Extended Readings, and optional activities to extend learning. The mix of activities offered allows advanced students to explore additional or more complex texts while practicing the Core Proficiency skills at greater depth.
Evidence that supports this rationale includes, but is not limited to:
- The materials suggest teachers consider the needs and background experiences of students before beginning a unit of study. Specifically, if a student has “advanced skills” or “extensive previous experience,” the teacher can expect the instruction to “move more rapidly.”
- For advanced students, the materials also suggest teachers concentrate time on engaging students with the Extended Reading texts provided in some units and “emphasize more complex topics.”
- The materials are vertically aligned and utilize the same lists, handouts, and rubrics provided in the Literacy Toolbox. For advanced students and students with previous experience, the materials recognize they will rely less on the Literacy Toolbox supports and are encouraged to “use their own, developing strategies” for analyzing texts.
- At times, the materials present optional assessment opportunities for teachers to collect evidence and for students to demonstrate understanding. In Unit 1, Part 5, the Summative Assessment Opportunities section offers an optional collection of evidence through a writing task, and the materials provide multiple pathways to accomplish the writing. This is done as a supplement to the summative discussion activity. Due to the intentional vertically aligned design of the materials, this option is presented in every grade level.
Materials provide opportunities for teachers to use a variety of grouping strategies.
The materials reviewed for Grade 9 meet the criteria that materials provide opportunities for teachers to use a variety of grouping strategies. The materials are designed with collaboration as an essential academic habit. Students are provided regular opportunities to work as a class, in pairs, and in small groups. In each variation, students develop literacy skills by completing a Literacy Toolbox resource, analyzing text, and collaborating on writing.
Evidence that supports this rationale includes, but is not limited to:
- In Unit 3, Part 1, Activity 4, after the teacher models the formation of an evidence-based claim (EBC), students practice the skill in pairs with the support of the Literacy Toolbox resources.
- In Unit 4, Part 2, Activity 1, after the teacher models and develops an Inquiry Question and pathway, students work in small groups to develop 2 to 3 pathways.
- In Unit 5, Part 3, Activity 1, students work in “reading teams” to apply the material’s eight criteria from the Evaluating Arguments Tool to objectively rate an argument.
Materials support effective use of technology to enhance student learning. Digital materials are accessible and available in multiple platforms.
Digital materials are web-based, compatible with multiple Internet browsers, “platform neutral”; they follow universal programming style and allow the use of tablets and mobile devices.
The materials support effective use of technology to enhance student learning, drawing attention to evidence and texts as appropriate. There are multiple opportunities for teachers to differentiate instructional materials for a range of student needs, including supports before, during, and after each selection. The materials can be easily customized for local use. The program does not provide technology for collaboration.
Digital materials (either included as supplementary to a textbook or as part of a digital curriculum) are web-based, compatible with multiple Internet browsers (e.g., Internet Explorer, Firefox, Google Chrome, etc.), "platform neutral" (i.e., are compatible with multiple operating systems such as Windows and Apple and are not proprietary to any single platform), follow universal programming style, and allow the use of tablets and mobile devices. This qualifies as substitution and augmentation as defined by the SAMR model. Materials can be easily integrated into existing learning management systems.
The materials reviewed for Grade 9 include digital materials (either included as supplementary to a textbook or as part of a digital curriculum) that are web-based, compatible with multiple internet browsers (e.g., Internet Explorer, Firefox, Google Chrome, etc.), “platform neutral” (i.e., Windows and Apple and are not proprietary to any single platform), follow universal programming style, and allow the use of tablets and mobile devices.
The instructional materials provide many of the texts in print format, and these are included in the teacher's edition and student's edition. Handouts included in the Literacy Toolbox can be accessed online, and additional copies can be printed for the purpose of annotation. The Developing Core Literacy Proficiencies: User Guide preceding Unit 1 in the Grade 9 materials provides additional guidance for teachers in relation to Electronic Supports and Versions of Materials. For example, "The Odell Education Literacy Toolbox files, including handouts, tools and checklists, are available as editable PDF forms. With the free version of Adobe Reader, students and teachers are able to type in the forms and save their work for recording and emailing." The resources can be located using a website and password provided in the instructional materials.
There are texts utilized in the instructional materials that are accessible online only. The instructional materials state, “Because of the ever-changing nature of website addresses, specific links are not provided. Teachers and students can locate these texts using provided key words (e.g., article titles, authors, and publishers).” The online texts are available for free access using the resource information provided by the publisher. Examples include, but are not limited to:
- In Unit 1, a table labeled, Reading Closely Media Supports, includes a multimedia time line published by PBS entitled, "Only a Teacher—Teaching Timeline."
- In Unit 4, Additional Resources in the Topic Area are listed, including a TED Talk by Benjamin Zander entitled, “The transformative power of classical music”.
- In Unit 5, Building Evidence-Based Arguments Unit Texts, a table lists all the five Text Sets included in the unit and the instructional materials state, “The unit uses texts that are accessible for free on the Internet without any login information, membership requirements, or purchase.”
Materials support effective use of technology to enhance student learning, drawing attention to evidence and texts as appropriate and providing opportunities for modification and redefinition as defined by the SAMR model.
The materials reviewed for Grade 9 support effective use of technology to enhance student learning, drawing attention to evidence and texts as appropriate.
Many texts are accessible online to build background knowledge and can be used to supplement the anchor texts. Teachers have the opportunity to use audio versions of texts, available online and in print format, for students to follow along with the text. The PDF versions of handouts and graphic organizers are editable and provided by Odell Education; therefore, students can type directly on the handouts, and these can be submitted electronically to the teacher. Text Sets include a variety of options beyond print, such as videos, audio recordings, images, and timelines. Teachers could choose to assign independent reading and annotations at home due to the accessibility through both the publisher website (with a password) and the free resources available online. Key words are provided when web addresses are not, to assist teachers and students in locating the resources. Examples include, but are not limited to:
- In Unit 2, Plato’s Apology is available in audiobook format via YouTube and is included in the Making Evidence-Based Claims Media Supports.
- In Unit 4, an additional resource students can access online is a TED Talk available through the Ted.com website entitled, “Music is medicine, music is sanity” by Robert Gupta.
- In Unit 5, the facts listed in the Building Evidence-Based Arguments Unit Texts table provide enough information to access the correct argument online, “Terrorism Can Only Be Defeated by Education, Tony Blair Tells the UN,” published 11/22/2013 by UN News (news article and video).
Materials can be easily customized for individual learners.
Digital materials include opportunities for teachers to personalize learning for all students, using adaptive or other technological innovations.
The materials reviewed for Grade 9 include digital materials that offer opportunities for teachers to personalize learning for all students, using adaptive or other technological innovations. The instructional materials include a criteria-based assessment system throughout the five units included in Grade 9.
Students utilize handouts and graphic organizers to practice and demonstrate proficiency relating to targeted skills. The graphic organizers and tools can be used as a formative assessment by the teacher and completed digitally by students using the editable PDFs provided by Odell Education. Student annotation and submission for evaluation can take place electronically. The graphic organizers are included as an instructional tool to support English language learners and students reading below grade level: "Visually, the tools help students understand the relationships among concepts, processes, and observations they make from texts." In addition, Media Supports are included in the instructional materials: "The various media (i.e., videos, audio, images, websites) can be assigned and explored at the student or group level to differentiate experiences for students based on their interests and abilities." Students who require more challenging texts have the opportunity to explore topics using texts at higher levels of complexity. Examples include, but are not limited to:
- In Unit 1, students utilize an Approaching Texts Tool that teachers can use to gauge students’ ability to create guiding questions for the first reading of the text and create text-specific questions to help focus the rereading of the text; the tool can be printed and handwritten or completed digitally using an editable PDF.
- In Unit 2, Media Supports include an ebook of Plato’s Apology published by Project Gutenberg that can be accessed using an electronic device.
- In Unit 4, Common Source Sets offer a variety of complexity levels from which teachers may choose for exploration by students. In Unit 4, Part 1, Activity 3, “This Common Source should be accessible to students, but it also should provide some additional reading challenges, often by referencing technical information or terminology.”
Materials can be easily customized by schools, systems, and states for local use.
The materials reviewed for Grade 9 can be easily customized for local use. The online resources available allow teachers the opportunity to print additional copies for annotation and offer editable PDFs for students to use and submit their work electronically. Teachers have the choice of which texts they would like to use as model texts when presented with Common Source Sets, such as in Unit 4. Also, teachers can differentiate for students and choose specific texts in the Common Source Sets that individual students or small groups will read together. Additional resources are available to allow for further exploration and to allow an opportunity to increase the level of complexity for students who need an additional challenge. The tools provided offer a method for formative assessment, and teachers can make decisions regarding future units based on student performance. The following Instructional Notes are an example of guidance to the teachers:
- Teachers can use these Common Sources as a model in several ways, depending on the classroom context and emerging student interests.
- Select a single source for modeling that matches with the direction for investigation that the class is likely to pursue. All students read and work with this single Common Source.
- Use one source for modeling and a second for guided practice. All students read both sources, working with one as a class and the other in small groups.
- Use all three sources (and additional ones if helpful), grouping students by possible topic interests and modeling and practicing within groups.
- Find other, similar Common Source(s) related to the topic and subtopics the class is examining.
Materials include or reference technology that provides opportunities for teachers and/or students to collaborate with each other (e.g. websites, discussion groups, webinars, etc.)
The materials reviewed for Grade 9 do not include or reference technology that provides opportunities for teachers and/or students to collaborate with each other (e.g., websites, discussion groups, webinars, etc.). While students are encouraged to collaborate with one another throughout the five units in a face-to-face format, there are no opportunities for students to create group projects or peer-assess each other’s work virtually. Teachers would need to seek out these opportunities when planning lessons outside of the tools offered in the instructional materials. The materials offer Professional Development to educators on the website: "Odell Education (OE) collaborates with districts and schools that are implementing the Core Literacy Proficiencies Program. OE works with educators on the foundational principles of the instruction, as well as the integration of the units into their curriculum and the use of the materials in their classrooms." However, opportunities for teachers to engage online with their colleagues are not present on the website. |
In physics, tension describes the pulling force exerted by each end of a string, cable, chain, or similar one-dimensional continuous object, or by each end of a rod, truss member, or similar three-dimensional object. At the atomic level, tension is produced when atoms or molecules are pulled apart from each other and gain electromagnetic potential energy. Each end of a string or rod under tension will pull on the object it is attached to, in order to restore the string or rod to its relaxed length.
Tension is the opposite of compression. Although not physics terms, slackening and tensioning are used when talking about fencing, for example.
In physics, although tension is not a force, it does have the units of force and can be measured in newtons (or sometimes pounds-force). The ends of a string or other object under tension will exert forces on the objects to which the string or rod is connected, in the direction of the string at the point of attachment. These forces due to tension are often called "tension forces." There are two basic possibilities for systems of objects held by strings: either acceleration is zero and the system is therefore in equilibrium, or there is acceleration and therefore a net force is present.
Tension in one-dimensional continua
Tension in a string is a non-negative scalar. Zero tension is slack. A string or rope is often idealized as one dimension, having length but being massless with zero cross section. If there are no bends in the string (as occur with vibrations or pulleys), then tension is a constant along the string, equal to the magnitude of the forces applied by the ends of the string. By Newton's Third Law, these are the same forces exerted on the ends of the string by the objects to which the ends are attached. If the string curves around one or more pulleys, it will still have constant tension along its length in the idealized situation that the pulleys are massless and frictionless. A vibrating string vibrates with a set of frequencies that depend on the string's tension. These frequencies can be derived from Newton's Laws. Each microscopic segment of the string pulls on and is pulled upon by its neighboring segments, with a force equal to the tension T(x) at that position x along the string.
If the string has curvature, then the two pulls on a segment by its two neighbors will not add to zero, and there will be a net force on that segment of the string, causing an acceleration. This net force is a restoring force, and the motion of the string can include transverse waves that solve the equation central to Sturm-Liouville theory:

$$-\frac{d}{dx}\left[\,g(x)\,\frac{d\rho(x)}{dx}\,\right] = \omega^{2}\,\sigma(x)\,\rho(x),$$

where $g(x)$ is the force constant per unit length (units of force per area), $\sigma(x)$ is the mass per unit length, and $\omega^{2}$ are the eigenvalues for resonances of transverse displacement $\rho(x)$ on the string, with solutions that include the various harmonics on a stringed instrument.
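To make these resonances concrete, here is a minimal sketch assuming the simplest special case of the eigenvalue problem above: an ideal, uniform string fixed at both ends, whose natural frequencies reduce to f_n = (n / 2L) * sqrt(T / mu). The length, tension, and linear density below are illustrative values, not taken from the text.

```python
from math import sqrt

def string_harmonics(tension_n, length_m, linear_density_kg_per_m, n_modes=5):
    """First n_modes natural frequencies (Hz) of an ideal string fixed at both
    ends: f_n = (n / (2 * L)) * sqrt(T / mu)."""
    wave_speed = sqrt(tension_n / linear_density_kg_per_m)   # transverse wave speed, m/s
    fundamental = wave_speed / (2.0 * length_m)              # first harmonic, Hz
    return [n * fundamental for n in range(1, n_modes + 1)]

# Illustrative, guitar-like values: 0.65 m length, 0.4 g/m linear density, 70 N tension.
print(string_harmonics(tension_n=70.0, length_m=0.65, linear_density_kg_per_m=4.0e-4))
```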
Tension in three-dimensional continua
Tension is also used to describe the force exerted by the ends of a three-dimensional, continuous material such as a rod or truss member. Such a rod elongates under tension. The amount of elongation and the load that will cause failure both depend on the force per cross-sectional area rather than the force alone, so stress = axial force / cross-sectional area is more useful for engineering purposes than tension. Stress is a 3x3 matrix called a tensor, and the axial element of the stress tensor is the tensile force per area (or the compression force per area, denoted as a negative number for this element, if the rod is being compressed rather than elongated).
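As a quick illustration of the stress = axial force / cross-sectional area relation, the sketch below computes the tensile stress in a circular rod. The diameter and load are hypothetical values chosen only to show the arithmetic.

```python
from math import pi

def axial_stress_pa(force_n, diameter_m):
    """Tensile stress in a circular rod: sigma = F / A, where A = pi * d**2 / 4."""
    area_m2 = pi * diameter_m ** 2 / 4.0
    return force_n / area_m2

# Hypothetical case: a 10 mm diameter rod carrying 10 kN of tension.
sigma = axial_stress_pa(10_000.0, 0.010)
print(f"stress = {sigma / 1e6:.1f} MPa")   # about 127.3 MPa
```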
System in equilibrium
A system is in equilibrium when the sum of all forces is zero.
For example, consider a system consisting of an object that is being lowered vertically by a string with tension, T, at a constant velocity. The system has a constant velocity and is therefore in equilibrium because the tension in the string (which is pulling up on the object) is equal to the force of gravity, mg, which is pulling down on the object.
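A minimal numeric version of this equilibrium case, assuming a 10 kg load and g of about 9.81 m/s^2: at constant velocity the net force is zero, so the tension must equal the weight.

```python
g = 9.81             # gravitational acceleration, m/s^2
mass = 10.0          # kg, assumed load being lowered at constant velocity
tension = mass * g   # equilibrium: T - m*g = 0, so T = m*g
print(f"T = {tension:.1f} N")   # 98.1 N
```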
System under net force
A system has a net force when an unbalanced force is exerted on it, in other words the sum of all forces is not zero. Acceleration and net force always exist together.
For example, consider the same system as above, but suppose the object is now being lowered with an increasing downward velocity, so there is a downward acceleration and therefore a net force somewhere in the system. In this case, the downward acceleration indicates that |mg| > |T|: the weight exceeds the tension in the string.
In another example, suppose that two bodies A and B, having masses m_A and m_B respectively, are connected to each other by an inextensible string over a frictionless pulley. There are two forces acting on body A: its weight (w_A = m_A * g) pulling down, and the tension T in the string pulling up. If body A has greater mass than body B, the system accelerates with body A moving downward, so m_A * g > T. Therefore, the net force on body A is m_A * g - T, so m_A * a = m_A * g - T.
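Applying Newton's second law to both bodies in this ideal pulley arrangement and eliminating the shared acceleration gives the textbook results a = (m_A - m_B) * g / (m_A + m_B) and T = 2 * m_A * m_B * g / (m_A + m_B). The sketch below simply evaluates these formulas with assumed masses; it is an illustration, not something quoted from the text.

```python
def atwood(m_a, m_b, g=9.81):
    """Acceleration and string tension for an ideal Atwood machine:
    an inextensible, massless string over a frictionless, massless pulley.
    Body A descends when m_a > m_b."""
    a = (m_a - m_b) * g / (m_a + m_b)       # shared magnitude of acceleration
    t = 2.0 * m_a * m_b * g / (m_a + m_b)   # tension, the same throughout the string
    return a, t

a, t = atwood(3.0, 2.0)   # assumed masses in kg
print(f"a = {a:.2f} m/s^2, T = {t:.2f} N")   # a = 1.96 m/s^2, T = 23.54 N
```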
Strings in modern physics
String-like objects in relativistic theories, such as the strings used in some models of interactions between quarks, or those used in the modern string theory, also possess tension. These strings are analyzed in terms of their world sheet, and the energy is then typically proportional to the length of the string. As a result, the tension in such strings is independent of the amount of stretching.
In an extensible string, Hooke's law applies.
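For the extensible case, here is a minimal sketch of Hooke's law applied to a stretched string, with an assumed spring constant; unlike the relativistic strings just described, the tension here grows in proportion to the stretch.

```python
def hooke_tension_n(spring_constant_n_per_m, length_m, relaxed_length_m):
    """Tension in an extensible (Hookean) string: T = k * (L - L0).
    Clamped at zero because a slack string cannot push."""
    return max(spring_constant_n_per_m * (length_m - relaxed_length_m), 0.0)

print(f"{hooke_tension_n(200.0, 1.10, 1.00):.1f} N")   # 20.0 N for a 10 cm stretch at k = 200 N/m
```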
A filibuster is a political procedure where one or more members of parliament or congress debate over a proposed piece of legislation so as to delay or entirely prevent a decision being made on the proposal. It is sometimes referred to as "talking a bill to death" or "talking out a bill" and is characterized as a form of obstruction in a legislature or other decision-making body. This form of political obstruction reaches as far back as Ancient Roman times and could also be referred to synonymously with political stonewalling. Due to the often extreme length of time required for a successful filibuster, many speakers stray off topic after exhausting the original subject matter. Past speakers have read through laws from different states, recited speeches, and even read from cookbooks and phone books.
The word filibuster comes from the Spanish "filibote", English "fly-boat", a small, swift sailing-vessel with a large mainsail, which enabled buccaneers to pursue merchantmen in the open sea and escape when pursued. The Oxford English Dictionary finds its only known use in early modern English in a 1587 book describing "flibutors" who robbed supply convoys.
The English filibuster was borrowed from Spanish in the 19th century. Originally it applied to pirates infesting the Spanish American coasts, but around 1850 it designated the followers of William Walker and Narciso López, who were then pillaging former Spanish colonies in Central America. The word entered American political slang with the meaning "to delay legislation by dilatory motions or other artifices".
One of the first known practitioners of the filibuster was the Roman senator Cato the Younger. In debates over legislation he especially opposed, Cato would often obstruct the measure by speaking continuously until nightfall. As the Roman Senate had a rule requiring all business to conclude by dusk, Cato's purposely long-winded speeches were an effective device to forestall a vote.
Cato attempted to use the filibuster at least twice to frustrate the political objectives of Julius Caesar. The first incident occurred during the summer of 60 BCE, when Caesar was returning home from his propraetorship in Hispania Ulterior. Caesar, by virtue of his military victories over the raiders and bandits in Hispania, had been awarded a triumph by the Senate. Having recently turned forty, Caesar had also become eligible to stand for consul. This posed a dilemma. Roman generals honored with a triumph were not allowed to enter the city prior to the ceremony, but candidates for the consulship were required, by law, to appear in person at the Forum. The date of the election, which had already been set, made it impossible for Caesar to stand unless he crossed the pomerium and gave up the right to his triumph. Caesar petitioned the Senate to stand in absentia, but Cato employed a filibuster to block the proposal. Faced with a choice between a triumph and the consulship, Caesar chose the consulship and entered the city.
Cato made use of the filibuster again in 59 BCE in response to a land reform bill sponsored by Caesar, who was then consul. When it was Cato's time to speak during the debate, he began one of his characteristically long-winded speeches. Caesar, who needed to pass the bill before his co-consul, Marcus Calpurnius Bibulus, took possession of the fasces at the end of the month, immediately recognized Cato's intent and ordered the lictors to jail him for the rest of the day. The move was unpopular with many senators and Caesar, realizing his mistake, soon ordered Cato's release. The day was wasted without the Senate ever getting to vote on a motion supporting the bill, but Caesar eventually circumvented Cato's opposition by taking the measure to the Tribal Assembly, where it passed.
In the Parliament of the United Kingdom, a bill defeated by a filibustering manoeuvre may be said to have been "talked out". The procedures of the House of Commons require that members cover only points germane to the topic under consideration or the debate underway whilst speaking. Example filibusters in the Commons and Lords include:
The all-time Commons record for non-stop speaking, six hours, was set by Henry Brougham in 1828, though this was not a filibuster. The 21st century record was set on December 2, 2005 by Andrew Dismore, Labour MP for Hendon. Dismore spoke for three hours and 17 minutes to block a Conservative Private Member's Bill, the Criminal Law (Amendment) (Protection of Property) Bill, which he claimed amounted to "vigilante law." Although Dismore is credited with speaking for 197 minutes, he regularly accepted interventions from other MPs who wished to comment on points made in his speech. Taking multiple interventions artificially inflates the duration of a speech and thus may be used as a tactic to prolong a speech.
In local unitary authorities of England, a motion may be carried into closure by filibustering. This results in any additional motions receiving less time for debate by Councillors and instead forces a vote by the Council under closure rules.
A notable filibuster took place in the Northern Ireland House of Commons in 1936 when Tommy Henderson (Independent Unionist MP for Shankill) spoke for nine and a half hours (ending just before 4 am) on the Appropriation Bill. As this Bill applied government spending to all departments, almost any topic was relevant to the debate, and Henderson used the opportunity to list all of his many criticisms of the Unionist government.
Both houses of the Australian parliament have strictly enforced rules on how long members may speak, so filibusters are generally not possible, though this is not the case in some state legislatures.
The Museum of Australian Democracy identifies the last filibuster at the federal level to be a 12-hour long speech (including interruptions) by Senator Albert Gardiner in 1918, in which he read the entire Commonwealth Electoral Act 1918, to which the Labor Party was opposed because it introduced preferential voting. The next year, Senate speeches were limited to 20 minutes (there was already a limit on speeches in the House of Representatives).
In opposition, Tony Abbott's Liberal National coalition used suspension of standing orders in 2012 for the purposes of talking at length on political issues, most commonly during question time against the Labor government. However, the suspension of standing orders was not intended to delay or stop the passage of legislation, as with a traditional filibuster.
In August 2000, New Zealand opposition parties National and ACT delayed the voting for the Employment Relations Bill by voting slowly, and in some cases in Māori (which required translation into English).
In 2009, several parties staged a filibuster of the Local Government (Auckland Reorganisation) Bill in opposition to the government setting up a new Auckland Council under urgency and without debate or review by select committee, by proposing thousands of wrecking amendments and voting in Māori, as each amendment had to be voted on and votes in Māori translated into English. Amendments included renaming the council to "Auckland Katchafire Council" or "Rodney Hide Memorial Council" and replacing the phrase "powers of a regional council" with "power and muscle".
The Rajya Sabha (Council of states) - which is the upper house in the Indian bicameral legislature - allows for a debate to be brought to a close with a simple majority decision of the house, on a closure motion so introduced by any member. On the other hand, the Lok Sabha (Council of the people) - the lower house - leaves the closure of the debate to the discretion of the speaker, once a motion to end the debate is moved by a member.
In 2014, Irish Justice Minister Alan Shatter performed a filibuster; he was perceived to "drone on and on" and hence this was termed a "Drone Attack".
A dramatic example of filibustering in the House of Commons of Canada took place between Thursday June 23, 2011 and Saturday June 25, 2011. In an attempt to prevent the passing of Bill C-6, which would have legislated the imposing of a four-year contract and pay conditions on the locked out Canada Post workers, the New Democratic Party (NDP) led a filibustering session which lasted for fifty-eight hours. The NDP argued that the legislation in its then form undermined collective bargaining. Specifically, the NDP opposed the salary provisions and the form of binding arbitration outlined in the bill.
The House was supposed to break for the summer on Thursday, June 23, but remained open in an extended session due to the filibuster. The 103 NDP MPs took turns giving speeches to delay the passing of the bill. MPs are allowed to give such speeches each time a vote takes place, and many votes were needed before the bill could be passed. As the Conservative Party of Canada held a majority in the House, the bill passed. This was the longest filibuster since the 1999 Reform Party of Canada filibuster on native treaty issues in British Columbia.
Conservative Member of Parliament Tom Lukiwski is known for his ability to stall Parliamentary Committee business by filibustering. One such example occurred October 26, 2006, when he spoke for almost 120 minutes to prevent the House of Commons of Canada Standing Committee on Environment and Sustainable Development from studying a private member's bill to implement the Kyoto Accord. He also spoke for about 6 hours on February 5, 2008 and February 7, 2008 at the House of Commons of Canada Standing Committee on Procedure and House Affairs meetings to block inquiry into allegations that the Conservative Party spent over the maximum allowable campaign limits during the 2006 election.
Another example of filibuster in Canada federally came in early 2014 when NDP MP and Deputy Leader David Christopherson filibustered the government's bill C-23, the Fair Elections Act at the Procedure and House Affairs Committee. His filibuster lasted several meetings the last of which he spoke for over 8 hours and was done to support his own motion to hold cross country hearings on the bill so MPs could hear what the Canadian public thought of the bill. In the end, given that the Conservative government had a majority at committee, his motion was defeated and the bill passed although with some significant amendments.
The Legislature of the Province of Ontario has witnessed several significant filibusters, although two are notable for the unusual manner by which they were undertaken. The first was an effort on May 6, 1991, by Mike Harris, later premier but then leader of the opposition Progressive Conservatives, to derail the implementation of the budget tabled by the NDP government under premier Bob Rae. The tactic involved the introduction of Bill 95, the title of which contained the names of every lake, river and stream in the province. Between the reading of the title by the proposing MPP, and the subsequent obligatory reading of the title by the clerk of the chamber, this filibuster occupied the entirety of the day's session until adjournment. To prevent this particular tactic from being used again, changes were eventually made to the Standing Orders to limit the time allocated each day to the introduction of bills to 30 minutes.
A second high-profile and uniquely implemented filibuster in the Ontario Legislature occurred in April 1997, where the Ontario New Democratic Party, then in opposition, tried to prevent the governing Progressive Conservatives' Bill 103 from taking effect. To protest the Tory government's legislation that would amalgamate the municipalities of Metro Toronto into the "megacity" of Toronto, the small NDP caucus introduced 11,500 amendments to the megacity bill, created on computers with mail merge functionality. Each amendment would name a street in the proposed city, and provide that public hearings be held into the megacity with residents of the street invited to participate. The Ontario Liberal Party also joined the filibuster with a smaller series of amendments; a typical Liberal amendment would give a historical designation to a named street. The NDP then added another series of over 700 amendments, each proposing a different date for the bill to come into force. The filibuster began on April 2 with the Abbeywood Trail amendment and occupied the legislature day and night, the members alternating in shifts. On April 4, exhausted and often sleepy government members inadvertently let one of the NDP amendments pass, and the handful of residents of Cafon Court in Etobicoke were granted the right to a public consultation on the bill, although the government subsequently nullified this with an amendment of its own. On April 6, with the alphabetical list of streets barely into the Es, Speaker Chris Stockwell ruled that there was no need for the 220 words identical in each amendment to be read aloud each time, only the street name. With a vote still needed on each amendment, Zorra Street was not reached until April 8. The Liberal amendments were then voted down one by one, eventually using a similar abbreviated process, and the filibuster finally ended on April 11.
An unusual example of filibustering occurred when the governing Liberal Party of Newfoundland and Labrador, having "nothing else to do in the House of Assembly," debated among themselves about their own interim supply bill, after both the Conservative and New Democratic parties indicated they intended to vote in favour of the bill.
On 28 October 1897, Dr. Otto Lecher, Delegate for Brünn, spoke continuously for twelve hours before the Abgeordnetenhaus ("House of Delegates") of the Reichsrat ("Imperial Council") of Austria, to block action on the "Ausgleich" with Hungary, which was due for renewal. Mark Twain was present, and described the speech and the political context in his essay "Stirring Times in Austria."
In the Southern Rhodesia Legislative Assembly, Independent member Dr Ahrn Palley staged a similar filibuster against the Law and Order Maintenance Bill on 22 November 1960, although this took the form of moving a long series of amendments to the Bill, and therefore consisted of multiple individual speeches interspersed with comments from other Members. Palley kept the Assembly sitting from 8 PM to 12:30 PM the following day.
In the Senate of the Philippines, Roseller Lim of the Nacionalista Party held out the longest filibuster in Philippine Senate history. On the election for the President of the Senate of the Philippines in April 1963, he stood on the podium for more than 18 hours to wait for party-mate Alejandro Almendras who was to arrive from the United States. The Nacionalistas, who comprised exactly half of the Senate, wanted to prevent the election of Ferdinand Marcos to the Senate Presidency. Prohibited from even going to the comfort room, he had to relieve himself in his pants until Almendras' arrival. He voted for party-mate Eulogio Rodriguez just as Almendras arrived, and had to be carried off via stretcher out of the session hall due to exhaustion. However, Almendras voted for Marcos, and the latter wrested the Senate Presidency from the Nacionalistas after more than a decade of control.
On December 16, 2010, Werner Kogler of the Austrian Green Party gave his speech before the budget committee, criticizing the failings of the budget and the governing parties (Social Democratic Party and Austrian People's Party) in the last years. The filibuster lasted for 12 hours and 42 minutes (starting at 13:18, and speaking until 2:00 in the morning), thus breaking the previous record held by his party-colleague Madeleine Petrovic (10 hours and 35 minutes on March 11, 1993), after which the standing orders had been changed, so speaking time was limited to 20 minutes. However, it didn't keep Kogler from giving his speech.
The filibuster is a powerful legislative device in the United States Senate. Senate rules permit a senator or senators to speak for as long as they wish and on any topic they choose, unless "three-fifths of the Senators duly chosen and sworn" (usually 60 out of 100 senators) vote to bring debate to a close by invoking cloture under Senate Rule XXII. Even if a filibuster attempt is unsuccessful, the process takes floor time. Defenders call the filibuster "The Soul of the Senate."
The filibuster is not part of the US Constitution; it became theoretically possible with a change of Senate rules in 1806 and was not used until 1837. Rarely used for much of the Senate's first two centuries, it was strengthened in the 1970s, and the majority has since preferred to avoid filibusters by moving to other business when a filibuster is threatened and attempts to achieve cloture have failed. As a result, all major legislation (apart from budgets) now effectively requires a 60% majority to pass.
Under current Senate rules, any modification or limitation of the filibuster would be a rule change that itself could be filibustered, with two-thirds of those senators present and voting (as opposed to the normal three-fifths of those sworn) needing to vote to break the filibuster.
However, under Senate precedents, a simple majority can (and has acted to) limit the practice by overruling decisions of the chair. The removal or substantial limitation of the filibuster by a simple majority, rather than a rule change, is colloquially called the nuclear option or, by some proponents, the constitutional option.
On November 21, 2013, the then-Democratic-controlled Senate exercised the nuclear option, in a 52-48 vote, to require only a majority vote to end a filibuster of all executive and judicial nominees, excluding Supreme Court nominees, rather than the 3/5 of votes previously required. On April 6, 2017, the Republican-controlled Senate did the same, in a 52-48 vote, to require only a majority vote to end a filibuster of Supreme Court nominees. A 60% supermajority is still required to end filibusters on legislation.
In the United States House of Representatives, the filibuster (the right to unlimited debate) was used until 1842, when a permanent rule limiting the duration of debate was created. The disappearing quorum was a tactic used by the minority until Speaker Thomas Brackett Reed eliminated it in 1890. As the membership of the House grew much larger than the Senate, the House had acted earlier to control floor debate and the delay and blocking of floor votes. On February 7, 2018, Minority Leader Nancy Pelosi set a record for the longest speech on the House floor (8 hours and 7 minutes), in support of Deferred Action for Childhood Arrivals, taking advantage of the fact that the Minority Leader is allowed to speak indefinitely without interruption.
Only 14 state legislatures have a filibuster.
In France, member of Parliament Christine Boutin spoke for five hours in the French National Assembly in November 1999 in an attempt to prevent or postpone the adoption of PACS, a contractual form of civil union open to homosexual couples, which she opposed.
In August 2006, the left-wing opposition submitted 137,449 amendments to the proposed law bringing the share in Gaz de France owned by the French state from 80% to 34% in order to allow for the merger between Gaz de France and Suez. Normal parliamentary procedure would require 10 years to vote on all the amendments.
The French constitution gives the government two options to defeat such a filibuster. The first was originally the use of the article 49, paragraph 3 procedure, under which the law is adopted unless a majority is reached on a non-confidence motion. (A July 2008 reform restricted this power to budgetary measures, plus one other bill per ordinary session, i.e. from October to June; before this reform, article 49.3 was frequently used, especially when the government had too slim a majority in the Assemblée nationale to pass the text but still enough support to avoid a non-confidence vote.) The second is the article 44, paragraph 3 procedure, through which the government can force a global vote on all amendments it did not approve or submit itself.
In the end, the government did not have to use either of those procedures. As the parliamentary debate started, the left-wing opposition chose to withdraw all the amendments to allow for the vote to proceed. The "filibuster" was aborted because the privatisation of Gaz de France appeared to have little opposition amongst the general population. It also appeared that this privatisation law could be used by the left-wing in the presidential election of 2007 as a political argument. Indeed, Nicolas Sarkozy, president of the Union pour un Mouvement Populaire (UMP - the right wing party), Interior Minister, former Finance Minister and campaigning for President, had previously promised that the share owned by the French government in Gaz de France would never go below 70%.
The first incidence of filibustering in the Legislative Council (LegCo) after the Handover occurred during the second reading of the Provision of Municipal Services (Reorganization) Bill in 1999, which aimed at dissolving the partially elected Urban Council and Regional Council. As the absence of some pro-establishment legislators would have meant inadequate support for the passing of the bill, the pro-establishment camp, together with Michael Suen, the then-Secretary for Constitutional Affairs, filibustered so that the vote on the bill was delayed to the next day and the absentees could cast their votes. Though the filibuster was criticised by the pro-democracy camp, Lau Kong-wah of the Democratic Alliance for the Betterment and Progress of Hong Kong (DAB) defended their actions, saying "it (a filibuster) is totally acceptable in a parliamentary assembly."
Legislators of the Pro-democracy Camp filibustered during a debate about financing the construction of the Guangzhou-Shenzhen-Hong Kong Express Rail Link by raising many questions on very minor issues, delaying the passing of the bill from 18 December 2009 to 16 January 2010. The Legislative Council Building was surrounded by thousands of anti-high-speed rail protesters during the course of the meetings.
In 2012, Albert Chan and Wong Yuk-man of People Power submitted a total of 1,306 amendments to the Legislative Council (Amendment) Bill, by which the government attempted to forbid lawmakers from participating in by-elections after their resignation. The bill was a response to the so-called "Five Constituencies Referendum," in which five lawmakers from the pro-democracy camp resigned and then joined the by-election, claiming that this would affirm the public's support for pushing forward electoral reform. The pro-democracy camp strongly opposed the bill, seeing it as a deprivation of citizens' political rights. As a result of the filibuster, the LegCo carried on multiple overnight debates on the amendments. On the morning of 17 May 2012, the President of the LegCo, Jasper Tsang, terminated the debate, citing Article 92 of the Rules of Procedure of LegCo: "In any matter not provided for in these Rules of Procedure, the practice and procedure to be followed in the Council shall be such as may be decided by the President who may, if he thinks fit, be guided by the practice and procedure of other legislatures." In the end, all motions to amend the bill were defeated and the Bill was passed.
To ban filibustering, Ip Kwok-him of the DAB sought to limit each member to moving only one motion, by amending the procedures of the Finance Committee and its two subcommittees in 2013. In response, all 27 members from the pan-democracy camp submitted 1.9 million amendments. The Secretariat estimated that 408 man-months (each containing 156 working hours) would be needed to vet the facts and accuracy of the motions and that, if all amendments were admitted by the Chairman, the voting would take 23,868 two-hour meetings.
As of 2017, filibustering remained an ongoing practice of the pan-democratic camp in Hong Kong, but at the same time the pan-democrats have come under heavy fire from the pro-Beijing camp for making filibustering a norm in the Legislative Council.
During the Iranian oil nationalisation debate, the filibustering speech of Hossein Makki, a National Front deputy, took four days and left the pro-British and pro-royalist deputies in the Majlis (the Iranian parliament) unable to act. To forestall a vote, the opposition, headed by Makki, conducted a filibuster: for four days he spoke about the country's tortuous experience with the Anglo-Iranian Oil Company (AIOC) and the shortcomings of the bill. When the parliamentary term ended four days later, the debate had reached no conclusion, and the fate of the bill was left to be decided by the next Majlis.
South Korean opposition lawmakers began a filibuster on February 23, 2016 to stall the Anti-Terrorism Bill, which they claimed would give too much power to the National Intelligence Service and result in invasions of citizens' privacy. The filibuster ended on March 2 after a total of 193 hours, and the bill was passed. South Korea's 20th legislative elections were held two months after the filibuster, and the opposition Minjoo Party of Korea won more seats than the ruling party, the Saenuri Party.
Since 2019 the Senate of Poland has been controlled by parties opposing the ruling PiS. In 2020 the chamber delayed the legislative procedure for a controversial electoral act by 30 days and eventually vetoed it.
- SI unit: cubic metre (m³)
- Other units: litre, fluid ounce, gallon, quart, pint, teaspoon, fluid dram, in³, yd³, barrel
- In SI base units: 1 m³
Volume is a scalar quantity expressing the amount of three-dimensional space enclosed by a closed surface; for example, the space that a substance (solid, liquid, gas, or plasma) or 3D shape occupies or contains. Volume is often quantified numerically using the SI derived unit, the cubic metre. The volume of a container is generally understood to be the capacity of the container; i.e., the amount of fluid (gas or liquid) that the container could hold, rather than the amount of space the container itself displaces. Three-dimensional mathematical shapes are also assigned volumes. Volumes of some simple shapes, such as regular, straight-edged, and circular shapes, can be easily calculated using arithmetic formulas. Volumes of complicated shapes can be calculated with integral calculus if a formula exists for the shape's boundary. One-dimensional figures (such as lines) and two-dimensional shapes (such as squares) are assigned zero volume in three-dimensional space.
The volume of a solid (whether regularly or irregularly shaped) can be determined by fluid displacement. Displacement of liquid can also be used to determine the volume of a gas. The combined volume of two substances is usually greater than the volume of just one of the substances. However, sometimes one substance dissolves in the other and in such cases the combined volume is not additive.
In differential geometry, volume is expressed by means of the volume form, and is an important global Riemannian invariant. In thermodynamics, volume is a fundamental parameter, and is a conjugate variable to pressure.
Any unit of length gives a corresponding unit of volume: the volume of a cube whose sides have the given length. For example, a cubic centimetre (cm³) is the volume of a cube whose sides are one centimetre (1 cm) in length.
In the International System of Units (SI), the standard unit of volume is the cubic metre (m³). The metric system also includes the litre (L) as a unit of volume, where one litre is the volume of a 10-centimetre cube. Thus
- 1 litre = (10 cm)³ = 1000 cubic centimetres = 0.001 cubic metres,
- 1 cubic metre = 1000 litres.
Small amounts of liquid are often measured in millilitres, where
- 1 millilitre = 0.001 litres = 1 cubic centimetre.
In the same way, large amounts can be measured in megalitres, where
- 1 million litres = 1000 cubic metres = 1 megalitre.
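The conversions above are pure arithmetic, so they are easy to check in code. The following is a minimal Python sketch (the constant and function names are illustrative, not from any standard library):

```python
# Metric volume conversions listed above, assuming only that
# 1 litre is defined as the volume of a 10 cm cube.
CM3_PER_LITRE = 10 ** 3           # 1 litre = (10 cm)^3 = 1000 cm^3
LITRES_PER_CUBIC_METRE = 1000     # 1 m^3 = 1000 litres
LITRES_PER_MEGALITRE = 10 ** 6    # 1 megalitre = 1,000,000 litres

def litres_to_cubic_metres(litres: float) -> float:
    return litres / LITRES_PER_CUBIC_METRE

def megalitres_to_cubic_metres(megalitres: float) -> float:
    return litres_to_cubic_metres(megalitres * LITRES_PER_MEGALITRE)

print(litres_to_cubic_metres(1))      # 0.001 (cubic metres)
print(megalitres_to_cubic_metres(1))  # 1000.0 (cubic metres)
```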
Various other traditional units of volume are also in use, including the cubic inch, the cubic foot, the cubic yard, the cubic mile, the teaspoon, the tablespoon, the fluid ounce, the fluid dram, the gill, the pint, the quart, the gallon, the minim, the barrel, the cord, the peck, the bushel, the hogshead, the acre-foot and the board foot. These are all units of volume.
Capacity is defined by the Oxford English Dictionary as "the measure applied to the content of a vessel, and to liquids, grain, or the like, which take the shape of that which holds them". (The word capacity has other unrelated meanings, as in e.g. capacity management.) Capacity is not identical in meaning to volume, though closely related; the capacity of a container is always the volume in its interior. Units of capacity are the SI litre and its derived units, and Imperial units such as gill, pint, gallon, and others. Units of volume are the cubes of units of length. In SI the units of volume and capacity are closely related: one litre is exactly 1 cubic decimetre, the capacity of a cube with a 10 cm side. In other systems the conversion is not trivial; the capacity of a vehicle's fuel tank is rarely stated in cubic feet, for example, but in gallons (an imperial gallon occupies a volume of about 0.1605 cubic feet).
The density of an object is defined as the ratio of the mass to the volume. The inverse of density is specific volume which is defined as volume divided by mass. Specific volume is a concept important in thermodynamics where the volume of a working fluid is often an important parameter of a system being studied.
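Written out, with $m$ for mass, $V$ for volume, $\rho$ for density and $v$ for specific volume, the two definitions are simply
$$\rho = \frac{m}{V}, \qquad v = \frac{V}{m} = \frac{1}{\rho}.$$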
Volumetric space is a 3D region having a shape in addition to capacity or volume.
In cylindrical coordinates, the volume integral is
$$V = \iiint r \, dr \, d\theta \, dz.$$
In spherical coordinates (using the convention for angles with $\varphi$ as the azimuth and $\theta$ measured from the polar axis; see more on conventions), the volume integral is
$$V = \iiint r^2 \sin\theta \, dr \, d\theta \, d\varphi.$$
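As an illustration, the spherical-coordinate integral can be checked numerically against the closed-form sphere volume $\tfrac{4}{3}\pi R^3$. The following is a rough midpoint-rule sketch in Python (the radius and grid size are arbitrary choices):

```python
import math

# Midpoint-rule evaluation of V = ∭ r² sin(θ) dr dθ dφ over a ball of
# radius R. The integrand does not depend on φ, so the φ integral just
# contributes a factor of 2π.
R = 2.0
n = 200                        # subdivisions per axis (arbitrary)
dr, dtheta = R / n, math.pi / n

volume = 0.0
for i in range(n):
    r = (i + 0.5) * dr
    for j in range(n):
        theta = (j + 0.5) * dtheta
        volume += r ** 2 * math.sin(theta) * dr * dtheta * 2 * math.pi

print(volume)                    # ≈ 33.51
print(4 / 3 * math.pi * R ** 3)  # 33.5103...
```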
Common volume formulas include $V = \tfrac{1}{3}Bh$ for a pyramid or cone (B: area of base, h: height), $V = \pi \int_a^b f(x)^2 \, dx$ for the solid of revolution obtained by rotating $y = f(x)$ about the x-axis, and $V = \int_a^b A(x) \, dx$ for a solid body with continuous cross-sectional area $A(x)$.
Ratios for a cone, sphere and cylinder of the same radius and height
The above formulas can be used to show that the volumes of a cone, sphere and cylinder of the same radius and height are in the ratio 1 : 2 : 3, as follows.
Let the radius be r and the height be h (which is 2r for the sphere); then the volume of the cone is
$$\tfrac{1}{3}\pi r^2 h = \tfrac{1}{3}\pi r^2 (2r) = \tfrac{2}{3}\pi r^3,$$
the volume of the sphere is
$$\tfrac{4}{3}\pi r^3,$$
while the volume of the cylinder is
$$\pi r^2 h = \pi r^2 (2r) = 2\pi r^3.$$
The volume of a sphere is the integral of an infinite number of infinitesimally small circular disks of thickness dx. The calculation for the volume of a sphere with center 0 and radius r is as follows.
The surface area of a circular disk of radius $r$ is $\pi r^2$.
The radius of the circular disks, defined such that the x-axis cuts perpendicularly through them, is
$$y = \sqrt{r^2 - x^2} \quad \text{or} \quad z = \sqrt{r^2 - x^2},$$
where y or z can be taken to represent the radius of a disk at a particular x value.
Using y as the disk radius, the volume of the sphere can be calculated as
$$\int_{-r}^{r} \pi y^2 \, dx = \int_{-r}^{r} \pi \left( r^2 - x^2 \right) dx = \pi \left[ r^2 x - \frac{x^3}{3} \right]_{-r}^{r} = \frac{4}{3}\pi r^3.$$
This formula can be derived more quickly using the formula for the sphere's surface area, which is $4\pi r^2$. The volume of the sphere consists of layers of infinitesimally thin spherical shells, and the sphere volume is equal to
$$\int_0^r 4\pi r'^2 \, dr' = \frac{4}{3}\pi r^3.$$
The cone is a type of pyramidal shape. The fundamental equation for pyramids, one-third times base times altitude, applies to cones as well.
However, using calculus, the volume of a cone is the integral of an infinite number of infinitesimally thin circular disks of thickness dx. The calculation for the volume of a cone of height h, whose base is centered at (0, 0, 0) with radius r, is as follows.
The radius of each circular disk is r if x = 0 and 0 if x = h, and varying linearly in between—that is,
$$r \, \frac{h - x}{h}.$$
The surface area of the circular disk is then
$$\pi \left( r \, \frac{h - x}{h} \right)^2 = \pi r^2 \, \frac{(h - x)^2}{h^2}.$$
The volume of the cone can then be calculated as
$$\int_0^h \pi r^2 \, \frac{(h - x)^2}{h^2} \, dx,$$
and after extraction of the constants
$$\frac{\pi r^2}{h^2} \int_0^h (h - x)^2 \, dx.$$
Integrating gives us
$$\frac{\pi r^2}{h^2} \left( \frac{h^3}{3} \right) = \frac{1}{3} \pi r^2 h.$$
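If SymPy is available, the same integral can be evaluated symbolically as a quick check (a minimal sketch; the variable names are arbitrary):

```python
import sympy as sp

# Symbolic check of the cone-volume integral derived above:
#   V = ∫₀ʰ π r² (h − x)² / h² dx = (1/3) π r² h
x, r, h = sp.symbols('x r h', positive=True)

disk_area = sp.pi * (r * (h - x) / h) ** 2   # area of the disk at position x
V = sp.integrate(disk_area, (x, 0, h))

print(sp.simplify(V))   # pi*h*r**2/3
```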
In differential geometry, a branch of mathematics, a volume form on a differentiable manifold is a differential form of top degree (i.e., whose degree is equal to the dimension of the manifold) that is nowhere equal to zero. A manifold has a volume form if and only if it is orientable. An orientable manifold has infinitely many volume forms, since multiplying a volume form by a non-vanishing function yields another volume form. On non-orientable manifolds, one may instead define the weaker notion of a density. Integrating the volume form gives the volume of the manifold according to that form.
An oriented pseudo-Riemannian manifold has a natural volume form. In local coordinates, it can be expressed as
$$\omega = \sqrt{|\det g|} \; dx^1 \wedge \dots \wedge dx^n,$$
where the $dx^i$ are 1-forms that form a positively oriented basis for the cotangent bundle of the manifold, and $\det g$ is the determinant of the matrix representation of the metric tensor on the manifold in terms of the same basis.
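As a concrete illustration (a standard textbook example, not taken from the text above), consider the round metric on the unit 2-sphere in spherical coordinates $(\theta, \varphi)$:
$$g = d\theta^2 + \sin^2\theta \, d\varphi^2, \qquad \det g = \sin^2\theta, \qquad \omega = \sqrt{|\det g|} \; d\theta \wedge d\varphi = \sin\theta \, d\theta \wedge d\varphi.$$
Integrating $\omega$ over the sphere gives $\int_0^{2\pi} \int_0^{\pi} \sin\theta \, d\theta \, d\varphi = 4\pi$, the familiar surface area, which is the "volume" of this two-dimensional manifold.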
In thermodynamics, the volume of a system is an important extensive parameter for describing its thermodynamic state. The specific volume, an intensive property, is the system's volume per unit of mass. Volume is a function of state and is interdependent with other thermodynamic properties such as pressure and temperature. For example, volume is related to the pressure and temperature of an ideal gas by the ideal gas law.
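For example, the ideal gas law $pV = nRT$ can be rearranged to give the volume directly. The short Python sketch below computes the molar volume of an ideal gas at 0 °C and 1 atm (the constant is rounded; the function name is illustrative):

```python
# Volume of an ideal gas from the ideal gas law, V = nRT / p.
R = 8.314  # molar gas constant in J/(mol*K), rounded

def ideal_gas_volume(n_moles: float, temperature_K: float, pressure_Pa: float) -> float:
    """Return the volume in cubic metres of n moles of an ideal gas."""
    return n_moles * R * temperature_K / pressure_Pa

# One mole at 0 °C (273.15 K) and 1 atm (101325 Pa):
print(ideal_gas_volume(1.0, 273.15, 101325))  # ≈ 0.0224 m³, i.e. about 22.4 litres
```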
The task of numerically computing the volume of objects is studied in the field of computational geometry in computer science, investigating efficient algorithms to perform this computation, approximately or exactly, for various types of objects. For instance, the convex volume approximation technique shows how to approximate the volume of any convex body using a membership oracle.
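The polynomial-time randomized algorithms referred to here are quite involved; the sketch below is only a naive hit-and-miss Monte Carlo estimate that illustrates what a membership oracle is used for (the oracle, bounding box and sample count are arbitrary choices, not the published algorithm):

```python
import random

# Naive Monte Carlo volume estimate of a convex body given only a
# membership oracle: sample points uniformly in a known bounding box
# and count the fraction the oracle accepts.
def estimate_volume(oracle, lower, upper, samples=100_000):
    box_volume = 1.0
    for lo, hi in zip(lower, upper):
        box_volume *= hi - lo
    hits = 0
    for _ in range(samples):
        point = [random.uniform(lo, hi) for lo, hi in zip(lower, upper)]
        if oracle(point):
            hits += 1
    return box_volume * hits / samples

# Example oracle: the unit ball in three dimensions (true volume 4π/3 ≈ 4.19).
unit_ball = lambda p: sum(x * x for x in p) <= 1.0
print(estimate_volume(unit_ball, lower=(-1, -1, -1), upper=(1, 1, 1)))
```

In high dimensions this rejection approach breaks down, because the fraction of accepted samples shrinks exponentially; that is precisely why the algorithms studied in computational geometry rely on more sophisticated random walks inside the body.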
- "Your Dictionary entry for "volume"". Retrieved 2010-05-01.
- One litre of sugar (about 970 grams) can dissolve in 0.6 litres of hot water, producing a total volume of less than one litre."Solubility". Retrieved 2010-05-01.
Up to 1800 grams of sucrose can dissolve in a liter of water.
- "General Tables of Units of Measurement". NIST Weights and Measures Division. Archived from the original on 2011-12-10. Retrieved 2011-01-12.
- "capacity". Oxford English Dictionary (Online ed.). Oxford University Press. (Subscription or participating institution membership required.)
- "density". Oxford English Dictionary (Online ed.). Oxford University Press. (Subscription or participating institution membership required.)
- Rorres, Chris. "Tomb of Archimedes: Sources". Courant Institute of Mathematical Sciences. Retrieved 2007-01-02.
2-22-19 How the zebra got its stripes: The problem with 'just-so' stories
When it comes to explaining why zebras have stripes, it’s best to remember that some issues are not black and white. Biologists have been debating the puzzle since Darwin’s time, but a study published on Wednesday offers further evidence for one of the most promising explanations: that the stripes deter biting flies. In the parts of Africa where zebras live, there are blood-sucking horseflies that carry lethal diseases such as trypanosomiasis. Clearly, zebras would do well to avoid being bitten. The idea is that the stripes somehow confuse the flies so that they don’t land on the zebras. A team led by Tim Caro of the University of California, Davis tracked captive zebras and horses at a site in England. Horseflies circled round both, but they landed on horses significantly more often. Putting striped coats on the horses’ bodies meant the horseflies landed there less often – but still landed on their heads, which were uncovered. The implication is that the stripes were having a real effect. The hypothesis is backed by a lot of evidence, but does that mean it’s the only reason for a zebra’s stripes? Not necessarily. Some ideas don’t seem to stand up, notably the suggestion that the stripes help zebras cool down on hot days – if that were true, we would expect a lot more tropical animals to be stripy. But other ideas seem to have more to them. One which at first seems ridiculous is that the stripes are a form of camouflage. Obviously, zebras are not inconspicuous. But the stripes could create “dazzle camouflage”: overwhelming the predator’s visual system and making it hard to track the zebra’s movement. Think about the experience of watching a herd of zebras all dashing in different directions, and imagine trying to pick out one of them to bring down.
2-22-19 African hominid fossils show ancient steps toward a two-legged stride
New cache of Ardipithecus ramidus bones reveals advances in upright walking 4 million years ago. Fossils unearthed from an Ethiopian site not far from where the famous hominid Ardi’s partial skeleton was found suggest that her species was evolving different ways of walking upright more than 4 million years ago. Scientists have established that Ardi herself could walk upright (SN Online: 4/2/18). But the new fossils demonstrate that other members of Ardipithecus ramidus developed a slightly more efficient upright gait than Ardi’s, paleoanthropologist Scott Simpson and colleagues report in the April Journal of Human Evolution. The fossils, excavated in Ethiopia’s Gona Project area, are the first from the hominid species since 110 Ar. ramidus fossils, including Ardi’s remains, were found about 100 kilometers to the south (SN: 10/24/09, p. 9). Gona field surveys and excavations from 1999 through 2013 yielded Ar. ramidus remains, including 42 lower-body fossils, two jaw fragments and a large number of isolated teeth. Several leg and foot bones, along with a pelvic fragment, a lower back bone and possibly some rib fragments, came from the same individual. The same sediment layers, characterized by previously dated reversals of Earth’s magnetic field, contained fossils of extinct pigs, monkeys and other animals known to have lived more than 4 million years ago. Unlike Ardi, the fossil individual at Gona walked on an ankle that better supported its legs and trunk, says Simpson, of Case Western Reserve University in Cleveland. And only the Gona hominid could push off its big toe while striding on two legs.
2-21-19 A ban on artificial trans fats in NYC restaurants appears to be working
The fatty acids have been linked to heart disease. New Yorkers fond of eating out in the last decade weren’t just saved from doing the dishes. Residents’ blood levels of artificial trans fats, which increase the risk of heart disease, dropped following a 2006 citywide policy that banned restaurants from using the fats. Researchers analyzed blood samples of adult city residents from before and after the ban, taken as part of a health and nutrition survey that queried participants on their dining habits. The samples, 212 from 2004 and 247 from 2013–2014, revealed a drop from 49.2 to 21.3 micromoles per liter, suggesting that trans fat levels plunged by about 57 percent overall among New Yorkers. For people who dined out frequently, the decrease was even greater: Levels of the fats declined by about 62 percent for New Yorkers who ate out four or more times per week, the team reports online February 21 in the American Journal of Public Health. An estimated 1 in 5 city residents eats out that frequently, says study coauthor Sonia Angell, deputy commissioner of the New York City Department of Health and Mental Hygiene in Queens. “We think [the ban] has just been a win overall for New Yorkers … in particular for those who dine out more frequently.” Artificial trans fats, also called trans fatty acids, end up in foods like fried chicken and doughnuts, anything that is fried, baked or cooked in partially hydrogenated vegetable oils. The fats increase the amount of low-density lipoprotein, commonly known as “bad” cholesterol, in the body while lowering high-density lipoprotein, the “good” cholesterol.
2-21-19 Dinosaur extinction lines up closely with timing of volcanic eruptions
Were the dinosaurs seen off by an asteroid, or a flurry of volcanic eruptions? Two new studies on the timing of volcanic events help us piece together the story of Earth’s most famous mass extinction, but they leave it unclear exactly what triggered the demise of so many species. Around three-quarters of the species on Earth are thought to have perished in the Cretaceous-Palaeogene extinction event 66 million years ago, most famously including all dinosaurs except the ancestors of modern birds. In the geological records, the event coincides with a layer of rock with high levels of iridium – evidence of an asteroid impact. Most geologists think this impact created a huge crater at Chicxulub, Mexico. The impact could have caused a global soot cloud that blocked out the sun. However, the extinction also coincided with a burst of intense volcanic activity that formed a huge rock formation known as the Deccan Traps in western India. Similar volcanic events have been implicated in other mass extinctions in Earth’s history. Eruptions can warm the climate by releasing massive quantities of greenhouse gases or cause cooling by putting sun-blocking aerosols into the high atmosphere. To get a more precise idea about the timing of the Deccan eruptions, Courtney Sprain of the University of California, Berkeley, and colleagues used argon-argon dating to estimate the age of the lava flows. Another team, led by Blair Schoene of Princeton University, New Jersey, used a different method, uranium-lead dating. Both studies agree that the Deccan eruptions took place over a period lasting around a million years, beginning around 400,000 years before the extinction event. But on precise details, their conclusions differ. Schoene and his colleagues suggest the Deccan eruptions occurred in four bursts. The second was the most rapid and it began tens of thousands of years before the asteroid impact.
2-21-19 Why kids may be at risk from vinyl floors and fire-resistant couches
Chemicals called semivolatile organic compounds have been linked to health problems. Home decor like furniture and flooring may not be notorious polluters like gas-guzzlers, but these indoor consumer products can also be significant sources of potentially dangerous chemicals. Kids who live in homes with all vinyl flooring or living room couches that contain flame retardants have much higher concentrations of chemicals called semivolatile organic compounds in their blood and urine than other children. Researchers reported those results February 17 at the annual meeting of the American Association for the Advancement of Science. Manufacturers commonly use semivolatile organic compounds, such as plasticizers and flame retardants, to make electronics, furniture and other household trappings (SN: 11/14/15, p. 10). “Many of these chemicals have been implicated in adverse health outcomes in children — things like ADHD, autism … even cancer,” environmental health researcher Heather Stapleton of Duke University said in a news conference. “It’s important that we understand the primary sources of these chemicals in the home.” Stapleton and her colleagues investigated the in-home exposure of 203 children, ages 3 to 6, to semivolatile organic compounds. The team collected dust and air samples, along with small pieces of items like couch cushions, from the kids’ homes. The researchers also gathered urine and blood samples from the children. Children living in homes with all vinyl flooring had concentrations of a by-product of the plasticizer benzyl butyl phthalate in their urine of about 240 nanograms per milliliter on average. Meanwhile, kids living in homes with no vinyl flooring had only about 12 nanograms per milliliter on average. Children with the highest exposure showed 20 to 40 percent of the “reference dose” for benzyl butyl phthalate — that is, the highest daily dose that the U.S. Environmental Protection Agency considers safe to ingest without negative consequences over a person’s lifetime.
2-21-19 A deer-sized T. rex ancestor shows how fast tyrannosaurs became giants
The newly discovered fossil’s name, Moros intrepidus, means ‘the harbinger of doom’. A new dinosaur shows that even Tyrannosaurus rex had humble beginnings. Dubbed Moros intrepidus, or “the harbinger of doom,” the new species is one of the smallest tyrannosaurs yet discovered from the Cretaceous Period. Analyses of the animal’s fossilized leg show that the creature would have stood only 1.2 meters at the hip, and weighed an estimated 78 kilograms — about the size of a mule deer, researchers report February 21 in Communications Biology. Dating to around 96 million years ago, the fossil is the oldest tyrannosaur found in North America. Its discovery helps fill in a 70-million-year gap in the evolution of tyrannosaurs leading up to the ferocious giants like T. rex. Teeth from early, petite tyrannosaurs have been found in rocks in North America dating to the Late Jurassic Period around 150 million years ago, when larger predator dinosaurs called allosaurs topped the food chain. But the next time tyrannosaurs are seen in the North American fossil record is 70 million years later, when they’ve become the colossal top predators. When, and how, the dinosaurs sized up within that period is a mystery. Paleontologist Lindsay Zanno of North Carolina State University in Raleigh and her colleagues dug for 10 years around Emery County in Utah, searching for clues to solve that mystery. That’s where the team discovered M. intrepidus’ long, thin leg, a characteristic indicative of a swift runner, quite unlike later titanic tyrannosaurs. “What Moros shows is that the ancestral stock of the big tyrannosaurs was small and fast,” says Thomas Carr, a vertebrate paleontologist at Carthage College in Kenosha, Wis., who wasn’t involved in the study. And it “suggests that the tyrannosaurs became giant some time in that 16-million-year stretch between Moros and the earliest of the big guys.”
2-21-19 Teeny T. rex relative discovered in US
A newly discovered relative of Tyrannosaurus rex stood just over a metre tall at the hip, a study shows. The diminutive tyrannosaur reveals crucial new information about how T. rex established itself as a dominant carnivore in North America. Early in their evolution, tyrannosaurs were small, but at some stage, the hulking T. rex along with others emerged as apex predators. The new fossil helps fill a 70-million-year gap in the fossil record. Discovered in Emery County, Utah, the animal lived about 96 million years ago, during the later part of the Cretaceous Period. Tyrannosaurs, or "tyrant lizards" - the group to which this specimen and T. rex belong - ruled the predatory roost on land during the last 15 million years before the Chicxulub asteroid slammed into the Yucatan Peninsula 66 million years ago. T. rex could reach more than 3.5m tall as measured at the hip. But, as co-author Lindsay Zanno, from North Carolina State University, explained, it wasn't always this way: "Early in their evolution, tyrannosaurs hunted in the shadows of archaic lineages such as allosaurs that were already established at the top of the food chain," she said. Our understanding of the evolutionary events leading up to the appearance of giant tyrannosaurs has been limited by the lack of complete fossils in North America. Small-ish, primitive tyrannosaurs have been found in North America dating from the Jurassic Period (around 150 million years ago). By around 81 million years ago, North American tyrannosaurs had become enormous beasts. But the fossil record in between these two time periods is patchy. The lower leg bones of the new species, Moros intrepidus, were discovered in the same area where Dr Zanno and her team had previously found Siats meekerorum, a giant meat-eating dinosaur belonging to a group known as the carcharodontosaurs. This larger predator lived during the same period as Moros. The researchers estimate that Moros intrepidus was about the size of a modern mule deer, weighing about 78kg. It was seven years old when it died and was almost fully grown.
2-20-19 Footballers really are working harder and getting injured more often
English Premier League footballers will enjoy a mid-season break next winter, partly in an attempt to reduce injuries. Some say footballers have never had it so easy, but a study of player injuries confirms the modern game is increasingly taking its toll. Ashley Jones at Leeds Beckett University in the UK and his colleagues tracked 243 players from 10 clubs across four of the divisions below the English Premier League in the 2015/16 season. They found an average of 1.9 injuries per player per season, compared to 1.3 in the 1997/98 and 1998/99 seasons combined – the last time a similar study was conducted. “It’s a different game now,” says Jones. “Twenty years ago you had footballers trying to be athletes. Now we have athletes who can play football.” Today’s players run around 30 per cent further than in 2006, but recovery time has not increased. Lower league teams play a 46-game season, with additional cup competition matches. It is no surprise that old injuries flare up more often, says Jones. Of the injuries the group tracked, 17 per cent were recurrences of an existing problem, up from 7 per cent in 1997-9. And 40 per cent of modern injuries were the result of repetitive stress and strain placed on players’ bodies over time. Some things haven’t changed. The most common injury remains a hamstring strain. And problems tend to peak twice in the year: during winter and in the first few weeks of the season. Coaches could be pushing players too hard and too early, says Jones. “It’s not needed. These players don’t lose fitness in the summer like they used to.”
2-20-19 Why a data scientist warns against always trusting AI’s scientific discoveries
Data-mining algorithms aren’t good at communicating uncertainty in results, Genevera Allen says. We live in a golden age of scientific data, with larger stockpiles of genetic information, medical images and astronomical observations than ever before. Artificial intelligence can pore over these troves to uncover potential new scientific discoveries much quicker than people ever could. But we should not blindly trust AI’s scientific insights, argues data scientist Genevera Allen, until these computer programs can better gauge how certain they are in their own results. AI systems that use machine learning — programs that learn what to do by studying data rather than following explicit instructions — can be entrusted with some decisions, says Allen, of Rice University in Houston. Namely, AI is reliable for making decisions in areas where humans can easily check their work, like counting craters on the moon or predicting earthquake aftershocks (SN: 12/22/18, p. 25). But more exploratory algorithms that poke around large datasets to identify previously unknown patterns or relationships between various features “are very hard to verify,” Allen said February 15 at a news conference at the annual meeting of the American Association for the Advancement of Science. Deferring judgment to such autonomous, data-probing systems may lead to faulty conclusions, she warned. Take precision medicine, where researchers often aim to find groups of patients that are genetically similar to help tailor treatments. AI programs that sift through genetic data have successfully identified patient groups for some diseases, like breast cancer. But it hasn’t worked as well for many other conditions, like colorectal cancer. Algorithms examining different datasets have clustered together different, conflicting patient classifications. That leaves scientists to wonder which, if any, AI to trust.
2-20-19 A 30 minute walk may reduce blood pressure by as much as medication
Just 30 minutes of exercise every morning may be as effective as medication at lowering blood pressure for the rest of the day. A study found that a short burst of treadmill walking each morning had long-lasting effects, and there were further benefits from additional short walks later in the day. In experiments, 35 women and 32 men aged 55 to 80 followed three different daily plans, in a random order, with at least six days between each one. The first plan consisted of uninterrupted sitting for eight hours, while the second consisted of one hour of sitting before 30 minutes of walking on a treadmill at moderate intensity, followed by 6.5 hours of sitting down. The final plan was one hour of sitting before 30 minutes of treadmill walking, followed by 6.5 hours of sitting which was interrupted every 30 minutes with three minutes of walking at a light intensity. The study was conducted in a laboratory to standardise the results, and men and women ate the same meals the evening before the study and during the day. Michael Wheeler at the University of Western Australia in Perth and colleagues found that blood pressure was lower in men and women who took part in the exercise plans, compared with when they did not exercise. The effect was especially seen with systolic blood pressure, which measures pressure in blood vessels when the heart beats and is a stronger predictor of heart problems such as heart attacks than diastolic blood pressure, which measures the pressure in blood vessels when the heart rests between beats. Women also saw extra benefits if they added in the short three-minute walks throughout the day, although the effect was less for men. The team say they do not know why there was a gender difference, but think it may be due to varying adrenaline responses to exercise and the fact that all women in the study were post-menopausal and therefore at higher risk of cardiovascular disease.
2-20-19 How to upgrade your thinking and avoid traps that make you look stupid
Even the most intelligent people can make ridiculous mistakes – but there are simple things all of us can do to act more wisely and avoid blinkered thinking. PAUL FRAMPTON was looking for love. A 68-year-old divorcee, he was delighted to strike up a friendship on an online dating site with someone claiming to be the Czech glamour model Denise Milani. They soon arranged to meet during one of her modelling assignments in South America. When he arrived in La Paz, Bolivia, however, he was disappointed to find that Milani had been asked to fly to another shoot. But could he pick up the suitcase she had left? He did, and was subsequently arrested and charged with smuggling 2 kilograms of cocaine. It may seem like an obvious honey trap, yet Frampton wasn’t exactly lacking in brainpower. An acclaimed physicist, he had written papers on string theory and quantum field theory. How could someone so clever have been so stupid? Recent psychological research shows that Frampton’s behaviour isn’t as exceptional as it first appears. IQ does correlate with many important outcomes in life, including academic success and job performance in many workplaces. But it is less useful at predicting “wise” decision-making and critical thinking, including the capacity to assess risk and uncertainty and weigh up conflicting evidence. Indeed, as I discuss in my new book The Intelligence Trap, intelligence and expertise can sometimes make you more likely to err. This has important consequences, leading not only to errors like Frampton’s, but also to the political polarisation we see on burning issues such as Brexit or climate change. Here are some of the big intellectual traps that lead smart people to act stupidly. Luckily there are science-backed ways to avoid them.
2-20-19 We don't know what a fifth of our genes do - and won’t find out soon
We still have no idea what 20 per cent of protein-coding genes are for. What’s more, we’ve stopped making progress, according to a study looking at what we know about yeast and human proteins. “Basically we really don’t have a clue,” says team leader Valerie Wood of the University of Cambridge in the UK. Her team started by defining what is known or unknown. For instance, we might be able to tell that a protein is an enzyme from its sequence, but if we don’t know what reaction it catalyses its function cannot be said to be known. Wood compares it to taking a car to pieces – recognising that one piece is, say, a wire is not much help understanding what it’s for. When the team applied these criteria to yeast proteins, they found that the function of most of them was discovered in the 1990s. Progress slowed in the 2000s and plateaued in the 2010s with the function of a fifth still unknown. Next the team showed that the same proportion of human protein-coding genes remain a mystery. “There are 3000 human proteins whose function is unknown,” says Wood. The team did not look at the rate of progress for human proteins, but Wood thinks the situation is similar. There are two reasons why progress is grinding to a halt, she says. Firstly, a common way to find out what protein-coding genes do is to mutate them in animals such as mice and zebrafish to see what happens. The mystery proteins don’t show up in these screens, perhaps because they are involved in processes such as ageing whose effects are subtle. Secondly, funders are turning down applications to study these unknown proteins because of the risk of people spending years working on them without any results.
2-19-19 Ancient humans thrived in rainforests by hunting monkeys and squirrels
Dangerous animals, diseases and poor resources: three features of rainforests that have led many to believe that these environments were generally too inhospitable for ancient humans to live in or move through. New evidence for sophisticated monkey hunting dating back 45,000 years has shown that not only could our species live there, but it thrived as well. An international team of scientists analysed around 15,000 bone and tooth fragments from the Fa-Hien Lena Cave in Sri Lanka, thought to be the oldest archaeological site occupied by humans in the country. They found that these humans were able to survive by hunting small, quick, tree-dwelling animals, such as monkeys and giant squirrels. They did so almost continuously until around 4000 years ago. “We’re just now starting to see how flexible early humans were in terms of their behaviour,” says Michelle Langley of Griffith University, Brisbane, a bone and tooth tool analyst on the investigation. This hunting was sophisticated and sustainable, Langley says. “They were going for the biggest, healthiest monkeys,” she says. “The ones that have the most meat on them. “We’re fairly sure they’re not using traps or snares, because if you use these traps you don’t get to choose what exact animal gets caught in it.” Langley and her colleagues are still trying to determine how precisely the early hunters did it, but remains at the site include tiny bone points, which could have been used in bows and arrows, darts or as the tips of spears. These weren’t simply fragments of bone; they had been carefully shaped and styled to fit the purpose. One macaque canine tooth showed signs of having been shaped and then used as a cutting and stabbing tool.
2-19-19 Neolithic skull found by Thames 'mudlarkers'
Here's a piece of history pulled from the muddy banks of the River Thames. It's a skull fragment that is 5,600 years old. It dates to a time long before there was any permanent settlement on the site we now know as London. Investigations indicate it belonged to a male over the age of 18. There are older Neolithic remains that have been recovered in the region, but what makes this specimen especially interesting is that it's the earliest ever skull found by "mudlarkers". If you haven't heard of them before - they're the band of mostly amateur archaeologists who scour the Thames' edges at low tide for objects of intrigue and antiquity. And they're constantly picking up fascinating items - many of which end up in the Museum of London, where this frontal bone will now be displayed from Wednesday. "Mudlarkers are hugely knowledgeable," Dr Redfern told BBC News. "They understand where finds will emerge and what's of archaeological interest. And it's really great that they work with us so we can share what they discover, because very often they will turn up things that are very different to what we find elsewhere in the city." Mudlarking requires a permit from the Port of London Authority, and if human remains are identified, the police have to be informed. “Upon reports of a human skull fragment having been found along the Thames foreshore, detectives from South West CID attended the scene," explained DC Matt Morse of the Metropolitan Police. "Not knowing how old this fragment was, a full and thorough investigation took place, including further, detailed searches of the foreshore. The investigation culminated in the radiocarbon dating of the skull fragment, which revealed it to be likely belonging to the Neolithic Era." It's hard to imagine what London looked like before London. But the region was probably covered in extensive woodland, said Dr Redfern. The man, when he was alive, was very likely "a farmer rather than a hunter-gatherer", she added. "From the Neolithic onwards, there’s evidence for farmsteads, but no evidence for a large permanent settlement until after the Claudian conquest." Roman Emperor Claudius' troops invaded Britain in AD 43.
2-18-19 PTSD may one day be treated with a common blood pressure drug
A WIDELY used blood pressure medicine could help people overcome post-traumatic stress disorder (PTSD). The drug seems to make it easier for people to learn to stop being afraid of a past experience. It successfully helped people in a lab test lose a mild fear they had just developed. People can experience PTSD after a frightening event, such as an assault or car crash. It can be debilitating, and involves nightmares and flashbacks. Antidepressants and therapy that lets people remember what happened while in safe surroundings can both help, but neither works perfectly. A few years ago, researchers noticed that people who have experienced trauma tend to have fewer PTSD symptoms if they happen to be taking blood pressure medicines that block a hormone called angiotensin. This hormone is thought to bind to receptor proteins in parts of the brain involved in learning. When a drug that blocks angiotensin, called losartan, was tested in mice, the animals lost fears learned in the lab, such as a fear of sounds linked to receiving an electric shock, more quickly. Now Benjamin Becker at the University of Electronic Science and Technology of China in Chengdu and his colleagues have carried out a similar test in people. They trained 59 men to develop a mild fear by giving them a small electric shock whenever they were shown an image of a coloured square. Electrodes on the participants’ skin showed that they soon began sweating whenever they saw the square – a sign of fear or alertness. The volunteers were then given either losartan or a placebo tablet. After 90 minutes, they were shown the square again, this time without shocks, so they would unlearn their fear reaction. The men who received losartan were faster to stop sweating in response to the square than those given the placebo.
2-18-19 Brain cells combine place and taste to make food maps
These double-duty neurons, discovered in rats’ hippocampi, may help animals find food Sometimes a really good meal can make an evening unforgettable. A new study of rats, published online February 18 in the Journal of Neuroscience, may help explain why. A select group of nerve cells in rats’ brains holds information about both flavors and places, becoming active when the right taste hits the tongue when the rat is in a certain location. These double-duty cells could help animals overlay food locations onto their mental maps. Researchers implanted electrodes into the hippocampus, an area of the brain that is heavily involved in both memory formation and mapping. The rats then wandered around an enclosure, allowing researchers to identify “place cells” that become active only when the rat wandered into a certain spot. At the same time, researchers occasionally delivered one of four flavors (sweet, salty, bitter and plain water) via an implanted tube directly onto the wandering rats’ tongues. Some of the active place cells also responded to one or more flavors, but only when the rat was in the right spot within its enclosure. When the rat moved away from a place cell’s preferred spot, that cell no longer responded to the flavor, the researchers found. A mental map of the best spots for tasting something good would come in handy for an animal that needs to find its next meal.
2-18-19 This optical illusion breaks your brain for 15 milliseconds
Move your head towards these rings of dashed lines and the circles will appear to turn clockwise. Pull your head away and the motion reverses. This is the Pinna-Brelstaff illusion – and it has just been explained. It seems to be due to a communication delay between the regions of your brain that process vision. “It’s kind of like if you’re at a party where you’re listening to a voice amongst lots of noise,” says Ian Max Andolina at the Chinese Academy of Sciences in Shanghai. “The physical motion is like background noise and the illusion is the voice in the noise you have to pick out. It takes a little longer to do that.” Andolina and his colleagues trained two macaques to indicate whether they saw any rotation in images that were actually in motion. Then they showed them the Pinna-Brelstaff illusion, and found that the macaques perceive illusory motion similarly to nine human observers. Macaques were used because they have a very similar vision processing system to humans. The macaques in this experiment had electrodes in their brains, allowing the researchers to see exactly how they processed the optical illusion. The team found a 15-millisecond delay between the activity of neurons that perceive global motion – in this case the illusion that the entire set of lines is moving – and those that perceive local motion, in this case that there is actually no movement. Our brains probably have the same delay, which may seem like a flaw, says Andolina, but they are just being efficient. When we see something, our brain tries to quickly guess what it is. Normally, that guess is pretty accurate because the physical rules of our environment are usually consistent. Here, your brain is using a shortcut, substituting apparent motion for actual motion.
2-18-19 Stone Age Europe may have been home to no more than 1500 people
Stone Age Europe was a lonely place to live. An assessment of ancient population sizes suggests a vast swathe of western and central Europe may have been home to no more than 1500 people at any one time. Our species, Homo sapiens, arrived in Europe about 43,000 years ago. Archaeological evidence, particularly the appearance of distinctive stone tools at multiple sites, suggests these humans rapidly spread across the continent. But it’s an open question exactly how many people lived in Europe at this time. Now Isabell Schmidt and Andreas Zimmermann at the University of Cologne, Germany, have estimated the average population size in a period of European prehistory called the Aurignacian, between 42,000 and 33,000 years ago. The two researchers looked at a large chunk of Europe stretching from northern Spain in the west to Poland in the east. They plotted the location of the approximately 400 known Aurignacian sites across this area. This revealed that humans really occupied just 13 small regions of the continent – leaving most areas effectively uninhabited. To estimate how many hunter-gatherer groups lived in these 13 regions, Schmidt and Zimmermann looked more closely at the archaeological evidence, including how far stone material was likely transported to make tools at these sites. They argue that from the way the sites cluster, the 13 regions were home to no more than about 35 different hunter-gatherer groups. To get a sense for how many people lived in those 35 groups, the researchers used information about more recent hunter-gatherers recorded by explorers as they spread throughout the world in the past few centuries. Groups that most closely resembled the Aurignacians in terms of the animals they hunted contained about 42 individuals, on average.
2-17-19 The sixth mass extinction
The populations of the world’s wild animals have fallen by more than 50 percent, and humanity is to blame. (Webmaster's comment: If it takes us 100-200 years to kill off 75% or more of all species THAT IS A MASS EXTINCTION. 100-200 years was only a blink of the eye in previous extinctions! Mass extinction events do not happen overnight. It might take 100's of years for the full effect of an asteroid strike or a massive volcanic eruption to play out. So will human devastation of most animal life.)
- What’s gone wrong? As the human population has swelled to 7.5 billion, our species’ massive footprint on planet Earth has had a devastating impact on mammals, birds, reptiles, insects, and marine life. We’ve driven thousands of species to the edge of extinction through habitat loss, overhunting and overfishing, the introduction of invasive species into new ecosystems, toxic pollution, and climate change.
- How many species are already extinct? Scientists can only guess. Earth is home to between 9 million and 1 trillion species—and only a fraction have been discovered. Vertebrate species have, however, been closely studied, and at least 338 have gone extinct, with the number rising to 617 when one includes those species “extinct in the wild” and “possibly extinct.”
- How many species are endangered? There are 26,500 species threatened with extinction, according to the International Union for Conservation of Nature (IUCN), a global network of some 16,000 scientists. That includes 40 percent of amphibian species, 33 percent of reef-building corals, 25 percent of mammals, and 14 percent of birds. There are now only 7,000 cheetahs left, and the number of African lions is down 43 percent since 1993.
- Is a mass extinction underway? Possibly. Many scientists now believe humans are living through a “mass extinction,” or an epoch during which at least 75 percent of all species vanish from the planet. The previous five mass extinctions occurred over the past 450 million years; the last one occurred about 66 million years ago, when the aftermath of a massive asteroid strike wiped out the dinosaurs.
- How fast is this happening? Extremely fast. Species extinction is an ordinary part of the natural processes of our planet; in fact, 99 percent of all species that ever lived on Earth are gone. It’s the pace of recent extinctions that is alarming. More than half of the vertebrate extinctions since 1500 have occurred since 1900.
- What are the consequences? Potentially enormous. The loss of species can have catastrophic effects on the food chain on which humanity depends. Ocean reefs, which sustain more than 25 percent of marine life, have declined by 50 percent already—and could be lost altogether by 2050. Insects pollinate crops humans eat.
- Can extinct species be resurrected? Using DNA technology, scientists are working on re-creating species that have disappeared. The technology, called “de-extinction,” is likely at least a decade off, although there are a few possible ways to go about it.
2-17-19 Tooth plaque shows drinking milk goes back 3,000 years in Mongolia
Milk proteins preserved in tartar show that ancient Mongolians drank cow, yak and sheep milk. Ancient people living in what’s now Mongolia drank milk from cows, yaks and sheep — even though, as adults, they couldn’t digest lactose. That finding comes from the humblest of sources: ancient dental plaque. Modern Mongolians are big on dairy, milking seven different animal species, including cows, yaks and camels. But how far into the past that dairying tradition extends is difficult to glean from the usual archaeological evidence: Nomadic lifestyles mean no kitchen trash heaps preserving ancient pots with lingering traces of milk fats. So molecular anthropologist Christina Warinner and her colleagues turned to the skeletons found in 22 burial mounds belonging to the Deer Stone culture, a people who lived in Mongolia’s eastern steppes around 1300 B.C. The hardened dental plaque, or tartar, on the teeth of the skeletons contained traces of milk proteins, Warinner, of the Max Planck Institute for the Science of Human History in Jena, Germany, said February 16 at the annual meeting of the American Association for the Advancement of Science. Those proteins showed that the people drank milk from cows, yaks, goats and sheep, but not from camels or reindeer, which Mongolians milk today. Ancient Mongolians’ DNA also revealed that they weren’t able to digest lactose as adults. Instead, the Deer Stone people, like modern Mongolians, may have relied on bacteria within the gut, known as the gut microbiome, to break down the lactose, Warinner said.
2-17-19 Your phone and shoes are home to completely unknown life forms
Anyone hoping to discover a new species may only need to look as far as the soles of their shoes or the phone in their pocket. A study of 3500 swab samples taken from people’s shoes and phones has found nine unstudied branches of bacterial life. The samples were taken by Jonathan Eisen, of the University of California, Davis, and his team from members of the public attending sporting events, museums, and educational events in the US. When they sequenced and analysed the DNA of the bacteria in each sample, they found that 35 different phyla of bacteria were present. Phyla are large branches of the family tree of life, and are subdivisions of the larger kingdoms, which include bacteria, plants or animals. According to official nomenclature lists, there are only 39 phyla of prokaryotic organisms – those that have small, bacteria-type cells that lack a true nucleus. But the team found nine possible additional phyla living on shoes and phones, suggesting that there is a vast variety of bacterial life that we know almost nothing about. “We have only scratched the surface of understanding microbial diversity, even right in front of us,” says Eisen. The team found that 10 per cent of the samples they took contained DNA from bacteria belonging to such so-called microbial dark matter – organisms that we know little about because they are difficult to grow and study in the lab. The samples also contained bacteria belonging to extremely rare groups, such as Edwardsbactera, first discovered in an underground water aquifer, and Diapherotrites, which was previously found in water seeping underground in an abandoned goldmine.
2-16-19 Why some Georgia O’Keeffe paintings have ‘art acne’
A new imaging technique could help art curators track destructive bumps over time. Like pubescent children, the oil paintings of Georgia O’Keeffe have been breaking out with “acne” as they age, and now scientists know why. Tiny blisters, which can cause paint to crack and flake off like dry skin, were first spotted forming on the artist’s paintings years ago. O’Keeffe, a key figure in the development of American modern art, herself had noticed these knobs, which at first were dismissed as sand grains kicked up from the artist’s New Mexico desert home and lodged in the oil paint. Now researchers have identified the true culprit: metal soaps that result from chemical reactions in the paint. The team has also developed a 3-D image capturing computer program, described February 17 at the annual meeting of the American Association for the Advancement of Science, to help art conservators detect and track these growing “ailments” using only a cell phone or tablet. O’Keeffe’s works aren’t the first to develop such blisters. Metal soaps, which look a bit like white, microscopic insect eggs, form beneath the surfaces of around 70 percent of all oil paintings, including works by Rembrandt, Francisco de Goya and Vincent van Gogh. “It’s not an unusual phenomenon,” says Marc Walton, a materials scientist at Northwestern University in Evanston, Ill. Scientists in the late 1990s determined that these soaps form when oil paint’s negatively charged fats, which hold the paint’s colored pigments together, react with positively charged metal ions, such as zinc and lead, in the paint. This reaction creates liquid crystals that slowly aggregate beneath a painting’s surface, causing paint layers on the surface to gradually bulge, tear and eventually flake off.
2-15-19 Vaping can help smokers quit—at a cost
A major new study has found that e-cigarettes can help smokers quit—good news, because smoking causes nearly 6 million deaths a year, including 480,000 in the U.S. The bad news, reports NPR.com, is that many people who use e-cigarettes to stop smoking end up hooked on vaping, which carries its own health risks. For the study, British researchers recruited 886 smokers who wanted to quit and split them into two groups. The first received nicotine gum, inhalers, and other standard replacement treatments; the second were given e-cigarettes. Both groups also received a month of weekly one-on-one counseling sessions. After a year, 18 percent of the e-cigarette group had quit, compared with 10 percent of those using traditional therapies. “Anything that helps smokers avoid heart disease and cancer and lung disease is a good thing,” said lead researcher Peter Hajek, from Queen Mary University of London, “and e-cigarettes can do that.” However, 80 percent of the quitters in the e-cigarette group were still vaping at the one-year mark; only 9 percent of the quitters in the traditional nicotine replacement group were still using those products. Continued e-cigarette use worries some scientists because of growing evidence of vaping’s harmful health effects. A U.S. study unveiled last week found that compared with nonusers, people who vape have a higher risk of stroke (by 71 percent), heart attack (59 percent), and heart disease (40 percent).
2-15-19 Kids using too much toothpaste
Parents are putting an unhealthy amount of toothpaste on their kids’ brushes, the Centers for Disease Control and Prevention has warned. The CDC and the American Dental Association advise that children ages 3 to 6 use no more than a pea-size amount of fluoride paste, to prevent the youngsters from swallowing large amounts while brushing. While fluoride helps prevent cavities—which is why it’s added to toothpaste and tap water—it can also damage and discolor children’s teeth when consumed in excess. But in a CDC study of more than 5,000 kids ages 3 to 15, only 49 percent of the 3-to-6 cohort brushed with the recommended pea-size dollop of paste, and more than 38 percent coated either half or all of the brush. Jonathan Shenkin, a spokesman for the ADA, tells The New York Times that parents should keep buying fluoride toothpaste, “but use it in the proper quantity so your children don’t swallow too much.” The study also found that nearly 80 percent of kids started brushing later than recommended; the CDC says they should begin the moment their first tooth comes through.
2-15-19 Breakfast and weight loss
“Eat breakfast like a king, lunch like a prince, and dinner like a pauper,” goes the old adage. Yet new research suggests the first meal of the day isn’t as important as many believe—throwing into question the widely held belief that eating breakfast promotes weight loss by “jump-starting the metabolism.” Australian researchers analyzed 13 previous studies relating to breakfast, weight, and calorie intake in the U.S. and other high-income countries. They found that those who ate breakfast actually consumed 260 more calories on average than those who didn’t, reports Vox.com, and were nearly a pound heavier. They saw no significant difference in metabolic rates between breakfast eaters and abstainers. The analysis has its flaws: The trials examined ran for a maximum of only 16 weeks, and mostly didn’t factor in the types of food eaten by participants. The authors also acknowledge that breakfast is beneficial for children. But for adults, they conclude, there’s “no evidence to support the notion that breakfast consumption promotes weight loss.”
2-15-19 A dialect quiz shows we still cling to our regional identities
What do you call your grandmother? Do the words but and put rhyme? Would you eat a bread roll, a bap, a bun or a cob? If you grew up in the UK or Ireland, an online quiz by The New York Times will try to pinpoint where by collecting your answers to 25 questions like these. For a small sample of New Scientist journalists, the quiz proved shockingly accurate. “There are a lot of distinct dialects in the UK for a small land mass,” says Laurel MacKenzie, a linguist at New York University. Dialects develop when groups are isolated from one another, which has been the case for most of the thousands of years in which people have lived in the UK and Ireland. Although it is now very easy to travel, dialects stick around because they are a matter of local pride and identity. “People hold on to their traditional ways of speaking because that’s who they are,” says MacKenzie. Her favourite dialect words are “barm” – a bread roll in Manchester – and “mither”, which means bother in north-west England. These differences aren’t just reflected in society. By studying how people speak using ultrasound, researchers have learned that people move their tongues in different ways, even when they are making the same sound. “There’s so much variation in language and so much of it is under the surface,” says MacKenzie. One important factor that isn’t taken into account by the New York Times quiz is class. Regardless of where you are, people higher on the social class ladder tend to sound the same, but lower down the ladder, you hear a lot more regional variation. That’s also true of other countries.
2-15-19 STEM professors’ beliefs on intelligence may widen the racial achievement gap
Racial minorities can suffer lower grades if their teachers see intelligence as fixed. Beliefs among some university professors that intelligence is fixed, rather than capable of growth, contribute to a racial achievement gap in STEM courses, a new study suggests. Those professors may subtly communicate stereotypes about blacks, Hispanics and Native Americans allegedly being less intelligent than Asians and whites, say psychologist Elizabeth Canning of Indiana University in Bloomington and her colleagues. In turn, black, Hispanic and Native American undergraduates may respond by becoming less academically motivated and more anxious about their studies, leading to lower grades. Even small dips in STEM grades — especially for students near pass/fail cutoffs — can accumulate across the 15 or more science, technology, engineering and math classes needed to become a physician or an engineer, Canning says. That could jeopardize access to financial aid and acceptance to graduate programs. “Our work suggests that academic benefits could accrue over time if all students, and particularly underrepresented minority students, took STEM classes with faculty who endorse a growth mind-set,” Canning says. Underrepresented minority students’ reactions to professors with fixed or flexible beliefs about intelligence have yet to be studied. But over a two-year period, the disparity in grade point averages separating Asian and white STEM students from black, Hispanic and Native American peers was nearly twice as large in courses taught by professors who regarded intelligence as set in stone, versus malleable, Canning’s team reports online February 15 in Science Advances.
2-15-19 Meet the man who made CRISPR monkey clones to study depression
Hung-Chun Chang hopes his work will lead to new treatments for depression and schizophrenia. One year after the birth of the world’s first two cloned primates, a team in China has used CRISPR gene editing and cloning to create monkeys that show some symptoms of depression and schizophrenia. While some researchers have praised the work’s potential for helping us understand psychiatric disorders in humans, others have raised ethical concerns. Lead scientist Hung-Chun Chang, of the Institute of Neuroscience in Shanghai, told New Scientist about how he hopes the monkeys will help us better understand mental health and find new treatments. (Webmaster's comment: Note that the cutting-edge research is being done in CHINA!)
- How did you create these monkeys? We are working on the BMAL1 gene, which affects how our body responds to the day-night cycle.
- What symptoms do these monkeys have? The most direct result is that they are not getting enough sleep.
- How can you know that these aren’t just symptoms of sleep deprivation? It’s impossible to separate the effects of sleep deprivation on the monkeys’ mental state from the effects of their genetic mutation.
- What are you hoping to learn from this work? We will use these monkeys for drug testing.
- Couldn’t this research be done in mice or people? Monkeys have an identical body clock to humans.
- Is it ethical to genetically engineer monkeys to be depressed? Gene editing in cynomolgus monkeys, the species we used here, is permitted worldwide.
- What else is your team working on? We are trying to create an Alzheimer’s model.
2-14-19 A gut bacteria toxin that damages DNA may be involved in bowel cancer
Gut bacteria could be to blame for bowel cancer. People with the condition often have higher levels of certain strains of Escherichia coli in their digestive systems. Now, a toxin produced by the bacteria has been shown to damage DNA in gut cells – possibly the first step towards turning cancerous. While some strains of E. coli can cause food poisoning, others are more friendly and form part of the bacterial community in a healthy gut. Previous studies have found about 20 per cent of E. coli strains produce a DNA-damaging toxin called colibactin. People with inflammatory bowel disease and bowel cancers often have elevated levels of these strains in their digestive systems. Aiming to find out what colibactin does to our body, Emily Balskus at Harvard University in Massachusetts and her colleagues injected colibactin-producing E. coli into human gut cells. They found the toxin severely damaged the cells’ DNA after 16 minutes. Cells injected with non-colibactin-producing E. coli didn’t experience such changes. When the team repeated the experiment in mice, they found the same result in their colon cells. “It’s the first time we see evidence that colibactin directly damages DNA in cells and mice,” says Balskus. They’ve yet to investigate if this damage will turn cancerous, “but in other settings, such as tobacco products, there is good evidence that [DNA destruction] is carcinogenic,” says Balskus. It’s not known when or why some E. coli produce colibactin. Many people have E. coli capable of producing colibactin in their gut, but appear to be completely healthy. “We don’t know what that means,” says Balskus.
2-14-19 Chemicals 'repair damaged neurons in mice'
New results suggest ageing brains can potentially be rejuvenated, at least in mice, according to researchers. Very early-stage experiments indicate that drugs can be developed to stop or even reverse mental decline. The results were presented at the 2019 meeting of the American Association for the Advancement of Science. The US and Canadian researchers took two new approaches to trying to prevent the loss of memory and cognitive decline that can come with old age. One team, from the University of California, Berkeley, showed MRI scans which indicated that mental decline may be caused by molecules leaking into the brain. Blood vessels in the brain are different from those in other parts of the body. They protect the organ by allowing only nutrients, oxygen and some drugs to flow through into the brain, but block larger, potentially damaging molecules. This is known as the blood-brain barrier. The scans revealed that this barrier becomes increasingly leaky as we get older. For example, 30-40% of people in their 40s have some disruption to their blood-brain barrier, compared with 60% of 60-year-olds. The scans also showed that the brain was inflamed in the leaky areas. Prof Daniela Kaufer, who leads the Berkeley group, said that young mice altered to have leaky blood-brain barriers showed many signs of aging. She discovered a chemical that stops the damage to the barrier from causing inflammation to the brain. Prof Kaufer told BBC News that not only did the chemical stop the genetically altered young mice from showing signs of aging, it reversed the signs of aging in older mice. "When you think of brain aging you think about the degeneration of cells and losing what we have," she said. "What these results show is that you are not losing anything. The cells are still there and they just needed to be 'unmasked' by reducing the inflammation."
2-14-19 CRISPR could help us protect ourselves from viruses like flu and HIV
CRISPR gene editing could let us hack the immune system to give lasting protection against HIV and other infections. Experiments in mice suggest that the technique could be used to give people immunity from a range of viruses for which there are no effective vaccines. Justin Taylor, at the Fred Hutchinson Cancer Research Center in Seattle, Washington, and colleagues used the CRISPR technique on B cells. These white blood cells are part of our natural immune system and secrete antibody proteins that attack particular bacteria and viruses. While effective against many diseases, these protective antibodies don’t work as well as needed against some viruses. This is one of the reasons researchers have struggled to develop vaccines against some of the most lethal infections. To get around this problem, better versions of some antibodies can be created artificially and then given to patients. For example, palivizumab can be made in the lab and is very effective against the respiratory syncytial virus (RSV), which infects the respiratory tract and is a serious threat to infants and older people. Injections of palivizumab are used to treat RSV infections in these high-risk groups, but the antibody breaks down quickly, meaning these expensive injections have to be repeated every month. Taylor and his colleagues hope that editing the DNA of B cells to produce better antibodies and then injecting them back into the body could lead to a steady supply of new antibodies, without the need for repeat injections. If it works, this could provide immunity against certain pathogens. The team tested their idea by giving B cells from mice the genetic instructions to make palivizumab for themselves. They found that a single injection of these cells protected 15 mice from the virus for up to 82 days.
2-14-19 Can teenagers get vaccinated without their parents’ permission?
Measles outbreaks are spreading in two neighbouring US states, Washington and Oregon, with the former declaring a public health emergency. These states are among 17 that have laws allowing parents to opt out of vaccinating their children on the basis of personal beliefs. The latest outbreak has seen teenagers turning to social media to ask how they can get vaccinated against their parents’ wishes. Legally, it is a difficult question, because children can’t necessarily make their own medical decisions. Regulations vary from state to state, but in general, some minors can access certain treatments without parental consent. Vaccines are not always specified on this list, but in some states the law is vague enough that a minor could potentially have a legal right to a vaccination. In Oregon, anyone 15 or older can get hospital care, dental and vision services, and immunisations without parental permission. In Washington, minors can receive immunisations without their parents’ consent if their doctor determines they are a “mature minor”, which takes into account their age, ability to understand the treatment and self-sufficiency, although they need not be legally independent to qualify. Other states allow even younger children to access some vaccines. In California, 12-year-olds can consent to medical treatment for sexually transmitted infections (STIs). These include the vaccine for human papillomavirus (HPV), which has become a target for anti-vaccination campaigners. “There were lots of claims about things that are bad about the HPV vaccine, which really aren’t founded in any scientific evidence. That created a lot of mistrust among parents,” says Claudia Borzutzky, a physician in the adolescent medicine clinic at Children’s Hospital Los Angeles. Californian minors can also consent to the hepatitis B vaccine. “We don’t have the same resistance to hepatitis B as we do with other vaccines, which is mysterious to me because all our vaccines have the same efficacy. People forget it’s an STI,” says Borzutzky. Almost every state allows minors to consent to medical care related to reproductive health – birth control, pregnancy testing, abortion – and drug and alcohol abuse services. Some states also let minors access mental-health services and sexual-assault treatment without a parent’s permission.
2-14-19 Find tonic water bitter? Part of your brain may be on the small side
Here comes a taste test with a surprising outcome. A study involving 1600 people found the volume of a particular brain region is inversely linked to how bitter people find tonic water. Before anyone thought to mix it with gin, tonic water was developed as a treatment for malaria. It contains a medicinal substance called quinine, which gives tonic water its bitterness. Previous studies have found that individuals perceive the strength of bitter flavours like quinine differently depending on their genes. Daniel Hwang at the University of Queensland in Australia and his colleagues wondered if brain size also plays a role. They collected brain scans of 1600 volunteers, who were also asked to rate the bitterness of a quinine solution. The team found that the size of a brain region – the left entorhinal cortex in the temporal lobe – is associated with how intensely someone perceives bitterness. Those who found the drink less bitter tended to have a bigger left entorhinal cortex. The entorhinal cortex has previously been linked to our sense of smell, but it is unclear how its volume is associated with taste perception. “There are various possibilities – a smaller volume may result in a shorter time for the taste signal to transfer across the brain,” says Hwang. It is still unclear whether the volume of certain brain regions affects our perception of other food and drinks. “This is the first study relating volumetric differences and taste, and our findings warrant future research on not only bitter, but also sweet and even salty, sour and umami taste responses,” says Hwang. Those who find tonic water to be extremely bitter shouldn’t assume that they have a smaller left entorhinal cortex, though. Hwang says genetics probably also plays a role in determining our taste sensation.
2-14-19 Offspring from older sperm are fitter and age more slowly
Sperm with stamina sire the healthiest, longest-lived offspring, at least in zebrafish. The finding challenges the prevailing orthodoxy about what determines the physical traits of sperm, which could have important evolutionary implications. It also suggests that the methods fertility clinics use to select sperm – which instead favour the sprinters – could be improved. “I definitely do think this is relevant,” says team leader Simone Immler at the University of East Anglia in the UK. “We miss out on a lot of steps during artificial fertilisation technologies.” Half of zebrafish sperm stop swimming just 25 seconds after entering water, although some fare better and survive for about 1 minute. To see if there was any difference between these short-lived and relatively longer-lived sperm, Immler’s team split zebrafish ejaculate into two parts. One part was mixed with both eggs and water. With the other part, the eggs were added 25 seconds after the water, meaning that only the longer-lived sperm had a chance of fertilising them. The results were striking. The offspring sired by longer-lived sperm were fitter, says Immler. “They not only reproduced more throughout life, they also lived longer.” However, the effects were less pronounced in female offspring than male ones. Allowing only longer-surviving sperm to fertilise eggs might act as a form of quality control, weeding out sperm with harmful mutations, says Immler. Surprisingly, this challenges conventional wisdom. The stem cells that give rise to sperm have two slightly different copies of the genome, but each sperm carries just one copy, containing a mix of the parental genomes. The standard assumption has been that a sperm’s performance reflects the father’s full genome rather than the particular copy it happens to carry, so individual sperm were not expected to differ in quality in ways that matter for offspring.
2-13-19 Breast pumps may introduce harmful bacteria to babies’ gut microbiome
Using a breast pump may introduce babies to the “wrong” kind of bacteria, and perhaps increase their risk of childhood asthma. Shirin Moossavi at the University of Manitoba, Canada, and colleagues found milk from pumps contained higher levels of potentially harmful microbes than milk straight from the breast. “Increased exposure to potential pathogens in breast milk could pose a risk of respiratory infection in the infant,” says Moossavi. This might explain why infants fed pumped milk are at increased risk for paediatric asthma compared with those fed exclusively at the breast, she says. Exactly how bacteria become established in the infant gut is unclear. Microbes from the mother carried in breast milk are one probable route, but so is the transfer of bacteria from the mouth of a suckling baby. Breast pumps offer a third, artificial pathway – one that can potentially transmit a range of environmental bacteria to the baby. For the study, the researchers looked for bacterial genes in breast milk samples from 393 healthy mothers three to four months after giving birth. The team found the bacterial content of milk being fed to the mothers’ babies differed greatly from infant to infant. Milk administered from breast pumps contained higher levels of potentially harmful “opportunistic pathogens”, such as those from the genus Stenotrophomonas and family Pseudomonadaceae. In contrast, direct breastfeeding without a pump was associated with microbes typically found in the mouth, as well as greater bacterial richness and diversity. This suggests infant mouth microbes play an important role in determining what kind of bacteria are found in mothers’ milk.
2-13-19 Slow sperm may fail at crashing ‘gates’ on their way to an egg
Narrow spots in the female reproductive tract could help weed out less desirable suitors. The female reproductive tract is an obstacle course that favors agile sperm. Narrow straits in parts of the tract act like gates, helping prevent slower-swimming sperm from ever reaching an egg, a study suggests. Using a device that mimics the tract’s variable width, researchers studied sperm behavior at a narrow point, where the sex cells faced strong head-on currents of fluid. The faster, stronger swimmers started moving along a butterfly-shaped path, keeping them close to the narrow point and upping the chances of making it through. Meanwhile, slower, weaker swimmers were swept away, the team reports online February 13 in Science Advances. “Narrow junctions of the tract may act as barriers” to poor swimmers, says coauthor Alireza Abbaspourrad, a biophysicist at Cornell University. The results suggest that this is a way that females select the healthiest sperm, he says. Sperm travel through the reproductive tract — the vagina, cervix, uterus and fallopian tubes — by swimming upstream against fluid flowing through the tract, which moves at different speeds along the way. Previous studies have shown that sperm tend to follow the walls of the tract to “steer” toward the egg, but haven’t investigated what effect the narrow spots might have on the trek. Through computer simulations and tests of sperm, Abbaspourrad and colleagues found that the fastest sperm, when stopped by the current at a narrow point, could make it back to a wall, swim along it and try again. Repeated, this movement resulted in a butterfly-shaped pattern. Whether a swimmer could get through ultimately depended on the speed of the fluid through the narrow point and the speed of a sperm.
2-13-19 How humans evolved to be both shockingly violent and super-cooperative
The origins of our paradoxical nature lie in murder and self-domestication. It's a weird story that may even explain why our species came into existence. ARE humans, by nature, good or evil? The question has split opinions since people began philosophising. Some, like the followers of Jean-Jacques Rousseau, say we are a naturally peaceful species corrupted by society. Others side with Thomas Hobbes and see us as a naturally violent species civilised by society. Both perspectives make sense. To say that we are both “naturally peaceful” and “naturally violent” seems contradictory, however. This is the paradox at the heart of my new book. The paradox is resolved if we recognise that human nature is a chimera. The chimera, in classical mythology, was a creature with the body of a goat and the head of a lion. It was neither one thing nor the other: it was both. I argue that, with respect to aggression, a human is both a goat and a lion. We have a low propensity for impulsive aggression, and a high propensity for premeditated aggression. This solution makes both Rousseauians and Hobbesians partially right, but it raises a deeper question: why did such an unusual combination of virtue and violence evolve? The story of how our species came to possess this unique mixture hasn’t been told before, and offers a rich and fresh perspective on the evolution of our behavioural and moral tendencies. It also addresses the fascinating but surprisingly neglected question of how and why our species, Homo sapiens, came into existence at all. Since the 1960s, efforts to understand the biology of aggression have converged on an important idea. Aggression – meaning behaviour intended to cause physical or mental harm – falls into two major types, so distinct in their function and biology that from an evolutionary viewpoint they need to be considered separately. I use the terms “proactive” and “reactive” aggression, but many other word pairs describe the same dichotomy, including cold and hot, offensive and defensive, premeditated and impulsive. To judge from other relevant animals, a high level of proactive aggression is normally associated with high reactive aggression. The common chimpanzee is the primate species that most often uses proactive aggression to kill its own kind, and it also has a high rate of reactive aggression within communities. The wolf’s proactive aggression against members of its own species is often lethal. As with chimpanzees, although relationships within wolf groups are generally benign and cooperative, they are far more emotionally reactive than dogs are. Lions and hyenas are also wolf-like in these respects.
2-13-19 Smart skin sticker could detect asthma attacks before they happen
A smart sticker that could alert people with asthma to an impending attack has been made using a children’s toy. The device is made using Shrinky Dinks – plastic sheets that shrink to a fraction of their original size when heated. They are popular among children because they can be coloured and cut into shapes before shrinking. The Shrinky Dinks are used to shrink ultrathin metal sheets into stretch-detecting sensors that wirelessly transmit breathing data to a smartphone. The hope is that this data could be analysed to detect subtle changes in breathing rate that may be early signs of a worsening condition, or track improvements following medical treatment. It could be a useful tool for monitoring people with chronic lung conditions, such as asthma and cystic fibrosis, says Michelle Khine at the University of California, Irvine, who led the team. People will use the device by sticking it to their lower ribs. The device monitors changes in electrical resistance as it stretches and retracts on the skin. When the wearer is still, the sensor’s measurements are as good as a medical-grade spirometer – a machine that measures lung volume from how much a person breathes out in a forced breath, says Michael Chu, one of the team. Spirometry is still the most accurate approach, but the new method has the advantage of continuous monitoring over time. Currently, the sensor becomes less accurate when the wearer is very active, for example if they are running. Khine says the next step is to use the device to try to predict asthma attacks before they happen.
2-11-19 AI can diagnose childhood illnesses better than some doctors
Diagnosing an illness requires taking in a lot of information and connecting the dots. Artificial intelligence may be well-suited to such a task, and in recent tests one system could diagnose children’s illnesses better than some doctors. Kang Zhang at the University of California in San Diego and his colleagues trained an AI on medical records from 1.3 million patient visits at a major medical centre in Guangzhou, China. The patients were all under 18 years old and visited their doctor between January 2016 and January 2017. Their medical charts include text written by doctors and laboratory test results. To help the AI, Zhang and his team had human doctors annotate medical records to identify portions of text associated with the child’s complaint, their history of illness, and laboratory tests. When tested on previously unseen cases, the AI could diagnose glandular fever (also known as mononucleosis), roseola, influenza, chicken pox and hand-foot-mouth disease with between 90 and 97 per cent accuracy. It’s not perfect, but neither are human doctors, says Zhang. “When you’re busy you can see 80 patients a day. And you can only grasp so much information. That’s where we potentially as human physicians might make mistakes. AI doesn’t have to sleep, it has a large memory and doesn’t lose energy,” he says. The team compared the model’s accuracy to that of 20 paediatricians with varying years of experience. It outperformed the junior paediatricians, though the senior ones did better than the AI. The AI could be used to triage patients in emergency departments. “Given sufficient data, AI should be able to tell if this is an urgent situation and needs referral or if it’s a cold,” says Zhang.
2-11-19 Congo’s Ebola outbreak is a testing ground for new treatments
The four different drugs include three antibody treatments and one antiviral. Amid the second largest Ebola outbreak ever, the hunt for a lifesaving treatment is on. A clinical trial now taking place in Congo is gathering evidence on experimental therapies, to provide a proven option when the deadly virus inevitably emerges again. The first multidrug clinical trial of Ebola therapies, which began enrolling patients in November, will compare the effectiveness of three different antibody treatments and one antiviral drug. One therapy tested briefly during the 2014–2016 outbreak in West Africa, the largest ever, has already shown promise. With the trial data, though, “we’ll be able to say, ideally, that this drug or that drug actually does work, not just we think or hope it does work,” says Richard Davey, one of the principal trial investigators and the deputy clinical director at the U.S. National Institute of Allergy and Infectious Diseases in Bethesda, Md. Ebola virus causes severe illness, including fever, vomiting, diarrhea and bleeding. Death rates range from 25 to 90 percent, depending on the outbreak. During Congo’s current outbreak — the country’s 10th and its largest since Ebola was discovered within its borders in 1976 — about 63 percent of those infected have died, or 510 out of the 811 cases reported as of February 9. Stopping the outbreak, which began August 1, has been difficult due to security risks and armed conflict in the region, as well as public mistrust of the medical response, the World Health Organization says.
2-11-19 Sailors spread the ancient fashion for monuments like Stonehenge
Thousands of ancient stone structures, such as Stonehenge, are found throughout Europe. Now a long-standing puzzle of where the practice originated and how it spread has been solved. Over the last century there have been two main views on the origins of the stone structures, known as megaliths. One was that they started from a single source then spread over sea routes. The other was that megalith construction developed independently in different locations. To find out which was correct, Bettina Schulz Paulsson of the University of Gothenburg, Sweden, and colleagues analysed the dates from over 2000 megaliths in Europe. They used statistical methods to narrow down previous estimations and get a better picture of where the megaliths were built and in what order. The team found that megalith construction started in a single location in northwest France over a period of 200-300 years around 4500 BC. The tradition then spread through Europe over a span of 2,000 years along the sea routes of the Mediterranean and Atlantic coasts, concentrated in coastal regions. The pattern of how the megaliths spread over time also hints that societies developed sophisticated sea-faring technology far earlier than previously thought. “They were moving over the seaway, taking long distance journeys along the coasts,” says Schulz Paulsson. This fits with other research she has carried out on megalithic art in Brittany, which shows engravings of many boats, some large enough for a crew of 12. The previous view was that large boats capable of travelling long distances were only developed in the Bronze Age, some 2,000 years later.
2-11-19 The spread of Europe’s giant stone monuments may trace back to one region
Ancient sea travelers carried the knowledge of how to build megaliths from France. From simple rock arches to Stonehenge, tens of thousands of imposing stone structures dot Europe’s landscapes. The origins of these megaliths have long been controversial. A new study suggests that large rock constructions first appeared in France and spread across Europe in three waves. The earliest megaliths were built in what’s now northwestern France as early as around 6,800 years ago, says archaeologist Bettina Schulz Paulsson of the University of Gothenburg in Sweden. Knowledge of these stone constructions then spread by sea to societies along Europe’s Atlantic and Mediterranean coasts, she contends in a study posted online the week of February 11 in the Proceedings of the National Academy of Sciences. “European megaliths were products of mobile, long-distance sea travelers,” Schulz Paulsson says. Around 35,000 megalithic graves, standing stones, stone circles and stone buildings or temples still exist, many located near coastlines. Radiocarbon dating has suggested that these structures were built between roughly 6,500 and 4,500 years ago. Scholars a century ago thought that megaliths originated in the Near East or the Mediterranean area and spread elsewhere via sea trading or land migrations by believers in a megalithic religion. But as absolute dates for archaeological sites began to emerge in the 1970s, several researchers argued that megaliths emerged independently among a handful of European farming communities.
2-11-19 Controversial fossils suggest life began to move 2.1 billion years ago
Burrow-like structures several millimetres in diameter have been found in 2.1-billion-year-old rocks in Gabon, Africa. The structures were made by a moving lifeform of some kind, claim geologist Abderrazak El Albani at the University of Poitiers in France and his team. The team do not know what made the trace fossils, but they speculate that it could be something similar to colonial amoeba or slime moulds – organisms made of cells that normally live separately. The trace fossils were found near bacterial mats that the mysterious lifeforms may have been feeding on. “It’s truly amazing,” says El Albani. Previously, the earliest evidence of moving lifeforms was just half a billion years old. There are burrows and tiny footprints in rocks of this age, probably left by small animals. The 2.1-billion-year-old burrows are very unlikely to have been produced by organisms as complex as animals, which probably appeared only between about 850 and 650 million years ago. In fact, it’s not even clear that organisms as complex as amoeba were around 2.1 billion years ago: they are eukaryotes, and the oldest eukaryotic fossils found so far are about 1.7 billion years old. So if El Albani’s interpretation is correct, these finds challenge the conventional story of life’s evolution. On the other hand, it’s clear that multicellularity evolved on numerous different occasions. There are even multicellular organisms composed of simple – prokaryotic – cells. And lab experiments suggest it’s relatively easy for cells to evolve multicellularity. In 2010, El Albani reported finding what his team think are fossils of multicellular organisms in the same sedimentary rocks in the Franceville basin in Gabon, which formed in a warm, shallow ocean 2.1 billion years ago. “It’s a unique place in the world, where we have this preservation of the rocks,” he says. Most rocks of this age have been metamorphosed by extreme heat and pressure. Since then, his team have continued to make field trips and have collected more than 500 specimens – now including the apparent trace fossils. These organisms lived at a time when oxygen levels were relatively high, says El Albani. Shortly afterwards, oxygen levels plummeted and remained low for a billion years – the “Boring Billion”. So El Albani thinks complex lifeforms started to evolve much earlier than thought, but were then killed off. “These organisms disappeared,” he says.
2-11-19 A rare, ancient case of bone cancer has been found in a turtle ancestor
An extinct ancestor of modern turtles called Pappochelys rosinae had bone cancer, the oldest known case in an amniote, a group that includes mammals, birds and reptiles. A 240-million-year-old case of bone cancer has turned up in a fossil of an extinct ancestor of turtles. Dating to the Triassic Period, the fossil is the oldest known example of this cancer in an amniote, a group that includes mammals, birds and reptiles, researchers report online February 7 in JAMA Oncology. The fossilized left femur from the shell-less stem-turtle Pappochelys rosinae was recovered in southwestern Germany in 2013. A growth on the leg bone prompted a team of paleontologists and physicians to analyze the fossil with a micro CT scan, an imaging technique that provides a detailed, three-dimensional view inside an object. “When we saw that this was not a break or an infection, we started looking at other growth-causing diseases,” says Yara Haridy, a paleontologist at the Museum für Naturkunde in Berlin. The verdict? Periosteal osteosarcoma, a malignant bone tumor. “It looks almost exactly like human periosteal osteosarcoma,” Haridy says. “It is almost obvious that ancient animals would have cancer, but it is so very rare that we find evidence of it,” she says. The discovery of this tumor from the Triassic offers evidence that cancer is “a vulnerability to mutation deeply rooted in our DNA.”
2-10-19 How we evolved to love horror movies
Being terrified might actually have its benefits. Bird Box was the first breakout film of the new year. The post-apocalyptic thriller was watched by more than 26 million people over its first week on Netflix, a record for the streaming service. It comes on the heels of last year's A Quiet Place, which was one of the top-grossing films of 2018. Movies such as these have a single mission: To terrify their viewers. But why do so many people choose to spend two hours in perpetual fear? New research provides a clear answer: We are evolutionarily wired to seek out such material. A research team led by Mathias Clasen of Denmark's Aarhus University argues horror movies, novels, and video games fall into the category of "benign masochism." "Horror movies tend to imaginatively transport consumers into fictional universes that brim with dangers," the researchers write. "Through such imaginative absorption, people get to experience strong, predominantly negative emotions within a safe context. This experience serves as a way of preparing for real-world threat situations." The study, in the journal Evolutionary Behavioral Sciences, featured 1,070 Americans recruited online. All completed surveys designed to measure their personality traits, propensity for sensation-seeking, and belief in paranormal phenomena. They also reported how much they enjoy horror movies, books, and games; how frequently they were exposed to such material; and whether they prefer such films to be intensely or merely moderately scary. As recent successes like Bird Box suggest, horror is far from a niche market, and more than 54 percent of the study's participants either agreed or strongly agreed with the statement "I tend to enjoy horror media." Only 14 percent strongly disagreed.
2-10-19 Brain-zapping implants that fight depression are inching closer to reality
Researchers are resetting the part of the brain that can shift mood. Like seismic sensors planted in quiet ground, hundreds of tiny electrodes rested in the outer layer of the 44-year-old woman’s brain. These sensors, each slightly larger than a sesame seed, had been implanted under her skull to listen for the first rumblings of epileptic seizures. The electrodes gave researchers unprecedented access to the patient’s brain. With the woman’s permission, scientists at the University of California, San Francisco began using those electrodes to do more than listen; they kicked off tiny electrical earthquakes at different spots in her brain. Most of the electrical pulses went completely unnoticed by the patient. But researchers finally got the effect they were hunting for by targeting the brain area just behind her eyes. Asked how she felt, the woman answered: “Calmer in my nerves.” Zapping the same spot in other participants’ brains evoked similar responses: “I feel positive, relaxed,” said a 53-year-old woman. A 60-year-old man described “starting to feel a little more alive, a little more energy.” With stimulation to that one part of the brain, “participants would sit up a little straighter and seem a little bit more alert,” says UCSF neuroscientist Kristin Sellers. Such positive mood changes in response to light neural jolts, described in the Dec. 17 Current Biology, bring researchers closer to an audacious goal: a device implanted into the brains of severely depressed people to detect a looming crisis and zap the brain out of it.
2-8-19 Brain scans decode an elusive signature of consciousness
An international research effort finds patterns of brain activity that come with awareness. A conscious brain hums with elaborate, interwoven signals, a study finds. Scientists uncovered that new signature of consciousness by analyzing brain activity of healthy people and of people who were not aware of their surroundings. The result, published online February 6 in Science Advances, makes headway on a tough problem: how to accurately measure awareness in patients who can’t communicate. Other methods for measuring consciousness have been proposed, but because of its size and design, the new study was able to find a particularly strong signal. Conducted by an international team of researchers spanning four countries, the effort “produced clear, reliable results that are directly relevant to the clinical neuroscience of consciousness,” says cognitive neuroscientist Michael Pitts of Reed College in Portland, Ore. Consciousness — and how the brain creates it — is a squishy concept. It slips away when we sleep, and can be distorted by drugs or lost in accidents. Though scientists have proposed many biological explanations for how our brains create consciousness, a full definition still eludes them.
2-8-19 Screen time could hurt kids’ development
Parents who let their young children spend lots of time in front of TVs and tablets risk slowing their kids’ development, a new study has warned. Researchers in Canada tracked nearly 2,500 children ages 2 to 5 and asked their mothers to report how many hours a day, on average, their child looked at screens. The moms also answered questions on their kids’ communication skills, behavior, and social interactions. Researchers found that, on average, the children spent two to three hours a day in front of screens; the American Academy of Pediatrics recommends that young children watch only one hour of quality programming a day. And the more time the children spent looking at screens at ages 2 and 3, the worse they did in developmental tests at ages 3 and 5. The study had some limitations—most data was self-reported by the mothers—and the authors emphasize that correlation does not mean causation. But they suggest that the differences in development could be because kids who bury their heads in screens miss out on opportunities to practice and refine their communication, social, and motor skills—by playing with toys, for example, or interacting with family and friends. Parents can think of screen time as they do junk food, study leader Sheri Madigan, from the University of Calgary, tells Time.com. “In small doses, it’s OK, but in excess, it has consequences.”
2-8-19 Blood pressure and brain volume
Scientists have long thought that high blood pressure takes decades to affect the brain. But a new study suggests that young adults with elevated blood pressure also often show signs of brain shrinkage. Researchers recruited 423 people ages 19 to 40, who underwent an MRI brain scan and at least one blood pressure reading. They found that participants with higher blood pressure readings—even within the 120-140 systolic range, which is generally considered normal—had less gray matter volume in several areas of the brain than those whose readings were under 120. The finding counters the assumption that brain-volume changes happen only in older people with hypertension. “This is a gradual change that probably happens throughout life, and ends where people have a stroke or cognitive decline,” lead author H. Lina Schaare, from the Max Planck Institute in Leipzig, Germany, tells The New York Times. “A blood pressure around 130 in young people is not necessarily benign.” Schaare now wants to examine whether reduced gray matter volume at an early age can increase risk of stroke, dementia, and other conditions.
2-8-19 The genes that make night owls
Late risers are genetically predisposed to needing a lie-in—and may be more likely to suffer mental health problems as a result. That’s the conclusion of a major new study that examined the genetics of some 700,000 people in the U.S. and U.K. By looking at how participants described themselves—a “morning person” or an “evening person”—researchers identified 351 genes associated with early rising. Previous research identified only 24 such genes. When researchers then looked for links to mental health issues, they found that night owls were about 10 percent more likely than early risers to develop schizophrenia, had a higher risk of depression, and reported being less happy on well-being questionnaires. Study leader Samuel Jones, from the U.K.’s University of Exeter, says the 351 genes he and his team identified may affect how a person’s brain reacts to external light signals. “These small differences may have potentially significant effects on the ability of our body clocks to keep time effectively,” he tells The Guardian (U.K.). Jones says it remains unclear why night owls may be more susceptible to mental health issues, but suggests it could be because they have to work against their natural circadian rhythms in school and at work.
2-8-19 A superbug’s global spread
In a worrying sign of how far and fast so-called superbugs are spreading, an antibiotic-resistant gene first discovered in India has been found in a remote region of the Arctic. Antibiotic resistance is a growing global health concern: At least 700,000 people die from superbug infections each year. And as more bacteria evolve to fight off antibiotics—a phenomenon fueled by their overuse in medicine and farming—that annual death toll could hit 10 million by 2050. To study the global spread of superbugs, researchers took soil samples from eight locations in Svalbard, a frozen Norwegian archipelago in the high Arctic, and then analyzed the DNA of bacteria and other organisms in the earth. A gene linked to multidrug resistance, first observed in a hospital patient in India in 2008, was found in more than 60 percent of the samples. Scientists believe the superbug arrived in the Arctic in the fecal matter of migrating birds or human visitors. Study leader David Graham, from the U.K.’s Newcastle University, tells ScienceDaily.com that the discovery confirms that “solutions to antibiotic resistance must be viewed in global rather than just local terms.”
2-8-19 The Goodness Paradox: The Strange Relationship Between Virtue and Violence in Human Evolution
“If we are so good, how can we be so bad?” asked John Hawks in The Wall Street Journal. Thinkers have puzzled over humans’ contradictory nature for ages, and for anyone who’s pondered how we can be both unusually docile and murderous on a grand scale, Richard Wrangham’s new book is “essential reading.” The Harvard anthropologist, who first gained notoriety two decades ago by arguing that humans are intrinsically violent, also agrees with researchers who claim that the species has become gradually less violent. In The Goodness Paradox, Wrangham argues that the change occurred because we “self-domesticated,” and did so in an unusual way: Our ancestors punished alpha-male bullies by working cooperatively to execute them. Over time, the capacity to cooperate became the more prevalent trait. So score one for capital punishment, said Tom Whipple in The Times (U.K.). But note that Wrangham personally opposes the execution of violent individuals today, and he isn’t claiming that today’s humans, including the beta males who won the evolutionary battle, are saints. The capacity to cooperate, after all, amplifies the human capacity for war and genocide. Wrangham prefers focusing on the evolutionary record, beginning with our two closest primate cousins, said Rachel Newcomb in The Washington Post. Whereas chimpanzees are notoriously aggressive, bonobos are the opposite, and Wrangham claims that the latter species self-domesticated because they had less need for aggression in their resource-rich native habitat. The proposition that humans and bonobos both self-domesticated is backed by shared physical evolutionary changes: Both became milder and more childlike in appearance over time, presumably as they grew more cooperative. But does capital punishment have to be the key to the human story? asked anthropologist Melvin Konner in The Atlantic. I once spent two years amid a hunter-gatherer group, the !Kung of southern Africa, and found that women, when afforded power equal to or greater than men’s in a culture, will choose men of calm temperament as their mates, thus reducing the group’s propensity for aggressiveness over time. But of course Wrangham’s thesis provokes argument; “that’s what bold theorizing is supposed to do.” Over the course of a long career, he has come up with some of the boldest and best ideas about human evolution, and now he has done it again. The Goodness Paradox highlights a puzzle about our history that can’t be ignored, and reminds us that violence and virtue live together within us.
2-8-19 Tyrannosaurus rex might have accidentally helped fruit grow
The king of the dinosaurs may have been an accidental gardener. Tyrannosaurus rex was a famed carnivore, but it seems it may also have spread fruit seeds, as a result of gobbling down plant-eating prey. Many plants rely on animals to disperse their seeds. They produce seeded fruits to attract herbivores, which consume the fruit and defecate the seeds. Carnivores, which have no interest in fruit, can also end up with seeds in their dung from eating herbivores. Tetsuro Yoshikawa at the National Institute for Environmental Studies in Tsukuba, Japan, wondered if the same was true during the Cretaceous period. He and his colleagues used information on the body weight and diet of 51 living bird species, T. rex’s closest living relatives, to build a computer model that estimates how long seeds are retained in a bird’s gut before being expelled. The team used this model to predict how long seeds would stay inside a T. rex. They found that seeds would probably stick around for five to seven days before passing through its digestive system. Considering T. rex was highly mobile, Yoshikawa says this could mean the dinosaur dispersed seeds over a wide area. A lot more work is needed to understand T. rex’s role in seed dispersal, says Yoshikawa. “Our result is a first step to the modelling, and the estimates for dinosaurs are quite rough.” Other factors that would improve the model include the type of seeds ingested and an understanding of T. rex’s overall diet, says Yoshikawa, but the limited information we have on dinosaurs makes this very difficult.
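The article doesn’t spell out the details of Yoshikawa’s model, but the general approach it describes (fitting a relationship between body mass and gut retention time across living birds, then extrapolating to a T. rex-sized animal) can be sketched in a few lines. This is a minimal illustration only: the bird values, the power-law form and the T. rex mass below are invented assumptions, not the study’s data.

import numpy as np

# Hypothetical sketch of allometric extrapolation. None of these numbers
# come from the study; they are placeholders for illustration.
bird_mass_kg = np.array([0.02, 0.1, 0.5, 2.0, 10.0, 40.0])   # body masses of living birds
retention_hours = np.array([0.5, 0.8, 1.5, 3.0, 6.0, 12.0])  # seed retention times in the gut

# Fit a power law, retention = a * mass**b, as a straight line in log-log space.
b, log_a = np.polyfit(np.log(bird_mass_kg), np.log(retention_hours), 1)

trex_mass_kg = 7000.0  # a commonly cited rough estimate of adult T. rex mass
predicted_hours = np.exp(log_a) * trex_mass_kg ** b
print(f"Predicted seed retention for T. rex: {predicted_hours / 24:.1f} days")

The published model also drew on the birds’ diets, which this toy version ignores, and Yoshikawa cautions that the dinosaur estimates are rough.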
2-8-19 Beer before wine or wine before beer: the hangover is the same
Drinking beer before wine won’t save you from a hangover. This is according to rigorous tests performed with 90 volunteers. The idea of the study was to test the old adage: “beer before wine and you’ll feel fine; wine before beer and you’ll feel queer.” Similar sayings exist in French and German. The participants, aged between 19 and 40, were split into three groups. The first group drank beer until their breath alcohol concentration reached 0.05 per cent, then drank white wine until their breath alcohol reached 0.11 per cent. In the second group, the two drinks were switched. People in these two groups typically drank around 1.3 litres of beer and 600 millilitres of wine. The third group drank only beer or only wine. A week later, the experiment was repeated, with the first and second groups switched. The non-mixers who drank beer switched to wine and vice versa. The next day, they completed a questionnaire about their hangover symptoms, including headache, fatigue, thirst, dizziness and nausea. There were no significant differences between any of the three groups, suggesting not only that there is no safe order for mixing drinks, but also that sticking to one drink isn’t much help either. Vomiting and self-rated drunkenness the night before correlated with hangover scores in the morning, but other factors, such as age and sex, did not have a significant effect. The only way to avoid a hangover is not to drink as much, says Kai Hensel at the University of Cambridge, UK. “After doing 360 blood and urine tests and spending almost £10,000 on lab analysis, the predictor is asking people how drunk are you and do you have to vomit.”
2-7-19 In some cases, getting dengue may protect against Zika
What happened in a hard-hit Brazilian slum suggests timing of dengue infections may matter. Previous infections with dengue virus may have protected some people in an urban slum in Brazil from getting Zika. In a study of more than 1,400 people in the Pau da Lima area of Salvador, those with higher levels of antibodies against a particular dengue virus protein were at lower risk of contracting Zika, researchers report in the Feb. 8 Science. “The higher the antibody, the higher the protection,” says Albert Ko, an infectious disease physician and epidemiologist at the Yale School of Public Health. That finding contrasts with previous studies in mice and in cells grown in lab dishes, in which antibodies against dengue seemed to make Zika worse (SN: 5/29/17, p. 14). Ko and other researchers had been tracking a rat-borne bacterial illness in the poverty-stricken neighborhood for two years when the Zika outbreak hit in 2015. “We were at the epicenter of the pandemic,” Ko says. Blood samples taken every six months enabled researchers to track people there before, during and after the outbreak. Zika infected an estimated 73 percent of people in the slum but “it really, really varied geographically,” says Isabel Rodriguez-Barraquer, an epidemiologist at the University of California, San Francisco. In some areas of the 0.17-square-kilometer community, 83 percent of people were infected. In other pockets, just 29 percent were.
2-7-19 DNA reveals early mating between Asian herders and European farmers
The finding might rewrite the origins and spread of key cultural innovations and languages. Hundreds of years before changing the genetic face of Bronze Age Europeans, herders based in western Asia’s steppe grasslands were already mingling and occasionally mating with nearby farmers in southeastern Europe. That surprising finding, published online February 4 in Nature Communications, raises novel questions about a pivotal time when widespread foraging and farming populations interacted in Eurasia’s Caucasus region. Those exchanges presumably sparked the geographic spread of metalworking, the wheel and wagon, and Indo-European languages still spoken in much of the world. Archaeologists have often assumed that, as early as around 5,600 years ago, Caucasus farmers known as the Maykop migrated north in big numbers, bringing metalworking and early Indo-European tongues to herders who roamed grasslands on the edge of the region. In that scenario, this cultural exchange led steppe herders to develop a horse-and-wagon lifestyle that the nomads later transported to Europe and Asia, along with Indo-European languages, starting about 5,000 years ago (SN: 11/25/17, p. 16). Researchers call those mobile herders Yamnaya people. An ancient DNA analysis unexpectedly found signs of mating more than 5,000 years ago between western Asian Yamnaya herders and European farmers, possibly from the Globular Amphora Culture. In another surprise, Maykop farmers thought by many researchers to have dramatically influenced Yamnaya culture left no genetic mark on the herders.
2-7-19 Voting systems that let losing side win may increase overall happiness
In a democratic election, the winning side is the one that gets the most votes – at least, normally. A test of alternative voting systems has found that in some cases, it is actually possible to increase overall satisfaction by delivering a result in which a minority decision prospers. Alessandra Casella of Columbia University and Luis Sanchez of Cornell University in New York tested two voting systems in a survey ahead of a state-wide Californian ballot in 2016. Rather than one vote per person, the systems – known as storable and quadratic votes – give people multiple votes to allocate to a range of issues. “The ingenuity of the voting schemes is that they induce the voter to reveal her priorities sincerely,” says Casella. The researchers asked 600 California residents about four issues that were likely to be included on the ballot. They selected issues that were unlikely to result in a landslide, but about which some voters would feel strongly – such as requiring law enforcement to report undocumented immigrants. Survey respondents were first asked to rate how important each issue was to them, and how they would vote in each proposal (in favour, opposed, or abstain). For storable votes, participants were then granted one extra vote to support a proposition that they felt strongly about. For quadratic voting, respondents were given a choice of extra, weighted votes to express their strength of feeling. For example, a voter could choose to cast an additional vote on each proposal, each weighted as 1, or to cast only one additional vote on a single issue, but with a weight of 2. These priorities were factored into the final outcome by counting the weighted number of cast votes, rather than the total number of voters, to reach a majority.
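To make the tallying arithmetic concrete, here is a minimal sketch of how weighted ballots of the kind described above could be counted, with the outcome decided by summed vote weights rather than by the number of voters. The credit budget, issue names and example ballots are invented for illustration and are not taken from Casella and Sanchez’s survey.

from collections import defaultdict

# Minimal sketch of counting weighted (quadratic-style) ballots.
# Each ballot maps an issue to a signed weight: +w in favour, -w opposed.
# Under quadratic voting, casting w votes on one issue costs w**2 credits.
CREDIT_BUDGET = 4  # illustrative budget, not the study's

ballots = [
    {"issue_A": +1, "issue_B": +1, "issue_C": -1, "issue_D": +1},  # spread thinly, cost 4
    {"issue_A": -2},                                               # one strong preference, cost 4
    {"issue_B": +2},                                               # one strong preference, cost 4
]

def cost(ballot):
    return sum(w * w for w in ballot.values())

totals = defaultdict(int)
for ballot in ballots:
    assert cost(ballot) <= CREDIT_BUDGET, "ballot exceeds its credit budget"
    for issue, weight in ballot.items():
        totals[issue] += weight  # majorities come from summed weights, not voter counts

for issue, total in sorted(totals.items()):
    print(f"{issue}: net weight {total:+d} -> {'passes' if total > 0 else 'fails'}")

The quadratic cost is what pushes voters to reserve heavier weights for the issues they genuinely care most about, which is how such schemes can let an intense minority prevail on a single question while raising overall satisfaction.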
2-7-19 New Tonga island 'now home to flowers and owls'
Scientists have found signs of life on one of the world's newest islands, just four years after it was spawned by a volcanic eruption. Unofficially known as Hunga Tonga-Hunga Ha'apai, it lies in the kingdom of Tonga, and is already nurturing pink flowering plants, sooty tern birds, and even barn owls. Tonga is made up of over 170 islands in the Pacific Ocean, east of Australia. A team from the Sea Education Association and Nasa visited the small land mass in October, having previously kept watch through satellite imaging. Hunga Tonga-Hunga Ha'apai - named after the two islands it is nestled between - was born in December 2014 after a submarine volcano erupted, sending a stream of steam, ash and rock into the air. When the ash finally settled, it interacted with the seawater and solidified. A month later, the new island was formed. It isn't uncommon for underwater volcanic eruptions to form little islands, but they usually have shorter life-spans. Hunga Tonga-Hunga Ha'apai is one of just three to emerge in the last 150 years that have lasted more than a few months. "In this case, the ash seemed to have a chemical reaction with the seawater that allowed it to solidify more than it usually would," volcanologist Jess Phoenix told the BBC. She compares the island to Surtsey, an island in Iceland that was formed in a similar way in the 1960s, and is still around today. Nasa researcher Dan Slayback was among those who visited the island in October, and said they were "all like giddy school children". He found a light-coloured, sticky clay mud on the volcanic mass - something that left him mystified. "We didn't really know what it was and I'm still a little baffled of where it's coming from," Mr Slayback said in a recent Nasa blog post.
2-7-19 Evolutionarily, grandmas are good for grandkids — up to a point
Women may live past their reproductive years because they help their grandchildren survive. Grandmothers are great — generally speaking. But evolutionarily speaking, it’s puzzling why women past their reproductive years live so long. Grandma’s age and how close she lives to her grandchildren can affect those children’s survival, suggest two new studies published February 7 in Current Biology. One found that, among Finnish families in the 1700s–1800s, the survival rate of young grandchildren increased 30 percent when their maternal grandmothers lived nearby and were 50 to 75 years old. The second study looked at whether that benefit to survival persists even when grandma lives far away. (Spoiler: It doesn’t.) The studies are part of a broader effort to explain the existence of menopause, a rarity in the animal kingdom. The so-called “grandmother hypothesis” stipulates that, from an evolutionary standpoint, women’s longevity is due to their contributions to their grandkids’ survival, thus extending their own lineage (SN: 3/20/04, p. 188). In the Finnish study, researchers wanted to know if grandmas eventually age out of that beneficial role. The team used records collected on the country’s churchgoers born from 1731 to 1895, including 5,815 children. Women at that time had large families, averaging almost six children, with about a third of kids dying before age 5. The team found that when maternal grandmothers living nearby were aged 50 to 75, their 2- to 5-year-old grandchildren had a 30 percent higher likelihood of survival than children whose maternal grandmothers were deceased. Similarly aged paternal grandmothers and maternal grandmothers aged past 75 did not affect children’s overall survival.
2-6-19 The truth about generations: Why millennials aren’t special snowflakes
We increasingly form opinions about people based on the generation they belong to, but these labels are often lacking in science. PEOPLE born between the mid-80s and early 2000s have been called many things: Generation Y, the Net Generation and, more usually, millennials. Now, a new name is growing in popularity: the Burnout Generation. The argument, laid out in a viral BuzzFeed article last month, is that growing up, millennials were unduly affected by the financial crisis of the late 2000s and pressured by a new wave of intensive parenting. As a result, they are uniquely overambitious, overworked and overwhelmed. The description rang true to many millennial readers, but also left a lot of people in the previous cohort, Generation X, wondering why no one was paying attention to the difficulties they face. This disparity exposes the looseness with which we talk about generations. So is it even useful to divide people up in this way? “If you want to draw a boundary between two historical generations, there needs to be a reason for it,” says Elwood Carlson, a sociologist at Florida State University. Generally, that should be a collective difference between the two groups that can be identified empirically, he says. It isn’t clear whether “burnout” fulfils that criteria, but it might. “Deciding which differences are important for separating generations is more of an art than a science,” says Carlson. The study of generations took off in the 1920s, when sociologist Karl Mannheim posited that youths experiencing major events and rapid social change form more cohesive generations. Merely coexisting isn’t enough to produce a generational consciousness, he argued.
2-6-19 Why some children may get strep throat more often than others
Tonsil tissue from kids with recurring infections has smaller key immune structures. For kids, getting strep throat again and again is a pain. It’s also a problem little understood by scientists. Now a study that analyzed kids’ tonsils hints at why such repeat infections may happen. Children with recurrent strep infections had smaller immune structures crucial to the development of antibodies in their tonsils than kids who hadn’t had repeated infections, researchers found. The frequently sore-of-throat were also more susceptible to a protein, deployed by the bacteria that cause the infection, that disrupts the body’s immune response, the team reports online February 6 in Science Translational Medicine. Globally each year, there are an estimated 600 million cases of strep throat, which commonly produces a sore throat and fever. Doctors treat the illness with antibiotics, especially in children, who are at highest risk of developing rheumatic fever and heart problems from a strep infection. But some kids, even though they get treatment, repeatedly develop new cases of strep throat. In the study, immunologist Shane Crotty of the La Jolla Institute for Immunology in California and colleagues examined tonsils, the immune tissue found at the back of the throat, that had been removed from 5- to 18-year-olds. Some of the children had their tonsils taken out because of recurrent strep infections. Others had their tonsils removed to resolve sleep apnea caused by enlarged tonsils; this group was a proxy for kids not plagued by repeated bouts of strep.
2-6-19 Recommended time between smear tests could increase thanks to HPV test
A new approach for cervical cancer screening set to begin this year in the UK could let women safely wait longer between tests. Existing forms of screening, sometimes known as the smear or pap test, involve brushing some cells from the neck of the cervix and examining them under a microscope to see if any look precancerous. In the UK, women are advised to have this test every three years from the ages of 25 to 49, and every five years between 50 and 64. But a different method of testing will be introduced in the UK this year. This method removes some cells in the same way, but these are then tested for the human papillomavirus (HPV). This virus is a common sexually transmitted infection which can cause cervical tumours plus other types of cancer. Samples found to have the virus will then be checked for precancerous cells in the standard way. In a trial in nearly 600,000 women, Matejka Rebolj at Kings College London and her team found that viral testing with five-yearly checks was more sensitive than more regular screening with the old-style smear test, finding up to 50 per cent more cases of people with precancerous cells. The new test is being introduced this year, based on early results from the same trial. The UK’s National Screening Committee is currently consulting on whether to also extend the interval between tests to five years for all women who test negative. Other countries, such as Australia and the Netherlands, have already switched over, while the US does both tests in tandem.
2-6-19 Virus lurking inside banana genome has been destroyed with CRISPR
GENOME editing has been used to destroy a virus lurking inside many bananas grown in Africa. Other teams are trying to use the technique to make Cavendish bananas, a commercial variety sold in supermarkets worldwide, resistant to another disease. The banana streak virus, spread by insects, integrates its DNA into the banana’s genome. In places like west Africa, where bananas are a staple food, most varieties now have the virus inside them. When crops are stressed, it emerges from dormancy and can destroy plantations. There’s nothing farmers can do. But Leena Tripathi at the International Institute of Tropical Agriculture in Kenya has now used the CRISPR genome editing method to target and destroy the viral DNA inside the genome of a banana variety called Gonja Manjaya. The plan is to breed virus-free plants for African farmers. Her team is also using CRISPR to make the bananas resistant to the virus, so they aren’t simply re-infected (Communications Biology, doi.org/cz8k). The banana streak virus doesn’t infect the popular Cavendish variety. But a fungal infection called Tropical Race 4 is devastating Cavendish plantations. Because the Cavendish is a sterile mutant that can only be propagated by cloning, there is no way to breed resistant varieties. Instead, several teams worldwide are trying to use CRISPR to make it resistant to Tropical Race 4.
2-6-19 When did the kangaroo hop? Scientists have the answer
Scientists have discovered when the kangaroo learned to hop - and it's a lot earlier than previously thought. According to new fossils, the origin of the famous kangaroo gait goes back 20 million years. Living kangaroos are the only large mammal to use hopping on two legs as their main form of locomotion. The extinct cousins of modern kangaroos could also hop, according to a study of their fossilised foot bones, as well as moving on four legs and climbing trees. The rare kangaroo fossils were found at Riversleigh in the north-west of Queensland in Australia. The site is a treasure trove of animal remains, including marsupials, bats, lizards, snakes, crocodiles and birds. "It's one of the few snapshots we have of the evolution of marsupials in Australasia in deep time," said study researcher Dr Benjamin Kear, of Uppsala University in Sweden. Kangaroos can quickly cover large distances using their distinctive gait, which is most effective in open habitats such as deserts and grasslands. The long-held view has been that the animals evolved the ability to hop to take advantage of a change in the climate, which brought drier conditions and the spread of grasslands. However the research, published in the journal Royal Society Open Science, suggests the story isn't that simple. Geometric modelling shows the ancient extinct cousins of modern kangaroos could use the same range of gaits as living kangaroos. Evidence, say the scientists, that the kangaroo has had the ability to hop for many millions of years. "It all points towards an extremely successful animal, that's superbly adapted to its environment and a whole range of habitats and ecosystems and it's why kangaroos are so successful today," said Dr Kear. "It's one of the most biologically weird and wonderful animals you're likely to find."
2-6-19 What FamilyTreeDNA sharing genetic data with police means for you
Law enforcement can now use the company’s private DNA database to investigate rapes and murders. A popular at-home DNA testing company has announced that it is allowing police to search its database of genetic data just as customers do when looking for family members. But there’s one big difference: Police are trying to track down rape and murder suspects using relatives’ DNA. Since Joseph James DeAngelo was arrested as the suspected Golden State Killer last April, police have announced the identification of suspects in at least 25 cold cases, including five in January (SN Online: 4/29/18). Until now, law enforcement agencies had mostly used a public database called GEDMatch for these “genetic genealogy” investigations. But FamilyTreeDNA has granted police permission to upload data from crime scene DNA and search the company’s more than 1 million records to look for relatives of potential suspects. While some people support the company’s effort to help catch suspected rapists and murderers, privacy advocates and some customers of DNA testing services are alarmed by the idea that police could poke around in people’s genetic data. Here’s what the announcement really means. Police are interested in determining how much DNA people in the database share with genetic samples from crime scenes. Genealogists can then use the closest matches possible to build family trees and identify a likely suspect. The process is similar to looking at someone’s LinkedIn profile to see who is in the person’s social network, says Melinde Lutz Byrne, a forensic genealogist at Boston University who is involved in helping law enforcement solve rape and murder cases.
2-6-19 Trump wants to end HIV infections by 2030 – here’s how to do it
During his State of the Union address on 5 February, US president Donald Trump announced a goal of ending HIV transmission in the US by 2030. “Scientific breakthroughs have brought a once-distant dream within reach,” he said. “Together, we will defeat AIDS in America.” Trump didn’t lay out a specific amount of money that his next budget would dedicate to the cause, but said the initiative would be included in his request to Congress for funding. At the height of the AIDS epidemic in the early 1990s, Americans were being diagnosed with HIV at a rate of between 650,000 and 900,000 people per year. With new medicines and increased awareness of the risk of transmitting HIV, new infections have decreased since then, plateauing for the last several years to around 50,000 people per year. There is some way to go, then, to achieve elimination. The US isn’t alone in this ambitious target. On 30 January, the UK announced the same goal, and in 2014 the Joint United Nations Programme on HIV and AIDS set a similar target with key milestones in the coming years. The UN strategy is a global one that aims to have most people living with HIV diagnosed and on antiretroviral treatment by 2020, and to maintain suppression of the virus until 2030. If that happens, the number of new infections and transmissions globally would be so low that we could effectively say the epidemic had been eliminated. To meet the UN goal, the US would have to show that 73 per cent of people with HIV have their infection under control by 2020. “Are we on track to achieve that target? As best I can tell it’s certainly possible,” says Jessica Justman at Columbia University.
2-6-19 DNA-eating bacteria lurk beneath the Atlantic Ocean floor
For a few species of microbe, DNA is more than a library of genetic information: it’s also lunch. Some bacteria that live in the mud below the seafloor appear to survive by eating DNA trapped in the dirt. “This is one of the yummiest things to eat down there,” says Gustavo Ramírez at the University of Southern California. “It’s got the major macronutrients that you get in your lawn fertiliser – carbon, nitrogen and phosphorus.” Biologists have already established that seafloor mud contains naked DNA – molecules no longer locked away inside biological cells. But the fact that this ‘extracellular’ DNA doesn’t build up into really substantial quantities suggests it must be recycled, says Kenneth Wasmund at the University of Vienna, Austria. That could be because some of the bacteria living in the mud break it down and reuse its components, he says. To find out, Wasmund and his colleagues collected samples of mud from the bottom of Baffin Bay in the North Atlantic Ocean. Back in the lab, they placed the mud samples in anaerobic conditions at 4°C – replicating conditions seen in the mud at the bottom of Baffin Bay. “We incubated them for a few weeks to let the microbes do their thing,” says Wasmund. Then they used lab equipment to separate out microbes that had broken down the DNA and incorporated it into their cells. Finally, the researchers used genetic sequencing techniques to identify these DNA-eating microbes and reconstruct their genomes. The team found five different types of bacteria dined on the DNA. Four of the five seemed to be opportunistic DNA consumers, just taking advantage of the molecule because it was available.
2-6-19 Shutdown aside, Joshua trees live an odd life
In the U.S. southwest, Joshua trees evolved a rare, fussy pollination scheme. A year when vandals trashed a Joshua tree in a national park during a U.S. government shutdown is a good time to talk about what’s so unusual about these iconic plants. The trees’ chubby branches ending in rosettes of pointy green leaves add a touch of Dr. Seuss to the Mojave Desert in the U.S. Southwest. Its two species belong to the same family as agave and, believe it or not, asparagus. And the trees bloom with masses of pale flowers erupting from a branch tip. “To me [the flowers] smell kind of like mushrooms or ripe cantaloupe,” says evolutionary ecologist Christopher Irwin Smith of Willamette University in Salem, Ore. His lab has found a form of alcohol in the scent that actually occurs in mushrooms, too. It’s tough to tell how old a Joshua tree is. Their trunks don’t show annual growth rings the way many other trees do. The desert trees became headline news in January when vandals trashed at least one of them at Joshua Tree National Park (SN Online: 1/12/19). What gets biologists really excited about Joshua trees is their pollination, with each of the two tree species relying on its own single species of Tegeticula moth. That could make Joshua tree reproduction highly vulnerable to climate change and other environmental disruptions. Typically, insects pollinate a flower “just by blundering around in there” as they grope for pollen and nectar for food, Smith says. But for the female moths that service the Joshua trees, pollination “does not look like an accident.”
2-6-19 How Earth’s changing ecosystems may have driven human evolution
The most detailed ever look at Earth's prehistoric climate suggests many habitats changed in the past 800,000 years – and this may be why we evolved big brains. FOR the first time, we have had a detailed look at how our climate has changed throughout prehistory, thanks to a surprisingly detailed computer model. And it could shed light on how ecosystem changes shaped our evolution and intelligence. Thanks to ice cores and other natural records, we already knew that, for the past 2.5 million years, Earth has been in an ice age, with permanent ice at both poles. The extent of this ice has often waxed and waned during this time, and we are currently in a warmer, “interglacial” period. But this doesn’t explain why these climate changes happened or how they affected wildlife, says Mark Maslin of University College London, who wasn’t involved in the modelling work. “An ice core in Antarctica just tells you what’s happened in Antarctica,” he says. “Only by using computer models can you actually connect the dots.” Mario Krapp at the University of Cambridge, UK, and his colleagues have now done this, simulating global climate changes over the past 800,000 years. They did it by using models of the past 120,000 years to develop an algorithm, which they then used to reconstruct an outline of the past 800,000 years. This was fleshed out by simulating detailed “snapshots” at intervals throughout the 800,000 years. Running a detailed model for the whole period would have taken too much computer time. The model successfully reconstructed known changes in average global temperature, as well as the different patterns over sea and land. “They seem to be approximating what’s going on extremely well,” says Eleanor Scerri of the Max Planck Institute for the Science of Human History in Jena, Germany.
2-6-19 Cosy up with the Neanderthals, the first humans to make a house a home
Meet the Stone Age people who liked nothing better than spending time indoors around the fire, doing a spot of DIY and having friends over for dinner PUT Matt Pope in a valley apparently untouched by humans and he can tell you where Neanderthals would have built their home. “It’s about a third of the way up a slope, with a really good vista and a solid bit of rock behind,” he says. Anyone who goes camping will recognise these preferences: this is where you want to pitch your tent when you arrive in an unfamiliar place at dusk. It is also where aspirational types dream of buying a place to live. In other words, this is the spot that lures us with siren calls of “home”. There has long been an assumption that the concept of home is as old as humanity. But Pope, an archaeologist at University College London, is challenging that. “We take for granted that early humans had a home, an address, but it wasn’t always with us,” he says. “It’s something we evolved.” The invention of “home”, Pope argues, marked a critical threshold in the long march towards civilisation. As well as being a practical advance, it was also a conceptual leap that shaped the way our ancestors thought and interacted. What’s more, evidence is growing that home wasn’t exclusively the domain of Homo sapiens. In fact, Neanderthals may have been the original homebodies. A picture is emerging of their domestic life that would have been unthinkable just a few years ago. Far from being brutish, they may have enjoyed nothing more than spending time indoors around a cosy fire, doing a spot of DIY and inviting friends over for dinner.
2-6-19 Australia has been home to hopping kangaroos for 20 million years
An ancient group of kangaroo relatives called balbarids had multiple ways of getting around, including hopping, bounding and climbing. The finding may mean we have to rethink how modern day kangaroos came to hop. Kangaroo evolution has been difficult to piece together because there are very few fossils older than one or two million years. The prevailing view of kangaroo evolution is that they began hopping when the climate in Australia became drier and wiped out many forests, but new fossil evidence suggests that their relatives were hopping much earlier. The balbarids were distant cousins of modern kangaroos and lived in forests when the Australian climate was wetter. They went extinct around 10 to 15 million years ago when the climate dried out. One of the most complete skeletons, from a species in the balbarid family called Nambaroo gillespieae, suggests that these animals moved on four legs and did not hop like true kangaroos. Benjamin Kear at Uppsala University, Sweden, and colleagues have now analysed a set of more fragmentary remains, including ankle bones, a calf bone and a claw. They suggest that some balbarids galloped, some hopped, and some climbed in trees. That’s true of modern kangaroos too, if you look beyond the most famous among them. There are rat kangaroos that scurry in the undergrowth and burrow, and tree kangaroos that live in the forests of New Guinea. Short-faced giant kangaroos, which went extinct 30,000 years ago, walked on two legs like us. This versatility has been key to kangaroos’ success, enabling them to exploit a huge range of terrestrial environments, says Kear. The origin of hopping goes all the way back to virtually the beginning of kangaroo evolution, he says.
2-5-19 Scientists studied a ‘haunted house’ to understand why we love horror
Horror films and fairground haunted houses may be enjoyable because they let us overcome simulated threats in a safe space, so we can learn how to cope with negative experiences in real life. To better understand how we experience horror, Mathias Clasen at Aarhus University in Denmark and his colleagues have been studying how people cope with gory surprises. They recruited 280 visitors at a commercial haunted house in Vejle, Denmark, which was set in a dilapidated factory where 30 rooms had been designed to target different fears. For instance, there were dark, claustrophobic spaces and rooms containing actors in zombie make-up. Before they entered the building in groups, each visitor was asked to choose to focus either on minimising or increasing their fear throughout the experience. The team then asked the visitors about the mental tactics they used. Those who tried to maximise their fear said they concentrated on the things meant to frighten them, instead of looking away or thinking about something else. They also told themselves that the situation was really dangerous, and allowed themselves to scream, which Clasen says can make you feel more frightened. Those who tried to lessen their fears did the opposite. But both groups had one response in common: they got closer to others in their group, sometimes holding hands. Clasen says the adrenaline junkies may have done this to experience more fear vicariously through others, while those intent on feeling less fear may have been looking for comfort. “It was striking that the same gesture of seeking physical proximity can work in those diametrically opposed ways,” he says.
2-5-19 The ancestor of all creatures on Earth lived a lukewarm lifestyle
THE ancestor of all life on Earth probably preferred moderate temperatures, not scorching heat as some biologists believe. The finding could shed light on where such early organisms lived, but only if it is confirmed. Everything alive today can be traced back to the last universal common ancestor (LUCA), a single-celled organism that appeared early in Earth’s history. LUCA emerged at least 3.9 billion years ago, and relatively soon after split into two groups called bacteria and archaea, which today make up the majority of all living species. More complex organisms made of multiple cells, like sponges, elephants and us, only appeared billions of years later. Ryan Catchpole and Patrick Forterre of the Pasteur Institute in Paris have re-examined the genetic evidence that LUCA was adapted to extreme heat. They think earlier work may have incorrectly traced a key gene, changing our understanding of LUCA’s habitat. Many biologists have argued that LUCA lived somewhere hot, like a geothermal pond, where temperatures exceed 50°C or even 100°C. They point to the many primitive archaea alive today that are adapted for heat. Organisms that live above 50°C are called thermophiles, while the hardy few that endure 80°C or more are known as hyperthermophiles. LUCA’s genome could provide a clue as to which category it belongs in. Being so ancient, no specimens of this organism remain, but in 2016, a team led by Bill Martin at Heinrich Heine University Düsseldorf in Germany looked for universal genes found in some of the oldest branches of life, which are likely to have been present in LUCA.
2-5-19 Women seem to have younger brains than men the same age
Women have younger brains than men the same age. A study basing age on metabolism rather than birth date found an average 3.8 year difference between men and women. The discovery may help explain why women are more likely than men to stay mentally sharp in their later years. All brains get smaller with age, and it was already known that men’s tend to shrink at a faster rate. To investigate the differences further, Manu Goyal at Washington University School of Medicine in St Louis and colleagues looked at the brains of 205 men and women ranging in age from 20 to 82. They used positron emission tomography, an imaging technique that helps uncover brain metabolism by measuring the flow of oxygen and glucose. The brain consumes large amounts of glucose for energy, but the pattern of use alters with age. They found that metabolic brain ageing correlated with chronological ageing in both men and women, but that at any given age women’s brains were younger, metabolically speaking, than men’s. “It’s not that men’s brains age faster — they start adulthood about three years older than women, and that persists throughout life,” says Goyal. “What we don’t know is what it means. I think this could mean that the reason women don’t experience as much cognitive decline in later years is because their brains are effectively younger, and we’re currently working on a study to confirm that.”
2-5-19 Mouse toes partially regrown after amputation thanks to two proteins
A pair of proteins could help regenerate amputated limbs. When applied to amputated toes, the proteins encouraged both bone and joint growth in mice. Joints are structurally complex, so even for animals that can regrow their lost limbs, rarely can they regenerate their joints as well. Ken Muneoka at Texas A&M University and his colleagues had previously regenerated bones in mice after they were amputated by treating the stump with a bone-growing protein, BMP2. But joint structures never formed. The team suspected that another bone-growing protein, BMP9, could be essential in joint building. So they tried applying the protein to mice that had their toes amputated. After three days, over 60 per cent of the stump bones formed a layer of cartilage, as seen in joints, at the end of the bones. The result was more effective when the team treated the wounds first with BMP2 and then BMP9 a week later. Not only did the bones regrow, they also formed more complete joint structures with part of the new bones attached to them, although the method does not yet produce a full toe. “Our study is transformational,” says Muneoka. He suggests this experiment proves that even though mammals can’t regenerate body parts, we have cells that know how to and what to grow. “They can do it, they just don’t do it. So, we have to figure out what’s constraining them,” he says. Because human skeletal structure is very similar to that of mice, Muneoka says he is optimistic that one day we will be able to help amputees regrow their limbs. But more studies need to be done before any trials in humans, he says.
2-4-19 Confused about cancer? Here’s what we really do know about its causes
We are bombarded with stories about things that might give us cancer, yet even the experts don't seem sure. So what's the best way to judge the risks? RED meat, cellphones, plastic drinking bottles, artificial sweeteners, power lines, coffee… Which of these have been linked with cancer? If you are unsure, you aren’t alone. The problem isn’t a lack of information. Rather, we are bombarded with so much information and misinformation about what might cause cancer that it is often hard to separate myth from reality. Yet it is something we must all do, because cancer affects every one of us. Whether or not you have had it yourself, you surely know someone who has. For people in the UK, the lifetime chance of being diagnosed with the disease is 1 in 2. Globally, cancer is second only to cardiovascular disease as a cause of death, killing an estimated 1 in 6 people. Cancer is not a single disease and its causes are many and complex, but there are things we can do to reduce our risk – if only we could identify them. That isn’t easy when even the experts don’t always agree. Nevertheless, our knowledge has come a long way in recent years, thanks to a huge amount of research into both environmental factors and genetic susceptibility. So, what do we know – and don’t know – about the causes of cancer? And, when faced with mixed messages, how can we best judge the risks for ourselves? The extent of public confusion on the subject was glaringly exposed in a survey of 1330 people in England published last year. Researchers from University College London and the University of Leeds, UK, reported that more than a third of the general public mistakenly attributed carcinogenic properties to artificial sweeteners, genetically modified food, drinking from plastic bottles and using a cellphone. Over 40 per cent thought stress causes cancer, although there is no proven link. More worryingly, only 60 per cent of people believed sunburn can lead to cancer. And only 30 per cent were aware of the strong link between human papillomavirus (HPV) infection and the disease.
2-4-19 Teenagers who copy each other’s risk-taking have more friends
From binge-drinking to reckless driving, our teenage years are known to be a time of risk-taking. Now we are starting to understand why such behaviours spread between friends. Many previous studies have shown that adolescents are more likely to start smoking or drinking if their friends do, but it is hard to study how such behaviours spread through social groups. While working at the Dresden University of Technology in Germany, Andrea Reiter and her colleagues used a simple gambling game to dig into the teen appeal of risk-taking, and its social implications. The task involved choosing between a definite payout of €5 or a known, small chance of winning up to €50. The game was played over a series of rounds by 86 male volunteers, half of whom were between 12 and 15, while the rest were adults. When the volunteers played the game alone, the boys were less likely than the men to take the risky gamble of trying for a larger payout. “There is this stereotype, but teens were not more risk-seeking when tested alone,” says Reiter. However, this changed when the participants no longer thought they were alone. In a second run of the experiment, the volunteers met a “partner” face-to-face before playing the game, and were told they could see each other’s actions on a computer. In reality, the researchers were in control of all the “partner’s” decisions. If the fake partner took the risky gamble more often, the boys’ own play became riskier – but only if their partner was another teen, not an adult. The boys’ behaviour changed more than twice as much as that of the adults. A questionnaire revealed that the boys who changed their behaviour the most also reported having more friends and a higher social confidence.
2-4-19 DNA from extinct red wolves lives on in some mysterious Texas coyotes
The find raises questions of whether conservation efforts should preserve DNA, not just species. Mysterious red-coated canids in Texas are stirring debate over how genetic diversity should be preserved. “I thought they were some strange looking coyotes,” wildlife biologist Ron Wooten says of the canids on Galveston Island, where Wooten works. But DNA evidence suggests the large canids might be descendants of red wolves, a species declared in 1980 to be extinct in the wild. A small population of red wolves from a captive breeding program lives in a carefully monitored conservation area in North Carolina. But those wolves have had no contact with other canids, including those in Texas. So maybe, Wooten thought, red wolves never actually went extinct in the wild. He made it his mission to find out. “There was no way I could let this go,” he says. He reached out to evolutionary geneticist Bridgett vonHoldt at Princeton University. She and colleagues have amassed genetic data on about 2,000 North American canids, mostly coyotes and wolves, but with a few dogs thrown into the mix. VonHoldt regularly receives photographs of wolflike animals with requests to identify what species they belong to — an exercise she describes as “really challenging and possibly misleading.” Instead, she asks for tissue samples so that her team can analyze the animal’s DNA. “Many pictures I don’t give a second thought to,” she says. But Wooten’s photos of the Galveston Island canids were “a little bit different.… It just doesn’t look typical of a standard coyote.”
2-4-19 Obesity-related cancers rise for younger US generations, study says
Cancers linked to obesity are rising at a faster rate in millennials than in older generations in the United States, the American Cancer Society has said. It said a steep rise in obesity in the past 40 years may have increased cancer risk in younger generations. And it warned the problem could set back recent progress on cancer. The Society studied millions of health records from 1995 to 2014, publishing its findings in The Lancet Public Health. In the last few decades, there has been mounting evidence that certain cancers can be linked to obesity. Researchers found that the rates of six out of 12 obesity-related cancers (colorectal, uterine, gallbladder, kidney, pancreatic and multiple myeloma - a blood cancer) all went up, particularly in people under the age of 50. And they found steeper rises in successively younger generations aged 25 to 49 - and particularly in millennials, in their 20s and 30s. For example, the risk of colorectal, uterine and gallbladder cancers has doubled for millennials compared to baby boomers, now aged 50 to 70, at the same age. Some of these cancers increased in people over 50 too, but the rises were not as steep. Researchers say this trend may be down to the rapid rise in obesity in the last few decades with "younger generations worldwide experiencing an earlier and longer exposure to the dangers of extra weight".
2-3-19 Why it’s key to identify preschoolers with anxiety and depression
New research shows these kids have mental and physical problems as they grow older. The task was designed to scare the kids. One by one, adults guided children, ranging in age from 3 to 7, into a dimly lit room containing a mysterious covered mound. To build anticipation, the adults intoned, “I have something in here to show you,” or “Let’s be quiet so it doesn’t wake up.” The adult then uncovered the mound — revealed to be a terrarium — and pulled out a realistic looking plastic snake. Throughout the 90-second setup, each child wore a small motion sensor affixed to his or her belt. Those sensors measured the child’s movements, such as when they sped up or twisted around, at 100 times per second. Researchers wanted to see if the movements during a scary situation differed between children diagnosed with depression or anxiety and children without such a diagnosis. It turns out they did. Children with a diagnosis turned further away from the perceived threat — the covered terrarium — than those without a diagnosis. In fact, the sensors could identify very young children who have depression or anxiety about 80 percent of the time, researchers report January 16 in PLOS One. Such a tool could be useful because, even as it’s become widely accepted that children as young as age 3 can suffer from mental health disorders, diagnosis remains difficult. Such children often escape notice because they hold their emotions inside. It’s increasingly clear, though, that these children are at risk of mental and physical health problems later in life, says Lisabeth DiLalla, a developmental psychologist at Southern Illinois University School of Medicine in Carbondale. “The question is: ‘Can we turn that around?’”
2-2-19 Your gut bacteria may match your blood group – but we don’t know why
Gut bacteria seem to vary according to the blood groups of their hosts, but the reason for this is not yet clear. Your ABO blood type is determined by a type of sugar on the surface of your red blood cells. Type A individuals have a different sugar from type B individuals, while AB people have both sugars. O people – who are known as universal donors – have neither. These sugars are called antigens and help tell your immune system that your blood cells belong to you and shouldn’t be attacked. If an A person were to accidentally receive a transfusion of B blood, antibodies made by their immune system would react with the B sugar and flag these cells for destruction. Other parts of the body – including the intestines – carry these antigens too, prompting researchers to wonder if the bacteria that live in our body might as well. To see if our gut bacteria match our blood type, Zhinan Yin at Jinan University in Guangzhou, China, and his colleagues took gut bacteria samples from 149 volunteers from across the four blood groups. The team found that blood type wasn’t linked to any differences in the kinds of bacteria a person had. However, they noticed that bacteria seemed to be recognised by antibodies from different blood types, in a similar way to when antibodies detect incompatible blood cells. This suggests that gut bacteria make sugars that match their host’s blood type. “We were very surprised to see this,” says Yin. While some bacteria are already known to carry molecules that are similar to B antigens, this is the first indirect evidence that bacteria can have sugars that behave like A antigens too.
2-1-19 Washington state in state of emergency
Washington state has declared a state of emergency after an outbreak of the measles virus hit Clark County, with at least 34 cases of the highly infectious and sometimes fatal viral illness. Washington is one of 18 states that permit parents to opt their children out of mandatory measles vaccines for philosophical reasons. In Clark County, 7.9 percent of students got exemptions from vaccination last year.
2-1-19 Aspirin and bleeding
Healthy adults should not take a daily low-dose aspirin to prevent heart disease unless a doctor advises them to do so. That’s the conclusion of a major review into previous research, which found that the drug “substantially” raises the chance of dangerous bleeds in the gut and skull. Aspirin’s blood-thinning properties can help prevent heart attacks and strokes for those with existing cardiovascular issues. But for those with no issues, the cons outweigh the pros, the new report says. Its authors analyzed the findings of 13 studies, including three major clinical trials published last year, involving some 164,000 people. Overall, they found that aspirin reduced the risk of cardiovascular problems by 11 percent—but was linked to a 43 percent increase in significant bleeding events. Lead author Sean Lee Zheng, from King’s College London, says doctors need to assess patients’ needs on a case-by-case basis. “Aspirin use requires discussion between the patient and their physician,” he tells The Times (U.K.), “with the knowledge that any small potential cardiovascular benefits are weighed up against the real risk of severe bleeding.”
2-1-19 A blood test for Alzheimer’s
Doctors might soon be able to use a simple blood test to predict if a patient will develop Alzheimer’s more than a decade before the appearance of symptoms. Scientists have previously observed that a raised level of neurofilament light chain (NfL), a protein found in the brain and spinal cord, is a possible indicator of early-stage Alzheimer’s. To explore the issue further, researchers examined NfL levels in 243 people with genetic mutations that predisposed them to the neurodegenerative disease and 162 people without the mutation. They found that just under seven years before Alzheimer’s symptoms developed, the group with the mutation had distinctly higher levels of NfL, reports USA Today. When the team examined how quickly levels of the protein changed over time, they found that the rate of increase was noticeably higher for people with the mutation more than 16 years before symptoms began. Scientists haven’t yet discovered a cure for Alzheimer’s. But an early test could help doctors predict when patients will start showing symptoms and help researchers determine whether potential new treatments are effective. Study author Mathias Jucker, from the German Center for Neurodegenerative Diseases, said that the reason there is no effective treatment for Alzheimer’s “is partly because current therapies start much too late.”
2-1-19 Rhinoceros beetles have weird mouth gears that help them chew
A species of horned beetle has a startling secret: a gearing mechanism in its mouthparts. The beetles beat us to the invention of meshed gears, possibly by millions of years. Japanese rhinoceros beetles (Trypoxylus dichotomus) are found in east Asia. Males can be 8 centimetres long. This is unusually large for an insect, although not as large as male Hercules beetles that can reach double the size. In Japan, the rhinoceros beetles are popular pets and are regularly depicted in anime and other media. “There is nobody who has not touched the horned beetle in Japan,” says Hiroaki Abe at the Tokyo University of Agriculture and Technology in Japan. Abe’s team was studying the beetles’ genetics when their breeding programme created some with abnormally-shaped heads. To figure out exactly what was unusual, they needed to know what the mouthparts or “mandibles” of normal beetles looked like. Surprisingly, this had never been documented. So Abe’s colleague Wataru Ichiishi dissected some and was startled to discover that the right and left mandibles moved simultaneously. A closer examination revealed that each mandible has two gear teeth, and the two sets mesh. As a result, when one mandible moves, so does the other. Abe thinks the gearing has evolved because of the beetles’ lifestyle. They spend a lot of time chewing the tough bark of trees to feed on sap. If one of the mandibles broke, the beetle might starve. Linking the two mandibles with gears spreads the force between them, reducing the strain on each mandible and making it less likely to break.
A survey team on a remote island in Arctic Canada came across a grisly sight in the summer of 2016. Caribou carcasses, dozens of them, lay strewn across the tundra of Prince Charles Island, just north of the Arctic Circle in Nunavut. Based on the condition of the carcasses and the decomposition of internal organs, death was estimated to have occurred at least several weeks prior to the team’s arrival, perhaps in late winter. While some animals died lying down, others appeared to have simply collapsed.
A half-century earlier and more than 4,200 miles (6,800 kilometers) west, a similar scene confronted biologists on a remote speck of land in the Bering Sea. Forty-two reindeer were found foraging among the skeletal remains of a reindeer herd on St. Matthew Island that only three years earlier had numbered 6,000 animals.
Using a combination of remotely sensed data from satellites and sensors on the ground, scientists found the unmistakable fingerprints of the same killer in 2016 and 1966. Both Arctic islands are shown on this page as observed in 2015 and 2016 by the Operational Land Imager (OLI) on Landsat 8.
While caribou and reindeer are the same species (Rangifer tarandus), they are not the same animal. Caribou, which live in North America, are migratory and travel in large herds between breeding grounds. Reindeer inhabit Europe and Asia and have adapted to domestication. They can be used for pulling sleighs and can be milked like cows and goats. (Reindeer cheese is reported to be mild and creamy.)
One key attribute caribou and reindeer share is that they are herbivores that feed on lichens and plants. In late fall and early spring, they use their sharp hooves to break through the icy crust on northern lands to reach this food source. While the animals are adapted to efficiently managing their energy reserves over the long Arctic winter, timing is everything. And at both Prince Charles Island and St. Matthew Island, time ran out for the herds.
Meteorological data from Prince Charles Island in the winter of 2015–2016 indicate that major storms occurred in April 2016, a time when caribou energy reserves are generally at their lowest. Wind and snow from these storms created an unusually dense snowpack, which was detected through brightness temperature data acquired by the Special Sensor Microwave Imager/Sounder (SSMI/S) aboard the Defense Meteorological Satellite Program (DMSP) series of satellites. Scientists determined from the data that the caribou, already weakened at the end of a long winter, starved to death when they were unable to break through the dense snow and ice layer to reach the food they needed.
Unusually harsh winter weather also was the culprit on St. Matthew Island. Scientists reanalyzing meteorological data found that the winter of 1963–1964 was one of the harshest ever recorded in the Bering Sea islands. The reindeer endured storms with hurricane-force gusts, wind chills as low as -71.5° Fahrenheit (-57.5° Celsius), and a record amount of snow. As at Prince Charles Island, the hard crust on the snowpack made it difficult, if not impossible, for the huge reindeer herd to access vital nutrients. For the 6,000 reindeer, there simply was not enough food available when it was most needed. By 1966, only 42 survivors remained.
Through the use of remotely sensed data, scientists were able to close the cold case of the mysterious deaths of caribou in Canada and reindeer in the Bering Sea islands occurring a half-century apart. The data told the tale.
NASA Earth Observatory images by Joshua Stevens, using Landsat data from the U.S. Geological Survey. Story by Josh Blumenfeld, NASA ESDS Program.
In this tutorial, you’ll get to know what microcontrollers are, what’s inside a typical MCU chip, and how PIC microcontrollers operate. You’ll also set everything up and become familiar with the development environment, the Microchip PIC ecosystem we’ll be using throughout this series of tutorials: MPLAB X IDE, the XC8 compiler, and a PICkit 2 or 3 programmer.
- 1 Introduction To Microcontrollers
- 2 What is a Microcontroller?
- 3 Variants Of Microcontrollers
- 4 What’s Inside a Microcontroller?
- 5 Microcontrollers VS Microprocessors
- 6 Fundamental Terminologies
- 7 Cross-Development
- 8 Which Language To Use?
- 9 The compiler ( XC8 )
- 10 IDE ( MPLAB X )
- 11 How Does a µC Run a Program?
- 12 Computer Simulation
- 13 Development Hardware Kit
- 14 Prototyping Board
Introduction To Microcontrollers
As we stated earlier, an embedded system is a computerized system that in most cases will not look like a computer. We’ve also mentioned numerous examples of embedded devices and applications. The computers embedded in these devices are small microcontrollers, abbreviated as MCUs or µC. Microcontrollers are not the only option out there for embedded solutions, but they are our main interest in this series of tutorials.
Now, let’s take one of the earlier examples of embedded devices, the drone, and have a closer look at its main components.
A typical drone will have an internal structure as shown in the diagram below.
Well, this aerial robot consists of mechanical and electrical parts. We’re only concerned with the electronic embedded system, which has the following components:
| Category | Component | Description |
| --- | --- | --- |
| Sensors | Camera | Electronic sensor used for imaging and video recording. |
| Sensors | GPS | Provides the vehicle’s coordinates at any instant; widely used for navigation tasks. |
| Sensors | IMU | Inertial measurement unit: measures the static/dynamic properties of the vehicle, such as angular rates, tilt angle, and acceleration in three axes, so it can keep its balance and plan smooth maneuvering. |
| Sensors | Compass | Measures the heading angle of the vehicle, which is very helpful for controlling its orientation. |
| Modules | WiFi | Adds WiFi connectivity to the vehicle, used for control or data transfer. |
| Modules | RF | Adds radio-frequency connectivity to the vehicle, used for communicating with the control station. |
| Modules | External Memory | On-board additional memory for data and settings storage. |
| Drivers Circuitry | ESCs | Electronic Speed Controllers: pre-built, packaged circuits sold as driver solutions for brushless motors. |
| Actuators | 4 x Brushless Motors | The actuators that produce motion for this vehicle. |
| DC Power Source | Batteries | The rechargeable power source; its capacity largely determines how long the machine can stay in the air. |
| Central Computer | Microcontroller | The brain of the system: it handles all the computations of the control system’s mathematical model, interfaces all the sensors to collect the data needed for robust control, and communicates with the operator via the RF module to carry out commands. |
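To make the microcontroller’s role in the table above more concrete, here is a minimal, hypothetical firmware “super-loop” in C. All of the type and function names (read_imu, read_rf_command, compute_attitude_control, set_motor_speeds) are placeholders invented for illustration, not a real flight-controller API.

```c
#include <stdint.h>

/* Hypothetical data types for illustration only. */
typedef struct { int16_t roll_rate, pitch_rate, yaw_rate; } imu_sample_t;
typedef struct { uint16_t m1, m2, m3, m4; } motor_cmd_t;

/* Stub drivers: a real flight controller would talk to hardware here. */
static imu_sample_t read_imu(void)                { imu_sample_t s = {0, 0, 0}; return s; }
static uint16_t     read_rf_command(void)         { return 0; }
static void         set_motor_speeds(motor_cmd_t m) { (void)m; }

/* Stub control law: maps sensor data plus the operator command to motor outputs. */
static motor_cmd_t compute_attitude_control(imu_sample_t s, uint16_t cmd)
{
    motor_cmd_t m = { cmd, cmd, cmd, cmd };
    (void)s;                      /* a real controller would use the IMU data */
    return m;
}

int main(void)
{
    for (;;)                                              /* the classic embedded "super-loop" */
    {
        imu_sample_t s   = read_imu();                    /* 1. collect sensor data            */
        uint16_t     cmd = read_rf_command();             /* 2. receive the operator's command */
        motor_cmd_t  m   = compute_attitude_control(s, cmd); /* 3. run the control law         */
        set_motor_speeds(m);                              /* 4. drive the ESCs / motors        */
    }
}
```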
There is no doubt that the µC, the embedded computer, is the most important of these components, as it handles almost everything. A typical sensor or actuator may have a datasheet of no more than 10 pages. A very simple microcontroller, on the other hand, will typically have a datasheet at least 100 pages long. In fact, most MCU chips have datasheets that run between 200 and 2,000 pages, and each module within these small chips can have documentation that is 25 pages long on average.
That’s why the following tutorials focus on microcontroller programming: we’ll discuss each of the MCU’s modules in detail to understand the underlying mechanics, while also interfacing many common external modules and sensors along the way. We’ll do our best to balance understanding the MCU’s modules in depth with practically interfacing sensors and actuators to create small projects. So stick with me! It will be tough, but it’s really worth it and definitely rewarding.
What is a Microcontroller?
A Microcontroller is a single-chip, self-contained computer which incorporates all the basic components of a personal computer on a much smaller scale. Microcontrollers are typically used as embedded controllers that control some parts of a larger system such as mobile robots, computer peripherals, etc.
A microcontroller is fundamentally a smaller version of your personal computer. It has the same essential components, but with limited capabilities and resources. A rough comparison between a typical PC and a typical MCU might look like the table below.
| Resource | Personal Computer | Microcontroller |
| --- | --- | --- |
| RAM | 1 – 8 GB | 128 bytes – 512 KB |
| ROM | A few MB | 4 – 46 KB |
| Clock Rate | 1 – 4 GHz | 32 kHz – 20 MHz |
| CPU Cores | Up to 16 | 1 |
| Serial Ports | USB, RS-422 & RS-232 | UART, SPI, I2C & USB |
A microcontroller is not the same as a microprocessor. A microprocessor is a single-chip CPU used within other computer systems, while a microcontroller is itself a single-chip, complete computer system!
We use microcontrollers extensively in embedded systems design. You can spot them in a tremendous number of applications and devices around you. For some kinds of applications, a microcontroller can be more efficient and perform better than any general-purpose computer we use.
When it comes to low-end devices and minimal power consumption, nothing compares with microcontrollers. Drawing only a few milliamperes translates into a substantial increase in a device’s battery life!
Variants Of Microcontrollers
Microcontrollers can be categorized in many different ways: by memory architecture, bus width, CPU architecture, manufacturer, and so on. Here I’ll list some of these categories and finally tell you which MCU chips we’ll be using in these tutorials and why.
The bus width (the number of wires) determines the size of the data words the computer can handle. An X-bit microcontroller can manipulate data up to X bits in size in a single operation. Note that many MCUs have an instruction set built mostly around 8-bit data, with only a few instructions for handling 16-bit words; such a device is still called an 8-bit machine. In general, a microcontroller is considered an X-bit computer if most of the instructions in its instruction set are designed to manipulate data words of X bits.
The 8-bit microcontrollers are the most common and ship in the highest volumes. However, 16-bit and 32-bit microcontrollers are used for high-performance applications that demand more computational power. In short, the market-standard bus widths are the three listed below.
- 8-Bit µCs
- 16-Bit µCs
- 32-Bit µCs
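As a rough illustration (my own sketch, not from the original text), the C snippet below hints at what bus width means in practice: on an 8-bit core, adding two 8-bit variables typically compiles to a single add instruction, while a 16-bit addition has to be carried out by the compiler as several 8-bit operations with carry.

```c
#include <stdint.h>

volatile uint8_t  a8  = 200,   b8  = 30;    /* single-byte operands */
volatile uint16_t a16 = 40000, b16 = 3000;  /* two-byte operands    */

volatile uint8_t  sum8;
volatile uint16_t sum16;

void bus_width_demo(void)
{
    sum8  = a8 + b8;    /* on an 8-bit MCU: roughly one 8-bit add                 */
    sum16 = a16 + b16;  /* on an 8-bit MCU: multiple 8-bit adds plus carry logic  */
}
```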
A microcontroller’s memory may be embedded within the chip itself, or the chip may be designed to operate using an external memory hooked up to it.
| Internal Memory MCUs | External Memory MCUs |
| --- | --- |
| Designed with all the needed RAM and ROM built in, so no external memory connections are required. This represents the majority of MCUs manufactured today. | In the past, some chips had no program memory built in and had to have it connected externally. That’s what happened with Intel’s 4004-era MPUs in the early 1970s. This type of MCU largely disappeared with the emergence of internal-memory designs only a few years later, around 1975. |
Every microprocessor has its own instruction set, which defines the basic operations it can perform. More sophisticated functions can be built up from the basic instructions available in that set. Here is an example of the kind of assembly instructions you’ll find in almost any device:
| Instruction Name | The Function It Performs |
| --- | --- |
| ADD | Adds two operands together |
| SUB | Subtracts two operands |
| MUL | Multiplies two operands |
| DIV | Divides the first operand by the second |
| AND | Performs a bit-wise logical AND of two operands |
| OR | Performs a bit-wise logical OR of two operands |
| XOR | Performs a bit-wise logical exclusive OR of two operands |
| … | And so on |
Let’s consider a microcontroller with a very basic instruction set that does not include the modulus (%) operation, which returns the remainder of a division (e.g. 5 % 2 = 1, 7 % 3 = 1, 6 % 2 = 0, 10 % 4 = 2, and so on).
In that case, it’s the programmer’s task to use the basic instructions available in the MCU’s instruction set to implement the modulus function, and likewise for any other missing operation. A sketch of how that could look is shown below.
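As an example in C (my own illustration, not code from the original tutorial), a modulus routine can be built from nothing more than comparison and subtraction, operations that even the most basic instruction set provides:

```c
#include <stdint.h>

/* Computes a % b using only comparison and subtraction.
   Assumes b != 0; for large a/b ratios a real implementation
   would use shifts to speed this up. */
uint16_t simple_mod(uint16_t a, uint16_t b)
{
    while (a >= b)   /* keep subtracting the divisor...        */
    {
        a -= b;      /* ...until what remains is the remainder */
    }
    return a;        /* e.g. simple_mod(10, 4) == 2, simple_mod(6, 2) == 0 */
}
```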
Hence, there are basically two categories of instruction sets. Devices with only basic instructions are called RISC machines (RISC = Reduced Instruction Set Computer). Devices that also implement a bunch of sophisticated functions in hardware (digital logic) have complex instruction sets, and are therefore called CISC machines (CISC = Complex Instruction Set Computer). In short, the two major categories are RISC and CISC.
Back in the late 1940s, both Harvard and Princeton universities were asked by the US government to come up with a computer architecture that could be used in military applications. Princeton’s design was named after the mathematician John von Neumann; it used a single memory to store both program instructions and data variables. The Harvard architecture, in contrast, had two separate memories: a ROM for program instructions and a RAM for data variables. The diagram below shows the difference between the two architectures.
All in all, the Harvard architecture is the most common option for the microcontrollers used in embedded systems. However, you’ll still encounter Von Neumann machines, especially when working with larger computers (processors).
The microcontrollers we’ll be using in this series of tutorials are Harvard machines, like most microcontrollers on the market. That’s why we’ll only be concerned with this architecture and with understanding exactly how a Harvard machine works. We’ll address that process later in this tutorial, so stick around.
Here is a brief summarized comparison between the Harvard & Von-Neumann computer architectures
|Harvard Architecture||Von-Neumann Architecture|
|It uses two separate memory spaces for program instructions and data variables||It uses the same memory space for storing both program instructions and data variables|
|Allows for different bus widths||Limits the operating bus width|
|Data transfers and instruction fetches can be performed simultaneously||Data transfers and instruction fetches cannot be performed simultaneously|
|Creating a control unit for two different buses is a more complicated and costly process||Creating a control unit for a single bus is an easier, cheaper and faster process|
|The processor needs as little as one clock cycle to execute an instruction||The processor needs more clock cycles to complete an instruction execution|
|Is widely used in microcontrollers and digital signal processing units (DSPs)||Commonly used in our personal desktop computers (PCs) and laptops as well|
Each company in the microcontroller manufacturing business produces different series (families) based on its architecture. The obvious reason for doing so is to satisfy the needs of the many different market segments that require microcontroller chips with specific performance levels. Some applications require a lot of computational power, some require less, and some may require minimal electric power consumption, and so on.
The most common choices at the moment are a Microchip PIC, an Atmel AVR or an ARM chip. These are the most common and widely available architectures out there. An 8-Bit MCU from the mid-range PIC family will help you a lot in getting started, or equivalently an 8-Bit AVR will do the job just as well.
Don't worry too much about the exact devices being used at the production level. The embedded industry is so broad that there is room for every single architecture, each of which will at some point be the best fit for a given job.
Which MCUs We’ll Be Using And Why?
For this microcontroller programming series of tutorials, we'll be using an 8-Bit mid-range PIC microcontroller called the PIC16F877A, which you may have seen at least once before. Despite being an old product, it's still very useful and cost-efficient for both learning and creating projects. The newly manufactured chips, while carrying the same old name and architecture, are technically more efficient than before in terms of power consumption and overall performance.
We'll also be using some variants of the 16F family and some members of the 18F family in order to develop sophisticated USB drivers and create some interesting applications. This will help you see how easy it is to port your projects and transfer your knowledge from one platform to another.
You're always free to use whichever chip you want; there is no restriction to the 16F877A, as long as you make sure everything is correctly configured. In that case, please be advised to use my code listings at your own risk, or just comment wherever it's relevant and tell us your exact situation in order to get help.
Why will we use an 8-Bit mid-range MCU? Well, there are a few reasons to do so:
- It's a beginner-friendly device with limited resources, which will definitely help you stay focused on specific modules while writing efficient firmware that fits into these resource-constrained chips
- A simple MCU architecture means a short datasheet (around 200 pages), which is a lot easier to grasp
- Market availability. The 8-Bit MCUs have the lion's share of the embedded systems market in terms of devices shipped every single year, so it's easier to get your starter kit locally
- The large community that was, and still is, supporting these devices, which means you'll easily find helpful content online whenever you get stuck
- Low cost. Obviously, an 8-Bit microcontroller is way cheaper than a sophisticated one, so in case you don't end up interested enough in embedded systems, you won't have lost much money on a fancy kit.
I believe that platforms with very limited resources are the best choice for getting started in embedded systems. Embedded systems design is all about optimization and doing more with less: fewer instructions, less memory, less power, and less money. We are problem-solvers and optimizers. Having a platform with excessive resources encourages bad coding habits that can go forever uncorrected.
What’s Inside a Microcontroller?
A microcontroller is usually a black IC with a bunch of pins coming out of it. The most common package type used for prototyping is the DIP (dual in-line package), which fits nicely in a typical breadboard, while QFP packages are more space-efficient and are usually used for professional PCBs that need to take up much less space.
You should also note that the external packaging material is a solid epoxy used for protection; the actual microcontroller die is much smaller. Here is a picture of a microcontroller chip's internal structure, in which you can see that a typical microcontroller die is even smaller than the smallest of your fingernails!
Can you see those tiny wires coming out of the core? These wires are connected to the external pins on both sides of the chip, which we typically use for I/O (input/output) purposes. This essentially means that the maximum current that can be sunk or sourced by an I/O pin will be, at best, a few milliamps.
We stated earlier that a microcontroller is just a smaller version of a full computer machine. Here is a brief table of the main components (modules) typically found in a generic microcontroller.
|CPU||The core processing unit|
|RAM||Random Access Memory for data storage|
|ROM||Read Only Memory for program storage|
|I/O Pins (Ports)||8,10,16,20,28,40 or more pins|
|Serial Ports||USART, SPI, I2C and maybe USB|
|Timers||1,2,3 or more timers|
|Interrupts||Full logic circuitry to generate and control interrupt signals|
|Peripherals||Additional features as EEPROMs, PWM, Comparators and so on.|
|Oscillator||Generates or conditions the input clock signal to guarantee synchronized behavior for the whole machine|
The following hand-drawn diagram shows the main components typically found in a generic microcontroller, how they are organized, and the simplified interconnections that link all of these modules together.
This is the workhorse of the microcontroller's internal system. It handles your program instructions and executes them after fetching and decoding. It also contains the ALU, which does all the arithmetic and logical calculations. A typical simple 8-Bit CPU executes a single instruction in about 4 clock (machine) cycles, while a pipelined design will, seemingly, run 1 instruction per machine cycle.
The Random Access Memory is the unit that contains the data registers, which are used for data storage and also for controlling the operation of all other modules. The registers in RAM can be divided into two categories: GPRs and SFRs.
GPRs: stands for General Purpose Registers, which are used to store data variables at whatever addresses the compiler allocates. Most of the registers in a typical RAM are GPRs.
SFRs: stands for Special Function Registers, which are hard-wired to most of the modules within a microcontroller. This allows each module to be controlled by moving 0's and 1's into the special register that corresponds to it. For example, TIMERx can be turned ON/OFF by writing 1 or 0 to the bit TMRxON, where x may be 0, 1 or 2, and so on. A small sketch of this is shown right below.
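As a rough illustration of the idea (not code from this tutorial), here is how flipping a single SFR bit looks in C with the XC8 compiler. The register and bit names (T1CONbits.TMR1ON) follow the PIC16F877A conventions and should be double-checked against your device's datasheet and header file:

    #include <xc.h>     /* device header provided by the XC8 toolchain */

    void timer1_on_off_demo(void)
    {
        /* SFR bits are exposed as bit-fields in the device header.
           Writing 1 to the TMR1ON bit starts TIMER1; writing 0 stops it. */
        T1CONbits.TMR1ON = 1;   /* start TIMER1 */

        /* ... do some work while the timer is counting ... */

        T1CONbits.TMR1ON = 0;   /* stop TIMER1 */
    }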
Read Only Memory. This memory unit is dedicated to storing the program instructions (assembly). Every single instruction is stored in an x-bit wide location; the number of bits depends on the architecture and design of the CPU and its instruction set. The MCU we'll be using has a ROM that is 14 bits wide.
Program instructions are stored in consecutive order in the ROM, unlike the random-access nature of RAM. The instructions are addressed by a specific register called the program counter (PC), which points to the instruction that has to be executed next. Executing a full program is a matter of incrementing the PC so that it points to each instruction to be executed by the CPU in turn.
The input/output ports are the most basic way to communicate with a microcontroller. You can input a digital signal to a microcontroller with a simple push button or a sensor. Likewise, you can get a digital output (0 or 1) out of a microcontroller, which could be used to blink an LED, trigger a motor driver circuit, or whatever you like.
Serial ports allow for more advanced communication with a microcontroller, as we can send or receive streams of data serially. This could be numbers, audio, images or even files. Serial ports are extensively used to create machine-to-machine interfaces as well as human-machine interfaces.
Serial ports include USART, SPI, I2C, USB and many more special-purpose protocols. We'll discuss each of them in depth later in this series of tutorials.
There is hardly any embedded application doing real work in the real world that does not use a timer in one way or another.
A typical microcontroller may have 1, 2 or more hardware timer modules, which are mainly used for generating the time intervals that separate specific events and for controlling time-dependent events. They are also used for measuring the time between any pair of hardware/software events, as if they were a stopwatch!
An interrupt is an event that suspends the main program execution while the event is serviced (Handled) by another program. Interrupts substantially increase the system’s response speed to external events. Different microcontrollers have different interrupt sources which may include timers, serial ports, IRQs or even software interrupt.
Immediately on receiving an interrupt signal, the current instruction is suspended, the interrupt source is identified, and the CPU branches (vectors) to an interrupt service routine stored at a specific address in memory. The interrupt handler program (code) is usually called the interrupt service routine, or ISR.
The interrupt circuitry unit is a digital logic circuit that controls the generation of interrupt signals and commands the CPU to handle these interrupt requests on time. A rough C sketch of what an ISR looks like is shown right below.
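The sketch below is only meant to show the general shape of an ISR in C with XC8; the __interrupt() qualifier is XC8 2.x syntax (older versions use `void interrupt isr(void)`), and the TMR0IE/TMR0IF bit names are assumptions to verify in your device header:

    #include <xc.h>

    volatile unsigned int tick_count = 0;   /* shared with the main code */

    /* Mid-range PICs have a single interrupt vector, so one ISR serves
       all sources and we test the flag bits to find out which one fired. */
    void __interrupt() isr(void)
    {
        if (INTCONbits.TMR0IE && INTCONbits.TMR0IF)
        {
            INTCONbits.TMR0IF = 0;   /* clear the flag or we re-enter */
            tick_count++;            /* keep ISR work short and fast  */
        }
    }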
This module is responsible for synchronized operation within the microcontroller itself. Even if the microcontroller has no internal oscillator, this module is still there to condition the input clock signal and generate a unified internal clock that synchronizes all of the events/operations within the microcontroller.
These are basically the wires that connect everything inside a microcontroller. A bus is characterized by its width, i.e. the number of physical wires that make up the bus itself. It could be 4, 8, 16, or any other number.
If a device A is connected to a device B via a 4-wire bus, this can be represented symbolically as shown in the diagram below.
The bus is drawn as a straight line that connects one digital circuit to another, and the bus width is abbreviated with a small number that tells how many wires there are.
A microcontroller may or may not have any of the following modules, or it may have all of them. All in all, these modules are extra features that a microcontroller can run flawlessly without. You can always buy any of these modules as a separate IC package and hook it up to your microcontroller externally quite easily.
|EEPROM||Electrically Erasable Programmable Read Only Memory. This memory unit is used for storing data that we need our MCU to retain even when the power goes OFF.|
|ADC||Analog To Digital Converter. This module is used to convert analog voltage values (0-5v) to a digital value on a scale that depends on the resolution of the ADC.
An 8-Bit ADC can convert (0-5v) to a scale of (0-255), and a 10-Bit ADC can convert it to a scale of (0-1023), and so on (see the sketch after this table).|
|DAC||Digital To Analog Converter. This module converts the digital values to analog voltage (0-5v) that could be used to generate audio waveforms or whatever.|
|PWM||Pulse Width Modulation. This module uses one of the hardware timer modules available on your MCU chip to generate a square-wave signal whose average value you can control (by changing its duty cycle). You also have control over the output signal's frequency.|
|Comparator||This module is an analog comparator that is integrated within the MCU chip itself. So you don’t need to convert analog signals to digital values for comparison. With this module, you can compare analog signals on the run.|
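The ADC scaling mentioned in the table above is plain proportional arithmetic. Here is a small, self-contained C sketch (host-side math only, no MCU registers involved) that converts a 10-bit reading back into millivolts, assuming a 5 V reference:

    #include <stdio.h>

    /* Convert a raw 10-bit ADC reading (0..1023) to millivolts,
       assuming a 5000 mV reference where 1023 means full scale.  */
    static unsigned int adc_to_millivolts(unsigned int raw)
    {
        return (unsigned int)(((unsigned long)raw * 5000UL) / 1023UL);
    }

    int main(void)
    {
        printf("%u mV\n", adc_to_millivolts(0));      /* 0 mV          */
        printf("%u mV\n", adc_to_millivolts(512));    /* about 2502 mV */
        printf("%u mV\n", adc_to_millivolts(1023));   /* 5000 mV       */
        return 0;
    }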
During this microcontroller programming series of tutorials, we’ll discuss all of the above modules and even much more. But this will be done after the end of the course’s core content itself.
Microcontrollers VS Microprocessors
There is a fundamental difference between microcontrollers (MCUs) and microprocessors (MPUs). However, many people still confuse these terms, so let's put it in perspective.
The chip on the left is a microcontroller, an ATmega16 from Atmel. This single chip has a built-in CPU, RAM, ROM, IO ports, serial ports (SPI, I2C & UART), 3 timers and many other peripherals. It's a full computer on a single chip, on which you can flash and run your code and also take advantage of all the built-in peripherals.
The chip on the right is a Zilog Z80 microprocessor, an old microprocessor released back in 1976. This is a bare processor that needs RAM, ROM, and interfaces before it can do any computing job the way a microcontroller can.
Now, I will connect some peripherals to my Z80 MPU, such as a dynamic RAM, ROM, ADC, DAC, and an IO controller, as shown in the image below.
Now we're talking! The microprocessor is now connected to a RAM & ROM, which means we can load a program for it to execute. It's also connected to an ADC, DAC and IO controller, which means it's capable of doing some basic input/output operations.
But is it really comparable to a microcontroller? Nope! Even after connecting the peripherals as shown above, comparing the MPU plus its externally connected peripherals with an MCU still leaves the MCU way ahead: it has more peripherals, ports, and features than the few external ones we added to the MPU.
The Bottom Line
A microcontroller is typically a smaller version of a computer that has a microprocessor as its CPU, along with internal RAM, ROM and a bunch of peripherals. It's a full computer system on a single chip!
A microprocessor, on the other hand, is a single-chip CPU that cannot do anything without external hardware connected to it to form a computer system.
You should be familiar with the following fundamental terminology. As a beginner in microcontroller programming, it's essential to understand all of the following terms. Read them carefully and mark the ones you can't grasp right now. At the end of this tutorial, you'll be pointed back to read these definitions again; by then everything should be clear. If not, please use your search engine to find out what these terms actually mean.
|Program||Set of instructions for a computer written in a programming language that implements an algorithm, finally assembled into machine instructions that are encoded and flashed to memory in the form of 0's & 1's|
|Paging||A page is a logical block of memory. A paged memory system uses a page address and a displacement address to refer to a specific memory location.|
|Bank||A logical unit of memory, which is hardware-dependent. The size of a bank is further determined by the number of bits in a column and a row, per chip, multiplied by the number of chips in a bank.|
|Pointer||An address of a specific object in memory used to refer to that object.|
|Stack||The hardware Stack is a section of Memory which is used to store temporary data or registers’ values. A software stack is a last-in-first-out (LIFO) data structure that contains information that is saved (PUSHed) and restored (POPed).|
|Stack Pointer||A register that contains the address of the top of the hardware stack.|
|Program Counter||Abbreviated and well-known as the PC. It’s a register which holds the address of the next instruction to be executed. The program counter is incremented after each instruction is fetched (It’s incremented once/instruction cycle).|
|Interrupts||An interrupt is an event that suspends the main program execution while the event is serviced (Handled) by another program. Interrupts substantially increase the system’s response speed to external events|
|Interrupt Vector||The location from which the program continues execution. The location containing interrupt vector is usually passed over during regular program execution.|
|Clock||This is the beating heart of a microcontroller. It's a fixed-frequency signal that triggers or synchronizes CPU operations and events. A clock has a frequency, which describes its rate of oscillation, typically in MHz. The clock source could be an external RC network, a crystal oscillator, a resonator, or even a built-in internal clock source. The frequency of oscillation is usually referred to as FOSC.|
|Machine Cycle||The machine cycle is the time it takes the system's clock to repeat itself. For a microcontroller that has a 4MHz crystal oscillator hooked to its clock input pins, the machine cycle will be 1/4,000,000 s = 250 nanoseconds.|
|Instruction Cycle||The instruction cycle is the time it takes a microcontroller to complete the execution of a single instruction. This includes fetching, decoding, execution and saving the result, which takes 4 clock cycles for the MCU we'll be using, so 1 instruction cycle = 4 machine cycles.
We can also say that a CPU running at 4MHz executes FOSC/4 = 1 million instructions per second (1 MIPS).|
We'll be programming MCUs during these tutorials, as you should expect. But what kind of development is this said to be? Well, when you're coding on your PC to create applications that also run on PCs, this is simply called development. However, that's not always the case.
In many situations, you'll be writing code on one machine (e.g. your PC) that will eventually get compiled to machine code that runs on another machine (e.g. Android, iOS, MCUs, etc.). This is what we call cross-development, and this is the case for embedded firmware programming.
A compiler used for cross-development is often called a Cross-Compiler.
Which Language To Use?
There are actually many programming languages available for developing embedded software. However, the C programming language is the standard in this industry. It has been, and still is, considered the most efficient way to interact with low-level hardware at the register level. Most advanced high-volume quality products are developed in C. We may mention assembly at some points.
It's been known for a long time that you can do more optimization in your system with assembly language. That's true, but use it for learning purposes only at this stage; otherwise, you'll be wasting too much time without gaining transferable skills. Every processor has its own instructions, and consequently its own assembly commands, so it will only ever be an educational exercise.
However, you're not restricted to these choices. Everybody has their own preferences, so I encourage you to use the language you're familiar with, if it's available for the device family you're using!
The compiler ( XC8 )
The programming language in which we'll be writing firmware for the PIC MCUs is the C language, standard ANSI-C, which has been the most efficient option for embedded software development over the past few decades and still proves to be the best fit for writing firmware today.
Please be advised that this course is built on the assumption that you're familiar with the basic concepts of the C language. However, all code listings are written to be newbie-friendly while keeping the overall efficiency as high as possible.
Almost all industrial machines, military aircraft, rockets, spacecraft, compilers and even other programming languages are built in C. With this fact in mind, you should appreciate the effort and time it takes a person to become good at this language. Just practice it as much as you can; that's the key to mastery in almost every field of knowledge.
The C-Compiler we’ll be using is called XC8 from Microchip, for 8-Bit MCUs. You can download the suitable version for your OS from this Link.
Setting up the compiler is also a straightforward process that you can easily do yourself, or you can follow the steps in the next tutorial for installing both the MPLAB IDE and the XC8 compiler.
IDE ( MPLAB X )
The integrated development environment (IDE) is the software platform on which we'll be developing our projects. It's computer software that gathers a bunch of tools to eliminate the overhead of less important tasks. This helps developers focus their attention on the system as a whole and spot the major problems that need more investigation.
The IDE we'll be using for this course (tutorials) is called MPLAB X, from Microchip itself, the manufacturer of PIC microcontrollers. It's the official option and also the most powerful one out there. It has so many great features that it may cause you a little frustration at first; no need to worry, you'll get used to it very soon. You can download the suitable version for your operating system (Windows, Linux & macOS) through the link down below.
Other IDE options are mostly pirated (cracked) software or drag-and-drop Arduino-like environments. This is based on my own experience; maybe there is a worthy choice that I wasn't lucky enough to find. All in all, the official MPLAB IDE is more than enough for this course for the time being.
For installation, you can follow the video down below (in the compiler's section). However, you don't actually have to; installing MPLAB is pretty much a straightforward process that sounds like: Next, Next, Next, Yes, Finish!
One thing to note is that you MUST install the MPLAB IDE first, before installing the compiler, to avoid integration issues between the two.
Despite the fact that IDEs are created to be very useful and efficient tools, the other side of the coin is that they give us enough luxury to abandon all discipline at some point and tweak things here and there until a solution happens to work. At the end of the day, we've ended up with too many engineers who have limited experience in estimating a system's behavior under given circumstances. Be careful.
How Does a µC Run a Program?
Now, we're about to discuss how an embedded program runs on a typical microcontroller in technical terms. First of all, the program has to be loaded into the microcontroller's program memory (ROM). After writing the code in C, the compiler and assembler generate a .hex file that you then burn (flash) to the microcontroller chip. The program instructions will be a bunch of 0's and 1's, obviously.
Next, the chip must be powered up like any electronic device. Typical microcontrollers run at 3.3 or 5v DC. We should also connect the crystal oscillator to provide the chip with its clock input.
Now, the microcontroller will start executing the instructions stored in memory sequentially. Each instruction execution takes up to 4 clock cycles, which work as follows:
- An instruction is fetched by the CPU from the program memory. And the program counter PC gets incremented to point at the next instruction in the program memory.
- The fetched instruction will be decoded in the 2nd clock cycle.
- The instruction operation is performed (executed), it could be ADD, SUB, MOV, DIV or whatever. The execution happens during the third clock cycle.
- Finally, the result gets saved to the working register in the 4th cycle, which is the final step in executing the first program instruction.
- Now the PC is pointing to the 2nd instruction, as you saw in step 1, and everything is repeated over and over again, forever or until the power goes OFF!
This obviously explains why a machine cycle = 1/4 of an instruction cycle, or in other words, why the MCU is said to perform FOSC/4 instructions per second. The short sketch below turns this relationship into numbers.
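To make the numbers concrete, here is a tiny C sketch of that arithmetic (nothing device-specific, just the FOSC/4 relationship described above):

    #include <stdio.h>

    int main(void)
    {
        double fosc_hz    = 4000000.0;            /* 4 MHz crystal                     */
        double t_clock_ns = 1e9 / fosc_hz;        /* one clock period  -> 250 ns       */
        double t_instr_ns = 4.0 * t_clock_ns;     /* one instruction cycle -> 1000 ns  */
        double mips       = fosc_hz / 4.0 / 1e6;  /* FOSC/4 -> 1 MIPS in this case     */

        printf("clock period      : %.0f ns\n", t_clock_ns);
        printf("instruction cycle : %.0f ns\n", t_instr_ns);
        printf("throughput        : %.2f MIPS\n", mips);
        return 0;
    }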
Using computer simulation software can substantially improve your learning experience across multiple embedded platforms (e.g. ARM, AVR, PIC, etc.). However, a typical simulator (e.g. Proteus) will only catch logical (code) errors. Even so, it's still a powerful tool that saves you from flashing and testing firmware on real hardware whenever the simulation can easily show that the firmware won't run correctly or won't give the expected results. With only a few clicks and some drag-and-drop, you'll have your schematic fully connected and ready for some testing.
You can use whichever simulator you prefer. I’d strongly recommend Proteus for most beginners. And you can check their website to find a student version or whatever suits your budget.
Even if your code runs flawlessly in the simulator with 0% error, there is still no guarantee that it will run seamlessly in a real-life setting on a real prototyping board. Any MCU-based system can go really insane just because of tiny hardware issues, and these issues are impossible to predict by any means of computer simulation.
Development Hardware Kit
A typical course in embedded systems is practical in nature, hence the importance of getting a decent hardware kit to play with. The list down below includes all the necessary parts and components needed for the practical LABs in the upcoming tutorials.
We can also split these components into 2 categories: the basic kit and the extra modules. The first is everything you must have in order to follow these tutorials and create all the basic projects yourself in a real-world sense. The latter list contains some extra modules, for beginners with a larger budget, which help you complete almost all of the tutorials covered herein or create whatever project you'd like to develop at your own pace.
- The Basic Course Kit (Essentials)*
- Extra Modules For Advanced Tutorials (Optional)
Check out the specific components part names and quantities in the course kit resources page.
In embedded systems practice, you'll be working with various hardware parts, and especially microcontrollers, which introduces the following issues.
I- Programming microcontrollers in the generic way involves removing them from the breadboard and mounting them on the programmer circuitry. Once the firmware has been flashed to the MCU, you take it back to the breadboard testing environment. As you may have noticed, updating the firmware involves repeatedly removing and reinserting the MCU, which can damage it and result in bent or broken pins.
II- MCUs typically operate at 3.3v or 5v, which requires that you have your own power supply and voltage regulation on board in order to guarantee the required operating power specs.
For the reasons above, we'll stick to a fixed on-board development connection. This means you'll have to construct the basic circuit down below only once; then you'll be able to easily test your projects on-board. This connection has the following features:
- On-board ICSP (In-Circuit Serial Programming) port, which means you can easily flash the chip without removing it from the breadboard.
- Regulated 5v power supply connection for powering the chip up, with an LED indicator.
- Reset pin is pulled-up and hooked to a push button.
- Oscillator input pins are connected to our crystal oscillator.
The final fully connected circuit is shown down below.
At the end of this tutorial, please double-check the terminology you saw earlier, just to make sure these concepts are now clearer than before. Click the link down below.
PIC Microcontrollers Course Home Page 🏠
|Previous Tutorial||Tutorial 1||Next Tutorial|
Sphere and cone
Within a sphere of radius G = 36 cm, inscribe the cone with the largest volume. What is that volume, and what are the dimensions of the cone?
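One possible worked solution (added here; the original page leaves the solution to the reader): place the cone of height h inside the sphere so that its apex and its base circle lie on the sphere; then the base radius satisfies r^2 = h(2G - h), and we maximize the volume over h.

    V(h) = \frac{\pi}{3} r^2 h = \frac{\pi}{3} h^2 (2G - h), \qquad 0 < h < 2G
    V'(h) = \frac{\pi}{3}\,(4Gh - 3h^2) = 0 \;\Rightarrow\; h = \frac{4G}{3}
    r^2 = h(2G - h) = \frac{8G^2}{9}, \qquad V_{\max} = \frac{32\pi G^3}{81}
    G = 36\ \text{cm}: \quad h = 48\ \text{cm}, \quad r = 24\sqrt{2} \approx 33.9\ \text{cm}, \quad V_{\max} = 18432\pi \approx 57\,906\ \text{cm}^3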
To solve this example, you need the following knowledge from mathematics:
Next similar examples:
- Cube in sphere
A cube with edge 8 cm is inscribed in a sphere. Find the radius of the sphere.
- Billiard balls
A layer of ivory billiard balls of radius 6.35 cm is in the form of a square. The balls are arranged so that each ball is tangent to every one adjacent to it. In the spaces between sets of 4 adjacent balls other balls rest, equal in size to the original. T
- Cube in ball
A cube is inscribed in a sphere of radius 241 cm. What percentage of the sphere's volume is the volume of the cube?
- Sphere vs cube
How many % of the surface of a sphere of radius 12 cm is the surface of a cube inscribed in this sphere?
- Two balls
Two balls, one 8cm in radius and the other 6cm in radius, are placed in a cylindrical plastic container 10cm in radius. Find the volume of water necessary to cover them.
- Cube in a sphere
A cube is inscribed in a sphere with volume 3724 cm3. Determine the length of the cube's edges.
- Hemispherical hollow
A hemispherical hollow vessel is filled with water to a height of 10 cm. How many liters of water are inside if the inside diameter of the hollow is d = 28 cm?
- Cube and sphere
A sphere is circumscribed about a cube with surface area 150 cm2. What is the surface area of the sphere?
- Sphere area
A sphere is circumscribed about a cube with edges 1 m long (the vertices of the cube lie on the surface of the sphere). Determine the surface area of the sphere.
One cube is inscribed in a sphere and another is circumscribed about it. Calculate the difference of the volumes of the cubes if the difference of their surface areas is 254 cm2.
The surface of a sphere is 2820 cm2 and its weight is 71 kg. What is its density?
- Cube into sphere
The largest possible sphere was carved out of a cube. Determine what percentage of the material was wasted.
- MO SK/CZ Z9–I–3
John had a ball that rolled into a pool and floated in the water. Its highest point was 2 cm above the surface. The diameter of the circle marking the water level on the surface of the ball was 8 cm. Determine the diameter of John's ball.
- Tropical, mild and arctic
What percentage of the Earth's surface lies in the tropical, temperate and arctic zones? The borders between the zones are the parallels 23°27' and 66°33'.
A cuboid with edge a = 24 cm and body diagonal u = 50 cm has volume V = 17280 cm3. Calculate the lengths of the other edges.
- Cube diagonal
Determine the length of the diagonal of a cube with edge 33 km.
At point O, three orthogonal forces act: F1 = 20 N, F2 = 7 N and F3 = 19 N. Determine the resultant F and the angles between F and the forces F1, F2 and F3.
A bar chart is a diagram in which the numerical values of variables are represented by the height or length of lines or rectangles of equal width. In simpler terms, they present categorical data using rectangular bars. Bars used in this case could be horizontal or vertical. We can plot bar charts with the `pyplot` module in the Matplotlib library, `matplotlib.pyplot`.
To plot a bar chart, we use the following command: `pyplot.bar(x, y)`. The `x` and `y` parameter values represent the horizontal and vertical axis of the plot, respectively.
It is worth noting that the `pyplot.bar` function takes more parameter values than just the `x` and `y` axis values. However, it is beyond the scope of this shot to look at the other parameters.
A vertical bar represents the data vertically. Hence, its bars are drawn vertically. In such a chart, the data categories are shown on the x-axis while the data values are shown on the y-axis.
# importing the necessary libraries and modules
import matplotlib.pyplot as plt
import numpy as np

# creating the data values for the vertical y and horizontal x axis
x = np.array(["Oranges", "Apples", "Mangoes", "Berries"])
y = np.array([3, 8, 1, 10])

# using the pyplot.bar function
plt.bar(x, y)

# to show our graph
plt.show()
As seen in the code above, we first import the libraries and modules (`matplotlib.pyplot` and `numpy`) needed to plot a bar chart. With `numpy.array()`, we create arrays of values for the `x` and `y` axes. Then, using the `pyplot.bar(x, y)` function, we create a vertical bar chart from the horizontal and vertical values. Lastly, we use the `pyplot.show()` function to display our graph.
A horizontal bar chart represents the data horizontally. Hence, its bars are drawn horizontally. The data categories are shown on the y-axis, while the data values are shown on the x-axis.
The unique thing about the syntax for the horizontal bar chart is the `h` after `bar` in `pyplot.barh()`:

# importing the necessary libraries and modules
import matplotlib.pyplot as plt
import numpy as np

# creating the data values for the vertical y and horizontal x axis
x = np.array(["Oranges", "Apples", "Mangoes", "Berries"])
y = np.array([3, 8, 1, 10])

# using the pyplot.barh function for the horizontal bar
plt.barh(x, y)

# to show our graph
plt.show()
We can see that the only difference between the vertical and horizontal bar charts is their syntax. We use the `pyplot.bar()` function for a vertical bar chart and the `pyplot.barh()` function for a horizontal bar chart.
By Catherine Close, PhD, Psychometrician
In this post, we take a closer look at validity. In the past we've noted that test scores can be reliable (consistent) without being valid, which is why validity ultimately takes center stage. We will still define validity as the extent to which a test measures what it's intended to measure for the proposed interpretations and uses of test scores.
Going beyond the definition, we begin to talk about evidence—a whole lot of evidence—needed to show that scores are valid for the planned uses. What kind of evidence? Well, it depends. But before you run for the hills, let me tell you that the way we plan to use test scores is the important thing. So, our primary goal is to provide validity evidence in support of the planned score uses.
There are several types of validity evidence. Although they are presented separately, they all link back to the construct. A construct is the attribute that we intend to measure. For example, perhaps reading achievement. If the items in a reading achievement test are properly assembled, students’ responses to these items should reflect their reading achievement level. We look for evidence of just that, in various ways:
Evidence related to construct. Evidence that shows the degree to which a test measures the construct it was intended to measure.
Evidence related to content. Evidence that shows the extent to which items in a test are adequately matched to the area of interest, say reading.
Evidence related to a criterion. Evidence that shows the extent to which our test scores are related to a criterion measure. A criterion measure is another measure or test that we desire to compare with our test. There are many types of criterion measures.
Again, these types of evidence all relate to the construct—the attribute the test is intended to measure—as we shall see in the example below. Before we continue, recall that in our last blog on reliability we defined the correlation coefficient statistic. This is another key term to understand when evaluating test validity, because the correlation coefficient is also used to show validity evidence in some instances. When used this way, the correlation coefficient is referred to as a validity coefficient.
Suppose we have a test designed to measure reading achievement. The construct here is reading achievement. Can we use the scores from this test to show students’ reading achievement?
First, we might want to look at whether the test is truly measuring reading achievement—our construct. So, we look for construct-related evidence of validity. Evidence commonly takes two forms:
Evidence of a strong relationship between scores from our test and other similar tests that measure reading achievement. If scores from our test and another reading achievement test rank order students in a similar manner, the scores will have a high correlation that we refer to as convergent evidence of validity.
Evidence of a weak relationship between our test and other tests that don’t measure reading achievement. We may find that scores from our test and another test of, say, science knowledge have a low correlation. This low correlation—believe it or not!—is a good thing, and we call it divergent evidence of validity.
Both convergent and divergent evidence are types of construct-related evidence of validity.
Second, a reading achievement test should contain items that specifically measure reading achievement only, as opposed to writing or even math. As a result, we look for content-related evidence of validity. This evidence is contained in what we call a table of specifications or a test blueprint. The test blueprint shows all of the items in a test and the specific knowledge and skill areas that the items assess. Together, all of the items in a test should measure the construct we want to measure. I’ll tell you that although the test blueprint is enough to demonstrate validity evidence related to content, it is only a summary of a much lengthier item development process used to show this type of validity evidence.
Third, being able to compare scores from our test with scores from another similar test that we hold in high esteem is often desirable. This reputable test is an example of a criterion measure. If students take both the reading test and this criterion measure at approximately the same time, we look for a high correlation between the two sets of scores. We refer to this correlation coefficient as concurrent evidence of validity.
What if I told you that you can also predict—without a crystal ball—how your students will likely perform on an end-of-year reading achievement test based on their current scores on our reading test? You may not believe me, but you sure can! You simply take scores on the reading test taken early in the year and compare them with the end-of-year reading test scores for the same students. A high correlation between the two sets of scores tells you that students who score highly on the reading test are also likely to score highly on the end-of-year reading test. This correlation coefficient shows predictive evidence of validity.
Both concurrent and predictive evidence are types of criterion-related evidence of validity.
Finally, there’s a fourth type of validity evidence related to consequences. Validity evidence for consequences of testing refers to both the intended and the unintended consequences of score use. For example, our reading test is designed to measure reading achievement. This is the intended use if we only use it to show how students are performing in reading. However, this same test may also be used for teacher evaluation. This is an unintended score use in this particular instance, because whether the test accurately measures reading achievement—the purpose for which we validated the scores—has no direct relationship with teacher evaluation. If we desire to use the scores for teacher evaluation, we must seek new validity evidence for that specific use.
Still, there are other unintended consequences, usually negative, that don’t call for supporting validity evidence. An example might be an instance where the educator strays from the prescribed curriculum to focus on areas that might give his or her students a chance to score highly on the said reading test and hence deny the students an opportunity to learn important materials.
The burden of proof of validity evidence lies primarily with the test publisher, but a complete list of all unintended uses that may arise from test scores is beyond the realm of possibility. Who then is responsible for validity evidence of unintended score uses not documented by the test publisher? You guessed right—there’s still no agreement on that one.
Test score validity is a deep and complex topic. The above summary is by no means complete, but it gives you a snapshot of the most common types of validity evidence. Again, the specific interpretations we wish to make about test score uses will guide our validation process. Hence, the specific types of validity evidence we look for may be unique to our specific use for the test scores in question.
With validity evidence in hand, how then do you determine whether the evidence is good enough?
Although validity coefficients generally tend to be smaller than reliability coefficients, validity—much like reliability—is a matter of degree. Just how good is good enough is largely tied to the stakes in decision making. If the stakes are high, stronger evidence might be preferred than if the stakes were lower.
In general, some arbitrary guidelines are cited in literature to help test users interpret validity coefficients. Coefficients equal to .70 or greater are considered strong; coefficients ranging from .50 to .70 are considered moderate, and coefficients less than .50 are considered weak. Usually, there is additional evidence that these coefficients are not simply due to chance.
At Renaissance, we dedicate a whole chapter in the Star technical manuals to document validity as a body of evidence. Part of that evidence shows the validity coefficients, which for the Renaissance Star Assessments® range from moderate to strong. To summarize, when judging the validity of test scores, one should consider the available body of evidence, not just the individual coefficients.
For the best outcome, the validation of a test for specific uses is best achieved through collaboration between educators and the test designers. This joint effort ensures that the educator is aware of the intended uses for which the test is designed and seeks new evidence if there’s a need to use scores for purposes not yet validated.
Well, this concludes our series on reliability and validity. I hope this overview of the basics will help you make sense of test scores and better evaluate the assessments available. I hope you’ll also check out my next post on measurement error!
Catherine Close, Ph.D., is a psychometrician at Renaissance who primarily works with the Star computerized adaptive tests team. |
The Clem language
Clem (pronounced klem) is a stack based programming language with first-class functions created by User:Orby in 2014. The best way to learn Clem is to run the `clem` interpreter in interactive mode, allowing you to play with the available commands. A reference interpreter written in C is available here. To run the example programs which come with the reference interpreter, type `clem example.clm` where example is the name of the program. This brief tutorial should be enough to get you started.
There are two main classes of functions. Atomic functions and compound functions. Compound functions are lists composed of other compound functions and atomic functions. Note that a compound function cannot contain itself.
The first type of atomic function is the constant. A constant is simply an integer value. For example, -10. When the interpreter encounters a constant, it pushes it to the stack. Run `clem` now. Type `-10` at the prompt. You should see
> -10
001: (-10)
>
The value `001` describes the position of the function in the stack and `(-10)` is the constant you just entered. Now enter `+11` at the prompt. You should see
> +11
002: (-10)
001: (11)
>
Notice that `(-10)` has moved to the second position in the stack and `(11)` now occupies the first. This is the nature of a stack! All other atomic functions are commands. There are 14 in total:
@  Rotate the top three functions on the stack
#  Pop the function on top of the stack and push it twice
$  Swap the top two functions on top of the stack
%  Pop the function on top of the stack and throw it away
/  Pop a compound function. Split off the first function, push what's left, then push the first function.
.  Pop two functions, concatenate them and push the result
+  Pop a function. If its a constant then increment it. Push it
-  Pop a function. If its a constant then decrement it. Push it
<  Get a character from STDIN and push it to the stack. Pushes -1 on EOF.
>  Pop a function and print its ASCII character if its a constant
c  Pop a function and print its value if its a constant
w  Pop a function from the stack. Peek at the top of the stack. While it is a non-zero constant, execute the function.
q  Quit
h  Print this list
Typing a command at the prompt will execute the command. Type `#` at the prompt (the duplicate command). You should see
> #
003: (-10)
002: (11)
001: (11)
>
Notice that the (11) has been duplicated. Now type `%` at the prompt (the drop command). You should see
> %
002: (-10)
001: (11)
>
To push a command to the stack, simply enclose it in parenthesis. Type `(-)` at the prompt. This will push the decrement command to the stack. You should see
> (-)
003: (-10)
002: (11)
001: (-)
>
You may also enclose multiple atomic functions in parenthesis to form a compound function. When you enter a compound function at the prompt, it is pushed to the stack. Type `($+$)` at the prompt. You should see
> ($+$)
004: (-10)
003: (11)
002: (-)
001: ($ + $)
>
Technically, everything on the stack is a compound function. However, some of the compound functions on the stack consist of a single atomic function (in which case, we will consider them to be atomic functions for the sake of convenience). When manipulating compound functions on the stack, the `.` command (concatenation) is frequently useful. Type `.` now. You should see
> .
003: (-10)
002: (11)
001: (- $ + $)
>
Notice that the first and second functions on the stack were concatenated, and that the second function on the stack comes first in the resulting list. To execute a function that is on the stack (whether it is atomic or compound), we must issue the `w` command (while). The `w` command will pop the first function on the stack and execute it repeatedly so long as the second function on the stack is a non-zero constant. Try to predict what will happen if we type `w`. Now, type `w`. You should see
> w
002: (1)
001: (0)
>
Is that what you expected? The two numbers sitting on top of the stack were added and their sum remains. Let's try it again. First we'll drop the zero and push a 10 by typing `%10`. You should see
> %10
002: (1)
001: (10)
>
Now we'll type the entire function in one shot, but we'll add an extra `%` at the end to get rid of the zero. Type `(-$+$)w%` at the prompt. You should see
> (-$+$)w%
001: (11)
>
(Note that this algorithm only works if the first constant on the stack is positive: each pass of the loop body `- $ + $` decrements the counter on top of the stack and increments the other number, and `w` keeps repeating it until the counter reaches zero, which never happens if it starts out negative.)
Strings are also present. They are mostly syntactic sugar, but can be quite useful. When the interpreter encounters a string, it pushes each character from last to first onto the stack. Type `%` to drop the 11 from the previous example. Now, type `0 10 "Hi!"` on the prompt. The `0` will insert a NULL terminator and the `10` will insert a new-line character. You should see
> 0 10 "Hi!" 005: (0) 004: (10) 003: (33) 002: (105) 001: (72) >
Type `(>)w` to print characters from the stack until we encounter the NULL terminator. You should see
> (>)w
Hi!
001: (0)
>
These can easily be concatenated to produce a quine
> (")10$#34$(34 40 (>)w)....1$w")10$#34$(34 40 (>)w)....1$w (")10$#34$(34 40 (>)w)....1$w")10$#34$(34 40 (>)w)....1$w Empty stack >
Hopefully this should be enough to get you started with the interpreter. The language design should be relatively straightforward. E-mail firstname.lastname@example.org with any questions or comments. |
In this multiplication practice worksheet, students practice their math skills as they solve 36 problems that require them to multiply 3 digit numbers by 2 digit numbers.
Patterns in the Multiplication Table
Explore patterns in the multiplication table in order to deepen your third graders' understanding of this essential skill. Implement this activity as a whole-class lesson, allowing young scholars to work in pairs or small groups to...
3rd - 4th Math CCSS: Designed
Multiplication & Division Word Problems
Show your class all the hard work you have put into their lesson with this PowerPoint presentation. They will not only be proud of you, but it will also help them solve multiplication and division word problems using the standard algorithms.
3rd - 5th Math CCSS: Adaptable
Solve Multiplication Problems: Using Repeated Addition
While young mathematicians are still working to memorize their multiplication facts, teach them how to use repeated addition when solving multiplication problems. The second video in a series models this skill with multiple examples,...
5 mins 2nd - 4th Math CCSS: Designed
Understand Multiplication Problems: Using Equal Groups
Understanding multiplication as the sum of equal sized groups is a big step for young mathematicians. This concept is clearly demonstrated in the first video of this series, as students learn to write multiplication equations to...
3 mins 2nd - 4th Math CCSS: Designed |
Black hole (Wikipedia)
Simulated view of a black hole (center) in front of the Large Magellanic Cloud. Note the gravitational lensing effect, which produces two enlarged but highly distorted views of the Cloud. Across the top, the Milky Way disk appears distorted into an arc.
A black hole is a region of spacetime from which gravity prevents anything, including light, from escaping. The theory of general relativity predicts that a sufficiently compact mass will deform spacetime to form a black hole. The boundary of the region from which no escape is possible is called the event horizon. Although crossing the event horizon has enormous effect on the fate of the object crossing it, it appears to have no locally detectable features. In many ways a black hole acts like an ideal black body, as it reflects no light. Moreover, quantum field theory in curved spacetime predicts that event horizons emit Hawking radiation, with the same spectrum as a black body of a temperature inversely proportional to its mass. This temperature is on the order of billionths of a kelvin for black holes of stellar mass, making it all but impossible to observe.
Objects whose gravitational fields are too strong for light to escape were first considered in the 18th century by John Michell and Pierre-Simon Laplace. The first modern solution of general relativity that would characterize a black hole was found by Karl Schwarzschild in 1916, although its interpretation as a region of space from which nothing can escape was first published by David Finkelstein in 1958. Long considered a mathematical curiosity, it was during the 1960s that theoretical work showed black holes were a generic prediction of general relativity. The discovery of neutron stars sparked interest in gravitationally collapsed compact objects as a possible astrophysical reality.
Black holes of stellar mass are expected to form when very massive stars collapse at the end of their life cycle. After a black hole has formed it can continue to grow by absorbing mass from its surroundings. By absorbing other stars and merging with other black holes, supermassive black holes of millions of solar masses may form. There is general consensus that supermassive black holes exist in the centers of most galaxies.
Despite its invisible interior, the presence of a black hole can be inferred through its interaction with other matter and with electromagnetic radiation such as light. Matter falling onto a black hole can form an accretion disk heated by friction, forming some of the brightest objects in the universe. If there are other stars orbiting a black hole, their orbit can be used to determine its mass and location. Such observations can be used to exclude possible alternatives (such as neutron stars). In this way, astronomers have identified numerous stellar black hole candidates in binary systems, and established that the core of the Milky Way contains a supermassive black hole of about 4.3 million solar masses.
History

Simulation of gravitational lensing by a black hole, which distorts the image of a galaxy in the background.
The idea of a body so massive that even light could not escape was first put forward by John Michell in a letter written in 1783 to Henry Cavendish of the Royal Society:
If the semi-diameter of a sphere of the same density as the Sun were to exceed that of the Sun in the proportion of 500 to 1, a body falling from an infinite height towards it would have acquired at its surface greater velocity than that of light, and consequently supposing light to be attracted by the same force in proportion to its vis inertiae, with other bodies, all light emitted from such a body would be made to return towards it by its own proper gravity. —John Michell
In 1796, mathematician Pierre-Simon Laplace promoted the same idea in the first and second editions of his book Exposition du système du Monde (it was removed from later editions). Such "dark stars" were largely ignored in the nineteenth century, since it was not understood how a massless wave such as light could be influenced by gravity.

General relativity
In 1915, Albert Einstein developed his theory of general relativity, having earlier shown that gravity does influence light's motion. Only a few months later, Karl Schwarzschild found a solution to the Einstein field equations, which describes the gravitational field of a point mass and a spherical mass. A few months after Schwarzschild, Johannes Droste, a student of Hendrik Lorentz, independently gave the same solution for the point mass and wrote more extensively about its properties. This solution had a peculiar behaviour at what is now called the Schwarzschild radius, where it became singular, meaning that some of the terms in the Einstein equations became infinite. The nature of this surface was not quite understood at the time. In 1924, Arthur Eddington showed that the singularity disappeared after a change of coordinates (see Eddington–Finkelstein coordinates), although it took until 1933 for Georges Lemaître to realize that this meant the singularity at the Schwarzschild radius was an unphysical coordinate singularity.
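As a quick orienting note (added here, not part of the Wikipedia text): for a non-rotating, uncharged mass M, the Schwarzschild radius at which the metric components in these coordinates blow up is

    r_s = \frac{2GM}{c^2} \approx 2.95\ \mathrm{km} \times \frac{M}{M_\odot}

so a solar-mass object would have to be compressed to a radius of roughly 3 km before it became a black hole.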
In 1931, Subrahmanyan Chandrasekhar calculated, using special relativity, that a non-rotating body of electron-degenerate matter above a certain limiting mass (now called the Chandrasekhar limit at 1.4 solar masses) has no stable solutions. His arguments were opposed by many of his contemporaries like Eddington and Lev Landau, who argued that some yet unknown mechanism would stop the collapse. They were partly correct: a white dwarf slightly more massive than the Chandrasekhar limit will collapse into a neutron star, which is itself stable because of the Pauli exclusion principle. But in 1939, Robert Oppenheimer and others predicted that neutron stars above approximately three solar masses (the Tolman–Oppenheimer–Volkoff limit) would collapse into black holes for the reasons presented by Chandrasekhar, and concluded that no law of physics was likely to intervene and stop at least some stars from collapsing to black holes.
Oppenheimer and his co-authors interpreted the singularity at the boundary of the Schwarzschild radius as indicating that this was the boundary of a bubble in which time stopped. This is a valid point of view for external observers, but not for infalling observers. Because of this property, the collapsed stars were called "frozen stars", because an outside observer would see the surface of the star frozen in time at the instant where its collapse takes it inside the Schwarzschild radius.

Golden age

See also: Golden age of general relativity
In 1958, David Finkelstein identified the Schwarzschild surface as an event horizon, "a perfect unidirectional membrane: causal influences can cross it in only one direction". This did not strictly contradict Oppenheimer's results, but extended them to include the point of view of infalling observers. Finkelstein's solution extended the Schwarzschild solution for the future of observers falling into a black hole. A complete extension had already been found by Martin Kruskal, who was urged to publish it.
These results came at the beginning of the golden age of general relativity, which was marked by general relativity and black holes becoming mainstream subjects of research. This process was helped by the discovery of pulsars in 1967, which, by 1969, were shown to be rapidly rotating neutron stars. Until that time, neutron stars, like black holes, were regarded as just theoretical curiosities; but the discovery of pulsars showed their physical relevance and spurred a further interest in all types of compact objects that might be formed by gravitational collapse.
In this period more general black hole solutions were found. In 1963, Roy Kerr found the exact solution for a rotating black hole. Two years later, Ezra Newman found the axisymmetric solution for a black hole that is both rotating and electrically charged. Through the work of Werner Israel, Brandon Carter, and David Robinson the no-hair theorem emerged, stating that a stationary black hole solution is completely described by the three parameters of the Kerr–Newman metric: mass, angular momentum, and electric charge.
At first, it was suspected that the strange features of the black hole solutions were pathological artifacts from the symmetry conditions imposed, and that the singularities would not appear in generic situations. This view was held in particular by Vladimir Belinsky, Isaak Khalatnikov, and Evgeny Lifshitz, who tried to prove that no singularities appear in generic solutions. However, in the late 1960s Roger Penrose and Stephen Hawking used global techniques to prove that singularities appear generically.
Work by James Bardeen, Jacob Bekenstein, Carter, and Hawking in the early 1970s led to the formulation of black hole thermodynamics. These laws describe the behaviour of a black hole in close analogy to the laws of thermodynamics by relating mass to energy, area to entropy, and surface gravity to temperature. The analogy was completed when Hawking, in 1974, showed that quantum field theory predicts that black holes should radiate like a black body with a temperature proportional to the surface gravity of the black hole.
The first use of the term "black hole" in print was by journalist Ann Ewing in her article "'Black Holes' in Space", dated 18 January 1964, which was a report on a meeting of the American Association for the Advancement of Science. John Wheeler used the term "black hole" in a lecture in 1967, leading some to credit him with coining the phrase. After Wheeler's use of the term, it was quickly adopted in general use.
Properties and structure
The no-hair theorem states that, once it achieves a stable condition after formation, a black hole has only three independent physical properties: mass, charge, and angular momentum. Any two black holes that share the same values for these properties, or parameters, are indistinguishable according to classical (i.e. non-quantum) mechanics.
These properties are special because they are visible from outside a black hole. For example, a charged black hole repels other like charges just like any other charged object. Similarly, the total mass inside a sphere containing a black hole can be found by using the gravitational analog of Gauss's law, the ADM mass, far away from the black hole. Likewise, the angular momentum can be measured from far away using frame dragging by the gravitomagnetic field.
When an object falls into a black hole, any information about the shape of the object or the distribution of charge on it is evenly distributed along the horizon of the black hole and is lost to outside observers. In this situation the horizon behaves as a dissipative system, closely analogous to a conductive stretchy membrane with friction and electrical resistance (the membrane paradigm). This is different from other field theories such as electromagnetism, which have no friction or resistivity at the microscopic level because they are time-reversible. Because a black hole eventually achieves a stable state with only three parameters, there is no way to avoid losing information about the initial conditions: the gravitational and electric fields of a black hole give very little information about what went in. The information that is lost includes every quantity that cannot be measured far away from the black hole horizon, including approximately conserved quantum numbers such as the total baryon number and lepton number. This behavior is so puzzling that it has been called the black hole information loss paradox.
Physical properties
The simplest static black holes have mass but neither electric charge nor angular momentum. These black holes are often referred to as Schwarzschild black holes after Karl Schwarzschild who discovered this solution in 1916. According to Birkhoff's theorem, it is the only vacuum solution that is spherically symmetric. This means that there is no observable difference between the gravitational field of such a black hole and that of any other spherical object of the same mass. The popular notion of a black hole "sucking in everything" in its surroundings is therefore only correct near a black hole's horizon; far away, the external gravitational field is identical to that of any other body of the same mass.
Solutions describing more general black holes also exist. Charged black holes are described by the Reissner–Nordström metric, while the Kerr metric describes a rotating black hole. The most general stationary black hole solution known is the Kerr–Newman metric, which describes a black hole with both charge and angular momentum.
While the mass of a black hole can take any positive value, the charge and angular momentum are constrained by the mass. In Planck units, the total electric charge Q and the total angular momentum J are expected to satisfy
Q² + (J/M)² ≤ M²
for a black hole of mass M. Black holes saturating this inequality are called extremal. Solutions of Einstein's equations that violate this inequality exist, but they do not possess an event horizon. These solutions have so-called naked singularities that can be observed from the outside, and hence are deemed unphysical. The cosmic censorship hypothesis rules out the formation of such singularities, when they are created through the gravitational collapse of realistic matter. This is supported by numerical simulations.
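As a rough illustration (my own sketch, not part of the original article), the bound can be checked numerically in the Planck-style geometrized units used above, where mass, spin parameter and charge all carry the same dimension; the function name and sample values are my own:

```python
def is_subextremal(M, J, Q):
    """Check the Kerr-Newman bound Q^2 + (J/M)^2 <= M^2.

    All quantities are taken in geometrized/Planck units, so M, J/M and Q
    are directly comparable.
    """
    return Q**2 + (J / M)**2 <= M**2

# An uncharged hole spinning at the maximum allowed rate exactly saturates the bound.
print(is_subextremal(M=1.0, J=1.0, Q=0.0))   # True (extremal)
# Values violating the bound would correspond to a naked singularity, not a black hole.
print(is_subextremal(M=1.0, J=1.2, Q=0.0))   # False
```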
Due to the relatively large strength of the electromagnetic force, black holes forming from the collapse of stars are expected to retain the nearly neutral charge of the star. Rotation, however, is expected to be a common feature of compact objects. The black-hole candidate binary X-ray source GRS 1915+105 appears to have an angular momentum near the maximum allowed value.
Black hole classifications
| Class | Approx. mass | Approx. size |
| --- | --- | --- |
| Supermassive black hole | ~10⁵–10¹⁰ M_Sun | ~0.001–400 AU |
| Intermediate-mass black hole | ~10³ M_Sun | ~10³ km ≈ R_Earth |
| Stellar black hole | ~10 M_Sun | ~30 km |
| Micro black hole | up to ~M_Moon | up to ~0.1 mm |
Black holes are commonly classified according to their mass, independent of angular momentum J or electric charge Q. The size of a black hole, as determined by the radius of the event horizon, or Schwarzschild radius, is roughly proportional to the mass M through
r_sh ≈ 2.95 (M / M_Sun) km,
where r_sh is the Schwarzschild radius and M_Sun is the mass of the Sun. This relation is exact only for black holes with zero charge and angular momentum; for more general black holes it can differ by up to a factor of 2.
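A minimal sketch of this mass–radius relation, written for this edit rather than taken from the article, evaluates r_s = 2GM/c² with standard SI constants:

```python
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
M_SUN = 1.989e30   # solar mass, kg

def schwarzschild_radius(mass_kg):
    """Schwarzschild radius r_s = 2*G*M/c^2, in metres."""
    return 2 * G * mass_kg / c**2

# One solar mass gives roughly 2.95 km, the coefficient in the relation above;
# ten solar masses gives the ~30 km quoted for stellar black holes.
print(schwarzschild_radius(M_SUN) / 1e3)        # ~2.95 (km)
print(schwarzschild_radius(10 * M_SUN) / 1e3)   # ~29.5 (km)
```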
Event horizon
Main article: Event horizon
[Image: Far away from the black hole, a particle can move in any direction, restricted only by the speed of light. Closer to the black hole, spacetime starts to deform and there are more paths going towards the black hole than paths moving away. Inside the event horizon, all paths bring the particle closer to the center of the black hole, and escape is no longer possible.]
The defining feature of a black hole is the appearance of an event horizon—a boundary in spacetime through which matter and light can only pass inward towards the mass of the black hole. Nothing, not even light, can escape from inside the event horizon. The event horizon is referred to as such because if an event occurs within the boundary, information from that event cannot reach an outside observer, making it impossible to determine if such an event occurred.
As predicted by general relativity, the presence of a mass deforms spacetime in such a way that the paths taken by particles bend towards the mass. At the event horizon of a black hole, this deformation becomes so strong that there are no paths that lead away from the black hole.
To a distant observer, clocks near a black hole appear to tick more slowly than those further away from the black hole. Due to this effect, known as gravitational time dilation, an object falling into a black hole appears to slow down as it approaches the event horizon, taking an infinite time to reach it. At the same time, all processes on this object slow down, for a fixed outside observer, causing emitted light to appear redder and dimmer, an effect known as gravitational redshift. Eventually, at a point just before it reaches the event horizon, the falling object becomes so dim that it can no longer be seen.
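For a non-rotating black hole this slowdown is governed by the factor √(1 − r_s/r) for a static clock at radius r, where r_s is the Schwarzschild radius. The snippet below is an illustrative sketch of my own (not taken from the article) showing how the factor collapses towards zero near the horizon:

```python
import math

def clock_rate(r_over_rs):
    """Ticking rate of a static clock at radius r (in units of r_s),
    relative to a clock far from the black hole: sqrt(1 - r_s/r).
    Only meaningful outside the horizon, r > r_s."""
    return math.sqrt(1.0 - 1.0 / r_over_rs)

for r in (100.0, 10.0, 2.0, 1.01):
    # As r approaches r_s the rate tends to zero: distant observers see the
    # clock freeze and its light redshifted without bound.
    print(f"r = {r:6.2f} r_s  ->  relative clock rate {clock_rate(r):.4f}")
```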
On the other hand, an indestructible observer falling into a black hole does not notice any of these effects as he crosses the event horizon. According to his own clock, which appears to him to tick normally, he crosses the event horizon after a finite time without noting any singular behaviour. In particular, he is unable to determine exactly when he crosses it, as it is impossible to determine the location of the event horizon from local observations.
The shape of the event horizon of a black hole is always approximately spherical. For non-rotating (static) black holes the geometry is precisely spherical, while for rotating black holes the sphere is somewhat oblate.
Singularity
Main article: Gravitational singularity
At the center of a black hole, as described by general relativity, lies a gravitational singularity, a region where the spacetime curvature becomes infinite. For a non-rotating black hole, this region takes the shape of a single point; for a rotating black hole, it is smeared out to form a ring singularity lying in the plane of rotation. In both cases, the singular region has zero volume. It can also be shown that the singular region contains all the mass of the black hole solution. The singular region can thus be thought of as having infinite density.
Observers falling into a Schwarzschild black hole (i.e., non-rotating and not charged) cannot avoid being carried into the singularity, once they cross the event horizon. They can prolong the experience by accelerating away to slow their descent, but only up to a point; after attaining a certain ideal velocity, it is best to free fall the rest of the way. When they reach the singularity, they are crushed to infinite density and their mass is added to the total of the black hole. Before that happens, they will have been torn apart by the growing tidal forces in a process sometimes referred to as spaghettification or the "noodle effect".
In the case of a charged (Reissner–Nordström) or rotating (Kerr) black hole, it is possible to avoid the singularity. Extending these solutions as far as possible reveals the hypothetical possibility of exiting the black hole into a different spacetime with the black hole acting as a wormhole. The possibility of traveling to another universe is however only theoretical, since any perturbation will destroy this possibility. It also appears to be possible to follow closed timelike curves (going back to one's own past) around the Kerr singularity, which lead to problems with causality like the grandfather paradox. It is expected that none of these peculiar effects would survive in a proper quantum treatment of rotating and charged black holes.
The appearance of singularities in general relativity is commonly perceived as signaling the breakdown of the theory. This breakdown, however, is expected; it occurs in a regime where quantum effects should become important, owing to the extremely high densities and particle interactions involved. To date, it has not been possible to combine quantum and gravitational effects into a single theory, although there exist attempts to formulate such a theory of quantum gravity. It is generally expected that such a theory will not feature any singularities.
Photon sphere
Main article: Photon sphere
The photon sphere is a spherical boundary of zero thickness such that photons moving along tangents to the sphere will be trapped in a circular orbit. For non-rotating black holes, the photon sphere has a radius 1.5 times the Schwarzschild radius. The orbits are dynamically unstable, hence any small perturbation (such as a particle of infalling matter) will grow over time, either setting it on an outward trajectory escaping the black hole or on an inward spiral eventually crossing the event horizon.
While light can still escape from inside the photon sphere, any light that crosses the photon sphere on an inbound trajectory will be captured by the black hole. Hence any light reaching an outside observer from inside the photon sphere must have been emitted by objects inside the photon sphere but still outside of the event horizon.
Other compact objects, such as neutron stars, can also have photon spheres. This follows from the fact that the gravitational field of an object does not depend on its actual size, hence any object that is smaller than 1.5 times the Schwarzschild radius corresponding to its mass will indeed have a photon sphere.
Ergosphere
Main article: Ergosphere
[Image: The ergosphere is an oblate-spheroid region outside the event horizon in which objects cannot remain stationary.]
Rotating black holes are surrounded by a region of spacetime in which it is impossible to stand still, called the ergosphere. This is the result of a process known as frame-dragging; general relativity predicts that any rotating mass will tend to slightly "drag" along the spacetime immediately surrounding it. Any object near the rotating mass will tend to start moving in the direction of rotation. For a rotating black hole, this effect becomes so strong near the event horizon that an object would have to move faster than the speed of light in the opposite direction to just stand still.
The ergosphere of a black hole is bounded by the (outer) event horizon on the inside and an oblate spheroid, which coincides with the event horizon at the poles and is noticeably wider around the equator. The outer boundary is sometimes called the ergosurface.
Objects and radiation can escape normally from the ergosphere. Through the Penrose process, objects can emerge from the ergosphere with more energy than they entered with. This energy is taken from the rotational energy of the black hole, causing it to slow down.
Formation and evolution
Considering the exotic nature of black holes, it may be natural to question whether such bizarre objects could exist in nature or to suggest that they are merely pathological solutions to Einstein's equations. Einstein himself wrongly thought that black holes would not form, because he held that the angular momentum of collapsing particles would stabilize their motion at some radius. This led the general relativity community to dismiss all results to the contrary for many years. However, a minority of relativists continued to contend that black holes were physical objects, and by the end of the 1960s, they had persuaded the majority of researchers in the field that there is no obstacle to forming an event horizon.
Once an event horizon forms, Penrose proved that a singularity will form somewhere inside it. Shortly afterwards, Hawking showed that many cosmological solutions describing the Big Bang have singularities without scalar fields or other exotic matter (see Penrose–Hawking singularity theorems). The Kerr solution, the no-hair theorem and the laws of black hole thermodynamics showed that the physical properties of black holes were simple and comprehensible, making them respectable subjects for research. The primary formation process for black holes is expected to be the gravitational collapse of heavy objects such as stars, but there are also more exotic processes that can lead to the production of black holes.
Gravitational collapse
Main article: Gravitational collapse
Gravitational collapse occurs when an object's internal pressure is insufficient to resist the object's own gravity. For stars this usually occurs either because a star has too little "fuel" left to maintain its temperature through stellar nucleosynthesis, or because a star that would have been stable receives extra matter in a way that does not raise its core temperature. In either case the star's temperature is no longer high enough to prevent it from collapsing under its own weight. The collapse may be stopped by the degeneracy pressure of the star's constituents, condensing the matter in an exotic denser state. The result is one of the various types of compact star. The type of compact star formed depends on the mass of the remnant, the matter left over after the outer layers have been blown away (for example by a supernova explosion or by pulsations leading to a planetary nebula). Note that this mass can be substantially less than that of the original star: remnants exceeding 5 solar masses are produced by stars that were over 20 solar masses before the collapse.
If the mass of the remnant exceeds about 3–4 solar masses (the Tolman–Oppenheimer–Volkoff limit)—either because the original star was very heavy or because the remnant collected additional mass through accretion of matter—even the degeneracy pressure of neutrons is insufficient to stop the collapse. No known mechanism (except possibly quark degeneracy pressure, see quark star) is powerful enough to stop the implosion and the object will inevitably collapse to form a black hole.
The gravitational collapse of heavy stars is assumed to be responsible for the formation of stellar mass black holes. Star formation in the early universe may have resulted in very massive stars, which upon their collapse would have produced black holes of up to 10³ solar masses. These black holes could be the seeds of the supermassive black holes found in the centers of most galaxies.
While most of the energy released during gravitational collapse is emitted very quickly, an outside observer does not actually see the end of this process. Even though the collapse takes a finite amount of time from the reference frame of infalling matter, a distant observer sees the infalling material slow and halt just above the event horizon, due to gravitational time dilation. Light from the collapsing material takes longer and longer to reach the observer, with the light emitted just before the event horizon forms delayed an infinite amount of time. Thus the external observer never sees the formation of the event horizon; instead, the collapsing material seems to become dimmer and increasingly red-shifted, eventually fading away.
Primordial black holes in the Big Bang
Gravitational collapse requires great density. In the current epoch of the universe these high densities are found only in stars, but in the early universe, shortly after the Big Bang, densities were much greater, possibly allowing for the creation of black holes. The high density alone is not enough to allow the formation of black holes, since a uniform mass distribution will not allow the mass to bunch up. In order for primordial black holes to form in such a dense medium, there must be initial density perturbations that can then grow under their own gravity. Different models for the early universe vary widely in their predictions of the size of these perturbations. Various models predict the creation of black holes, ranging from a Planck mass to hundreds of thousands of solar masses. Primordial black holes could thus account for the creation of any type of black hole.
High-energy collisions
[Image: A simulated event in the CMS detector: a collision in which a micro black hole may be created.]
Gravitational collapse is not the only process that could create black holes. In principle, black holes could be formed in high-energy collisions that achieve sufficient density. As of 2002, no such events had been detected, either directly or indirectly as a deficiency of the mass balance in particle accelerator experiments. This suggests that there must be a lower limit for the mass of black holes. Theoretically, this boundary is expected to lie around the Planck mass (m_P = √(ħc/G) ≈ 1.2×10¹⁹ GeV/c² ≈ 2.2×10⁻⁸ kg), where quantum effects are expected to invalidate the predictions of general relativity. This would put the creation of black holes firmly out of reach of any high-energy process occurring on or near the Earth. However, certain developments in quantum gravity suggest that the effective Planck mass could be much lower: some braneworld scenarios, for example, put the boundary as low as 1 TeV/c². This would make it conceivable for micro black holes to be created in the high-energy collisions occurring when cosmic rays hit the Earth's atmosphere, or possibly in the Large Hadron Collider at CERN. Yet these theories are very speculative, and the creation of black holes in these processes is deemed unlikely by many specialists. Even if micro black holes were formed in these collisions, they are expected to evaporate in about 10⁻²⁵ seconds, posing no threat to the Earth.
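As a quick check of the quoted figure, the Planck mass √(ħc/G) can be evaluated directly. The sketch below is my own, using standard SI constants rather than anything from the article:

```python
import math

hbar = 1.055e-34       # reduced Planck constant, J s
c = 2.998e8            # speed of light, m/s
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
J_PER_GEV = 1.602e-10  # joules per GeV

m_planck = math.sqrt(hbar * c / G)          # mass in kg
e_planck_gev = m_planck * c**2 / J_PER_GEV  # rest energy in GeV

print(m_planck)       # ~2.2e-8 kg
print(e_planck_gev)   # ~1.2e19 GeV, matching the value quoted above
```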
Growth
Once a black hole has formed, it can continue to grow by absorbing additional matter. Any black hole will continually absorb gas and interstellar dust from its direct surroundings and omnipresent cosmic background radiation. This is the primary process through which supermassive black holes seem to have grown. A similar process has been suggested for the formation of intermediate-mass black holes in globular clusters.
Another possibility is for a black hole to merge with other objects such as stars or even other black holes. Although not necessary for growth, this is thought to have been important, especially for the early development of supermassive black holes, which could have formed from the coagulation of many smaller objects. The process has also been proposed as the origin of some intermediate-mass black holes.
Evaporation
Main article: Hawking radiation
In 1974, Hawking predicted that black holes are not entirely black but emit small amounts of thermal radiation, an effect that has become known as Hawking radiation. By applying quantum field theory to a static black hole background, he determined that a black hole should emit particles in a perfect black body spectrum. Since Hawking's publication, many others have verified the result through various approaches. If Hawking's theory of black hole radiation is correct, then black holes are expected to shrink and evaporate over time because they lose mass by the emission of photons and other particles. The temperature of this thermal spectrum (the Hawking temperature) is proportional to the surface gravity of the black hole, which, for a Schwarzschild black hole, is inversely proportional to the mass. Hence, large black holes emit less radiation than small black holes.
A stellar black hole of one solar mass has a Hawking temperature of about 100 nanokelvins. This is far less than the 2.7 K temperature of the cosmic microwave background radiation. Stellar-mass or larger black holes receive more mass from the cosmic microwave background than they emit through Hawking radiation and thus will grow instead of shrink. To have a Hawking temperature larger than 2.7 K (and be able to evaporate), a black hole needs to have less mass than the Moon. Such a black hole would have a diameter of less than a tenth of a millimeter.
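A short sketch of my own (not from the article) of the Hawking temperature T = ħc³/(8πGMk_B) makes the comparison with the cosmic microwave background concrete:

```python
import math

hbar = 1.055e-34   # reduced Planck constant, J s
c = 2.998e8        # speed of light, m/s
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
k_B = 1.381e-23    # Boltzmann constant, J/K
M_SUN = 1.989e30   # solar mass, kg
T_CMB = 2.7        # CMB temperature, K

def hawking_temperature(mass_kg):
    """Hawking temperature of a Schwarzschild black hole, in kelvin."""
    return hbar * c**3 / (8 * math.pi * G * mass_kg * k_B)

# ~6e-8 K for one solar mass, consistent with the nanokelvin-scale figure above.
print(hawking_temperature(M_SUN))
# Mass below which the Hawking temperature exceeds the CMB temperature, so the
# hole can shrink rather than grow: a few times 1e22 kg, below the Moon's mass.
print(hbar * c**3 / (8 * math.pi * G * k_B * T_CMB))
```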
If a black hole is very small, the radiation effects are expected to become very strong. Even a black hole that is heavy compared to a human would evaporate in an instant. A black hole with the mass of a car would have a diameter of about 10⁻²⁴ m and take a nanosecond to evaporate, during which time it would briefly have a luminosity of more than 200 times that of the Sun. Lower-mass black holes are expected to evaporate even faster; for example, a black hole of mass 1 TeV/c² would take less than 10⁻⁸⁸ seconds to evaporate completely. For such a small black hole, quantum gravity effects are expected to play an important role and could even (although current developments in quantum gravity do not indicate so) hypothetically make such a small black hole stable.
Observational evidence
[Image: A gas cloud being ripped apart by the black hole at the centre of the Milky Way.]
By their very nature, black holes do not directly emit any signals other than the hypothetical Hawking radiation; since the Hawking radiation for an astrophysical black hole is predicted to be very weak, this makes it impossible to directly detect astrophysical black holes from the Earth. A possible exception to the Hawking radiation being weak is the last stage of the evaporation of light (primordial) black holes; searches for such flashes in the past have proven unsuccessful and provide stringent limits on the possibility of existence of light primordial black holes. NASA's Fermi Gamma-ray Space Telescope, launched in 2008, will continue the search for these flashes.
Astrophysicists searching for black holes thus have to rely on indirect observations. A black hole's existence can sometimes be inferred by observing its gravitational interactions with its surroundings. A project run by MIT's Haystack Observatory is attempting to observe the event horizon of a black hole directly. Initial results are encouraging.
Accretion of matter
See also: Accretion disc
[Image: Black hole with corona, an X-ray source (artist's concept).]
Due to conservation of angular momentum, gas falling into the gravitational well created by a massive object will typically form a disc-like structure around the object. Friction within the disc causes angular momentum to be transported outward, allowing matter to fall further inward, releasing potential energy and increasing the temperature of the gas.
[Image: Blurring of X-rays near a black hole (NuSTAR; 12 August 2014).]
In the case of compact objects such as white dwarfs, neutron stars, and black holes, the gas in the inner regions becomes so hot that it will emit vast amounts of radiation (mainly X-rays), which may be detected by telescopes. This process of accretion is one of the most efficient energy-producing processes known; up to 40% of the rest mass of the accreted material can be emitted in radiation. (In nuclear fusion only about 0.7% of the rest mass will be emitted as energy.) In many cases, accretion discs are accompanied by relativistic jets emitted along the poles, which carry away much of the energy. The mechanism for the creation of these jets is currently not well understood.
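To make the efficiency comparison concrete, here is a small illustrative calculation of my own (the accretion rate is an arbitrary example, not a figure from the article) of the radiated power L = ηṀc²:

```python
c = 2.998e8  # speed of light, m/s

def accretion_luminosity(mdot_kg_per_s, efficiency):
    """Radiated power L = eta * mdot * c^2 for a given radiative efficiency eta."""
    return efficiency * mdot_kg_per_s * c**2

mdot = 1.0e14  # illustrative accretion rate, kg/s
# Accretion at up to ~40% efficiency versus hydrogen fusion at ~0.7%:
print(accretion_luminosity(mdot, 0.40))    # ~3.6e30 W
print(accretion_luminosity(mdot, 0.007))   # ~6.3e28 W
```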
As such, many of the universe's more energetic phenomena have been attributed to the accretion of matter onto black holes. In particular, active galactic nuclei and quasars are believed to be the accretion discs of supermassive black holes. Similarly, X-ray binaries are generally accepted to be binary star systems in which one of the two stars is a compact object accreting matter from its companion. It has also been suggested that some ultraluminous X-ray sources may be the accretion disks of intermediate-mass black holes.
X-ray binaries
See also: X-ray binary
[Video: A computer simulation of a star being consumed by a black hole. The blue dot indicates the location of the black hole.]
X-ray binaries are binary star systems that are luminous in the X-ray part of the spectrum. These X-ray emissions are generally thought to be caused by one of the component stars being a compact object accreting matter from the other (regular) star. The presence of an ordinary star in such a system provides a unique opportunity for studying the central object and determining whether it might be a black hole.
[Video: This animation compares the X-ray 'heartbeats' of GRS 1915 and IGR J17091, two black holes that ingest gas from companion stars.]
If such a system emits signals that can be directly traced back to the compact object, it cannot be a black hole. The absence of such a signal, however, does not exclude the possibility that the compact object is a neutron star. By studying the companion star it is often possible to obtain the orbital parameters of the system and an estimate for the mass of the compact object. If this is much larger than the Tolman–Oppenheimer–Volkoff limit (that is, the maximum mass a neutron star can have before collapsing), then the object cannot be a neutron star and is generally expected to be a black hole.
The first strong candidate for a black hole, Cygnus X-1, was discovered in this way by Charles Thomas Bolton, Louise Webster and Paul Murdin in 1972. Some doubt, however, remained due to the uncertainties resulting from the companion star being much heavier than the candidate black hole. Currently, better candidates for black holes are found in a class of X-ray binaries called soft X-ray transients. In this class of system the companion star has a relatively low mass, allowing for more accurate estimates of the black hole mass. Moreover, these systems are only active in X-rays for several months once every 10–50 years. During the period of low X-ray emission (called quiescence), the accretion disc is extremely faint, allowing detailed observation of the companion star during this period. One of the best such candidates is V404 Cyg.
Quiescence and advection-dominated accretion flow
The faintness of the accretion disc during quiescence is suspected to be caused by the flow entering a mode called an advection-dominated accretion flow (ADAF). In this mode, almost all the energy generated by friction in the disc is swept along with the flow instead of being radiated away. If this model is correct, it forms strong qualitative evidence for the presence of an event horizon, because if the object at the center of the disc had a solid surface, it would emit large amounts of radiation as the highly energetic gas hit the surface, an effect that is observed for neutron stars in a similar state.
Quasi-periodic oscillations
Main article: Quasi-periodic oscillations
The X-ray emission from accretion disks sometimes flickers at certain frequencies. These signals are called quasi-periodic oscillations and are thought to be caused by material moving along the inner edge of the accretion disk (the innermost stable circular orbit). As such, their frequency is linked to the mass of the compact object. They can thus be used as an alternative way to determine the mass of potential black holes.
Galactic nuclei
See also: Active galactic nucleus
Astronomers use the term "active galaxy" to describe galaxies with unusual characteristics, such as unusual spectral line emission and very strong radio emission. Theoretical and observational studies have shown that the activity in these active galactic nuclei (AGN) may be explained by the presence of supermassive black holes. The models of these AGN consist of a central black hole that may be millions or billions of times more massive than the Sun; a disk of gas and dust called an accretion disk; and two jets that are perpendicular to the accretion disk.
Although supermassive black holes are expected to be found in most AGN, only some galaxies' nuclei have been more carefully studied in attempts to both identify and measure the actual masses of the central supermassive black hole candidates. Some of the most notable galaxies with supermassive black hole candidates include the Andromeda Galaxy, M32, M87, NGC 3115, NGC 3377, NGC 4258, NGC 4889, NGC 1277, OJ 287, APM 08279+5255 and the Sombrero Galaxy.
It is now widely accepted that the center of nearly every galaxy, not just active ones, contains a supermassive black hole. The close observational correlation between the mass of this hole and the velocity dispersion of the host galaxy's bulge, known as the M-sigma relation, strongly suggests a connection between the formation of the black hole and that of the galaxy itself.
[Image: Simulation of a gas cloud after a close approach to the black hole at the centre of the Milky Way.]
Currently, the best evidence for a supermassive black hole comes from studying the proper motion of stars near the center of our own Milky Way. Since 1995 astronomers have tracked the motion of 90 stars in a region called Sagittarius A*. By fitting their motion to Keplerian orbits, they were able to infer in 1998 that 2.6 million solar masses must be contained in a volume with a radius of 0.02 light-years. Since then one of the stars, called S2, has completed a full orbit. From the orbital data they were able to place better constraints on the mass and size of the object causing the orbital motion of stars in the Sagittarius A* region, finding a spherical mass of 4.3 million solar masses contained within a radius of less than 0.002 light-years. While this is more than 3000 times the Schwarzschild radius corresponding to that mass, it is at least consistent with the central object being a supermassive black hole, and no "realistic cluster is physically tenable".
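The enclosed mass can be estimated from a single orbit with Kepler's third law, M = 4π²a³/(GT²). The sketch below is my own illustration; the orbital parameters for S2 are approximate round numbers I have assumed, not values taken from this article:

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30   # solar mass, kg
AU = 1.496e11      # astronomical unit, m
YEAR = 3.156e7     # one year, s

def enclosed_mass(a_metres, period_seconds):
    """Kepler's third law: mass enclosed by an orbit of semi-major axis a and period T."""
    return 4 * math.pi**2 * a_metres**3 / (G * period_seconds**2)

# Roughly a ~ 1000 AU and T ~ 16 years for the star S2.
print(enclosed_mass(1000 * AU, 16 * YEAR) / M_SUN)   # ~4 million solar masses
```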
Effects of strong gravity
Another way that the black hole nature of an object may be tested in the future is through observation of effects caused by strong gravity in its vicinity. One such effect is gravitational lensing: the deformation of spacetime around a massive object causes light rays to be deflected, much as light passing through an optical lens. Observations have been made of weak gravitational lensing, in which light rays are deflected by only a few arcseconds. However, it has never been directly observed for a black hole. One possibility for observing gravitational lensing by a black hole would be to observe stars in orbit around the black hole. There are several candidates for such an observation in orbit around Sagittarius A*.
Another option would be the direct observation of gravitational waves produced by an object falling into a black hole, for example a compact object falling into a supermassive black hole through an extreme mass ratio inspiral. Matching the observed waveform to the predictions of general relativity would allow precision measurements of the mass and angular momentum of the central object, while at the same time testing general relativity. These types of events are a primary target for the proposed Laser Interferometer Space Antenna.
Alternatives
See also: Exotic star
The evidence for stellar black holes strongly relies on the existence of an upper limit for the mass of a neutron star. The value of this limit depends heavily on the assumptions made about the properties of dense matter. New exotic phases of matter could push up this bound. A phase of free quarks at high density might allow the existence of dense quark stars, and some supersymmetric models predict the existence of Q stars. Some extensions of the standard model posit the existence of preons as fundamental building blocks of quarks and leptons, which could hypothetically form preon stars. These hypothetical models could potentially explain a number of observations of stellar black hole candidates. However, it can be shown from general arguments in general relativity that any such object will have a maximum mass.
Since the average density of a black hole inside its Schwarzschild radius is inversely proportional to the square of its mass, supermassive black holes are much less dense than stellar black holes (the average density of a 10⁸ solar mass black hole is comparable to that of water). Consequently, the physics of matter forming a supermassive black hole is much better understood and the possible alternative explanations for supermassive black hole observations are much more mundane. For example, a supermassive black hole could be modelled by a large cluster of very dark objects. However, such alternatives are typically not stable enough to explain the supermassive black hole candidates.
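The scaling is easy to verify numerically; the following sketch is my own, not from the article, and computes the mean density inside the Schwarzschild radius for a stellar-mass and a supermassive hole:

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
M_SUN = 1.989e30   # solar mass, kg

def mean_density(mass_kg):
    """Mass divided by the volume of a sphere of Schwarzschild radius 2GM/c^2, in kg/m^3."""
    r_s = 2 * G * mass_kg / c**2
    return mass_kg / ((4.0 / 3.0) * math.pi * r_s**3)

print(mean_density(10 * M_SUN))    # ~2e17 kg/m^3 for a stellar black hole
print(mean_density(1e8 * M_SUN))   # ~2e3 kg/m^3, roughly the density of water
```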
The evidence for stellar and supermassive black holes implies that in order for black holes not to form, general relativity must fail as a theory of gravity, perhaps due to the onset of quantum mechanical corrections. A much anticipated feature of a theory of quantum gravity is that it will not feature singularities or event horizons (and thus no black holes). In 2002, much attention was drawn to the fuzzball model in string theory. Based on calculations in specific situations in string theory, the proposal suggests that generically the individual states of a black hole solution do not have an event horizon or singularity, but that for a classical/semi-classical observer the statistical average of such states does appear just like an ordinary black hole in general relativity.
Open questions
Entropy and thermodynamics
Further information: Black hole thermodynamics
[Image: The formula for the Bekenstein–Hawking entropy S of a black hole, which depends on its horizon area A: S = kc³A/(4ħG), where c is the speed of light, k the Boltzmann constant, G Newton's constant, and ħ the reduced Planck constant.]
In 1971, Hawking showed under general conditions that the total area of the event horizons of any collection of classical black holes can never decrease, even if they collide and merge. This result, now known as the second law of black hole mechanics, is remarkably similar to the second law of thermodynamics, which states that the total entropy of a system can never decrease. As with classical objects at absolute zero temperature, it was assumed that black holes had zero entropy. If this were the case, the second law of thermodynamics would be violated by entropy-laden matter entering a black hole, resulting in a decrease of the total entropy of the universe. Therefore, Bekenstein proposed that a black hole should have an entropy, and that it should be proportional to its horizon area.
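The resulting entropy, S = kc³A/(4ħG), is enormous even for a stellar-mass hole; the following sketch is mine (not part of the article) and evaluates it for one solar mass:

```python
import math

hbar = 1.055e-34   # reduced Planck constant, J s
c = 2.998e8        # speed of light, m/s
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
k_B = 1.381e-23    # Boltzmann constant, J/K
M_SUN = 1.989e30   # solar mass, kg

def bekenstein_hawking_entropy(mass_kg):
    """S = k c^3 A / (4 hbar G), with A the horizon area of a Schwarzschild hole."""
    r_s = 2 * G * mass_kg / c**2
    area = 4 * math.pi * r_s**2
    return k_B * c**3 * area / (4 * hbar * G)

# ~1.5e54 J/K, far larger than the ordinary thermodynamic entropy of the Sun itself.
print(bekenstein_hawking_entropy(M_SUN))
```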
The link with the laws of thermodynamics was further strengthened by Hawking's discovery that quantum field theory predicts that a black hole radiates blackbody radiation at a constant temperature. This seemingly causes a violation of the second law of black hole mechanics, since the radiation will carry away energy from the black hole, causing it to shrink. The radiation, however, also carries away entropy, and it can be proven under general assumptions that the sum of the entropy of the matter surrounding a black hole and one quarter of the area of the horizon as measured in Planck units is in fact always increasing. This allows the formulation of the first law of black hole mechanics as an analogue of the first law of thermodynamics, with the mass acting as energy, the surface gravity as temperature and the area as entropy.
One puzzling feature is that the entropy of a black hole scales with its area rather than with its volume, since entropy is normally an extensive quantity that scales linearly with the volume of the system. This odd property led Gerard 't Hooft and Leonard Susskind to propose the holographic principle, which suggests that anything that happens in a volume of spacetime can be described by data on the boundary of that volume.
Although general relativity can be used to perform a semi-classical calculation of black hole entropy, this situation is theoretically unsatisfying. In statistical mechanics, entropy is understood as counting the number of microscopic configurations of a system that have the same macroscopic qualities (such as mass, charge, pressure, etc.). Without a satisfactory theory of quantum gravity, one cannot perform such a computation for black holes. Some progress has been made in various approaches to quantum gravity. In 1995, Andrew Strominger and Cumrun Vafa showed that counting the microstates of a specific supersymmetric black hole in string theory reproduced the Bekenstein–Hawking entropy. Since then, similar results have been reported for different black holes both in string theory and in other approaches to quantum gravity like loop quantum gravity.
Information loss paradox
Main article: Black hole information paradox
Unsolved problem in physics (see the list of unsolved problems in physics): Is physical information lost in black holes?
Because a black hole has only a few internal parameters, most of the information about the matter that went into forming it is lost. Regardless of the type of matter which goes into a black hole, it appears that only the total mass, charge, and angular momentum are conserved. As long as black holes were thought to persist forever, this information loss was not that problematic, as the information could be thought of as existing inside the black hole, inaccessible from the outside. However, black holes slowly evaporate by emitting Hawking radiation. This radiation does not appear to carry any additional information about the matter that formed the black hole, meaning that this information appears to be gone forever.
The question whether information is truly lost in black holes (the black hole information paradox) has divided the theoretical physics community (see Thorne–Hawking–Preskill bet). In quantum mechanics, loss of information corresponds to the violation of a vital property called unitarity, which has to do with the conservation of probability. It has been argued that loss of unitarity would also imply violation of conservation of energy. Over recent years evidence has been building that information and unitarity are indeed preserved in a full quantum gravitational treatment of the problem.
Critical thinking is the ability to think clearly and rationally, understanding the logical connection between ideas. It is a skill that can be learned and developed through practice.
Here are some of the key characteristics of critical thinking:
- Rational: Critical thinkers are able to think rationally and logically. They can identify and evaluate evidence, and use this information to draw sound conclusions.
- Independent: Critical thinkers are able to think independently and question assumptions. They are not afraid to challenge the status quo and look for new perspectives.
- Open-minded: Critical thinkers are open to new ideas and perspectives. They are willing to consider different viewpoints and are not afraid to change their own minds.
- Reflective: Critical thinkers are reflective and self-aware. They are able to think critically about their own thinking and identify potential biases.
- Communicative: Critical thinkers are able to communicate their ideas clearly and concisely. They can explain complex concepts in a way that is easy to understand.
Critical thinking is an essential skill for success in many areas of life, including school, work, and relationships. It can help you to make better decisions, solve problems more effectively, and be more informed about the world around you.
Here are some examples of how critical thinking can be used in everyday life:
- When you are reading an article, you can use critical thinking to evaluate the author’s arguments and evidence.
- When you are making a decision, you can use critical thinking to weigh the pros and cons of different options.
- When you are faced with a problem, you can use critical thinking to identify the root cause of the problem and develop a solution.
- When you are interacting with others, you can use critical thinking to understand their perspectives and communicate your own ideas effectively.
Critical thinking is a skill that can be learned and developed through practice. There are many resources available to help you improve your critical thinking skills, such as books, articles, and online courses. You can also practice critical thinking by working on puzzles, playing games, and reading challenging material.
The more you practice critical thinking, the better you will become at it. And the better you are at critical thinking, the better equipped you will be to succeed in school, work, and life.
In mathematics, an annulus (the Latin word for "little ring", with plural annuli) is a ring-shaped object, especially a region bounded by two concentric circles. The adjectival form is annular (as in annular eclipse).
The open annulus is topologically equivalent to both the open cylinder S1 × (0,1) and the punctured plane.
The area of an annulus is the difference in the areas of the larger circle of radius R and the smaller one of radius r:
A = πR² − πr² = π(R² − r²).
The area of an annulus can also be obtained from the length of the longest line segment that can lie completely inside the annulus, 2d in the accompanying diagram. This can be proven by the Pythagorean theorem: the longest such segment is tangent to the smaller circle and forms a right angle with its radius at the point of tangency. Therefore d and r are the legs of a right triangle with hypotenuse R, so R² = r² + d², and the area is given by:
A = π(R² − r²) = πd².
The area can also be obtained via calculus by dividing the annulus up into an infinite number of annuli of infinitesimal width dρ and area 2πρ dρ and then integrating from ρ = r to ρ = R:
A = ∫_r^R 2πρ dρ = π(R² − r²).
The area of an annulus sector of angle θ, with θ measured in radians, is given by:
A = (θ/2)(R² − r²).
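A small sketch, added here for illustration and not part of the original text, implements these area formulas:

```python
import math

def annulus_area(R, r):
    """Area of an annulus with outer radius R and inner radius r."""
    return math.pi * (R**2 - r**2)

def annulus_sector_area(R, r, theta):
    """Area of an annulus sector of angle theta, in radians."""
    return 0.5 * theta * (R**2 - r**2)

R, r = 5.0, 3.0
d = math.sqrt(R**2 - r**2)   # half the longest segment that fits inside the annulus
print(annulus_area(R, r), math.pi * d**2)       # both ~50.265
print(annulus_sector_area(R, r, math.pi / 2))   # quarter annulus, ~12.566
```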
In complex analysis an annulus ann(a; r, R) in the complex plane is an open region defined by:
r < |z − a| < R.
If r is 0, the region is known as the punctured disk of radius R around the point a.
As a subset of the complex plane, an annulus can be considered as a Riemann surface. The complex structure of an annulus depends only on the ratio r/R. Each annulus ann(a; r, R) can be holomorphically mapped to a standard one centered at the origin and with outer radius 1 by the map
z ↦ (z − a)/R.
The inner radius is then r/R < 1.
The Hadamard three-circle theorem is a statement about the maximum value a holomorphic function may take inside an annulus.
- Annulus theorem (or conjecture)
- Spherical shell
- List of geometric shapes
Chapter 2: Electrostatic Potential and Capacitance
Chapter 3: Current Electricity
Chapter 4: Moving Charges and Magnetism
Chapter 5: Magnetism and Matter
Chapter 6: Electromagnetic Induction
Chapter 7: Alternating Current
Chapter 8: Electromagnetic Waves
Chapter 9: Ray Optics and Optical Instruments
Chapter 10: Wave Optics
Chapter 11: Dual Nature of Radiation and Matter
Chapter 12: Atoms
Chapter 13: Nuclei
Chapter 14: Semiconductor Electronics: Materials, Devices and Simple Circuits
Chapter 15: Communication Systems
Chapter 1: Electric Charge and Fields
NCERT solutions for Class 12 Physics Textbook Chapter 1 Electric Charge and Fields Exercise [Pages 46 - 50]
What is the force between two small charged spheres having charges of 2 × 10−7 C and 3 × 10−7 C placed 30 cm apart in air?
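A quick numerical check of this first exercise (an illustrative sketch added here, not one of the NCERT solutions) applies Coulomb's law F = kq₁q₂/r²:

```python
k = 8.99e9              # Coulomb constant, N m^2 C^-2
q1, q2 = 2e-7, 3e-7     # charges, C
r = 0.30                # separation, m

F = k * q1 * q2 / r**2
print(F)   # ~6.0e-3 N, repulsive because both charges have the same sign
```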
The electrostatic force on a small sphere of charge 0.4 μC due to another small sphere of charge − 0.8 μC in air is 0.2 N.
(a) What is the distance between the two spheres?
(b) What is the force on the second sphere due to the first?
Check that the ratio ke²/(Gmₑmₚ) is dimensionless. Look up a Table of Physical Constants and determine the value of this ratio. What does the ratio signify?
Explain the meaning of the statement ‘electric charge of a body is quantised’.
Why can one ignore quantisation of electric charge when dealing with macroscopic i.e., large scale charges?
When a glass rod is rubbed with a silk cloth, charges appear on both. A similar phenomenon is observed with many other pairs of bodies. Explain how this observation is consistent with the law of conservation of charge.
Four point charges qA = 2 μC, qB = −5 μC, qC = 2 μC, and qD = −5 μC are located at the corners of a square ABCD of side 10 cm. What is the force on a charge of 1 μC placed at the centre of the square?
An electrostatic field line is a continuous curve. That is, a field line cannot have sudden breaks. Why not?
Why do the electric field lines never cross each other?
Two point charges qA = 3 μC and qB = −3 μC are located 20 cm apart in vacuum.
(a) What is the electric field at the midpoint O of the line AB joining the two charges?
(b) If a negative test charge of magnitude 1.5 × 10−9 C is placed at this point, what is the force experienced by the test charge?
A system has two charges qA = 2.5 × 10−7 C and qB = −2.5 × 10−7 C located at points A: (0, 0, − 15 cm) and B: (0, 0, + 15 cm), respectively. What are the total charge and electric dipole moment of the system?
An electric dipole with dipole moment 4 × 10−9 C m is aligned at 30° with the direction of a uniform electric field of magnitude 5 × 10⁴ N C−1. Calculate the magnitude of the torque acting on the dipole.
A polythene piece rubbed with wool is found to have a negative charge of 3 × 10−7 C.
(a) Estimate the number of electrons transferred (from which to which?)
(b) Is there a transfer of mass from wool to polythene?
Two insulated charged copper spheres A and B have their centers separated by a distance of 50 cm. What is the mutual force of electrostatic repulsion if the charge on each is 6.5 × 10−7 C? The radii of A and B are negligible compared to the distance of separation.
What is the force of repulsion if each sphere is charged double the above amount, and the distance between them is halved?
Suppose the spheres A and B in Exercise 1.12 have identical sizes. A third sphere of the same size but uncharged is brought in contact with the first, then brought in contact with the second, and finally removed from both. What is the new force of repulsion between A and B?
The figure shows tracks of three charged particles in a uniform electrostatic field. Give the signs of the three charges. Which particle has the highest charge to mass ratio?
Consider a uniform electric field E = 3 × 10³ `hat"I"` N/C.
(a) What is the flux of this field through a square of 10 cm on a side whose plane is parallel to the yz plane?
(b) What is the flux through the same square if the normal to its plane makes a 60° angle with the x-axis?
What is the net flux of the uniform electric field of Exercise 1.15 through a cube of side 20 cm oriented so that its faces are parallel to the coordinate planes?
Careful measurement of the electric field at the surface of a black box indicates that the net outward flux through the surface of the box is 8.0 × 10³ N m²/C.
(a) What is the net charge inside the box?
(b) If the net outward flux through the surface of the box were zero, could you conclude that there were no charges inside the box? Why or Why not?
A point charge +10 μC is a distance 5 cm directly above the centre of a square of side 10 cm, as shown in the Figure. What is the magnitude of the electric flux through the square? (Hint: Think of the square as one face of a cube with edge 10 cm.)
A point charge of 2.0 μC is at the centre of a cubic Gaussian surface 9.0 cm on edge. What is the net electric flux through the surface?
A point charge causes an electric flux of −1.0 × 103 Nm2/C to pass through a spherical Gaussian surface of 10.0 cm radius centred on the charge.
(a) If the radius of the Gaussian surface were doubled, how much flux would pass through the surface?
(b) What is the value of the point charge?
A conducting sphere of radius 10 cm has an unknown charge. If the electric field 20 cm from the centre of the sphere is 1.5 × 103 N/C and points radially inward, what is the net charge on the sphere?
A uniformly charged conducting sphere of 2.4 m diameter has a surface charge density of 80.0 μC/m2.
(a) Find the charge on the sphere.
(b) What is the total electric flux leaving the surface of the sphere?
An infinite line charge produces a field of 9 × 10⁴ N/C at a distance of 2 cm. Calculate the linear charge density.
Two large, thin metal plates are parallel and close to each other. On their inner faces, the plates have surface charge densities of opposite signs and of magnitude 17.0 × 10−22 C/m2. What is E: (a) in the outer region of the first plate, (b) in the outer region of the second plate, and (c) between the plates?
An oil drop of 12 excess electrons is held stationary under a constant electric field of 2.55 × 10⁴ N C−1 (Millikan’s oil drop experiment). The density of the oil is 1.26 g cm−3. Estimate the radius of the drop. (g = 9.81 m s−2; e = 1.60 × 10−19 C).
Which among the curves shown in the fig. cannot possibly represent electrostatic field lines?
In a certain region of space, electric field is along the z-direction throughout. The magnitude of electric field is, however, not constant but increases uniformly along the positive z-direction, at the rate of 10⁵ N C−1 per metre. What are the force and torque experienced by a system having a total dipole moment equal to 10−7 C m in the negative z-direction?
(a) A conductor A with a cavity as shown in Fig (a) is given a charge Q. Show that the entire charge must appear on the outer surface of the conductor.
(b) Another conductor B with charge q is inserted into the cavity keeping B insulated from A. Show that the total charge on the outside surface of A is Q + q [Fig. (b)].
(c) A sensitive instrument is to be shielded from the strong electrostatic fields in its environment. Suggest a possible way.
A hollow charged conductor has a tiny hole cut into its surface. Show that the electric field in the hole is `(sigma/(2epsilon_0)) hat"n"`, where `hat"n"` is the unit vector in the outward normal direction, and `sigma` is the surface charge density near the hole.
Obtain the formula for the electric field due to a long thin wire of uniform linear charge density λ without using Gauss’s law. [Hint: Use Coulomb’s law directly and evaluate the necessary integral.]
It is now believed that protons and neutrons (which constitute nuclei of ordinary matter) are themselves built out of more elementary units called quarks. A proton and a neutron consist of three quarks each. Two types of quarks, the so called ‘up’ quark (denoted by u) of charge (+2/3) e, and the ‘down’ quark (denoted by d) of charge (−1/3) e, together with electrons build up ordinary matter. (Quarks of other types have also been found which give rise to different unusual varieties of matter.) Suggest a possible quark composition of a proton and neutron.
(a) Consider an arbitrary electrostatic field configuration. A small test charge is placed at a null point (i.e., where E = 0) of the configuration. Show that the equilibrium of the test charge is necessarily unstable.
(b) Verify this result for the simple configuration of two charges of the same magnitude and sign placed a certain distance apart.
A particle of mass m and charge (−q) enters the region between the two charged plates initially moving along the x-axis with speed vx (like particle 1 in the fig.). The length of the plate is L and a uniform electric field E is maintained between the plates. Show that the vertical deflection of the particle at the far edge of the plate is qEL²/(2m`"v"_"x"^2`).
Suppose that the particle is an electron projected with velocity vx = 2.0 × 10⁶ m s−1. If E between the plates separated by 0.5 cm is 9.1 × 10² N/C, where will the electron strike the upper plate? (|e| = 1.6 × 10−19 C, me = 9.1 × 10−31 kg)
NCERT solutions for Class 12 Physics Textbook chapter 1 - Electric Charge and Fields
NCERT solutions for Class 12 Physics Textbook chapter 1 (Electric Charge and Fields) include all questions with solutions and detailed explanations. This will clear students' doubts about any question and improve application skills while preparing for board exams. The detailed, step-by-step solutions will help you understand the concepts better and clear up any confusion. Shaalaa.com presents the CBSE Class 12 Physics Textbook solutions in a manner that helps students grasp basic concepts better and faster.
Further, we at Shaalaa.com provide such solutions so that students can prepare for written exams. NCERT textbook solutions can be a core aid for self-study and act as perfect self-help guidance for students.
Concepts covered in Class 12 Physics Textbook chapter 1 Electric Charge and Fields are Gauss’s Law, Physical Significance of Electric Field, Electric Field Due to a System of Charges, Charging by Induction, Electric Field Due to a Point Charge, Uniformly Charged Infinite Plane Sheet and Uniformly Charged Thin Spherical Shell (Field Inside and Outside), Applications of Gauss’s Law, Electric Flux, Dipole in a Uniform External Field, Electric Dipole, Electric Field Lines, Introduction of Electric Field, Continuous Distribution of Charges, Superposition Principle of Forces, Superposition Principle - Forces Between Multiple Charges, Force Between Two Point Charges, Coulomb’s Law - Force Between Two Point Charges, Basic Properties of Electric Charge, Electric Charges.
Using the NCERT Class 12 solutions for the Electric Charge and Fields exercises is an easy way for students to prepare for the exams, as the solutions are arranged chapter-wise and page-wise. The questions included in the NCERT Solutions are important questions that can be asked in the final exam. Most CBSE Class 12 students prefer NCERT Textbook Solutions to score more in the exam.
Get the free view of the chapter 1 Electric Charge and Fields Class 12 extra questions for the Class 12 Physics Textbook, and use Shaalaa.com to keep it handy for your exam preparation. |
The lipid bilayer (or phospholipid bilayer) is a thin polar membrane made of two layers of lipid molecules. These membranes are flat sheets that form a continuous barrier around all cells. The cell membranes of almost all organisms and many viruses are made of a lipid bilayer, as are the nuclear membrane surrounding the cell nucleus, and other membranes surrounding sub-cellular structures. The lipid bilayer is the barrier that keeps ions, proteins and other molecules where they are needed and prevents them from diffusing into areas where they should not be. Lipid bilayers are ideally suited to this role because, even though they are only a few nanometers in width, they are impermeable to most water-soluble (hydrophilic) molecules. Bilayers are particularly impermeable to ions, which allows cells to regulate salt concentrations and pH by transporting ions across their membranes using proteins called ion pumps.
Biological bilayers are usually composed of amphiphilic phospholipids that have a hydrophilic phosphate head and a hydrophobic tail consisting of two fatty acid chains. Phospholipids with certain head groups can alter the surface chemistry of a bilayer and can, for example, serve as signals as well as "anchors" for other molecules in the membranes of cells. Just like the heads, the tails of lipids can also affect membrane properties, for instance by determining the phase of the bilayer. The bilayer can adopt a solid gel phase state at lower temperatures but undergo phase transition to a fluid state at higher temperatures, and the chemical properties of the lipids' tails influence at which temperature this happens. The packing of lipids within the bilayer also affects its mechanical properties, including its resistance to stretching and bending. Many of these properties have been studied with the use of artificial "model" bilayers produced in a lab. Vesicles made by model bilayers have also been used clinically to deliver drugs.
Biological membranes typically include several types of molecules other than phospholipids. A particularly important example in animal cells is cholesterol, which helps strengthen the bilayer and decrease its permeability. Cholesterol also helps regulate the activity of certain integral membrane proteins. Integral membrane proteins function when incorporated into a lipid bilayer, and they are held tightly to lipid bilayer with the help of an annular lipid shell. Because bilayers define the boundaries of the cell and its compartments, these membrane proteins are involved in many intra- and inter-cellular signaling processes. Certain kinds of membrane proteins are involved in the process of fusing two bilayers together. This fusion allows the joining of two distinct structures as in the fertilization of an egg by sperm or the entry of a virus into a cell. Because lipid bilayers are quite fragile and invisible in a traditional microscope, they are a challenge to study. Experiments on bilayers often require advanced techniques like electron microscopy and atomic force microscopy.
When phospholipids are exposed to water, they self-assemble into a two-layered sheet with the hydrophobic tails pointing toward the center of the sheet. This arrangement results in two "leaflets" that are each a single molecular layer. The center of this bilayer contains almost no water and excludes molecules like sugars or salts that dissolve in water. The assembly process is driven by interactions between hydrophobic molecules (also called the hydrophobic effect). An increase in interactions between hydrophobic molecules (causing clustering of hydrophobic regions) allows water molecules to bond more freely with each other, increasing the entropy of the system. This complex process includes non-covalent interactions such as van der Waals forces, electrostatic and hydrogen bonds.
The lipid bilayer is very thin compared to its lateral dimensions. If a typical mammalian cell (diameter ~10 micrometers) were magnified to the size of a watermelon (~1 ft/30 cm), the lipid bilayer making up the plasma membrane would be about as thick as a piece of office paper. Despite being only a few nanometers thick, the bilayer is composed of several distinct chemical regions across its cross-section. These regions and their interactions with the surrounding water have been characterized over the past several decades with x-ray reflectometry, neutron scattering and nuclear magnetic resonance techniques.
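The watermelon comparison is easy to verify with rough arithmetic; here is a minimal Python sketch using the approximate figures quoted above (the exact numbers are only illustrative).
cell_diameter = 10e-6        # m, typical mammalian cell
bilayer_thickness = 5e-9     # m, a few nanometers
watermelon = 0.30            # m, roughly 1 ft
paper = 1e-4                 # m, typical office paper thickness
scale = watermelon / cell_diameter       # magnification factor, about 30,000x
print(bilayer_thickness * scale)         # about 1.5e-4 m = 150 micrometers, paper-like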
The first region on either side of the bilayer is the hydrophilic headgroup. This portion of the membrane is completely hydrated and is typically around 0.8-0.9 nm thick. In phospholipid bilayers the phosphate group is located within this hydrated region, approximately 0.5 nm outside the hydrophobic core. In some cases, the hydrated region can extend much further, for instance in lipids with a large protein or long sugar chain grafted to the head. One common example of such a modification in nature is the lipopolysaccharide coat on a bacterial outer membrane, which helps retain a water layer around the bacterium to prevent dehydration.
Next to the hydrated region is an intermediate region that is only partially hydrated. This boundary layer is approximately 0.3 nm thick. Within this short distance, the water concentration drops from 2M on the headgroup side to nearly zero on the tail (core) side. The hydrophobic core of the bilayer is typically 3-4 nm thick, but this value varies with chain length and chemistry. Core thickness also varies significantly with temperature, in particular near a phase transition.
In many naturally occurring bilayers, the compositions of the inner and outer membrane leaflets are different. In human red blood cells, the inner (cytoplasmic) leaflet is composed mostly of phosphatidylethanolamine, phosphatidylserine and phosphatidylinositol and its phosphorylated derivatives. By contrast, the outer (extracellular) leaflet is based on phosphatidylcholine, sphingomyelin and a variety of glycolipids. In some cases, this asymmetry is based on where the lipids are made in the cell and reflects their initial orientation. The biological functions of lipid asymmetry are imperfectly understood, although it is clear that it is used in several different situations. For example, when a cell undergoes apoptosis, the phosphatidylserine -- normally localised to the cytoplasmic leaflet -- is transferred to the outer surface: There, it is recognised by a macrophage that then actively scavenges the dying cell.
Lipid asymmetry arises, at least in part, from the fact that most phospholipids are synthesised and initially inserted into the inner monolayer: those that constitute the outer monolayer are then transported from the inner monolayer by a class of enzymes called flippases. Other lipids, such as sphingomyelin, appear to be synthesised at the external leaflet. Flippases are members of a larger family of lipid transport molecules that also includes floppases, which transfer lipids in the opposite direction, and scramblases, which randomize lipid distribution across lipid bilayers (as in apoptotic cells). In any case, once lipid asymmetry is established, it does not normally dissipate quickly because spontaneous flip-flop of lipids between leaflets is extremely slow.
It is possible to mimic this asymmetry in the laboratory in model bilayer systems. Certain types of very small artificial vesicle will automatically make themselves slightly asymmetric, although the mechanism by which this asymmetry is generated is very different from that in cells. By utilizing two different monolayers in Langmuir-Blodgett deposition or a combination of Langmuir-Blodgett and vesicle rupture deposition it is also possible to synthesize an asymmetric planar bilayer. This asymmetry may be lost over time as lipids in supported bilayers can be prone to flip-flop.
At a given temperature a lipid bilayer can exist in either a liquid or a gel (solid) phase. All lipids have a characteristic temperature at which they transition (melt) from the gel to liquid phase. In both phases the lipid molecules are prevented from flip-flopping across the bilayer, but in liquid phase bilayers a given lipid will exchange locations with its neighbor millions of times a second. This random walk exchange allows lipid to diffuse and thus wander across the surface of the membrane. Unlike liquid phase bilayers, the lipids in a gel phase bilayer have less mobility.
The phase behavior of lipid bilayers is determined largely by the strength of the attractive Van der Waals interactions between adjacent lipid molecules. Longer-tailed lipids have more area over which to interact, increasing the strength of this interaction and, as a consequence, decreasing the lipid mobility. Thus, at a given temperature, a short-tailed lipid will be more fluid than an otherwise identical long-tailed lipid. Transition temperature can also be affected by the degree of unsaturation of the lipid tails. An unsaturated double bond can produce a kink in the alkane chain, disrupting the lipid packing. This disruption creates extra free space within the bilayer that allows additional flexibility in the adjacent chains. An example of this effect can be noted in everyday life as butter, which has a large percentage saturated fats, is solid at room temperature while vegetable oil, which is mostly unsaturated, is liquid.
Most natural membranes are a complex mixture of different lipid molecules. If some of the components are liquid at a given temperature while others are in the gel phase, the two phases can coexist in spatially separated regions, rather like an iceberg floating in the ocean. This phase separation plays a critical role in biochemical phenomena because membrane components such as proteins can partition into one or the other phase and thus be locally concentrated or activated. One particularly important component of many mixed phase systems is cholesterol, which modulates bilayer permeability, mechanical strength, and biochemical interactions.
While lipid tails primarily modulate bilayer phase behavior, it is the headgroup that determines the bilayer surface chemistry. Most natural bilayers are composed primarily of phospholipids, but sphingolipids and sterols such as cholesterol are also important components. Of the phospholipids, the most common headgroup is phosphatidylcholine (PC), accounting for about half the phospholipids in most mammalian cells. PC is a zwitterionic headgroup, as it has a negative charge on the phosphate group and a positive charge on the amine but, because these local charges balance, no net charge.
Other headgroups are also present to varying degrees and can include phosphatidylserine (PS) phosphatidylethanolamine (PE) and phosphatidylglycerol (PG). These alternate headgroups often confer specific biological functionality that is highly context-dependent. For instance, PS presence on the extracellular membrane face of erythrocytes is a marker of cell apoptosis, whereas PS in growth plate vesicles is necessary for the nucleation of hydroxyapatite crystals and subsequent bone mineralization. Unlike PC, some of the other headgroups carry a net charge, which can alter the electrostatic interactions of small molecules with the bilayer.
The primary role of the lipid bilayer in biology is to separate aqueous compartments from their surroundings. Without some form of barrier delineating "self" from "non-self," it is difficult to even define the concept of an organism or of life. This barrier takes the form of a lipid bilayer in all known life forms except for a few species of archaea that utilize a specially adapted lipid monolayer. It has even been proposed that the very first form of life may have been a simple lipid vesicle with virtually its sole biosynthetic capability being the production of more phospholipids. The partitioning ability of the lipid bilayer is based on the fact that hydrophilic molecules cannot easily cross the hydrophobic bilayer core, as discussed in Transport across the bilayer below. The nucleus, mitochondria and chloroplasts have two lipid bilayers, while other sub-cellular structures are surrounded by a single lipid bilayer (such as the plasma membrane, endoplasmic reticula, Golgi apparatus and lysosomes). See Organelle.
Prokaryotes have only one lipid bilayer- the cell membrane (also known as the plasma membrane). Many prokaryotes also have a cell wall, but the cell wall is composed of proteins or long chain carbohydrates, not lipids. In contrast, eukaryotes have a range of organelles including the nucleus, mitochondria, lysosomes and endoplasmic reticulum. All of these sub-cellular compartments are surrounded by one or more lipid bilayers and, together, typically comprise the majority of the bilayer area present in the cell. In liver hepatocytes for example, the plasma membrane accounts for only two percent of the total bilayer area of the cell, whereas the endoplasmic reticulum contains more than fifty percent and the mitochondria a further thirty percent.
Probably the most familiar form of cellular signaling is synaptic transmission, whereby a nerve impulse that has reached the end of one neuron is conveyed to an adjacent neuron via the release of neurotransmitters. This transmission is made possible by the action of synaptic vesicles loaded with the neurotransmitters to be released. These vesicles fuse with the cell membrane at the pre-synaptic terminal and release its contents to the exterior of the cell. The contents then diffuse across the synapse to the post-synaptic terminal.
Lipid bilayers are also involved in signal transduction through their role as the home of integral membrane proteins. This is an extremely broad and important class of biomolecule. It is estimated that up to a third of the human proteome are membrane proteins. Some of these proteins are linked to the exterior of the cell membrane. An example of this is the CD59 protein, which identifies cells as "self" and thus inhibits their destruction by the immune system. The HIV virus evades the immune system in part by grafting these proteins from the host membrane onto its own surface. Alternatively, some membrane proteins penetrate all the way through the bilayer and serve to relay individual signal events from the outside to the inside of the cell. The most common class of this type of protein is the G protein-coupled receptor (GPCR). GPCRs are responsible for much of the cell's ability to sense its surroundings and, because of this important role, approximately 40% of all modern drugs are targeted at GPCRs.
In addition to protein- and solution-mediated processes, it is also possible for lipid bilayers to participate directly in signaling. A classic example of this is phosphatidylserine-triggered phagocytosis. Normally, phosphatidylserine is asymmetrically distributed in the cell membrane and is present only on the interior side. During programmed cell death a protein called a scramblase equilibrates this distribution, displaying phosphatidylserine on the extracellular bilayer face. The presence of phosphatidylserine then triggers phagocytosis to remove the dead or dying cell.
The lipid bilayer is a very difficult structure to study because it is so thin and fragile. In spite of these limitations dozens of techniques have been developed over the last seventy years to allow investigations of its structure and function.
Electrical measurements are a straightforward way to characterize an important function of a bilayer: its ability to segregate and prevent the flow of ions in solution. By applying a voltage across the bilayer and measuring the resulting current, the resistance of the bilayer is determined. This resistance is typically quite high (10^8 Ω·cm² or more) since the hydrophobic core is impermeable to charged species. The presence of even a few nanometer-scale holes results in a dramatic increase in current. The sensitivity of this system is such that even the activity of single ion channels can be resolved.
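To get a feel for why this is such a sensitive probe, the sketch below estimates the leak current through an intact patch of bilayer, assuming the specific resistance of 10^8 Ω·cm² quoted above, an illustrative 100 µm² patch, and a 100 mV holding potential.
specific_resistance = 1e8     # ohm * cm^2, intact bilayer (value quoted above)
patch_area = 1e-6             # cm^2, i.e. 100 square micrometers (illustrative)
voltage = 0.1                 # volts
resistance = specific_resistance / patch_area    # about 1e14 ohms
leak_current = voltage / resistance              # about 1e-15 A, femtoamp range
print(resistance, leak_current)
# A single open ion channel typically passes currents in the picoamp range,
# roughly a thousand times larger, so single-channel events stand out clearly.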
Electrical measurements do not provide an actual picture like imaging with a microscope can. Lipid bilayers cannot be seen in a traditional microscope because they are too thin. In order to see bilayers, researchers often use fluorescence microscopy. A sample is excited with one wavelength of light and observed in a different wavelength, so that only fluorescent molecules with a matching excitation and emission profile will be seen. Natural lipid bilayers are not fluorescent, so a dye is used that attaches to the desired molecules in the bilayer. Resolution is usually limited to a few hundred nanometers, much smaller than a typical cell but much larger than the thickness of a lipid bilayer.
Electron microscopy offers a higher resolution image. In an electron microscope, a beam of focused electrons interacts with the sample rather than a beam of light as in traditional microscopy. In conjunction with rapid freezing techniques, electron microscopy has also been used to study the mechanisms of inter- and intracellular transport, for instance in demonstrating that exocytotic vesicles are the means of chemical release at synapses.
31P-NMR(nuclear magnetic resonance) spectroscopy is widely used for studies of phospholipid bilayers and biological membranes in native conditions. The analysis of 31P-NMR spectra of lipids could provide a wide range of information about lipid bilayer packing, phase transitions (gel phase, physiological liquid crystal phase, ripple phases, non bilayer phases), lipid head group orientation/dynamics, and elastic properties of pure lipid bilayer and as a result of binding of proteins and other biomolecules.
A new method to study lipid bilayers is Atomic force microscopy (AFM). Rather than using a beam of light or particles, a very small sharpened tip scans the surface by making physical contact with the bilayer and moving across it, like a record player needle. AFM is a promising technique because it has the potential to image with nanometer resolution at room temperature and even under water or physiological buffer, conditions necessary for natural bilayer behavior. Utilizing this capability, AFM has been used to examine dynamic bilayer behavior including the formation of transmembrane pores (holes) and phase transitions in supported bilayers. Another advantage is that AFM does not require fluorescent or isotopic labeling of the lipids, since the probe tip interacts mechanically with the bilayer surface. Because of this, the same scan can image both lipids and associated proteins, sometimes even with single-molecule resolution. AFM can also probe the mechanical nature of lipid bilayers.
Lipid bilayers exhibit high levels of birefringence where the refractive index in the plane of the bilayer differs from that perpendicular by as much as 0.1 refractive index units. This has been used to characterise the degree of order and disruption in bilayers using dual polarisation interferometry to understand mechanisms of protein interaction.
Lipid bilayers are complicated molecular systems with many degrees of freedom. Thus atomistic simulation of membranes, and in particular ab initio calculation of their properties, is difficult and computationally expensive. Quantum chemical calculations have recently been performed successfully to estimate dipole and quadrupole moments of lipid membranes.
Most polar molecules have low solubility in the hydrocarbon core of a lipid bilayer and, as a consequence, have low permeability coefficients across the bilayer. This effect is particularly pronounced for charged species, which have even lower permeability coefficients than neutral polar molecules.Anions typically have a higher rate of diffusion through bilayers than cations. Compared to ions, water molecules actually have a relatively large permeability through the bilayer, as evidenced by osmotic swelling. When a cell or vesicle with a high interior salt concentration is placed in a solution with a low salt concentration it will swell and eventually burst. Such a result would not be observed unless water was able to pass through the bilayer with relative ease. The anomalously large permeability of water through bilayers is still not completely understood and continues to be the subject of active debate. Small uncharged apolar molecules diffuse through lipid bilayers many orders of magnitude faster than ions or water. This applies both to fats and organic solvents like chloroform and ether. Regardless of their polar character larger molecules diffuse more slowly across lipid bilayers than small molecules.
Two special classes of protein deal with the ionic gradients found across cellular and sub-cellular membranes in nature- ion channels and ion pumps. Both pumps and channels are integral membrane proteins that pass through the bilayer, but their roles are quite different. Ion pumps are the proteins that build and maintain the chemical gradients by utilizing an external energy source to move ions against the concentration gradient to an area of higher chemical potential. The energy source can be ATP, as is the case for the Na+-K+ ATPase. Alternatively, the energy source can be another chemical gradient already in place, as in the Ca2+/Na+ antiporter. It is through the action of ion pumps that cells are able to regulate pH via the pumping of protons.
In contrast to ion pumps, ion channels do not build chemical gradients but rather dissipate them in order to perform work or send a signal. Probably the most familiar and best studied example is the voltage-gated Na+ channel, which allows conduction of an action potential along neurons. All ion channels have some sort of trigger or "gating" mechanism. In the previous example it was electrical bias, but other channels can be activated by binding a molecular agonist or through a conformational change in another nearby protein.
Some molecules or particles are too large or too hydrophilic to pass through a lipid bilayer. Other molecules could pass through the bilayer but must be transported rapidly in such large numbers that channel-type transport is impractical. In both cases, these types of cargo can be moved across the cell membrane through fusion or budding of vesicles. When a vesicle is produced inside the cell and fuses with the plasma membrane to release its contents into the extracellular space, this process is known as exocytosis. In the reverse process, a region of the cell membrane will dimple inwards and eventually pinch off, enclosing a portion of the extracellular fluid to transport it into the cell. Endocytosis and exocytosis rely on very different molecular machinery to function, but the two processes are intimately linked and could not work without each other. The primary mechanism of this interdependence is the large amount of lipid material involved. In a typical cell, an area of bilayer equivalent to the entire plasma membrane will travel through the endocytosis/exocytosis cycle in about half an hour. If these two processes were not balancing each other, the cell would either balloon outward to an unmanageable size or completely deplete its plasma membrane within a short time.
Exocytosis in prokaryotes: Membrane vesicular exocytosis, popularly known as membrane vesicle trafficking, a Nobel prize-winning (year, 2013) process, is traditionally regarded as a prerogative of eukaryotic cells. This myth was however broken with the revelation that nanovesicles, popularly known as bacterial outer membrane vesicles, released by gram-negative microbes, translocate bacterial signal molecules to host or target cells to carry out multiple processes in favour of the secreting microbe e.g., in host cell invasion and microbe-environment interactions, in general.
Electroporation is the rapid increase in bilayer permeability induced by the application of a large artificial electric field across the membrane. Experimentally, electroporation is used to introduce hydrophilic molecules into cells. It is a particularly useful technique for large highly charged molecules such as DNA, which would never passively diffuse across the hydrophobic bilayer core. Because of this, electroporation is one of the key methods of transfection as well as bacterial transformation. It has even been proposed that electroporation resulting from lightning strikes could be a mechanism of natural horizontal gene transfer.
This increase in permeability primarily affects transport of ions and other hydrated species, indicating that the mechanism is the creation of nm-scale water-filled holes in the membrane. Although electroporation and dielectric breakdown both result from application of an electric field, the mechanisms involved are fundamentally different. In dielectric breakdown the barrier material is ionized, creating a conductive pathway. The material alteration is thus chemical in nature. In contrast, during electroporation the lipid molecules are not chemically altered but simply shift position, opening up a pore that acts as the conductive pathway through the bilayer as it is filled with water.
Lipid bilayers are large enough structures to have some of the mechanical properties of liquids or solids. The area compression modulus Ka, bending modulus Kb, and edge energy Λ can be used to describe them. Solid lipid bilayers also have a shear modulus, but like any liquid, the shear modulus is zero for fluid bilayers. These mechanical properties affect how the membrane functions. Ka and Kb affect the ability of proteins and small molecules to insert into the bilayer, and bilayer mechanical properties have been shown to alter the function of mechanically activated ion channels. Bilayer mechanical properties also govern what types of stress a cell can withstand without tearing. Although lipid bilayers can easily bend, most cannot stretch more than a few percent before rupturing.
As discussed in the Structure and organization section, the hydrophobic attraction of lipid tails in water is the primary force holding lipid bilayers together. Thus, the elastic modulus of the bilayer is primarily determined by how much extra area is exposed to water when the lipid molecules are stretched apart. It is not surprising given this understanding of the forces involved that studies have shown that Ka varies strongly with osmotic pressure but only weakly with tail length and unsaturation. Because the forces involved are so small, it is difficult to experimentally determine Ka. Most techniques require sophisticated microscopy and very sensitive measurement equipment.
In contrast to Ka, which is a measure of how much energy is needed to stretch the bilayer, Kb is a measure of how much energy is needed to bend or flex the bilayer. Formally, bending modulus is defined as the energy required to deform a membrane from its intrinsic curvature to some other curvature. Intrinsic curvature is defined by the ratio of the diameter of the head group to that of the tail group. For two-tailed PC lipids, this ratio is nearly one so the intrinsic curvature is nearly zero. If a particular lipid has too large a deviation from zero intrinsic curvature it will not form a bilayer and will instead form other phases such as micelles or inverted micelles. Addition of small hydrophilic molecules like sucrose into mixed lipid lamellar liposomes made from galactolipid-rich thylakoid membranes destabilises bilayers into micellar phase. Typically, Kb is not measured experimentally but rather is calculated from measurements of Ka and bilayer thickness, since the three parameters are related.
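One commonly used relation for this calculation (an assumption here, not necessarily the form used in any particular study) is the polymer-brush estimate Kb ≈ Ka·t²/24, where t is the mechanical thickness of the bilayer. The sketch below applies it with values typical of a fluid phosphatidylcholine bilayer, chosen only for illustration.
Ka = 0.24          # N/m, area compression modulus (illustrative)
t = 2.7e-9         # m, mechanical thickness entering the model (illustrative)
kT = 4.1e-21       # J, thermal energy at room temperature
Kb = Ka * t**2 / 24            # polymer-brush estimate of the bending modulus
print(Kb, Kb / kT)             # about 7e-20 J, i.e. roughly 15-20 kT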
The edge energy Λ is a measure of how much energy it takes to expose a bilayer edge to water by tearing the bilayer or creating a hole in it. The origin of this energy is the fact that creating such an interface exposes some of the lipid tails to water, but the exact orientation of these border lipids is unknown. There is some evidence that both hydrophobic (tails straight) and hydrophilic (heads curved around) pores can coexist.
Fusion is the process by which two lipid bilayers merge, resulting in one connected structure. If this fusion proceeds completely through both leaflets of both bilayers, a water-filled bridge is formed and the solutions contained by the bilayers can mix. Alternatively, if only one leaflet from each bilayer is involved in the fusion process, the bilayers are said to be hemifused. Fusion is involved in many cellular processes, in particular in eukaryotes, since the eukaryotic cell is extensively sub-divided by lipid bilayer membranes. Exocytosis, fertilization of an egg by sperm and transport of waste products to the lysosome are a few of the many eukaryotic processes that rely on some form of fusion. Even the entry of pathogens can be governed by fusion, as many bilayer-coated viruses have dedicated fusion proteins to gain entry into the host cell.
There are four fundamental steps in the fusion process. First, the involved membranes must aggregate, approaching each other to within several nanometers. Second, the two bilayers must come into very close contact (within a few angstroms). To achieve this close contact, the two surfaces must become at least partially dehydrated, as the bound surface water normally present causes bilayers to strongly repel. The presence of ions, in particular divalent cations like magnesium and calcium, strongly affects this step. One of the critical roles of calcium in the body is regulating membrane fusion. Third, a destabilization must form at one point between the two bilayers, locally distorting their structures. The exact nature of this distortion is not known. One theory is that a highly curved "stalk" must form between the two bilayers. Proponents of this theory believe that it explains why phosphatidylethanolamine, a highly curved lipid, promotes fusion. Finally, in the last step of fusion, this point defect grows and the components of the two bilayers mix and diffuse away from the site of contact.
The situation is further complicated when considering fusion in vivo since biological fusion is almost always regulated by the action of membrane-associated proteins. The first of these proteins to be studied were the viral fusion proteins, which allow an enveloped virus to insert its genetic material into the host cell (enveloped viruses are those surrounded by a lipid bilayer; some others have only a protein coat). Eukaryotic cells also use fusion proteins, the best-studied of which are the SNAREs. SNARE proteins are used to direct all vesicular intracellular trafficking. Despite years of study, much is still unknown about the function of this protein class. In fact, there is still an active debate regarding whether SNAREs are linked to early docking or participate later in the fusion process by facilitating hemifusion.
In studies of molecular and cellular biology it is often desirable to artificially induce fusion. The addition of polyethylene glycol (PEG) causes fusion without significant aggregation or biochemical disruption. This procedure is now used extensively, for example by fusing B-cells with myeloma cells. The resulting "hybridoma" from this combination expresses a desired antibody as determined by the B-cell involved, but is immortalized due to the myeloma component. Fusion can also be artificially induced through electroporation in a process known as electrofusion. It is believed that this phenomenon results from the energetically active edges formed during electroporation, which can act as the local defect point to nucleate stalk growth between two bilayers.
Lipid bilayers can be created artificially in the lab to allow researchers to perform experiments that cannot be done with natural bilayers. They can also be used in the field of Synthetic Biology, to define the boundaries of artificial cells. These synthetic systems are called model lipid bilayers. There are many different types of model bilayers, each having experimental advantages and disadvantages. They can be made with either synthetic or natural lipids. Among the most common model systems are:
To date, the most successful commercial application of lipid bilayers has been the use of liposomes for drug delivery, especially for cancer treatment. (Note- the term "liposome" is in essence synonymous with "vesicle" except that vesicle is a general term for the structure whereas liposome refers to only artificial not natural vesicles) The basic idea of liposomal drug delivery is that the drug is encapsulated in solution inside the liposome then injected into the patient. These drug-loaded liposomes travel through the system until they bind at the target site and rupture, releasing the drug. In theory, liposomes should make an ideal drug delivery system since they can isolate nearly any hydrophilic drug, can be grafted with molecules to target specific tissues and can be relatively non-toxic since the body possesses biochemical pathways for degrading lipids.
The first generation of drug delivery liposomes had a simple lipid composition and suffered from several limitations. Circulation in the bloodstream was extremely limited due to both renal clearing and phagocytosis. Refinement of the lipid composition to tune fluidity, surface charge density, and surface hydration resulted in vesicles that adsorb fewer proteins from serum and thus are less readily recognized by the immune system. The most significant advance in this area was the grafting of polyethylene glycol (PEG) onto the liposome surface to produce "stealth" vesicles, which circulate over long times without immune or renal clearing.
The first stealth liposomes were passively targeted at tumor tissues. Because tumors induce rapid and uncontrolled angiogenesis they are especially "leaky" and allow liposomes to exit the bloodstream at a much higher rate than normal tissue would. More recently[when?] work has been undertaken to graft antibodies or other molecular markers onto the liposome surface in the hope of actively binding them to a specific cell or tissue type. Some examples of this approach are already in clinical trials.
Another potential application of lipid bilayers is the field of biosensors. Since the lipid bilayer is the barrier between the interior and exterior of the cell, it is also the site of extensive signal transduction. Researchers over the years have tried to harness this potential to develop a bilayer-based device for clinical diagnosis or bioterrorism detection. Progress has been slow in this area and, although a few companies have developed automated lipid-based detection systems, they are still targeted at the research community. These include Biacore (now GE Healthcare Life Sciences), which offers a disposable chip for utilizing lipid bilayers in studies of binding kinetics and Nanion Inc., which has developed an automated patch clamping system. Other, more exotic applications are also being pursued such as the use of lipid bilayer membrane pores for DNA sequencing by Oxford Nanolabs. To date, this technology has not proven commercially viable.
A supported lipid bilayer (SLB) as described above has achieved commercial success as a screening technique to measure the permeability of drugs. This parallel artificial membrane permeability assay (PAMPA) technique measures permeability across specifically formulated lipid cocktails found to be highly correlated with Caco-2 cultures, the gastrointestinal tract, the blood-brain barrier and skin.
By the early twentieth century scientists had come to believe that cells are surrounded by a thin oil-like barrier, but the structural nature of this membrane was not known. Two experiments in 1925 laid the groundwork to fill in this gap. By measuring the capacitance of erythrocyte solutions, Hugo Fricke determined that the cell membrane was 3.3 nm thick.
Although the results of this experiment were accurate, Fricke misinterpreted the data to mean that the cell membrane is a single molecular layer. Prof. Dr. Evert Gorter (1881-1954) and F. Grendel of Leiden University approached the problem from a different perspective, spreading the erythrocyte lipids as a monolayer on a Langmuir-Blodgett trough. When they compared the area of the monolayer to the surface area of the cells, they found a ratio of two to one. Later analyses showed several errors and incorrect assumptions with this experiment but, serendipitously, these errors canceled out and from this flawed data Gorter and Grendel drew the correct conclusion- that the cell membrane is a lipid bilayer.
This theory was confirmed through the use of electron microscopy in the late 1950s. Although he did not publish the first electron microscopy study of lipid bilayers J. David Robertson was the first to assert that the two dark electron-dense bands were the headgroups and associated proteins of two apposed lipid monolayers. In this body of work, Robertson put forward the concept of the "unit membrane." This was the first time the bilayer structure had been universally assigned to all cell membranes as well as organelle membranes.
Around the same time, the development of model membranes confirmed that the lipid bilayer is a stable structure that can exist independent of proteins. By "painting" a solution of lipid in organic solvent across an aperture, Mueller and Rudin were able to create an artificial bilayer and determine that this exhibited lateral fluidity, high electrical resistance and self-healing in response to puncture, all of which are properties of a natural cell membrane. A few years later, Alec Bangham showed that bilayers, in the form of lipid vesicles, could also be formed simply by exposing a dried lipid sample to water. This was an important advance, since it demonstrated that lipid bilayers form spontaneously via self assembly and do not require a patterned support structure.
In 1977, a totally synthetic bilayer membrane was prepared by Kunitake and Okahata from a single organic compound, didodecyldimethylammonium bromide. This showed clearly that a bilayer membrane can be assembled by van der Waals interactions. |
A variable is a name that refers to a value stored in memory; when that value is not meant to change, the name is treated as a constant. In other words, a variable is the name of the memory location where data is stored, and once a value is assigned to a variable, space for it is allocated in memory. A variable name is formed using a combination of letters, numbers, and the underscore character.
In this module, we will learn all about variables in Python. Following is the list of all topics that we are going to cover in this module:
Python does not have a specific command just to declare or create a variable; however, there are some rules that we need to keep in mind while creating Python variables.
A variable in Python is created as soon as we assign a value to it. Python also does not require specifying the data type of the variable unlike other programming languages.
There is no need for an explicit declaration to reserve memory. The assignment is done using the equal to (=) operator.
Example:
a = 10
b = "Intellipaat"
print(a)  # a is an int type variable because it has an int value in it
print(b)  # b is a string type variable as it has a string value in it
We can assign a single value to multiple variables as follows: a = b = c = 5 Also, we can assign multiple values to multiple variables as follows: a, b, c = 2, 25, 'abc'
Note: Python is a type inferred language, i.e., it automatically detects the type of the assigned variable.
Example 1:
test = 1
type(test)
Output: int
Example 2:
test1 = "String"
type(test1)
Output: str
After we have declared a variable, we can again declare it and assign a new value to it. Python interpreter discards the old value and only considers the new value. The type of the new value can be different than the type of the old value.
Example:
a = 1
print(a)
a = 'intellipaat'
print(a)
Output:
1
intellipaat
A variable that is declared inside a Python function or a module can only be used in that specific function or module. This kind of variable is known as a local variable. The Python interpreter will not recognize that variable outside that specific function or module and will throw an error if the variable is not also declared outside of that function.
Example:
f = 100
print(f)
def some_function():
    f = 'Intellipaat'
    print(f)
some_function()
print(f)
Output:
100
Intellipaat
100
Here, in this example, when the variable f is declared the second time inside the function named some_function, it becomes a local variable. Now, if we use that variable inside the function, there will be no issues as we can see that in the output of second print(f), it prints the value assigned to f in the function, that is, Intellipaat.
Whereas, when we try to print the value of f outside the function, it prints the value assigned to it outside the function as we can see that in the output of the first and the third print(f), it prints 100.
On the other hand, global variable in Python is a variable that can be used globally anywhere in the program. It can be used in any function or module, and even outside the functions, without having to re-declare it.
Example:
a = 100
print(a)
def some_function():
    global a
    print(a)
    a = 'Intellipaat'
some_function()
print(a)
Output:
100
100
Intellipaat
Here in this example, we have re-declared the variable a in the function as a global variable. Now, if we change the value of this variable inside the function and then print the value of this variable outside the function, then it will print the changed value as we can see in the output of the third print(a). Since variable a was declared globally, it can be used outside the function as well.
Python provides a feature to delete a variable when it is not in use, so as to free up space. Using the command del variable_name, we can delete any specific variable.
Example:
a = 10
print(a)
del a
print(a)
If we run the above program, the Python interpreter will throw the error NameError: name 'a' is not defined at the second print(a), since we have deleted the variable a using the del a command.
If we want to concatenate Python variables of different data types, let's say a number variable and a Python string variable, then we will have to convert the number to a string. If the number is not converted to a string before concatenating it with a string variable, Python will throw a TypeError.
Example:
a = 'Intellipaat'
b = 100
print(a + b)
Here, this block of code will throw a TypeError, as variable a is of string type and variable b is of number type. To remove this error, we have to convert the number to a string, as shown in the example below:
a = 'Intellipaat'
b = 100
print(a + str(b))
Output: Intellipaat100
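Equivalently, an f-string performs the conversion for us; this is just an alternative way of writing the example above:
a = 'Intellipaat'
b = 100
print(f"{a}{b}")  # the f-string converts b to its string form automatically
Output: Intellipaat100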
A constant is a type of variable whose value is not meant to change. In reality, we rarely use constants in Python. Constants are usually declared and assigned in a separate module/file.
Example:
# Declare constants in a separate file called constant.py
PI = 3.14
GRAVITY = 9.8
Then, they are imported to the main file.
# inside main.py we import the constants
import constant
print(constant.PI)
print(constant.GRAVITY)
This brings us to the end of this module in the Python tutorial. Now, if you are interested in knowing why Python is the most preferred language for data science, you can go through this Python for Data Science blog.
|
The right triangle altitude theorem - 8th grade (13y) - math problems
Number of problems found: 10
- Squares above sides
Two squares are constructed on two sides of triangle ABC. The area of the square on side BC is 25 cm². The height v_c to side AB is 3 cm long. The foot P of height v_c divides side AB in a 2:1 ratio. Side AC is longer than side BC. Calc
- Rhombus and inscribed circle
Given a rhombus with side a = 6 cm and inscribed-circle radius r = 2 cm, calculate the lengths of its two diagonals.
In rectangle ABCD with sides |AB| = 19 and |AD| = 16, a perpendicular is drawn from point A to the diagonal BD, meeting it at point P. Determine the ratio ?.
Given a rhombus of side length a = 19 cm, the points where the inscribed circle touches the rhombus divide each side into segments a1 = 5 cm and a2 = 14 cm. Calculate the radius r of the circle and the lengths of the diagonals of the rhombus.
- Right Δ
A right triangle has one leg of length 11 cm and a hypotenuse of length 61 cm. Calculate the height of the triangle.
- Area of RT
Calculate the area of a right triangle whose hypotenuse has length 14 and in which one hypotenuse segment (cut off by the altitude) has length 5.
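For instance, the "Area of RT" problem above follows directly from the right triangle altitude theorem (the altitude is the geometric mean of the two hypotenuse segments); a minimal Python sketch of the solution:
from math import sqrt
c = 14            # hypotenuse
p = 5             # one hypotenuse segment
q = c - p         # the other segment, 9
h = sqrt(p * q)   # altitude theorem: h^2 = p*q, so h = sqrt(45), about 6.708
area = c * h / 2  # about 46.96
print(h, area)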
- Proof PT
Can you easily prove the Pythagorean theorem using Euclid's theorems (the altitude and leg rules)? If so, do it.
Two tangents are drawn from point R to a circle with a radius of 41 cm. The distance between the two points of tangency is 16 cm. Calculate the distance from point R to the circle's centre.
- Circle in rhombus
A circle is inscribed in a rhombus. The points of tangency divide each side into parts of length 19 cm and 6 cm. Calculate the area of the circle.
- Conical area
A right-angled triangle has legs a = 12 and b = 19 enclosing the right angle. The hypotenuse is c. If the triangle rotates about side c as an axis, find the volume and surface area of the solid of revolution (two cones) created by this rotation. |
Westward expansion web sites: lesson plans, activities, and more. Lewis and Clark: a companion to Ken Burns' PBS film, this site provides background on the world of Lewis and Clark, an archive of their expedition, audio excerpts by historians, and a discussion of the Native American tribes encountered. Westward expansion and the American Civil War: politicians were forced to deal with the issue of slavery and its westward expansion as early as the Missouri Compromise. Historical context: since the first English settlers arrived at Jamestown in 1607, the story of America has been one of movement westward; as more and more Europeans came to our shores, colonists spread further and further into what was called the frontier, defined as an area of unsettled land.
An analysis of the causes and effects of American westward expansion and of the American Civil War. USII.1: the student will demonstrate skills for historical and geographical analysis, including the major causes and effects of westward expansion. United States history, 1865 to the present: westward expansion and the skills to understand the major causes and effects of American involvement. Spanish-American War document analysis timeline: learn about the causes and effects of westward expansion; features interactive maps and scrollable timelines.
Overview of westward expansion: causes and ramifications. We have seen how Americans rapidly moved across the North American continent under the banner of manifest destiny, seeking to guarantee westward expansion. Viewing America's westward expansion through art, within the realm of American westward expansion and its causes and effects. Westward expansion was inspired by the heroic image of yeoman farmers; both American citizens and new immigrants were moving west. Provides some analysis of how western expansion contributed to sectional tension (AP US History 2012, Q3). Start studying causes and effects of westward movement: learn vocabulary, terms, and more with flashcards, games, and other study tools. The Empire of the Summer Moon community note includes chapter-by-chapter summary and analysis of the westward expansion of the American frontier. Manifest destiny and slavery: after 1820 the issue was pushed to the sidelines until further westward expansion reopened it.
Interpreting advertisements for westward expansion (analysis): Native American, pioneer, and Chinese railroad perspectives; what were the causes and effects? National expansion and the westward movement: documents that present examples of both the perception and the reality of American westward expansion. Westward expansion inventions lesson plans: practice in-depth primary source analysis on the railroad in westward expansion and on Native Americans.
A short biography describes the subject's life, times, and work, and also explains the historical and literary context that influenced westward expansion (1807-1912). Facts, information and articles about manifest destiny, an event of westward expansion from the Wild West. Manifest destiny summary: in the 19th-century US, manifest destiny was a widely held belief that the destiny of American settlers was to expand and move across the continent to spread their traditions. Political and economic causes of the American Civil War and of the expansion of the American empire.
The westward expansion did not affect only the United States, considered here from the perspective of economic analysis. American economic imperialism and the Spanish-American War: a military victory for the United States; President McKinley attempted to avoid war for fear of disrupting the booming American economy. What caused westward expansion? Conditions in the East had little effect on westward expansion, migration, and the redistribution of the American population. The transcontinental railroad and westward expansion thesis: the transcontinental railroad greatly increased westward expansion in the United States of America during the latter half of the nineteenth century. |
by Mrs E Teaches Math
8th – 11th Grade
In this lesson, students will use the converse of the Pythagorean Theorem to classify triangles by sides. A brief review of the Triangle Inequality is also included. An example is included using coordinate geometry.
• Guided Notes
• Answer Keys
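The classification rule the lesson is built around can be summarized in a few lines: with c the longest side, compare c² with a² + b² after checking the Triangle Inequality. The Python sketch below is only an illustration of the idea, not part of the purchased materials.
def classify_triangle(a, b, c):
    # Classify a triangle by side lengths using the converse of the Pythagorean Theorem.
    a, b, c = sorted((a, b, c))                  # make c the longest side
    if a + b <= c:
        return "not a triangle (fails the Triangle Inequality)"
    if a**2 + b**2 == c**2:
        return "right"
    return "acute" if a**2 + b**2 > c**2 else "obtuse"
print(classify_triangle(3, 4, 5))   # right
print(classify_triangle(4, 5, 6))   # acute
print(classify_triangle(3, 4, 6))   # obtuse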
You may also be interested in:
Pythagorean Theorem Converse Foldable
Classifying Triangles Card Sort (using the Pythagorean Theorem Converse)
Pythagorean Theorem Word Problems Task Cards
Pythagorean Theorem Word Problems Coloring Worksheet
Be the first to know about my new products, freebies, and discounts!
Look for the green star near the top of any page within my store and click it to become a follower. You will then receive customized email updates about my store.
If you have any questions or comments please email me at email@example.com
This purchase is for one teacher only.
Purchasing this product grants permission for use by one teacher in his or her own classroom. This item is bound by copyright laws and redistributing, editing, selling, or posting this item (or any part thereof) on the Internet are all strictly forbidden. If you wish to share with colleagues, please purchase additional licenses.
©2015 Mrs. E Teaches Math
|
1. In the United States, how many central banks are there?
2. In note 5, we mention a measure of the money supply called “M2.” There are other measures of the money supply. For example, “M1” refers to currency and other assets that are immediately available for spending purposes. Find the most recent measure of the stocks of M1 and M2 for the United States.
3. Calculate the velocity of money for a country other than the United States.
4. The chapter did not present data on other recent periods of high inflation in countries such as Argentina, Brazil, Israel, and others. Search the Internet to find data on the inflation experiences of these countries. Create a graph of the growth rates of inflation and money in one of these countries.
5. It might be that countries have high money growth and thus high inflation because these are the goals of their monetary authority. See whether you can find a monetary authority with a stated goal of high inflation. If not, then think about why countries experience inflation if that is not the objective of the monetary authority.
6. What countries are dollarized in the world economy? Try to find out how dollarization influenced the inflation rate in that country.
7. Try to find a statement of the objectives of the Central Bank of Argentina. Part of independence is the way in which the decision makers at the central bank are appointed. How are these appointments made in Argentina?
8. Go to the web page for the Reserve Bank of Australia to learn about inflation targeting. What is their inflation target? How is it determined? What happens if they miss the target? Compare this to the objective and policy decisions of the Fed in the United States. What other central banks follow an inflation-targeting rule?
9. Is monetary policy in the United States guided by an inflation target? Does the European Central Bank use an inflation target? |
Mathematics for Orbits: Ellipses, Parabolas, Hyperbolas
Preliminaries: Conic Sections
Ellipses, parabolas and hyperbolas can all be generated by cutting a cone with a plane (see diagrams, from Wikimedia Commons). Taking the cone to be $x^2 + y^2 = z^2$ and substituting the $z$ in that equation from the planar equation $\vec{r}\cdot\vec{p} = p^2$, where $\vec{p}$ is the vector perpendicular to the plane from the origin to the plane, gives a quadratic equation in $x, y$. This translates into a quadratic equation in the plane -- take the line of intersection of the cutting plane with the $x, y$ plane as the $y$ axis in both, then one is related to the other by a scaling of the $x$ coordinate. To identify the conic, diagonalize the form, and look at the coefficients of $x^2, y^2$. If they are the same sign, it is an ellipse; opposite, a hyperbola. The parabola is the exceptional case where one is zero, the other equates to a linear term.
It is instructive to see how an important property of the ellipse follows immediately from this construction. The slanting plane in the figure cuts the cone in an ellipse. Two spheres inside the cone, having circles of contact with the cone $C_1, C_2$, are adjusted in size so that they both just touch the plane, at points $F_1, F_2$ respectively.
It is easy to see that such spheres exist: for example, start with a tiny sphere inside the cone near the vertex, and gradually inflate it, keeping it spherical and touching the cone, until it touches the plane. Now consider a point $P$ on the ellipse. Draw two lines: one from $P$ to the point $F_1$ where the small sphere touches the plane, the other up the cone, aiming for the vertex, but stopping at the point of intersection with the circle $C_1$. Both these lines are tangents to the small sphere, and so have the same length. (The tangents to a sphere from a point outside it form a cone; they are all of equal length.) Now repeat with $F_2$ and $C_2$. We find that $PF_1 + PF_2$ equals the sum of the distances from $P$ to the circles $C_1$ and $C_2$ measured along the line through the vertex, and that sum -- the distance between the two circles along the cone's surface -- is the same for every point $P$ on the ellipse. So $PF_1 + PF_2$ is constant: $F_1, F_2$ are therefore evidently the foci of the ellipse.
Squashed Circles and Gardeners
The simplest nontrivial planetary orbit is a circle: $x^2 + y^2 = a^2$ is centered at the origin and has radius $a$. An ellipse is a circle scaled (squashed) in one direction, so an ellipse centered at the origin with semimajor axis $a$ and semiminor axis $b$ has equation
$$\frac{x^2}{a^2} + \frac{y^2}{b^2} = 1$$
in the standard notation, a circle of radius $a$ scaled by a factor $b/a$ in the $y$ direction. (It's usual to orient the larger axis along the $x$ axis.)
A circle can also be defined as the set of points which are the same distance $a$ from a given point, and an ellipse can be defined as the set of points such that the sum of the distances from two fixed points is a constant length $2a$ (which must obviously be greater than the distance between the two points!). This is sometimes called the gardener's definition: to set the outline of an elliptic flower bed in a lawn, a gardener would drive in two stakes, tie a loose string between them, then pull the string tight in all different directions to form the outline.
In the diagram, the stakes are at $F_1, F_2$, the red lines are the string, and $P$ is an arbitrary point on the ellipse.
is called the semimajor axis length , the semiminor axis, length .
are called the foci (plural of focus).
Notice first that the string has to be of length , because it must stretch along the major axis from to then back to , and for that configuration there's a double length of string along and a single length from to . But the length is the same as , so the total length of string is the same as the total length
Suppose now we put at. Since and the string has length, the length
We get a useful result by applying Pythagoras' theorem to the right triangle F₁OB: (F₁O)² = a² − b².
(We shall use this shortly.)
Evidently, for a circle, a = b and F₁O = 0: the two foci coincide at the center.
The eccentricity e of the ellipse is defined by e = F₁O/a = √(1 − (b/a)²).
Eccentric just means off center, this is how far the focus is off the center of the ellipse, as a fraction of the semimajor axis. The eccentricity of a circle is zero. The eccentricity of a long thin ellipse is just below one.
F₁ and F₂ on the diagram are called the foci of the ellipse (plural of focus) because if a point source of light is placed at F₁, and the ellipse is a mirror, it will reflect -- and therefore focus -- all the light to F₂.
Equivalence of the Two Definitions
We need to verify, of course, that this gardener's definition of the ellipse is equivalent to the squashed circle definition. From the diagram, with the foci at (±c, 0), the total string length is 2a = √((x + c)² + y²) + √((x − c)² + y²),
and squaring both sides of √((x + c)² + y²) = 2a − √((x − c)² + y²),
then rearranging to have the residual square root by itself on the left-hand side, then squaring again,
from which, using c² = a² − b², we find x²/a² + y²/b² = 1.
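As a quick sanity check of this equivalence, here is a small Python sketch (a and b are arbitrary example values, not taken from the text) that samples points on the squashed circle and confirms the two focal distances always add up to 2a.

```python
import math

# Points satisfying x^2/a^2 + y^2/b^2 = 1 should have r1 + r2 = 2a,
# where r1, r2 are the distances to the foci at (+-c, 0), c^2 = a^2 - b^2.
a, b = 5.0, 3.0
c = math.sqrt(a**2 - b**2)          # distance from the center to each focus

for k in range(8):
    t = 2 * math.pi * k / 8          # parametrize the squashed circle
    x, y = a * math.cos(t), b * math.sin(t)
    r1 = math.hypot(x - c, y)        # distance to one focus
    r2 = math.hypot(x + c, y)        # distance to the other focus
    assert abs((r1 + r2) - 2 * a) < 1e-12

print("string length r1 + r2 = 2a holds at all sampled points")
```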
Ellipse in Polar Coordinates
In fact, in analyzing planetary motion, it is more natural to take the origin of coordinates at the center of the Sun rather than the center of the elliptical orbit.
It is also more convenient to take (r, θ) coordinates instead of (x, y) coordinates, because the strength of the gravitational force depends only on r. Therefore, the relevant equation describing a planetary orbit is the (r, θ) equation with the origin at one focus; here we follow the standard usage and choose the origin at the focus occupied by the Sun.
For an ellipse of semimajor axis a and eccentricity e the equation is: a(1 − e²)/r = 1 + e cos θ.
This is also often written r = ℓ/(1 + e cos θ),
where ℓ = a(1 − e²) is the semi-latus rectum, the perpendicular distance from a focus to the curve (so at θ = π/2, r = ℓ); see the diagram below: but notice again that this equation has a focus, not the center, as its origin! (For θ = 0, r takes its minimum value a(1 − e); for θ = π, its maximum a(1 + e).)
(It's easy to prove, using Pythagoras' theorem, that ℓ = b²/a = a(1 − e²).)
The directrix: writing x = r cos θ with the focus as origin, the equation r = ℓ/(1 + e cos θ) for the ellipse can also be written as r = e(x₀ − x),
where x₀ = ℓ/e (the origin being the focus).
The line x = x₀ is called the directrix.
For any point on the ellipse, its distance from the focus is e times its distance from the directrix.
Deriving the Polar Equation from the Cartesian Equation
Note first that (following standard practice) the (x, y) and (r, θ) coordinates used here have different origins: the Cartesian origin is at the center of the ellipse, the polar origin at a focus!
Writing x = ae + r cos θ, y = r sin θ in the Cartesian equation gives (ae + r cos θ)²/a² + r² sin²θ/b² = 1,
that is, with slight rearrangement and using b² = a²(1 − e²), (1 − e² cos²θ) r² + 2ae(1 − e²) r cos θ − a²(1 − e²)² = 0.
This is a quadratic equation for r and can be solved in the usual fashion, but looking at the coefficients, it's evidently a little easier to solve the corresponding quadratic for u = 1/r.
The solution is: 1/r = (1 + e cos θ)/(a(1 − e²)), that is, r = a(1 − e²)/(1 + e cos θ),
where we drop the other root, 1/r = (e cos θ − 1)/(a(1 − e²)), because it gives negative r -- for example, for θ = π/2. This establishes the equivalence of the two equations.
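A numerical spot-check of the polar form is just as easy; the values of a and e below are arbitrary, and the focus is placed at (ae, 0) as in the substitution above.

```python
import math

# The polar form r = a(1 - e^2)/(1 + e*cos(theta)), taken about the focus at
# (a*e, 0), should reproduce points on the Cartesian ellipse
# x^2/a^2 + y^2/b^2 = 1 with b^2 = a^2 (1 - e^2).
a, e = 2.5, 0.6
b2 = a**2 * (1 - e**2)

for k in range(12):
    theta = 2 * math.pi * k / 12
    r = a * (1 - e**2) / (1 + e * math.cos(theta))
    x = a * e + r * math.cos(theta)   # shift back to the ellipse's center
    y = r * math.sin(theta)
    assert abs(x**2 / a**2 + y**2 / b2 - 1) < 1e-12

print("polar and Cartesian forms agree at all sampled angles")
```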
The parabola can be defined as the limiting curve of an ellipse as one focus (in the case we're examining, that would be the far focus) goes to infinity. The eccentricity evidently goes to one, e → 1, since the center of the ellipse has gone to infinity as well. The semi-latus rectum ℓ is still defined as the perpendicular distance from the focus to the curve, and the equation is ℓ/r = 1 + cos θ.
Note that this describes a parabola opening to the left. Taking the focus as the origin of (x, y) coordinates, the equation of this parabola is y² = ℓ² − 2ℓx.
All parabolas look the same, apart from scaling (maybe just in one direction).
The line perpendicular to the axis, at the same distance from the curve along the axis as the focus is, but outside the curve, is the parabola's directrix. That is, the directrix is the line x = ℓ.
Each point P on the curve is the same distance from the focus as it is from the directrix. This can be deduced from the limit of the ellipse property that the sum of the distances to the two foci is constant. Let's call the other focus F′. As F′ recedes to infinity, the lines from points of the curve to F′ become parallel to the axis, so the constant-sum property becomes: PF plus the distance from P to a fixed line perpendicular to the axis is constant. So, from the diagram, PF = PD, where PD is the distance from P to the directrix.
Exercises: 1. Prove by finding the slope, etc., that any ray of light emitted by a point lamp at the focus will be reflected by a parabolic mirror to go out parallel to the axis.
2. From the diagram above, show that the equality PF = PD easily gives the equation for the parabola, both in (r, θ) and in (x, y) coordinates.
The hyperbola has eccentricity e > 1. In Cartesian coordinates, it has equation x²/a² − y²/b² = 1
and has two branches, both going to infinity and approaching the asymptotes y = ±(b/a)x. The curve intersects the x axis at x = ±a, the foci are at x = ±ae (so that now a²e² = a² + b²), and for any point P on the curve, PF₁ − PF₂ = ±2a,
the sign being opposite for the two branches.
The semi-latus rectum, as for the earlier conics, is the perpendicular distance from a focus to the curve, and is ℓ = b²/a = a(e² − 1). Each focus has an associated directrix; the distance of a point on the curve from the directrix multiplied by the eccentricity gives its distance from the focus.
The (r, θ) equation with respect to a focus can be found by substituting x = ae + r cos θ, y = r sin θ (for the right-hand focus) in the Cartesian equation and solving the resulting quadratic for u = 1/r.
Notice that θ has a limited range: the equation for the right-hand curve with respect to its own focus has r → ∞ along the asymptote directions, where cos θ → 1/e.
The equation for this curve is ℓ/r = 1 − e cos θ,
in the range cos θ < 1/e (θ measured from the positive x direction).
This equation comes up with various signs! The left-hand curve, with respect to the left-hand focus, would have a positive sign on the cosine term, ℓ/r = 1 + e cos θ. With origin at the left-hand focus, the equation of the right-hand curve is ℓ/r = −1 + e cos θ,
and finally, with the origin at the right-hand focus, the left-hand curve is ℓ/r = −1 − e cos θ.
These last two describe repulsive inverse square scattering (Rutherford).
Note: A Useful Result for Rutherford Scattering
If we define the hyperbola by x²/a² − y²/b² = 1,
then the perpendicular distance from a focus to an asymptote is just b.
This is the same b, including scale, as appears in the equation just given.
Proof: the triangle formed by the center, a focus, and the foot of the perpendicular from that focus to an asymptote is similar to the right triangle with legs a, b and hypotenuse √(a² + b²), so the perpendicular distance stands to the center-to-focus distance in the ratio b : √(a² + b²); and since the center-to-focus distance is itself √(a² + b²), the perpendicular distance is exactly b.
I find this a surprising result because in analyzing Rutherford scattering (and other scattering) the impact parameter, the distance of the ingoing particle path from a parallel line through the scattering center, is denoted by b. Surely this can't be a coincidence? But I can't find anywhere that this was the original motivation for the notation. |
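For what it's worth, the stated distance is easy to confirm numerically; the a and b below are arbitrary example values.

```python
import math

# For the hyperbola x^2/a^2 - y^2/b^2 = 1, the perpendicular distance from
# the focus (c, 0), c = sqrt(a^2 + b^2), to the asymptote b*x - a*y = 0
# should equal b exactly.
a, b = 3.0, 1.7
c = math.sqrt(a**2 + b**2)

# distance from the point (c, 0) to the line b*x - a*y = 0
dist = abs(b * c - a * 0.0) / math.hypot(b, a)
print(dist, b)                 # both print 1.7 (up to rounding)
assert abs(dist - b) < 1e-12
```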
In nuclear engineering, a delayed neutron is a neutron emitted after a nuclear fission event, by one of the fission products (or actually, a fission product daughter after beta decay), any time from a few milliseconds to a few minutes after the fission event. Neutrons born within about 10⁻¹⁴ seconds of the fission are termed "prompt neutrons".
In a nuclear reactor large nuclides fission into two neutron-rich fission products (i.e. unstable nuclides). Many of these fission products then undergo radioactive decay (usually beta decay) and the resulting nuclides are left in an excited state. These usually undergo gamma decay immediately, but a small fraction of them are excited enough to decay by emitting a neutron in addition. The beta decay of the precursor nuclides - which are the precursors of the delayed neutrons - happens orders of magnitude later than the emission of the prompt neutrons. Hence the neutron that originates from the precursor's decay is termed a delayed neutron. The "delay" in the neutron emission is thus due to the delay in beta decay, since the neutron emission, like gamma emission, happens almost immediately after the beta decay. The various half-lives of these decays that finally result in neutron emission are therefore the beta-decay half-lives of the precursor radionuclides.
Delayed neutrons play an important role in nuclear reactor control and safety analysis.
Delayed neutrons are associated with the beta decay of the fission products. After prompt fission neutron emission the residual fragments are still neutron rich and undergo a beta decay chain. The more neutron rich the fragment, the more energetic and faster the beta decay. In some cases the available energy in the beta decay is high enough to leave the residual nucleus in such a highly excited state that neutron emission instead of gamma emission occurs.
Using U-235 as an example, this nucleus absorbs a thermal neutron, and the immediate mass products of the fission event are two large fission fragments, which are remnants of the U-236 nucleus that was formed. These fragments emit, on average, two or three free neutrons (2.47 on average), called "prompt" neutrons. A fission fragment occasionally undergoes a stage of radioactive decay (a beta-minus decay of a so-called precursor nucleus) that yields a new nucleus in an excited state, which emits an additional neutron, called a "delayed" neutron, to get to its ground state. These neutron-emitting fission fragments are called delayed-neutron precursor atoms.
Delayed Neutron Data for Thermal Fission in U-235
| Group | Half-Life (s) | Decay Constant (s⁻¹) | Energy (keV) | Yield (neutrons per fission) | Fraction |
Importance in nuclear fission basic research
The standard deviation of the final kinetic-energy distribution, as a function of the mass of the final fragments from low-energy fission of uranium-234 and uranium-236, presents one peak in the light-fragment mass region and another in the heavy-fragment mass region. Monte Carlo simulation of these experiments suggests that those peaks are produced by prompt neutron emission. This effect of prompt neutron emission makes it impossible to obtain the primary mass and kinetic-energy distributions directly, which are important for studying fission dynamics from the saddle point to the scission point.
Importance in nuclear reactors
If a nuclear reactor happened to be prompt critical - even very slightly - the number of neutrons would increase exponentially at a high rate, and very quickly the reactor would become uncontrollable by any external control system. Control of the power rise would then be left to the reactor's intrinsic physical stability factors, like the thermal expansion of the core or the increased resonance absorption of neutrons, which usually tend to decrease the reactor's reactivity when the temperature rises; but the reactor would run the risk of being damaged or destroyed by heat.
However, thanks to the delayed neutrons, it is possible to leave the reactor in a subcritical state as far as only prompt neutrons are concerned: the delayed neutrons come a moment later, just in time to sustain the chain reaction when it is about to die out. In that regime, overall neutron production still grows exponentially, but on a time scale that is governed by the delayed-neutron production, which is slow enough to be controlled (just as an otherwise unstable bicycle can be balanced because human reflexes are quick enough on the time scale of its instability). Thus, by widening the margin between non-operation and prompt supercriticality and allowing more time to regulate the reactor, the delayed neutrons are essential to inherent reactor safety, even in reactors requiring active control.
The factor β is defined as: β = (number of delayed neutrons produced per fission) / (total number of neutrons produced per fission),
and it is equal to 0.0064 for U-235.
The delayed neutron fraction (DNF) is defined as: DNF = (number of delayed neutrons in the reactor at a given moment) / (total number of neutrons in the reactor at that moment).
These two factors, β and DNF, are not the same thing in case of a rapid change in the number of neutrons in the reactor.
Another concept is the effective fraction of delayed neutrons, which is the fraction of delayed neutrons weighted (over space, energy, and angle) by the adjoint neutron flux. This concept arises because delayed neutrons are emitted with a softer (lower-energy) spectrum than prompt neutrons. For low-enriched uranium fuel operating on a thermal neutron spectrum, the difference between the average and effective delayed neutron fractions can reach 50 pcm.
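To see concretely why the delayed neutrons make the chain reaction controllable, here is a minimal one-delayed-group point-kinetics sketch in Python. The β value is the U-235 figure quoted above; the generation time Λ, the precursor decay constant λ, and the inserted reactivity ρ are illustrative assumptions, not values from the text.

```python
# One-delayed-group point kinetics (illustrative sketch):
#   dn/dt = (rho - beta)/Lambda * n + lam * C
#   dC/dt = beta/Lambda * n - lam * C
beta = 0.0064      # delayed neutron fraction for U-235 (quoted above)
Lambda = 1.0e-4    # prompt neutron generation time [s]      (assumed)
lam = 0.08         # effective precursor decay constant [1/s] (assumed)
rho = 0.001        # inserted reactivity, well below beta     (assumed)

def evolve(delayed=True, t_end=10.0, dt=1.0e-4):
    """Forward-Euler integration of the two point-kinetics equations."""
    b = beta if delayed else 0.0
    n, C = 1.0, b / (lam * Lambda)          # start with precursors in equilibrium
    for _ in range(int(t_end / dt)):
        dn = ((rho - b) / Lambda * n + lam * C) * dt
        dC = (b / Lambda * n - lam * C) * dt
        n, C = n + dn, C + dC
    return n

print("with delayed neutrons, n(10 s)/n(0) = %.2f" % evolve(True))
print("prompt only,           n(1 s)/n(0)  = %.1e" % evolve(False, t_end=1.0))
```

With the delayed group included, the same reactivity insertion raises the neutron population by only a few tens of percent over ten seconds; with β set to zero it grows by several orders of magnitude within one second, which is the difference between a controllable and an uncontrollable reactor.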
- Lamarsh, Introduction to Nuclear Engineering
- R. Brissot, J. P. Boucquet, J. Crançon, C. R. Guet, H. A. Nifenecker and M. Montoya, "Kinetic-Energy Distribution for Symmetric Fission of 235U", Proc. of a Symp. on Phys. and Chem. of Fission, IAEA, Vienna, 1980 (1979)
- M. Montoya, E. Saettone, J. Rojas, "Effects of Neutron Emission on Fragment Mass and Kinetic Energy Distribution from Thermal Neutron-Induced Fission of 235U", AIP Conference Proceedings, Vol. 947, October 2007, pp. 326-329, doi:10.1063/1.2813826
- M. Montoya, E. Saettone, J. Rojas, "Monte Carlo Simulation for fragment mass and kinetic energy distribution from neutron-induced fission of U 235" , Revista Mexicana de Física 53 (5) 366-370, oct 2007
- M. Montoya, J. Rojas, I. Lobato, "Neutron emission effects on final fragments mass and kinetic energy distribution from low energy fission of U 234", Revista Mexicana de Física, 54(6) dic 2008
- Analyses on the Average and Effective Delayed Neutron Fractions of YALINA-Thermal Subcritical Assembly |
Dynamics of a System of Particles
As we discussed in Chapter 9, it is often useful to separate the motion of a system into the motion of its center of mass and the motion of its components relative to the center of mass. The definition of the position of the center of mass for a multi-particle system (see Figure 1) is similar to its definition for a two-body system: R = (1/M) Σ mₐrₐ, where M is the total mass.
Figure 1. The location of the center of mass of a multi-particle system.
If the mass distribution is a continuous distribution, the summation is replaced by an integration: R = (1/M) ∫ r dm.
Example: Problem 9.1
Find the center of mass of a hemispherical shell of constant density and inner radius r1 and outer radius r2.
Put the shell in the z > 0 region, with the base in the x-y plane. By symmetry, the x and y coordinates of the center of mass are zero.
To find the z coordinate of the center-of-mass we divide the shell into thin slices, parallel to the xy plane.
Using z = r cos θ and doing the integrals gives z_cm = 3(r2⁴ − r1⁴) / [8(r2³ − r1³)].
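As a cross-check of that result, the following sketch (arbitrary radii, plain midpoint-rule integration in spherical coordinates) integrates z over the shell numerically and compares with the closed-form expression.

```python
import math

# Numerical check of z_cm = 3 (r2^4 - r1^4) / (8 (r2^3 - r1^3)) for a uniform
# hemispherical shell; r1, r2 are arbitrary example radii.
r1, r2 = 1.0, 2.0
N = 400

num = den = 0.0
dr = (r2 - r1) / N
dth = (math.pi / 2) / N
for i in range(N):                     # radius, midpoint rule
    r = r1 + (i + 0.5) * dr
    for j in range(N):                 # polar angle 0..pi/2 (upper hemisphere)
        th = (j + 0.5) * dth
        dV = 2 * math.pi * r**2 * math.sin(th) * dr * dth
        num += (r * math.cos(th)) * dV     # z * dV
        den += dV

print("numerical z_cm =", num / den)
print("analytic  z_cm =", 3 * (r2**4 - r1**4) / (8 * (r2**3 - r1**3)))
```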
Consider a system of particles, of total mass M, exposed to internal and external forces. The linear momentum for this system is defined as P = Σ mₐ(drₐ/dt) = M(dR/dt).
The change in the linear momentum of the system can be expressed in terms of the forces acting on all the particles that make up the system: dP/dt = Σ Fₐ. Since the internal forces cancel in pairs, only the net external force survives, dP/dt = F_ext.
We see that the linear momentum is constant if the net external force acting on the system is 0 N. If there is an external force acting on the system, the component of the linear momentum in the direction of the net external force is not conserved, but the components in the directions perpendicular to the direction of the net external force are conserved.
We conclude that the linear momentum of the system has the following properties:
1. The center of mass of a system moves as if it were a single particle with a mass equal to the total mass of the system, M, acted on by the total external force, and independent of the nature of the internal forces.
2. The linear momentum of a system of particles is the same as that of a single particle of mass M, located at the position of the center of mass, and moving in the manner the center of mass is moving.
3. The total linear momentum for a system free of external forces is constant and equal to the linear momentum of the center of mass.
Consider a system of particles that are distributed as shown in Figure 2. We can specify the location of the center of mass of this system by specifying the vector R. This position vector may be time dependent. The location of each component of this system can be specified by either specifying the position vector, ra, with respect to the origin of the coordinate system, or by specifying the position of the component with respect to the center of mass, ra'.
Figure 2. Coordinate system used to describe a system of particles.
The angular momentum of this system with respect to the origin of the coordinate system is equal to L = Σ rₐ × pₐ.
Writing rₐ = R + rₐ' and using Σ mₐrₐ' = 0, we can rewrite the expression for the angular momentum as L = R × P + Σ rₐ' × pₐ'.
The angular momentum is thus equal to sum of the angular momentum of the center of mass and the angular momentum of the system with respect to the center of mass.
The rate of change in the angular momentum of the system can be determined by using the following relation: dL/dt = Σ rₐ × (dpₐ/dt) = Σ rₐ × Fₐ.
For the system we are currently discussing we can thus conclude that dL/dt = Σ rₐ × Fₐ^ext, since the contributions of the internal forces cancel in pairs.
The last step in this derivation is only correct if the internal force between particles i and j is parallel or anti-parallel to the relative position vector, but this was one of the two assumptions we made about the internal forces at the beginning of this Chapter. Since the vector product between the position vector and the force vector is the torque N associated with this force, we can rewrite the rate of change of the angular momentum of the system as dL/dt = Σ Nₐ^ext = N_ext.
We conclude that the angular momentum of the system has the following properties:
1. The total angular momentum about an origin is the sum of the angular momentum of the center of mass about that origin and the angular momentum of the system about the position of the center of mass.
2. If the net resultant torques about a given axis vanish, then the total angular momentum of the system about that axis remained constant in time.
3. The total internal torque must vanish if the internal forces are central, and the angular momentum of an isolated system can not be altered without the application of external forces.
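Property 1 is easy to verify numerically. The sketch below builds a small random system (arbitrary data) and compares the total angular momentum with the sum of the center-of-mass term R × P and the term computed about the center of mass.

```python
import random

# Check: L_total = R x P + sum_a r'_a x p'_a, primes relative to the CM.
random.seed(0)

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

def add(a, b):
    return tuple(x + y for x, y in zip(a, b))

def scale(s, a):
    return tuple(s * x for x in a)

m = [random.uniform(1, 3) for _ in range(5)]
r = [tuple(random.uniform(-1, 1) for _ in range(3)) for _ in range(5)]
v = [tuple(random.uniform(-1, 1) for _ in range(3)) for _ in range(5)]
M = sum(m)
R = scale(1 / M, [sum(mi * ri[k] for mi, ri in zip(m, r)) for k in range(3)])
V = scale(1 / M, [sum(mi * vi[k] for mi, vi in zip(m, v)) for k in range(3)])

L_total = L_about_cm = (0.0, 0.0, 0.0)
for mi, ri, vi in zip(m, r, v):
    L_total = add(L_total, cross(ri, scale(mi, vi)))
    rp = tuple(a - b for a, b in zip(ri, R))   # position relative to the CM
    vp = tuple(a - b for a, b in zip(vi, V))   # velocity relative to the CM
    L_about_cm = add(L_about_cm, cross(rp, scale(mi, vp)))

print(L_total)
print(add(cross(R, scale(M, V)), L_about_cm))  # R x P plus L about CM: matches
```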
Example: Problem 9.13
Even though the total force on a system of particles is zero, the net torque may not be zero. Show that the net torque has the same value in any coordinate system.
The total force acting on the system can be rewritten in terms of the external and internal forces: F = Σᵢ Fᵢ, where Fᵢ includes both the external force on particle i and the internal forces exerted on it by the other particles.
The problem states that the total force is equal to zero.
Now consider two coordinate systems with origins at 0 and 0′,
where a is the vector from 0 to 0′,
rᵢ is the position vector of mᵢ in 0,
and rᵢ′ is the position vector of mᵢ in 0′.
We see that rᵢ = rᵢ′ + a. The torque with respect to 0 is given by N = Σᵢ rᵢ × Fᵢ.
The torque with respect to 0′ is equal to N′ = Σᵢ rᵢ′ × Fᵢ = Σᵢ (rᵢ − a) × Fᵢ = Σᵢ rᵢ × Fᵢ − a × Σᵢ Fᵢ.
But, since it is given that Σᵢ Fᵢ = 0,
we conclude that N′ = N: the net torque has the same value in both coordinate systems.
The total energy of a system of particles is equal to the sum of the kinetic and the potential energy.
The kinetic energy of the system is equal to the sum of the kinetic energy of each of the components. The kinetic energy of particle i can either be expressed in terms of its velocity with respect to the origin of the coordinate system, or in terms of its velocity with respect to the center of mass: vᵢ = V + vᵢ′.
The kinetic energy of the system is thus equal to T = Σᵢ ½mᵢ(V + vᵢ′)² = ½MV² + V·Σᵢ mᵢvᵢ′ + Σᵢ ½mᵢvᵢ′².
Based on the definition of the position of the center of mass, Σᵢ mᵢrᵢ′ = 0,
we conclude that Σᵢ mᵢvᵢ′ = 0.
The kinetic energy of the system is thus equal to T = ½MV² + Σᵢ ½mᵢvᵢ′².
The change in the potential energy of the system when it moves from a configuration 1 to a configuration 2 is related to the work done by the forces acting on the system:
If we make the assumption that the forces, both internal and external, are derivable from potential functions, we can rewrite this expression as
The first term on the right-hand side can be evaluated easily:
The second term can be rewritten as
Here we have used the fact that the internal forces between i and j satisfy the relation Fⱼᵢ = −Fᵢⱼ.
The integral can be evaluated easily:
The total potential energy of the system U is defined as the sum of the internal and the external potential energy and is equal to
The work done by all the forces to make the transition from configuration 1 to configuration 2 is
Using the work-energy theorem we can conclude that
We thus see that the total energy is conserved. If the system of particles is a rigid object, the components of the system will retain their relative positions, and the internal potential energy of the system will remain constant.
We conclude that the total energy of the system has the following properties:
1. The total kinetic energy of the system is equal to the sum of the kinetic energy of a particle of mass M moving with the velocity of the center of mass and the kinetic energy of the motion of the individual particles relative to the center of mass.
2. The total energy for a conservative system is constant.
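Property 1 of this list can likewise be spot-checked numerically; the one-dimensional random data below are arbitrary.

```python
import random

# Check: T = (1/2) M V^2 + sum_i (1/2) m_i v'_i^2, with v'_i relative to the CM.
random.seed(1)
m = [random.uniform(1, 4) for _ in range(6)]
v = [random.uniform(-2, 2) for _ in range(6)]
M = sum(m)
V = sum(mi * vi for mi, vi in zip(m, v)) / M    # center-of-mass velocity

T_direct = sum(0.5 * mi * vi**2 for mi, vi in zip(m, v))
T_split = 0.5 * M * V**2 + sum(0.5 * mi * (vi - V)**2 for mi, vi in zip(m, v))
print(T_direct, T_split)   # identical up to rounding
```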
Example - Problem 9.21
A flexible rope of length 1.0 m slides from a frictionless table top as shown in Figure 3. The rope is initially released from rest with 30 cm hanging over the edge of the table. Find the time at which the left end of the rope reaches the edge of the table.
Figure 3. Problem 9.21.
Let us call x the length of rope hanging over the edge of the table, and L the total length of the rope. The equation of motion is d²x/dt² = (g/L) x.
Let us look for a solution of the form x(t) = A e^(γt) + B e^(−γt).
Putting this into the equation of motion, we find γ = √(g/L).
The initial conditions are x(0) = x₀ = 0.3 m and dx/dt(0) = 0.
From these we find that A = B = x₀/2.
We thus conclude that x(t) = x₀ cosh(√(g/L) t).
When x = L, the left end of the rope reaches the edge of the table, and the corresponding time is t = √(L/g) cosh⁻¹(L/x₀).
To verify our calculations, let's make sure that energy is conserved. Assume the rope has a total mass M. We choose our coordinate system such that the vertical position coordinate on the surface of the table is 0; we also choose the surface of the table to be the plane in which the gravitational potential energy is equal to 0. To determine the change in the potential energy of the rope, we examine the change in the vertical position of its center of mass: when a length x hangs over the edge, that piece has mass Mx/L and its center of mass sits at height −x/2.
The change in the potential energy between x = x₀ and x = L is thus equal to ΔU = −MgL/2 + Mgx₀²/(2L) = −Mg(L² − x₀²)/(2L).
Note: since the rope does not stretch, there is no change in the potential energy associated with the internal forces. To determine the change in the kinetic energy of the system, we need to determine the change in the velocity of the center of mass. The system is initially at rest, and the initial velocity of the center of mass is thus 0 (and so is its kinetic energy). The velocity of the system at the time the left end of the rope reaches the edge of the table can be found from the equations of motion: dx/dt = x₀ √(g/L) sinh(√(g/L) t).
When the rope reaches the edge of the table, cosh(√(g/L) t) = L/x₀, so the velocity of the center of mass is equal to v = x₀ √(g/L) √(L²/x₀² − 1).
This equation can be rewritten as v² = g(L² − x₀²)/L.
The change in the kinetic energy of the system is thus equal to ΔK = ½Mv² = Mg(L² − x₀²)/(2L), which exactly cancels ΔU, confirming that energy is conserved.
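The numbers in this example are easy to reproduce; the sketch below assumes g = 9.8 m/s² (not stated in the text) and takes M = 1 kg, which drops out of the energy balance.

```python
import math

# Sliding-rope example: x(t) = x0*cosh(sqrt(g/L)*t), as derived above.
g, L, x0, M = 9.8, 1.0, 0.30, 1.0      # g and M are assumed values
w = math.sqrt(g / L)

t_edge = math.acosh(L / x0) / w                   # time when x = L
v_edge = x0 * w * math.sinh(w * t_edge)           # dx/dt at that moment
print("time for the rope to leave the table: %.2f s" % t_edge)
print("speed at that moment:                 %.2f m/s" % v_edge)

# Energy check: U(x) = -M*g*x^2/(2*L) for a hanging length x (table top = 0).
dU = (-M * g * L**2 / (2 * L)) - (-M * g * x0**2 / (2 * L))
dK = 0.5 * M * v_edge**2 - 0.0
print("Delta K + Delta U = %.2e J (should vanish)" % (dK + dU))
```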
Elastic and Inelastic Collisions
When two particles interact, the outcome of the interaction will be governed by the force law that describes the interaction. Consider an interaction force Fint that acts on a particle. The result of the interaction will be a change in the momentum of the particle since
If the interaction occurs over a short period of time, we expect a change in the linear momentum of the particle: Δp = ∫ Fint dt.
This relation shows us that if we know the force we can predict the change in the linear momentum, or if we measure the change in the linear momentum we can extract information about the force. We note that the change in the linear momentum provides us with information about the time integral of the force, not the force itself. Due to the importance of the time integral, it has received its own name, and is called the impulse P: P = ∫ Fint dt = Δp.
If we consider the effect of the interaction force on both particles, we conclude that the change in the total linear momentum is 0: Δp₁ + Δp₂ = 0.
This of course should be no surprise since when we consider both particles, the interaction force becomes an internal force and in the absence of external forces, linear momentum will be conserved.
The conservation of linear momentum is an important conservation law that restricts the possible outcomes of a collision. No matter what the nature of the collision is, if the initial linear momentum is non-zero, the final linear momentum will also be non-zero, and the system can not be brought to rest as a result of the collision. If the system is at rest after the collision, its linear momentum is zero, and the initial linear momentum must therefore also be equal to zero. Note that a zero linear momentum does not imply that all components of the system will be at rest; it only requires that the two objects have linear momenta that are equal in magnitude but directed in opposite directions.
The most convenient way to look at collisions is in the center-of-mass frame. In the center-of-mass frame, the total linear momentum is equal to zero, and the objects will always travel in a co-linear fashion. This is illustrated in Figure 4.
Figure 4. Two-dimensional collisions in the laboratory frame (left) and the center-of-mass frame (right).
We frequently divide collisions into two distinct groups:
· Elastic collisions: collisions in which the total kinetic energy of the system is conserved. The kinetic energy of the objects will change as a result of the interaction, but the total kinetic energy will remain constant. The kinetic energy of one of the objects in general is a function of the masses of the two objects and the scattering angle.
· Inelastic collisions: collisions in which the total kinetic energy of the system is not conserved. A totally inelastic collision is a collision in which the two objects after the collision stick together. The loss in kinetic energy is usually expressed in terms of the Q value, where Q = Kf - Ki:
o Q < 0: endoergic collision (kinetic energy is lost)
o Q > 0: exoergic collision (kinetic energy is gained)
o Q = 0: elastic collision (kinetic energy is conserved).
In most inelastic collisions, a fraction of the initial kinetic energy is transformed into internal energy (for example in the form of heat, deformation, etc.).
Another parameter that is frequently used to quantify the inelasticity of an inelastic collision is the coefficient of restitution e: e = |u₂ − u₁| / |v₂ − v₁|,
where the u are the velocities after the collision and the v are the velocities before the collision. For a perfectly elastic collision e = 1, and for a totally inelastic collision e = 0.
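In one dimension, momentum conservation together with the restitution coefficient determines the final state completely, which connects to the counting argument in the next paragraphs. A minimal sketch with arbitrary example numbers:

```python
# 1D two-body collision: momentum conservation plus restitution eps fix the
# outgoing velocities (eps = 1 elastic, eps = 0 totally inelastic).
def collide_1d(m1, v1, m2, v2, eps):
    """Return (u1, u2) after the collision."""
    # momentum:    m1*u1 + m2*u2 = m1*v1 + m2*v2
    # restitution: u2 - u1 = -eps * (v2 - v1)
    u1 = (m1 * v1 + m2 * v2 + m2 * eps * (v2 - v1)) / (m1 + m2)
    u2 = (m1 * v1 + m2 * v2 + m1 * eps * (v1 - v2)) / (m1 + m2)
    return u1, u2

print(collide_1d(1.0, 2.0, 1.0, 0.0, eps=1.0))  # equal masses, elastic: velocities swap
print(collide_1d(1.0, 2.0, 1.0, 0.0, eps=0.0))  # totally inelastic: both move at 1.0
```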
One important issue we need to address when we focus on collisions is the issue of predictability. Let's consider what we know and what we need to know; we will assume that we are looking at a collision in the center-of-mass frame. Let's define the x axis to be the axis parallel to the direction of motion of the incident objects, and let's assume that the masses of the objects do not change. The unknown parameters are the velocities of the objects; for the n-dimensional case, there will be 2n unknowns. What do we know?
· Conservation of linear momentum: this conservation law provides us with n equations for the 2n unknowns.
· Conservation of energy: if the collision is elastic, this conservation law provides us with 1 additional equation for the 2n unknowns.
For elastic collisions we thus have n + 1 equations for 2n unknowns. We immediately see that only for n = 1 is the final state uniquely defined. For inelastic collisions we have n equations for 2n unknowns, and we conclude that even for n = 1 the final state is undefined. When the final state is undefined we need to know some of the final-state parameters in order to fix the others.
There are many applications of our collision theory. Consider one technique that can be used to study the composition of a target material. We use a beam of particles of known mass m1 and kinetic energy Tinitial to bombard the target material and measure the energy of the elastically scattered projectiles at 90°. The measured kinetic energy depends on the mass m2 of the target nucleus and is given by Tfinal = Tinitial (m2 − m1)/(m2 + m1).
By measuring the final kinetic energy we can thus determine the target mass. Note: we need to make sure that the object we detect at 90° is the projectile. An example of an application of this is shown in Figure 5.
Figure 5. Energy spectrum of the scattered projectiles at 90°.
Scattering Cross Section
We have learned a lot of properties of atoms and nuclei using elastic scattering of projectiles to probe the properties of the target elements. A schematic of the scattering process is shown in Figure 6. In this Figure, the incident particle is deflected (repelled) by the target particle. This situation will arise when we consider the scattering of positively charged nuclei.
Figure 6. Scattering of projectile nuclei from target nuclei.
The parameter b is called the impact parameter. The impact parameter is related to the angular momentum of the projectile with respect to the target nucleus.
When we study the scattering process we in general measure the intensity of the scattered particles as a function of the scattering angle. The intensity distribution is expressed in terms of the differential cross section, which is defined as dσ/dΩ = (dN/dΩ)/I: the number of particles scattered per unit time into the solid angle dΩ, divided by dΩ and by the incident intensity I (particles per unit area per unit time).
Figure 7. Correlation between impact parameter and scattering angle.
There is a one-to-one correlation between the impact parameter b and the scattering angle θ (see Figure 7). The one-to-one correlation is a direct effect of the conservation of angular momentum. Assuming that the number of incident particles is conserved, the flux of incident particles with an impact parameter between b and b + db, which is equal to dN = I 2πb db,
must be the same as the number of particles scattered into the cone that is specified by the angle θ and width dθ. The solid angle of this cone is equal to dΩ = 2π sin θ dθ.
The number of particles scattered into this cone will be dN = I (dσ/dΩ) 2π sin θ dθ; equating the two expressions gives I 2πb db = −I (dσ/dΩ) 2π sin θ dθ.
The minus sign is a result of the fact that if db > 0, then dθ < 0. We thus conclude that dσ/dΩ = −(b/sin θ)(db/dθ).
The scattering angle θ is related to the impact parameter b, and this relation can be obtained using the orbital motion we have discussed in this and in the previous Chapter:
The relation between the scattering angle θ and the impact parameter b depends on the potential U. For the important case of nuclear scattering, the potential varies as k/r. For this potential we can carry out the integration and determine the following correlation between the scattering angle and the impact parameter: b = (k/2T) cot(θ/2), where T is the kinetic energy of the projectile.
We can use this relation to calculate db/dθ and get the following differential cross section: dσ/dΩ = (k/4T)² / sin⁴(θ/2).
We conclude that the intensity of scattered projectile nuclei will decrease when the scattering angle increases. If the energy of projectile nuclei is low enough, the measured angular distribution will agree with the so-called Rutherford distribution over the entire angular range, as was first shown by Geiger and Marsden in 1913 (see Figure 8 left).
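The two relations just quoted are consistent with the general formula dσ/dΩ = (b/sin θ)|db/dθ|; a short numerical check (k and T set to 1 in arbitrary units) is below.

```python
import math

# Compare (b/sin(theta)) |db/dtheta|, with b(theta) = (k/2T) cot(theta/2),
# against the Rutherford formula (k/4T)^2 / sin^4(theta/2).
k, T = 1.0, 1.0
b = lambda th: (k / (2 * T)) / math.tan(th / 2)

for th in (0.5, 1.0, 2.0, 3.0):                  # scattering angles [rad]
    h = 1e-6
    dbdth = (b(th + h) - b(th - h)) / (2 * h)    # numerical derivative
    dsigma = (b(th) / math.sin(th)) * abs(dbdth)
    rutherford = (k / (4 * T))**2 / math.sin(th / 2)**4
    print("theta = %.1f rad:  %.6f  vs  %.6f" % (th, dsigma, rutherford))
```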
Each trajectory of the projectile can be characterized by its distance of closest approach, and there is a one-to-one correspondence between the scattering angle and this distance of closest approach. The smallest distance of closest approach occurs when the projectile is scattered backwards (θ = 180°). The distance of closest approach decreases with increasing incident energy, and the Rutherford formula indicates that the intensity should decrease as 1/T². This was indeed observed, up to a maximum incident energy, beyond which the intensity dropped off much more rapidly than predicted by the Rutherford formula (see Figure 8 right). At this point, the nuclei approach each other so closely that the strong attractive nuclear force starts to play a role, and the scattering is no longer elastic (the projectile nuclei may for example merge with the target nuclei).
Figure 8. Measurement of the scattering of alpha particles from target nuclei. Figures taken from http://hyperphysics.phy-astr.gsu.edu/hbase/nuclear/rutsca2.html.
In this Chapter we expand our discussion from the two-body systems discussed in Chapter 8 to systems that consist of many particles. In general, these particles are exposed to both external and internal forces. In our discussion in this Chapter we will make the following assumptions about the internal forces:
1) The forces exerted between any two particles are equal in magnitude and opposite in direction,
2) The forces exerted between any two particles are directed parallel or anti-parallel to the line joining the two particles.
These two requirements are fulfilled for many forces. However, there are important forces, such as the magnetic force, that do not satisfy the second assumption. |
Cherenkov radiation, also known as Vavilov–Cherenkov radiation,[a] is electromagnetic radiation emitted when a charged particle (such as an electron) passes through a dielectric medium at a speed greater than the phase velocity of light in that medium. The characteristic blue glow of an underwater nuclear reactor is due to Cherenkov radiation. It is named after Soviet scientist Pavel Alekseyevich Cherenkov, the 1958 Nobel Prize winner who was the first to detect it experimentally. A theory of this effect was later developed within the framework of Einstein's special relativity theory by Igor Tamm and Ilya Frank, who also shared the Nobel Prize. Cherenkov radiation had been theoretically predicted by the English polymath Oliver Heaviside in papers published in 1888–89.
- 1 Physical origin
- 2 Characteristics
- 3 Uses
- 4 In popular culture
- 5 Vacuum Cherenkov radiation
- 6 See also
- 7 Notes and references
- 8 External links
While electrodynamics holds that the speed of light in a vacuum is a universal constant (c), the speed at which light propagates in a material may be significantly less than c. For example, the speed of the propagation of light in water is only 0.75c. Matter can be accelerated beyond this speed (although still to less than c) during nuclear reactions and in particle accelerators. Cherenkov radiation results when a charged particle, most commonly an electron, travels through a dielectric (electrically polarizable) medium with a speed greater than that at which light propagates in the same medium.
Moreover, the velocity that must be exceeded is the phase velocity of light rather than the group velocity of light. The phase velocity can be altered dramatically by employing a periodic medium, and in that case one can even achieve Cherenkov radiation with no minimum particle velocity, a phenomenon known as the Smith-Purcell effect. In a more complex periodic medium, such as a photonic crystal, one can also obtain a variety of other anomalous Cherenkov effects, such as radiation in a backwards direction (whereas ordinary Cherenkov radiation forms an acute angle with the particle velocity).
As a charged particle travels, it disrupts the local electromagnetic field in its medium. In particular, the medium becomes electrically polarized by the particle's electric field. If the particle travels slowly then the disturbance elastically relaxes back to mechanical equilibrium as the particle passes. When the particle is traveling fast enough, however, the limited response speed of the medium means that a disturbance is left in the wake of the particle, and the energy contained in this disturbance radiates as a coherent shockwave.
A common analogy is the sonic boom of a supersonic aircraft or bullet. The sound waves generated by the supersonic body propagate at the speed of sound itself; as such, the waves travel slower than the speeding object and cannot propagate forward from the body, instead forming a shock front. In a similar way, a charged particle can generate a light shock wave as it travels through an insulator.
In the figure, the particle (red arrow) travels in a medium with speed v such that v > c/n, where c is the speed of light in vacuum and n is the refractive index of the medium. (If the medium is water, the condition is v > 0.75c, since n ≈ 1.33 for water at 20 °C.)
We define the ratio between the speed of the particle and the speed of light as β = v/c. The emitted light waves (blue arrows) travel at speed c/n.
The left corner of the triangle represents the location of the superluminal particle at some initial moment (t = 0). The right corner of the triangle is the location of the particle at some later time t. In the given time t, the particle travels the distance x_particle = βct,
whereas the emitted electromagnetic waves are constrained to travel the distance x_light = (c/n)t. The emission angle θ between the wavefront normal and the particle's track therefore satisfies cos θ = 1/(nβ).
Note that since this ratio is independent of time, one can take arbitrary times and achieve similar triangles. The angle stays the same, meaning that subsequent waves generated between the initial time t=0 and final time t will form similar triangles with coinciding right endpoints to the one shown.
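Putting the geometry into numbers: the sketch below evaluates cos θ = 1/(nβ) for water, taking n ≈ 1.33 as quoted above, and shows both the threshold speed and how the angle saturates near 41° as β approaches 1.

```python
import math

# Cherenkov geometry: cos(theta) = 1/(n*beta); emission only for beta > 1/n.
n = 1.33   # refractive index of water at 20 degrees C (value quoted above)

def cherenkov_angle_deg(beta):
    """Return the Cherenkov angle in degrees, or None below threshold."""
    if n * beta <= 1.0:
        return None
    return math.degrees(math.acos(1.0 / (n * beta)))

print("threshold speed: beta =", round(1.0 / n, 3))       # ~0.752
for beta in (0.70, 0.76, 0.90, 0.999):
    print("beta = %.3f -> angle:" % beta, cherenkov_angle_deg(beta))
```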
Reverse Cherenkov effect
A reverse Cherenkov effect can be observed using materials called negative-index metamaterials (materials with a subwavelength microstructure that gives them an effective "average" property very different from their constituent materials, in this case having negative permittivity and negative permeability). This means that when a charged particle (usually electrons) passes through a medium at a speed greater than the phase velocity of light in that medium, the particle will emit trailing radiation from its progress through the medium rather than in front of it (as is the case in normal materials with both permittivity and permeability positive). One can also obtain such reverse-cone Cherenkov radiation in non-metamaterial periodic media (where the periodic structure is on the same scale as the wavelength, so it cannot be treated as an effectively homogeneous metamaterial).
Arbitrary Cherenkov emission angle
Cherenkov radiation can also be made to radiate in an arbitrary direction using properly engineered one-dimensional metamaterials. These are designed to introduce a gradient of phase retardation along the trajectory of the fast-travelling particle, reversing or steering the Cherenkov emission at arbitrary angles given by a generalized form of the Cherenkov angle relation.
The frequency spectrum of Cherenkov radiation by a particle is given by the Frank–Tamm formula. Unlike fluorescence or emission spectra that have characteristic spectral peaks, Cherenkov radiation is continuous. Around the visible spectrum, the relative intensity per unit frequency is approximately proportional to the frequency. That is, higher frequencies (shorter wavelengths) are more intense in Cherenkov radiation. This is why visible Cherenkov radiation is observed to be brilliant blue. In fact, most Cherenkov radiation is in the ultraviolet spectrum—it is only with sufficiently accelerated charges that it even becomes visible; the sensitivity of the human eye peaks at green, and is very low in the violet portion of the spectrum.
There is a cut-off frequency above which the equation can no longer be satisfied. The refractive index varies with frequency (and hence with wavelength) in such a way that the intensity cannot continue to increase at ever shorter wavelengths, even for very relativistic particles (where v/c is close to 1). At X-ray frequencies, the refractive index becomes less than unity (note that in media the phase velocity may exceed c without violating relativity) and hence no X-ray emission (or shorter wavelength emissions such as gamma rays) would be observed. However, X-rays can be generated at special frequencies just below the frequencies corresponding to core electronic transitions in a material, as the index of refraction is often greater than 1 just below a resonant frequency (see Kramers-Kronig relation and anomalous dispersion).
As in sonic booms and bow shocks, the angle of the shock cone is directly related to the velocity of the disruption. The Cherenkov angle is zero at the threshold velocity for the emission of Cherenkov radiation. The angle takes on a maximum as the particle speed approaches the speed of light. Hence, observed angles of incidence can be used to compute the direction and speed of a Cherenkov radiation-producing charge.
Cherenkov radiation can be generated in the eye by charged particles hitting the vitreous humour, giving the impression of flashes, as in cosmic ray visual phenomena and possibly some observations of criticality accidents.
Detection of labelled biomolecules
Cherenkov radiation is widely used to facilitate the detection of small amounts and low concentrations of biomolecules. Radioactive atoms such as phosphorus-32 are readily introduced into biomolecules by enzymatic and synthetic means and subsequently may be easily detected in small quantities for the purpose of elucidating biological pathways and in characterizing the interaction of biological molecules such as affinity constants and dissociation rates.
Medical Imaging of Radioisotopes & External Beam Radiotherapy
More recently, Cherenkov light has been used to image substances in the body. These discoveries have led to intense interest in using this light signal to quantify and/or detect radiation in the body, either from internal sources such as injected radiopharmaceuticals or from external beam radiotherapy in oncology. Radioisotopes such as the beta emitters 32-P and 90-Y or the positron emitters 18-F and 13-N have measurable Cherenkov emission, and the isotopes 18-F and 131-I have been imaged in humans to demonstrate diagnostic value. External beam radiation therapy has been shown to induce a substantial amount of Cherenkov light in the tissue being treated, due to the photon beam energies used, in the 6 MV to 18 MV range. The secondary electrons induced by these high-energy x-rays result in the Cherenkov light emission, and the detected signal can be imaged at the entry and exit surfaces of the tissue.
Cherenkov radiation is used to detect high-energy charged particles. In pool-type nuclear reactors, beta particles (high-energy electrons) are released as the fission products decay. The glow continues after the chain reaction stops, dimming as the shorter-lived products decay. Similarly, Cherenkov radiation can characterize the remaining radioactivity of spent fuel rods.
When a high-energy (TeV) gamma photon or cosmic ray interacts with the Earth's atmosphere, it may produce an electron-positron pair with enormous velocities. The Cherenkov radiation emitted in the atmosphere by these charged particles is used to determine the direction and energy of the cosmic ray or gamma ray, which is used for example in the Imaging Atmospheric Cherenkov Technique (IACT), by experiments such as VERITAS, H.E.S.S., MAGIC. Cherenkov radiation emitted in tanks filled with water by those charged particles reaching earth is used for the same goal by the Extensive Air Shower experiment HAWC, the Pierre Auger Observatory and other projects. Similar methods are used in very large neutrino detectors, such as the Super-Kamiokande, the Sudbury Neutrino Observatory (SNO) and IceCube. Other projects operated in the past applying related techniques, such as STACEE, a former solar tower refurbished to work as a non-imaging Cherenkov observatory, which was located in New Mexico.
Astrophysics observatories using the Cherenkov technique to measure air showers are key to determine the properties of astronomical objects that emit Very High Energy gamma rays, such as supernova remnants and blazars.
Particle physics experiments
Cherenkov radiation is commonly used in experimental particle physics for particle identification. One could measure (or put limits on) the velocity of an electrically charged elementary particle by the properties of the Cherenkov light it emits in a certain medium. If the momentum of the particle is measured independently, one could compute the mass of the particle by its momentum and velocity (see four-momentum), and hence identify the particle.
The simplest type of particle identification device based on a Cherenkov radiation technique is the threshold counter, which gives an answer as to whether the velocity of a charged particle is lower or higher than a certain value (v_threshold = c/n, where c is the speed of light and n is the refractive index of the medium) by looking at whether this particle does or does not emit Cherenkov light in a certain medium. Knowing the particle momentum, one can separate particles lighter than a certain threshold from those heavier than the threshold.
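As an illustration of the threshold idea, the sketch below computes β = p/√(p² + m²) (GeV units, c = 1) for a pion and a kaon of the same momentum; the radiator index n is an assumed value typical of a gas radiator, not taken from the text. At 5 GeV/c the pion is above threshold and the kaon is not, so the counter separates them.

```python
import math

# Threshold-counter sketch: a radiator of index n fires only for beta > 1/n.
n = 1.0014                                   # assumed gas-radiator index
masses = {"pion": 0.1396, "kaon": 0.4937}    # particle masses in GeV/c^2
p = 5.0                                      # common momentum in GeV/c

for name, m in masses.items():
    beta = p / math.sqrt(p**2 + m**2)        # relativistic speed for this mass
    fires = beta > 1.0 / n
    print("%s: beta = %.5f -> %s" % (name, beta,
          "light emitted" if fires else "below threshold"))
```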
The most advanced type of a detector is the RICH, or Ring-imaging Cherenkov detector, developed in the 1980s. In a RICH detector, a cone of Cherenkov light is produced when a high speed charged particle traverses a suitable medium, often called radiator. This light cone is detected on a position sensitive planar photon detector, which allows reconstructing a ring or disc, the radius of which is a measure for the Cherenkov emission angle. Both focusing and proximity-focusing detectors are in use. In a focusing RICH detector, the photons are collected by a spherical mirror and focused onto the photon detector placed at the focal plane. The result is a circle with a radius independent of the emission point along the particle track. This scheme is suitable for low refractive index radiators—i.e. gases—due to the larger radiator length needed to create enough photons. In the more compact proximity-focusing design, a thin radiator volume emits a cone of Cherenkov light which traverses a small distance—the proximity gap—and is detected on the photon detector plane. The image is a ring of light, the radius of which is defined by the Cherenkov emission angle and the proximity gap. The ring thickness is determined by the thickness of the radiator. An example of a proximity gap RICH detector is the High Momentum Particle Identification Detector (HMPID), a detector currently under construction for ALICE (A Large Ion Collider Experiment), one of the six experiments at the LHC (Large Hadron Collider) at CERN.
In popular culture
The blue color of Doctor Manhattan in Watchmen may have been inspired by Cherenkov radiation. (In contrast, the typical depiction of radioactive materials as glowing green, as for example in The Simpsons, is most likely inspired by radium paint.)
The 'Cherenkov Effect' is mentioned in the Fabulous Furry Freak Brothers story "Burned Again" in Issue 7 of the eponymous comic book series written by Gilbert Shelton. When a canister of discarded plutonium is discovered, the Freak Brothers attempt to sell it for personal gain, however during a mix-up near an ice cream parlour, the agents trying to recover the plutonium discover a canister containing melted ice cream. The resultant misunderstanding leads a government agent to yell 'It's all melted and purple! We're seeing the Cherenkov Effect!'
Vacuum Cherenkov radiation
The Cherenkov effect can occur in vacuum. In a slow-wave structure, the phase velocity decreases, and the velocity of charged particles can exceed the phase velocity while remaining lower than c. In such a system, this effect can be derived from conservation of energy and momentum, where the momentum of a photon should be taken as p = ħβ (with β the phase constant) rather than given by the de Broglie relation p = ħk. This type of radiation (VCR) is used to generate high-power microwaves.
- Askaryan effect, radiation produced by fast uncharged particles
- Bremsstrahlung, radiation produced when charged particles are decelerated by other charged particles
- Frank–Tamm formula, giving the spectrum of Cherenkov radiation
- Light echo
- List of light sources
- Nonradiation condition
- Transition radiation
Notes and references
- Alternative spelling forms: Cherenkov, Čerenkov, Cerenkov, and Vavilov, Wawilow.
- Cherenkov, P. A. (1934). "Visible emission of clean liquids by action of γ radiation". Doklady Akademii Nauk SSSR. 2: 451. Reprinted in Selected Papers of Soviet Physicists, Usp. Fiz. Nauk 93 (1967) 385. V sbornike: Pavel Alekseyevich Čerenkov: Chelovek i Otkrytie pod redaktsiej A. N. Gorbunova i E. P. Čerenkovoj, M., Nauka, 1999, s. 149-153. (ref Archived October 22, 2007, at the Wayback Machine.)
- Nahin, P. J. (1988). Oliver Heaviside: The Life, Work, and Times of an Electrical Genius of the Victorian Age. pp. 125–126. ISBN 9780801869099.
- Luo, C.; Ibanescu, M.; Johnson, S. G.; Joannopoulos, J. D. (2003). "Cerenkov Radiation in Photonic Crystals" (PDF). Science. 299 (5605): 368–71. Bibcode:2003Sci...299..368L. doi:10.1126/science.1079549. PMID 12532010.
- Schewe, P. F.; Stein, B. (24 March 2004). "Topsy turvy: The first true "left handed" material". American Institute of Physics. Retrieved 1 December 2008.
- Genevet, P.; Wintz, D.; Ambrosio, A.; She, A.; Blanchard, R.; Capasso, F. (2015). "Controlled steering of Cherenkov surface plasmon wakes with a one-dimensional metamaterial". Nature Nanotechnology. 10. pp. 804–809. doi:10.1038/nnano.2015.137.
- Bolotovskii, B. M. (2009). "Vavilov – Cherenkov radiation: Its discovery and application". Physics-Uspekhi. 52 (11): 1099. Bibcode:2009PhyU...52.1099B. doi: .
- Liu, H.; Zhang, X.; Xing, B.; Han, P.; Gambhir, S. S.; Cheng, Z. (21 May 2010). "Radiation-luminescence-excited quantum dots for in vivo multiplexed optical imaging". Small. 6 (10): 1087–91. PMID 20473988.
- Liu, H.; Ren, G.; Liu, S.; Zhang, X.; Chen, L.; Han, P.; Cheng, Z. (2010). "Optical imaging of reporter gene expression using a positron-emission-tomography probe". Journal of Biomedical Optics. 15 (6): 060505. Bibcode:2010JBO....15f0505L. doi: . PMC . PMID 21198146.
- Zhong, J.; Qin, C.; Yang, X.; Zhu, S.; Zhang, X.; Tian, J (2011). "Cerenkov luminescence tomography for in vivo radiopharmaceutical imaging". International Journal of Biomedical Imaging. 2011 (641618): 1–6. doi:10.1155/2011/641618.
- Sinoff, C. L. (20 April 1991). "Radical irradiation for carcinoma of the prostate.". South African Medical Journal. 79 (8): 514. PMID 2020899.
- Mitchell, G. S.; Gill, R. K.; Boucher, D. L.; Li, C.; Cherry, S. R. (17 October 2011). "In vivo Cerenkov luminescence imaging: a new tool for molecular imaging". Philosophical Transactions of the Royal Society A. 369 (1955): 4605–4619. Bibcode:2011RSPTA.369.4605M. doi: .
- Das, S.; Thorek, D. L. J.; Grimm, J. (2014). "Cerenkov Imaging". Emerging Applications of Molecular Imaging to Oncology. Advances in Cancer Research. 124. p. 213. doi:10.1016/B978-0-12-411638-2.00006-9. ISBN 9780124116382.
- Spinelli, A. E.; Ferdeghini, M.; Cavedon, C.; Zivelonghi, E.; Calandrino, R.; Fenzi, A.; Sbarbati, A.; Boschi, F. (18 January 2013). "First human Cerenkography". Journal of Biomedical Optics. 18 (2): 020502. Bibcode:2013JBO....18b0502S. doi:10.1117/1.JBO.18.2.020502.
- Jarvis, L. A.; Zhang, R.; Gladstone, D. J.; Jiang, S.; Hitchcock, W.; Friedman, O. D.; Glaser, A. K.; Jermyn, M.; Pogue, B. W. (July 2014). "Cherenkov Video Imaging Allows for the First Visualization of Radiation Therapy in Real Time". International Journal of Radiation Oncology*Biology*Physics. 89 (3): 615–622. doi:10.1016/j.ijrobp.2014.01.046.
- The High Momentum Particle Identification Detector at CERN
- http://www.livescience.com/3356-watchmen-science-dr-manhattan.html[full citation needed]
- "Why Does Radioactive Stuff Glow Green? (Or why do people think it does)". Depleted Cranium. 22 April 2008. Retrieved 11 March 2015.
- Wang, Z. Y. (2016). "Generalized momentum equation of quantum mechanics". Optical and Quantum Electronics. 48 (2): 1–9. doi:10.1007/s11082-015-0261-8.
- Bugaev, S. P.; Kanavets, V. I.; Klimov, A. I.; Koshelev, V. I.; Cherepenin, V. A. (1983). "Relativistic multiwave Cerenkov generator". Soviet Technical Physics Letters. 9: 1385–1389. Bibcode:1983PZhTF...9.1385B.
- Landau, L. D.; Liftshitz, E. M.; Pitaevskii, L. P. (1984). Electrodynamics of Continuous Media. New York: Pergamon Press. ISBN 0-08-030275-0.
- Jelley, J. V. (1958). Cerenkov Radiation and Its Applications. London: Pergamon Press.
- Smith, S. J.; Purcell, E. M. (1953). "Visible Light from Localized Surface Charges Moving across a Grating". Physical Review. 92 (4): 1069. Bibcode:1953PhRv...92.1069S. doi:10.1103/PhysRev.92.1069.
Are you confused about division? This article gives you information about division. It also explains the definition of division, the sign of the division, terms like Dividend, Divisor, Quotient, and Remainder, Facts about Division. You can also check the examples for a better understanding of the concept.
Division – Definition
It is one of the four basic arithmetic operations, and it gives the result of sharing. In this method, a group of things is distributed into equal parts. There are several signs that people use to indicate division. The most common signs are the division sign '÷' and the slash '/'. Some people also write one number above another with a line between them (a fraction bar).
In a division equation there are four parts, namely: 1. Dividend, 2. Divisor, 3. Quotient, 4. Remainder. The dividend is the number you are dividing up. The divisor is the number you are dividing by. The quotient is the answer. The remainder is the part of the dividend that is left over after the division. Division is the inverse of multiplication.
For example, in 24 ÷ 4 = 6:
24 is the Dividend.
4 is the Divisor.
6 is the Quotient.
Interesting Facts about Division | Division Facts Ideas to Learn
Here we will discuss facts about division, each explained with a small example. They are listed below.
- When dividing a number by 1, the answer will always be the same number. That is, if the divisor is 1, the quotient will always be equal to the dividend, such as 50 ÷ 1 = 50.
- Division by 0 is undefined. For example, 25 ÷ 0 is undefined.
- The division of the same numerator (dividend) and denominator (divisor) is always 1. For example: 4 ÷ 4 = 1.
- In a division sum, the remainder is always less than the divisor. For example, 16 ÷ 3 leaves remainder 1, and 1 is smaller than 3.
- The dividend is always equal to the product of the divisor and the quotient, with the addition of the remainder: Dividend = (Divisor × Quotient) + Remainder. For example, for 16 ÷ 3 the quotient is 5 and the remainder is 1, and indeed 16 = (3 × 5) + 1. (This is checked in the sketch after this list.)
- The divisor and the quotient are always factors of the dividend if there is no remainder. For example, 15 ÷ 3 = 5, so 3 and 5 are factors of 15.
- The dividend is always a multiple of the divisor and of the quotient if there is no remainder. For example, 40 ÷ 8 = 5, and 5 × 8 = 8 × 5 = 40.
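These facts are easy to check with Python's built-in divmod, as in the short sketch below.

```python
# Check: dividend = divisor * quotient + remainder, and remainder < divisor.
for dividend, divisor in [(16, 3), (40, 8), (50, 1), (15, 3)]:
    quotient, remainder = divmod(dividend, divisor)
    assert dividend == divisor * quotient + remainder
    assert remainder < divisor
    print(f"{dividend} ÷ {divisor}: quotient {quotient}, remainder {remainder}")
```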
FAQs on Facts of Division
1. What is division?
In this method, the group of things is distributed into equal parts.
2. What is the sign of division?
The most common signs of division are the division sign '÷' and the slash '/'.
3. What will be the Quotient when the dividend and divisor are the same?
The quotient of the same dividend and divisor is always 1.
4. What will be the Remainder when the dividend and divisor are the same?
The Remainder is zero when the dividend and divisor are the same.
5. In a division, does the remainder is always lesser than the divisor?
In a division, the remainder is always less than the divisor. If it were larger than the divisor, it would mean that the division is incomplete.
6. How to check the division if it is correct or not?
In division, the dividend is always equal to the product of the divisor and the quotient, plus the remainder:
Dividend = (Divisor × Quotient) + Remainder |