The fossils of two interrelated ancestral mammals, newly discovered in China, suggest that the wide-ranging ecological diversity of modern mammals had a precedent more than 160 million years ago.
With claws for climbing and teeth adapted for a tree sap diet, Agilodocodon scansorius is the earliest-known tree-dwelling mammaliaform (long-extinct relatives of modern mammals). The other fossil, Docofossor brachydactylus, is the earliest-known subterranean mammaliaform, possessing multiple adaptations similar to African golden moles such as shovel-like paws. Docofossor also has distinct skeletal features that resemble patterns shaped by genes identified in living mammals, suggesting these genetic mechanisms operated long before the rise of modern mammals.
These discoveries are reported by international teams of scientists from the University of Chicago and Beijing Museum of Natural History in two separate papers published Feb. 13 in Science.
"We consistently find with every new fossil that the earliest mammals were just as diverse in both feeding and locomotor adaptations as modern mammals," said Zhe-Xi Luo, PhD, professor of organismal biology and anatomy at the University of Chicago and an author on both papers. "The groundwork for mammalian success today appears to have been laid long ago."
Agilodocodon and Docofossor provide strong evidence that arboreal and subterranean lifestyles evolved early in mammalian evolution, convergent with those of true mammals. These two shrew-sized creatures - members of the mammaliaform order Docodonta - have unique adaptations tailored to their respective ecological habitats.
Agilodocodon, which lived roughly 165 million years ago, had hands and feet with curved horny claws and limb proportions that are typical for mammals that live in trees or bushes. It is adapted for feeding on the gum or sap of trees, with spade-like front teeth to gnaw into bark. This adaptation is similar to the teeth of some modern New World monkeys, and is the earliest-known evidence of gumnivorous feeding in mammaliaforms. Agilodocodon also had well-developed, flexible elbows and wrist and ankle joints that allowed for much greater mobility, all characteristics of climbing mammals.
"The finger and limb bone dimensions of Agilodocodon match up with those of modern tree-dwellers, and its incisors are evidence it fed on plant sap," said study co-author David Grossnickle, graduate student at the University of Chicago. "It's amazing that these arboreal adaptions occurred so early in the history of mammals and shows that at least some extinct mammalian relatives exploited evolutionarily significant herbivorous niches, long before true mammals."
Docofossor, which lived around 160 million years ago, had a skeletal structure and body proportions strikingly similar to the modern day African golden mole. It had shovel-like fingers for digging, short and wide upper molars typical of mammals that forage underground, and a sprawling posture indicative of subterranean movement.
Docofossor had reduced bone segments in its fingers, leading to shortened but wide digits. African golden moles possess almost exactly the same adaptation, which provides an evolutionary advantage for digging mammals. This characteristic is due to the fusion of bone joints during development - a process influenced by the genes BMP and GDF-5. Because of the many anatomical similarities, the researchers hypothesize that this genetic mechanism may have played a comparable role in early mammal evolution, as exemplified by Docofossor.
The spines and ribs of both Agilodocodon and Docofossor also show evidence of the influence of genes seen in modern mammals. Agilodocodon has a sharp boundary between the thoracic ribcage and the ribless lumbar vertebrae, whereas Docofossor shows a gradual thoracic-to-lumbar transition. These shifting patterns of thoracic-lumbar transition have been seen in modern mammals and are known to be regulated by the genes Hox 9-10 and Myf 5-6. That these ancient mammaliaforms had similar developmental patterns is evidence that these gene networks could have functioned in a similar way long before true mammals evolved.
"We believe the shortened digits of Docofossor, which is a dead ringer for modern golden moles, could very well have been caused by BMP and GDF," Luo said. "We can now provide fossil evidence that gene patterning that causes variation in modern mammalian skeletal development also operated in basal mammals all the way back in the Jurassic."
Early mammals were once thought to have limited ecological opportunities to diversify during the dinosaur-dominated Mesozoic era. However, Agilodocodon, Docofossor and numerous other fossils - including Castorocauda, a swimming, fish-eating mammaliaform described by Luo and colleagues in 2006 - provide strong evidence that ancestral mammals adapted to wide-ranging environments despite competition from dinosaurs.
"We know that modern mammals are spectacularly diverse, but it was unknown whether early mammals managed to diversify in the same way," Luo said. "These new fossils help demonstrate that early mammals did indeed have a wide range of ecological diversity. It appears dinosaurs did not dominate the Mesozoic landscape as much as previously thought."
The study, "Evolutionary development in basal mammaliaforms as revealed by a docodontan," was supported by the Beijing Science and Technology Commission, the Ministry of Science and Technology of China and the University of Chicago. Additional authors include Qing-Jin Meng, Qiang Ji, Di Liu, Yu-Guang Zhang and April I. Neander.
The study, "An arboreal docodont from the Jurassic and mammaliaform ecological diversification," was supported by the Beijing Science and Technology Commission, the Ministry of Science and Technology of China, the Chinese Academy of Geological Science and the University of Chicago. Additional authors include Qing-Jin Meng, Qiang Ji, Yu-Guang Zhang and Di Liu.
Civil War Anti-War Protests
Like some residents of other Northern states, numerous Ohioans strenuously objected to the American Civil War. Various reasons existed for the reluctance of these Ohioans and their fellow Northerners to support the Union.
A sizable number of white Ohioans, especially those living along the Ohio River, had migrated to the state from slaveholding states. While opponents of the war could not legally own slaves in Ohio, many of them had family members residing in the South who did own African American slaves. These people often sympathized with slaveholders, agreeing with many white Southerners that the federal government did not have the power to limit slavery's existence. These Ohioans preferred political compromise rather than warfare.
Other Ohioans had economic ties to the South. These Ohioans either operated businesses in the South or engaged in trade with Southerners. These Ohioans feared that a war would hurt them financially, as it theoretically could end trade between Ohio and the Southern states.
Some Ohioans did not support the war for religious reasons. Numerous groups in Ohio objected to violence due to their religious beliefs. These people included members of the Society of Friends, the Mennonites, the Amish, and several other denominations. While these groups did not formally protest the war, many of their followers refused to participate in the conflict. Some members of these faiths violated their religious teachings and did take up arms against the Confederacy. While groups like the Quakers opposed violence, they also believed that slavery was equally unjust and against God's will.
Later, some Ohioans began to oppose the Civil War after Abraham Lincoln issued the Emancipation Proclamation in September 1862. That document declared that the slaves in areas still in rebellion as of January 1, 1863 would receive their freedom on that date. By issuing the proclamation, Lincoln made ending slavery one of the North's war aims. Many Northerners, including some Ohioans, were willing to fight to reunite the nation and to secure a government where the majority ruled, but they were unwilling to fight a war to terminate slavery. This was especially true among some soldiers from the working class. These men feared that, with slavery's end, African Americans would migrate to the North, taking jobs away from the white workers. Several Northern soldiers, including some Ohioans, deserted from the Union army in protest of the Emancipation Proclamation.
A final and, perhaps, most important reason for anti-war protests was the draft. In 1863, the United States government implemented the Conscription Act, which was also known as the Enrollment Act. This act required states to draft men to serve in the Union military if individual states did not meet their enlistment quotas through volunteers. The Conscription Act permitted drafted men to pay a commutation fee of three hundred dollars or to hire a substitute to escape service if they were drafted.
Draft riots occurred in both New York City, New York, and Boston, Massachusetts. Some Ohioans also strongly objected to the Conscription Act. Many of the opponents were members of the anti-war or "Peace" wing of the Democratic Party and encouraged men to resist the draft or to desert once they were drafted. In Hoskinville, residents attempted to hide a deserter from government authorities, and the local federal marshal called in soldiers to arrest the man.

In Holmes County, nine hundred to one thousand men created a makeshift fort to defend themselves from federal officials sent to enforce the Conscription Act. These men were responding to attempts by the federal government to enlist men into the Union army during June 1863. A mob had attacked an officer sent to enlist men into the service, and a provost marshal captured the ringleaders behind the assault; a group of residents then freed the four men arrested. The resisters built Fort Fizzle to thwart future attempts to arrest the ringleaders and to prevent the draft's enforcement. They equipped themselves with guns and four artillery pieces, although some scholars doubt that any cannons were actually inside the fort.

Approximately 420 federal soldiers arrived to disarm the men and to implement the draft. A brief skirmish occurred, with the soldiers emerging victorious; two draft resisters were wounded. The demonstrators dispersed into the woods, and the Battle of Fort Fizzle, as it became known, quickly ended. The soldiers continued to hunt for the protestors until a deal was brokered in which the four men originally arrested would surrender. When the men turned themselves in, a majority of the soldiers returned to Columbus. This was just one of many protests against the draft in Ohio, but unlike the Battle of Fort Fizzle, government authorities easily put down most of these uprisings without having to resort to violence.
Clement Vallandigham and the Peace Democrats
Several Ohioans participated in a peace convention during early 1861. The convention was held in Washington, DC, and the delegates hoped to convince President Abraham Lincoln to either agree to the Confederacy's demands to get its citizens to rejoin the Union or simply to let the Southern states leave the United States. Lincoln ignored the peace convention's attempt to end the conflict peacefully. Politically, most people who participated in the peace convention affiliated themselves with the Democratic Party. These people became known as Peace Democrats.
Clement Vallandigham was the best known Peace Democrat in Ohio. He helped organize a rally for the Democratic Party at Mount Vernon, Ohio, on May 1, 1863. Peace Democrats Vallandigham, Samuel Cox, and George Pendleton all delivered speeches denouncing General Order No. 38. In April 1863, General Ambrose Burnside, commander of the Department of Ohio, issued General Order No. 38. Burnside placed his headquarters in Cincinnati. Located on the Ohio River, just north of the slave state of Kentucky, Cincinnati had a number of residents sympathetic to the Confederacy. Burnside hoped to intimidate Confederate sympathizers with General Order No. 38.
General Order No. 38 stated:
The habit of declaring sympathy for the enemy will not be allowed in this department. Persons committing such offenses will be at once arrested with a view of being tried or sent beyond our lines into the lines of their friends. It must be understood that treason, expressed or implied, will not be tolerated in this department.
Burnside also declared that, in certain cases, violations of General Order No. 38 could result in death.
Vallandigham was so opposed to the order that he allegedly said he "despised it, spit upon it, trampled it under his feet." He also supposedly encouraged his fellow Peace Democrats to openly resist Burnside. Vallandigham went on to chastise President Lincoln for not seeking a peaceable and immediate end to the Civil War and for allowing General Burnside to thwart citizens' rights under a free government.
In attendance at the Mount Vernon rally were two army officers under Burnside's command. They reported to Burnside that Vallandigham had violated General Order No. 38. The general ordered his immediate arrest. On May 5, 1863, a company of soldiers arrested Vallandigham at his home in Dayton and brought him to Cincinnati to stand trial.
Burnside charged Vallandigham with the following crimes:
Publicly expressing, in violation of General Orders No. 38, from Head-quarters Department of Ohio, sympathy for those in arms against the Government of the United States, and declaring disloyal sentiments and opinions, with the object and purpose of weakening the power of the Government in its efforts to suppress an unlawful rebellion.
A military tribunal heard the case, and Vallandigham offered no serious defense against the charges. He contended that military courts had no jurisdiction over his case. The tribunal found Vallandigham guilty and sentenced him to remain in a United States prison for the remainder of the war.
Vallandigham's attorney, George Pugh, appealed the tribunal's decision to Humphrey Leavitt, a judge on the federal circuit court. Pugh, like his client, claimed that the military court did not have proper jurisdiction in this case and had violated Vallandigham's constitutional rights. Judge Leavitt rejected Vallandigham's argument. He agreed with General Burnside that military authority was necessary during a time of war to ensure that opponents to the United States Constitution did not succeed in overthrowing the Constitution and the rights that it guaranteed United States citizens.
As a result of Leavitt's decision, authorities were to send Vallandigham to federal prison. President Lincoln feared that Peace Democrats across the North might rise up to prevent Vallandigham's detention. The president commuted Vallandigham's sentence to exile in the Confederacy. On May 25, Burnside sent Vallandigham into Confederate lines.
Some Peace Democrats resorted to more radical means, including subversion, to protest the Civil War. Some of these men formed secret societies such as the Sons of Liberty, whose members resided primarily in the Northern and border states. In February 1864, Clement Vallandigham was elected supreme commander of the Sons of Liberty. Ohio government officials estimated that between eighty thousand and 110,000 Ohioans belonged to these organizations, but most historians discount these figures as dramatically higher than the groups' actual membership.
Rumors circulated throughout the North during 1864 that Confederate sympathizers intended to free Southern prisoners at several prison camps, including Johnson's Island and Camp Chase in Ohio. These freed prisoners would form the basis of a new Confederate army operating in the heart of the Union. Supposedly, General John Hunt Morgan, who had raided Ohio the previous year, would return to the state and assist this new army. Confederate supporters hoped to capture the Michigan, a gunboat operating on Lake Erie near Sandusky, and then use the gunboat to free the Confederate prisoners at Johnson's Island. The plot never materialized. William Rosecrans, assigned to oversee the Department of Missouri, discovered the planned uprising and warned Northern governors to remain cautious. John Brough, Ohio's governor, sent out spies to infiltrate the sympathizer groups; these men succeeded and stopped the uprising before it could occur. Union authorities arrested the plot's ringleader, Charles Cole.
While some Ohioans did openly oppose the Civil War, these people remained a distinct minority. Most Ohioans supported the war, and a very large number of them volunteered for military service. Nevertheless, at least to some degree, the war protesters caused difficulties for both the state and federal governments and hampered their ability to wage the war.
- Ambrose Burnside
- John Brough
- Abraham Lincoln
- George Pendleton
- Clement Vallandigham
- Battle of Fort Fizzle
- African Americans
- Mennonite Church
- Peace Democrats
- Camp Chase
- Cincinnati, Ohio
- Dayton, Ohio
- Democratic Party
- Sons of Liberty
- Johnson's Island
- American Civil War
- Ohio River
- Samuel S. Cox
- Emancipation Proclamation
- John H. Morgan
- Morgan's Raid
- Lake Erie
- Columbus, Ohio
- Holmes County
- Mount Vernon, Ohio
- Enrollment Act
5.2 The HSV Colorspace
The perception of color and our way of talking about it in everyday life are not well served by the RGB colorspace. If we're thinking of repainting the walls of the living room, for example, we usually think about what shade of color it should be, how bright we want it, and whether it should be pastel or vivid.
Typically, the first thing we notice about a color is its hue. Hue describes the shade of the color and where that color is found in the color spectrum. Red, yellow, and purple are words that describe hue. Figure 5.3, Hue, Saturation, and Value, illustrates the range of hues, H, as a circle represented by values from 0 to 360. The reasons for this will become clear shortly.
The next most significant aspect of color is typically the saturation, S. Saturation describes how pure the hue is with respect to a white reference. For example, a color that is all red and no white is fully saturated. If we add some white to the red, the result becomes more pastel, and the color shifts from red to pink. The hue is still red, but it has become less saturated. This is illustrated in the vertical bar of Figure 5.3. Saturation is a percentage that ranges from 0 to 100; a pure red that contains no white is 100% saturated.
Finally, a color also has a brightness. This is a relative description of how much light is coming from the color. If the color reflects a lot of light, we would say that it is bright. Imagine seeing a red sportscar during the day: its color looks bright. Compare this with the perception of the car as night is falling. We can see that the car is red, but it looks duller because it is reflecting less light into the eye; less light means the color looks darker. In the GIMP, the most important measure of brightness is a quantity called value, although other measures of brightness will be introduced shortly. For the moment, though, the horizontal bar in Figure 5.3 illustrates a range of red values.

Value, like saturation, is a percentage that goes from 0 to 100. This range can be thought of as the amount of light illuminating a color. For example, when the hue is red and the value is high, the color looks bright; when the value is low, it looks dark.
Thus, hue, saturation, and value form an alternative colorspace. Any color can be decomposed into these three components and, as with RGB, it is possible to represent this space as a cube.
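The mapping between the two spaces can be sketched with Python's standard colorsys module. The wrapper function and the rounding below are our own conventions (colorsys itself reports hue, saturation, and value on a 0-1 scale); the examples mirror the discussion above: mixing white into red lowers saturation without changing hue, while dimming the light lowers value.

```python
import colorsys

def rgb_to_hsv_degrees(r, g, b):
    """Convert RGB components in [0, 1] to (hue 0-360, saturation 0-100,
    value 0-100), matching the conventions used in the text."""
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    # Round away floating-point noise so the percentages read cleanly.
    return round(h * 360, 4), round(s * 100, 4), round(v * 100, 4)

# Pure red: fully saturated, full value.
print(rgb_to_hsv_degrees(1.0, 0.0, 0.0))   # (0.0, 100.0, 100.0)

# Red with white mixed in (pink): same hue, lower saturation.
print(rgb_to_hsv_degrees(1.0, 0.5, 0.5))   # (0.0, 50.0, 100.0)

# The same red under less light: same hue and saturation, lower value.
print(rgb_to_hsv_degrees(0.4, 0.0, 0.0))   # (0.0, 100.0, 40.0)
```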
The figure Decomposing a Color Image into its HSV Components illustrates the result of using Image:Image/Mode/Decompose on the color image in (a). Choosing the HSV option in the Decompose dialog produces the decomposition shown in (b), (c), and (d). It is interesting to note that hue really doesn't change much; it is almost constant over broad regions of the image. For example, although there is significant detail in the saturation and value components of the sky, the hue is quite uniform there. Of the three, it is the value component that is the most detailed.
Because colors are created on the monitor using mixes of red, green,
and blue, it is useful and instructive to see how the HSV colorspace
lives inside of the RGB cube.
From Wikipedia, the free encyclopedia
Stereoscopy (also called stereoscopics or 3D imaging) is a technique for creating or enhancing the illusion of depth in an image by means of stereopsis for binocular vision. The word stereoscopy derives from Greek στερεός (stereos), meaning "firm, solid", and σκοπέω (skopeō), meaning "to look, to see". Any stereoscopic image is called a stereogram. Originally, stereogram referred to a pair of stereo images that could be viewed using a stereoscope.
Most stereoscopic methods present two offset images separately to the left and right eye of the viewer. These two-dimensional images are then combined in the brain to give the perception of 3D depth. This technique is distinguished from 3D displays that display an image in three full dimensions, allowing the observer to increase information about the 3-dimensional objects being displayed by head and eye movements.
Stereoscopy creates the illusion of three-dimensional depth from given two-dimensional images. Human vision, including the perception of depth, is a complex process which only begins with the acquisition of visual information taken in through the eyes; much processing ensues within the brain, as it strives to make intelligent and meaningful sense of the raw information provided. One of the very important visual functions that occur within the brain as it interprets what the eyes see is that of assessing the relative distances of various objects from the viewer, and the depth dimension of those same perceived objects. The brain makes use of a number of cues to determine relative distances and depth in a perceived scene, including:
(All the above cues, with the exception of the first two, are present in traditional two-dimensional images such as paintings, photographs, and television.)
Stereoscopy is the production of the illusion of depth in a photograph, movie, or other two-dimensional image by presenting a slightly different image to each eye, and thereby adding the first of these cues (stereopsis) as well. Both of the 2D offset images are then combined in the brain to give the perception of 3D depth. It is important to note that since all points in the image focus at the same plane regardless of their depth in the original scene, the second cue, focus, is still not duplicated and therefore the illusion of depth is incomplete. There are also primarily two effects of stereoscopy that are unnatural for the human vision: first, the mismatch between convergence and accommodation, caused by the difference between an object's perceived position in front of or behind the display or screen and the real origin of that light and second, possible crosstalk between the eyes, caused by imperfect image separation by some methods.
Although the term "3D" is used ubiquitously, it is important to note that the presentation of dual 2D images is distinctly different from displaying an image in three full dimensions. The most notable difference is that, in the case of "3D" displays, the observer's head and eye movements will not increase the information available about the 3-dimensional objects being displayed. Holographic displays and volumetric displays are examples of displays that do not have this limitation. Similar to the technology of sound reproduction, in which it is not possible to recreate a full 3-dimensional sound field with merely two stereophonic speakers, it is likewise an overstatement of capability to refer to dual 2D images as "3D". The accurate term "stereoscopic" is more cumbersome than the common misnomer "3D", which has become entrenched after many decades of unquestioned misuse. Although most stereoscopic displays do not qualify as real 3D displays, all real 3D displays are also stereoscopic displays because they meet the lower criteria as well.
Most 3D displays use this stereoscopic method to convey images. The method was first invented by Sir Charles Wheatstone in 1838 and improved by Sir David Brewster, who made the first portable 3D viewing device.
Wheatstone originally used his stereoscope (a rather bulky device) with drawings because photography was not yet available, yet his original paper seems to foresee the development of a realistic imaging method:
For the purposes of illustration I have employed only outline figures, for had either shading or colouring been introduced it might be supposed that the effect was wholly or in part due to these circumstances, whereas by leaving them out of consideration no room is left to doubt that the entire effect of relief is owing to the simultaneous perception of the two monocular projections, one on each retina. But if it be required to obtain the most faithful resemblances of real objects, shadowing and colouring may properly be employed to heighten the effects. Careful attention would enable an artist to draw and paint the two component pictures, so as to present to the mind of the observer, in the resultant perception, perfect identity with the object represented. Flowers, crystals, busts, vases, instruments of various kinds, &c., might thus be represented so as not to be distinguished by sight from the real objects themselves.
Stereoscopy is used in photogrammetry and also for entertainment through the production of stereograms. Stereoscopy is useful in viewing images rendered from large multi-dimensional data sets such as are produced by experimental data. An early patent for 3D imaging in cinema and television was granted to physicist Theodor V. Ionescu in 1936. Modern industrial three-dimensional photography may use 3D scanners to detect and record three-dimensional information. The three-dimensional depth information can be reconstructed from two images using a computer by corresponding the pixels in the left and right images. Solving the correspondence problem in the field of computer vision aims to create meaningful depth information from two images.
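Once corresponding pixels have been matched, recovering depth from a rectified stereo pair reduces to simple triangulation: a point's depth is inversely proportional to its disparity, the horizontal shift of that point between the left and right images (Z = f · B / d). A minimal sketch, with an assumed focal length in pixels and an assumed camera baseline in metres (real pipelines obtain both from camera calibration):

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Triangulate depth for a rectified stereo pair: Z = f * B / d.

    focal_px     -- focal length expressed in pixels
    baseline_m   -- distance between the two camera centres, in metres
    disparity_px -- horizontal pixel shift of the point between the views
    """
    if disparity_px <= 0:
        # Zero disparity corresponds to a point at infinite distance.
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px

# Assumed example parameters: 700 px focal length, 7 cm baseline.
# A point shifted 40 px between the views lies 1.225 m from the cameras;
# a point shifted 80 px is closer, at half that distance.
print(depth_from_disparity(700.0, 0.07, 40.0))
print(depth_from_disparity(700.0, 0.07, 80.0))
```

Note how the nearer point produces the larger disparity, which is exactly the cue the brain exploits in stereopsis.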
Anatomically, there are 3 levels of binocular vision required to view stereo images:
These functions develop in early childhood. In some people, strabismus disrupts the development of stereopsis; however, orthoptic treatment can be used to improve binocular vision. A person's stereoacuity determines the minimum image disparity they can perceive as depth. It is believed that approximately 12% of people are unable to properly see 3D images, due to a variety of medical conditions. According to another experiment, up to 30% of people have very weak stereoscopic vision, preventing depth perception based on stereo disparity. This nullifies or greatly decreases the immersive effect of stereo for them.
Traditional stereoscopic photography consists of creating a 3D illusion starting from a pair of 2D images, a stereogram. The easiest way to enhance depth perception in the brain is to provide the eyes of the viewer with two different images, representing two perspectives of the same object, with a minor deviation equal or nearly equal to the perspectives that both eyes naturally receive in binocular vision.
To avoid eyestrain and distortion, each of the two 2D images should be presented to the viewer so that any object at infinite distance is perceived by the eye as being straight ahead, the viewer's eyes being neither crossed nor diverging. When the picture contains no object at infinite distance, such as a horizon or a cloud, the pictures should be spaced correspondingly closer together.
The principal advantages of side-by-side viewers are the lack of diminution of brightness (allowing the presentation of images at very high resolution and in full-spectrum color), simplicity of creation, and the fact that little or no additional image processing is required. Under some circumstances, such as when a pair of images is presented for freeviewing, no device or additional optical equipment is needed.
The principal disadvantage of side-by-side viewers is that large image displays are not practical, and resolution is limited by the lesser of the display medium's resolution or that of the human eye. This is because as the dimensions of an image are increased, either the viewing apparatus or the viewers themselves must move proportionately farther away in order to view it comfortably. Moving closer to an image in order to see more detail would only be possible with viewing equipment that adjusted for the difference.
Freeviewing is viewing a side-by-side image pair without using a viewing device.
Prismatic, self-masking glasses are now being used by some cross-eyed-view advocates. These reduce the degree of convergence required and allow large images to be displayed. However, any viewing aid that uses prisms, mirrors or lenses to assist fusion or focus is simply a type of stereoscope, excluded by the customary definition of freeviewing.
Stereoscopically fusing two separate images without the aid of mirrors or prisms while simultaneously keeping them in sharp focus without the aid of suitable viewing lenses inevitably requires an unnatural combination of eye vergence and accommodation. Simple freeviewing therefore cannot accurately reproduce the physiological depth cues of the real-world viewing experience. Different individuals may experience differing degrees of ease and comfort in achieving fusion and good focus, as well as differing tendencies to eye fatigue or strain.
An autostereogram is a single-image stereogram (SIS), designed to create the visual illusion of a three-dimensional (3D) scene within the human brain from an external two-dimensional image. In order to perceive 3D shapes in these autostereograms, one must overcome the normally automatic coordination between focusing and vergence.
The stereoscope is essentially an instrument in which two photographs of the same object, taken from slightly different angles, are simultaneously presented, one to each eye. A simple stereoscope is limited in the size of the image that may be used. A more complex stereoscope uses a pair of horizontal periscope-like devices, allowing the use of larger images that can present more detailed information in a wider field of view.
Some stereoscopes are designed for viewing transparent photographs on film or glass, known as transparencies or diapositives and commonly called slides. Some of the earliest stereoscope views, issued in the 1850s, were on glass. In the early 20th century, 45x107 mm and 6x13 cm glass slides were common formats for amateur stereo photography, especially in Europe. In later years, several film-based formats were in use. The best-known formats for commercially issued stereo views on film are Tru-Vue, introduced in 1931, and View-Master, introduced in 1939 and still in production. For amateur stereo slides, the Stereo Realist format, introduced in 1947, is by far the most common.
The user typically wears a helmet or glasses with two small LCD or OLED displays with magnifying lenses, one for each eye. The technology can be used to show stereo films, images or games, but it can also be used to create a virtual display. Head-mounted displays may also be coupled with head-tracking devices, allowing the user to "look around" the virtual world by moving their head, eliminating the need for a separate controller. Performing this update quickly enough to avoid inducing nausea in the user requires a great amount of computer image processing. If six-axis position sensing (direction and position) is used, then the wearer may move about within the limitations of the equipment used. Owing to rapid advancements in computer graphics and the continuing miniaturization of video and other equipment, these devices are beginning to become available at more reasonable cost.
Head-mounted or wearable glasses may be used to view a see-through image imposed upon the real world view, creating what is called augmented reality. This is done by reflecting the video images through partially reflective mirrors. The real world view is seen through the mirrors' reflective surface. Experimental systems have been used for gaming, where virtual opponents may peek from real windows as a player moves about. This type of system is expected to have wide application in the maintenance of complex systems, as it can give a technician what is effectively "x-ray vision" by combining computer graphics rendering of hidden elements with the technician's natural vision. Additionally, technical data and schematic diagrams may be delivered to this same equipment, eliminating the need to obtain and carry bulky paper documents.
A virtual retinal display (VRD), also known as a retinal scan display (RSD) or retinal projector (RP), not to be confused with a "Retina Display", is a display technology that draws a raster image (like a television picture) directly onto the retina of the eye. The user sees what appears to be a conventional display floating in space in front of them. For true stereoscopy, each eye must be provided with its own discrete display. To produce a virtual display that occupies a usefully large visual angle but does not involve the use of relatively large lenses or mirrors, the light source must be very close to the eye. A contact lens incorporating one or more semiconductor light sources is the form most commonly proposed. As of 2013, the inclusion of suitable light-beam-scanning means in a contact lens is still very problematic, as is the alternative of embedding a reasonably transparent array of hundreds of thousands (or millions, for HD resolution) of accurately aligned sources of collimated light.
There are two categories of 3D viewer technology, active and passive. Active viewers have electronics which interact with a display. Passive viewers filter constant streams of binocular input to the appropriate eye.
A shutter system works by openly presenting the image intended for the left eye while blocking the right eye's view, then presenting the right-eye image while blocking the left eye, and repeating this so rapidly that the interruptions do not interfere with the perceived fusion of the two images into a single 3D image. It generally uses liquid crystal shutter glasses. Each eye's glass contains a liquid crystal layer which has the property of becoming dark when voltage is applied, being otherwise transparent. The glasses are controlled by a timing signal that allows the glasses to alternately darken over one eye, and then the other, in synchronization with the refresh rate of the screen.
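The alternation described above can be sketched as a simple schedule. This is only an illustration of the timing logic; the even-left/odd-right convention and the function name are arbitrary choices, not part of any real display driver API.

```python
def open_eye(refresh_index):
    """Frame-sequential shutter scheme: on each screen refresh, one
    eye's shutter is transparent while the other is darkened, so the
    display alternately delivers the left-eye and right-eye image.
    The even-left/odd-right assignment here is an arbitrary choice."""
    return "left" if refresh_index % 2 == 0 else "right"
```

At a 120 Hz refresh rate this schedule delivers 60 images per second to each eye, fast enough that the interruptions do not interfere with perceived fusion.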
To present stereoscopic pictures, two images are projected superimposed onto the same screen through polarizing filters or presented on a display with polarized filters. For projection, a silver screen is used so that polarization is preserved. On most passive displays, every other row of pixels is polarized for one eye or the other; this method is also known as interlacing. The viewer wears low-cost eyeglasses which contain a pair of opposite polarizing filters. As each filter only passes light which is similarly polarized and blocks the oppositely polarized light, each eye only sees one of the images, and the effect is achieved.
This technique uses specific wavelengths of red, green, and blue for the right eye, and different wavelengths of red, green, and blue for the left eye. Eyeglasses which filter out the very specific wavelengths allow the wearer to see a full color 3D image. It is also known as spectral comb filtering, wavelength multiplex visualization, or super-anaglyph. Dolby 3D uses this principle. The Omega 3D/Panavision 3D system also used an improved version of this technology. In June 2012, the Omega 3D/Panavision 3D system was discontinued by DPVO Theatrical, who marketed it on behalf of Panavision, citing "challenging global economic and 3D market conditions". Although DPVO dissolved its business operations, Omega Optical continues promoting and selling 3D systems to non-theatrical markets. Omega Optical's 3D system contains projection filters and 3D glasses. In addition to the passive stereoscopic 3D system, Omega Optical has produced enhanced anaglyph 3D glasses. Omega's red/cyan anaglyph glasses use complex metal oxide thin film coatings and high-quality annealed glass optics.
Anaglyph 3D is the name given to the stereoscopic 3D effect achieved by means of encoding each eye's image using filters of different (usually chromatically opposite) colors, typically red and cyan. Anaglyph 3D images contain two differently filtered colored images, one for each eye. When viewed through the "color-coded" "anaglyph glasses", each of the two images reaches one eye, revealing an integrated stereoscopic image. The visual cortex of the brain fuses this into perception of a three dimensional scene or composition.
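The channel encoding described above can be sketched in a few lines of NumPy. This is a minimal illustration, not a production encoder; real anaglyph tools usually also adjust the colors to reduce retinal rivalry and ghosting.

```python
import numpy as np

def make_anaglyph(left_rgb, right_rgb):
    """Encode a stereo pair as a red/cyan anaglyph: the red channel is
    taken from the left-eye image and the green and blue (cyan)
    channels from the right-eye image. Both inputs are H x W x 3
    uint8 arrays of the same shape."""
    anaglyph = np.empty_like(left_rgb)
    anaglyph[..., 0] = left_rgb[..., 0]     # red channel: left eye
    anaglyph[..., 1:] = right_rgb[..., 1:]  # green and blue: right eye
    return anaglyph
```

Viewed through red/cyan glasses, the red filter passes only the left-eye channel and the cyan filter only the right-eye channels, so each eye receives its own image.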
The ChromaDepth procedure of American Paper Optics is based on the fact that a prism separates colors by varying degrees. The ChromaDepth eyeglasses contain special view foils, which consist of microscopically small prisms. These displace the image by an amount that depends on its color. If a prism foil is placed over one eye but not the other, the two perceived pictures are separated more or less widely depending on color. The brain produces the spatial impression from this difference. The chief advantage of this technology is that ChromaDepth pictures can also be viewed without eyeglasses (thus two-dimensionally) without problems, unlike two-color anaglyphs. However, the colors are only selectable within limits, since they carry the depth information of the picture: if the color of an object is changed, its observed distance also changes.
The Pulfrich effect is based on the phenomenon of the human eye processing images more slowly when there is less light, as when looking through a dark lens. Because the Pulfrich effect depends on motion in a particular direction to instigate the illusion of depth, it is not useful as a general stereoscopic technique. For example, it cannot be used to show a stationary object apparently extending into or out of the screen; similarly, objects moving vertically will not be seen as moving in depth. Incidental movement of objects will create spurious artifacts, and these incidental effects will be seen as artificial depth not related to actual depth in the scene.
Stereoscopic viewing is achieved by placing an image pair one above the other. Special viewers are made for the over/under format that tilt the right eye's view slightly up and the left eye's view slightly down. The most common one with mirrors is the View Magic. Another, with prismatic glasses, is the KMQ viewer. A recent usage of this technique is the openKMQ project.
Autostereoscopic display technologies use optical components in the display, rather than worn by the user, to enable each eye to see a different image. Because headgear is not required, it is also called "glasses-free 3D". The optics split the images directionally into the viewer's eyes, so the display viewing geometry requires limited head positions that will achieve the stereoscopic effect. Automultiscopic displays provide multiple views of the same scene, rather than just two. Each view is visible from a different range of positions in front of the display. This allows the viewer to move left-right in front of the display and see the correct view from any position. The technology includes two broad classes of displays: those that use head-tracking to ensure that each of the viewer's two eyes sees a different image on the screen, and those that display multiple views so that the display does not need to know where the viewers' eyes are directed. Examples of autostereoscopic display technologies include lenticular lenses, parallax barriers, volumetric displays, holography, and light field displays.
Laser holography, in its original "pure" form of the photographic transmission hologram, is the only technology yet created which can reproduce an object or scene with such complete realism that the reproduction is visually indistinguishable from the original, given the original lighting conditions. It creates a light field identical to that which emanated from the original scene, with parallax about all axes and a very wide viewing angle. The eye differentially focuses objects at different distances and subject detail is preserved down to the microscopic level. The effect is exactly like looking through a window. Unfortunately, this "pure" form requires the subject to be laser-lit and completely motionless—to within a minor fraction of the wavelength of light—during the photographic exposure, and laser light must be used to properly view the results. Most people have never seen a laser-lit transmission hologram. The types of holograms commonly encountered have seriously compromised image quality so that ordinary white light can be used for viewing, and non-holographic intermediate imaging processes are almost always resorted to, as an alternative to using powerful and hazardous pulsed lasers, when living subjects are photographed.
Although the original photographic processes have proven impractical for general use, the combination of computer-generated holograms (CGH) and optoelectronic holographic displays, both under development for many years, has the potential to transform the half-century-old pipe dream of holographic 3D television into a reality; so far, however, the large amount of calculation required to generate just one detailed hologram, and the huge bandwidth required to transmit a stream of them, have confined this technology to the research laboratory.
Volumetric displays use some physical mechanism to display points of light within a volume. Such displays use voxels instead of pixels. Volumetric displays include multiplanar displays, which have multiple display planes stacked up, and rotating panel displays, where a rotating panel sweeps out a volume.
Other technologies have been developed to project light dots in the air above a device. An infrared laser is focused on the destination in space, generating a small bubble of plasma which emits visible light.
Integral imaging is an autostereoscopic or multiscopic 3D display, meaning that it displays a 3D image without the use of special glasses on the part of the viewer. It achieves this by placing an array of microlenses (similar to a lenticular lens) in front of the image, where each lens looks different depending on viewing angle. Thus rather than displaying a 2D image that looks the same from every direction, it reproduces a 4D light field, creating stereo images that exhibit parallax when the viewer moves.
Wiggle stereoscopy is an image display technique achieved by quickly alternating display of the left and right sides of a stereogram. It is often found in animated GIF format on the web; online examples are visible in the New York Public Library stereogram collection. The technique is also known as "Piku-Piku".
For general purpose stereo photography, where the goal is to duplicate natural human vision and give a visual impression as close as possible to actually being there, the correct baseline (distance between where the right and left images are taken) would be the same as the distance between the eyes. When images taken with such a baseline are viewed using a viewing method that duplicates the conditions under which the picture is taken then the result would be an image pretty much the same as what would be seen at the site the photo was taken. This could be described as "ortho stereo."
There are, however, situations where it might be desirable to use a longer or shorter baseline. The factors to consider include the viewing method to be used and the goal in taking the picture. Note that the concept of baseline also applies to other branches of stereography, such as stereo drawings and computer generated stereo images, but it involves the point of view chosen rather than actual physical separation of cameras or lenses.
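One widely used rule of thumb for choosing a longer-than-normal baseline, not stated in the text above, is the "1/30 rule": make the baseline roughly 1/30 of the distance to the nearest subject. A sketch:

```python
def suggested_baseline_m(nearest_subject_m, ratio=1.0 / 30.0):
    """The '1/30 rule' of thumb: baseline ~ distance to the nearest
    subject divided by 30. For a subject about 2 m away this gives
    roughly the ~65 mm human interocular separation (ortho stereo);
    distant landscapes call for hyperstereo (longer baselines) and
    close-up macro work for shorter ones."""
    return nearest_subject_m * ratio
```

For a mountain range whose nearest feature is 300 m away, this suggests a 10 m baseline, far wider than the eyes, which is why hyperstereo landscapes look like miniature models when viewed.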
For any branch of stereoscopy the concept of the stereo window is important. If a scene is viewed through a window, the entire scene would normally be behind the window; if the scene is distant, it would be some distance behind the window; if it is nearby, it would appear to be just beyond the window. An object smaller than the window itself could even go through the window and appear partially or completely in front of it. The same applies to a part of a larger object that is smaller than the window.
The goal of setting the stereo window is to duplicate this effect.
To truly understand the concept of window adjustment it is necessary to understand where the stereo window itself is. In the case of projected stereo, including "3D" movies, the window would be the surface of the screen. With printed material the window is at the surface of the paper. When stereo images are seen by looking into a viewer the window is at the position of the frame. In the case of Virtual Reality the window seems to disappear as the scene becomes truly immersive.
The entire scene can be moved backwards or forwards in depth, relative to the stereo window, by horizontally sliding the left and right eye views relative to each other. Moving either or both images away from the center will bring the whole scene away from the viewer, whereas moving either or both images toward the center will move the whole scene toward the viewer. Any objects in the scene that have no horizontal offset will appear at the same depth as the stereo window.
There are several considerations in deciding where to place the scene relative to the window.
First, in the case of an actual physical window, the left eye will see less of the left side of the scene and the right eye will see less of the right side of the scene, because the view is partly blocked by the window frame. This principle is known as "less to the left on the left" or 3L, and is often used as a guide when adjusting the stereo window where all objects are to appear behind the window. When the images are moved further apart, the outer edges are cropped by the same amount, thus duplicating the effect of a window frame.
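The "less to the left on the left" rule can be sketched as a symmetric crop of a parallel-view pair. This is a minimal illustration with NumPy arrays standing in for images; real tools may instead shift and pad to preserve the field of view.

```python
import numpy as np

def push_scene_behind_window(left, right, crop_px):
    """Apply the 3L rule to a parallel-view stereo pair: crop crop_px
    pixel columns from the outer edge of each view (the left edge of
    the left-eye image, the right edge of the right-eye image). The
    left eye then sees less of the left side of the scene, and the
    whole scene appears to lie behind the stereo window."""
    if crop_px == 0:
        return left, right
    return left[:, crop_px:], right[:, :-crop_px]
```

Cropping more columns moves the window further forward, placing the scene correspondingly deeper behind it.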
Another consideration involves deciding where individual objects are placed relative to the window. It would be normal for the frame of an actual window to partly overlap or "cut off" an object that is behind the window. Thus an object behind the stereo window might be partly cut off by the frame or side of the stereo window, so the stereo window is often adjusted to place objects cut off by the window behind it. If an object, or part of an object, is not cut off by the window, then it could be placed in front of it, and the stereo window may be adjusted with this in mind. This effect is how swords, bugs, flashlights, etc. often seem to "come off the screen" in 3D movies.
If an object which is cut off by the window is placed in front of it, the result is somewhat unnatural and is usually considered undesirable; this is often called a "window violation". This can best be understood by returning to the analogy of an actual physical window. An object in front of the window would not be cut off by the window frame but would, rather, continue to the right and/or left of it. This can't be duplicated in stereography techniques other than virtual reality, so the stereo window will normally be adjusted to avoid window violations. There are, however, circumstances where they could be considered permissible.
A third consideration is viewing comfort. If the window is adjusted too far back, the right and left images of distant parts of the scene may be more than 2.5" apart, requiring that the viewer's eyes diverge in order to fuse them. This results in image doubling and/or viewer discomfort. In such cases a compromise is necessary between viewing comfort and the avoidance of window violations.
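The 2.5-inch comfort limit translates directly into a pixel budget for the most distant points in a scene, given the display's pixel density:

```python
def max_far_point_separation_px(screen_ppi, interocular_in=2.5):
    """On-screen separation budget for points at infinity: if the left
    and right images of a distant point are more than about one
    interocular distance (~2.5 in) apart, the viewer's eyes must
    diverge to fuse them, causing doubling or discomfort."""
    return screen_ppi * interocular_in

# On a 96 PPI desktop monitor, the two images of a distant point
# should stay within about 240 px of each other.
```

The same scene therefore needs a different window adjustment on a phone screen than on a projection screen, since the pixel density and viewing distance differ.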
In stereo photography, window adjustment is accomplished by shifting/cropping the images; in other forms of stereoscopy, such as drawings and computer-generated images, the window is built into the design of the images as they are generated. It is by design that in CGI movies certain images are behind the screen whereas others are in front of it.
While stereoscopy has typically been used for amusement, including stereographic cards, 3D films, anaglyph prints, and posters and books of autostereograms, there are also other uses of this technology.
In the 19th century, it was realized that stereoscopic images provided an opportunity for people to experience places and things far away, and many tour sets were produced and books were published allowing people to learn about geography, science, history, and other subjects. Such uses continued until the mid-20th century, with the Keystone View Company producing cards into the 1960s.
The two cameras that make up each rover's Pancam are situated 1.5 m above the ground surface, and are separated by 30 cm, with 1 degree of toe-in. This allows the image pairs to be made into scientifically useful stereoscopic images, which can be viewed as stereograms, anaglyphs, or processed into 3D computer images.
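With a known baseline, a rectified stereo pair yields depth by simple triangulation, Z = f * B / d, where f is the focal length in pixels, B the baseline, and d the disparity. A sketch follows; the focal length below is an illustrative placeholder rather than an actual Pancam specification, and the 1-degree toe-in is ignored in this simplification.

```python
def depth_from_disparity(disparity_px, baseline_m=0.30, focal_px=3600.0):
    """Ideal pinhole-camera triangulation for a rectified stereo pair:
    depth Z = focal_px * baseline_m / disparity_px. baseline_m defaults
    to the 30 cm camera separation mentioned above; focal_px is a
    hypothetical value chosen for illustration only."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px
```

Because depth varies inversely with disparity, depth resolution degrades rapidly with distance, which is one reason a wider-than-human baseline is useful for planetary terrain.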
The ability to create realistic 3D images from a pair of cameras at roughly human-height gives researchers increased insight as to the nature of the landscapes being viewed. In environments without hazy atmospheres or familiar landmarks, humans rely on stereoscopic clues to judge distance. Single camera viewpoints are therefore more difficult to interpret. Multiple camera stereoscopic systems like the Pancam address this problem with unmanned space exploration.
Stereopair photographs provided a way to create three-dimensional (3D) visualisations of aerial photographs; since about 2000, 3D aerial views have been based mainly on digital stereo imaging technologies. Today, cartographers generate stereopairs using computer programs in order to visualise topography in three dimensions. Computerised stereo visualisation applies stereo matching programs. In biology and chemistry, complex molecular structures are often rendered in stereopairs. The same technique can also be applied to any mathematical (or scientific, or engineering) parameter that is a function of two variables, although in these cases it is more common for a three-dimensional effect to be created using a 'distorted' mesh or shading (as if from a distant light source).
Connecting Children With Nature — Learning About Trees
- Grades: PreK–K
Our playground is surrounded by an abundance of beautiful trees, which always seem to captivate my very curious kindergartners. Who would have guessed that a group of five- and six-year-olds would find trees more intriguing than slides and swings? Read on as I share the lessons I created to capitalize on my students' natural enthusiasm for trees.
1. Start with a discussion about trees. For this discussion, I use prompts like:
- What is a tree?
- What do you like best about trees?
- Why do people like to have trees in their yards and parks?
- What would our world be like without trees?
2. Then read the book The Giving Tree by Shel Silverstein. It is a great introduction to how trees help us.
3. Next, discuss the specific ways that trees help us. I choose three of these gifts and illustrate them with tree-related activities.
a. Trees give us food.
Students were surprised to find that many of the fruits and nuts we enjoy come from trees.
- Brainstorm a list of things people eat that come from trees.
- Sample foods that come from trees.
- Graph favorite edible tree products.
- Make maple syrup.
- After reading The Apple Pie Tree, allow students to sample apple pie.
b. Trees give us wood.
Many of the products we use on a daily basis are made from the wood we get from trees.
- Have students go on a scavenger hunt throughout the school identifying objects that are made of wood.
- Invite students to bring things from home made of wood or to cut photographs of wooden things out of magazines. During share time, have your students discuss the importance of these objects.
- Let students use wooden popsicle sticks to build houses and picture frames.
c. Trees are a home for animals.
Discuss how trees provide a home for many animals. I like to begin this discussion by reading Tree Homes. This book will help students learn about the different types of animals that make their homes in trees.
- Go on a nature walk to observe some of the animals that live in and visit trees.
- Have students use a T-chart to distinguish animals that live in trees from those who do not.
- Imitate your favorite tree animal.
4. Adopt a tree.
My students adopted a tree near our school as a special friend. We took a photograph of the tree and posted it in our classroom. Students learned about this type of tree, what kind of life goes on around it, and how it changes from season to season. We also discussed ways we can help our new friend stay healthy (e.g., watering it, protecting it from bicycles, lawn mowers, vandals, etc.). We plan to visit our tree periodically and watch for changes. Students will record their observations in their tree journals.
We also used cubes to measure the thickness of our tree trunk. The children placed the cubes around the trunk and counted how many it took to complete the circle.
5. Other tree-related assignments include:
- Count how many trees you see on your way to school.
- Take a picture with your favorite tree.
- Draw a picture illustrating trees of the four seasons.
- Explore the various shapes of trees.
- Plant a tree outside your school.
- Label the parts of a tree.
6. For more on trees, visit:
- Trees Are Terrific: A child-friendly site from the University of Illinois Urban Programs Resource Network.
- Mrs. Jones' Room: A page full of tree-related lessons, songs, and links from another kindergarten teacher.
- First-School: Tree-related crafts and activities for kindergarten and preschool classes.
- FOSSweb: The Trees Module from FOSS, of the National Science Foundation and the University of California at Berkeley.
- Real Trees 4 Kids: A wealth of information and resources for children grades K–12.
7. Take a look at more of my favorite books to use during a study on trees:
Kindergartners respond well to nature-inspired curriculum. Trees provide opportunities for children to learn about math, science, and more while exploring one of nature’s wonders. I hope the activities I’ve provided will inspire you and your students!
Nineteenth century England had flourishing cities and emerging industries. Machines made it possible for those with money to invest to earn great profits, especially with an abundance of poor people who were willing to work long hours at hard or repetitive jobs for little pay. By contrast, the rural system included landlords, farmers, and common laborers who owned no land. In this rural system that had existed for centuries, those without land had no hope of bettering their lives: once in poverty, always in poverty. These hopeless poor moved to the city on the dream of making their own fortunes; it was usual for working class families to send young children off to the factories for twelve- to fourteen-hour shifts or longer. Child labor laws would not be enacted until the 1860s.
Meanwhile, children and women were ideal workers because they did not form labor unions, and were easily intimidated, beaten, or fired if they protested against an employer’s mistreatment. School attendance was a luxury reserved for the children of parents who could afford to pay private tutors in addition to the family’s loss of income from a child’s labor. The first publicly funded elementary schools were not established until the 1870s, when the demand for skilled laborers increased. The idea of high schools did not receive England’s public support until the turn of the century, after Dickens’ death. Meanwhile, the laborsaving machines that were to make a few people’s fortunes earned many others little more than bad health or early graves.
The new money caused new needs. Prior to the nineteenth century, banking had been left to businesses and was fairly informal, by reputation. Since there had been little money to exchange, except by a well-known few, there had been little need for that service. The Bank of England had been established in 1694, but it dealt mainly with government projects. Industrialization changed that, and banking houses became more numerous as a middle class emerged. New businesses needed to borrow money, and the rapid production of goods for a growing economy promised new wealth for both borrowers and lenders. That is how Pip found employment for his friend, Herbert Pocket, who later hired Pip.
Obviously, not all who turned to the city for fortune found it. There were workhouses and debtors' prisons for those who failed to achieve their dreams of advancement. Those shut out from that promise lived in misery and often turned to crime. Since money was made in the city, the rise in criminal activity appeared there. As the number of jobless residents increased, so did the number of smugglers, pickpockets, thieves, and swindlers. Those with enough money to escape the soot and dangers of London began to build up the towns, as we see in Wemmick's choice of address. Only the outlying country folk stayed much the same as they had for centuries, and we see Pip's travel is either by stagecoach or on foot. That was normal until the 1860s, when the railroad finally connected the country to the city and the past to the new age of the machine.
Marie Rose Napierkowski, Novels for Students: Presenting Analysis, Context & Criticism on Commonly Studied Novels, Volume 4, Charles Dickens, Gale-Cengage Learning, 1998
Quantum technology has a lot of promise, but several research barriers need to be overcome before it can be widely used. A team of US researchers has advanced the field another step, by bringing multiple molecules into a single quantum state at the same time.
A Bose-Einstein condensate is a state of matter that only occurs at very low temperatures – close to absolute zero. At this temperature, multiple particles can clump together and behave as though they were a single atom – something that could be useful in quantum technology. But while scientists have been able to get single atoms into this state for decades, they hadn’t yet achieved it with molecules.
“Atoms are simple spherical objects, whereas molecules can vibrate, rotate, carry small magnets,” says Cheng Chin, a professor of physics at the University of Chicago, US. “Because molecules can do so many different things, it makes them more useful, and at the same time much harder to control.”
Chin’s team has now brought molecules of caesium (Cs2) into the Bose-Einstein state. “People have been trying to do this for decades, so we’re very excited,” he says.
The team used a low temperature of 10 nanokelvins to reach this point. A nanokelvin is a billionth of a kelvin, or a billionth of one degree Celsius, making this temperature just fractionally above absolute zero. They also packed the caesium molecules tightly to limit their movement.
“Typically, molecules want to move in all directions, and if you allow that, they are much less stable,” says Chin. “We confined the molecules so that they are on a 2D surface and can only move in two directions.”
These conditions made the molecules effectively identical: lined up in the same orientation, with the same vibrational frequency and in the same quantum state. The team was able to link up several thousand molecules in this condensate.
Chin says this achievement has implications for quantum engineering. “It’s the absolute ideal starting point. For example, if you want to build quantum systems to hold information, you need a clean slate to write on before you can format and store that information.”
Chin is the senior author on a paper describing the research, published in Nature.
“In the traditional way to think about chemistry, you think about a few atoms and molecules colliding and forming a new molecule,” he says. “But in the quantum regime, all molecules act together, in collective behaviour. This opens a whole new way to explore how molecules can all react together to become a new kind of molecule.”
|
<urn:uuid:4540b5bc-df35-4ba2-8f89-2f434412b79b>
|
CC-MAIN-2023-40
|
https://cosmosmagazine.com/science/molecules-brought-in-a-single-quantum-state/
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233511055.59/warc/CC-MAIN-20231003060619-20231003090619-00874.warc.gz
|
en
| 0.940243
| 577
| 4
| 4
|
Measles is a highly contagious disease caused by the measles virus. Infected people have the measles virus in the mucus of their nose and throat. When they sneeze or cough, moisture droplets spray into the air. The virus in these droplets can remain active on surfaces for up to two hours. The virus is spread by coming in contact with these infected droplets.
Following exposure to the measles virus, there is usually an incubation period lasting 10 to 12 days, during which there are no signs of the disease. During this time, the virus begins to multiply and infect the cells of the respiratory tract, eyes and lymph nodes—increasing the levels of the virus in the blood stream. The first stage of the disease begins with a runny nose, cough, and a slight fever. As the infection progresses, the person's eyes become red and sensitive to light.
The second stage of measles is marked by a high temperature, sometimes as high as 103°F to 105°F, and the characteristic red blotchy rash. The rash usually starts on the face and then spreads to the chest, back, arms and legs, including the palms of the hands and soles of the feet. After about five days, the rash fades in the same order as it appeared. Tiny white spots, called Koplik's spots, can also appear in the mouth. A person with measles can be contagious from four days before until four days after the rash appears.
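As a quick sanity check on the quoted fever range (my own conversion, not from the source), Fahrenheit converts to Celsius as follows:

```python
def fahrenheit_to_celsius(f):
    """Convert a temperature from Fahrenheit to Celsius."""
    return (f - 32) * 5 / 9

# The second-stage measles fever range quoted in the text:
low = fahrenheit_to_celsius(103)
high = fahrenheit_to_celsius(105)
print(round(low, 1), round(high, 1))  # 39.4 40.6
```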
An effective vaccine for measles, the "MMR," is usually given in combination with vaccines for mumps and the less severe German measles, or rubella. The vaccine contains weakened forms of the viruses, which stimulate the body's immune system to "recognize" them as foreign. The immune system can then more easily identify and kill any of these viruses it encounters in the future.
Asteroids are small pieces of rock that orbit the Sun, mostly between Mars and Jupiter. Asteroids move quickly across the sky, so they can be seen in SDSS images. (See the Asteroid Hunt project to learn more.) If an asteroid moves slowly, it shows up as a blue dot next to a yellow dot. Fast-moving asteroids show up as a red, green and blue dot. Very fast asteroids appear as a single colored streak. Examples of each type are shown below.
Asteroids that appear as blue-yellow dots trick the computer program that classifies objects, so their “object type” is listed as star.
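The dot-pattern rule above can be sketched as a tiny lookup. This is an illustration of the rule of thumb in the text, not the actual SDSS classification pipeline, and the speed categories are labels of my own:

```python
def asteroid_appearance(speed_category):
    """Map an asteroid's apparent speed to how it shows up in an
    SDSS image, per the rule of thumb described above."""
    appearances = {
        "slow": "blue dot next to a yellow dot",
        "fast": "separate red, green and blue dots",
        "very fast": "single colored streak",
    }
    return appearances[speed_category]

print(asteroid_appearance("slow"))  # blue dot next to a yellow dot
```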
Galaxies form in clusters of dozens or hundreds. The SDSS has seen many clusters, including the one shown at the right. Galaxy clusters can be so far away that individual galaxies almost look like stars!
When you see a cluster in the Navigation tool, click on one of the objects to see the object type. You might be surprised to find what you thought was a star cluster is actually a galaxy cluster!
Sometimes, when the SDSS telescope looks at a very bright object, the object’s light is reflected inside the telescope. These reflections can cause “ghosts.”
Ghosts are bands of light. They are usually a single color; either red, green or blue, depending on which filter the camera was looking through. A typical ghost is shown at the right.
Now you’re ready for the scavenger hunt! Click Next to get started.
Teaching Practice 5
To provide fluency speaking practice in discussions in the context of moral dilemmas
To provide review of second conditional in the context of moral dilemmas
Procedure (34-46 minutes)
Teacher starts the lesson by showing the picture and asking: "What do you see in the picture?" She elicits some answers, waiting for responses such as "He is trying to decide on something," "He is not sure about something," or "He is a hesitant or indecisive person." Teacher then asks: "What is he thinking about?", draws speech bubbles on the board and writes the students' answers in them. Finally, teacher asks: "Have you been in similar situations?", elicits one or two answers and tells the students to talk in pairs.
The teacher shows some pictures and elicits the problems they depict. For example, in the first picture a woman is trying to decide what to wear. The teacher gives situations like: "What would you wear if you had a job interview? What would you wear if you went out with your new boyfriend/girlfriend?" With another picture, showing two opposite directions, she asks students whether they have ever felt like that. She elicits some answers and creates further situations based on them.
Teacher creates the context for the word "relative". She says that apple, banana and orange are ........... in general, draws a spiderweb on the board, writes these words around it, and asks students to find the hypernym "fruit". She does the same with animals. Finally, she writes uncle, aunt, cousin and nephew around the spiderweb and expects students to find the word "relative". The second word is "inherit". The teacher tells the students: "My grandfather died last month. He had only one house and I was his only relative, so his house became mine. I ......... his things." The third word is "colleague": "I am a teacher and I work with other teachers in the same school. They are my ........."
The teacher divides the class into two groups, trying to balance the strong students between them. She shows the cut-outs and explains the game. Instructions: "There are two sets of cut-outs. One student picks a card and reads the question. The other students answer it. Short answers are not OK; you should explain and support your answers. The one who asked the question chooses the most interesting answer and gives the card to that student. Then another student picks a card and asks the next question." CCQs: Are we working in pairs or groups? (In groups.) How many groups are there? (Two.) Are we writing our answers? (No, just talking.) Does the same person ask the questions? (No, it takes turns.) Are short answers OK? (No.)
The teacher monitors the students while they discuss the questions and takes notes on correct and incorrect use of the target language. For the feedback session, the teacher elicits the most interesting answers from the groups and students share their opinions. Finally, the teacher writes the noted sentences on the board and asks students to talk in pairs about which are correct and which are incorrect. Then she asks students to come to the board and correct the sentences.
Stars are thought to form in huge filaments of molecular gas. Areas where one or more of these filaments converge, known as hubs, are where massive stars form.
These massive stars, located nearby, would have put the early solar system in danger of a powerful supernova. This risk is more than just hypothetical; a research team from the National Astronomical Observatory of Japan, led by astrophysicist Doris Arzoumanian, looked at isotopes found in ancient meteorites and found possible evidence of the turbulent death of a massive star.
So why did the solar system survive? The gas in the filament seems to have been able to protect it from the supernova and its onslaught of radioactive isotopes. "The host filament may protect the young solar system from stellar feedback, both during star formation and evolution (stellar outflow, wind and radiation) and at the end of their lives (supernovae)," Arzoumanian and her team said in a study recently published in The Astrophysical Journal Letters.
Signs of a supernova
The meteorites studied by the researchers contained small inclusions, or clumps, in the rock about as old as the solar system. These chunks contain isotopes derived from the decay of short-lived radionuclides (SLRs), which can be generated by supernovae. Although SLRs decay after a few hundred million years, which is nothing in cosmic terms, they still leave behind distinctive isotopes.
The team found particularly high levels of SLR isotopes in the meteorites they examined. From the age of the isotopes, they were able to deduce that the SLRs they once belonged to were present in the early solar system. Supernovae are one SLR source, which could mean our solar system has evaded a supernova, though they could form in other ways.
SLRs from the interstellar medium can already be floating around in the molecular cloud in which a star forms. Massive stars, which don't live that long (at least in cosmic terms) and die quickly via supernovae, may be another source. SLRs can also be produced by highly energetic solar or galactic cosmic rays. All of these sources could possibly explain the existence of SLRs in the early solar system.
While SLRs likely existed in the part of the filament where the Sun and Solar System formed, the meteorite samples contained too much of a particular aluminum isotope for the interstellar medium to be the Solar System’s only SLR source. Cosmic rays, which can convert stable isotopes into radioactive ones, had a better chance of explaining the number of isotopes in the meteorites. However, it would have taken too long for this process to produce the levels of SLRs found in the early solar system.
Such high SLR levels most likely came from very intense stellar winds, which would have occurred during the formation of massive stars, or from what was left after one of the massive stars went supernova.
So why didn’t the supernova disrupt the solar system? It seems that the destructive blow was softened by the molecular gases of the filament in which the sun formed. If the isotopes from those long-dead SLRs really came from a supernova or stellar winds, then the amount passing through the filament gas was enough to match what was suggested by the meteorite findings, but not enough to decimate the solar system. The size of this hypothetical supernova or newborn star is still unknown.
“This scenario may have several important implications for our understanding of the formation, evolution and properties of stellar systems,” the researchers also said in the study.
While there are still some unanswered questions, the scientists suspect that if the clouds of the filament in which the sun and solar system formed were large enough, our star and planets would have easily survived a supernova impact.
The Astrophysical Journal Letters, 2023. DOI: 10.3847/2041-8213/acc849 (About DOIs).
The first manned attempt came about two months later, on November 21st, 1783, with a balloon made by two French brothers, Joseph and Etienne Montgolfier. The balloon was launched from the centre of Paris and flew for about 20 minutes. It proved to be the birth of hot air ballooning.
Just two years later, in 1785, a French balloonist, Jean Pierre Blanchard, and his American co-pilot, John Jefferies, became the first to fly across the English Channel. In these early days of ballooning, the English Channel was considered the first step to long-distance ballooning, making this a major benchmark in ballooning history.
Unfortunately, this same year Jean-François Pilatre de Rozier (the world's first balloonist) was killed in his attempt at crossing the channel. His balloon exploded half an hour after takeoff due to the experimental design of using a hydrogen balloon and hot air balloon tied together.
Now a large jump in time of over 100 years: In August of 1932 Swiss scientist Auguste Piccard was the first to achieve a manned flight to the Stratosphere. He reached a height of 52,498 feet, setting the new altitude record. Over the next couple of years, altitude records continued to be set and broken every couple of months - the race was on to see who would reach the highest point.
In 1935 a new altitude record was set and it remained at this level for the next 20 years. The balloon Explorer 2, a gas helium model reached an altitude of 72,395 feet (13.7 miles)! For the first time in history, it was proven that humans could survive in a pressurized chamber at extremely high altitudes. This flight set a milestone for aviation and helped pave the way for future space travel.
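The feet-to-miles figure quoted for Explorer 2 checks out with a one-line conversion (my own arithmetic, not from the source):

```python
FEET_PER_MILE = 5280

altitude_ft = 72_395  # Explorer 2's record altitude
altitude_mi = altitude_ft / FEET_PER_MILE

print(round(altitude_mi, 1))  # 13.7
```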
The altitude record was broken again in 1960 when Captain Joe Kittinger parachute jumped from a balloon at a height of 102,000 feet. The balloon broke the altitude record and Captain Kittinger the high-altitude parachute jump record, reaching speeds of around 614 mph in free fall, approaching the speed of sound with his body alone.
In 1987, Richard Branson and Per Lindstrand were the first to cross the Atlantic in a hot air balloon, rather than a helium/gas filled balloon. They flew a distance of 2,900 miles in a record-breaking time of 33 hours. At the time, the envelope they used was the largest ever flown, at 2.3 million cubic feet of capacity. A year later, Per Lindstrand set yet another record, this time for the highest solo flight ever recorded in a hot air balloon: 65,000 feet!
The great team of Richard Branson and Per Lindstrand paired up again in 1991 and became the first to cross the Pacific in a hot air balloon. They travelled 6,700 miles in 47 hours, from Japan to Canada breaking the world distance record, travelling at speeds of up to 245 mph. 4 years later, Steve Fossett became the first to complete the Transpacific balloon route by himself, travelling from Korea and landing in Canada 4 days later.
Finally, in 1999 the first around the world flight was completed by Bertrand Piccard and Brian Jones. Leaving from Switzerland and landing in Africa, they smashed all previous distance records, flying for 19 days, 21 hours and 55 minutes.
Pinworms (also called threadworms) are an intestinal infection caused by tiny parasitic worms called Enterobius vermicularis. It's a common infection that affects millions of people each year, particularly toddlers and school-age kids. Infection often occurs in more than one family member.
Pinworms are thin and white, measuring about six to 13 millimetres in length.
The most common sign of a pinworm infection is itching around the anal area. The itching is usually worse at night because the worms move to the area around the anus to lay their eggs (as many as 10,000 to 15,000 of them). In girls, pinworm infection can spread to the vagina and cause a vaginal discharge. If the itching breaks the skin, it could also lead to a bacterial skin infection. Pinworms can also cause bedwetting at night.
Some infected people have no symptoms at all.
Pinworms get into the body when people ingest or breathe in the microscopic pinworm eggs. These eggs are light, float in the air, and can be found on contaminated hands and surfaces.
The eggs pass into the digestive system and hatch in the small intestine. From the small intestine, pinworm larvae go to the large intestine, where they live as parasites (with their heads attached to the inside wall of the bowel).
About one to two months later, adult female pinworms leave the large intestine through the anus (the opening where bowel movements come out). They lay eggs on the skin right around the anus, which triggers itching in that area, usually at night.
When someone scratches the itchy area, microscopic pinworm eggs transfer to their fingers. Contaminated fingers can then carry pinworm eggs to the mouth, where they go back into the body, or stay on various surfaces, where eggs can survive for two to three weeks.
Fortunately, most eggs dry out within 72 hours. In the absence of host autoinfection, infestation usually lasts only four to six weeks.
Itching during the night in a child’s perianal area strongly suggests pinworm infection. Diagnosis is made by identifying the worm or its eggs.
If your child has a pinworm infection, you can see worms on the skin near the anal region or on underwear, pyjamas or sheets, about two or three hours after your child has fallen asleep. You also might see the worms in the toilet after your child goes to the bathroom. They look like tiny pieces of white thread and are really small — about as long as a staple. You might also see them on your child's underwear in the morning.
Pinworm eggs can be collected and examined using the “tape test” as soon as the person wakes up. This “test” is done by firmly pressing the adhesive side of clear, transparent cellophane tape to the skin around the anus. The eggs stick to the tape and the tape can be placed on a slide and looked at under a microscope. This test should be done as soon as the person wakes up in the morning before they wash, bathe, go to the toilet, or get dressed. The “tape test” should be done on three consecutive mornings to increase the chance of finding pinworm eggs.
Oral medication such as mebendazole or albendazole should be given to everybody in the household. There is a risk of transmission between family members; so the chances of being infected if somebody has been diagnosed are high, even if no symptoms are present.
Both medications block the worm's ability to absorb glucose, effectively killing it within a few days. Treatment involves two doses of medication, best administered on an empty stomach, with the second dose being given two weeks after the first dose. All household contacts and caretakers of the infected person should be treated at the same time.
Hygiene measures should be continued for another two weeks following the initial treatment.
Although medicine takes care of the worm infection, the itching may continue for about a week. Apply a zinc ointment or other medicine to help stop the itching.
Reinfection can occur easily so strict observance of good hand hygiene is essential (e.g. proper handwashing, maintaining clean short fingernails, avoiding nail biting, avoiding scratching the perianal area).
At a young age, kids are first taught to write letters in print only. When kids reach the age of eight to ten, they are taught how to write in cursive. They may find this quite difficult and boring at first, but one fun way to teach them is to use worksheets.
These writing worksheets have traceable patterns of the different strokes of writing letters. By tracing these patterns, kids slowly learn how a letter is structured.
The learning should be grounded in the real world. It is easiest to learn and remember when whatever is learned is immediately applied to a practical, real-life situation. Use every opportunity to teach and regularly reinforce basic concepts, in real life and in real time. For instance, during snack time, if a child is eating a biscuit, you can say 'B' for 'biscuit'. While waiting for a school van, you can say 'V' for 'van', and so on.
The learning should also be fun. It should not feel like work, but play; otherwise, children will quickly get bored. Hence it is a good idea to use plenty of interesting activities, games, coloring sheets, illustrated kindergarten worksheets, etc. Be well prepared with these teaching aids, which can be made very easily.
A study that followed the evolution of Pluto's atmosphere for fourteen years shows its seasonal nature, and predicts that the atmosphere will now start to condense as frost.
This study was published in the journal Astronomy and Astrophysics and had the participation of Pedro Machado, of Instituto de Astrofísica e Ciências do Espaço (IA) and Faculdade de Ciências da Universidade de Lisboa (FCUL).
The authors analysed data from this dwarf planet's atmosphere in the altitude range of 5 to 380 kilometres, collected between 2002 and 2016. This period overlapped with summer in Pluto's northern hemisphere¹, where the reservoirs of nitrogen ice are mostly concentrated; these sublimate under exposure and proximity to the Sun.
Data indicate that the atmospheric pressure at the surface rose about two-and-a-half-fold between 1988 and its maximum in 2015, yet it remains roughly one hundred thousand times thinner than the average atmospheric pressure on Earth at sea level.
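The "one hundred thousand times thinner" comparison can be sanity-checked against the standard sea-level pressure on Earth (using 101,325 Pa as "average" is my assumption, not a figure from the article):

```python
EARTH_SEA_LEVEL_PA = 101_325  # standard atmosphere, in pascals

# Pressure one hundred thousand times lower than Earth's sea level:
pluto_surface_pa = EARTH_SEA_LEVEL_PA / 100_000

print(round(pluto_surface_pa, 2))  # 1.01 -- about one pascal
```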
“More and more we look at Pluto’s seasonal atmosphere as a kind of cometary activity,” says Pedro Machado. “Since it is a body of small mass, nitrogen molecules reach escape velocity very easily, and Pluto loses atmosphere, like the comets.”
1. Due to its strongly tilted rotation axis, Pluto spins almost lying on its orbital plane. This causes it to expose its northern latitudes permanently to the Sun for a fraction of the more than two centuries it takes to complete a full turn around the Sun. This period overlaps with the crossing of the point in its orbit closest to the Sun (perihelion), which happened in 1989. Pluto has a very eccentric orbit, its distance from the Sun varying between about 30 and 49 times the average distance of the Earth from the Sun.
A teacher's guide to social and emotional learning
When asked about a teacher’s job description, most people can tell you they’re responsible for lesson planning, classroom instruction and grading assignments. What’s not as commonly known is the “hidden curriculum” of unwritten and often unintended lessons in social and emotional learning (SEL).
SEL is a critical part of a young child’s development, yet it is an often overlooked quality in educators. When students lack social-emotional abilities, they struggle more than their well-developed peers when faced with change, challenge and conflict.
As a teacher, you already play a critical role in your students’ development of these skills. But as you know, there is always more to learn and improve upon. Find out how you can increase your impact by proactively incorporating SEL skills into your lessons.
We created this guide to social and emotional learning based on our recent webinar presented by Tenley Hardin, MA, MFT candidate and certified professional life coach (iPEC). Learn more about how a focus on SEL can have positive effects on academic outcomes and classroom management.
What is social emotional learning?
According to the Collaborative for Academic, Social, and Emotional Learning (CASEL), SEL is the process through which all young people and adults acquire and apply the knowledge, skills and attitudes to:
- Develop healthy identities
- Manage emotions
- Achieve personal and collective goals
- Feel and show empathy for others
- Establish and maintain supportive relationships
- Make responsible and caring decisions
To teach SEL, educators must engage in self-reflection and become aware of their own biases, triggers, positive and maladaptive patterns. This requires vulnerability and the willingness to recognize areas of improvement without becoming discouraged about being imperfect.
5 Core competencies of social emotional learning
To break it down even further, let’s unpack the five SEL competencies and how they can impact your professional development, along with student success.
Self-awareness is the ability to identify and understand your emotions, thoughts and values and how they influence your responses and behaviors. This includes capacities like:
- Letting yourself feel emotions instead of dismissing or suppressing them
- Checking in with yourself and identifying emotions
- Examining your personal prejudices and biases
- Maintaining a growth mindset
Why is self-awareness important for teachers?
Teachers who are self-aware are better able to recognize strengths, overcome fears and interrupt cycles of negative self-talk. Becoming more self-aware takes time, but with practice, you’ll be able to shift to a more empowered and positive state of mind.
Reflect on the following questions to deepen your understanding of yourself:
- What thoughts trigger an emotional reaction in me?
- How are my emotions influencing my responses and behavioral patterns?
- What kind of obstacles have I already overcome in my life?
Self-management is the ability to set goals, deal with stress and control impulses, reactions and behaviors. Mastering these skills can be challenging, especially for children who have experienced trauma. Young brains are still developing, and strong feelings can be overwhelming.
Self-management skills include things like:
- Identifying and using stress management strategies
- Exhibiting self-discipline and self-motivation
- Setting personal and collective goals
- Using planning and organizational skills
Why is self-management important for teachers?
Teaching is a rewarding, important and challenging job. You constantly use self-management skills to prioritize responsibilities and cope with stress. But even the most experienced teachers have moments of anger, frustration and helplessness.
The more adults model how to recover from a difficult or stressful situation, the more a child will follow and use the same strategies. When young people witness trusted adults acknowledging their own mistakes and limitations, it gives them examples of how to do the same. It destigmatizes common fears like failure, making errors or not having answers to all the questions.
One helpful exercise targeted at developing your own self-management skills is to identify stressors or emotional triggers and your responses to them. Then take the time to reframe those thoughts into something more positive. For example, you may start by thinking, “That lesson didn’t go as planned, I feel like a bad teacher.” Instead, flip your thinking to, “That lesson took an unexpected turn, how can I improve it for next time?”
3. Social awareness
Social awareness is a complex skill. It is the ability to appreciate different perspectives and empathize with others, including those from diverse backgrounds and cultures. A socially aware person feels compassion for others and understands social norms for behavior in different settings. Social awareness competencies include things like:
- Recognizing strengths in others
- Showing concern for the feelings of others
- Understanding and expressing gratitude
- Identifying diverse social norms, including unjust ones
Why is social awareness important for teachers?
As an educator, you’re responsible for creating a safe and welcoming environment that honors all students. Without high levels of social awareness, teachers can unintentionally replicate or exacerbate harmful practices and conditions. If students or their families don’t feel seen, respected or represented in the classroom, they are unlikely to engage with the school and the child will suffer as a result.

Start challenging yourself to increase your social awareness by contemplating the following questions:
- Who am I in relation to others?
- How do others perceive me?
- How do aspects of my identity (race, gender, class, body size, age, etc.) affect my perceptions of others and vice versa?
4. Relationship skills
Humans are social creatures by design. Establishing mutually supportive relationships is an incredibly important component of a healthy and happy life. People who successfully sustain relationships with diverse individuals and groups are skilled at things such as:
- Communicating effectively
- Demonstrating cultural competency
- Resolving conflicts
- Showing leadership in groups
- Seeking or offering support
Why are relationship skills important for teachers?
Successful teachers know how to build bonds with their students and their families, co-workers and the school community at large. Managing multiple diverse relationships is often complicated but having strong listening and conflict resolution skills makes it much easier.
One of the most important social-emotional competencies is repair. In this context, repair means recognizing you may have harmed or alienated someone and reaching out to address it and work through it together.
All teachers have reacted to stress by being harsh or yelling. It’s not ideal, but you now have the opportunity to repair. In these situations, try taking a deep breath and saying, “I’m sorry, I shouldn’t have yelled. I am frustrated right now, but I will make sure I use a calmer voice next time.”
5. Responsible decision making
Responsible decision making is the ability to make caring and constructive choices about personal behavior and social interactions. Some well versed in making good decisions will consider ethical standards and safety concerns and evaluate consequences before reaching a conclusion.
Related skills include:
- Identifying problems and proposing solutions
- Acknowledging and validating another person’s thoughts, feelings and ideas
- Demonstrating curiosity and open-mindedness
- Learning how to make a judgment after analyzing information, data and facts
Why is responsible decision making important for teachers?
You’re faced with many important decisions each day as a teacher. Your actions impact students, their families, fellow teachers and the entire school community, which means your choices carry a great deal of responsibility. Being able to think through the consequences of different potential actions is critical, as is knowing your limitations and when it’s necessary to ask for help.
Children are also faced with important decisions that have consequences for the rest of their lives. Demonstrating this process and communicating its importance will help your students understand the impact of their choices and how they affect those around them.
You can instill this by using the responsible decision-making model, which outlines the following five steps:
- Identify the problem
- Analyze the situation
- Brainstorm solutions and solve the problem
- Consider ethical responsibility
- Evaluate and reflect
Set your students up for success
You’re already an important role model for your students. After learning more about social and emotional learning and reflecting on the prompts outlined above, you may be better equipped to foster these principles in your classroom. With your guidance and example, your students will learn how to become more resilient and deal with difficult emotions and situations.
Looking for more ways to help your students build the skills and habits that will help them succeed in the future? Check out the many Professional Development Courses offered at UMass Global.
Why should I bother learning this?
Tell them that they need to be able to answer questions using numbers that make sense in context. Ask students to compare their responses to “How do I get to ______ from here?” and “How far is it to ______?” Fill in the blank with some well-known local place that requires just a right or left turn out of the parking lot and a short (straight) drive or walk. Extend the discussion into a clear differentiation between direction and distance.
What's so important about closed and open dots in an absolute value graph?
The issue here is not really absolute value. It's the distinction between the inequality symbols <, > and ≤, ≥. Use time as a way to model comparisons that include a number and those that exclude it.
Today is Tuesday. You have less than a week until your exam. Could that exam be next Tuesday? E < 7
It will take at least an hour to cook tonight's dinner. Could it possibly take exactly an hour? D ≥ 60 min.
Help students to see that the symbols that have two parts (including half of an equals symbol) are the ones that include the compared number and so require a closed dot.
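The rule can even be checked mechanically. The helper below (an illustrative sketch, not part of the original lesson) tests whether the boundary value satisfies its own inequality, which is exactly the closed-versus-open-dot question:

```python
import operator

# Map each inequality symbol to its comparison function.
OPS = {"<": operator.lt, ">": operator.gt,
       "<=": operator.le, ">=": operator.ge}

def dot_style(symbol: str) -> str:
    """Closed dot if the boundary value itself satisfies the inequality."""
    boundary_included = OPS[symbol](0, 0)  # compare the boundary with itself
    return "closed" if boundary_included else "open"

print(dot_style("<"))    # open: x < 7 excludes 7 itself
print(dot_style(">="))   # closed: d >= 60 includes exactly 60
```

The two-part symbols (≤, ≥) are the ones for which the comparison of a number with itself comes out true, so they get the closed dot.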
Students learn about the different roles and responsibilities in a court by participating in a mock trial.
Through several activities, students learn about the roles and responsibilities of the U.S. president and their own duties as citizens of a democracy.
This scripted mock trial includes ideas for pre- and post-mock-trial activities.
Students will better understand the concept of the Electoral College by participating in a mock Electoral College vote.
Students learn about the three functions of government in this interactive role play.
This short scripted mock trial for grades 4-6 involves SpongeBob suing Abercrombie and Fish for pants that don’t fit. Scripted parts allow the trial to move quickly to jury deliberations during which the student jurors actually decide the verdict of the case.
In this lesson, students are asked which of two chocolate bars – one with nuts, one without – they prefer. A single representative is taken from each preference group. These representatives are given the chocolate bar that they prefer less, motivating a contractual trade. One student unknowingly has an empty wrapper, eliciting debate after the trade is completed. The class concludes by discussing possible equitable solutions.
Students reflect on when and why rules are needed and the importance of rules in the classroom or in a community setting.
This mock trial exposes students to the mechanics of a jury trial, and stresses the importance of functioning as a juror.
This lesson offers students the opportunity to play the role of voters with special interests. Students draw up initiatives for new classroom or school rules. Working in groups of four or five, students share their ideas and rationale for new rules.
In this lesson, students will gain an understanding of the separation of powers using role playing and discussion. Students will identify which parts of the Constitution provide for the branches of our government, and will categorize public officials into one of these three branches.
Students learn why laws need to be interpreted by discussing laws/constitutional provisions. They present their findings to the class.
Through these activities, students learn about the roles and responsibilities of the U.S. president and their own roles as citizens of a democracy.
The purpose of this lesson is to help students understand the original purpose and powers of the Supreme Court according to the Constitution. Students learn the Supreme Court’s role in preserving the U.S. Constitution and the balance of power it creates.
This lesson helps students to identify the requirements of a position of authority and the qualifications a person should possess to fill that position. Students learn a set of intellectual tools designed to help them both analyze the duties of the position and to decide if an individual is qualified to serve in that particular position. During the lesson students practice using the intellectual tools.
The lesson includes a read aloud book to teach students about the Michigan Court System.
The Preamble to the U.S. Constitution sets out the purposes or functions of American government as envisioned by the framers. Using the Preamble as a guide, students will identify the purposes of their own classroom and create a class “constitution.”
Students learn about the Bill of Rights and the importance of rights.
American colonists had some strong ideas about what they wanted in a government. These ideas surface in colonial documents, and eventually became a part of the founding documents like the Declaration of Independence and Constitution. But where did they come from? This lesson looks at the Magna Carta, Mayflower Compact, English Bill of Rights, Cato’s Letters and Common Sense.
Students will read about the election process and correctly put the steps in proper sequence. Students will participate in a debate on an issue that relates to their day-to-day school experience.
Curriculum differentiation can be defined very simply: giving every learner equal opportunities in the learning environment.
Each individual should have the chance to develop and expand their knowledge to the best of their ability, and be given the chance to make the best possible use of their talents and capabilities.
With this ethos in mind the general curriculum will be expanded and streamlined to make extra provision for the pupils that are going to need extra help because they have special learning needs or certain disabilities.
Curricular differentiation will be applied by using lesson planning, the equal opportunity program, health and safety regulations, and child protection law.
This means that every young person or child has a right to be taught in a safe, comfortable, friendly and mentally stimulating environment.
Learning mentors, teaching assistants and special needs coordinators all play a vital part in maintaining an equilibrium in the learning environment. They inform planning to adjust the level of care or support given, often forming the front line against inequality, prejudice and discrimination.
Formal and informal observation, work and behavior assessment will be used to establish the learning levels of a child and their abilities in relation to their age and individual needs.
Special learning needs can vary and may include children who have learning difficulties or children who have simply moved to a new school from a different area or even a different country.
Each case is assessed on its own merits, and strategies are put into place depending on the level of support needed. This may include extra support within the classroom, or through a special learning program that can be delivered within mainstream school or in a separate learning unit. This will consist of support strategies, learning incentives, use of resources and different methods of effective communication.
For example, special needs may include children who are blind or deaf or who may have speech difficulties. Their learning programs will be tailored to take these disabilities into consideration. A deaf child may be given a 1:1 support T.A. who can ‘sign’ the lessons to him, or extra resources may be used – such as a hearing loop. A child with speech difficulties may simply need a little extra ‘thinking’ and ‘talking’ time and this is something that can be accommodated within lesson planning and social development activities.
By its very nature, curriculum differentiation is a flexible and evolving system. If it can be implemented in a timely fashion within schools, vulnerable pupils will benefit greatly and their more able peers will learn valuable lessons in positive social interaction and consideration towards others.
A day on Neptune lasts precisely 15 hours, 57 minutes and 59 seconds, according to the first accurate measurement of its rotational period made by University of Arizona planetary scientist Erich Karkoschka.
His result is one of the largest improvements in determining the rotational period of a gas planet in the almost 350 years since Italian astronomer Giovanni Cassini made the first observations of Jupiter's Red Spot.
"The rotational period of a planet is one of its fundamental properties," said Karkoschka, a senior staff scientist at the UA's Lunar and Planetary Laboratory. "Neptune has two features observable with the Hubble Space Telescope that seem to track the interior rotation of the planet. Nothing similar has been seen before on any of the four giant planets."
The discovery is published in Icarus, the official scientific publication of the Division for Planetary Sciences of the American Astronomical Society.
Unlike the rocky planets – Mercury, Venus, Earth and Mars – which behave like solid balls spinning in a rather straightforward manner, the giant gas planets – Jupiter, Saturn, Uranus and Neptune – rotate more like giant blobs of liquid. Since they are believed to consist of mainly ice and gas around a relatively small solid core, their rotation involves a lot of sloshing, swirling and roiling, which has made it difficult for astronomers to get an accurate grip on exactly how fast they spin around.
"If you looked at Earth from space, you'd see mountains and other features on the ground rotating with great regularity, but if you looked at the clouds, they wouldn't because the winds change all the time," Karkoschka explained. "If you look at the giant planets, you don't see a surface, just a thick cloudy atmosphere."
"On Neptune, all you see is moving clouds and features in the planet's atmosphere. Some move faster, some move slower, some accelerate, but you really don't know what the rotational period is, if there even is some solid inner core that is rotating."
In the 1950s, when astronomers built the first radio telescopes, they discovered that Jupiter sends out pulsating radio beams, like a lighthouse in space. Those signals originate from a magnetic field generated by the rotation of the planet's inner core.
No clues about the rotation of the other gas giants, however, were available because any radio signals they may emit are being swept out into space by the solar wind and never reach Earth.
"The only way to measure radio waves is to send spacecraft to those planets," Karkoschka said. "When Voyager 1 and 2 flew past Saturn, they found radio signals and clocked them at exactly 10.66 hours, and they found radio signals for Uranus and Neptune as well. So based on those radio signals, we thought we knew the rotation periods of those planets."
But when the Cassini probe arrived at Saturn 15 years later, its sensors detected that Saturn's radio period had changed by about 1 percent. Karkoschka explained that because of its large mass, it was impossible for Saturn's rotation to change that much over such a short time.
"Because the gas planets are so big, they have enough angular momentum to keep them spinning at pretty much the same rate for billions of years," he said. "So something strange was going on."
Even more puzzling was Cassini's later discovery that Saturn's northern and southern hemispheres appear to be rotating at different speeds.
"That's when we realized the magnetic field is not like clockwork but slipping," Karkoschka said. "The interior is rotating and drags the magnetic field along, but because of the solar wind or other, unknown influences, the magnetic field cannot keep up with respect to the planet's core and lags behind."
Instead of spacecraft costing billions of dollars, Karkoschka took advantage of what one might call the scraps of space science: publicly available images of Neptune from the Hubble Space Telescope archive. With unwavering determination and unmatched patience, he then pored over hundreds of images, recording every detail and tracking distinctive features over long periods of time.
Other scientists before him had observed Neptune and analyzed images, but nobody had sleuthed through 500 of them.
"When I looked at the images, I found Neptune's rotation to be faster than what Voyager observed," Karkoschka said. "I think the accuracy of my data is about 1,000 times better than what we had based on the Voyager measurements – a huge improvement in determining the exact rotational period of Neptune, which hasn't happened for any of the giant planets for the last three centuries."
Two features in Neptune's atmosphere, Karkoschka discovered, stand out in that they rotate about five times more steadily than even Saturn's hexagon, the most regularly rotating feature known on any of the gas giants.
Named the South Polar Feature and the South Polar Wave, the features are likely vortices swirling in the atmosphere, similar to Jupiter's famous Red Spot, which can last for a long time due to negligible friction. Karkoschka was able to track them over the course of more than 20 years.
An observer watching the massive planet turn from a fixed spot in space would see both features appear exactly every 15.9663 hours, with less than a few seconds of variation.
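As a quick arithmetic check, the 15.9663-hour period quoted here is the same figure given at the top of the article as 15 hours, 57 minutes and 59 seconds. A minimal conversion sketch:

```python
hours = 15.9663

whole_hours = int(hours)                         # 15
minutes = (hours - whole_hours) * 60             # 57.978
whole_minutes = int(minutes)                     # 57
seconds = round((minutes - whole_minutes) * 60)  # 58.68 rounds to 59

print(f"{whole_hours} h {whole_minutes} min {seconds} s")  # 15 h 57 min 59 s
```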
"The regularity suggests those features are connected to Neptune's interior in some way," Karkoschka said. "How they are connected is up to speculation."
One possible scenario involves convection driven by warmer and cooler areas within the planet's thick atmosphere, analogous to hot spots within the Earth's mantle, giant circular flows of molten material that stay in the same location over millions of years.
"I thought the extraordinary regularity of Neptune's rotation indicated by the two features was something really special," Karkoschka said.
"So I dug up the images of Neptune that Voyager took in 1989, which have better resolution than the Hubble images, to see whether I could find anything else in the vicinity of those two features. I discovered six more features that rotate with the same speed, but they were too faint to be visible with the Hubble Space Telescope, and visible to Voyager only for a few months, so we wouldn't know if the rotational period was accurate to the six digits. But they were really connected. So now we have eight features that are locked together on one planet, and that is really exciting."
In addition to getting a better grip on Neptune's rotational period, the study could lead to a better understanding of the giant gas planets in general.
"We know Neptune's total mass but we don't know how it is distributed," Karkoschka explained. "If the planet rotates faster than we thought, it means the mass has to be closer to the center than we thought. These results might change the models of the planets' interior and could have many other implications."
LINK:Neptune’s Rotational Period Suggested by the Extraordinary Stability of Two Features, Icarus, article in press (accepted manuscript), doi:10.1016/j.icarus.2011.05.013
Daniel Stolte | University of Arizona
How games can engage students and improve learning
Understanding how games create a sense of flow and engagement can help teachers make better choices about their instructional use of games
Teachers turn to games because they recognize the emotional energy that is created when students play them, and they strive to take advantage of the level of excitement and commitment to succeed that is difficult to achieve through other instructional strategies.
In order to be able to make the best use of educational games that achieve this level of engagement it is important to understand how this commitment to a game is fostered.
Games that are highly engaging create a sense of “flow” for the players. Flow is the experience of being totally involved in an activity and usually involves high levels of both concentration and enjoyment. Game developers strive to create a sense of flow during game play because when a player achieves complete focus, total immersion, and limited awareness of time, it also creates a strong desire to repeat or extend the experience.
Developers identify this as a compulsion to play, the drive to play a game over and over. This feeling is exactly what a teacher wants to establish during instruction: to create an emotional connection with the content and a desire to repeat the experience.
A number of game features have been identified as helping create a sense of flow, including ease of use, simplicity of play, clear goals, feedback, interactivity, competition, control over actions, and a sense of community. These features do not have to be part of the educational content of the game and can involve actions that are separate from the content that is its focus.
These features are able to generate a connection to the content through the overall commitment to continuing and succeeding in the game that is established through the sense of flow felt by the player. Arcade-style games in which speed and competition are critical features can be used to engage students with content that is as simple as math facts or as complex as scientific argumentation.
Understanding how games create a sense of flow and engagement can help teachers make better choices about their instructional use of games to introduce or reinforce learning academic content.
Marilyn Ault is an Associate Research Scientist at the University of Kansas Center for Research on Learning. She and her colleagues have conducted research on the use of targeted games in the learning of complex skills such as Reason Racer. This game uses a rally-race format to engage middle school students in the skills and knowledge related to scientific argumentation.
The ability to see in color is not specific to humans, but many animals can only see in black and white. Color vision is possible because of the presence of cone photoreceptors in the eye; the different types of cone cells respond to different wavelengths of light, resulting in the perception of different colors. Cone cells are not active in low-light conditions, unlike the more sensitive rod photoreceptors.
TL;DR (Too Long; Didn't Read)
Some of the animals that only see in black, white and shades of gray include bats, golden hamsters, flat-haired mice, raccoons, seals, sea lions, walruses, some fish, whales and dolphins, to name a few.
Monochromats, Dichromats and Trichromats
Humans, along with several other primates, are trichromats when it comes to cone receptors – they have three different types. It was once thought that most mammals only saw in black and white, but this is not the case. Dogs and cats, for example, are dichromatic with limited color vision. Animals that are monochromatic, with only one type of cone, can typically only see in shades of black, white and gray.
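The terminology maps directly onto the number of cone types an animal has. A toy lookup, with example counts drawn from this article (the species assignments are as stated here, not an exhaustive reference):

```python
VISION_TYPES = {1: "monochromat", 2: "dichromat", 3: "trichromat"}

# Example cone-type counts mentioned in the article.
cone_types = {"human": 3, "dog": 2, "night monkey": 1}

for animal, n in cone_types.items():
    print(f"{animal}: {VISION_TYPES[n]}")
```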
Diurnal and Nocturnal Animals
The amount and ratio of rod to cone cells varies among animal species. In terrestrial animals, these factors are largely affected by whether the animal is diurnal or nocturnal. Diurnal species, such as humans, usually have a higher density of cone cells than nocturnal species, which have a greater number of rod cells to help them distinguish shapes and movement in low light. Monochromatic nocturnal mammals include various bats, rodents such as the golden hamster and flat-haired mouse, and the common raccoon.
Old World primate species, such as chimpanzees, gorillas and orangutans, have trichromatic vision as do humans, but New World monkeys exhibit various ranges. Howler monkeys have three cones, but male tamarins and spider monkeys only have two, with females split between trichromacy and dichromacy. Night monkeys, or owl monkeys, are monochromatic. As their name suggests, they are nocturnal, with better vision in dim light than other primates have.
Fish and Marine Mammals
Most marine mammals are monochromatic; this includes seals, sea lions and walruses, and cetaceans such as dolphins and whales. Most fish are trichromatic, with good color vision, but there are some exceptions. The only animals known to have no cones at all, and therefore to be incapable of color vision, are skates, cartilaginous fishes related to rays and, more distantly, to sharks. Sharks are also monochromatic, but rays are thought to have relatively good color vision. Marine mammals and fish may have lost their color vision over time as it was not advantageous in the water.
About the Author
Based in Scotland, Clare Smith is a writer specializing in natural science topics. She holds a Master of Science in plant biodiversity from the University of Edinburgh.
Nuclear power is an established and reliable way to generate electricity. In normal operating conditions, 75% of UK nuclear capacity can be assumed available to meet peak demand.
Eight of the nine existing UK nuclear power stations are scheduled to close by 2028. Some may continue to operate for longer than currently scheduled, if EDF Energy and the Office for Nuclear Regulation (ONR) are satisfied this is safe, but eventually they will need to be replaced.
No nuclear power station generates electricity all of the time. There are periods when it will operate at reduced levels or will be shut down for refuelling and maintenance. Most shutdowns are planned, and because of this can happen when demand is expected to be lower. Some reactors continue to operate at 20–40% of capacity while being refuelled, which typically takes three to four days, about every six weeks.
Unplanned shutdowns occur when the power station is forced to shut down either by its control system (automatic shutdown) or by the plant operator (manual shutdown) due to a suspected fault. A precautionary approach is used for the shutdown systems and operating regimes of all nuclear power stations.
Nuclear power stations are designed to deliver a reliable level of electricity for long periods of time. The new plants proposed for the UK are expected to generate electricity as much as 90% of the time during normal operation.
In 1990, only a quarter of the world's nuclear plants had load factors of over 75% – that is, they generated more than 75% of their theoretical maximum electrical output. Today, almost two thirds of nuclear plants have load factors of over 75%, and a quarter have load factors higher than 90%.
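Load factor, as defined above, is simply energy generated divided by the theoretical maximum over the same period. A minimal sketch (the plant figures below are hypothetical, chosen only to illustrate the calculation):

```python
def load_factor(energy_generated_mwh: float,
                capacity_mw: float,
                hours: float) -> float:
    """Fraction of theoretical maximum output actually generated."""
    return energy_generated_mwh / (capacity_mw * hours)

# Hypothetical 1,200 MW plant generating 9,460,800 MWh over one year (8,760 h).
lf = load_factor(9_460_800, 1200, 8760)
print(f"{lf:.0%}")  # 90%
```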
The proposed new generation of nuclear power stations in the UK aims to set a new standard, with shorter outage periods and reduced fuel consumption per kilowatt-hour (kWh) of electricity generated. This means they will use less fuel and will need to refuel less often.
How HIV Damages the Immune System
September 18, 2008
The basic structure of HIV is similar to that of other viruses (Figure 1). HIV has a core of genetic material surrounded by a protective sheath, called a capsid. The genetic material in the core is RNA (ribonucleic acid), which contains the information that the virus needs in order to replicate (make more copies of itself) and perform other functions. You can think of RNA as the set of rules the virus follows in order to live.
In HIV, viral RNA has a protein called "reverse transcriptase" that is crucial for viral replication inside T cells, white blood cells that help coordinate activities of the immune system. (The function of reverse transcriptase, which means "writing backwards," will be explained later when we discuss how HIV infects T cells.)
HIV, like all other viruses, has proteins that are particular to itself. These proteins are called antigens. Antigens have diverse functions in viral replication. In the case of HIV, a combination of two antigens, gp120 and gp41, allow the virus to hook onto T cells and infect them. These antigens are located on the surface of the virus. (Another HIV antigen is p24, an antigen of the core of the virus that is measured to estimate the amount of active free-floating virus in the blood of HIV positive people).
T cells are the main target of HIV in the blood, and they act as the host that the virus needs in order to replicate. (However, macrophages, B cells, monocytes, and other cells in the body can also be infected by HIV.) The T cell has a nucleus that contains genetic material in the form of DNA (deoxyribonucleic acid) (Figure 2). The cell's DNA has all the information that the cell needs in order to function. The difference between RNA and DNA is that the former is a single strand of genetic material, while the latter is a double strand (Figure 3). This difference is crucial in the process of T cell infection by HIV.
Once inside the cell, the capsid dissolves, liberating the viral RNA and the reverse transcriptase. Now, in order to infect the cell, the viral RNA needs to travel into the T cell's nucleus (where it can change the cell's rules and convert it into a virus factory). However, for that to happen, an important transformation needs to take place.
Normally, the T cell's nucleus communicates with the rest of the cell by transforming DNA into RNA and sending it out of the nucleus. (In all the cells of the body, RNA acts as a messenger between the nucleus and the rest of the cell. The DNA makes RNA and sends it out to convey orders.) The genetic material's passport to leave the nucleus is to be transformed into single-stranded RNA. In the same fashion, the passport to enter the nucleus is to be transformed into double-stranded DNA.
Viral RNA needs to become DNA in order to start the replication process. Reverse transcriptase allows the RNA to borrow material from the cell and to "write backwards" a chain of viral DNA.
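The "writing backwards" step can be sketched in a few lines of code. This is an illustrative simplification, not anything from the article: reverse transcriptase builds a DNA strand complementary to the RNA template, following the base-pairing rules A→T, U→A, G→C, C→G (RNA uses uracil, U, where DNA uses thymine, T).

```python
# Base-pairing rules for copying an RNA template into complementary DNA.
# (Simplified: real reverse transcription also involves primers, strand
# orientation, and a second DNA strand.)
RNA_TO_DNA = {"A": "T", "U": "A", "G": "C", "C": "G"}

def reverse_transcribe(rna: str) -> str:
    """Return the complementary DNA strand built from an RNA template."""
    return "".join(RNA_TO_DNA[base] for base in rna)

print(reverse_transcribe("AUGC"))  # -> TACG
```

The point of the sketch is only that each RNA letter dictates one DNA letter, which is how the viral genetic message survives the conversion.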
Once transformed, the viral DNA will travel into the T cell's nucleus and attach itself to the cell's DNA (a process similar to placing a "bug" in a computer software program). At this point, if the T cell is activated, it will start producing new virus instead of performing normal T cell functions.
Because it hijacks the "coordinator" T cells that help keep the immune system working, HIV is particularly devastating to immune health. In the process of replication, the virus destroys increasing numbers of T cells. The coordinator cells of an important part of the immune system are annihilated, leaving the body open to opportunistic infections.
This article was provided by San Francisco AIDS Foundation. It is a part of the publication AIDS 101. Visit San Francisco AIDS Foundation's Web site to find out more about their activities, publications and services.
The people of the United States have begun to recognize that wetlands
have numerous and widespread benefits. However, many of the goods and
services wetlands provide have little or no market value. Because of
this, the benefits produced by wetlands accrue primarily to the
general public. Therefore, the Government provides incentives and
regulates and manages wetland resources to protect the resources from
degradation and destruction. Other mechanisms for wetland protection
include acquisition, planning, mitigation, disincentives for
conversion of wetlands to other land uses, technical assistance,
education, and research.
Although many States have their own wetland regulations, the Federal Government bears a major responsibility for regulating wetlands. The five Federal agencies that share the primary responsibility for protecting wetlands include the Department of Defense, U.S. Army Corps of Engineers (Corps); the U.S. Environmental Protection Agency (EPA); the Department of the Interior, U.S. Fish and Wildlife Service (FWS); the Department of Commerce, National Oceanic and Atmospheric Administration (NOAA); and the Department of Agriculture, Natural Resources Conservation Service (NRCS) (formerly the Soil Conservation Service). Each of these agencies has a different mission that is reflected in the implementation of the agency's authority for wetland protection. The Corps' duties are related to navigation and water supply. The EPA's authorities are related to protecting wetlands primarily for their contributions to the chemical, physical, and biological integrity of the Nation's waters. The FWS's authorities are related to managing fish and wildlife-game species and threatened and endangered species. Wetland authority of NOAA lies in its charge to manage the Nation's coastal resources. The NRCS focuses on wetlands affected by agricultural activities.
States are becoming more active in wetland protection. As of 1993, 29
States had some type of wetland law (Want, 1993). Many of these
States have adopted programs to protect wetlands beyond those programs
enacted by the Federal Government. As more responsibility is
delegated from the Federal Government to the States, State wetland
programs are gaining in importance. Thus far, States have devoted
more attention to regulating coastal wetlands than inland wetlands.
The most comprehensive State programs include those of Connecticut,
Rhode Island, New York, Massachusetts, Florida, New Jersey, and
Minnesota (Mitsch and Gosselink, 1993). Many of these States regulate
those activities affecting wetlands that are exempt from the Clean
Water Act, Section 404 program. (For more information on specific
State wetland protection programs, see the State Summary section of this volume.)
Despite the current recognition of wetland benefits, many potentially conflicting interests still exist, such as those between landowners and the general public and between developers and conservationists. Belated recognition of wetland benefits and disagreement on how to protect them have led to discrepancies in local, State, and Federal guidelines. Discrepancies in Federal programs are apparent in table 6, which shows programs that encourage conversion of wetlands and those that discourage conversion of wetlands. Conflicting interests are the source of much tension and controversy in current wetland protection policy. Although attempts are being made to reconcile some of these differences, many policies will have to be modified to achieve consistency.
Despite all the government legislation, policies, and programs, wetlands will not be protected if the regulations are not enforced. Perhaps the best way to protect wetlands is to educate the public about their benefits. If the public does not recognize the benefits of wetland preservation, wetlands will not be preserved. Protection can be accomplished only through the cooperative efforts of citizens.
If the public does not recognize the benefits of wetland preservation, wetlands will not be preserved
FEDERAL WETLAND PROTECTION PROGRAMS AND POLICIES
The Federal Government protects wetlands directly and indirectly
through regulation, by acquisition, or through incentives and
disincentives as described in table 6. Section 404 of the Clean Water
Act is the primary vehicle for Federal regulation of some of the
activities that occur in wetlands. Other programs, such as the
"Swampbuster" program and the Coastal Management and Coastal Barriers
Resources Acts, provide additional protection. Coastal wetlands
generally benefit most from the current network of statutes and
regulations. Inland wetlands are more vulnerable than coastal
wetlands to degradation or loss because current statutes and policies
provide them less comprehensive protection. Several of the major
Federal policies and programs affecting wetlands are discussed in the
following few pages. Also discussed are some of the States' roles in
Federal wetland policies.
The Clean Water Act

The Federal Government regulates, through Section 404 of the Clean Water Act, some of the activities that occur in wetlands. The Section 404 program originated in 1972, when Congress substantially amended the Federal Water Pollution Control Act and created a Federal regulatory plan to control the discharge of dredged or fill materials into wetlands and other waters of the United States. Discharges are commonly associated with projects such as channel construction and maintenance, port development, fills to create dry land for development sites near the water, and water-control projects such as dams and levees. Other kinds of activities, such as the straightening of river channels to speed the flow of water downstream and clearing land, are regulated as Section 404 discharges if they involve discharges of more than incidental amounts of soil or other materials into wetlands or other waters.
The Corps and the EPA share the responsibility for implementing the
permitting program under Section 404 of the Clean Water Act. However,
Section 404(c) of the Clean Water Act gives the EPA authority to veto the
permit if discharge materials at the selected sites would adversely affect
such things as municipal water supplies, shellfish beds and fishery areas,
wildlife, or recreational resources. By 1991, the EPA had vetoed 11 of
several hundred thousand permits since the Act was passed (Schley and Winter, 1992).
The review process for a Section 404 permit is shown in figure 39. After notice and opportunity for a public hearing, the Corps' District Engineer may issue or deny the permit. The District Engineer must comply with the EPA's Section 404(b)(1) Guidelines and must consider the public interest when evaluating a proposed permit. Four questions related to the guidelines are considered during a review of an application:
The Clean Water Act regulates dredge and fill activities that would adversely affect wetlands.
Through a public interest review, the Corps tries to balance the benefits
an activity may provide against the costs it may incur. The criteria
applied in this process are the relative extent of the public and private
need for the proposed structure or work and the extent and permanence of
the beneficial or detrimental effects on the public and private uses to
which the area is suited. Some of the factors considered in the public
interest review are listed in figure 39.
Cumulative effects of numerous piecemeal changes are considered in addition
to the individual effects of the projects.
The FWS, NOAA, and State fish and wildlife agencies, as the organizations in possession of most of the country's biological data, have important advisory roles in the Section 404 program. The FWS and NOAA (if a coastal area is involved) provide the Corps and the EPA with comments about the potential environmental effects of pending Section 404 permits. Other government agencies, industry, and the public are invited to participate through public notices of permit applications, hearings, or other information-collecting activities. However, the public interest review usually does not involve public comment unless the permit is likely to generate significant public interest or if the potential consequences of the permit are expected to be significant. All recommendations must be given full consideration by the Corps, but there is no requirement that they must be acted upon. If the FWS or NOAA disagree with a permit approved by a District Engineer, they can request that the permit be reviewed at a higher level within the Corps. However, the Assistant Secretary of the Army has the unilateral right to refuse all requests for higher level reviews. The Assistant Secretary accepted the additional review of 16 of the 18 requested out of the total 105,000 individual permits issued between 1985 and 1992 (Schley and Winter, 1992).
Because many activities may cause the discharge of dredged and fill materials, and the potential effects of these activities differ, the Corps has issued general regulations to deal with a wide range of activities that could require a Section 404 permit. The Corps can forgo individual permit review by issuing general permits on a State, regional, or nationwide basis. General permits cover specific categories of activities that the Corps determines will have minimal effects on the aquatic environment, including wetlands. General permits are designed to allow activities with minimal effects to begin with little, if any, delay or paperwork. General permits authorize approximately 75,000 activities annually that might otherwise require a permit (U.S. Environmental Protection Agency, 1991); however, most activities in wetlands are not covered by general permits (Morris, 1991).
Not all dredge and fill activities require a Section 404 permit. Many
activities that cause the discharge of dredged and fill materials are
exempt from Section 404. The areas specifically exempted from Section 404
include: normal farming, forestry, and ranching activities; dike, dam,
levee, and other navigation and transportation structure maintenance;
construction of temporary sedimentation basins on construction sites; and
construction or maintenance of farm roads, forest roads, or temporary roads
for moving mining equipment (Morris, 1991). In addition, the Corps' flood-
control and drainage projects and other Federal projects authorized by
Congress and planned, financed, and constructed by a Federal agency also
are exempt from the Section 404 permitting requirements if an adequate
environmental impact statement is prepared.
Not all methods of altering wetlands are regulated by Section 404. Common methods of altering wetlands are listed in table 7. Unregulated methods include: wetland drainage, the lowering of ground-water levels in areas adjacent to wetlands, permanent flooding of existing wetlands, deposition of material that is not specifically defined as dredged and fill material by the Clean Water Act, and wetland vegetation removal (Office of Technology Assessment, 1984).
State authority over the Federal Section 404 program is a goal of the Clean Water Act. Assumption of authority from the EPA has been completed only by Michigan and New Jersey. Under this arrangement, the EPA is responsible for approving State assumptions and retains oversight of the State Section 404 program, and the Corps retains the navigable waters permit program (Mitsch and Gosselink, 1993). States cannot issue permits over EPA's objection, but EPA has the authority to waive its review for selected categories of permit applications. Few States have chosen to assume the program, in part because few Federal resources are available to assist States and assumption does not include navigable waters (World Wildlife Fund, 1992).
The program that seeks to remove Federal incentives for the agricultural conversion of wetlands is part of the Food Security Act of 1985 and 1990, and is known as "Swampbuster." Swampbuster renders farmers who drained or otherwise converted wetlands for the purpose of planting crops after December 23, 1985, ineligible for most Federal farm subsidies. Through Swampbuster, Congress directed the U.S. Department of Agriculture (USDA) to slow wetland conversion by agricultural activities (U.S. Fish and Wildlife Service, 1992). The government programs that Swampbuster specifically affects are listed in Section 1221 of the Food Security Act. If a farmer loses eligibility for USDA programs under Swampbuster, he or she may regain eligibility during the next year simply by not using wetlands for growing crops. Swampbuster is administered by USDA's Consolidated Farm Service Agency; the NRCS and the FWS serve as technical consultants (World Wildlife Fund, 1992).

Swampbuster was amended by the Food, Agriculture, Conservation, and Trade Act of 1990 to create the Wetland Reserve Program, which provides financial incentives to farmers to restore and protect wetlands through the use of long-term easements (usually 30-year or permanent). The program provides farmers the opportunity to offer a property easement for purchase by the USDA and to receive cost-share assistance (from 50 to 75 percent) to restore converted wetlands. Landowners make bids to participate in the program; the bids represent the payment they are willing to accept for granting an easement to the Federal Government. The Consolidated Farm Service Agency ranks the bids according to the environmental benefit per dollar. Easements require that farmers implement conservation plans approved by the NRCS and the FWS. Enrollment in the pilot program was authorized for nine States, and the program's goal is to enroll 1 million acres by 1995 (U.S. Fish and Wildlife Service, 1992).
Funding for this program is appropriated annually by Congress (U.S. Army Corps of Engineers, 1994). Because 74 percent of the United States' wetlands are on private land, programs that provide incentives for private landowners to preserve their wetlands, such as the Wetland Reserve Program, are critical for protecting wetlands (Council on Environmental Quality, 1989).
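The "environmental benefit per dollar" ranking used by the Consolidated Farm Service Agency can be illustrated with a small sketch. The bid amounts, farm names, and benefit scores below are invented for illustration; the agency's actual scoring method is not described in this article.

```python
# Hypothetical easement bids: each landowner states the payment they would
# accept, and each parcel has been assigned an environmental benefit score.
bids = [
    {"farm": "A", "benefit_score": 80, "bid_dollars": 40_000},
    {"farm": "B", "benefit_score": 60, "bid_dollars": 20_000},
    {"farm": "C", "benefit_score": 90, "bid_dollars": 90_000},
]

# Rank bids by environmental benefit per dollar, highest first, so a limited
# budget funds the easements that protect the most wetland value per dollar.
ranked = sorted(bids, key=lambda b: b["benefit_score"] / b["bid_dollars"],
                reverse=True)
print([b["farm"] for b in ranked])  # -> ['B', 'A', 'C']
```

Note that the cheapest bid is not automatically first: farm B wins here because its moderate benefit comes at a low price, while farm C's high benefit is outweighed by its high bid.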
"Swampbuster" removes Federal incentives for the agricultural conversion of wetlands.
Coastal Wetlands Protection Programs
The 1972 Coastal Zone Management Act and the 1982 Coastal Barriers
Resources Act protect coastal wetlands. The Coastal Zone Management Act
encourages States (35 States and territories are eligible, including the
Great Lakes States) to establish voluntary coastal zone management plans
under NOAA's Coastal Zone Management Program and provides funds for
developing and implementing the plans. The NOAA also provides technical
assistance to States for developing and implementing these programs. For
Federal approval, the plans must demonstrate enforceable standards that
provide for the conservation and environmentally sound development of
coastal resources. The program provides States with some control over
wetland resources by requiring that Federal activities be consistent with
State coastal zone management plans, which can be more stringent than
Federal standards (World Wildlife Fund, 1992, p. 87). A State also can
require that design changes or mitigation requirements be added to Section
404 permits to be consistent with the State coastal zone management plan.
The Coastal Zone Management Act has provided as much as 80 percent of the
matching-funds grants to States to develop plans for coastal management
that emphasize wetland protection (Mitsch and Gosselink, 1993). Some
States pass part of the grants on to local governments. The Act's
authorities are limited to wetlands within a State's coastal zone boundary,
the definition of which differs among States. As of 1990, 23 States had
federally approved plans.
The 1982 Coastal Barriers Resources Act denies Federal subsidies for development within undeveloped, unprotected coastal barrier areas, including wetlands, designated as part of the Coastal Barrier Resources System. Congress designates areas for inclusion in the Coastal Barriers Resource System on the basis of some of the following criteria (Watzin, 1990):
In addition, States, local governments, and conservation organizations
owning lands that were "otherwise protected" could have their lands added
to this system until May 1992. ("Otherwise protected" lands are areas
within undeveloped coastal barriers that were already under some form of
protection.) Once in the Coastal Barriers Resources System, these areas
are rendered ineligible for almost all Federal financial subsidies for
programs that might encourage development. In particular, these lands no
longer qualify for Federal flood insurance, which discourages development
because coastal lands are frequently subject to flooding and damage from
hurricanes and other storms. The FWS is responsible for mapping these
areas and approves lands to be included in the system. The purposes of the
Coastal Barrier Resources Act are to minimize the loss of human life, to
reduce damage to fish and wildlife habitats and other valuable resources,
and to reduce wasteful expenditure of Federal revenues (Watzin, 1990). In
the future, eligible surplus government land will be included if approved
by the FWS. About 95 percent of the 788,000 acres added to the system in
1990 along the Atlantic and Gulf coasts consists of coastal wetlands and
near-shore waters (World Wildlife Fund, 1992).
Flood-Plain and Wetland Protection Orders

Executive Orders 11988, Floodplain Management, and 11990, Protection of Wetlands, were signed by President Carter in 1977. The purpose of these Executive Orders was to ensure protection and proper management of flood plains and wetlands by Federal agencies. The Executive Orders require Federal agencies to consider the direct and indirect adverse effects of their activities on flood plains and wetlands. This requirement extends to any Federal action within a flood plain or a wetland except for routine maintenance of existing Federal facilities and structures. The Clinton administration has proposed revising Executive Order 11990 to direct Federal agencies to consider wetland protection and restoration planning in the larger scale watershed/ecosystem context.
The Coastal Zone Management Program provides States with some control over wetland resources.
WETLAND DELINEATION STANDARDS
The Corps published, in 1987, the Corps of Engineers Wetland Delineation Manual, a technical manual that provides guidance to Federal agencies about how to use wetland field indicators to identify and delineate wetland boundaries (U.S. Army Corps of Engineers, 1987). In January of 1989, the EPA, Corps, SCS, and FWS adopted a single manual for delineating wetlands under the Section 404 and Swampbuster programs, The Federal Manual for Identifying and Delineating Jurisdictional Wetlands (commonly referred to as the "1989 Manual"). The "1989 Manual" establishes a national standard for identifying and delineating wetlands by specifying the technical criteria used to determine the presence of the three wetland characteristics: wetland hydrology, water-dependent vegetation, and soils that have developed under anaerobic conditions (U.S. Environmental Protection Agency, 1991).

In 1991, the President's Council on Competitiveness proposed revisions to the 1989 Manual because of concern that nonwetland areas were regularly being classified as wetlands (Environmental Law Reporter, 1992a). The proposed 1991 Manual was characterized by many wetland scientists as politically based rather than scientifically based. In September of 1992, Congress authorized the National Academy of Sciences to conduct a $400,000 study of the methods used to identify and delineate wetlands (Environmental Law Reporter, 1992b). On August 25, 1993, the Clinton administration's wetland policy proclaimed that "Federal wetlands policy should be based upon the best science available" (White House Office of Environmental Policy, 1993), and the 1987 Corps Manual is the sole delineation manual for the Federal Government until the National Academy of Sciences completes its study (White House Office of Environmental Policy, 1993).
"Federal wetlands policy should be based upon the best science available."
Mitigation is the attempt to alleviate some or all of the detrimental
effects arising from a given action. Wetland mitigation replaces an
existing wetland or its functions by creating a new wetland, restoring a
former wetland, or enhancing or preserving an existing wetland. This is
done to compensate for the authorized destruction of the existing wetland.
Mitigation commonly is required as a condition for receiving a permit to
develop a wetland.
Wetland mitigation can be conducted directly on a case-by-case onsite basis, or through a banking system. Onsite mitigation requires that a developer create a wetland as close as possible to the site where a wetland is to be destroyed. This usually involves a one-to-one replacement.
A mitigation bank is a designated wetland that is created, restored, or enhanced to compensate for future wetland loss through development. It may be and usually is located somewhere other than near the site to be destroyed and built by someone other than the developer. The currency of a mitigation bank is the mitigation credit. "Mitigation banks require systems for valuing the compensation credits produced and for determining the type and number of credits needed as compensation for any particular project. ***Mitigation bank credit definitions are an attempt to identify those features [of wetland] which allow reasonable approximations of replacement" (U.S. Army Corps of Engineers, 1994, p. 63). Wetland evaluation methods have been developed or are being developed to address the problem of evaluating two different wetlands so that the degradation of one can be offset by the restoration, enhancement, or creation of the other and to assign either a qualitative or quantitative value to each wetland. When buying the credits, developers pay a proportionate cost toward acquiring, restoring, maintaining, enhancing, and monitoring the mitigation bank wetland. Banks cover their costs by selling credits to those who develop wetlands, or by receiving a taxpayer subsidy.
Several problems are associated with wetland mitigation. The concept of
wetland compensation may actually encourage destruction of natural wetlands
if people believe that wetlands can be easily replaced. A 1990 Florida
Department of Environmental Regulation study examined the success of
wetland creation projects and found that the success rate of created tidal
wetlands was 45 percent, whereas the success rate for created freshwater
wetlands was only 12 percent (Redmond, 1992). Figure 40 shows the relative success of wetland
mitigation projects overall in south Florida. The apparent factor
controlling the lower success rate for freshwater wetlands was the
difficulty in duplicating wetland hydrology, that is, water-table
fluctuations, frequency and seasonality of flooding, and
A study of wetland mitigation practices in eight States revealed that in most of the States, more wetland acreage was destroyed than was required to be created or restored, resulting in a net loss of acreage when mitigation was included in a wetlands permit (Kentula and others, 1992). Less than 55 percent of the permits included monitoring of the project by site visit. A limited amount of information exists about the number of acres of wetlands affected by mitigation or the effectiveness of particular mitigation techniques because of the lack of followup. Several studies in Florida reported that as many as 60 percent of the required mitigation projects were never even started (Lewis, 1992). In addition, the mitigation wetland commonly was not the same type of wetland that was destroyed, which resulted in a net loss of some wetland types. (See article "Wetland Restoration and Creation" in this volume.)
RECENT PRESIDENTIAL WETLAND PROTECTION INITIATIVES
In his 1988 Presidential address and in his 1990 budget address to
Congress, President Bush echoed the recommendations of the National Wetland
Policy Forum. The Forum was convened in 1987 by the Conservation
Foundation at the request of EPA. The short-term recommendation of the
forum was to decrease wetland losses and increase wetland restoration and
creation-the concept of "no net loss"-as a national goal. This implied
that when wetland loss was unavoidable, creation and restoration should
replace destroyed wetlands (Mitsch and Gosselink, 1993).
On August 25, 1993, President Clinton unveiled his new policy for managing America's wetland resources. The program was developed by the Interagency Working Group on Federal Wetlands Policy, a group chaired by the White House Office on Environmental Policy with participants from the EPA, the Corps, the Office of Management and Budget, and the Departments of Agriculture, Commerce, Energy, Interior, Justice, and Transportation. The Administration's proposals mix measures that tighten restrictions on activities affecting wetlands in some cases and relax restrictions in other areas. The Clinton policy endorses the goal of "no net loss" of wetlands; however, it clearly refers to "no net loss" of wetland acreage rather than "no net loss" of wetland functions.
The President's wetland proposal would expand Federal authority under the
Section 404 program to regulate the draining of wetlands in addition to
regulating dredging and filling of wetlands. Other proposed changes to the
Federal permitting program include the requirement that most Section 404
permit applications be approved or disapproved within 90 days, and the
addition of an appeal process for applicants whose permits are denied. The
EPA and the Corps are directed to relax regulatory restrictions that cause
only minor adverse effects to wetlands such as activities affecting very
The Clinton policy calls for avoiding future wetland losses by incorporating wetland protection into State and local government watershed-management planning. This new policy also significantly expands the use of mitigation banks to compensate for federally approved wetland development or loss.
Clinton's proposals relaxed some of the current restrictions on agricultural effects on wetlands and increased funding for incentives to preserve and restore wetlands on agricultural lands. The administration policy excluded 53 million acres of "prior converted croplands" from regulation as wetlands. It also shifted authority over wetland programs affecting agriculture from the FWS to the NRCS and proposed increased funding for the Wetlands Reserve Program, which pays farmers to preserve and restore wetlands on their property.
"No net loss" of wetlands is a national goal.
For Additional Information:Todd H. Votteler,
4312 Larchmont Avenue,
Dallas, TX 75205
Thomas A. Muir,
NASA is building a laser-based instrument for the International Space Station whose mission is to create a 3-D map of Earth's forests and unlock mysteries of the forests' role in the carbon cycle.

The instrument is known as the Global Ecosystem Dynamics Investigation (GEDI) lidar, and it is one of two new devices being built as part of the Earth Venture Instrument program.
GEDI has a large and important task -- and not just due to the sheer amount of forest on Earth. The 3-D view of Earth's forests will specifically help scientists understand the impact of trees and forests on the carbon cycle. It will help fill in knowledge gaps about how much carbon trees store, and what the carbon release -- and environmental impact -- would be if forests were destroyed, releasing more carbon into our atmosphere.
"One of the most poorly quantified components of the carbon cycle is the net balance between forest disturbance and regrowth,” said Ralph Dubayah, one of University of Maryland's principal GEDI investigators. “GEDI will help scientists fill in this missing piece by revealing the vertical structure of the forest, which is information we really can’t get with sufficient accuracy any other way.”
And how GEDI will accomplish this is nothing short of incredible. It is a laser-based system, or lidar, equipped with a trio of Goddard-developed lasers. These lasers, which can be divided into 14 tracks, will scan all land between 50 degrees north and 50 degrees south of the Equator -- covering most tropical and temperate forests.
These "eye-safe" lasers will send out quick pulses of light that can penetrate the dense canopy, without causing harm. They'll then reflect back to a detector in space. It is estimated that in one year, GEDI will send out around 16 billion pulses.
GEDI and these pulses, NASA explains, "can measure the distance from the space-based instrument to Earth’s surface with enough accuracy to detect subtle variations, including the tops of trees, the ground, and the vertical distribution of aboveground biomass in forests."
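The core time-of-flight principle behind lidar ranging can be sketched briefly. This is a generic illustration of how any pulsed lidar turns timing into distance, not GEDI's actual flight processing: a pulse travels to the surface and back, so the range is the speed of light times the round-trip time, divided by two.

```python
# Time-of-flight ranging: a light pulse covers the instrument-to-surface
# distance twice (down and back), so range = c * round_trip_time / 2.
C = 299_792_458.0  # speed of light, m/s

def range_from_round_trip(seconds: float) -> float:
    """Distance in meters inferred from a pulse's round-trip travel time."""
    return C * seconds / 2.0

# A return arriving about 2.67 milliseconds after the pulse leaves
# corresponds to roughly 400 km, about the ISS's orbital altitude.
print(round(range_from_round_trip(2.668e-3) / 1000))  # prints 400 (km)
```

Because a treetop return arrives a few tens of nanoseconds before the ground return beneath it, nanosecond-scale timing is what lets the instrument resolve canopy height and vertical structure.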
“Lidar has the unique ability to peer into the tree canopy to precisely measure the height and internal structure of the forest at the fine scale required to accurately estimate their carbon content,” stated Bryan Blair, an investigator for GEDI at Goddard Space Flight Center.
GEDI is expected to be completed in 2018, and will also be used to discover the age of trees, map biodiversity and understand the effects of climate change.
h/t the Verge
NASA's Cassini spacecraft has discovered a strange cloud on Titan that goes against everything scientists thought they knew about the moon's atmosphere.
Titan is a cold place. This moon of Saturn is far enough from the Sun that temperatures are around 300 degrees Fahrenheit colder than on Earth. In this environment, liquid water can't exist. Instead, hydrocarbons like methane can condense and freeze, forming a cycle complete with clouds, rain, and surface oceans of liquid methane. It is the only place in the solar system besides Earth where these exist.
NASA's Cassini probe was sent to observe Saturn and its moons, and besides taking incredible images, it spends a great deal of time studying Titan's atmosphere. Recently, it spotted the oddball cloud. It exists in Titan's stratosphere. It's made of a chemical called dicyanoacetylene, or C4N2. The problem is, Titan's stratosphere has almost no C4N2, so scientists aren't sure where all the stuff in the cloud came from.
A possible answer is found in the Earth's own stratosphere. High above Earth's poles, water combines with pollutants like CFCs in thin, wispy clouds. The chemical reaction releases chlorine, which is present in these clouds in high concentrations despite being almost completely absent in the surrounding atmosphere.
A similar process might occur on Titan. Chemicals already present in Titan's upper atmosphere could combine inside clouds, creating excess amounts of C4N2. The fact that Earth and Titan have similar processes in their upper atmosphere means that there might be other weather patterns the two have in common. Studying Titan's clouds could, in the future, provide answers to weather mysteries here on Earth.
With this simulation from the NASA Climate website, learners explore different examples of how ice is melting due to climate change in four places where large quantities of ice are found. The photo comparisons, graphs, animations, and especially the time lapse video clips of glaciers receding are astonishing and dramatic.
This music video features a rap song about some of the causes and effects of climate change with the goal of increasing awareness of climate change and how it will impact nature and humans. The website also includes links to short fact sheets with lyrics to the song that are annotated with the sources of the information in the lyrics.
This is a hands-on inquiry activity using zip-lock plastic bags that allows students to observe the process of fermentation and the challenge of producing ethanol from cellulosic sources. Students are asked to predict outcomes and check their observations with their predictions. Teachers can easily adapt to materials and specific classroom issues.
In this activity, students chart temperature changes over time in Antarctica's paleoclimate history by reading rock cores. Students use their data to create an interactive display illustrating how Antarctica's climate timeline can be interpreted from ANDRILL rock cores.
This animation describes how citizen observations can document the impact of climate change on plants and animals. It introduces the topic of phenology and data collection, the impact of climate change on phenology, and how individuals can become citizen scientists.
This interactive shows the extent of the killing of lodgepole pine trees in western Canada. The spread of pine beetle throughout British Columbia has devastated the lodgepole pine forests there. This animation shows the spread of the beetle and the increasing numbers of trees affected from 1999-2008 and predicts the spread up until 2015.
Students perform a lab to explore how the color of materials at the Earth's surface affect the amount of warming. Topics covered include developing a hypothesis, collecting data, and making interpretations to explain why dark colored materials become hotter.
Campylobacteriosis is food poisoning caused by the campylobacter bacterium.
Campylobacteriosis occurs much more often in the summer months than in the winter months. Infants, young adults, and males are most likely to get the condition.
Campylobacteriosis is usually caused by handling poultry (such as chicken or turkey) that is contaminated with the campylobacter bacterium and is raw or undercooked. For example, you can be infected by cutting poultry meat on a cutting board and then using the unwashed cutting board or utensil to prepare vegetables or other raw or lightly cooked foods. Drinking contaminated milk or water from contaminated lakes or streams can also result in infection.
Campylobacteriosis usually is not spread from person to person. Some people have become infected through contact with the infected stool of a dog or cat.
The symptoms of campylobacteriosis include diarrhea, cramping, stomach pain, and fever within 2 to 5 days after exposure to the bacteria. Your diarrhea may be bloody, and you may feel sick to your stomach and vomit. The illness usually lasts 1 week. Some people don't have any symptoms at all. In people with impaired immune systems, campylobacteriosis can be life-threatening.
Your doctor will do a medical history and a physical exam and ask you questions about your symptoms, foods you have recently eaten, and your work and home environments. A stool culture can confirm the diagnosis.
You treat campylobacteriosis by managing any complications until it passes. Dehydration caused by diarrhea and vomiting is the most common complication. Do not use medicines, including antibiotics and other treatments, unless your doctor recommends them. Most people recover completely within a week after symptoms begin, although sometimes recovery can take up to 10 days.
To prevent dehydration, drink plenty of fluids. Choose water and other clear liquids until you feel better. You can take frequent sips of a rehydration drink (such as Pedialyte). Soda, fruit juices, and sports drinks have too much sugar and not enough of the important electrolytes that are lost during diarrhea. These kinds of drinks should not be used to rehydrate.
When you feel like eating again, start with small amounts of food.
In more severe cases, your doctor may recommend antibiotics.
In rare cases, long-term problems can result from campylobacteriosis. Some people may have arthritis following campylobacteriosis. Others may develop a rare disease called Guillain-Barré syndrome. This occurs when your immune system attacks your nerves, which can lead to paralysis that lasts several weeks and usually requires that you go to a hospital.
You can prevent campylobacteriosis by practicing safe food handling.
It is important to pay particular attention to food preparation and storage during warm months when food is often served outside. Bacteria grow faster in warmer weather, so food can spoil more quickly and possibly cause illness. Do not leave food outdoors for more than 1 hour if the temperature is above 90°F (32°C), and never leave it outdoors for more than 2 hours.
To learn more about Healthwise, visit Healthwise.org.
© 1995-2021 Healthwise, Incorporated. Healthwise, Healthwise for every health decision, and the Healthwise logo are trademarks of Healthwise, Incorporated.
It’s important to choose a book that your child is interested in. Books come in a lot of different varieties or genres. A genre is a category characterized by similarities in form, style or subject matter. This article will discuss some different types of book genres that you and your child may enjoy exploring. Some different types of book genres include:
Biography or autobiography – A biography is a nonfiction (true) account of someone’s life. It is written by someone other than the subject of the biography. An autobiography is a nonfiction (true) account of someone’s life. It is written by the subject of the autobiography.
Drama/play – Drama is divided into different character parts that can be read by different people or in different voices.
Fantasy – In fantasy, events occur that are outside of the normal ways the universe operates. Magic is very important and often involves journeys or quests. Fantasy is different than science fiction because science fiction is usually set in the future and also involves technology (see science fiction genre below).
Fiction – Fiction is the form of any work that deals, in part or in whole, with information or events that are not real. They are invented by the author.
Graphic novel – The term graphic novel is generally used to describe any book in a comic format that resembles a novel in length and narrative development.
Historical fiction – Historical fiction is a novel set in a time earlier than when it was written. It tries to capture the spirit and social conditions of a past age with realistic detail that is faithful to historical fact.
Mystery – A mystery is a puzzle in which the reader receives clues and solves the puzzle step-by-step throughout the book. There is usually a conclusion that solves the mystery.
Nonfiction – Nonfiction is true. Its primary function is to describe, inform, explain, persuade and/or instruct. Although nonfiction is true, it can still be entertaining.
Science fiction – This genre often involves science and technology of the future. Science fiction is frequently set in space or a different universe or world. It often uses some real theories of science.
Poetry – A poem is a collection of words that express an emotion or idea, sometimes with a specific rhythm.
If your child is new to learning about different genres, this is a great time to help them explore books from each one. Consider helping your child find two books and authors from each genre. You can print this out and let your child list them.
- Biography or autobiography
- Drama/play
- Fantasy
- Fiction
- Graphic novel
- Historical fiction
- Mystery
- Nonfiction
- Science fiction
- Poetry
Encourage your child to read books from different genres. This will enhance their reading level and encourage them to try new things. They'll find a genre that probably suits them more than another. That's normal and to be expected. One book doesn't suit everyone, and all children have different tastes. The most important thing is that your child is reading!
Lesley Woodrum, WVU Extension Agent, Summers County
The Exxon-Valdez oil spill of March 24, 1989, had long-lasting effects on Alaska's environment, animals and way of life. At the time of the spill, hundreds of volunteers stepped forward to clean up seabirds and other animals drenched in oil. Their work helped a modest number of animals, but many still died, and recovery efforts for a number of species continue after 24 years.
According to the National Wildlife Federation, the death toll among native Alaskan wildlife was still being tallied as of 2013. In the days immediately following the spill, which at the time was the worst in U.S. history, many animals died, including an estimated 100,000 and possibly as many as 250,000 seabirds. More than 2,800 sea otters and 12 river otters died immediately, as did at least 300 harbor seals and almost 250 bald eagles. All 22 orcas living in the area at the time were killed, as were countless fish. Small organisms died by the trillions, leaving the animals that prey on them with nothing to eat, causing even more deaths. In the following days and weeks, these numbers climbed much higher.
How They Died
Aside from the reef fish and other animals nearby when the Exxon Valdez ran aground, millions of animals died as a direct or proximate result of the spill. Animals covered in oil tried vainly to clean their bodies by licking themselves, only to be poisoned by the toxins in the oil. Birds weighted down by the heavy oil were unable to fly. Otters depend upon the unique design of their fur to help them tolerate extreme cold climates. When covered in oil, their fur is unable to act as a protective covering, so otters die of hypothermia. Whales are killed when they eat fish covered in oil or when their blowholes are plugged with oil, making it impossible for them to breathe.
Ten Years After
Ten years after the Exxon Valdez oil spill, scientists from the University of North Carolina at Chapel Hill reported in the journal "Science" that many animal species were still recovering and the damage to their habitats had not significantly decreased. It was once thought that the number of animals killed acutely -- that is, immediately following the spill -- would be much higher than any subsequent numbers. But Chapel Hill's researchers reported in 2009 that Alaska's coastal ecosystem continues to show toxins that affect wildlife.
Twenty Years After
In 2007, nearly two decades after the oil spill, the National Oceanic and Atmospheric Administration reported that 21,000 gallons of crude oil still pollutes the ecosystem within a 450-mile radius, and the oil continues to kill animals within its sphere. The problem persists because the spill is contained within Prince William Sound, so it doesn't biodegrade as it would in the open ocean. The orca pod affected by the spill never recovered. Sea otters and ducks, which forage for food on the beaches, need only scratch the surface to find layers of oil soaked into the sand. The oil remains toxic to these animals. Oceana, a conservation organization, reports that some species of loons, salmon, seals, ducks, herrings, pigeons, mussel and clam populations have never fully recovered. Commercial fishing, a $286 million industry, has not completely resumed in the area.
- Scientific American: Environmental Effects of Exxon Valdez Spill Still Being Felt
- GoodHousekeeping.com: 4 Dirty Secrets of the Exxon Valdez Oil Spill
- National Wildlife Federation: Voices from the Exxon Valdez Oil Spill: "The Day the Water Died"
- Mother Nature Network: The 13 Largest Oil Spills in History
- American Association for the Advancement of Science: Long-Term Ecosystem Response To The Exxon Valdez Oil Spill
- National Geographic: Exxon Valdez Anniversary: 20 Years Later, Oil Remains
- Oceana: Exxon Valdez Oil Spill Facts
Children need to know that letters stand for phonemes and spellings map out the phonemes in spoken words in order for them to learn to read and spell words. Short vowels are the toughest to identify. The goal of this activity is to help the students recognize the phoneme /o/ in written and spoken words. In this activity students will learn the phoneme /o/ by learning a meaningful representation, letter symbol, and by finding /o/ in words.
- Doc in the Fog (Educational Insights)
- /o/ Tongue Twister: "Oliver observes offenses often."
- Letter tiles: b,c,i,k,l,m,o,p,s,t,x.
- Pictures of objects: mop, box, clock, shop
- Student assessment worksheet
1. Explain why the new idea is valuable: "Why do you think it is important for us to learn the sound /o/ as well as the letter o? In order to read and spell words, it is important to recognize each sound in a word. What are other reasons that recognizing the sounds in words is important?"
2. "Raise your hand if you have ever been to the doctor's office and had the doctor look down your throat. What does he tell you to say when he does this? That's right, he tells you to open up and say /o/. I want all of us to pretend that we are at the doctor's office and the doctor has to look in our mouths, so we open up and say /o/." (Everyone should open their mouths and stick out their tongues, as if the doctor were really looking, to practice. Cue students by giving them a 1-2-3 count.)
3. "Okay, now we are going to try a tongue twister that involves several words with the /o/ sound: 'Oliver observes offenses often.'" The teacher says it once, then the students repeat. "Now, every time you hear a word with the /o/ sound, I want you to really stretch out the /o/ at the beginning of the word. Let's try. I'll model first: 'Oooliver ooobserves oooffenses oooften.' Now everyone else try it together." (Cue 1-2-3)
4. "Now that we know how to recognize the /o/ sound in words, let's do some practice activities. I'm going to say two words, and I want you to tell me which word you heard the /o/ sound in. Do you hear /o/ in hot or hat? Cat or dog? Offense or defense? Note or knot? Ship or shop? Airplane or helicopter? Great job!"
5. "Now we are going to practice spelling and reading words by using our letterboxes. First, I am going to ask you to make words such as stop. You need to place each letter that represents a sound you hear in a box. For example [model]: stop - /s/-/t/-/o/-/p/. I hear the /s/ first, so let's place the letter that makes the /s/ sound in the first box. [Model] Does everyone have the letter s in the first box? Great! Now let's finish spelling our word. Does everyone have /t/-/o/-/p/? Great! That's t, o, and p. Now you try the following words:"
"I will put the tiles together to make the words and I want you to read them to me. [Teacher places s,t,o,p tiles together to make the words stop] Now let's go through the list of words together. I want you to read each word aloud."
Now I want you to read Doc in the Fog aloud to me. [Book talk] Do you like magic? Doc is a magician. We have to read the book to see what magic tricks are in store for us.
Students will be assessed on recognizing /o/ in spoken words as well as during the letterbox lesson. Students will also be given a worksheet after reading the book. The worksheet provides pictures of different objects with items that have the /o/ sound in their name and some that do not have the /o/ sound. Teacher will assess by informal observation at each table and listening to the students reading the name of the objects. Teacher should read all the names of the objects after the students begin circling the ones that have the /o/ sound.
Doc in the Fog (Educational Insights)
Melanie Tew: It's Obvious You're Sick, http://www.auburn.edu/academic/education/reading_genie/persp/tewbr.html
Heather Langley: Dr. Ollie Says Open Wide and Say /o/, http://www.auburn.edu/academic/education/reading_genie/voyages/langleybr.html
At this stage we can draw a distinction between sound and unsound arguments. An argument is called sound if and only if it is valid and all its premises are true. Otherwise, the argument is called unsound. The following is an example of a sound argument.
All mammals have lungs.
All rabbits are mammals.
Therefore, all rabbits have lungs.
Here all the premises are true and the argument is valid. Hence, it is a sound argument. On the other hand, an argument is unsound if it is either invalid or some of its premises are false.
No mammals have lungs.
No whales are mammals.
Therefore, no whales have lungs.
Here the argument is invalid and the premises are also false. Hence it is unsound. Further, even if an argument is valid, if some or all of its premises are false, then the argument is still unsound. Consider the following example:
No insects have six legs.
All spiders are insects.
Therefore, no spiders have six legs.
Here both premises are false, but the argument is valid. Hence, it is also an unsound argument. Thus mere validity does not make an argument sound, because there are valid arguments that are not sound. To say that an argument is unsound amounts to the claim that the argument is either invalid or some of its premises are false.
Thus the soundness of an argument implies validity as well as the truth of all its premises. But the unsoundness of an argument does not imply invalidity, because there are unsound arguments that are valid.
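The definition above can be captured in a tiny sketch (the function name `is_sound` and the boolean encoding are illustrative, not from the source; validity itself must be judged from the argument's form and is supplied here as an input):

```python
def is_sound(valid, premises):
    """An argument is sound iff it is valid AND every premise is true."""
    return valid and all(premises)

# Rabbit syllogism: valid, both premises true -> sound.
print(is_sound(True, [True, True]))     # True

# Spider syllogism: valid, but both premises false -> unsound.
print(is_sound(True, [False, False]))   # False

# Whale syllogism: invalid, with false premises -> unsound.
print(is_sound(False, [False, False]))  # False
```

Either failure alone, invalidity or a false premise, is enough to make the argument unsound.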
At this stage the following question may be asked: why should logicians not confine their attention only to sound arguments? The answer is that we cannot study only sound arguments, however interesting they may be, because to know that an argument is sound we must know that all its premises are true, and knowing the truth of the premises is not always possible.
Further, we are often interested in arguments whose premises are not known to be true. For example, when a scientist tests a scientific hypothesis or theory, he or she very often deduces consequences from the hypothesis or theory in question and compares these consequences with the data; if the results tally, the hypothesis or theory is taken to be verified.
Here the investigator cannot know the truth of the hypothesis or theory prior to the process of testing. If the truth of the theory or hypothesis were known to the scientist before the verification, the verification would be pointless. This is in fact not the case.
So, to confine our attention to sound arguments only would be self-defeating. But this does not make sound arguments logically uninteresting, because if by some means we know that an argument is sound, then we may infer the truth of its conclusion.
The stability of life on Earth depends on the biogeochemical cycles of carbon and other essential elements, which in turn depend on microbial ecosystems which are, at present, poorly understood. EAPS Professor Daniel Rothman has a plan for a major new research program aimed at gauging the potential for another mass extinction event, like the end-Permian Great Dying.
Five times in the last 500 million years, more than three quarters of living species have vanished in mass extinctions. Each of these events has been associated with a significant change in Earth’s carbon cycle. Some scientists think that human-induced environmental change—including our massive discharges of carbon into the atmosphere—may soon cause a sixth major extinction. Is such a catastrophe really possible?
The key to answering this question lies in the recognition that the Earth’s physical environment and the life it supports continuously interact as a closely coupled system. The core of this interaction is the carbon cycle. Plants and microorganisms, both on land and in the surface layers of the ocean, take carbon dioxide from the atmosphere and “fix” the carbon in organic matter through the process of photosynthesis. Other organisms—most importantly microbes, but also including animals and people—metabolize organic matter, releasing carbon back to the atmosphere, a process known as respiration. But while photosynthesis is visible in the greening of leaves and the spectacular algal blooms on the ocean surface, respiration is neither visible nor well understood. That’s because respiration occurs in different places and at very different timescales. In the ocean’s surface layers, for example, respiration happens fairly quickly—minutes to months. A small percentage of organic matter escapes degradation and drops slowly to the bottom of the ocean, becoming buried in the sediments, where respiration can take thousands of years. So over time, lots of organic carbon accumulates at the bottom of the ocean. And some of that gets embedded in sedimentary rocks, where the effective timescale for respiration can be many millions of years. Virtually all of the fossil fuels we burn—oil, coal, natural gas—come from that latter reservoir of organic carbon.
Over the last billion years, including through multiple ice ages, the Earth’s carbon cycle has remained mostly stable. That means that the process of fixing carbon through photosynthesis and the process of respiration have remained approximately in balance. But because the ocean sediments contain much more carbon than the atmosphere—at least 10 times as much—even small changes in respiration rates could have a huge, de-stabilizing impact. A disruption in the carbon cycle that rapidly released large amounts of carbon dioxide, for example, could potentially cause mass extinctions—by triggering a rapid shift to warmer climates, or by acidifying the oceans, or by other mechanisms.
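The leverage described above, where a modest change in respiration rate moves a large amount of carbon because the sediment reservoir dwarfs the atmosphere, can be illustrated with a toy two-box steady-state calculation. This is only a sketch under stated assumptions, not the research program's model; the rate constants and the 10:1 reservoir ratio from the text are the only inputs, and their absolute values are arbitrary:

```python
def equilibrium_atmosphere(total_carbon, burial_rate, respiration_rate):
    """Steady state of a toy two-box model: burial moves carbon from the
    atmosphere box A to the sediment box S at burial_rate * A, and
    respiration returns it at respiration_rate * S. Setting the two
    fluxes equal, with A + S = total_carbon, gives the atmospheric share A."""
    return total_carbon * respiration_rate / (burial_rate + respiration_rate)

total = 11.0  # 1 part atmosphere + 10 parts sediment, per the 10:1 ratio above
base = equilibrium_atmosphere(total, burial_rate=1.0, respiration_rate=0.1)
fast = equilibrium_atmosphere(total, burial_rate=1.0, respiration_rate=0.12)

print(base)              # 1.0
print(round(fast, 3))    # 1.179
print(round(100 * (fast - base) / base, 1))  # 17.9
```

A 20 percent speed-up in microbial respiration raises atmospheric carbon by roughly 18 percent at equilibrium, precisely because the sediment box holds ten times more carbon than the atmosphere.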
The conventional explanation for what killed off the dinosaurs and caused the most recent, end-Cretaceous mass extinction was a huge asteroid impact on Earth—which certainly caused a massive debris shower and likely darkened the sky, perhaps for years. This and some other extinctions are also associated with massive and widespread volcanism. Are these sufficient to trigger mass extinctions, even in the deep oceans? In at least one case, our calculations strongly suggest that these physical events, by themselves, were not enough to explain the observed changes—that whatever triggering role impacts or volcanism may have played, other factors contributed to and amplified changes in the carbon cycle. We believe that acceleration of the microbial respiration rate must have been involved, thus releasing carbon from the deep ocean and sediment reservoirs. In any event, the evidence is clear that significant disruptions or instabilities have punctuated an otherwise stable carbon cycle throughout Earth’s history, with changes so rapid or so large that they triggered a shift to a new and different equilibrium, with profound impact on all living things.
One example is the microbial invention, about two-and-a-half billion years ago, of photosynthesis—which resulted in a transition from an atmosphere without oxygen to a stable oxygenated state. That in turn enabled the evolution of macroscopic, multi-cellular life, including us. Another example is the end-Permian extinction, the most severe in Earth history, which was immediately preceded by an explosive increase of carbon in the atmosphere and the oceans. A recent research paper (Rothman et al., 2014) attributes the surge of carbon to the rapid evolution of a new microbial mechanism for the conversion of organic matter to methane, which accelerated respiration. In both cases, the disruption of the carbon cycle was driven or at least accelerated by life itself—microbial life. Other mass extinctions are also associated with severe disruption of the carbon cycle, although the specific triggering mechanisms are not known. But what seems clear is that small changes in the ways microbes respire organic matter can have considerable global impact.
Might the current human releases of carbon trigger such a change as well, enabling microorganisms to accelerate their conversion of the huge reservoir of marine sedimentary carbon into carbon dioxide? Understanding the mechanisms of respiration in detail—including in the deep ocean and the sediment reservoirs of organic carbon—is thus critical to understanding the potential for another mass extinction.
For the modern carbon cycle, the principal problem concerns the fate of marine organic carbon that resists degradation for decades or longer. Two reservoirs are critical: dissolved organic carbon, which can persist for thousands of years, and sedimentary organic carbon, which can persist for millions of years. Imbalances in the carbon cycle are determined by shifts of these timescales or respiration rates. These rates are especially hard to determine when organic compounds are complex and/or the organic matter is tightly embedded in sedimentary rocks. New tools will enable us to measure how specific enzymes bind to specific organic molecules found in seawater. And controlled experiments will measure how microbes, organic matter, and minerals interact in sediments, developing new methods such as high-resolution calorimeters to measure the rates of degradation in the lab and in the field.
Unlike the major extinction events already mentioned, many past disturbances of the carbon cycle had no large-scale impact. What sets them apart? Sedimentary rocks deposited at different times record indications of environmental change, but the interpretation of these signals is an evolving science. The project will reconstruct, for as many events as possible, the sequence of environmental changes, focusing on fluxes of carbon. By employing mathematical techniques similar to those used to establish the modern theory of chaos, we expect to discover distinct classes of behavior that separate true instabilities from more gradual environmental change. During periods of unstable growth, important changes in the molecular composition of organic matter are likely. By analyzing these changes, we expect to discover mechanisms associated with or leading to instabilities of the Earth’s carbon cycle. Especially pertinent is the potential for rapid evolution in microbial ecosystems. Rapid evolution modifies the structure of populations and thus can alter the respiration rates—with impact on all components of ecosystems, potentially leading to instability, disruption, and the emergence of new stable states.
The central challenge will be to use these new findings to develop a theory of instability for the Earth’s carbon cycle system. Linking the specific mechanisms discovered in our studies of the past and present carbon cycles to such a theory is a key objective. It requires learning how to translate molecular, genomic, and microbial metabolic information into an understanding of evolutionary feedbacks that can drive instability and mass extinctions. Collectively, this work amounts to the design and execution of a stress test of the carbon cycle system. Our studies of the modern carbon cycle will provide a base case. Theoretical models of carbon cycle dynamics will yield specific hypotheses for the conditions that determine its unstable evolution. These hypotheses will then be tested using geochemical signals derived from past extreme environmental events. That should provide an explicit understanding of the range of stability of the carbon cycle system and the potential for a sixth extinction.
Daniel H. Rothman, Gregory P. Fournier, Katherine L. French, Eric J. Alm, Edward A. Boyle, Changqun Cao, and Roger E. Summons (2014), Methanogenic burst in the end-Permian carbon cycle, Proceedings of the National Academy of Sciences, vol. 111, no. 15, pp. 5462–5467, doi: 10.1073/pnas.1318106111