text
string
id
string
dump
string
url
string
file_path
string
language
string
language_score
float64
token_count
int64
score
float64
int_score
int64
The fossils of two interrelated ancestral mammals, newly discovered in China, suggest that the wide-ranging ecological diversity of modern mammals had a precedent more than 160 million years ago. With claws for climbing and teeth adapted for a tree sap diet, Agilodocodon scansorius is the earliest-known tree-dwelling mammaliaform (mammaliaforms are long-extinct relatives of modern mammals). The other fossil, Docofossor brachydactylus, is the earliest-known subterranean mammaliaform, possessing multiple adaptations similar to those of African golden moles, such as shovel-like paws. Docofossor also has distinct skeletal features that resemble patterns shaped by genes identified in living mammals, suggesting these genetic mechanisms operated long before the rise of modern mammals. These discoveries are reported by international teams of scientists from the University of Chicago and the Beijing Museum of Natural History in two separate papers published Feb. 13 in Science. "We consistently find with every new fossil that the earliest mammals were just as diverse in both feeding and locomotor adaptations as modern mammals," said Zhe-Xi Luo, PhD, professor of organismal biology and anatomy at the University of Chicago and an author on both papers. "The groundwork for mammalian success today appears to have been laid long ago." Agilodocodon and Docofossor provide strong evidence that arboreal and subterranean lifestyles evolved early in mammalian evolution, convergent with those of true mammals. These two shrew-sized creatures - members of the mammaliaform order Docodonta - have unique adaptations tailored for their respective ecological habitats. Agilodocodon, which lived roughly 165 million years ago, had hands and feet with curved horny claws and limb proportions typical of mammals that live in trees or bushes. It was adapted for feeding on the gum or sap of trees, with spade-like front teeth to gnaw into bark. This adaptation is similar to the teeth of some modern New World monkeys, and is the earliest-known evidence of gumnivorous feeding in mammaliaforms. Agilodocodon also had well-developed, flexible elbows and wrist and ankle joints that allowed for much greater mobility, all characteristics of climbing mammals. "The finger and limb bone dimensions of Agilodocodon match up with those of modern tree-dwellers, and its incisors are evidence it fed on plant sap," said study co-author David Grossnickle, graduate student at the University of Chicago. "It's amazing that these arboreal adaptations occurred so early in the history of mammals and shows that at least some extinct mammalian relatives exploited evolutionarily significant herbivorous niches, long before true mammals." Docofossor, which lived around 160 million years ago, had a skeletal structure and body proportions strikingly similar to those of the modern-day African golden mole. It had shovel-like fingers for digging, short and wide upper molars typical of mammals that forage underground, and a sprawling posture indicative of subterranean movement. Docofossor had reduced bone segments in its fingers, leading to shortened but wide digits. African golden moles possess almost exactly the same adaptation, which provides an evolutionary advantage for digging mammals. This characteristic is due to the fusion of bone joints during development - a process influenced by the genes BMP and GDF-5.
Because of the many anatomical similarities, the researchers hypothesize that this genetic mechanism may have played a comparable role in early mammal evolution, as in the case of Docofossor. The spines and ribs of both Agilodocodon and Docofossor also show evidence for the influence of genes seen in modern mammals. Agilodocodon has a sharp boundary between the thoracic ribcage and the lumbar vertebrae, which have no ribs. Docofossor, however, shows a gradual thoracic-to-lumbar transition. These shifting patterns of thoracic-lumbar transition have been seen in modern mammals and are known to be regulated by the genes Hox 9-10 and Myf 5-6. That these ancient mammaliaforms had similar developmental patterns is evidence that these gene networks could have functioned in a similar way long before true mammals evolved. "We believe the shortened digits of Docofossor, which is a dead ringer for modern golden moles, could very well have been caused by BMP and GDF," Luo said. "We can now provide fossil evidence that gene patterning that causes variation in modern mammalian skeletal development also operated in basal mammals all the way back in the Jurassic." Early mammals were once thought to have had limited ecological opportunities to diversify during the dinosaur-dominated Mesozoic era. However, Agilodocodon, Docofossor and numerous other fossils - including Castorocauda, a swimming, fish-eating mammaliaform described by Luo and colleagues in 2006 - provide strong evidence that ancestral mammals adapted to wide-ranging environments despite competition from dinosaurs. "We know that modern mammals are spectacularly diverse, but it was unknown whether early mammals managed to diversify in the same way," Luo said. "These new fossils help demonstrate that early mammals did indeed have a wide range of ecological diversity. It appears dinosaurs did not dominate the Mesozoic landscape as much as previously thought." The study, "Evolutionary development in basal mammaliaforms as revealed by a docodontan," was supported by the Beijing Science and Technology Commission, the Ministry of Science and Technology of China and the University of Chicago. Additional authors include Qing-Jin Meng, Qiang Ji, Di Liu, Yu-Guang Zhang and April I. Neander. The study "An arboreal docodont from the Jurassic and mammaliaform ecological diversification," was supported by the Beijing Science and Technology Commission, the Ministry of Science and Technology of China, the Chinese Academy of Geological Science and the University of Chicago. Additional authors include Qing-Jin Meng, Qiang Ji, Yu-Guang Zhang and Di Liu.
<urn:uuid:ef9f11da-f3b2-4975-a4ff-7f6468619123>
CC-MAIN-2018-51
https://www.eurekalert.org/pub_releases/2015-02/uocm-eaa020615.php
s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376828318.79/warc/CC-MAIN-20181217042727-20181217064727-00375.warc.gz
en
0.951452
1,233
4.03125
4
Civil War Anti-War Protests Like some residents of other Northern states, numerous Ohioans strenuously objected to the American Civil War. Various reasons existed for the reluctance of these Ohioans and their fellow Northerners to support the Union. A sizable number of white Ohioans, especially those living along the Ohio River, had migrated to the state from slaveholding states. While opponents of the war could not legally own slaves in Ohio, many of them had family members residing in the South who did own African American slaves. These people often sympathized with slaveholders, agreeing with many white Southerners that the federal government did not have the power to limit slavery's existence. These Ohioans preferred political compromise to warfare. Other Ohioans had economic ties to the South; they either operated businesses in the South or engaged in trade with Southerners. They feared that a war would hurt them financially, as it theoretically could end trade between Ohio and the Southern states. Some Ohioans did not support the war for religious reasons. Numerous groups in Ohio objected to violence due to their religious beliefs. These people included members of the Society of Friends, the Mennonites, the Amish, and several other denominations. While these groups did not formally protest the war, many of their followers refused to participate in the conflict. Some members of these faiths violated their religious teachings and did take up arms against the Confederacy. While groups like the Quakers opposed violence, they also believed that slavery was equally unjust and against God's will. Later, some Ohioans began to oppose the Civil War after Abraham Lincoln issued the Emancipation Proclamation in September 1862. That document declared that the slaves in areas still in rebellion as of January 1, 1863 would receive their freedom on that date. By issuing the proclamation, Lincoln made ending slavery one of the North's war aims. Many Northerners, including some Ohioans, were willing to fight to reunite the nation and to secure a government where the majority ruled, but they were unwilling to fight a war to terminate slavery. This was especially true among some soldiers from the working class. These men feared that, with slavery's end, African Americans would migrate to the North, taking jobs away from white workers. Several Northern soldiers, including some Ohioans, deserted from the Union army in protest of the Emancipation Proclamation. A final and, perhaps, most important reason for anti-war protests was the draft. In 1863, the United States government implemented the Conscription Act, which was also known as the Enrollment Act. This act required states to draft men to serve in the Union military if individual states did not meet their enlistment quotas through volunteers. The Conscription Act permitted drafted men to escape service by paying a commutation fee of three hundred dollars or by hiring a substitute. Draft riots occurred in both New York City, New York, and Boston, Massachusetts. Some Ohioans also strongly objected to the Conscription Act. Many of the opponents were members of the anti-war or "Peace" wing of the Democratic Party and encouraged men to resist the draft or to desert once they were drafted. In Hoskinville, residents attempted to hide a deserter from government authorities. The local federal marshal called in soldiers to arrest the deserter.
In Holmes County, nine hundred to one thousand men created a makeshift fort to defend themselves from federal officials sent to enforce the Conscription Act. These men were responding to attempts by the federal government to enlist men into the Union army during June 1863. A mob had attacked an officer sent to enlist men into the service, and a provost marshal captured the ringleaders behind the assault. A group of residents freed the four men arrested. They built Fort Fizzle to resist future attempts to arrest the ringleaders and to prevent the draft's enforcement. They equipped themselves with guns and four artillery pieces, although some scholars doubt that any cannons were actually inside the fort. Approximately 420 federal soldiers arrived to disarm the men and to implement the draft. A brief skirmish occurred, with the soldiers emerging victorious. Two draft resisters were wounded. The demonstrators dispersed into the woods, and the Battle of Fort Fizzle, as it became known, quickly ended. The soldiers continued to hunt for the protestors. Eventually a deal was brokered in which the four men originally arrested would surrender. When the men turned themselves in, a majority of the soldiers returned to Columbus. This was just one of many protests in response to the draft in Ohio. Unlike at the Battle of Fort Fizzle, government authorities easily put down most of these uprisings without having to resort to violence. Clement Vallandigham and the Peace Democrats Several Ohioans participated in a peace convention during early 1861. The convention was held in Washington, DC, and the delegates hoped to convince President Abraham Lincoln either to agree to the Confederacy's demands to get its citizens to rejoin the Union or simply to let the Southern states leave the United States. Lincoln ignored the peace convention's attempt to end the conflict peacefully. Politically, most people who participated in the peace convention affiliated themselves with the Democratic Party. These people became known as Peace Democrats. Clement Vallandigham was the best-known Peace Democrat in Ohio. He helped organize a rally for the Democratic Party at Mount Vernon, Ohio, on May 1, 1863. Peace Democrats Vallandigham, Samuel Cox, and George Pendleton all delivered speeches denouncing General Order No. 38. In April 1863, General Ambrose Burnside, commander of the Department of the Ohio, had issued General Order No. 38. Burnside placed his headquarters in Cincinnati. Located on the Ohio River, just north of the slave state of Kentucky, Cincinnati had a number of residents sympathetic to the Confederacy. Burnside hoped to intimidate Confederate sympathizers with General Order No. 38. General Order No. 38 stated: The habit of declaring sympathy for the enemy will not be allowed in this department. Persons committing such offenses will be at once arrested with a view of being tried or sent beyond our lines into the lines of their friends. It must be understood that treason, expressed or implied, will not be tolerated in this department. Burnside also declared that, in certain cases, violations of General Order No. 38 could result in death. Vallandigham was so opposed to the order that he allegedly said that he "despised it, spit upon it, trampled it under his feet." He also supposedly encouraged his fellow Peace Democrats to openly resist Burnside.
Vallandigham went on to chastise President Lincoln for not seeking a peaceable and immediate end to the Civil War and for allowing General Burnside to thwart citizens' rights under a free government. In attendance at the Mount Vernon rally were two army officers under Burnside's command. They reported to Burnside that Vallandigham had violated General Order No. 38. The general ordered his immediate arrest. On May 5, 1863, a company of soldiers arrested Vallandigham at his home in Dayton and brought him to Cincinnati to stand trial. Burnside charged Vallandigham with the following crimes: Publicly expressing, in violation of General Orders No. 38, from Head-quarters Department of Ohio, sympathy for those in arms against the Government of the United States, and declaring disloyal sentiments and opinions, with the object and purpose of weakening the power of the Government in its efforts to suppress an unlawful rebellion. A military tribunal heard the case, and Vallandigham offered no serious defense against the charges. He contended that military courts had no jurisdiction over his case. The tribunal found Vallandigham guilty and sentenced him to remain in a United States prison for the remainder of the war. Vallandigham's attorney, George Pugh, appealed the tribunal's decision to Humphrey Leavitt, a judge on the federal circuit court. Pugh, like his client, claimed that the military court did not have proper jurisdiction in this case and had violated Vallandigham's constitutional rights. Judge Leavitt rejected Vallandigham's argument. He agreed with General Burnside that military authority was necessary during a time of war to ensure that opponents of the United States Constitution did not succeed in overthrowing the Constitution and the rights that it guaranteed United States citizens. As a result of Leavitt's decision, authorities were to send Vallandigham to federal prison. President Lincoln feared that Peace Democrats across the North might rise up to prevent Vallandigham's detention. The president commuted Vallandigham's sentence to exile in the Confederacy. On May 25, Burnside sent Vallandigham into Confederate lines. Some Peace Democrats resorted to more radical means, including subversion, to protest the Civil War. Some of these men formed secret societies such as the Sons of Liberty. Members of these organizations resided primarily in Northern and border states. In February 1864, Clement Vallandigham was elected supreme commander of the Sons of Liberty. Ohio government officials estimated that between 80,000 and 110,000 Ohioans belonged to these organizations, but most historians discount these numbers as being dramatically higher than the groups' actual membership. Rumors circulated throughout the North during 1864 that Confederate sympathizers intended to free Southern prisoners at several prison camps, including Johnson's Island and Camp Chase in Ohio. These freed prisoners would form the basis of a new Confederate army that would operate in the heart of the Union. Supposedly, General John Hunt Morgan, who had raided Ohio the previous year, would return to the state and assist this new army. The plot never materialized. William Rosecrans, assigned to oversee the Department of the Missouri, discovered the planned uprising and warned Northern governors to remain cautious. John Brough, Ohio's governor, sent out spies to infiltrate the sympathizer groups. These men succeeded, stopping the uprising before it could occur.
Confederate supporters hoped to capture the Michigan, a gunboat operating on Lake Erie near Sandusky. They would then use the gunboat to free Confederate prisoners at Johnson's Island. Union authorities arrested the plot's ringleader, Charles Cole. While some Ohioans did openly oppose the Civil War, these people remained a distinct minority. Most Ohioans supported the war, and a very large number of them volunteered for military service. Nevertheless, at least to some degree, the war protesters caused difficulties for both the state and federal governments and hampered the government's ability to wage the war.
<urn:uuid:8e821f4f-3015-49a8-89d0-14898c3ced65>
CC-MAIN-2018-47
http://www.ohiohistorycentral.org/w/Civil_War_Anti-War_Protests
s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039746800.89/warc/CC-MAIN-20181120211528-20181120233528-00474.warc.gz
en
0.963449
2,528
4.5
4
5.2 The HSV Colorspace The perception of color and our way of talking about it in everyday life is not well served by the RGB colorspace. If we're thinking of repainting the walls of the living room, for example, we usually think about what shade of color it should be, how bright we want it, and whether it should be pastel or vivid. The first thing we usually notice about a color is its hue. Hue describes the shade of color and where that color is found in the color spectrum. Red, yellow, and purple are words that describe hue. The figure Hue, Saturation, and Value illustrates the range of hues, H, as a circle represented by values from 0 to 360. The reasons for this will become clear shortly. The next most significant aspect of color is typically the saturation, S. The saturation describes how pure the hue is with respect to a white reference. For example, a color that is all red and no white is fully saturated. If we add some white to the red, the result becomes more pastel, and the color shifts from red to pink. The hue is still red but it has become less saturated. This is illustrated in the vertical bar of Figure 5.3. Saturation is a percentage that ranges from 0 to 100; a pure red that has no white is 100% saturated. Finally, a color also has a brightness. This is a relative description of how much light is coming from the color. If the color reflects a lot of light, we would say that it is bright. Imagine seeing a red sportscar during the day. Its color looks bright. Compare this with the perception of the car as night is falling. We can see that the car is red, but it looks duller because it is reflecting less light into the eye. Less light means the color looks darker. In the GIMP, the most important measure of brightness is a quantity called value. However, there are also other measures of brightness that will be introduced shortly. For the moment, though, the horizontal bar in Figure 5.3 illustrates a range of red values. Value, like saturation, is a percentage that goes from 0 to 100, and this range can be thought of as the amount of light illuminating a color. For example, when the hue is red and the value is high, the color looks bright; when the value is low, it looks dark. Thus, hue, saturation, and value form an alternative colorspace. Any color can be decomposed into these three components and, as for RGB, it is possible to represent this space as a cube. The figure Decomposing a Color Image into its HSV Components illustrates the result of using Image:Image/Mode/Decompose on the color image shown in (a). Choosing the HSV option in the Decompose dialog produces the decomposition shown in (b), (c), and (d). It is interesting to note that hue really doesn't change much; it is almost constant over broad regions of the image. For example, although there is significant detail in the saturation and value components of the sky, the hue is quite uniform there. Of the three, the value component is the most detailed. Because colors are created on the monitor using mixes of red, green, and blue, it is useful and instructive to see how the HSV colorspace lives inside of the RGB cube.
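The hue/saturation/value decomposition described above can also be checked numerically. Below is a minimal sketch using Python's standard colorsys module (an illustration only, not part of the GIMP itself); the example colors are arbitrary choices meant to echo the red sportscar discussion.

```python
# A small demonstration of the HSV ideas above using Python's standard
# colorsys module. Hue comes back in [0, 1] and is rescaled to the
# 0-360 degree hue circle; saturation and value are shown as percentages.
import colorsys

def to_hsv(r, g, b):
    h, s, v = colorsys.rgb_to_hsv(r, g, b)   # inputs are floats in [0, 1]
    return h * 360, s * 100, v * 100

examples = [
    ("pure red", (1.0, 0.0, 0.0)),   # fully saturated, full value
    ("pink",     (1.0, 0.5, 0.5)),   # red with white mixed in
    ("dark red", (0.4, 0.0, 0.0)),   # red reflecting less light
]

for name, rgb in examples:
    h, s, v = to_hsv(*rgb)
    print(f"{name:9s} hue={h:5.1f} deg  saturation={s:5.1f}%  value={v:5.1f}%")

# Mixing in white lowers saturation (pink) while the hue stays at 0 degrees;
# dimming lowers value (dark red) while saturation stays at 100%.
```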
<urn:uuid:eb5759e1-fedf-46ae-9082-4bc34778c692>
CC-MAIN-2021-25
https://www.linuxtopia.org/online_books/graphics_tools/gimp_advanced_guide/gimp_guide_node51.html
s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488269939.53/warc/CC-MAIN-20210621085922-20210621115922-00516.warc.gz
en
0.928126
770
4.1875
4
Stereoscopy (also called stereoscopics or 3D imaging) is a technique for creating or enhancing the illusion of depth in an image by means of stereopsis for binocular vision. The word stereoscopy derives from Greek στερεός (stereos), meaning "firm, solid", and σκοπέω (skopeō), meaning "to look, to see". Any stereoscopic image is called a stereogram. Originally, stereogram referred to a pair of stereo images which could be viewed using a stereoscope. Most stereoscopic methods present two offset images separately to the left and right eye of the viewer. These two-dimensional images are then combined in the brain to give the perception of 3D depth. This technique is distinguished from 3D displays that display an image in three full dimensions, allowing the observer to gain information about the displayed three-dimensional objects through head and eye movements. Stereoscopy creates the illusion of three-dimensional depth from two-dimensional images. Human vision, including the perception of depth, is a complex process which only begins with the acquisition of visual information taken in through the eyes; much processing ensues within the brain, as it strives to make intelligent and meaningful sense of the raw information provided. One of the very important visual functions that occur within the brain as it interprets what the eyes see is that of assessing the relative distances of various objects from the viewer, and the depth dimension of those same perceived objects. The brain makes use of a number of cues to determine relative distances and depth in a perceived scene. (All of these cues, with the exception of the first two, stereopsis and focus, are present in traditional two-dimensional images such as paintings, photographs, and television.) Stereoscopy is the production of the illusion of depth in a photograph, movie, or other two-dimensional image by presenting a slightly different image to each eye, and thereby adding the first of these cues (stereopsis). Both of the 2D offset images are then combined in the brain to give the perception of 3D depth. It is important to note that since all points in the image focus at the same plane regardless of their depth in the original scene, the second cue, focus, is still not duplicated and therefore the illusion of depth is incomplete. There are also two effects of stereoscopy that are unnatural for human vision: first, the mismatch between convergence and accommodation, caused by the difference between an object's perceived position in front of or behind the display or screen and the real origin of that light; and second, possible crosstalk between the eyes, caused by imperfect image separation in some methods. Although the term "3D" is ubiquitously used, it is important to note that the presentation of dual 2D images is distinctly different from displaying an image in three full dimensions. The most notable difference is that, in the case of "3D" displays, the observer's head and eye movement will not increase information about the 3-dimensional objects being displayed. Holographic displays and volumetric displays are examples of displays that do not have this limitation. Similar to the technology of sound reproduction, in which it is not possible to recreate a full 3-dimensional sound field merely with two stereophonic speakers, it is likewise an overstatement of capability to refer to dual 2D images as being "3D".
The accurate term "stereoscopic" is more cumbersome than the common misnomer "3D", which has become entrenched after many decades of unquestioned misuse. Although most stereoscopic displays do not qualify as real 3D displays, all real 3D displays are also stereoscopic displays because they meet the lower criteria as well. Most 3D displays use this stereoscopic method to convey images. The method was first invented by Sir Charles Wheatstone in 1838, and improved by Sir David Brewster, who made the first portable 3D viewing device. Wheatstone originally used his stereoscope (a rather bulky device) with drawings because photography was not yet available, yet his original paper seems to foresee the development of a realistic imaging method: For the purposes of illustration I have employed only outline figures, for had either shading or colouring been introduced it might be supposed that the effect was wholly or in part due to these circumstances, whereas by leaving them out of consideration no room is left to doubt that the entire effect of relief is owing to the simultaneous perception of the two monocular projections, one on each retina. But if it be required to obtain the most faithful resemblances of real objects, shadowing and colouring may properly be employed to heighten the effects. Careful attention would enable an artist to draw and paint the two component pictures, so as to present to the mind of the observer, in the resultant perception, perfect identity with the object represented. Flowers, crystals, busts, vases, instruments of various kinds, &c., might thus be represented so as not to be distinguished by sight from the real objects themselves. Stereoscopy is used in photogrammetry and also for entertainment through the production of stereograms. Stereoscopy is useful in viewing images rendered from large multi-dimensional data sets such as those produced by experimental data. An early patent for 3D imaging in cinema and television was granted to physicist Theodor V. Ionescu in 1936. Modern industrial three-dimensional photography may use 3D scanners to detect and record three-dimensional information. Three-dimensional depth information can be reconstructed from two images by using a computer to match corresponding pixels in the left and right images. Solving this correspondence problem, a central topic in computer vision, aims to create meaningful depth information from two images; a minimal code sketch of this matching step is shown below. Anatomically, there are three levels of binocular vision required to view stereo images; these functions develop in early childhood. In some people, strabismus disrupts the development of stereopsis; however, orthoptic treatment can be used to improve binocular vision. A person's stereoacuity determines the minimum image disparity they can perceive as depth. It is believed that approximately 12% of people are unable to properly see 3D images, due to a variety of medical conditions. According to another experiment, up to 30% of people have very weak stereoscopic vision, which prevents depth perception based on stereo disparity and nullifies or greatly decreases the immersive effect of stereo for them. Traditional stereoscopic photography consists of creating a 3D illusion starting from a pair of 2D images, a stereogram. The easiest way to enhance depth perception in the brain is to provide the eyes of the viewer with two different images, representing two perspectives of the same object, with a minor deviation equal or nearly equal to the perspectives that both eyes naturally receive in binocular vision.
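As a concrete illustration of the pixel-matching step mentioned above, here is a minimal sketch using OpenCV's block-matching stereo correspondence algorithm. The file names, focal length, and baseline are hypothetical placeholders, and the inputs are assumed to be a rectified grayscale stereo pair.

```python
# A sketch of depth reconstruction from a stereo pair via OpenCV's
# StereoBM block matcher. "left.png" and "right.png" are placeholder
# names for a rectified grayscale pair; the camera numbers below are
# assumed example values, not measured parameters.
import cv2
import numpy as np

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# For each pixel in the left image, StereoBM searches along the same
# row of the right image for the best-matching block; the horizontal
# offset it finds is the disparity.
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right)          # int16, scaled by 16
disp_px = disparity.astype(np.float32) / 16.0

# Disparity is inversely proportional to depth:
#   depth = focal_length_px * baseline_m / disparity_px
focal_length_px = 700.0                          # assumed value
baseline_m = 0.065                               # assumed camera separation
depth_m = np.where(disp_px > 0,
                   focal_length_px * baseline_m / np.maximum(disp_px, 1e-6),
                   0.0)                          # 0 marks unmatched pixels
```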
To avoid eyestrain and distortion, each of the two 2D images should be presented to the viewer so that any object at infinite distance is perceived by the eye as being straight ahead, the viewer's eyes being neither crossed nor diverging. When the picture contains no object at infinite distance, such as a horizon or a cloud, the pictures should be spaced correspondingly closer together. The principal advantages of side-by-side viewers are the lack of diminution of brightness, allowing the presentation of images at very high resolution and in full-spectrum color, simplicity of creation, and the fact that little or no additional image processing is required. Under some circumstances, such as when a pair of images is presented for freeviewing, no device or additional optical equipment is needed. The principal disadvantage of side-by-side viewers is that large image displays are not practical and resolution is limited by the lesser of the display medium or the human eye. This is because as the dimensions of an image are increased, either the viewing apparatus or the viewers themselves must move proportionately further away from it in order to view it comfortably. Moving closer to an image in order to see more detail would only be possible with viewing equipment that adjusted to the difference. Freeviewing is viewing a side-by-side image pair without using a viewing device. Prismatic, self-masking glasses are now being used by some cross-eyed-view advocates. These reduce the degree of convergence required and allow large images to be displayed. However, any viewing aid that uses prisms, mirrors or lenses to assist fusion or focus is simply a type of stereoscope, excluded by the customary definition of freeviewing. Stereoscopically fusing two separate images without the aid of mirrors or prisms while simultaneously keeping them in sharp focus without the aid of suitable viewing lenses inevitably requires an unnatural combination of eye vergence and accommodation. Simple freeviewing therefore cannot accurately reproduce the physiological depth cues of the real-world viewing experience. Different individuals may experience differing degrees of ease and comfort in achieving fusion and good focus, as well as differing tendencies to eye fatigue or strain. An autostereogram is a single-image stereogram (SIS), designed to create the visual illusion of a three-dimensional (3D) scene within the human brain from an external two-dimensional image. In order to perceive 3D shapes in these autostereograms, one must overcome the normally automatic coordination between focusing and vergence. The stereoscope is essentially an instrument in which two photographs of the same object, taken from slightly different angles, are simultaneously presented, one to each eye. A simple stereoscope is limited in the size of the image that may be used. A more complex stereoscope uses a pair of horizontal periscope-like devices, allowing the use of larger images that can present more detailed information in a wider field of view. Some stereoscopes are designed for viewing transparent photographs on film or glass, known as transparencies or diapositives and commonly called slides. Some of the earliest stereoscope views, issued in the 1850s, were on glass. In the early 20th century, 45x107 mm and 6x13 cm glass slides were common formats for amateur stereo photography, especially in Europe. In later years, several film-based formats were in use.
The best-known formats for commercially issued stereo views on film are Tru-Vue, introduced in 1931, and View-Master, introduced in 1939 and still in production. For amateur stereo slides, the Stereo Realist format, introduced in 1947, is by far the most common. The user typically wears a helmet or glasses with two small LCD or OLED displays with magnifying lenses, one for each eye. The technology can be used to show stereo films, images or games, but it can also be used to create a virtual display. Head-mounted displays may also be coupled with head-tracking devices, allowing the user to "look around" the virtual world by moving their head, eliminating the need for a separate controller. Performing this update quickly enough to avoid inducing nausea in the user requires a great amount of computer image processing. If six-axis position sensing (direction and position) is used, then the wearer may move about within the limitations of the equipment used. Owing to rapid advancements in computer graphics and the continuing miniaturization of video and other equipment, these devices are beginning to become available at more reasonable cost. Head-mounted or wearable glasses may be used to view a see-through image imposed upon the real-world view, creating what is called augmented reality. This is done by reflecting the video images through partially reflective mirrors, while the real-world view is seen through the mirrors' reflective surface. Experimental systems have been used for gaming, where virtual opponents may peek from real windows as a player moves about. This type of system is expected to have wide application in the maintenance of complex systems, as it can give a technician what is effectively "x-ray vision" by combining computer graphics rendering of hidden elements with the technician's natural vision. Additionally, technical data and schematic diagrams may be delivered to this same equipment, eliminating the need to obtain and carry bulky paper documents. A virtual retinal display (VRD), also known as a retinal scan display (RSD) or retinal projector (RP), not to be confused with a "Retina Display", is a display technology that draws a raster image (like a television picture) directly onto the retina of the eye. The user sees what appears to be a conventional display floating in space in front of them. For true stereoscopy, each eye must be provided with its own discrete display. To produce a virtual display that occupies a usefully large visual angle but does not involve the use of relatively large lenses or mirrors, the light source must be very close to the eye. A contact lens incorporating one or more semiconductor light sources is the form most commonly proposed. As of 2013, the inclusion of suitable light-beam-scanning means in a contact lens is still very problematic, as is the alternative of embedding a reasonably transparent array of hundreds of thousands (or millions, for HD resolution) of accurately aligned sources of collimated light. There are two categories of 3D viewer technology, active and passive. Active viewers have electronics which interact with a display. Passive viewers filter constant streams of binocular input to the appropriate eye. A shutter system works by openly presenting the image intended for the left eye while blocking the right eye's view, then presenting the right-eye image while blocking the left eye, and repeating this so rapidly that the interruptions do not interfere with the perceived fusion of the two images into a single 3D image.
It generally uses liquid crystal shutter glasses. Each eye's glass contains a liquid crystal layer which has the property of becoming dark when voltage is applied, being otherwise transparent. The glasses are controlled by a timing signal that allows the glasses to alternately darken over one eye, and then the other, in synchronization with the refresh rate of the screen. To present stereoscopic pictures, two images are projected superimposed onto the same screen through polarizing filters or presented on a display with polarized filters. For projection, a silver screen is used so that polarization is preserved. On most passive displays, every other row of pixels is polarized for one eye or the other. This method is also known as interlacing. The viewer wears low-cost eyeglasses which contain a pair of opposite polarizing filters. As each filter passes only light which is similarly polarized and blocks the oppositely polarized light, each eye sees only one of the images, and the effect is achieved. This technique uses specific wavelengths of red, green, and blue for the right eye, and different wavelengths of red, green, and blue for the left eye. Eyeglasses which filter out the very specific wavelengths allow the wearer to see a full-color 3D image. It is also known as spectral comb filtering, wavelength multiplex visualization, or super-anaglyph. Dolby 3D uses this principle. The Omega 3D/Panavision 3D system also used an improved version of this technology. In June 2012, the Omega 3D/Panavision 3D system was discontinued by DPVO Theatrical, which marketed it on behalf of Panavision, citing "challenging global economic and 3D market conditions". Although DPVO dissolved its business operations, Omega Optical continues promoting and selling 3D systems to non-theatrical markets. Omega Optical's 3D system contains projection filters and 3D glasses. In addition to the passive stereoscopic 3D system, Omega Optical has produced enhanced anaglyph 3D glasses. The Omega red/cyan anaglyph glasses use complex metal oxide thin-film coatings and high-quality annealed glass optics. Anaglyph 3D is the name given to the stereoscopic 3D effect achieved by means of encoding each eye's image using filters of different (usually chromatically opposite) colors, typically red and cyan. Anaglyph 3D images contain two differently filtered colored images, one for each eye. When viewed through the "color-coded" "anaglyph glasses", each of the two images reaches one eye, revealing an integrated stereoscopic image. The visual cortex of the brain fuses this into the perception of a three-dimensional scene or composition.
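The anaglyph encoding just described is simple enough to sketch directly: the left view supplies the red channel and the right view supplies green and blue. Below is a minimal sketch in Python using NumPy and Pillow; the file names are hypothetical placeholders for an aligned stereo pair of identical dimensions.

```python
# Red-cyan anaglyph composition: red from the left-eye view, green and
# blue from the right-eye view. "left.png"/"right.png" are placeholder
# names for an aligned stereo pair of equal size.
import numpy as np
from PIL import Image

left = np.asarray(Image.open("left.png").convert("RGB"))
right = np.asarray(Image.open("right.png").convert("RGB"))

anaglyph = np.empty_like(left)
anaglyph[..., 0] = left[..., 0]     # red channel from the left eye
anaglyph[..., 1] = right[..., 1]    # green channel from the right eye
anaglyph[..., 2] = right[..., 2]    # blue channel from the right eye

# Through red/cyan glasses each eye receives only its own view, and the
# brain fuses the pair into a single scene with apparent depth.
Image.fromarray(anaglyph).save("anaglyph.png")
```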
The ChromaDepth procedure of American Paper Optics is based on the fact that with a prism, colors are separated by varying degrees. The ChromaDepth eyeglasses contain special view foils, which consist of microscopically small prisms. These displace the image by an amount that depends on its color. If one uses a prism foil on one eye but not on the other, then the two perceived pictures are, depending on color, more or less widely separated. The brain produces the spatial impression from this difference. The chief advantage of this technology is that ChromaDepth pictures can also be viewed without eyeglasses, as ordinary two-dimensional images, without problems, unlike two-color anaglyphs. However, the choice of colors is limited, since they carry the depth information of the picture. If one changes the color of an object, then its observed distance also changes. The Pulfrich effect is based on the phenomenon of the human eye processing images more slowly when there is less light, as when looking through a dark lens. Because the Pulfrich effect depends on motion in a particular direction to instigate the illusion of depth, it is not useful as a general stereoscopic technique. For example, it cannot be used to show a stationary object apparently extending into or out of the screen; similarly, objects moving vertically will not be seen as moving in depth. Incidental movement of objects will create spurious artifacts, and these incidental effects will be seen as artificial depth not related to actual depth in the scene. In the over/under format, stereoscopic viewing is achieved by placing an image pair one above the other. Special viewers are made for this format that tilt the right eye's view slightly up and the left eye's view slightly down. The most common one with mirrors is the View Magic. Another, with prismatic glasses, is the KMQ viewer. A recent usage of this technique is the openKMQ project. Autostereoscopic display technologies use optical components in the display, rather than worn by the user, to enable each eye to see a different image. Because headgear is not required, this is also called "glasses-free 3D". The optics split the images directionally into the viewer's eyes, so the display viewing geometry requires limited head positions that will achieve the stereoscopic effect. Automultiscopic displays provide multiple views of the same scene, rather than just two. Each view is visible from a different range of positions in front of the display. This allows the viewer to move left-right in front of the display and see the correct view from any position. The technology includes two broad classes of displays: those that use head-tracking to ensure that each of the viewer's two eyes sees a different image on the screen, and those that display multiple views so that the display does not need to know where the viewers' eyes are directed. Examples of autostereoscopic display technology include the lenticular lens, parallax barrier, volumetric display, holography, and light field displays. Laser holography, in its original "pure" form of the photographic transmission hologram, is the only technology yet created which can reproduce an object or scene with such complete realism that the reproduction is visually indistinguishable from the original, given the original lighting conditions. It creates a light field identical to that which emanated from the original scene, with parallax about all axes and a very wide viewing angle. The eye differentially focuses objects at different distances and subject detail is preserved down to the microscopic level. The effect is exactly like looking through a window. Unfortunately, this "pure" form requires the subject to be laser-lit and completely motionless—to within a minor fraction of the wavelength of light—during the photographic exposure, and laser light must be used to properly view the results. Most people have never seen a laser-lit transmission hologram. The types of holograms commonly encountered have seriously compromised image quality so that ordinary white light can be used for viewing, and non-holographic intermediate imaging processes are almost always resorted to, as an alternative to using powerful and hazardous pulsed lasers, when living subjects are photographed.
Although the original photographic processes have proven impractical for general use, the combination of computer-generated holograms (CGH) and optoelectronic holographic displays, both under development for many years, has the potential to transform the half-century-old pipe dream of holographic 3D television into a reality; so far, however, the large amount of calculation required to generate just one detailed hologram, and the huge bandwidth required to transmit a stream of them, have confined this technology to the research laboratory. Volumetric displays use some physical mechanism to display points of light within a volume. Such displays use voxels instead of pixels. Volumetric displays include multiplanar displays, which have multiple display planes stacked up, and rotating panel displays, where a rotating panel sweeps out a volume. Other technologies have been developed to project light dots in the air above a device. An infrared laser is focused on the destination in space, generating a small bubble of plasma which emits visible light. Integral imaging is an autostereoscopic or multiscopic 3D display, meaning that it displays a 3D image without the use of special glasses on the part of the viewer. It achieves this by placing an array of microlenses (similar to a lenticular lens) in front of the image, where each lens looks different depending on viewing angle. Thus, rather than displaying a 2D image that looks the same from every direction, it reproduces a 4D light field, creating stereo images that exhibit parallax when the viewer moves. Wiggle stereoscopy is an image display technique achieved by quickly alternating display of the left and right sides of a stereogram. It is often found in animated GIF format on the web; online examples are visible in the New York Public Library stereogram collection. The technique is also known as "Piku-Piku". For general-purpose stereo photography, where the goal is to duplicate natural human vision and give a visual impression as close as possible to actually being there, the correct baseline (the distance between where the right and left images are taken) would be the same as the distance between the eyes. When images taken with such a baseline are viewed using a viewing method that duplicates the conditions under which the picture is taken, then the result would be an image pretty much the same as what would be seen at the site the photo was taken. This could be described as "ortho stereo." There are, however, situations where it might be desirable to use a longer or shorter baseline. The factors to consider include the viewing method to be used and the goal in taking the picture. Note that the concept of baseline also applies to other branches of stereography, such as stereo drawings and computer-generated stereo images, but it involves the point of view chosen rather than the actual physical separation of cameras or lenses. For any branch of stereoscopy the concept of the stereo window is important. If a scene is viewed through a window, the entire scene would normally be behind the window; if the scene is distant, it would be some distance behind the window, and if it is nearby, it would appear to be just beyond the window. An object smaller than the window itself could even go through the window and appear partially or completely in front of it. The same applies to a part of a larger object that is smaller than the window. The goal of setting the stereo window is to duplicate this effect.
To truly understand the concept of window adjustment it is necessary to understand where the stereo window itself is. In the case of projected stereo, including "3D" movies, the window would be the surface of the screen. With printed material the window is at the surface of the paper. When stereo images are seen by looking into a viewer, the window is at the position of the frame. In the case of virtual reality, the window seems to disappear as the scene becomes truly immersive. The entire scene can be moved backwards or forwards in depth, relative to the stereo window, by horizontally sliding the left and right eye views relative to each other. Moving either or both images away from the center will bring the whole scene away from the viewer, whereas moving either or both images toward the center will move the whole scene toward the viewer. Any objects in the scene that have no horizontal offset will appear at the same depth as the stereo window. There are several considerations in deciding where to place the scene relative to the window. First, in the case of an actual physical window, the left eye will see less of the left side of the scene and the right eye will see less of the right side of the scene, because the view is partly blocked by the window frame. This principle is known as "less to the left on the left" or 3L, and is often used as a guide when adjusting the stereo window where all objects are to appear behind the window. When the images are moved further apart, the outer edges are cropped by the same amount, thus duplicating the effect of a window frame. Another consideration involves deciding where individual objects are placed relative to the window. It would be normal for the frame of an actual window to partly overlap or "cut off" an object that is behind the window. Thus an object behind the stereo window might be partly cut off by the frame or side of the stereo window. So the stereo window is often adjusted to place objects cut off by the window behind the window. If an object, or part of an object, is not cut off by the window, then it could be placed in front of it, and the stereo window may be adjusted with this in mind. This effect is how swords, bugs, flashlights, etc. often seem to "come off the screen" in 3D movies. If an object which is cut off by the window is placed in front of it, an effect results that is somewhat unnatural and is usually considered undesirable; this is often called a "window violation". This can best be understood by returning to the analogy of an actual physical window. An object in front of the window would not be cut off by the window frame but would, rather, continue to the right and/or left of it. This can't be duplicated in stereography techniques other than virtual reality, so the stereo window will normally be adjusted to avoid window violations. There are, however, circumstances where they could be considered permissible. A third consideration is viewing comfort. If the window is adjusted too far back, the right and left images of distant parts of the scene may be more than 2.5" apart, requiring the viewer's eyes to diverge in order to fuse them. This results in image doubling and/or viewer discomfort. In such cases a compromise is necessary between viewing comfort and the avoidance of window violations.
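The horizontal-sliding adjustment described above is easy to express in code. Below is a minimal sketch, assuming the two views are NumPy image arrays of equal size; the function name and the shift amount are illustrative choices, not part of any standard library.

```python
# Window adjustment by horizontal sliding, as described above. Moving
# the views apart (positive shift) pushes the scene back behind the
# window and crops the outer edges, mimicking a window frame; moving
# them together (negative shift) pulls the scene toward the viewer.
import numpy as np

def adjust_stereo_window(left, right, shift_px):
    s = abs(shift_px)
    if s == 0:
        return left, right
    if shift_px > 0:
        # Apart: crop the left edge of the left view and the right
        # edge of the right view (the "outer" edges).
        return left[:, s:], right[:, :-s]
    # Together: crop the opposite edges instead.
    return left[:, :-s], right[:, s:]

# Usage with dummy arrays standing in for a real stereo pair:
h, w = 480, 640
left = np.zeros((h, w, 3), dtype=np.uint8)
right = np.zeros((h, w, 3), dtype=np.uint8)
l2, r2 = adjust_stereo_window(left, right, 12)
assert l2.shape == r2.shape == (h, w - 12, 3)
```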
In stereo photography, window adjustment is accomplished by shifting or cropping the images; in other forms of stereoscopy, such as drawings and computer-generated images, the window is built into the design of the images as they are generated. It is by design that in CGI movies certain images are behind the screen whereas others are in front of it. While stereoscopy has typically been used for amusement, including stereographic cards, 3D films, anaglyph prints, and posters and books of autostereograms, there are also other uses of this technology. In the 19th century, it was realized that stereoscopic images provided an opportunity for people to experience places and things far away, and many tour sets were produced, and books were published allowing people to learn about geography, science, history, and other subjects. Such uses continued until the mid-20th century, with the Keystone View Company producing cards into the 1960s. The two cameras that make up each Mars rover's Pancam are situated 1.5 m above the ground surface and are separated by 30 cm, with 1 degree of toe-in. This allows the image pairs to be made into scientifically useful stereoscopic images, which can be viewed as stereograms or anaglyphs, or processed into 3D computer images. The ability to create realistic 3D images from a pair of cameras at roughly human height gives researchers increased insight as to the nature of the landscapes being viewed. In environments without hazy atmospheres or familiar landmarks, humans rely on stereoscopic cues to judge distance. Single-camera viewpoints are therefore more difficult to interpret. Multiple-camera stereoscopic systems like the Pancam address this problem with unmanned space exploration. Stereopair photographs provided a way to make three-dimensional (3D) visualisations of aerial photographs; since about 2000, 3D aerial views have been based mainly on digital stereo-imaging technologies. Today, cartographers generate stereopairs using computer programs in order to visualise topography in three dimensions. Computerised stereo visualisation applies stereo-matching programs. In biology and chemistry, complex molecular structures are often rendered in stereopairs. The same technique can also be applied to any mathematical (or scientific, or engineering) parameter that is a function of two variables, although in these cases it is more common for a three-dimensional effect to be created using a 'distorted' mesh or shading (as if from a distant light source).
<urn:uuid:e23d0050-ccef-4b11-859f-149bd0154c14>
CC-MAIN-2014-42
http://blekko.com/wiki/Stereoscopy?source=672620ff
s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1414119645920.6/warc/CC-MAIN-20141024030045-00239-ip-10-16-133-185.ec2.internal.warc.gz
en
0.934976
6,103
4.09375
4
Connecting Children With Nature — Learning About Trees - Grades: PreK–K Our playground is surrounded by an abundance of beautiful trees, which always seem to captivate my very curious kindergartners. Who would have guessed that a group of five- and six-year-olds would find trees more intriguing than slides and swings? Read on as I share the lessons I created to capitalize on my students' natural enthusiasm for trees. 1. Start with a discussion about trees. For this discussion, I use prompts like: - What is a tree? - What do you like best about trees? - Why do people like to have trees in their yards and parks? - What would our world be like without trees? 2. Then read the book The Giving Tree by Shel Silverstein. It is a great introduction to how trees help us. 3. Next, discuss the specific ways that trees help us. I choose three of these gifts and illustrate them with tree-related activities. a. Trees give us food. Students were surprised to find that many of the fruits and nuts we enjoy come from trees. - Brainstorm a list of things people eat that come from trees. - Sample foods that come from trees. - Graph favorite edible tree products. - Make maple syrup. - After reading The Apple Pie Tree, allow students to sample apple pie. b. Trees give us wood. Many of the products we use on a daily basis are made from the wood we get from trees. - Have students go on a scavenger hunt throughout the school identifying objects that are made of wood. - Invite students to bring things from home made of wood or to cut photographs of wooden things out of magazines. During share time, have your students discuss the importance of these objects. - Let students use wooden popsicle sticks to build houses and picture frames. c. Trees are a home for animals. Discuss how trees provide a home for many animals. I like to begin this discussion by reading Tree Homes. This book will help students learn about the different types of animals that make their homes in trees. - Go on a nature walk to observe some of the animals that live in and visit trees. - Have students use a T-chart to distinguish animals that live in trees from those that do not. - Imitate your favorite tree animal. 4. Adopt a tree. My students adopted a tree near our school as a special friend. We took a photograph of the tree and posted it in our classroom. Students learned about this type of tree, what kind of life goes on around it, and how it changes from season to season. We also discussed ways we can help our new friend stay healthy (e.g., watering it, protecting it from bicycles, lawn mowers, vandals, etc.). We plan to visit our tree periodically and watch for changes. Students will record their observations in their tree journals. We also used cubes to measure the thickness of our tree trunk. The children placed the cubes around the trunk and counted how many it took to complete the circle. 5. Other tree-related assignments include: - Count how many trees you see on your way to school. - Take a picture with your favorite tree. - Draw a picture illustrating trees of the four seasons. - Explore the various shapes of trees. - Plant a tree outside your school. - Label the parts of a tree. 6. For more on trees, visit: - Trees Are Terrific: A child-friendly site from the University of Illinois Urban Programs Resource Network. - Mrs. Jones' Room: A page full of tree-related lessons, songs, and links from another kindergarten teacher. - First-School: Tree-related crafts and activities for kindergarten and preschool classes.
- FOSSweb: The Trees Module from FOSS, of the National Science Foundation and the University of California at Berkeley.
- Real Trees 4 Kids: A wealth of information and resources for children in grades K–12.

7. Take a look at more of my favorite books to use during a study on trees.

Kindergartners respond well to a nature-inspired curriculum. Trees provide opportunities for children to learn about math, science, and more while exploring one of nature's wonders. I hope the activities I've provided will inspire you and your students!
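A quick aside on the adopt-a-tree measurement in step 4: the cube count gives a rough circumference, and dividing by pi turns it into an estimate of the trunk's width. Here is a worked example with hypothetical numbers (2 cm snap cubes, 25 cubes needed to circle the trunk):

```latex
C = n \times \ell = 25 \times 2\,\mathrm{cm} = 50\,\mathrm{cm},
\qquad
d = \frac{C}{\pi} \approx \frac{50\,\mathrm{cm}}{3.14} \approx 16\,\mathrm{cm}
```

The division is for the teacher's benefit; the children only need to place and count the cubes.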
Nineteenth century England had flourishing cities and emerging industries. Machines made it possible for those with money to invest to earn great profits, especially with an abundance of poor people who were willing to work long hours at hard or repetitive jobs for little pay. By contrast, the rural system included landlords, farmers, and common laborers who owned no land. In this rural system that had existed for centuries, those without land had no hope of bettering their lives: once in poverty, always in poverty. These hopeless poor moved to the city on the dream of making their own fortunes; it was usual for working class families to send young children off to the factories for twelve- to fourteen-hour shifts or longer. Child labor laws would not be enacted until the 1860s. Meanwhile, children and women were ideal workers because they did not form labor unions, and were easily intimidated, beaten, or fired if they protested against an employer's mistreatment. School attendance was a luxury reserved for the children of parents who could afford to pay private tutors in addition to the family's loss of income from a child's labor. The first publicly funded elementary schools were not established until the 1870s, when the demand for skilled laborers increased. The idea of high schools did not receive England's public support until the turn of the century, after Dickens' death. Meanwhile, the laborsaving machines that were to make a few people's fortunes earned many others little more than bad health or early graves. The new money caused new needs. Prior to the nineteenth century, banking had been left to businesses and was fairly informal, by reputation. Since there had been little money to exchange, except by a well-known few, there had been little need for that service. The Bank of England had been established in 1694, but it dealt mainly with government projects. Industrialization changed that, and banking houses became more numerous as a middle class emerged. New businesses needed to borrow money, and the rapid production of goods for a growing economy promised new wealth for both borrowers and lenders. That is how Pip found employment for his friend, Herbert Pocket, who later hired Pip. Obviously, not all who turned to the city for fortune found it. There were workhouses and debtors' prisons for those who failed to achieve their dreams of advancement. Those shut out from that promise lived in misery and often turned to crime. Since money was made in the city, the rise in criminal activity appeared there. As the number of jobless residents increased, so did the number of smugglers, pickpockets, thieves, and swindlers. Those with enough money to escape the soot and dangers of London began to build up the towns, as we see in Wemmick's choice of address. Only the outlying country folk stayed much the same as they had for centuries, and we see Pip's travel is either by stagecoach or on foot. That was normal until the 1860s, when the railroad finally connected the country to the city and the past to the new age of the machine. Marie Rose Napierkowski, Novels for Students: Presenting Analysis, Context & Criticism on Commonly Studied Novels, Volume 4, Charles Dickens, Gale-Cengage Learning, 1998
Quantum technology has a lot of promise, but several research barriers need to be overcome before it can be widely used. A team of US researchers has advanced the field another step, by bringing multiple molecules into a single quantum state at the same time. A Bose-Einstein condensate is a state of matter that only occurs at very low temperatures – close to absolute zero. At this temperature, multiple particles can clump together and behave as though they were a single atom – something that could be useful in quantum technology. But while scientists have been able to get single atoms into this state for decades, they hadn't yet achieved it with molecules. "Atoms are simple spherical objects, whereas molecules can vibrate, rotate, carry small magnets," says Cheng Chin, a professor of physics at the University of Chicago, US. "Because molecules can do so many different things, it makes them more useful, and at the same time much harder to control." Chin's team has now brought molecules of caesium (Cs2) into the Bose-Einstein state. "People have been trying to do this for decades, so we're very excited," he says. The team used a low temperature of 10 nanokelvins to reach this point. A nanokelvin is a billionth of a kelvin, or a billionth of one degree Celsius, making this temperature just fractionally above absolute zero. They also packed the caesium molecules tightly to limit their movement. "Typically, molecules want to move in all directions, and if you allow that, they are much less stable," says Chin. "We confined the molecules so that they are on a 2D surface and can only move in two directions." These conditions made the molecules effectively identical: lined up in the same orientation, with the same vibrational frequency and in the same quantum state. The team was able to link up several thousand molecules in this condensate. Chin says this achievement has implications for quantum engineering. "It's the absolute ideal starting point. For example, if you want to build quantum systems to hold information, you need a clean slate to write on before you can format and store that information." Chin is the senior author on a paper describing the research, published in Nature. "In the traditional way to think about chemistry, you think about a few atoms and molecules colliding and forming a new molecule," he says. "But in the quantum regime, all molecules act together, in collective behaviour. This opens a whole new way to explore how molecules can all react together to become a new kind of molecule."
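To see why such extreme cold is needed, consider the thermal de Broglie wavelength, which must grow to roughly the spacing between molecules before their quantum waves overlap and a condensate can form. Below is a minimal sketch using standard physical constants; the Cs2 mass of twice caesium's atomic mass is our assumption, while the 10 nK figure comes from the article:

```python
import math

H = 6.62607015e-34      # Planck constant (J*s)
KB = 1.380649e-23       # Boltzmann constant (J/K)
AMU = 1.66053907e-27    # atomic mass unit (kg)

def thermal_de_broglie_wavelength(mass_kg: float, temp_k: float) -> float:
    """lambda = h / sqrt(2 * pi * m * kB * T)."""
    return H / math.sqrt(2 * math.pi * mass_kg * KB * temp_k)

m_cs2 = 2 * 132.905 * AMU  # assumed mass of a Cs2 dimer
lam = thermal_de_broglie_wavelength(m_cs2, 10e-9)
print(f"{lam * 1e6:.2f} micrometres")  # ~1 micrometre, large enough to be
                                       # comparable to the spacing between
                                       # tightly packed molecules in 2D
```

At room temperature the same formula gives a wavelength thousands of times smaller than any realistic molecular spacing, which is why condensation only appears at nanokelvin temperatures.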
Measles is a highly contagious disease caused by the measles virus. Infected people have the measles virus in the mucus of their nose and throat. When they sneeze or cough, moisture droplets spray into the air. The virus in these droplets can remain active on surfaces for up to two hours. The virus is spread by coming in contact with these infected droplets. Following exposure to the measles virus, there is usually an incubation period lasting 10 to 12 days, during which there are no signs of the disease. During this time, the virus begins to multiply and infect the cells of the respiratory tract, eyes and lymph nodes, increasing the levels of the virus in the bloodstream. The first stage of the disease begins with a runny nose, cough, and a slight fever. As the infection progresses, the person's eyes become red and sensitive to light. The second stage of measles is marked by a high temperature, sometimes as high as 103°F–105°F, and the characteristic red blotchy rash. The rash usually starts on the face and then spreads to the chest, back, and arms and legs, including the palms of the hands and soles of the feet. After about five days, the rash fades in the same order as it appeared. Tiny white spots, called Koplik's spots, can also appear in the mouth. A person with measles can be contagious from about four days before until four days after the rash appears. An effective "MMR" vaccine for measles is usually given in combination with vaccines for mumps and the less severe German measles, or rubella. This vaccine contains weakened or killed forms of the viruses, which stimulate the body's immune system to "recognize" them as foreign. Therefore, the immune system can more easily identify and kill any of these viruses that it encounters in the future.
Asteroids are small pieces of rock that orbit the Sun, mostly between Mars and Jupiter. Asteroids move quickly across the sky, so they can be seen in SDSS images. (See the Asteroid Hunt project to learn more.) If an asteroid moves slowly, it shows up as a blue dot next to a yellow dot. Fast-moving asteroids show up as a red, green and blue dot. Very fast asteroids appear as a single colored streak. Examples of each type are shown below. Asteroids that appear as blue-yellow dots trick the computer program that classifies objects, so their "object type" is listed as star.

Galaxies form in clusters of dozens or hundreds of galaxies. The SDSS has seen many clusters, including the one shown at the right. Galaxy clusters can be so far away that individual galaxies almost look like stars! When you see a cluster in the Navigation tool, click on one of the objects to see the object type. You might be surprised to find what you thought was a star cluster is actually a galaxy cluster!

Sometimes, when the SDSS telescope looks at a very bright object, the object's light is reflected inside the telescope. These reflections can cause "ghosts." Ghosts are bands of light. They are usually a single color: red, green or blue, depending on which filter the camera was looking through. A typical ghost is shown at the right. Now you're ready for the scavenger hunt!
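The multi-colored dots arise because the SDSS camera exposes each filter at a slightly different time, so a moving asteroid lands in a different spot in each color image. Here is a rough sketch of the offset you would expect; the ~72 seconds between successive filter exposures and the motion rates are assumptions for illustration, not values from this page:

```python
# Estimate how far an asteroid drifts between SDSS filter exposures.
FILTER_GAP_S = 72.0  # assumed time between successive filters (seconds)

def dot_offset_arcsec(motion_arcsec_per_hour: float,
                      n_filter_gaps: int = 1) -> float:
    """Angular offset between an asteroid's detections in two filters."""
    return motion_arcsec_per_hour / 3600.0 * FILTER_GAP_S * n_filter_gaps

# A typical main-belt asteroid near opposition moves ~35 arcsec/hour.
slow = dot_offset_arcsec(35)        # ~0.7 arcsec: blue dot hugs the yellow dot
fast = dot_offset_arcsec(150, 2)    # ~6 arcsec: clearly separated colored dots
print(f"slow: {slow:.1f} arcsec, fast: {fast:.1f} arcsec")
```

Offsets below the arcsecond-scale blur of atmospheric seeing run together, which is why slow movers look like a blue dot beside a yellow dot rather than cleanly separated points.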
Teaching Practice 5

To provide fluency speaking practice in discussions in the context of moral dilemmas. To provide review of the second conditional in the context of moral dilemmas.

Procedure (34-46 minutes)

The teacher starts the lesson by showing the picture and asks: "What do you see in the picture?" She elicits some answers and waits for responses such as: He is trying to decide on something. He is not sure about something. He is a hesitant or indecisive person. The teacher then asks: "What is he thinking about?", draws speech bubbles on the board, and writes the answers given by students. Finally the teacher asks: "Do you have similar situations?", elicits one or two answers, and tells the students to talk in pairs.

The teacher shows some pictures and elicits the problems. For example, in the first picture a woman is trying to decide what to wear. She gives situations like: "What would you wear if you had a job interview? What would you wear if you went out with your new boyfriend/girlfriend?" In another picture, she shows two opposite directions and asks students whether they have ever felt like that. She elicits some answers and, based on the answers, creates situations.

The teacher creates the context for "relative". She says that apple, banana and orange are ........... in general. She also draws a spiderweb on the board and writes these words around it, wanting students to find the hypernym "fruit". She does the same with animals. Finally she writes uncle, aunt, cousin and nephew around the spiderweb and expects students to find the word "relative". The second word is "inherit". The teacher tells the students: "My grandfather died last month. He had only one house and I was his only relative. So, his house became mine. I ......... his things." The third word is "colleague": "I am a teacher and I work with other teachers in the same school. They are my ........."

The teacher makes two groups in the class, trying to balance strong students between them. She shows the cut-outs and explains the game. Instructions: There are two sets of cut-outs. One student picks a card and reads the question. The other students answer the question. Short answers are not OK; you should explain and support your answers. The one who asks the question chooses the most interesting answer and gives the card to that student. Then another student picks a card and asks the question. CCQs: Are we working in pairs or groups? - In groups. How many groups are there? - Two. Are we writing our answers? - No, just talking. Does the same person ask the questions? - No, it takes turns. Are short answers OK? - No.

The teacher monitors the students while they are discussing the questions and takes notes about correct and incorrect use of the target language. For the feedback session, the teacher elicits the most interesting answers from the groups and students share their opinions. Finally, the teacher writes the sentences on the board and asks students to talk in pairs about which ones are correct and which ones are incorrect. Then she asks the students to come to the board and correct the sentences.
Stars are thought to form in huge filaments of molecular gas. Areas where one or more of these filaments converge, known as hubs, are where massive stars form. These massive stars, located nearby, would have put the early solar system in danger of a powerful supernova. This risk is more than just hypothetical; a research team from the National Astronomical Observatory of Japan, led by astrophysicist Doris Arzoumanian, looked at isotopes found in ancient meteorites and found possible evidence of the turbulent death of a massive star. So why did the solar system survive? The gas in the filament seems to be able to protect it from the supernova and its onslaught of radioactive isotopes. "The host filament may protect the young solar system from stellar feedback, both during star formation and evolution (stellar outflow, wind and radiation) and at the end of their lives (supernovae)," Arzoumanian and her team said in a study recently published in The Astrophysical Journal Letters.

Signs of a supernova

The meteorites studied by the researchers contained small inclusions, or clumps, in the rock about as old as the solar system. These chunks contain isotopes derived from the decay of short-lived radionuclides (SLRs), which can be generated by supernovae. Although SLRs decay after a few hundred million years, which is nothing in cosmic terms, they still leave behind distinctive isotopes. The team found particularly high levels of SLR isotopes in the meteorites they examined. From the age of the isotopes, they were able to deduce that the SLRs they once belonged to were present in the early solar system. Supernovae are one SLR source, which could mean our solar system has evaded a supernova, though SLRs can also form in other ways. SLRs from the interstellar medium can already float around in the molecular cloud in which a star forms. The birth of massive stars, which don't live that long (at least in cosmic terms) and die quickly via supernovae, may be another source, as can isotopes produced by highly energetic solar or galactic cosmic rays. All of these sources could possibly explain the existence of SLRs in the early solar system. While SLRs likely existed in the part of the filament where the Sun and Solar System formed, the meteorite samples contained too much of a particular aluminum isotope for the interstellar medium to be the Solar System's only SLR source. Cosmic rays, which can convert stable isotopes into radioactive ones, had a better chance of explaining the number of isotopes in the meteorites. However, it would have taken too long for this process to produce the levels of SLRs found in the early solar system. It is very likely that such high SLR levels could come from very intense stellar winds, which would have occurred during the formation of massive stars, or from what was left after one of the massive stars went supernova.

So why didn't the supernova disrupt the solar system? It seems that the destructive blow was softened by the molecular gases of the filament in which the sun formed. If the isotopes from those long-dead SLRs really came from a supernova or stellar winds, then the amount passing through the filament gas was enough to match what was suggested by the meteorite findings, but not enough to decimate the solar system. The size of this hypothetical supernova or newborn star is still unknown. "This scenario may have several important implications for our understanding of the formation, evolution and properties of stellar systems," the researchers also said in the study.
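The dating logic rests on ordinary exponential decay: if an SLR is still present in a meteorite inclusion, it must have been produced shortly (in cosmic terms) before the rock solidified. Here is a small illustration, assuming the aluminum isotope in question is 26Al with its roughly 0.72-million-year half-life (the classic SLR used in meteoritics; the paper itself is not quoted on this):

```python
def surviving_fraction(elapsed_myr: float, half_life_myr: float) -> float:
    """Fraction of a radionuclide left after elapsed_myr million years."""
    return 0.5 ** (elapsed_myr / half_life_myr)

HALF_LIFE_AL26_MYR = 0.72  # assumed half-life of 26Al

for t in (1, 5, 10):
    frac = surviving_fraction(t, HALF_LIFE_AL26_MYR)
    print(f"after {t:>2} Myr: {frac:.1e} of the original 26Al remains")
```

Because the surviving fraction collapses within a few million years, high 26Al levels in the inclusions point to a fresh, local source such as massive-star winds or a nearby supernova rather than the old interstellar background.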
While there are still some unanswered questions, the scientists suspect that if the clouds of the filament in which the sun and solar system formed were large enough, our star and planets would have easily survived a supernova impact. The Astrophysical Journal Letters, 2023. DOI: 10.3847/2041-8213/acc849
The first manned attempt came about two months later, on November 21st, with a balloon made by two French brothers, Joseph and Etienne Montgolfier. The balloon was launched from the centre of Paris and flew for a period of 20 minutes. It proved to be the birth of hot air ballooning. Just two years later, in 1785, a French balloonist, Jean Pierre Blanchard, and his American co-pilot, John Jefferies, became the first to fly across the English Channel. In these early days of ballooning, the English Channel was considered the first step to long distance ballooning, so this was a large benchmark in ballooning history. Unfortunately, that same year Jean-François Pilatre de Rozier (the world's first balloonist) was killed in his attempt at crossing the channel. His balloon exploded half an hour after takeoff due to the experimental design of using a hydrogen balloon and hot air balloon tied together.

Now for a large jump in time of over 100 years: in August of 1932, Swiss scientist Auguste Piccard was the first to achieve a manned flight to the stratosphere. He reached a height of 52,498 feet, setting the new altitude record. Over the next couple of years, altitude records continued to be set and broken every couple of months – the race was on to see who would reach the highest point. In 1935 a new altitude record was set, and it remained at this level for the next 20 years. The balloon Explorer 2, a gas helium model, reached an altitude of 72,395 feet (13.7 miles)! For the first time in history, it was proven that humans could survive in a pressurized chamber at extremely high altitudes. This flight set a milestone for aviation and helped pave the way for future space travel. The altitude record was set again in 1960, when Captain Joe Kittinger parachute-jumped from a balloon that was at a height of 102,000 feet. The balloon broke the altitude record and Captain Kittinger the high-altitude parachute jump record; he broke the sound barrier with his body.

In 1987, Richard Branson and Per Lindstrand were the first to cross the Atlantic in a hot air balloon, rather than a helium/gas filled balloon. They flew a distance of 2,900 miles in a record-breaking time of 33 hours. At the time, the envelope they used was the largest ever flown, at 2.3 million cubic feet of capacity. A year later, Per Lindstrand set yet another record, this time for the highest solo flight ever recorded in a hot air balloon: 65,000 feet! The great team of Richard Branson and Per Lindstrand paired up again in 1991 and became the first to cross the Pacific in a hot air balloon. They travelled 6,700 miles in 47 hours, from Japan to Canada, breaking the world distance record and travelling at speeds of up to 245 mph. Four years later, Steve Fossett became the first to complete the Transpacific balloon route by himself, travelling from Korea and landing in Canada four days later. Finally, in 1999, the first around-the-world flight was completed by Bertrand Piccard and Brian Jones. Leaving from Switzerland and landing in Africa, they smashed all previous distance records, flying for 19 days, 21 hours and 55 minutes.
Pinworms (also called threadworms) are an intestinal infection caused by tiny parasitic worms called Enterobius vermicularis. It's a common infection that affects millions of people each year, particularly toddlers and school-age kids. Infection often occurs in more than one family member. Pinworms are thin and white, measuring about six to 13 millimetres in length. The most common sign of a pinworm infection is itching around the anus. The itching is usually worse at night because the worms move to the area around the anus to lay their eggs (up to 10,000 to 15,000 eggs). In girls, pinworm infection can spread to the vagina and cause a vaginal discharge. If the itching breaks the skin, it also could lead to a bacterial skin infection. Pinworms can also cause bedwetting at night. Some infected people have no symptoms at all. Pinworms get into the body when people ingest or breathe in the microscopic pinworm eggs. These eggs are light, float in the air, and can be found on contaminated hands and surfaces. The eggs pass into the digestive system and hatch in the small intestine. From the small intestine, pinworm larvae go to the large intestine, where they live as parasites (with their heads attached to the inside wall of the bowel). About one to two months later, adult female pinworms leave the large intestine through the anus (the opening where bowel movements come out). They lay eggs on the skin right around the anus, which triggers itching in that area, usually at night. When someone scratches the itchy area, microscopic pinworm eggs transfer to their fingers. Contaminated fingers can then carry pinworm eggs to the mouth, where they go back into the body, or stay on various surfaces, where eggs can survive for two to three weeks. Fortunately, most eggs dry out within 72 hours. In the absence of host autoinfection, infestation usually lasts only four to six weeks. Itching during the night in a child's perianal area strongly suggests pinworm infection. Diagnosis is made by identifying the worm or its eggs. If your child has a pinworm infection, you can see worms on the skin near the anal region or on underwear, pyjamas or sheets, about two or three hours after your child has fallen asleep. You also might see the worms in the toilet after your child goes to the bathroom. They look like tiny pieces of white thread and are really small, about as long as a staple. You might also see them on your child's underwear in the morning. Pinworm eggs can be collected and examined using the "tape test" as soon as the person wakes up. This "test" is done by firmly pressing the adhesive side of clear, transparent cellophane tape to the skin around the anus. The eggs stick to the tape, and the tape can be placed on a slide and looked at under a microscope. The test should be done as soon as the person wakes up in the morning, before they wash, bathe, go to the toilet, or get dressed, and on three consecutive mornings to increase the chance of finding pinworm eggs. Oral medication such as mebendazole or albendazole should be given to everybody in the household.
There is a risk of transmission between family members, so the chances of being infected once somebody has been diagnosed are high, even if no symptoms are present. Both medications block the worm's ability to absorb glucose, effectively killing it within a few days. Treatment involves two doses of medication, best administered on an empty stomach, with the second dose given two weeks after the first (a quick schedule calculation is sketched below). All household contacts and caretakers of the infected person should be treated at the same time. Hygiene measures should be continued for another two weeks following the initial treatment. Although the medicine takes care of the worm infection, the itching may continue for about a week. Apply a zinc ointment or other medicine to help stop the itching. Reinfection can occur easily, so strict observance of good hand hygiene is essential (e.g. proper handwashing, maintaining clean short fingernails, avoiding nail biting, avoiding scratching the perianal area).
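For clinic handouts or reminder systems, the two-dose schedule described above is easy to compute programmatically. Here is a minimal sketch; the dates are hypothetical, and dosing decisions belong with the prescribing clinician:

```python
from datetime import date, timedelta

def pinworm_schedule(first_dose: date) -> dict:
    """Second dose two weeks after the first, with continued hygiene
    measures in between, per the schedule described above."""
    return {
        "first_dose": first_dose,
        "second_dose": first_dose + timedelta(weeks=2),
    }

schedule = pinworm_schedule(date(2024, 3, 1))
print(schedule["second_dose"])  # 2024-03-15
```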
At a young age, kids are first taught to write letters in print only. When kids reach the age of eight to ten, they are taught how to write in cursive. They may find this quite difficult and boring at first, but one fun way to teach them is to use worksheets. These writing worksheets have traceable patterns of the different strokes used to write letters. By tracing these patterns, kids slowly learn how a letter is structured.

Learning should be grounded in the real world. It is easiest to learn and remember when whatever is learned is immediately applied to a practical, real-life situation. You should use every opportunity to teach and regularly reinforce basic concepts, in real life and in real time. For instance, during snack time, if a child is eating a biscuit, you can say 'B' for 'biscuit'. While waiting for a school van, you can say 'V' for 'van', and so on.

Learning should also be fun. It should not feel like work, but play, for otherwise children will quickly get bored. Hence it is a good idea to use a lot of interesting activities, games, coloring sheets, illustrated kindergarten worksheets and the like. You should be well prepared with these teaching aids, which can be made very easily.
A study that followed the evolution of Pluto's atmosphere for fourteen years shows its seasonal nature and predicts that the atmosphere will now start to condense as frost. The study was published in the journal Astronomy and Astrophysics and included the participation of Pedro Machado, of Instituto de Astrofísica e Ciências do Espaço (IA) and Faculdade de Ciências da Universidade de Lisboa (FCUL). The authors analysed data from this dwarf planet's atmosphere in the altitude range of 5 to 380 kilometres, collected between 2002 and 2016. This period overlapped with summer in Pluto's northern hemisphere1, where the reservoirs of nitrogen ice are mostly concentrated; these sublimate under exposure and proximity to the Sun. The data indicate that the atmospheric pressure at the surface rose roughly two-and-a-half-fold between 1988 and its maximum in 2015, yet it remains about one hundred thousand times lower than the average atmospheric pressure on Earth at sea level (see the quick estimate below). "More and more we look at Pluto's seasonal atmosphere as a cometary activity," says Pedro Machado. "Since it is a body of small mass, nitrogen molecules reach escape velocity very easily, and Pluto loses atmosphere, like the comets."

1. Due to its strongly tilted rotation axis, Pluto spins almost lying on its side relative to its orbital plane. This causes it to expose its northern latitudes permanently to the Sun during a fraction of the more than two centuries it takes to complete a full turn around the Sun. This period overlaps with the crossing of the point in its orbit closest to the Sun (perihelion), which happened in 1989. Pluto has a very eccentric orbit, varying its distance from the Sun between about 30 and 49 times the average distance of the Earth from the Sun.
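To put the pressure figure on a scale: Earth's mean sea-level pressure is about 101,325 Pa, so a factor of one hundred thousand puts Pluto's surface pressure near one pascal, consistent with the roughly one-pascal values reported by occultation and New Horizons measurements around 2015:

```latex
P_{\mathrm{Pluto}} \approx \frac{P_{\mathrm{Earth}}}{10^{5}}
  = \frac{101{,}325\ \mathrm{Pa}}{10^{5}} \approx 1\ \mathrm{Pa}
```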
A teacher's guide to social and emotional learning When asked about a teacher’s job description, most people can tell you they’re responsible for lesson planning, classroom instruction and grading assignments. What’s not as commonly known is the “hidden curriculum” of unwritten and often unintended lessons in social and emotional learning (SEL). SEL is a critical part of a young child’s development, yet it is an often overlooked quality in educators. When students lack social-emotional abilities, they struggle more than their well-developed peers when faced with change, challenge and conflict. As a teacher, you already play a critical role in your students’ development of these skills. But as you know, there is always more to learn and improve upon. Find out how you can increase your impact by proactively incorporating SEL skills into your lessons. We created this guide to social and emotional learning based on our recent webinar presented by Tenley Hardin, MA, MFT candidate and certified professional life coach (iPEC). Learn more about how a focus on SEL can have positive effects on academic outcomes and classroom management. What is social emotional learning? According to the Collaborative for Academic, Social, and Emotional Learning (CASEL), SEL is the process through which all young people and adults acquire and apply the knowledge, skills and attitudes to: - Develop healthy identities - Manage emotions - Achieve personal and collective goals - Feel and show empathy for others - Establish and maintain supportive relationships - Make responsible and caring decisions To teach SEL, educators must engage in self-reflection and become aware of their own biases, triggers, positive and maladaptive patterns. This requires vulnerability and the willingness to recognize areas of improvement without becoming discouraged about being imperfect. 5 Core competencies of social emotional learning To break it down even further, let’s unpack the five SEL competencies and how they can impact your professional development, along with student success. Self-awareness is the ability to identify and understand your emotions, thoughts and values and how they influence your responses and behaviors. This includes capacities like: - Letting yourself feel emotions instead of dismissing or suppressing them - Checking in with yourself and identifying emotions - Examining your personal prejudices and biases - Maintaining a growth mindset Why is self-awareness important for teachers? Teachers who are self-aware are better able to recognize strengths, overcome fears and interrupt cycles of negative self-talk. Becoming more self-aware takes time, but with practice, you’ll be able to shift to a more empowered and positive state of mind. Reflect on the following questions to deepen your understanding of yourself: - What thoughts trigger an emotional reaction in me? - How are my emotions influencing my responses and behavioral patterns? - What kind of obstacles have I already overcome in my life? Self-management is the ability to set goals, deal with stress and control impulses, reactions and behaviors. Mastering these skills can be challenging, especially for children who have experienced trauma. Young brains are still developing, and strong feelings can be overwhelming. 
Self-management skills include things like: - Identifying and using stress management strategies - Exhibiting self-discipline and self-motivation - Setting personal and collective goals - Using planning and organizational skills Why is self-management important for teachers? Teaching is a rewarding, important and challenging job. You constantly use self-management skills to prioritize responsibilities and cope with stress. But even the most experienced teachers have moments of anger, frustration and helplessness. The more adults model how to recover from a difficult or stressful situation, the more a child will follow and use the same strategies. When young people witness trusted adults acknowledging their own mistakes and limitations, it gives them examples of how to do the same. It destigmatizes common fears like failure, making errors or not having answers to all the questions. One helpful exercise targeted at developing your own self-management skills is to identify stressors or emotional triggers and your responses to them. Then take the time to reframe those thoughts into something more positive. For example, you may start by thinking, "That lesson didn't go as planned, I feel like a bad teacher." Instead, flip your thinking to, "That lesson took an unexpected turn, how can I improve it for next time?" 3. Social awareness Social awareness is a complex skill. It is the ability to appreciate different perspectives and empathize with others, including those from diverse backgrounds and cultures. A socially aware person feels compassion for others and understands social norms for behavior in different settings. Social awareness competencies include things like: - Recognizing strengths in others - Showing concern for the feelings of others - Understanding and expressing gratitude - Identifying diverse social norms, including unjust ones Why is social awareness important for teachers? As an educator, you're responsible for creating a safe and welcoming environment that honors all students. Without high levels of social awareness, teachers can unintentionally replicate or exacerbate harmful practices and conditions. If students or their families don't feel seen, respected or represented in the classroom, they are unlikely to engage with the school and the child will suffer as a result. Start challenging yourself to increase your social awareness by contemplating the following questions: - Who am I in relation to others? - How do others perceive me? - How do aspects of my identity (race, gender, class, body size, age, etc.) affect my perceptions of others and vice versa? 4. Relationship skills Humans are social creatures by design. Establishing mutually supportive relationships is an incredibly important component of a healthy and happy life. People who successfully sustain relationships with diverse individuals and groups are skilled at things such as: - Communicating effectively - Demonstrating cultural competency - Resolving conflicts - Showing leadership in groups - Seeking or offering support Why are relationship skills important for teachers? Successful teachers know how to build bonds with their students and their families, co-workers and the school community at large. Managing multiple diverse relationships is often complicated, but having strong listening and conflict resolution skills makes it much easier. One of the most important social-emotional competencies is repair.
In this context, repair means recognizing you may have harmed or alienated someone and reaching out to address it and work through it together. All teachers have reacted to stress by being harsh or yelling. It's not ideal, but you now have the opportunity to repair. In these situations, try taking a deep breath and saying, "I'm sorry, I shouldn't have yelled. I am frustrated right now, but I will make sure I use a calmer voice next time." 5. Responsible decision making Responsible decision making is the ability to make caring and constructive choices about personal behavior and social interactions. Someone well versed in making good decisions will consider ethical standards and safety concerns and evaluate consequences before reaching a conclusion. These skills include: - Identifying problems and proposing solutions - Acknowledging and validating another person's thoughts, feelings and ideas - Demonstrating curiosity and open-mindedness - Learning how to make a judgment after analyzing information, data and facts Why is responsible decision making important for teachers? You're faced with many important decisions each day as a teacher. Your actions impact students, their families, fellow teachers and the entire school community, which means your choices carry a great deal of responsibility. Being able to think critically about the consequences of different potential actions is essential, as is knowing your limitations and when it's necessary to ask for help. Children are also faced with important decisions that have consequences for the rest of their lives. Demonstrating this process and communicating its importance will help your students understand the impact of their choices and how they affect those around them. You can instill this by using the responsible decision-making model, which outlines the following five steps: - Identify the problem - Analyze the situation - Brainstorm solutions and solve the problem - Consider ethical responsibility - Evaluate and reflect Set your students up for success You're already an important role model for your students. After learning more about social and emotional learning and reflecting on the prompts outlined above, you may be better equipped to foster these principles in your classroom. With your guidance and example, your students will learn how to become more resilient and deal with difficult emotions and situations. Looking for more ways to help your students build the skills and habits that will help them succeed in the future? Check out the many Professional Development Courses offered at UMass Global.
Why should I bother learning this? Tell them that they need to be able to answer questions using numbers that make sense in context. Ask students to compare their responses to How do I get to ______ from here? and How far is it to ______? Fill in the blank with some well-known local place that requires just a right or left turn out of the parking lot and a short (straight) drive or walk. Extend the discussion into a clear differentiation between direction and distance.

What's so important about closed and open dots in an absolute value graph? The issue here is not really absolute value. It's the distinction between the strict inequality symbols <, > and the inclusive symbols ≤, ≥. Use time as a way to model comparisons that include a number and those that exclude it. Today is Tuesday. You have less than a week until your exam. Could that exam be next Tuesday? E < 7. It will take at least an hour to cook tonight's dinner. Could it possibly take exactly an hour? D ≥ 60 min. Help students to see that the symbols that have two parts (including half of an equals symbol) are the ones that include the compared number and so require a closed dot.
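If you want a visual to project while making this point, a few lines of matplotlib can draw the two number lines: an open dot for the strict E < 7 and a closed dot for the inclusive D ≥ 60. This is an illustrative sketch only; the axis ranges and styling are arbitrary choices:

```python
import matplotlib.pyplot as plt

fig, axes = plt.subplots(2, 1, figsize=(6, 2.5))

# E < 7: days until the exam; 7 itself is excluded, so the dot is open
axes[0].hlines(0, 0, 7, lw=3)
axes[0].plot(7, 0, "o", mfc="white", mec="black", ms=10)  # open dot
axes[0].set_title("E < 7 (next Tuesday excluded)")

# D >= 60: minutes of cooking; 60 itself is included, so the dot is closed
axes[1].hlines(0, 60, 120, lw=3)
axes[1].plot(60, 0, "o", color="black", ms=10)            # closed dot
axes[1].set_title("D ≥ 60 min (exactly an hour included)")

for ax in axes:
    ax.set_yticks([])
plt.tight_layout()
plt.show()
```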
Students learn about the different roles and responsibilities in a court by participating in a mock trial. Through several activities, students learn about the roles and responsibilities of the U.S. president and their own duties as citizens of a democracy. This scripted mock trial includes ideas for pre- and post-mock-trial activities. Students will better understand the concept of the Electoral College by participating in a mock Electoral College vote. Students learn about the three functions of government in this interactive role play. This short scripted mock trial for grades 4-6 involves SpongeBob suing Abercrombie and Fish for pants that don't fit. Scripted parts allow the trial to move quickly to jury deliberations, during which the student jurors actually decide the verdict of the case. In this lesson, students are asked which of two chocolate bars – one with nuts, one without – they prefer. A single representative is taken from each preference group. These representatives are given the chocolate bar that they prefer less, motivating a contractual trade. One student unknowingly has an empty wrapper, eliciting debate after the trade is completed. The class concludes by discussing possible equitable solutions. Students reflect on when and why rules are needed and the importance of rules in the classroom or in a community setting. This mock trial exposes students to the mechanics of a jury trial, and stresses the importance of functioning as a juror. This lesson offers students the opportunity to play the role of voters with special interests. Students draw up initiatives for new classroom or school rules. Working in groups of four or five, students share their ideas and rationale for new rules. In this lesson, students will gain an understanding of the separation of powers using role playing and discussion. Students will identify which parts of the Constitution provide for the branches of our government, and will categorize public officials into one of these three branches. Students learn why laws need to be interpreted by discussing laws/constitutional provisions. They present their findings to the class. Through these activities, students learn about the roles and responsibilities of the U.S. president and their own roles as citizens of a democracy. The purpose of this lesson is to help students understand the original purpose and powers of the Supreme Court according to the Constitution. Students learn the Supreme Court's role in preserving the U.S. Constitution and the balance of power it creates. This lesson helps students to identify the requirements of a position of authority and the qualifications a person should possess to fill that position. Students learn a set of intellectual tools designed to help them both analyze the duties of the position and decide if an individual is qualified to serve in that particular position. During the lesson students practice using the intellectual tools. The lesson includes a read-aloud book to teach students about the Michigan Court System. The Preamble to the U.S. Constitution sets out the purposes or functions of American government as envisioned by the framers. Using the Preamble as a guide, students will identify the purposes of their own classroom and create a class "constitution." Students learn about the Bill of Rights and the importance of rights. American colonists had some strong ideas about what they wanted in a government.
These ideas surface in colonial documents, and eventually became a part of the founding documents like the Declaration of Independence and Constitution. But where did they come from? This lesson looks at the Magna Carta, Mayflower Compact, English Bill of Rights, Cato’s Letters and Common Sense. Students will read about the election process and correctly put the steps in proper sequence. Students will participate in a debate on an issue that relates to their day-to-day school experience.
Curriculum differentiation can be defined very simply: giving each individual equal opportunities in the learning environment. Each individual should have the chance to develop and expand their knowledge to the best of their ability, and be given the chance to make the best use of their talents and capabilities. With this ethos in mind, the general curriculum is expanded and streamlined to make extra provision for pupils who need extra help because they have special learning needs or certain disabilities. Curricular differentiation is applied through lesson planning, the equal opportunity program, health and safety regulations, and child protection law. This means that every young person or child has a right to be taught in a safe, comfortable, friendly and mentally stimulating environment. Learning mentors, teaching assistants and special needs coordinators all play a vital part in maintaining an equilibrium in the learning environment, and will inform planning to adjust the level of care or support given, often forming the front line in the humanist battle against inequality, prejudice and discrimination. Formal and informal observation, along with work and behavior assessment, are used to establish a child's learning levels and abilities in relation to their age and individual needs. Special learning needs can vary and may include children who have learning difficulties or children who have simply moved to a new school from a different area or even a different country. Each case is assessed on its own merits, and strategies are put into place depending on the level of support needed. This may include extra support within the classroom, or a special learning program that can be delivered within a mainstream school or in a separate learning unit. It will consist of support strategies, learning incentives, use of resources and different methods of effective communication. For example, special needs may include children who are blind or deaf or who have speech difficulties; their learning programs will be tailored to take these disabilities into consideration. A deaf child may be given a 1:1 support T.A. who can 'sign' the lessons, or extra resources may be used, such as a hearing loop. A child with speech difficulties may simply need a little extra 'thinking' and 'talking' time, and this is something that can be accommodated within lesson planning and social development activities. By its very nature, curriculum differentiation is a flexible and evolving system, and if it can be implemented in a timely fashion within schools, vulnerable pupils will benefit greatly and their more able peers will learn valuable lessons in positive social interaction and consideration towards others.
A day on Neptune lasts precisely 15 hours, 57 minutes and 59 seconds, according to the first accurate measurement of its rotational period made by University of Arizona planetary scientist Erich Karkoschka. His result is one of the largest improvements in determining the rotational period of a gas planet in almost 350 years since Italian astronomer Giovanni Cassini made the first observations of Jupiter's Red Spot. "The rotational period of a planet is one of its fundamental properties," said Karkoschka, a senior staff scientist at the UA's Lunar and Planetary Laboratory. "Neptune has two features observable with the Hubble Space Telescope that seem to track the interior rotation of the planet. Nothing similar has been seen before on any of the four giant planets." The discovery is published in Icarus, the official scientific publication of the Division for Planetary Sciences of the American Astronomical Society. Unlike the rocky planets – Mercury, Venus, Earth and Mars – which behave like solid balls spinning in a rather straightforward manner, the giant gas planets – Jupiter, Saturn, Uranus and Neptune – rotate more like giant blobs of liquid. Since they are believed to consist of mainly ice and gas around a relatively small solid core, their rotation involves a lot of sloshing, swirling and roiling, which has made it difficult for astronomers to get an accurate grip on exactly how fast they spin around. "If you looked at Earth from space, you'd see mountains and other features on the ground rotating with great regularity, but if you looked at the clouds, they wouldn't because the winds change all the time," Karkoschka explained. "If you look at the giant planets, you don't see a surface, just a thick cloudy atmosphere." "On Neptune, all you see is moving clouds and features in the planet's atmosphere. Some move faster, some move slower, some accelerate, but you really don't know what the rotational period is, if there even is some solid inner core that is rotating." In the 1950s, when astronomers built the first radio telescopes, they discovered that Jupiter sends out pulsating radio beams, like a lighthouse in space. Those signals originate from a magnetic field generated by the rotation of the planet's inner core. No clues about the rotation of the other gas giants, however, were available because any radio signals they may emit are being swept out into space by the solar wind and never reach Earth. "The only way to measure radio waves is to send spacecraft to those planets," Karkoschka said. "When Voyager 1 and 2 flew past Saturn, they found radio signals and clocked them at exactly 10.66 hours, and they found radio signals for Uranus and Neptune as well. So based on those radio signals, we thought we knew the rotation periods of those planets." But when the Cassini probe arrived at Saturn 15 years later, its sensors detected its radio period had changed by about 1 percent. Karkoschka explained that because of its large mass, it was impossible for Saturn to incur that much change in its rotation over such a short time. "Because the gas planets are so big, they have enough angular momentum to keep them spinning at pretty much the same rate for billions of years," he said. "So something strange was going on." Even more puzzling was Cassini's later discovery that Saturn's northern and southern hemispheres appear to be rotating at different speeds. "That's when we realized the magnetic field is not like clockwork but slipping," Karkoschka said. 
"The interior is rotating and drags the magnetic field along, but because of the solar wind or other, unknown influences, the magnetic field cannot keep up with respect to the planet's core and lags behind." Instead of spacecraft powered by billions of dollars, Karkoschka took advantage of what one might call the scraps of space science: publicly available images of Neptune from the Hubble Space Telescope archive. With unwavering determination and unmatched patience, he then pored over hundreds of images, recording every detail and tracking distinctive features over long periods of time. Other scientists before him had observed Neptune and analyzed images, but nobody had sleuthed through 500 of them. "When I looked at the images, I found Neptune's rotation to be faster than what Voyager observed," Karkoschka said. "I think the accuracy of my data is about 1,000 times better than what we had based on the Voyager measurements – a huge improvement in determining the exact rotational period of Neptune, which hasn't happened for any of the giant planets for the last three centuries." Two features in Neptune's atmosphere, Karkoschka discovered, stand out in that they rotate about five times more steadily than even Saturn's hexagon, the most regularly rotating feature known on any of the gas giants. Named the South Polar Feature and the South Polar Wave, the features are likely vortices swirling in the atmosphere, similar to Jupiter's famous Red Spot, which can last for a long time due to negligible friction. Karkoschka was able to track them over the course of more than 20 years. An observer watching the massive planet turn from a fixed spot in space would see both features appear exactly every 15.9663 hours, with less than a few seconds of variation. "The regularity suggests those features are connected to Neptune's interior in some way," Karkoschka said. "How they are connected is up to speculation." One possible scenario involves convection driven by warmer and cooler areas within the planet's thick atmosphere, analogous to hot spots within the Earth's mantle, giant circular flows of molten material that stay in the same location over millions of years. "I thought the extraordinary regularity of Neptune's rotation indicated by the two features was something really special," Karkoschka said. "So I dug up the images of Neptune that Voyager took in 1989, which have better resolution than the Hubble images, to see whether I could find anything else in the vicinity of those two features. I discovered six more features that rotate with the same speed, but they were too faint to be visible with the Hubble Space Telescope, and visible to Voyager only for a few months, so we wouldn't know if the rotational period was accurate to the six digits. But they were really connected. So now we have eight features that are locked together on one planet, and that is really exciting." In addition to getting a better grip on Neptune's rotational period, the study could lead to a better understanding of the giant gas planets in general. "We know Neptune's total mass but we don't know how it is distributed," Karkoschka explained. "If the planet rotates faster than we thought, it means the mass has to be closer to the center than we thought. These results might change the models of the planets' interior and could have many other implications." 
LINK: Neptune's Rotational Period Suggested by the Extraordinary Stability of Two Features, Icarus, article in press (accepted manuscript), doi:10.1016/j.icarus.2011.05.013
Daniel Stolte | University of Arizona
<urn:uuid:41a37d1b-82cb-4923-b434-4c12219536f1>
CC-MAIN-2017-39
http://www.innovations-report.com/html/reports/physics-astronomy/clocking-neptune-039-s-spin-177825.html
s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818685698.18/warc/CC-MAIN-20170919131102-20170919151102-00110.warc.gz
en
0.947158
2,033
4.21875
4
How games can engage students and improve learning

Understanding how games create a sense of flow and engagement can help teachers make better choices about their instructional use of games.

Teachers recognize the emotional energy that is created when students play games, and they strive to take advantage of the level of excitement and commitment to succeed that is difficult to achieve through other instructional strategies. To make the best use of educational games that achieve this level of engagement, it is important to understand how this commitment to a game is fostered.

Games that are highly engaging create a sense of "flow" for the players. Flow is the experience of being totally involved in an activity and usually involves high levels of both concentration and enjoyment. Game developers strive to create a sense of flow during game play because when a player achieves a state of total focus, complete immersion, and limited awareness of time, a strong desire to repeat or extend the experience is created as well. Developers identify this as a compulsion to play, the drive to play a game over and over. This feeling is exactly what a teacher wants to establish during instruction: an emotional connection with the content and a desire to repeat the experience.

A number of game features have been identified as helping create a sense of flow. These include ease of use, simplicity of play, clear goals, feedback, interactivity, competition, control over actions, and a sense of community. These features do not have to be part of the educational content of the game and can actually involve actions that are separate from the content that is the focus of the game. They generate a connection to the content through the overall commitment to continuing and succeeding in the game that the sense of flow establishes. Arcade-style games in which speed and competition are critical features can be used to engage students with content as simple as math facts or as complex as scientific argumentation.

Understanding how games create a sense of flow and engagement can help teachers make better choices about their instructional use of games to introduce or reinforce academic content.

Marilyn Ault is an Associate Research Scientist at the University of Kansas Center for Research on Learning. She and her colleagues have conducted research on the use of targeted games, such as Reason Racer, in the learning of complex skills. This game uses a rally-race format to engage middle school students in the skills and knowledge related to scientific argumentation.
<urn:uuid:28e9c699-360b-4b9c-9e13-019ca6e8e260>
CC-MAIN-2014-23
http://www.eschoolnews.com/2014/06/06/games-engage-students-241/
s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1405997877693.48/warc/CC-MAIN-20140722025757-00067-ip-10-33-131-23.ec2.internal.warc.gz
en
0.960458
534
4.21875
4
The ability to see in color is not specific to humans, but many animals can only see in black and white. Color vision is possible because of the presence of cone photoreceptors in the eye; the different types of cone cells respond to different wavelengths of light, resulting in the perception of different colors. Cone cells are not active in low-light conditions, unlike the more sensitive rod photoreceptors.

TL;DR (Too Long; Didn't Read)

Some of the animals that see only in black, white and shades of gray include bats, golden hamsters, flat-haired mice, raccoons, seals, sea lions, walruses, some fish, whales and dolphins, to name a few.

Monochromats, Dichromats and Trichromats

Humans, along with several other primates, are trichromats when it comes to cone receptors – they have three different types. It was once thought that most mammals only saw in black and white, but this is not the case. Dogs and cats, for example, are dichromatic, with limited color vision. Animals that are monochromatic, with only one type of cone, can typically see only in shades of black, white and gray.

Diurnal and Nocturnal Animals

The amount and ratio of rod to cone cells vary among animal species. In terrestrial animals, these factors are largely affected by whether the animal is diurnal or nocturnal. Diurnal species, such as humans, usually have a higher density of cone cells than nocturnal species, which have a greater number of rod cells to help them distinguish shapes and movement in low light. Monochromatic nocturnal mammals include various bats, rodents such as the golden hamster and flat-haired mouse, and the common raccoon.

Old World primate species, such as chimpanzees, gorillas and orangutans, have trichromatic vision as humans do, but New World monkeys exhibit a range of color vision. Howler monkeys have three cone types, but male tamarins and spider monkeys have only two, with females split between trichromacy and dichromacy. Night monkeys, or owl monkeys, are monochromatic. As their name suggests, they are nocturnal, with better vision in dim light than other primates have.

Fish and Marine Mammals

Most marine mammals are monochromatic; this includes seals, sea lions and walruses, and cetaceans, such as dolphins and whales. Most fish are trichromatic, with good color vision, but there are some exceptions. The only animals known to have no cones at all, and therefore that are incapable of color vision, are skates, cartilaginous fishes related to rays and, more distantly, to sharks. Sharks are also monochromatic, but rays are thought to have relatively good color vision. Marine mammals and fish may have lost their color vision over time because it was not advantageous in the water.

About the Author

Based in Scotland, Clare Smith is a writer specializing in natural science topics. She holds a Master of Science in plant biodiversity from the University of Edinburgh.
<urn:uuid:cbc7a4a8-c8c3-4fb3-91cd-c028c65b0c57>
CC-MAIN-2023-23
https://sciencing.com/list-animals-see-black-white-8518587.html
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224648465.70/warc/CC-MAIN-20230602072202-20230602102202-00207.warc.gz
en
0.946788
657
4.09375
4
Nuclear power is an established and reliable way to generate electricity. In normal operating conditions, 75% of UK nuclear capacity can be assumed available to meet peak demand. Eight of the nine existing UK nuclear power stations are scheduled to close by 2028. Some may continue to operate for longer than currently scheduled, if EDF Energy and the Office for Nuclear Regulation (ONR) are satisfied this is safe, but eventually they will need to be replaced.

No nuclear power station generates electricity all of the time. There are periods when it will operate at reduced levels or will be shut down for refuelling and maintenance. Most shutdowns are planned, and because of this they can happen when demand is expected to be lower. Some reactors continue to operate at 20–40% of capacity while being refuelled, which typically takes three to four days, about every six weeks. Unplanned shutdowns occur when the power station is forced to shut down either by its control system (automatic shutdown) or by the plant operator (manual shutdown) due to a suspected fault. A precautionary approach is used for the shutdown systems and operating regimes of all nuclear power stations.

Nuclear power stations are designed to deliver a reliable level of electricity for long periods of time. The new plants proposed for the UK are expected to generate electricity as much as 90% of the time during normal operation. In 1990, only a quarter of the world's nuclear plants had load factors of over 75% – that is, they generated more than 75% of their theoretical maximum electrical output. Today, almost two thirds of nuclear plants have load factors of over 75%, and a quarter have load factors higher than 90%. The proposed new generation of nuclear power stations in the UK aims to set a new standard, with shorter outage periods and reduced fuel consumption per kilowatt-hour (kWh) of electricity generated. This means they will use less fuel and will need to refuel less often.
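A load factor is simple arithmetic: energy actually generated divided by what the station would have produced running flat out over the same period. The sketch below is a minimal illustration with invented figures, not EDF data:

```python
def load_factor(energy_mwh: float, capacity_mw: float, hours: float) -> float:
    """Actual generation as a fraction of the theoretical maximum output."""
    return energy_mwh / (capacity_mw * hours)

# Invented example: a 1,200 MW station generating 9.5 million MWh
# over a year (8,760 hours).
print(f"{load_factor(9_500_000, 1200, 8760):.0%}")  # about 90%
```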
<urn:uuid:c3ee17e0-4256-4c11-b56e-14e5ee927b95>
CC-MAIN-2018-13
https://www.edfenergy.com/future-energy/nuclear-energy-reliability-challenge-detail
s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257645943.23/warc/CC-MAIN-20180318184945-20180318204945-00315.warc.gz
en
0.95975
397
4.03125
4
How HIV Damages the Immune System

September 18, 2008

The basic structure of HIV is similar to that of other viruses (Figure 1). HIV has a core of genetic material surrounded by a protective sheath, called a capsid. The genetic material in the core is RNA (ribonucleic acid), which contains the information that the virus needs in order to replicate (make more copies of itself) and perform other functions. You can think of RNA as the set of rules the virus follows in order to live. In HIV, the viral RNA is packaged together with a protein called "reverse transcriptase" that is crucial for viral replication inside T cells, white blood cells that help coordinate activities of the immune system. (The function of reverse transcriptase, which means "writing backwards," will be explained later when we discuss how HIV infects T cells.)

HIV, like all other viruses, has proteins that are particular to itself. These proteins are called antigens. Antigens have diverse functions in viral replication. In the case of HIV, a combination of two antigens, gp120 and gp41, allows the virus to hook onto T cells and infect them. These antigens are located on the surface of the virus. (Another HIV antigen is p24, an antigen of the core of the virus that is measured to estimate the amount of active free-floating virus in the blood of HIV-positive people.)

T cells are the main target of HIV in the blood, and they act as the host that the virus needs in order to replicate. (However, macrophages, B cells, monocytes, and other cells in the body can also be infected by HIV.) The T cell has a nucleus that contains genetic material in the form of DNA (deoxyribonucleic acid) (Figure 2). The cell's DNA has all the information that the cell needs in order to function. The difference between RNA and DNA is that the former is a single strand of genetic material, while the latter is a double strand (Figure 3). This difference is crucial in the process of T cell infection by HIV.

Once inside the cell, the capsid dissolves, liberating the viral RNA and the reverse transcriptase. Now, in order to infect the cell, the viral RNA needs to travel into the T cell's nucleus (where it can change the cell's rules and convert it into a virus factory). However, for that to happen, an important transformation needs to take place. Normally, the T cell's nucleus communicates with the rest of the cell by transforming DNA into RNA and sending it out of the nucleus. (In all the cells of the body, RNA acts as a messenger between the nucleus and the rest of the cell. The DNA makes RNA and sends it out to convey orders.) The genetic material's passport to leave the nucleus is to be transformed into single-stranded RNA. In the same fashion, the passport to enter the nucleus is to be transformed into double-stranded DNA. Viral RNA needs to become DNA in order to start the replication process. Reverse transcriptase allows the RNA to borrow material from the cell and to "write backwards" a chain of viral DNA.

Once transformed, the viral DNA will travel into the T cell's nucleus and attach itself to the cell's DNA (a process similar to placing a "bug" in a computer software program). At this point, if the T cell is activated, it will start producing new virus instead of performing normal T cell functions. Because it hijacks the "coordinator" T cells that help keep the immune system working, HIV is particularly devastating to immune health. In the process of replication, the virus destroys increasing numbers of T cells.
The coordinator cells of an important part of the immune system are annihilated, leaving the body open to opportunistic infections. This article was provided by San Francisco AIDS Foundation. It is a part of the publication AIDS 101. Visit San Francisco AIDS Foundation's Web site to find out more about their activities, publications and services.
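For technically minded readers, the "writing backwards" step described above boils down to a base-pairing rule: each RNA base is copied into its complementary DNA base. The toy sketch below is purely illustrative (real reverse transcription involves primers, enzyme chemistry, and error-prone copying); it simply applies the complement rule to a short made-up RNA sequence:

```python
# Complementary DNA base for each RNA base (toy model of reverse transcription).
RNA_TO_DNA = {"A": "T", "U": "A", "G": "C", "C": "G"}

def reverse_transcribe(rna: str) -> str:
    """Build the complementary DNA strand from an RNA template.

    The new strand runs antiparallel to the template, so the template is
    read in reverse -- the enzyme's name, "writing backwards," is apt.
    """
    return "".join(RNA_TO_DNA[base] for base in reversed(rna))

print(reverse_transcribe("AUGGCC"))  # -> GGCCAT
```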
<urn:uuid:a734b69f-928f-465b-a6d7-d3838144878c>
CC-MAIN-2017-17
http://www.thebody.com/content/art2494.html?nxtprv
s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917120338.97/warc/CC-MAIN-20170423031200-00134-ip-10-145-167-34.ec2.internal.warc.gz
en
0.945409
824
4.09375
4
The people of the United States have begun to recognize that wetlands have numerous and widespread benefits. However, many of the goods and services wetlands provide have little or no market value. Because of this, the benefits produced by wetlands accrue primarily to the general public. Therefore, the Government provides incentives and regulates and manages wetland resources to protect the resources from degradation and destruction. Other mechanisms for wetland protection include acquisition, planning, mitigation, disincentives for conversion of wetlands to other land uses, technical assistance, education, and research.

Although many States have their own wetland regulations, the Federal Government bears a major responsibility for regulating wetlands. The five Federal agencies that share the primary responsibility for protecting wetlands are the Department of Defense, U.S. Army Corps of Engineers (Corps); the U.S. Environmental Protection Agency (EPA); the Department of the Interior, U.S. Fish and Wildlife Service (FWS); the Department of Commerce, National Oceanic and Atmospheric Administration (NOAA); and the Department of Agriculture, Natural Resources Conservation Service (NRCS) (formerly the Soil Conservation Service). Each of these agencies has a different mission that is reflected in the implementation of the agency's authority for wetland protection. The Corps' duties are related to navigation and water supply. The EPA's authorities are related to protecting wetlands primarily for their contributions to the chemical, physical, and biological integrity of the Nation's waters. The FWS's authorities are related to managing fish and wildlife, including game species and threatened and endangered species. NOAA's wetland authority lies in its charge to manage the Nation's coastal resources. The NRCS focuses on wetlands affected by agricultural activities.

States are becoming more active in wetland protection. As of 1993, 29 States had some type of wetland law (Want, 1993). Many of these States have adopted programs to protect wetlands beyond those programs enacted by the Federal Government. As more responsibility is delegated from the Federal Government to the States, State wetland programs are gaining in importance. Thus far, States have devoted more attention to regulating coastal wetlands than inland wetlands. The most comprehensive State programs include those of Connecticut, Rhode Island, New York, Massachusetts, Florida, New Jersey, and Minnesota (Mitsch and Gosselink, 1993). Many of these States regulate those activities affecting wetlands that are exempt from the Clean Water Act, Section 404 program. (For more information on specific State wetland protection programs, see the State Summary section of this volume.)

Despite the current recognition of wetland benefits, many potentially conflicting interests still exist, such as that between the interests of landowners and the general public and between developers and conservationists. Belated recognition of wetland benefits and disagreement on how to protect them has led to discrepancies in local, State, and Federal guidelines. Discrepancies in Federal programs are apparent in table 6, which shows programs that encourage conversion of wetlands and those that discourage conversion of wetlands. Conflicting interests are the source of much tension and controversy in current wetland protection policy. Although attempts are being made to reconcile some of these differences, many policies will have to be modified to achieve consistency.
Despite all the government legislation, policies, and programs, wetlands will not be protected if the regulations are not enforced. Perhaps the best way to protect wetlands is to educate the public about their benefits. If the public does not recognize the benefits of wetland preservation, wetlands will not be preserved. Protection can be accomplished only through the cooperative efforts of citizens.

FEDERAL WETLAND PROTECTION PROGRAMS AND POLICIES

The Federal Government protects wetlands directly and indirectly through regulation, by acquisition, or through incentives and disincentives as described in table 6. Section 404 of the Clean Water Act is the primary vehicle for Federal regulation of some of the activities that occur in wetlands. Other programs, such as the "Swampbuster" program and the Coastal Management and Coastal Barriers Resources Acts, provide additional protection. Coastal wetlands generally benefit most from the current network of statutes and regulations. Inland wetlands are more vulnerable than coastal wetlands to degradation or loss because current statutes and policies provide them less comprehensive protection. Several of the major Federal policies and programs affecting wetlands are discussed in the following few pages. Also discussed are some of the States' roles in Federal wetland policies.

The Clean Water Act

The Federal Government regulates, through Section 404 of the Clean Water Act, some of the activities that occur in wetlands. The Section 404 program originated in 1972, when Congress substantially amended the Federal Water Pollution Control Act and created a Federal regulatory plan to control the discharge of dredged or fill materials into wetlands and other waters of the United States. Discharges are commonly associated with projects such as channel construction and maintenance, port development, fills to create dry land for development sites near the water, and water-control projects such as dams and levees. Other kinds of activities, such as the straightening of river channels to speed the flow of water downstream and clearing land, are regulated as Section 404 discharges if they involve discharges of more than incidental amounts of soil or other materials into wetlands or other waters.

The Corps and the EPA share the responsibility for implementing the permitting program under Section 404 of the Clean Water Act. However, Section 404(c) of the Clean Water Act gives the EPA authority to veto a permit if discharge materials at the selected sites would adversely affect such things as municipal water supplies, shellfish beds and fishery areas, wildlife, or recreational resources. By 1991, the EPA had vetoed 11 of the several hundred thousand permits issued since the Act was passed (Schley and Winter, 1992).

The review process for a Section 404 permit is shown in figure 39. After notice and opportunity for a public hearing, the Corps' District Engineer may issue or deny the permit. The District Engineer must comply with the EPA's Section 404(b)(1) Guidelines and must consider the public interest when evaluating a proposed permit. Four questions related to the guidelines are considered during a review of an application.

The Clean Water Act regulates dredge and fill activities that would adversely affect wetlands. Through a public interest review, the Corps tries to balance the benefits an activity may provide against the costs it may incur.
The criteria applied in this process are the relative extent of the public and private need for the proposed structure or work and the extent and permanence of the beneficial or detrimental effects on the public and private uses to which the area is suited. Some of the factors considered in the public interest review are listed in figure 39. Cumulative effects of numerous piecemeal changes are considered in addition to the individual effects of the projects.

The FWS, NOAA, and State fish and wildlife agencies, as the organizations in possession of most of the country's biological data, have important advisory roles in the Section 404 program. The FWS and NOAA (if a coastal area is involved) provide the Corps and the EPA with comments about the potential environmental effects of pending Section 404 permits. Other government agencies, industry, and the public are invited to participate through public notices of permit applications, hearings, or other information-collecting activities. However, the public interest review usually does not involve public comment unless the permit is likely to generate significant public interest or if the potential consequences of the permit are expected to be significant. All recommendations must be given full consideration by the Corps, but there is no requirement that they must be acted upon. If the FWS or NOAA disagree with a permit approved by a District Engineer, they can request that the permit be reviewed at a higher level within the Corps. However, the Assistant Secretary of the Army has the unilateral right to refuse all requests for higher level reviews. The Assistant Secretary accepted 16 of the 18 requests for additional review made concerning the roughly 105,000 individual permits issued between 1985 and 1992 (Schley and Winter, 1992).

Because many activities may cause the discharge of dredged and fill materials, and the potential effects of these activities differ, the Corps has issued general regulations to deal with a wide range of activities that could require a Section 404 permit. The Corps can forgo individual permit review by issuing general permits on a State, regional, or nationwide basis. General permits cover specific categories of activities that the Corps determines will have minimal effects on the aquatic environment, including wetlands. General permits are designed to allow activities with minimal effects to begin with little, if any, delay or paperwork. General permits authorize approximately 75,000 activities annually that might otherwise require a permit (U.S. Environmental Protection Agency, 1991); however, most activities in wetlands are not covered by general permits (Morris, 1991).

Not all dredge and fill activities require a Section 404 permit. Many activities that cause the discharge of dredged and fill materials are exempt from Section 404. The activities specifically exempted from Section 404 include: normal farming, forestry, and ranching activities; dike, dam, levee, and other navigation and transportation structure maintenance; construction of temporary sedimentation basins on construction sites; and construction or maintenance of farm roads, forest roads, or temporary roads for moving mining equipment (Morris, 1991). In addition, the Corps' flood-control and drainage projects and other Federal projects authorized by Congress and planned, financed, and constructed by a Federal agency also are exempt from the Section 404 permitting requirements if an adequate environmental impact statement is prepared.
Not all methods of altering wetlands are regulated by Section 404. Common methods of altering wetlands are listed in table 7. Unregulated methods include: wetland drainage, the lowering of ground-water levels in areas adjacent to wetlands, permanent flooding of existing wetlands, deposition of material that is not specifically defined as dredged and fill material by the Clean Water Act, and wetland vegetation removal (Office of Technology Assessment, 1984).

State authority over the Federal Section 404 program is a goal of the Clean Water Act. Assumption of authority from the EPA has been completed only by Michigan and New Jersey. Under this arrangement, the EPA is responsible for approving State assumptions and retains oversight of the State Section 404 program, and the Corps retains the navigable waters permit program (Mitsch and Gosselink, 1993). States cannot issue permits over EPA's objection, but EPA has the authority to waive its review for selected categories of permit applications. Few States have chosen to assume the program, in part because few Federal resources are available to assist States and assumption does not include navigable waters (World Wildlife Fund, 1992).

The program that seeks to remove Federal incentives for the agricultural conversion of wetlands is part of the Food Security Act of 1985 and 1990, and is known as "Swampbuster." Swampbuster renders farmers who drained or otherwise converted wetlands for the purpose of planting crops after December 23, 1985, ineligible for most Federal farm subsidies. Through Swampbuster, Congress directed the U.S. Department of Agriculture (USDA) to slow wetland conversion by agricultural activities (U.S. Fish and Wildlife Service, 1992). The government programs that Swampbuster specifically affects are listed in Section 1221 of the Food Security Act. If a farmer loses eligibility for USDA programs under Swampbuster, he or she may regain eligibility during the next year simply by not using wetlands for growing crops. Swampbuster is administered by USDA's Consolidated Farm Service Agency. The NRCS and the FWS serve as technical consultants (World Wildlife Fund, 1992).

Swampbuster was amended by the Food, Agriculture, Conservation, and Trade Act of 1990 to create the Wetland Reserve Program. The Wetland Reserve Program provides financial incentives to farmers to restore and protect wetlands through the use of long-term easements (usually 30-year or permanent). The program provides farmers the opportunity to offer a property easement for purchase by the USDA and to receive cost-share assistance (from 50 to 75 percent) to restore converted wetlands. Landowners make bids to participate in the program. The bids represent the payment they are willing to accept for granting an easement to the Federal Government. The Consolidated Farm Service Agency ranks the bids according to the environmental benefit per dollar. Easements require that farmers implement conservation plans approved by the NRCS and the FWS. Enrollment in the pilot program was authorized for nine States. The program's goal is to enroll 1 million acres by 1995 (U.S. Fish and Wildlife Service, 1992). Funding for this program is appropriated annually by Congress (U.S. Army Corps of Engineers, 1994).
Because 74 percent of the United States' wetlands are on private land, programs that provide incentives for private landowners to preserve their wetlands, such as the Wetland Reserve Program, are critical for protecting wetlands (Council of Environmental Quality, 1989).

"Swampbuster" removes Federal incentives for the agricultural conversion of wetlands.

Coastal Wetlands Protection Programs

The 1972 Coastal Zone Management Act and the 1982 Coastal Barriers Resources Act protect coastal wetlands. The Coastal Zone Management Act encourages States (35 States and territories are eligible, including the Great Lakes States) to establish voluntary coastal zone management plans under NOAA's Coastal Zone Management Program and provides funds for developing and implementing the plans. The NOAA also provides technical assistance to States for developing and implementing these programs. For Federal approval, the plans must demonstrate enforceable standards that provide for the conservation and environmentally sound development of coastal resources. The program provides States with some control over wetland resources by requiring that Federal activities be consistent with State coastal zone management plans, which can be more stringent than Federal standards (World Wildlife Fund, 1992, p. 87). A State also can require that design changes or mitigation requirements be added to Section 404 permits to be consistent with the State coastal zone management plan. The Coastal Zone Management Act has provided as much as 80 percent of the matching-funds grants to States to develop plans for coastal management that emphasize wetland protection (Mitsch and Gosselink, 1993). Some States pass part of the grants on to local governments. The Act's authorities are limited to wetlands within a State's coastal zone boundary, the definition of which differs among States. As of 1990, 23 States had federally approved plans.

The 1982 Coastal Barriers Resources Act denies Federal subsidies for development within undeveloped, unprotected coastal barrier areas, including wetlands, designated as part of the Coastal Barrier Resources System. Congress designates areas for inclusion in the Coastal Barriers Resource System on the basis of criteria summarized by Watzin (1990). In addition, States, local governments, and conservation organizations owning lands that were "otherwise protected" could have their lands added to this system until May 1992. ("Otherwise protected" lands are areas within undeveloped coastal barriers that were already under some form of protection.) Once in the Coastal Barriers Resources System, these areas are rendered ineligible for almost all Federal financial subsidies for programs that might encourage development. In particular, these lands no longer qualify for Federal flood insurance, which discourages development because coastal lands are frequently subject to flooding and damage from hurricanes and other storms. The FWS is responsible for mapping these areas and approves lands to be included in the system. The purposes of the Coastal Barrier Resources Act are to minimize the loss of human life, to reduce damage to fish and wildlife habitats and other valuable resources, and to reduce wasteful expenditure of Federal revenues (Watzin, 1990). In the future, eligible surplus government land will be included if approved by the FWS.
About 95 percent of the 788,000 acres added to the system in 1990 along the Atlantic and Gulf coasts consists of coastal wetlands and near-shore waters (World Wildlife Fund, 1992).

Flood-Plain and Wetland Protection Orders

Executive Orders 11988, Floodplain Management, and 11990, Protection of Wetlands, were signed by President Carter in 1977. The purpose of these Executive Orders was to ensure protection and proper management of flood plains and wetlands by Federal agencies. The Executive Orders require Federal agencies to consider the direct and indirect adverse effects of their activities on flood plains and wetlands. This requirement extends to any Federal action within a flood plain or a wetland except for routine maintenance of existing Federal facilities and structures. The Clinton administration has proposed revising Executive Order 11990 to direct Federal agencies to consider wetland protection and restoration planning in the larger scale watershed/ecosystem context.

The Coastal Zone Management Program provides States with some control over wetland resources.

WETLAND DELINEATION STANDARDS

The Corps published, in 1987, the Corps of Engineers Wetland Delineation Manual, a technical manual that provides guidance to Federal agencies about how to use wetland field indicators to identify and delineate wetland boundaries (U.S. Army Corps of Engineers, 1987). In January of 1989, the EPA, Corps, SCS, and FWS adopted a single manual for delineating wetlands under the Section 404 and Swampbuster programs – The Federal Manual for Identifying and Delineating Jurisdictional Wetlands (commonly referred to as the "1989 Manual"). The "1989 Manual" establishes a national standard for identifying and delineating wetlands by specifying the technical criteria used to determine the presence of the three wetland characteristics: wetland hydrology, water-dependent vegetation, and soils that have developed under anaerobic conditions (U.S. Environmental Protection Agency, 1991).

In 1991, the President's Council on Competitiveness proposed revisions to the 1989 Manual because of some concern that nonwetland areas were regularly being classified as wetlands (Environmental Law Reporter, 1992a). The proposed 1991 Manual was characterized by many wetland scientists as politically based rather than scientifically based. In September of 1992, Congress authorized the National Academy of Sciences to conduct a $400,000 study of the methods used to identify and delineate wetlands (Environmental Law Reporter, 1992b). On August 25, 1993, the Clinton administration's wetland policy proclaimed that "Federal wetlands policy should be based upon the best science available" (White House Office of Environmental Policy, 1993), and the 1987 Corps manual became the sole delineation manual for the Federal Government until the National Academy of Sciences completes its study (White House Office of Environmental Policy, 1993).

"Federal wetlands policy should be based upon the best science available."

Mitigation is the attempt to alleviate some or all of the detrimental effects arising from a given action. Wetland mitigation replaces an existing wetland or its functions by creating a new wetland, restoring a former wetland, or enhancing or preserving an existing wetland. This is done to compensate for the authorized destruction of the existing wetland. Mitigation commonly is required as a condition for receiving a permit to develop a wetland.
Wetland mitigation can be conducted directly on a case-by-case onsite basis, or through a banking system. Onsite mitigation requires that a developer create a wetland as close as possible to the site where a wetland is to be destroyed. This usually involves a one-to-one replacement. A mitigation bank is a designated wetland that is created, restored, or enhanced to compensate for future wetland loss through development. It may be, and usually is, located somewhere other than near the site to be destroyed and built by someone other than the developer. The currency of a mitigation bank is the mitigation credit. "Mitigation banks require systems for valuing the compensation credits produced and for determining the type and number of credits needed as compensation for any particular project. ***Mitigation bank credit definitions are an attempt to identify those features [of wetland] which allow reasonable approximations of replacement" (U.S. Army Corps of Engineers, 1994, p. 63). Wetland evaluation methods have been developed or are being developed to address the problem of evaluating two different wetlands so that the degradation of one can be offset by the restoration, enhancement, or creation of the other and to assign either a qualitative or quantitative value to each wetland. When buying the credits, developers pay a proportionate cost toward acquiring, restoring, maintaining, enhancing, and monitoring the mitigation bank wetland. Banks cover their costs by selling credits to those who develop wetlands, or by receiving a taxpayer subsidy.

Several problems are associated with wetland mitigation. The concept of wetland compensation may actually encourage destruction of natural wetlands if people believe that wetlands can be easily replaced. A 1990 Florida Department of Environmental Regulation study examined the success of wetland creation projects and found that the success rate of created tidal wetlands was 45 percent, whereas the success rate for created freshwater wetlands was only 12 percent (Redmond, 1992). Figure 40 shows the relative success of wetland mitigation projects overall in south Florida. The apparent factor controlling the lower success rate for freshwater wetlands was the difficulty in duplicating wetland hydrology, that is, water-table fluctuations and the frequency and seasonality of flooding.

A study of wetland mitigation practices in eight States revealed that in most of the States, more wetland acreage was destroyed than was required to be created or restored, resulting in a net loss of acreage when mitigation was included in a wetlands permit (Kentula and others, 1992). Less than 55 percent of the permits included monitoring of the project by site visit. A limited amount of information exists about the number of acres of wetlands affected by mitigation or the effectiveness of particular mitigation techniques because of the lack of followup. Several studies in Florida reported that as many as 60 percent of the required mitigation projects were never even started (Lewis, 1992). In addition, the mitigation wetland commonly was not the same type of wetland that was destroyed, which resulted in a net loss of some wetland types. (See article "Wetland Restoration and Creation" in this volume.)

RECENT PRESIDENTIAL WETLAND PROTECTION INITIATIVES

In his 1988 Presidential address and in his 1990 budget address to Congress, President Bush echoed the recommendations of the National Wetland Policy Forum.
The Forum was convened in 1987 by the Conservation Foundation at the request of EPA. The short-term recommendation of the Forum was to decrease wetland losses and increase wetland restoration and creation – the concept of "no net loss" – as a national goal. This implied that when wetland loss was unavoidable, creation and restoration should replace destroyed wetlands (Mitsch and Gosselink, 1993).

On August 25, 1993, President Clinton unveiled his new policy for managing America's wetland resources. The program was developed by the Interagency Working Group on Federal Wetlands Policy, a group chaired by the White House Office on Environmental Policy with participants from the EPA, the Corps, the Office of Management and Budget, and the Departments of Agriculture, Commerce, Energy, Interior, Justice, and Transportation. The Administration's proposals mix measures that tighten restrictions on activities affecting wetlands in some cases and relax restrictions in other areas. The Clinton policy endorses the goal of "no net loss" of wetlands; however, it clearly refers to "no net loss" of wetland acreage rather than "no net loss" of wetland functions.

The President's wetland proposal would expand Federal authority under the Section 404 program to regulate the draining of wetlands in addition to regulating dredging and filling of wetlands. Other proposed changes to the Federal permitting program include the requirement that most Section 404 permit applications be approved or disapproved within 90 days, and the addition of an appeal process for applicants whose permits are denied. The EPA and the Corps are directed to relax regulatory restrictions on activities that cause only minor adverse effects to wetlands, such as activities affecting very small wetlands.

The Clinton policy calls for avoiding future wetland losses by incorporating wetland protection into State and local government watershed-management planning. This new policy also significantly expands the use of mitigation banks to compensate for federally approved wetland development or loss. Clinton's proposals relaxed some of the current restrictions on agricultural effects on wetlands and increased funding for incentives to preserve and restore wetlands on agricultural lands. The administration policy excluded 53 million acres of "prior converted croplands" from regulation as wetlands. Also, authority over wetland programs affecting agriculture was shifted from the FWS to the NRCS, and the policy proposed increased funding for the Wetlands Reserve Program, which pays farmers to preserve and restore wetlands on their property.

"No net loss" of wetlands is a national goal.

For Additional Information: Todd H. Votteler, 4312 Larchmont Avenue, Dallas, TX 75205; Thomas A. Muir
<urn:uuid:67247005-5068-44d1-8041-82ee5ee7a66c>
CC-MAIN-2017-17
https://water.usgs.gov/nwsum/WSP2425/legislation.html
s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917121865.67/warc/CC-MAIN-20170423031201-00228-ip-10-145-167-34.ec2.internal.warc.gz
en
0.927908
5,153
4
4
NASA is building a laser-based instrument for the International Space Station whose mission is to create a 3-D map of Earth's forests and unlock mysteries of forests' role in the carbon cycle. The instrument is known as the Global Ecosystem Dynamics Investigation lidar, and it is one of two new devices being built as part of the Earth Venture Instrument program.

GEDI has a large and important task -- and not just due to the sheer amount of forest on Earth. The 3-D view of Earth's forests will specifically help scientists understand the impact of trees and forests on the carbon cycle. It will help fill in knowledge gaps about how much carbon trees store, and what the carbon release -- and environmental impact -- would be if forests were destroyed.

"One of the most poorly quantified components of the carbon cycle is the net balance between forest disturbance and regrowth," said Ralph Dubayah, one of University of Maryland's principal GEDI investigators. "GEDI will help scientists fill in this missing piece by revealing the vertical structure of the forest, which is information we really can't get with sufficient accuracy any other way."

And how GEDI will accomplish this is nothing short of incredible. It is a laser-based system, or lidar, equipped with a trio of Goddard-developed lasers. These lasers, which can be divided into 14 tracks, will scan all land between 50 degrees north and 50 degrees south of the Equator -- covering most tropical and temperate forests. The "eye-safe" lasers will send out quick pulses of light that can penetrate the dense canopy without causing harm. The pulses then reflect back to a detector in space. It is estimated that in one year, GEDI will send out around 16 billion pulses.

GEDI and these pulses, NASA explains, "can measure the distance from the space-based instrument to Earth's surface with enough accuracy to detect subtle variations, including the tops of trees, the ground, and the vertical distribution of aboveground biomass in forests."

"Lidar has the unique ability to peer into the tree canopy to precisely measure the height and internal structure of the forest at the fine scale required to accurately estimate their carbon content," stated Bryan Blair, an investigator for GEDI at Goddard Space Flight Center.

GEDI is expected to be completed in 2018, and will also be used to discover the age of trees, map biodiversity and understand the effects of climate change.

h/t the Verge
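The core ranging idea is simple time-of-flight arithmetic. Below is a minimal sketch with invented waveform timings (real GEDI processing is far more involved): each returning pulse is timed, time converts to distance at the speed of light, and canopy height falls out of the difference between the canopy-top and ground returns.

```python
C = 299_792_458.0  # speed of light, m/s

def one_way_range_m(round_trip_s: float) -> float:
    """Distance to a reflecting surface from a pulse's round-trip travel time."""
    return C * round_trip_s / 2.0

# Invented example: the canopy-top return arrives 0.2 microseconds
# before the ground return directly beneath it.
canopy_top = one_way_range_m(2.8312e-3)
ground = one_way_range_m(2.8314e-3)
print(f"canopy height = {ground - canopy_top:.1f} m")  # about 30 m
```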
<urn:uuid:9be266f0-ef3e-4fcd-9785-fa5508b7f467>
CC-MAIN-2020-34
https://www.salon.com/2014/09/10/nasa_planning_to_send_billons_of_laser_pulses_from_space_to_create_3d_map_of_earths_forests/
s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439738982.70/warc/CC-MAIN-20200813103121-20200813133121-00431.warc.gz
en
0.93308
532
4.28125
4
NASA's Cassini spacecraft has discovered a strange cloud on Titan that goes against everything scientists thought they knew about the moon's atmosphere. Titan is a cold place. This moon of Saturn is far enough from the Sun that temperatures are around 300 degrees Fahrenheit colder than on Earth. In this environment, liquid water can't exist. Instead, hydrocarbons like methane can condense and freeze, forming a cycle complete with clouds, rain, and surface oceans of liquid methane. It is the only place in the solar system besides Earth where these exist. NASA's Cassini probe was sent to observe Saturn and its moons, and besides taking incredible images, it spends a great deal of time studying Titan's atmosphere. Recently, it spotted the oddball cloud. It exists in Titan's stratosphere. It's made of a chemical called dicyanoacetylene, or C4N2. The problem is, Titan's stratosphere has almost no C4N2, so scientists aren't sure where all the stuff in the cloud came from. A possible answer is found in the Earth's own stratosphere. High above Earth's poles, water combines with pollutants like CFCs in thin, wispy clouds. The chemical reaction releases chlorine, which is present in these clouds in high concentrations despite being almost completely absent in the surrounding atmosphere. A similar process might occur on Titan. Chemicals already present in Titan's upper atmosphere could combine inside clouds, creating excess amounts of C4N2. The fact that Earth and Titan have similar processes in their upper atmosphere means that there might be other weather patterns the two have in common. Studying Titan's clouds could, in the future, provide answers to weather mysteries here on Earth.
<urn:uuid:0b677c8d-385c-472e-a1c7-f958b65feb1a>
CC-MAIN-2021-17
https://www.popularmechanics.com/space/a22953/cloud-titan/
s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038087714.38/warc/CC-MAIN-20210415160727-20210415190727-00460.warc.gz
en
0.934021
351
4
4
With this simulation from the NASA Climate website, learners explore different examples of how ice is melting due to climate change in four places where large quantities of ice are found. The photo comparisons, graphs, animations, and especially the time lapse video clips of glaciers receding are astonishing and dramatic. This music video features a rap song about some of the causes and effects of climate change with the goal of increasing awareness of climate change and how it will impact nature and humans. The website also includes links to short fact sheets with lyrics to the song that are annotated with the sources of the information in the lyrics. This is a hands-on inquiry activity using zip-lock plastic bags that allows students to observe the process of fermentation and the challenge of producing ethanol from cellulosic sources. Students are asked to predict outcomes and check their observations with their predictions. Teachers can easily adapt to materials and specific classroom issues. In this activity, students chart temperature changes over time in Antarctica's paleoclimate history by reading rock cores. Students use their data to create an interactive display illustrating how Antarctica's climate timeline can be interpreted from ANDRILL rock cores. This animation describes how citizen observations can document the impact of climate change on plants and animals. It introduces the topic of phenology and data collection, the impact of climate change on phenology, and how individuals can become citizen scientists. This interactive shows the extent of the killing of lodgepole pine trees in western Canada. The spread of pine beetle throughout British Columbia has devastated the lodgepole pine forests there. This animation shows the spread of the beetle and the increasing numbers of trees affected from 1999-2008 and predicts the spread up until 2015. Students perform a lab to explore how the color of materials at the Earth's surface affect the amount of warming. Topics covered include developing a hypothesis, collecting data, and making interpretations to explain why dark colored materials become hotter.
<urn:uuid:c06e70e1-ed55-4b3d-9dd1-9113530d3298>
CC-MAIN-2014-52
http://climate.gov/teaching/resources/search-education/informal-125/search-education/intermediate-3-5-124
s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1418802765610.7/warc/CC-MAIN-20141217075245-00144-ip-10-231-17-201.ec2.internal.warc.gz
en
0.921865
384
4.09375
4
Campylobacteriosis is food poisoning caused by the campylobacter bacterium. Campylobacteriosis occurs much more often in the summer months than in the winter months. Infants, young adults, and males are most likely to get the condition. Campylobacteriosis is usually caused by handling poultry (such as chicken or turkey) that is contaminated with the campylobacter bacterium and is raw or undercooked. For example, you can be infected by cutting poultry meat on a cutting board and then using the unwashed cutting board or utensil to prepare vegetables or other raw or lightly cooked foods. Drinking contaminated milk or water from contaminated lakes or streams can also result in infection. Campylobacteriosis usually is not spread from person to person. Some people have become infected through contact with the infected stool of a dog or cat. The symptoms of campylobacteriosis include diarrhea, cramping, stomach pain, and fever within 2 to 5 days after exposure to the bacteria. Your diarrhea may be bloody, and you may feel sick to your stomach and vomit. The illness usually lasts 1 week. Some people don't have any symptoms at all. In people with impaired immune systems, campylobacteriosis can be life-threatening. Your doctor will do a medical history and a physical exam and ask you questions about your symptoms, foods you have recently eaten, and your work and home environments. A stool culture can confirm the diagnosis. You treat campylobacteriosis by managing any complications until it passes. Dehydration caused by diarrhea and vomiting is the most common complication. Do not use medicines, including antibiotics and other treatments, unless your doctor recommends them. Most people recover completely within a week after symptoms begin, although sometimes recovery can take up to 10 days. To prevent dehydration, drink plenty of fluids. Choose water and other clear liquids until you feel better. You can take frequent sips of a rehydration drink (such as Pedialyte). Soda, fruit juices, and sports drinks have too much sugar and not enough of the important electrolytes that are lost during diarrhea. These kinds of drinks should not be used to rehydrate. When you feel like eating again, start with small amounts of food. In more severe cases, your doctor may recommend antibiotics. In rare cases, long-term problems can result from campylobacteriosis. Some people may have arthritis following campylobacteriosis. Others may develop a rare disease called Guillain-Barré syndrome. This occurs when your immune system attacks your nerves, which can lead to paralysis that lasts several weeks and usually requires that you go to a hospital. You can prevent campylobacteriosis by practicing safe food handling. It is important to pay particular attention to food preparation and storage during warm months when food is often served outside. Bacteria grow faster in warmer weather, so food can spoil more quickly and possibly cause illness. Do not leave food outdoors for more than 1 hour if the temperature is above 90°F (32°C), and never leave it outdoors for more than 2 hours. To learn more about Healthwise, visit Healthwise.org. © 1995-2021 Healthwise, Incorporated. Healthwise, Healthwise for every health decision, and the Healthwise logo are trademarks of Healthwise, Incorporated.
<urn:uuid:2b9677a1-fb16-4b2b-8f26-5b27cf5f6f38>
CC-MAIN-2021-39
https://www.cigna.com/individuals-families/health-wellness/hw/medical-topics/campylobacteriosis-te6319spec
s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057329.74/warc/CC-MAIN-20210922041825-20210922071825-00620.warc.gz
en
0.938177
692
4.09375
4
It's important to choose a book that your child is interested in. Books come in a lot of different varieties or genres. A genre is a category characterized by similarities in form, style or subject matter. This article will discuss some different types of book genres that you and your child may enjoy exploring.

Some different types of book genres include:

Biography or autobiography – A biography is a nonfiction (true) account of someone's life. It is written by someone other than the subject of the biography. An autobiography is a nonfiction (true) account of someone's life. It is written by the subject of the autobiography.

Drama/play – Drama is divided into different character parts that can be read by different people or in different voices.

Fantasy – In fantasy, events occur that are outside of the normal ways the universe operates. Magic is very important, and stories often involve journeys or quests. Fantasy is different from science fiction because science fiction is usually set in the future and also involves technology (see the science fiction genre below).

Fiction – Fiction is the form of any work that deals, in part or in whole, with information or events that are not real. They are invented by the author.

Graphic novel – The term graphic novel is generally used to describe any book in a comic format that resembles a novel in length and narrative development.

Historical fiction – A historical novel is set in a time earlier than when it was written. It tries to capture the spirit and social conditions of a past age with realistic detail that is faithful to historical fact.

Mystery – A mystery is a puzzle in which the reader receives clues and solves it step by step throughout the book. There is usually a conclusion that solves the mystery.

Nonfiction – Nonfiction is true. Its primary function is to describe, inform, explain, persuade and/or instruct. Although nonfiction is true, it can still be entertaining.

Science fiction – This genre often involves science and technology of the future. Science fiction is frequently set in space or a different universe or world. It often uses some real theories of science.

Poetry – A poem is a collection of words that express an emotion or idea, sometimes with a specific rhythm.

If your child is new to learning about different genres, this is a great time to help them explore books from each one. Consider helping your child find two books and authors from each genre. You can print this out and let your child list them.

- Biography or autobiography
- Graphic novel
- Historical fiction

Encourage your child to read books from different genres. This will enhance their reading level and encourage them to try new things. They'll find a genre that probably suits them more than another. That's normal and to be expected. One book doesn't suit everyone, and all children have different tastes. The most important thing is that your child is reading!

Lesley Woodrum, WVU Extension Agent, Summers County
<urn:uuid:ad968e71-fc45-4e3d-827a-88b1882ffd6c>
CC-MAIN-2022-21
https://extension.wvu.edu/youth-family/youth-education/literacy/book-genres
s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662527626.15/warc/CC-MAIN-20220519105247-20220519135247-00378.warc.gz
en
0.950339
642
4.15625
4
The Exxon-Valdez oil spill of March 24, 1989, had long-lasting effects on Alaska's environment, animals and way of life. At the time of the spill, hundreds of volunteers stepped forward to clean up seabirds and other animals drenched in oil. Their work helped a modest number of animals, but many still died, and recovery efforts for a number of species continue after 24 years. According to the National Wildlife Federation, the death toll of individual species of native Alaskan wildlife is still being tallied as of 2013.

In the days immediately following the spill -- which, at the time, was the worst in U.S. history -- many animals died, including at least 100,000 and possibly as many as 250,000 seabirds. More than 2,800 sea otters and 12 river otters died immediately. At least 300 harbor seals and almost 250 bald eagles were also killed. The 22 orcas living in the area at the time were killed, as were countless fish. Small organisms were killed by the trillions, leaving the animals that prey on them with nothing to eat, causing even more deaths. In the following days and weeks, these numbers climbed much higher.

How They Died

Aside from the reef fish and other animals nearby when the Exxon Valdez ran aground, millions of animals died as a direct or proximate result of the spill. Animals covered in oil tried vainly to clean their bodies by licking themselves, only to be poisoned by the toxins in the oil. Birds weighted down by the heavy oil were unable to fly. Otters depend upon the unique design of their fur to help them tolerate extremely cold climates. When covered in oil, their fur is unable to act as a protective covering, so otters die of hypothermia. Whales are killed when they eat fish covered in oil or when their blowholes are plugged with oil, making it impossible for them to breathe.

Ten Years After

Ten years after the Exxon Valdez oil spill, scientists from the University of North Carolina at Chapel Hill reported in the journal "Science" that many animal species were still recovering and the damage to their habitats had not significantly decreased. It was once thought that the number of animals killed acutely -- that is, immediately following the spill -- would be much higher than any subsequent numbers. But Chapel Hill's researchers reported in 2009 that Alaska's coastal ecosystem continues to show toxins that affect wildlife.

Twenty Years After

In 2007 -- nearly two decades after the oil spill -- the National Oceanic and Atmospheric Administration reported that 21,000 gallons of crude oil still pollute the ecosystem within a 450-mile radius -- and the oil continues to kill animals within its reach. The problem persists because the spill is contained within the Prince William Sound, so it doesn't biodegrade as it would in the open ocean. The orca pod affected by the spill never recovered. Sea otters and ducks, which forage for food in the beaches, need only scratch the surface to find layers of oil soaked into the sand. The oil remains toxic to these animals. Oceana, a conservation organization, reports that some species of loons, salmon, seals, ducks, herrings, pigeons, mussel and clam populations have never fully recovered. Commercial fishing, a $286 million industry, has not completely resumed in the area.
Sources:
- Scientific American: Environmental Effects of Exxon Valdez Spill Still Being Felt
- GoodHousekeeping.com: 4 Dirty Secrets of the Exxon Valdez Oil Spill
- National Wildlife Federation: Voices from the Exxon Valdez Oil Spill: "The Day the Water Died"
- Mother Nature Network: The 13 Largest Oil Spills in History
- American Association for the Advancement of Science: Long-Term Ecosystem Response to the Exxon Valdez Oil Spill
- National Geographic: Exxon Valdez Anniversary: 20 Years Later, Oil Remains
- Oceana: Exxon Valdez Oil Spill Facts
Children need to know that letters stand for phonemes and that spellings map out the phonemes in spoken words in order for them to learn to read and spell words. Short vowels are the toughest to identify. The goal of this activity is to help students recognize the phoneme /o/ in written and spoken words. In this activity students will learn the phoneme /o/ through a meaningful representation, the letter symbol, and by finding /o/ in words.

Materials:
- Doc in the Fog (Educational Insights)
- /o/ tongue twister: "Oliver observes offenses often."
- Letter tiles: b, c, i, k, l, m, o, p, s, t, x
- Pictures of objects: mop, box, clock, shop
- Student assessment worksheet

Procedure:

1. Explain why the new idea is valuable: "Why do you think it is important for us to learn the sound /o/ as well as the letter o?" In order to read and spell words, it is important to recognize each sound in a word. What are other reasons that recognizing the sounds in words is important?

2. "Raise your hand if you have ever been to the doctor's office and the doctor looked down your throat. What does he tell you to say when he does this? That's right, he tells you to open up and say /o/." I want all of us to pretend that we are at the doctor's office and the doctor has to look in our mouths while we say /o/. (Everyone should open their mouths and stick out their tongues, as if the doctor were really looking, to practice. Cue students with a 1-2-3 count.)

3. Okay, now we are going to try a tongue twister that involves several words with /o/: "Oliver observes offenses often." The teacher says it once, then the students repeat. Now, every time you hear a word with the /o/ sound, I want you to really stretch out the /o/ at the beginning of the word. Let's try. I'll model first: "Ooooliver ooobserves ooofenses oooften." Now everyone try it together. (Cue 1-2-3.)

4. Now that we know how to recognize the /o/ sound in words, let's do some practice activities. I'm going to say two words and I want you to tell me which word you heard the /o/ sound in. Do you hear /o/ in hot or hat? Cat or dog? Offense or defense? Note or knot? Ship or shop? Airplane or helicopter? Great job!

5. Now we are going to practice spelling and reading words by using our letterboxes. First, I am going to ask you to make words such as "stop." You need to place each letter that represents a sound you hear in a box. For example [model]: stop - /s/ - /t/ - /o/ - /p/. I hear the /s/ first, so let's place the letter that makes the /s/ sound in the first box. [Model] Does everyone have the letter s in the first box? Great! Now let's finish spelling our word. Does everyone have /t/-/o/-/p/? Great! That's t, o, and p. Now you try the following words. "I will put the tiles together to make the words and I want you to read them to me. [Teacher places the s, t, o, p tiles together to make the word stop.] Now let's go through the list of words together. I want you to read each word aloud."

6. Now I want you to read Doc in the Fog aloud to me. [Book talk] Do you like magic? Doc is a magician. We have to read the book to see what magic tricks are in store for us.

Assessment: Students will be assessed on recognizing /o/ in spoken words as well as during the letterbox lesson. Students will also be given a worksheet after reading the book. The worksheet provides pictures of different objects, some with the /o/ sound in their names and some without. The teacher will assess by informal observation at each table and by listening to the students read the names of the objects.
The teacher should read all the names of the objects after the students begin circling the ones that have the /o/ sound.

References:
- Doc in the Fog (Educational Insights)
- Melanie Tew: It's Obvious You're Sick, http://www.auburn.edu/academic/education/reading_genie/persp/tewbr.html
- Heather Langley: Dr. Ollie Says Open Wide and Say /o/, http://www.auburn.edu/academic/education/reading_genie/voyages/langleybr.html
At this stage we can draw a distinction between sound and unsound arguments. An argument is called sound if and only if it is valid and all its premises are true. Otherwise, the argument is called unsound. The following is an example of a sound argument:

All mammals have lungs. All rabbits are mammals. Therefore, all rabbits have lungs.

Here all the premises are true and the argument is valid. Hence, it is a sound argument. On the other hand, an argument is unsound if it is either invalid or some of its premises are false. Consider:

No mammals have lungs. No whales are mammals. Therefore, no whales have lungs.

Here the argument is invalid and the premises are also false. Hence it is unsound. Further, even if an argument is valid, if some or all of its premises are false then the argument is still unsound. Consider the following example:

No insects have six legs. All spiders are insects. Therefore, no spiders have six legs.

Here both the premises are false but the argument is valid. Hence, it too is an unsound argument. Thus mere validity of an argument does not make the argument sound, because there are valid arguments that are not sound. To say that an argument is unsound amounts to the claim that the argument is either invalid or has at least one false premise. Thus the soundness of an argument implies validity as well as the truth of all its premises. But the unsoundness of an argument does not imply invalidity, because there are unsound arguments that are valid.

At this stage the following question may be asked: why should logicians not confine their attention only to sound arguments? The answer is that we cannot study only sound arguments, however interesting that might be. To know that an argument is sound we must know that all its premises are true, and knowing the truth of the premises is not always possible. Further, we are often interested in arguments whose premises are not known to be true. For example, when a scientist tests a scientific hypothesis or even a theory, he or she very often deduces consequences from the hypothesis or theory in question and compares these consequences with the data; if the result tallies, then the hypothesis or theory is verified. Here the investigator cannot know the truth of the hypothesis or theory prior to the process of testing. If the truth of the theory or hypothesis were known to the scientist prior to the verification, the verification would be pointless. So, to confine our attention to sound arguments only would be self-defeating. But this does not make sound arguments logically uninteresting, because if by some means we know that an argument is sound, then we may infer the truth of its conclusion.
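The validity half of this distinction can be checked mechanically for syllogisms like the ones above. The sketch below is my own illustration (the helper names no_are and find_counterexample are invented for this example, not part of the original text): it searches small finite models for a counterexample, that is, a way of making every premise true while the conclusion is false. Finding one proves the form invalid; for syllogisms with a handful of terms, a tiny domain suffices to expose any counterexample.

```python
from itertools import combinations, product

def no_are(a, b):
    """Truth of 'No A are B' when A and B are extensions (sets of objects)."""
    return not (a & b)

def find_counterexample(premises, conclusion, domain_size=2):
    """Return term extensions making every premise true and the conclusion
    false, or None if no such model exists (so the form is valid here)."""
    domain = range(domain_size)
    powerset = [frozenset(s) for r in range(domain_size + 1)
                for s in combinations(domain, r)]
    for m, l, w in product(powerset, repeat=3):
        if all(p(m, l, w) for p in premises) and not conclusion(m, l, w):
            return m, l, w
    return None

# The whales form: "No M are L; no W are M; therefore no W are L."
whales = find_counterexample(
    premises=[lambda m, l, w: no_are(m, l), lambda m, l, w: no_are(w, m)],
    conclusion=lambda m, l, w: no_are(w, l))
print(whales)  # a counterexample is found, so the form is invalid
```

Swapping in a helper all_are(a, b) defined as a <= b for the second premise gives the spiders form, for which the search finds no counterexample: that form is valid, yet still unsound, since validity says nothing about whether the premises are actually true.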
The stability of life on Earth depends on the biogeochemical cycles of carbon and other essential elements, which in turn depend on microbial ecosystems which are, at present, poorly understood. EAPS Professor Daniel Rothman has a plan for a major new research program aimed at gauging the potential for another mass extinction event, like the end-Permian Great Dying. Five times in the last 500 million years, more than three quarters of living species have vanished in mass extinctions. Each of these events has been associated with a significant change in Earth's carbon cycle. Some scientists think that human-induced environmental change—including our massive discharges of carbon into the atmosphere—may soon cause a sixth major extinction. Is such a catastrophe really possible?

The key to answering this question lies in the recognition that the Earth's physical environment and the life it supports continuously interact as a closely coupled system. The core of this interaction is the carbon cycle. Plants and microorganisms, both on land and in the surface layers of the ocean, take carbon dioxide from the atmosphere and "fix" the carbon in organic matter through the process of photosynthesis. Other organisms—most importantly microbes, but also including animals and people—metabolize organic matter, releasing carbon back to the atmosphere, a process known as respiration. But while photosynthesis is visible in the greening of leaves and the spectacular algal blooms on the ocean surface, respiration is neither visible nor well understood. That's because respiration occurs in different places and at very different timescales. In the ocean's surface layers, for example, respiration happens fairly quickly—minutes to months. A small percentage of organic matter escapes degradation and drops slowly to the bottom of the ocean, becoming buried in the sediments, where respiration can take thousands of years. So over time, lots of organic carbon accumulates at the bottom of the ocean. And some of that gets embedded in sedimentary rocks, where the effective timescale for respiration can be many millions of years. Virtually all of the fossil fuels we burn—oil, coal, natural gas—come from that latter reservoir of organic carbon.

Over the last billion years, including through multiple ice ages, the Earth's carbon cycle has remained mostly stable. That means that the process of fixing carbon through photosynthesis and the process of respiration have remained approximately in balance. But because the ocean sediments contain much more carbon than the atmosphere—at least 10 times as much—even small changes in respiration rates could have a huge, destabilizing impact. A disruption in the carbon cycle that rapidly released large amounts of carbon dioxide, for example, could potentially cause mass extinctions—by triggering a rapid shift to warmer climates, or by acidifying the oceans, or by other mechanisms.

The conventional explanation for what killed off the dinosaurs and caused the most recent, end-Cretaceous mass extinction was a huge asteroid impact on Earth—which certainly caused a massive debris shower and likely darkened the sky, perhaps for years. This and some other extinctions are also associated with massive and widespread volcanism. Are these sufficient to trigger mass extinctions, even in the deep oceans?
In at least one case, our calculations strongly suggest that these physical events, by themselves, were not enough to explain the observed changes—that whatever triggering role impacts or volcanism may have played, other factors contributed to and amplified changes in the carbon cycle. We believe that acceleration of the microbial respiration rate must have been involved, thus releasing carbon from the deep ocean and sediment reservoirs. In any event, the evidence is clear that significant disruptions or instabilities have punctuated an otherwise stable carbon cycle throughout Earth’s history, with changes so rapid or so large that they triggered a shift to a new and different equilibrium, with profound impact on all living things. One example is the microbial invention, about two-and-a-half billion years ago, of photosynthesis—which resulted in a transition from an atmosphere without oxygen to a stable oxygenated state. That in turn enabled the evolution of macroscopic, multi-cellular life, including us. Another example is the end-Permian extinction, the most severe in Earth history, which was immediately preceded by an explosive increase of carbon in the atmosphere and the oceans. A recent research paper (Rothman et al., 2014) attributes the surge of carbon to the rapid evolution of a new microbial mechanism for the conversion of organic matter to methane, which accelerated respiration. In both cases, the disruption of the carbon cycle was driven or at least accelerated by life itself—microbial life. Other mass extinctions are also associated with severe disruption of the carbon cycle, although the specific triggering mechanisms are not known. But what seems clear is that small changes in the ways microbes respire organic matter can have considerable global impact. Might the current human releases of carbon trigger such a change as well, enabling microorganisms to accelerate their conversion of the huge reservoir of marine sedimentary carbon into carbon dioxide? Understanding the mechanisms of respiration in detail—including in the deep ocean and the sediment reservoirs of organic carbon—is thus critical to understanding the potential for another mass extinction. For the modern carbon cycle, the principal problem concerns the fate of marine organic carbon that resists degradation for decades or longer. Two reservoirs are critical: dissolved organic carbon, which can persist for thousands of years, and sedimentary organic carbon, which can persist for millions of years. Imbalances in the carbon cycle are determined by shifts of these timescales or respiration rates. These rates are especially hard to determine when organic compounds are complex and/or the organic matter is tightly embedded in sedimentary rocks. New tools will enable us to measure how specific enzymes bind to specific organic molecules found in seawater. And controlled experiments will measure how microbes, organic matter, and minerals interact in sediments, developing new methods such as high-resolution calorimeters to measure the rates of degradation in the lab and in the field. Unlike the major extinction events already mentioned, many past disturbances of the carbon cycle had no large-scale impact. What sets them apart? Sedimentary rocks deposited at different times record indications of environmental change, but the interpretation of these signals is an evolving science. The project will reconstruct, for as many events as possible, the sequence of environmental changes, focusing on fluxes of carbon. 
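The sensitivity described above—small shifts in respiration rates or timescales set against a much larger sedimentary reservoir—can be illustrated with a toy two-box model. This is a minimal sketch of my own with invented parameters, not the research group's model: carbon is exchanged between an "atmosphere" box and a "sediment" box holding ten times as much carbon, and a modest nudge to the respiration rate constant moves the atmospheric stock to a new equilibrium.

```python
# Toy two-reservoir carbon cycle (illustrative only, not the EAPS model).
# Photosynthesis moves carbon out of the atmosphere at rate k_fix * atm;
# respiration returns it from the much larger sediment pool at k_resp * sed.

def run(k_resp, k_fix=0.10, atm=1.0, sed=10.0, dt=0.1, steps=5000):
    """Integrate the two-box system and return the final atmospheric stock."""
    for _ in range(steps):
        flux = k_resp * sed - k_fix * atm   # net flow into the atmosphere
        atm += flux * dt
        sed -= flux * dt
    return atm

base = run(k_resp=0.010)        # balanced: fixation equals respiration
perturbed = run(k_resp=0.011)   # a 10% increase in the respiration rate
print(base, perturbed)          # the atmospheric stock settles ~9% higher
```

In this linear sketch the system simply finds a new equilibrium; the article's point is that evolutionary feedbacks acting on the respiration rate can push the real system beyond such smooth adjustment and into genuine instability.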
By employing mathematical techniques similar to those used to establish the modern theory of chaos, we expect to discover distinct classes of behavior that separate true instabilities from more gradual environmental change. During periods of unstable growth, important changes in the molecular composition of organic matter are likely. By analyzing these changes, we expect to discover mechanisms associated with or leading to instabilities of the Earth's carbon cycle. Especially pertinent is the potential for rapid evolution in microbial ecosystems. Rapid evolution modifies the structure of populations and thus can alter respiration rates—with impact on all components of ecosystems, potentially leading to instability, disruption, and the emergence of new stable states.

The central challenge will be to use these new findings to develop a theory of instability for the Earth's carbon cycle system. Linking the specific mechanisms discovered in our studies of the past and present carbon cycles to such a theory is a key objective. It requires learning how to translate molecular, genomic, and microbial metabolic information into an understanding of evolutionary feedbacks that can drive instability and mass extinctions.

Collectively, this work amounts to the design and execution of a stress test of the carbon cycle system. Our studies of the modern carbon cycle will provide a base case. Theoretical models of carbon cycle dynamics will yield specific hypotheses for the conditions that determine its unstable evolution. These hypotheses will then be tested using geochemical signals derived from past extreme environmental events. That should provide an explicit understanding of the range of stability of the carbon cycle system and the potential for a sixth extinction.

Reference: Daniel H. Rothman, Gregory P. Fournier, Katherine L. French, Eric J. Alm, Edward A. Boyle, Changqun Cao, and Roger E. Summons (2014), "Methanogenic burst in the end-Permian carbon cycle," Proceedings of the National Academy of Sciences, vol. 111, no. 15, pp. 5462–5467, doi: 10.1073/pnas.1318106111
From November 1989, the mood on the streets changed. More and more demonstrators were chanting, "We are one people", instead of the earlier slogan "We are the people". Few people still believed the state could be reformed. At the first free parliamentary elections in the GDR in March 1990 the population took a decision in favour of German unification. Some of the new civil rights organisations founded in autumn 1989 campaigned for reforms within the GDR and for gradually bringing the two German states closer together. In East and West, unification sceptics were afraid the GDR would be "sold out" and warned against a revival of right wing nationalistic ideology in Germany. Yet with economic and political crisis looming, people were losing patience and calls for German unification were mounting. West German politicians also initially envisaged phased unification of the GDR and Federal Republic - but popular pressure forced decisions to be made more quickly. The first free democratic elections in the GDR were held on 18 March 1990; they were won by the Alliance for Germany coalition, which stood for rapid unification of the two German states.
Scientists study the Earth and natural disasters through the science of seismology – how seismic waves move through the Earth – and seismometers are their most important tool. Geophysicist Mark Zumberge at the Scripps Institution of Oceanography is developing a new breed of seismometer to get a better look inside Earth and thereby help scientists understand and predict natural hazards.

Mark Zumberge: Basically a seismometer is a box with a spring inside with a mass hanging inside, and as the ground shakes the mass goes up and down.

Zumberge said a conventional seismometer records, using electronic circuit boards, how its mass is displaced by waves of seismic energy. The electronics make the instrument bulky and hard to use in hot environments, like the interior of the Earth.

Mark Zumberge: We've come up with a way, using optical fibers, to bring laser light to and from the seismometer, to make very precise measurements of the vibrations of the mass of the seismometer.

The optical technology allows the device to be deployed in boreholes, narrow shafts drilled into the Earth. The advantage, Zumberge said, is that there's not as much background activity that must be separated from the seismic signal.

Mark Zumberge: So we can study very large earthquakes nearby, but distant quiet earthquakes as well.

Zumberge said his new seismometer will provide a better perspective on what happens inside the Earth during an earthquake.

Mark Zumberge: Understanding how these processes evolve and how they affect us is important, in the long run, to predict and understand these natural hazards. A huge amount of what we know about the earth comes from seismology.

Zumberge said that the information scientists get from seismometers helps them create pictures of what's inside the Earth.

Mark Zumberge: Seismic waves penetrate the Earth, and depending on how waves travel through the Earth and how they're reflected, what they bounce off of, how fast they go – all those aspects of wave propagation in the earth help us make pictures of what's inside the Earth.

He said the problem with conventional seismometers is that they run on electronic circuit boards, which cannot withstand the hot temperatures inside the Earth, and are tethered by cables. The optical seismometer – which uses a beam of laser light – can be deployed into boreholes drilled deep inside the Earth, a better environment for taking precise measurements. He said an improved seismometer might help scientists discover new signals of what goes on in the earthquake process, but he added that scientists are still a ways from predicting earthquakes in advance.

Mark Zumberge: Maybe someday we'll learn enough to forecast better when earthquakes might occur, but that's a long road.
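Zumberge's "box with a spring and a mass" is the classic damped harmonic oscillator, and its response to ground shaking is easy to simulate. The sketch below is my own illustration with made-up parameters (not Scripps code): the proof mass's displacement z relative to the frame obeys m z'' + c z' + k z = -m u''(t) for ground motion u(t), and z is what the readout – electronic or optical – actually measures.

```python
import numpy as np

# Textbook seismometer model: damped mass on a spring inside a shaken frame.
# z = displacement of the mass relative to the frame (the measured quantity),
# driven by the ground acceleration u''(t). Parameters are illustrative only.

m, k, c = 1.0, 100.0, 2.0                    # mass, stiffness, damping
dt = 1e-3
t = np.arange(0.0, 10.0, dt)
u = 1e-3 * np.sin(2 * np.pi * 0.5 * t)       # toy ground motion, 0.5 Hz
u_acc = np.gradient(np.gradient(u, dt), dt)  # ground acceleration

z, v, record = 0.0, 0.0, []
for a_ground in u_acc:
    a = (-c * v - k * z) / m - a_ground      # relative acceleration of mass
    v += a * dt
    z += v * dt
    record.append(z)

print(f"peak proof-mass deflection: {max(abs(np.array(record))):.2e} m")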
As the Large Hadron Collider (LHC) smashes together protons at a centre-of-mass energy of 13 TeV, it creates a rich assortment of particles that are identified through the signature of their interactions with the ATLAS detector. But what if there are particles being produced that travel through ATLAS without interacting? These "invisible particles" may provide the answers to some of the greatest mysteries in physics.

One example is Dark Matter, which appears to make up 85% of the mass in the Universe but has not yet been conclusively identified. We learned of its existence through astrophysical observations, including galaxy formation and gravitational lensing. However, we know more about what it isn't than what it is. There is no single theory of Dark Matter; different predictions have different implications for its properties and how it interacts.

The invisible particles produced in LHC collisions carry away energy, resulting in an apparent imbalance in the energy/momenta of the observed visible particles. Different theories predict that, if the invisible particles exist, more events with large imbalance and other distinctive patterns of visible particles could be detected by ATLAS. Comparing the number of such events predicted by theory to the number of events observed in the detector is a way of searching for invisible particles indirectly.

While this approach has proven successful, it has limitations. What if all our theoretical models of Dark Matter are wrong? What if an entirely different phenomenon is the cause of invisible particles? Currently, if theoretical models are shown to be incorrect, it can be difficult and time-consuming to re-use the data to test new models. To do so requires an understanding of how these particles were recorded in the detectors, how the events were selected, and how the Standard Model processes that mimic these particle patterns were modelled.

ATLAS physicists have therefore developed a new measurement-led approach, which is designed to be detector-independent and allows for easy re-interpretation of the data in the future. In this approach, a quantity Rmiss is defined which is sensitive to the production rate and properties of any invisible particle(s). This quantity is measured versus various properties of the collision events, including the amount of momentum imbalance and the energy/momenta of the visible particles. Not just the value of this quantity, but also how it changes with these measured properties, is found to provide sensitivity to invisible particles. Known decays of Z bosons – produced in LHC collisions – into invisible neutrinos mean this quantity is non-zero even in the absence of a new invisible phenomenon. The quantity is carefully corrected for detector inefficiencies, leaving a measurement free from experimental bias and independent of any new physics hypothesis (Figure 1). Any physicist can then easily compare the predictions of their model against this measurement.

To demonstrate the new approach, the measurement is used to test three distinctly different theoretical models of Dark Matter, where it is produced either (1) via the strong force, (2) through the decays of Higgs bosons, or (3) via the electroweak force. No evidence of Dark Matter is observed, and so ATLAS is able to place stringent constraints on these theories (Figure 2).
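The "momentum imbalance" at the heart of this search is simple to state: in the plane transverse to the beams, the visible momenta of a collision should sum to zero, so any net vector sum signals momentum carried off invisibly. The sketch below is my own toy illustration, not ATLAS software, and the event values and counts are invented; it computes the missing transverse momentum of one event and a schematic event-count ratio in the spirit of Rmiss. The real observable involves detector corrections and binning far beyond this.

```python
import numpy as np

# Toy calculation of missing transverse momentum (MET) for one event:
# MET is the magnitude of minus the vector sum of visible transverse momenta.

def met(pt, phi):
    """MET from arrays of visible transverse momenta (GeV) and azimuths."""
    px, py = np.sum(pt * np.cos(phi)), np.sum(pt * np.sin(phi))
    return np.hypot(px, py)

# One hypothetical event: two visible jets, nearly but not quite back-to-back.
print(met(np.array([120.0, 45.0]), np.array([0.1, 2.9])))  # large -> recoil?

# Schematic ratio in the spirit of Rmiss: counts in a missing-momentum
# selection over counts in a well-measured reference selection.
n_signal_region, n_reference = 42, 1000   # invented numbers, illustration only
print(n_signal_region / n_reference)
```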
The constraints are competitive with existing approaches that aim to test these specific theories, and complementary to measurements from space-based indirect detection experiments. So, no matter what mysteries lie in wait in the "invisible" realm, ATLAS has the techniques it needs – both now and in the future – to continue to learn more about the Universe.

Links:
- Measurement of detector-corrected observables sensitive to the anomalous production of events with jets and large missing transverse momentum in proton-proton collisions at 13 TeV using the ATLAS detector (arXiv: 1707.03263)
- See also the full lists of ATLAS Conference Notes and ATLAS Physics Papers.
Children learn in different ways, and one of the ways in which they learn is by applying logic and deduction, essentially learning through exploring. To nurture this approach to learning, introduce simple puzzles and toys that encourage thinking and problem solving. You can boost the logical power of older children by stimulating their thinking with mental challenges. The ability to learn through logical thinking is a skill that will assist a child throughout their life - encouraging this style of thinking from early on will create a solid foundation on which they can grow with time.

Children love playing games: it's fun, it's easy and it's a great way of learning without even knowing it! Early on, games like peek-a-boo and pat-a-cake and other nursery rhymes form the basis of children's games. This is how the idea of games actually starts. The little ones laugh and smile and begin to understand cause and effect: i.e. each time teddy pops out from behind the cushion, it will make them jump and they will laugh! It sounds simple, but it's an early form of game-playing. They will then progress to all sorts of other games: easy box games, then number, colour and letter games and eventually board games. At each stage they are learning different things and experiencing different ideas: the notion of a winner and a loser; the notion of practising at something to get better; the idea that you need to make an effort and try hard at something to then enjoy the feeling of doing well. What are the benefits of playing games? Does it really help in any way other than passing the time?

You may not approve of it, but lying is an important part of cognitive development and ultimately a part of growing up! Contrary to what you might think, new research has found that the earlier toddlers start telling lies, the more successful they will be in later life. The research was carried out by Dr Kang Lee and his team at the Child Development Research Group at the University of Toronto in Canada. Lee suggests that lying requires the child to manipulate a series of fabricated events to try to make them concur. The skills required to do this include the ability to gather information from different sources and manage data towards a desirable outcome. All of this requires an awful lot of thinking and brain power - an ability that young children rarely display. However, those that do are probably going to turn out to be more intelligent than their less capable peers. Whilst age two is the youngest age at which children will be able to lie, for many children this 'skill' doesn't arrive until nearer the age of four. The ability to lie is situated within cultural experience - Western toddlers will lie to protect friends whilst Chinese toddlers will tell a lie in order to protect their team.

Kang Lee offers an insightful way to tell whether a toddler is telling the truth or not. As they answer a question, watch their body language. If a toddler glances to the right as they offer an explanation, then they are most likely fibbing - looking towards the right suggests that they are using the part of the brain that visualises something they haven't experienced directly. If they look to their left then they are likely to be using the part of the brain that recollects events, and therefore they are most likely telling the truth.
Dr. Lee suggests that parents should take advantage of these tell-tale signs in order to spot whether young children are lying and to act appropriately - this will help to teach that lying is not an acceptable option.

It stands to reason that art and craft help to develop fine muscle control in your babies, and that kicking, running and chasing games improve their physical strength and control, but how do you kick-start your baby's ability to think and solve problems? Funnily enough, abstract thinking and analytical skills are the focus of many Fisher Price toys created for babies and toddlers. You may not have thought about it, but toys such as shape sorters, simple jigsaws starting with just two pieces per puzzle, old-fashioned building blocks and musical instruments all help to develop analytical and thinking skills in babies and toddlers.

Walk into a toyshop and so many of the toys available today were available in a similar form in our own childhood - many were available in similar form during our parents' and grandparents' childhoods too! Science has long told us that interacting with such toys helps us explore the world and develop our thinking; perhaps what is more surprising is that there are so few innovations in children's toys over the last two generations. That comes down to the fact that human development hasn't changed in that time, and for a long time we have had a pretty good understanding of it.

When nurturing your children, or children that you work with, introduce a good balance of 'thinking' games and activities. This is only one area of child development, but it can be easy to overlook its importance if you particularly enjoy more physical activities. That is one reason that the Early Years Foundation Stage is so important - by following the guidelines and ticking off boxes for areas that you have pursued, you will automatically be delivering a well-balanced development plan to your little ones.

If you aren't the most creative person and struggle for ideas in areas of EYFS, or you simply want ideas that you can adopt and develop, then sign up to ToucanLearn now! We offer hundreds of activities concentrating on key development skills, and for premium members we link them all to EYFS too so that you can track progress with your little ones. If you are toying with the idea of subscribing to ToucanLearn, then there are several hundred good reasons for doing so!
Pre-Columbian civilization in the fertile, wooded region that is now Paraguay consisted of numerous seminomadic, Guarani-speaking tribes of Indians, who were recognized for their fierce warrior traditions. They practiced a mythical polytheistic religion, which later blended with Christianity. Spanish explorer Juan de Salazar founded Asuncion on the Feast Day of the Assumption, August 15, 1537. The city eventually became the center of a Spanish colonial province. Paraguay declared its independence by overthrowing the local Spanish authorities in May 1811. The country's formative years saw three strong leaders who established the tradition of personal rule that lasted until 1989: Jose Gaspar Rodriguez de Francia, Carlos Antonio Lopez, and his son, Francisco Solano Lopez. The younger Lopez waged a war against Argentina, Uruguay, and Brazil (War of the Triple Alliance, 1864-70) in which Paraguay lost half its population; afterwards, Brazilian troops occupied the country until 1874. A succession of presidents governed Paraguay under the banner of the Colorado Party from 1880 until 1904, when the Liberal party seized control, ruling with only a brief interruption until 1940. In the 1930s and 1940s, Paraguayan politics were defined by the Chaco war against Bolivia, a civil war, dictatorships, and periods of extreme political instability. Gen. Alfredo Stroessner took power in May 1954. Elected to complete the unexpired term of his predecessor, he was re-elected president seven times, ruling almost continuously under the state-of-siege provision of the constitution with support from the military and the Colorado Party. During Stroessner's 34-year reign, political freedoms were severely limited, and opponents of the regime were systematically harassed and persecuted in the name of national security and anticommunism. Though a 1967 constitution gave dubious legitimacy to Stroessner's control, Paraguay became progressively isolated from the world community. On February 3, 1989, Stroessner was overthrown in a military coup headed by Gen. Andres Rodriguez. Rodriguez, as the Colorado Party candidate, easily won the presidency in elections held that May and the Colorado Party dominated the Congress. In 1991 municipal elections, however, opposition candidates won several major urban centers, including Asuncion. As president, Rodriguez instituted political, legal, and economic reforms and initiated a rapprochement with the international community. The June 1992 constitution established a democratic system of government and dramatically improved protection of fundamental rights. In May 1993, Colorado Party candidate Juan Carlos Wasmosy was elected as Paraguay's first civilian president in almost 40 years in what international observers deemed fair and free elections. The newly elected majority-opposition Congress quickly demonstrated its independence from the executive by rescinding legislation passed by the previous Colorado-dominated Congress. With support from the United States, the Organization of American States, and other countries in the region, the Paraguayan people rejected an April 1996 attempt by then-Army Chief Gen. Lino Oviedo to oust President Wasmosy, taking an important step to strengthen democracy. Oviedo became the Colorado candidate for president in the 1998 election, but when the Supreme Court upheld in April his conviction on charges related to the 1996 coup attempt, he was not allowed to run and remained in confinement. 
His former running mate, Raul Cubas Grau, became the Colorado Party's candidate and was elected in May 1998 in elections deemed by international observers to be free and fair. However, his brief presidency was dominated by conflict over the status of Oviedo, who had significant influence over the Cubas government. One of Cubas' first acts after taking office in August was to commute Oviedo's sentence and release him from confinement. In December 1998, Paraguay's Supreme Court declared these actions unconstitutional. After delaying for two months, Cubas openly defied the Supreme Court in February 1999, refusing to return Oviedo to jail. In this tense atmosphere, the murder of Vice President and long-time Oviedo rival Luis Maria Argana on March 23, 1999, led the Chamber of Deputies to impeach Cubas the next day. The March 26 murder of eight student antigovernment demonstrators, widely believed to have been carried out by Oviedo supporters, made it clear that the Senate would vote to remove Cubas on March 29, and Cubas resigned on March 28. Despite fears that the military would not allow the change of government, Senate President Luis Gonzalez Macchi, a Cubas opponent, was peacefully sworn in as president the same day. Cubas left for Brazil the next day and has since received asylum. Oviedo fled the same day, first to Argentina, then to Brazil. In December 2001, Brazil rejected Paraguay's petition to extradite Oviedo to stand trial for the March 1999 assassination and the "marzo paraguayo" incident.

Gonzalez Macchi offered cabinet positions in his government to senior representatives of all three political parties in an attempt to create a coalition government. While the Liberal Party pulled out of the government in February 2000, the Gonzalez Macchi government achieved a consensus among the parties on many controversial issues, including economic reform. Liberal Julio Cesar Franco won the August 2000 election to fill the vacant vice presidential position. In August 2001, the lower house of Congress considered but did not pass a motion to impeach Gonzalez Macchi for alleged corruption and inefficient governance.

Today, Paraguay is a constitutional republic with three branches of government. The President is the Head of Government and Head of State; he cannot succeed himself. Colorado Party Senator Luis Gonzalez Macchi assumed the presidency in March 1999; in August 2000, voters elected Julio Cesar Franco of the Liberal Party to be Vice President. The bicameral Congress is made up of a 45-member Senate and an 80-member Chamber of Deputies. The Colorado Party, the dominant political party, holds a small majority in both houses of Congress; however, factional differences within the Party result in shifting alliances depending on the issue. The Constitution provides for an independent judiciary; although the Supreme Court has continued to undertake judicial reforms, the courts remain inefficient and subject to corruption and political pressure.
The government tolerated only a narrow range of opposition to its policies and moved quickly and forcefully to put down any challenges that went beyond implicit but well-recognized limits, that threatened to be effective, or that were raised by groups not enjoying official recognition. The government pointed proudly to the stability that Stroessner's rule brought to Paraguay, which had been riven by years of political disruption. Noting that Paraguay escaped the instability, political violence, and upheaval that had troubled the rest of Latin America, government supporters dismissed charges by human rights groups that such stability often came at the cost of individual civil rights and political liberty.

The government relied on several pieces of security legislation to prosecute security and political offenses. Principal among these was the state-of-siege decree, provided for under Article 79 of the Constitution. With the exception of a very few short periods, a state of siege was in continuous effect from 1954 until April 1987. After 1970 the state of siege was technically restricted to Asunción. The restriction was virtually meaningless, however, because the judiciary ruled that authorities could bring to the capital those persons accused of security offenses elsewhere and charge them under the state-of-siege provisions. Under the law, the government could declare a state of siege lasting up to three months in the event of international war, foreign invasion, domestic disturbance, or the threat of any of these. Extensions had to be approved by the legislature, which routinely did so. Under the state of siege, public meetings and demonstrations could be prohibited. Persons could be arrested and detained indefinitely without charge.

The lapse of the state of siege in 1987 had little effect on the government's ability to contain political opposition as of late 1988. Other security legislation could be used to cover the same range of offenses. The most important of these provisions was Law 209, "In Defense of Public Peace and Liberty of Person." This law, passed in 1970, lists crimes against public peace and liberty, including the public incitement of violence or civil disobedience. It specifies the limits on freedom of expression set forth in Article 71 of the Constitution, which forbids the preaching of hatred between Paraguayans or of class struggle. Law 209 raises penalties set forth in earlier security legislation for involvement in groups that seek to replace the existing government with a communist regime or to use violence to overthrow the government. It makes it a criminal offense to be a member of such groups and to support them in any form, including subscribing to publications; attending meetings or rallies; and printing, storing, distributing, or selling print or video material that supports such groups. Law 209 also sets penalties for slandering public officials.

During the early 1980s, Law 209 was used to prosecute several individuals the government accused of taking part in conspiracies directed from abroad by Marxist-Leninist groups. Among these were a group of peasants who hijacked a bus to the capital in 1980 to protest being evicted from their land. In 1983 members of an independent research institute that published data on the economy and other matters were arrested after a journal published by the institute carried articles calling for the formation of a student-worker-peasant alliance.
Human rights groups, critical of trial procedures and the evidence in the two cases, questioned the existence of a foreign-directed conspiracy, asserting instead that the cases represented carefully selected attempts to discourage organized opposition. During the mid-1980s, the government used Law 209 principally to charge political opponents with fomenting hatred, defaming government officials, or committing sedition. The lapse of the state of siege also had little effect on the government's ability to handle security and political offenses because authorities routinely detained political activists and others without citing any legal justification at all. In these cases, suspects were held for periods of hours, days, or weeks, then released without ever being charged. In practice, persons subjected to arbitrary arrest and detention had no recourse to legal protection, and constitutional requirements for a judicial determination of the legality of detention and for charges to be filed within forty-eight hours were routinely ignored.

According to the United States Department of State, 253 political opposition activists were detained at least overnight in 1987. Of these, thirty-nine were held for more than seven days, and formal charges were filed in only sixteen of the cases. Many of those detained were taken to police stations, armed forces installations, or to the Department of Investigations at police headquarters in Asunción. There have been numerous well-documented allegations of beating in the arrest process and of torture during detention. The government has asserted that torture was not a common practice and that any abuses were investigated and their perpetrators prosecuted under the law. National newspapers have carried rare accounts of a few such investigations and trials, but continued allegations of torture suggested that the problem had not been brought under control as of the late 1980s.

The government also limited the expression of opposition views by denying permits for assemblies and refusing or cancelling printing or broadcasting licenses. In early 1987, an independent radio station suspended its broadcasts after the government refused to do anything about a months-long illegal jamming of its authorized frequencies. Meetings by the political opposition, students, and labor groups required prior authorization by police, who did not hesitate to block and repress assemblies that did not have prior approval, sometimes beating leaders and participants. The government has also restricted the travel of a few persons involved in the political opposition or in labor groups. Some foreign journalists and certain Paraguayans identified with the opposition were expelled. During 1987 two persons then in exile were allowed to return to Paraguay. The government claimed that a third, a poet, was also free to return.

The police and the military were the main means of enforcement of the regime. During the mid-1980s, however, armed vigilantes associated with the Colorado Party broke up opposition meetings and rallies, sometimes while police looked on. Such groups had been active since the 1947 civil war but had been used relatively infrequently after the 1960s. The principal group was a loosely organized militia known as the Urban Guards (Guardias Urbanas), whose members were linked with local party branches and worked closely with the police. A second group was led by the head of the Department of Investigations.
The government did not appear concerned by the reemergence of such groups and may in fact have encouraged them. In September 1987, for example, vigilantes broke up a panel discussion of opposition and labor members that was being held in a Roman Catholic Church. The vigilantes used chains and clubs to attack panel members and a parish priest who tried to intervene. The minister of justice, who himself was the leader of an anticommunist association that maintained its own security group, later publicly commended the vigilantes.

Numerous sources of government opposition were targets of security forces during the 1980s. Activity by these groups as well as the violent suppression of such activity disturbed public order on numerous occasions. Foremost among those groups officially viewed as a security threat was the Paraguayan Communist Party (Partido Comunista Paraguayo--PCP). Since its inception, the Stroessner government has justified the continuance of strict internal security policies, particularly the prolongation of the state of siege, as necessary measures to prevent a communist takeover. Yet the PCP's efforts to establish and maintain a power base in Paraguay had been ineffective throughout the Stroessner regime. This anticommunist fervor did not abate during the 1980s, however, even though the PCP was completely isolated from the national population. As of mid-1988, the party was estimated to have some 4,000 members, most operating underground. Its leaders were either in exile or under arrest. The party claimed to have organized new cells during the 1980s, but their existence could not be confirmed. Excluded from the principal political opposition coalition, the PCP also claimed to have set up its own political front and labor front in exile. Both front organizations appeared, however, to exist only on paper, if at all.

The party was founded in 1928 and has been illegal since then, except for a short period in 1936 and again in the 1946-47 period before the PCP became involved in the 1947 civil war. The party's efforts to organize a general strike in 1959 were ineffective, as was its involvement in guerrilla attacks in the early 1960s. Both efforts drew harsh government reprisals. The party was believed to have two factions. The original one, the PCP, was loyal to the Soviet Union. A breakaway faction, the Paraguayan Communist Party--Marxist-Leninist (Partido Comunista Paraguayo--Marxista-Leninista), was formed in 1967; it was avowedly Maoist. In 1982 the government arrested several persons that it identified as being members of the pro-China wing of the PCP. Evidence in that case has been criticized by international human rights groups, however, and it was unclear as of late 1988 whether either wing of the PCP was active in the country at all. The party held its last conference in 1971.

Another illegal opposition group was the Political-Military Organization (Organización Político-Militar--OPM). The group was founded in 1974 by leftist Catholic students and drew some support from radical members of the clergy and Catholic peasant organizations. The government made extensive arrests of OPM members and sympathizers in 1976, after which operations of the movement declined. It was unclear whether the OPM still existed as of mid-1988, but the government continued to warn of its threat, claiming that it was under communist control.
The activities of illegal opposition parties--including the Colorado Popular Movement (Movimiento Popular Colorado--Mopoco), the Authentic Radical Liberal Party (Partido Liberal Radical Auténtico--PLRA), and the Christian Democratic Party (Partido Demócrata Cristiano--PDC)--also drew official attention. Members of illegal parties were subject to regular police surveillance. They have alleged that their telephones were illegally tapped and their correspondence intercepted. The unrecognized opposition parties were routinely denied permits for meetings, so that any meetings they held usually were broken up, often violently, by police, who cited them for illegally holding unauthorized assemblies. In 1979 these three parties joined with a legally recognized opposition party, the Febrerista Revolutionary Party (Partido Revolucionario Febrerista--PRF), in a coalition known as the National Accord (Acuerdo Nacional). Leaders of this coalition, whether members of legal or illegal parties, were also subject to detentions and deportations.

Independent labor unions were another object of surveillance by government security forces in the 1980s. Most labor unions belonged to the Paraguayan Confederation of Workers (Confederación Paraguaya de Trabajadores), which was allied with the government and carefully controlled by it. Although workers not sponsored by the official confederation were not authorized to organize freely, some independent labor unions had been given official recognition. Their activities, however, were closely monitored by the police, who sent representatives to all meetings. Despite tight controls--Paraguayan law made it virtually impossible to call a legal strike--a number of labor-related public disturbances took place in the mid-1980s. In April 1986, for instance, a peaceful protest by a medical workers' association in Asunción was forcibly broken up by police. Vigilante groups associated with the Colorado Party were also active in intimidating and assaulting the doctors, nurses, and technicians involved, as well as university students who joined in subsequent demonstrations supporting the medical workers. Hundreds of demonstrators organized by an independent workers' movement were clubbed and beaten in the capital in May 1986. Continued demonstrations in support of the jailed demonstrators and medical workers also drew police action.

In 1985 student demonstrations disturbed public order in the capital for the first time in twenty-five years. An estimated 2,000 students clashed with police in April of that year. After a student was shot to death in the clash, more demonstrations followed, and part of the National University was closed for several days. Since that time, students have been prominent in demonstrations organized by several other groups.

Land tenure issues were also apparent in outbreaks of public violence. Several incidents involved arrests by military and police personnel of militant landless peasants who were squatting on private or public land. In 1986 three squatter incidents were publicized in the local press; after military involvement in the shooting deaths of two peasants was revealed, the military made efforts to leave action in similar cases to the police. Local community leaders chosen to represent peasants in negotiations with the government over land tenure issues have also been subject to harassment by local police and judicial officials. Reports have appeared in both the national and international press about abuses of the rights of the nation's small, unassimilated Indian population.
Most frequently, abuses were alleged to occur in land disputes. The abuses appeared to result from the relative powerlessness of the Indian population vis-à-vis local landowners and the remoteness of tribal areas.

The government controlled most print media, both television channels, and most radio stations and tolerated only limited criticism from the press. Major media usually avoided criticizing the president, his family, the military, and key civilian leaders. Topics related to official corruption and national security were also generally avoided, and coverage of the political opposition was strictly limited. Violations of these rules were eventually--sometimes immediately--answered with force by the government.

During the mid-1980s, the Roman Catholic Church emerged as a leader of antigovernment forces. The church was openly opposed to the Stroessner regime during the 1960s and early 1970s, until the government cracked down, sending troops into the private Catholic University on more than one occasion and eventually leaving it in shambles. The harsh government response was followed by several years of relative quiet from the church. During the mid-1980s, church officials offered to serve as a bridge for the reconciliation of the government and the opposition but were turned down by the government. Roman Catholic bishops also began to take a larger role in pressing for a transition to democracy and investigation of human rights abuses. The wave of antigovernment protests in 1986 and the government's forcible response, however, appeared to have inspired the church to take a more overt political stance. In May 1986, the archbishop of Asunción announced a series of protests that culminated in the ringing of church bells throughout the capital. Some 800 priests and members of religious orders, joined by members of the opposition parties and other people, led a march of silence in the capital in October 1987. The government permitted the crowd--estimated at 40,000--to proceed peacefully. Provincial clergy, long active among the rural poor, also have been involved in land tenure disputes and in setting up peasant cooperative enterprises. Activities in both areas have been met with displeasure by local landowners and have resulted in clashes with the military and with local police. Following the government's closure of the newspaper ABC Color in 1984, the Roman Catholic Church's own newspaper, Sendero, became an important source of information on opposition activities.

The country has a population of approximately 5.6 million and a market economy with a large state presence and a large informal sector. The formal economy has been in a recession for the past 5 years. In 2000 economic growth declined by 0.4 percent in real terms. According to preliminary figures for 2000, gross domestic product (GDP) was $7.7 billion (35.4 trillion guaranies). The GDP per capita ($1,506) has fallen steadily and is lower in real terms than it was 10 years ago. An estimated 32 percent of the population is employed in agriculture, which provides 30 percent of the GDP. Hydroelectric power, agricultural products, and cattle were the most important export items. The informal economy, estimated at 50 percent of the value of the formal sector, has shrunk considerably in the last few years and suffered a severe blow with the implementation of stricter border controls by the Brazilian Government on the important crossroads of Ciudad del Este. Wealth continues to be concentrated in a small upper class, with both urban and rural areas supporting a large subsistence sector.
Wealth continues to be concentrated in a small upper class, with both urban and rural areas supporting a large subsistence sector.

Social life in Paraguay had always been closely tied to religion, but politically the Roman Catholic Church traditionally had remained neutral and generally refrained from commenting on politics. In the late 1960s, however, the church began to distance itself from the Stroessner regime because of concerns over human rights abuses and the absence of social reform. The auxiliary bishop of Asunción, Aníbal Maricevich Fleitas, provided an early focus for criticism of the regime. With the growth of the Catholic University and the influx of Jesuits from Europe, especially Spain, the church had a forum and a vehicle for reform as well as a dynamic team of spokespeople. Some priests moved into the poor neighborhoods, and they, along with others in the rural areas, began to encourage the lower classes to exercise the political rights guaranteed in the Constitution. These priests and the growing Catholic Youth movement organized workers and peasants, created Christian Agrarian Leagues and a Christian Workers' Center, and publicized the plight of the Indians. As part of the program of education and awareness, the church founded a weekly news magazine, Comunidad, and a radio station that broadcast throughout the country.

In April 1968, the regime reacted against this criticism and mobilization by authorizing the police to invade the university, beat students, arrest professors, and expel four Jesuits from the country. Although the Paraguayan Bishops' Conference (Conferencia Episcopal Paraguaya--CEP) met and issued a blistering statement, the regime was not deterred from continuing its crackdown on the church. The Stroessner government arrested church activists, shut down Comunidad, disbanded Catholic Youth rallies, outlawed the Catholic Relief Service--the church agency that distributed assistance from the United States--and refused to accept Maricevich as successor when Archbishop Aníbal Mena Porta resigned in December 1969. The following January, the government and church reached an agreement on the selection of Ismael Rolón Silvero as archbishop of Asunción. This appointment did not end the conflict, however; university students continued to be imprisoned, Jesuits expelled, and attacks mounted against the Christian Agrarian Leagues, a Catholic preparatory school, and even the offices of the CEP. Rolón stated that he would not occupy the seat on the Council of State provided by the Constitution for the archbishop of Asunción until the regime restored basic liberties.

In the 1970s, the church, which was frequently under attack, attempted to strengthen itself from within. The church promoted the establishment of peasant cooperatives, sponsored a pastoral program among students in the Catholic University, and endorsed the creation of grassroots organizations known as Basic Christian Communities (Comunidades Eclesiásticas de Base--CEBs). By 1986 there were 400 CEBs comprising some 15,000 members. These organizational efforts, combined with dynamic regional efforts by the church symbolized in the Latin American Episcopal Conference (Conferencia Episcopal Latinoamericana--Celam) meeting in Puebla, Mexico, in 1979, resulted in a renewed commitment to social and political change.
Following the Puebla conference, the Paraguayan Roman Catholic Church formally committed itself to a "preferential option for the poor," and that year the CEP published a pastoral letter, "The Moral Cleansing of the Nation," that attacked growing economic inequalities and the decline of moral standards in public life. In 1981 the CEP released a detailed plan for social action. Two years later, the bishops issued a pastoral letter denouncing increasing evictions of peasants. By the early 1980s, the church had emerged as the most important opponent of the Stroessner regime. The CEP's weekly newspaper, Sendero, contained not only religious information but also political analysis and accounts of human rights abuses. The church's Radio Caritas was the only independent radio station. Church buildings and equipment were made available to government opponents. In addition, the bishops joined with leaders of the Lutheran Church and Disciples of Christ Church to establish the Committee of the Churches. This committee became the most important group reporting on human rights abuses, and it also provided legal services to those who had suffered such abuse.

Looking ahead to the post-Stroessner political situation and concerned to bring about a peaceful democratic transition, the CEP began in 1983 to promote the idea of a national dialogue to include the Colorado Party, business, labor, and the opposition parties. This concept was endorsed by the National Accord, which demanded constitutional reforms designed to create an open, democratic, pluralist, and participatory society. The Colorado Party rejected the calls for dialogue, however, on the grounds that such action was already taking place in the formal structures of government at the national and local levels.

In the late 1980s, the church was better able to respond in a united manner to criticism and repression by the regime than had been the case in the late 1960s and early 1970s. Five days after the suspension of the state of siege in Asunción in 1987, police broke up a Holy Week procession of seminarians who were dramatizing the predicament of landless peasants. Rolón denounced this police action. In October 1987, the clergy and religious groups of Asunción issued a statement that condemned the preaching of hatred by the Colorado Party's radio program "La Voz del Coloradismo," demanded the dismantling of assault squads made up of Colorado civilians, and called for respect for civil rights and national reconciliation. Later that month, the church organized a silent march to protest government policies. The march, which attracted between 15,000 and 30,000 participants, was the largest public protest ever staged against the regime and demonstrated the church's impressive mobilization capabilities.

Critical statements by the church increased with the approach of the 1988 general elections and with the government's continued refusal to participate in the national dialogue. In January 1988, the CEP issued a statement on the current situation, calling attention to the government's use of corruption, violence, and repression of autonomous social organizations. The bishops warned of increasing polarization and violence and indicated that blank voting in the upcoming elections was a legitimate political option, a position frequently denounced by Stroessner and the Colorado Party.
The archbishopric of Asunción followed up in February by issuing a document rejecting the government's accusations of church involvement in politics and support for opposition parties. Immediately after the elections, Rolón granted an interview to the Argentine newspaper Clarín, in which he blamed the tense relations between church and regime on the government's use of violence. He criticized the government for its disregard of the Constitution, harassment of political opponents, and refusal to participate in the national dialogue, and he charged that the elections were farcical.

In the confrontational atmosphere after the elections, the visit by Pope John Paul II to Paraguay in May 1988 was extremely important. The government rejected the church's plans to include Concepción on the papal itinerary, claiming that the airport runway there was too short to accommodate the pope's plane. Maricevich, who by then headed the diocese of Concepción, charged, however, that the city had been discriminated against throughout the Stroessner era as punishment for its role in opposing General Higinio Morínigo in the 1947 civil war. The pope's visit was almost cancelled at the last moment when the government tried to prevent John Paul from meeting with 3,000 people--including representatives from unrecognized political parties, labor, and community groups--dubbed the "builders of society." After the government reluctantly agreed to allow the meeting, the pope arrived in Asunción and was received by Stroessner. Whereas Stroessner spoke of the accomplishments of his government and the recent free elections, the pope called for wider participation in politics by all sectors and urged respect for human rights. Throughout his three-day trip, John Paul stressed human rights, democracy, and the right and duty of the church to be involved in society. His visit was seen by observers as supporting the Paraguayan Roman Catholic Church's promotion of a political transition, development of grassroots organizations, and defense of human rights.

During the colonial period, criminal justice in what is now Paraguay was administered in the courts according to provisions in several codes developed by the Spanish. Appeals in specific cases were referred to higher tribunals in the mother country. Many of those laws continued to be applied during the period following independence, except when Paraguayan rulers arbitrarily applied their own self-made law. In 1883 the nation adopted the Argentine penal code. This was replaced by a national code drawn up by Paraguayan jurists in 1890. This code was rewritten in 1910, and the new code was proclaimed in 1914. The 1914 Penal Code, as amended, was still in force as of 1988.

The code is set forth in two books, each of which has two sections. The first section of Book I gives general provisions defining the application of the law and criminal liability, addressing such issues as mitigating circumstances, insanity, and multiple crimes. According to the code, active-duty members of the armed forces come under the jurisdiction of the Military Penal Code, as do perpetrators of purely military offenses. Section 2 of Book I establishes punishments and provides for the cancellation of legal actions and the exercise of prosecution functions. The death sentence was abolished in 1967, and the punishments provided for are imprisonment, jailing, exile, suspension, fines, and disqualification.
Jailing, which like imprisonment can entail involuntary labor, is served by persons convicted of less serious crimes in special institutions distinct from prisons, which house those convicted of serious crimes drawing long-term sentences. Disqualification can entail loss of public office or loss of public rights, including suffrage and pension benefits. The first half of Book II of the code comprises a sixteen-chapter section that groups offenses into broad categories, defines specific types of violations, and sets penalties for each type. The major categories include crimes against the state, against public order and public authority, and against persons and property. The second half of Book II sets forth misdemeanor offenses and their punishments.

INCIDENCE OF CRIME

As a matter of policy, the government did not publish statistics on crime in the 1980s, so it was impossible to determine the incidence of crime, the frequency of particular crimes, or the direction of overall crime rates. The nation had a relatively homogeneous population, however, and did not appear to be troubled by the high rates of ordinary crimes, such as murder, assault, and theft, that have been associated with ethnic tensions or class divisions found in other areas of Latin America. However, three special types of crime--corruption, smuggling, and drug trafficking--attracted media attention both locally and internationally in the 1980s.

Official corruption was a very sensitive issue throughout the Stroessner regime and remained so during the late 1980s. National standards of public conduct appeared to accommodate a certain amount of personal intervention on behalf of family members, friends, and business associates. It was widely agreed, however, both within the nation and outside it, that serious breaches of these standards by senior civilian and military officials were rarely investigated or prosecuted. Indeed, efforts by officials to generate wealth or to influence the outcome of legal or business decisions, either on their own behalf or on that of relatives, friends, or associates, were treated by the government as a perquisite of office and a reward for loyalty. Allegations of high-level corruption and graft in the local press were officially frowned upon, and displeasure was expressed overtly by confiscating publications and arresting journalists and publishers. Nonetheless, some investigations and arrests of alleged perpetrators took place and were reported. One example was the arrest in late 1985 of twenty-nine senior bureaucrats and businessmen on charges of embezzling an estimated US$100 million from the Central Bank.

Involvement or connivance in smuggling appeared to be a significant element of official corruption. Most observers estimated that the volume of illegal foreign trade at the very least came close to matching that of legal commerce during the mid-1980s and possibly surpassed it. In 1987 the leader of a business association of commercial and industrial interests estimated that contraband accounted for two-thirds of Paraguay's foreign trade. The avoidance of import duties represented a serious loss of revenue to the government. The flood of cheaper goods also harmed local producers, who could not compete with the artificially low prices of smuggled goods. The illegal commerce also raised tensions with Brazil because it undercut Brazil's own economy. Smuggling has had a long history in Paraguay.
During the 1950s, most operators worked on a small scale, but by the 1960s it was apparent that several persons had made fortunes in the trade. During the 1970s, smugglers moved into exports as well as imports. The trade began by focusing on such luxury items as whiskey and cigarettes, but by the 1980s, smuggled goods included electronics, appliances, and even commodities such as wheat. Logs taken from eastern Paraguay and sold in Brazil were a major illegal export item. The growing disparity between official exchange rates and market exchange rates during the 1980s made the trade increasingly lucrative, because traders were able to buy goods outside the country at market rates and then sell them in Paraguay at a price that was below that of legally imported goods but still high enough to render a substantial profit. Movement of the heavy volume of illegal trade necessitated crossing river borders controlled by the navy and crossing land borders and road checkpoints patrolled by the army and the police. Entry by air entailed transport through airports controlled by the air force. Despite these controls, few smugglers were arrested as the trade in illegal goods burgeoned and illegal markets thrived openly in the capital and other cities, especially Puerto Presidente Stroessner, which borders Brazil. The apparent tolerance of smuggling and the fact that several senior military and civilian officials had unaccounted-for sources of wealth contributed to a widely held local belief that there was official involvement in the trade.

An especially serious outgrowth of smuggling was the expansion into drug trafficking during the early 1980s, when Paraguay emerged as a transit point in the international drug trade. The nation was well situated for the role. It was located near Bolivia and Peru, which were major Latin American sources of illegal drugs. Moreover, Paraguay's sparsely populated and remote border areas presented difficulties for police surveillance. The nation had been used as a transit point during the 1960s, but international and local efforts had shut down the trade by the early 1970s. A series of seizures of drugs and of chemicals used to refine them during the 1984-87 period suggested that the problem had resurfaced, however. The problem first reached public attention in 1984, when a large quantity of chemicals used to refine coca paste into cocaine was seized by authorities in Paraguay. In 1986 and 1987, officials in Panama and Belgium discovered large amounts of cocaine that had been shipped from Paraguay. Again in 1987, evidence of Paraguayan involvement in drug trafficking surfaced after a plane carrying a major shipment of cocaine crashed in Argentina, having taken off in Paraguay. The Stroessner government denied charges by United States government officials that Paraguayan military and civilian officials were involved in the trade and vowed to take a tough stand against any drug traffickers. According to a United States Department of State official, Paraguay was also a major producer of marijuana for export.

Paraguay has reported crime data to INTERPOL since 1995, which makes some analysis possible. With the exception of murder, the crime rate in Paraguay is low compared with industrialized countries. For purposes of comparison, data were drawn for the seven offenses used to compute the United States FBI's index of crime.
Index offenses include murder, forcible rape, robbery, aggravated assault, burglary, larceny, and motor vehicle theft. The combined total of these offenses constitutes the index used for trend calculation purposes. Paraguay is compared here with Japan (a country with a low crime rate) and the United States (a country with a high crime rate). According to the INTERPOL data, the murder rate in 2000 was 11.57 per 100,000 population for Paraguay, 1.10 for Japan, and 5.51 for the United States. For rape, the rate in 2000 was 4.77 for Paraguay, compared with 1.78 for Japan and 32.05 for the United States. For robbery, the rate in 2000 was 2.76 for Paraguay, 4.08 for Japan, and 144.92 for the United States. For aggravated assault, the rate in 2000 was 55.10 for Paraguay, 23.78 for Japan, and 323.62 for the United States. For burglary, the rate in 2000 was 22.19 for Paraguay, 233.60 for Japan, and 728.42 for the United States. The rate of larceny for 2000 was 8.16 for Paraguay, 1,401.26 for Japan, and 2,475.27 for the United States. The rate of motor vehicle theft in 2000 was 48.13 for Paraguay, compared with 44.28 for Japan and 414.17 for the United States. The rate for all index offenses combined was 152.68 for Paraguay, compared with 1,709.88 for Japan and 4,123.97 for the United States.

TRENDS IN CRIME

Between 1995 and 2000, according to INTERPOL data, the rate of murder decreased from 16.08 to 11.57 per 100,000 population, a decrease of 28%. The rate of rape increased from 4.18 to 4.77, an increase of 14.1%. The rate of robbery decreased from 15.04 to 2.76, a decrease of 81.6%. The rate of aggravated assault decreased from 92.18 to 55.10, a decrease of 40.2%. The rate of burglary increased from 20.43 to 22.19, an increase of 8.6%. The rate of larceny decreased from 91.49 to 8.16, a decrease of 91.1%. The rate of motor vehicle theft decreased from 48.33 to 48.13, a decrease of 0.4%. The rate of total index offenses decreased from 287.73 to 152.68, a decrease of 46.9%.

The legal system of Paraguay is based on Argentine codes, Roman law, and French codes. In practice the criminal justice system was composed of two parallel structures. The first comprised the formal legal system set forth in the Constitution and in numerous statutes that provided for an independent judiciary and that specified legal procedures. The second system was one in which political and economic clout determined the outcome of conflict resolution. When the two structures clashed, the second was generally perceived to prevail. The widespread perception that the criminal justice system was susceptible to economic and political manipulation meant that few people were willing to confront police, military, or political authority. Public order was well established in the nation, and the government committed sufficient resources to law enforcement to maintain domestic order throughout the country. Urban and rural areas were generally safe, as was travel throughout the country. As a rule, citizens were able to conduct routine day-to-day affairs peaceably and without government interference. A major exception, however, was activity associated with opposition to the regime, to the Colorado Party, or to the interests of powerful and influential national and local figures. In these circumstances, individuals were likely to attract the negative attention of the police or other security personnel.

The police had a long history in Paraguay. Francia maintained the nation's first police establishment, using it to enforce his complete control of the state.
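The index and trend figures in the crime-rate paragraphs above follow directly from the quoted rates: the combined index is the simple sum of the seven offense rates, and each trend is a plain percent change. The following minimal sketch (in Python; only the rates themselves come from the source) reproduces the calculations:

```python
# Paraguay's offense rates per 100,000 population, as quoted above.
rates_1995 = {
    "murder": 16.08, "rape": 4.18, "robbery": 15.04,
    "aggravated assault": 92.18, "burglary": 20.43,
    "larceny": 91.49, "motor vehicle theft": 48.33,
}
rates_2000 = {
    "murder": 11.57, "rape": 4.77, "robbery": 2.76,
    "aggravated assault": 55.10, "burglary": 22.19,
    "larceny": 8.16, "motor vehicle theft": 48.13,
}

def pct_change(old, new):
    """Percent change from old to new (negative means a decrease)."""
    return (new - old) / old * 100.0

# Per-offense trends, e.g. murder: -28.0%, robbery: -81.6%.
for offense in rates_1995:
    change = pct_change(rates_1995[offense], rates_2000[offense])
    print(f"{offense}: {change:+.1f}%")

# The combined index is the sum of the seven offense rates.
index_1995 = sum(rates_1995.values())  # 287.73
index_2000 = sum(rates_2000.values())  # 152.68
print(f"total index: {pct_change(index_1995, index_2000):+.1f}%")  # -46.9%
```

Running this reproduces the figures in the text: a 1995 index of 287.73, a 2000 index of 152.68, and a 46.9 percent decline.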
Under him, the police maintained a wide-reaching spy network that moved ruthlessly to suppress dissent and generated an atmosphere of fear. The police have remained a powerful and politicized institution ever since. Until the mid-1950s, the police often served as a counterweight to the armed forces, but after police officials were implicated in an abortive coup against Stroessner in late 1955, the force was purged, and police paramilitary units were sharply cut back. Since then, the police chief has almost always been a serving or retired army officer. Army officers have also held many key positions in the police hierarchy.

The Paraguayan police force was a centralized organization under the administration of the minister of interior. The force comprised two main elements, one for the capital and another for the rest of the nation. A separate highway police patrolled the nation's roads and was administered by the minister of public works and communication. In 1988 police strength was estimated at 8,500 personnel; about 4,500 were assigned to the capital and the rest to the nation's nineteen departments. The ratio of police to the rest of the population was one of the world's highest. Most rank-and-file police personnel were two-year conscripts who generally served outside their home area.

The capital police force was headed by a chief of police. Police personnel were assigned to headquarters or to one of twenty-three borough precincts. Police headquarters had three departments. The regular police, who dealt with ordinary crime, as well as traffic-control, mounted, and motorized elements, came under the administration of the Department of Public Order. The Department of Investigations, an internal security organ, dealt with political and security offenses. The Department of Training and Operations handled police administration and planning and ran police training establishments. Several directorates at police headquarters specialized in particular areas, among them surveillance and offenses, identification, alien registration, and politics. A separate directorate specializing in political intelligence--formerly the sole province of the army staff's intelligence section--was established in mid-1987. Police personnel also ran the capital's fire department. A special unit of the capital police was the Security Guard, a 400-strong unit called up in cases of emergency and used in ceremonies and parades. About one-half of the unit, which had two rifle companies, was manned by conscripts.

Police in the interior were under the control of the government delegate heading the department in which they operated. For police functions, the delegate was in turn responsible to the minister of interior. Each delegate usually had a police chief who handled routine matters, an investigative section to process the identity cards carried by all citizens, and an additional person to supervise police arrests with a view to bringing charges. Departments were divided into districts in which a justice of the peace had several police conscripts assigned to him to carry out guard and patrol duties and other routine police functions.

All police training took place in Asunción. Basic training was given at the Police College, which offered a five-year course in modern police techniques. The Higher Police College offered specialized training. The police also operated a school for NCOs and an in-service training battalion.
The military generally no longer plays an overt role in politics; however, members of two army units and a group of National Police officers participated in an attempted coup in May 2000. The National Police force has responsibility for maintaining internal security and public order, and it reports to the Ministry of the Interior. The civilian authorities generally maintain effective control of the security forces. Members of the security forces committed some human rights abuses. The police and military were responsible for over a dozen extrajudicial killings.

The Constitution prohibits torture as well as cruel, inhuman, or degrading punishment or treatment; however, torture (primarily beatings) and brutal and degrading treatment of convicted prisoners and other detainees continued. The Paraguay Human Rights Coordinating Board (CODEHUPY)--a group of 32 nongovernmental organizations (NGOs), civic organizations, and trade unions--reported several cases of police torture and other abusive treatment of persons, including women and children, designed to extract confessions, punish escape attempts, or intimidate detainees. The Attorney General's office and the Committee of Churches compiled numerous examples of police abuse. In May 2000, during the state of exception imposed after the aborted coup attempt, several of those arrested reported being tortured during their detention. Some of these persons reported that former Interior Minister Walter Bower witnessed and encouraged the beatings of suspects in three unrelated cases. In June prosecutors filed charges against Basilio Pavon and Osvaldo Vera, two police officers, alleging that on Bower's orders they tortured Alfredo Caceres and Jorge Lopez in the aftermath of the coup attempt. Press reports also connected Bower to the torture of eight peasants in Concepcion in March 2000; police reportedly beat them in Bower's presence after they were arrested for illegally cutting down trees. In October 2000, Bower was removed as Minister of the Interior. In April the Chamber of Deputies revoked Bower's immunity as a member of Congress, and in August prosecutors completed their investigation of these incidents and charged Bower with torture and other crimes. In December Saul Lenardo Franco filed a complaint alleging that Bower and three police officers had tortured him following the failed coup attempt. Legal action against Bower was pending at year's end 2001. Police used force to disperse protesters on several occasions, sometimes seriously injuring civilians.

The Constitution provides that the police may not enter private homes except to prevent a crime in progress or when the police possess a judicial warrant; however, at times the Government infringed on citizens' privacy rights. While the Government and its security forces generally did not interfere in the private lives of citizens, human rights activists claimed that local officials and police officers abused their authority by entering homes or businesses without warrants and harassing private citizens. There were allegations that the Government occasionally spied on individuals and monitored communications for political and security reasons.

Arbitrary arrest and detention are persistent problems. The Constitution prohibits detention without an arrest warrant signed by a judge and stipulates that any person arrested must appear before a judge within 24 hours to make a statement.
The police may arrest persons without a warrant if they catch them in the act of committing a crime, but they must notify a prosecutor within 6 hours. In practice the authorities do not always comply with these provisions. In August the armed forces announced a campaign of arbitrarily stopping young men in the streets to check whether they had complied with their military service obligations; however, within a week, the armed forces cancelled the campaign after criticism from civic and human rights groups and inquiries from Congress and local governments.

Pretrial detention remains a serious problem; an estimated 75 percent of persons in prison were held pending trial, many for months or years after their arrest. While the law encourages speedy trials, the Constitution permits detention without trial until the accused completes the minimum sentence for the alleged crime, which often occurs in practice. Judges have the discretion to permit "substitute measures," such as house arrest, in place of bail for most crimes. Judges frequently set relatively high bail, and many accused persons are unable to post bond. The Supreme Court and many criminal court judges also make quarterly visits to the prisons to identify and release improperly detained individuals.

The authorities arrested over 45 persons in connection with the 1999 assassination of Vice President Argana and the killing of student protesters. Many of those arrested were well-known political figures, including legislators allied with the former Government. Little evidence was presented to support the charges against most of them, and most of the accused were held without bail, leading some observers to question whether due process had been observed. According to the Attorney General's office, at least 10 of those detained remained in jail awaiting trial at year's end 2001, and approximately 5 prominent suspects, who had been remanded to house arrest or other alternative detention, had not yet been cleared of the charges against them; therefore, they remained in an uncertain legal status. The Government restricts the movement of persons suspected of plotting the coup. The Constitution expressly prohibits exile, and the Government does not use it.

Article 193 of the Constitution provides for a Supreme Court of Justice of no fewer than five members and for other tribunals and justices to be established by law. The Supreme Court supervises all other components of the judicial branch, which include appellate courts with three members each in the areas of criminal, civil, administrative, and commercial jurisdiction; courts of first instance in these same four areas; justices of the peace dealing with more minor issues; and military courts. The Supreme Court hears disputes concerning jurisdiction and competence before it and has the power to declare unconstitutional any law or presidential act. As of 1988, however, the court had never declared invalid any of Stroessner's acts. Supreme Court justices serve five-year terms of office concurrent with those of the president and the National Congress and may be reappointed. They must be native-born Paraguayans, at least thirty-five years of age, possess a university degree of Doctor of Laws, have recognized experience in legal matters, and have an excellent reputation for integrity. Sources of procedural criminal law are the Constitution, special laws, the Penal Code, and the Code of Penal Procedure. These sources govern pleading and practices in all courts as well as admission to the practice of law.
The entire court system was under the control of the national government. In addition to the judiciary, which was a separate branch of government, the Ministry of Justice and Labor was also involved in the administration of justice. It was responsible for judicial officers attached to the attorney general's office. These officials were assigned to the various courts and represented the government in trial proceedings. The ministry was also responsible for the judiciary's budget and the operation of the penal system.

At the apex of the criminal court system was the Supreme Court of Justice, which was made up of five justices appointed by the president. Below the Supreme Court of Justice, which was responsible for the administration of the judiciary, was the criminal court of appeal. Both courts were located in Asunción. Courts of original jurisdiction were divided between the courts of first instance, which heard serious cases, and justice of the peace courts, whose jurisdiction was limited to minor offenses. There were six courts of first instance in the country during the 1980s. There were far more justice of the peace courts, but the exact number was not publicly available.

Although theoretically a coequal branch of government, the judiciary, along with the legislature, has traditionally been subordinate to the executive. Members of the judiciary were appointed by the president and served a five-year term coinciding with his. In practice, the courts rarely challenged government actions. Under the law, the Supreme Court of Justice had jurisdiction over executive actions, but it continued not to accept jurisdiction in political cases as of mid-1988. The independence of the judiciary was also made problematic by the executive's complete control over the judiciary's budget. Moreover, during the Stroessner regime, membership in the Colorado Party was a virtual requirement for appointment to the judiciary; in 1985 all but two judges were members. Many justices of the peace, in particular, were appointed by virtue of their influence in their local communities. During the mid-1980s, the government made an effort to improve the public image of judges, suspending a small number for corruption. It appeared, however, that more would be necessary to promote public confidence in judicial independence.

The Constitution theoretically guarantees every citizen the rights of due process, presumption of innocence, prohibition against self-incrimination, and speedy trial. It protects the accused from ex post facto enactments, unreasonable search and seizure, and cruel and unusual punishment. Habeas corpus protection is extended to all citizens. Criminal actions can be initiated by the offended party or by the police acting under the direction of a judicial official. According to the law, police had to secure warrants to arrest suspects or to conduct searches unless a crime was in progress in their presence. Police could detain suspects for only twenty-four hours without pressing charges. Within forty-eight hours, a justice of the peace had to be informed of the detention. Upon receiving the charges from the police and determining that there were grounds for them, the justice of the peace then took action according to the gravity of the offense charged. In the case of misdemeanors, the justice of the peace was empowered to try the suspect and to pass sentences of up to thirty days in jail or an equivalent fine.
In the case of felonies, a justice of the peace, although not possessing authority to try the case, performed several important functions. If upon hearing the charges the justice of the peace determined that there were grounds to suspect the individual charged, he informed the suspect of the charges against him or her, fixed a time within twenty-four hours for the suspect to present an unsworn statement, established a time for witnesses to make sworn statements, and determined a time for inspecting the scene of the crime. After investigation and the receipt of the suspect's unsworn statement, the justice of the peace could order the suspect to be held in preventive detention, if necessary for up to three days incommunicado. This period was renewable for additional three-day periods and was intended to prevent the suspect from communicating with co-conspirators still at large. Justices of the peace could also order impoundment of a suspect's goods, except those needed by his or her family. Finally, the justice of the peace prepared the case for trial in the criminal court of first instance. This preparation was done by assembling the evidence into a document known as the summary and sending it to the higher court along with supporting documents such as statements of witnesses. The investigative stage of criminal proceedings was limited by law to two months, subject to a formal petition for extension. Despite these important responsibilities, many justices of the peace were not qualified lawyers. Therefore, in several of the larger cities, a special official, known as a proceedings judge, took over the most difficult cases before sending the information to Asunción for trial. These judges were empowered to release suspects on bail--something a justice of the peace could not do.

Trials were conducted almost exclusively by the presentation of written documents to a judge, who then rendered a decision. As was true for most Latin American nations, Paraguay did not have trial by jury. Verdicts were automatically referred to the appellate court and in some cases could be appealed further to the Supreme Court of Justice. A portion of the trial was usually open to the public.

The safeguards set forth in the Constitution and in legal statutes often were not honored in practice. The police frequently ignored requirements for warrants for arrest and for search and seizure. Legal provisions governing speedy trial were ineffective, and delays were legendary. Most accused persons were released before trial proceedings were complete because they had already been detained for the length of time prescribed for their alleged offense. A 1983 United Nations study found that Paraguay had the highest rate of unsentenced prisoners in the Western Hemisphere. Moreover, defense lawyers, particularly in security and political cases, were subjected to police harassment and sometimes to arrest.

Today, the Constitution provides for an independent judiciary; however, judges often are pressured by politicians and other interested parties. There are credible reports of political pressure affecting judicial decisions; however, the judiciary is not allied with any one political group. The nine-member Supreme Court appoints lower court judges and magistrates, based upon recommendations by the Magistrates' Council. There are five types of appellate tribunals: civil and commercial, criminal, labor, administrative, and juvenile.
Minor courts and justices of the peace fall within four functional areas: civil and commercial, criminal, labor, and juvenile. The military has its own judicial system. The judicial system remains relatively inefficient and has insufficient resources.

The March 2000 Penal and Criminal Procedures Code provides the legal basis for the protection of fundamental human rights. The new code introduced expedited oral proceedings and requires prosecutors to bring charges against accused persons within 180 days. Defendants enjoy a presumption of innocence, and defendants and the prosecutor may present the written testimony of witnesses as well as other evidence. The judge alone determines guilt or innocence and decides punishment. A convicted defendant may appeal his or her sentence to an appeals court, and the Supreme Court has jurisdiction over constitutional questions. The new system has reduced the backlog of pending criminal cases: 95 percent of the cases active in 1999 had been resolved by March. The average length of a criminal proceeding has dropped by 75 percent, resulting in a reduction in the length of pretrial detention; however, the average time from arrest to trial is still approximately 240 days. The long trial period highlights the judiciary's struggle with insufficient resources.

The Constitution stipulates that all defendants have the right to an attorney, at public expense if necessary, but this right often is not respected in practice. Many destitute suspects receive little legal assistance, and few have access to an attorney sufficiently in advance of the trial to prepare a defense. For example, in Asuncion there are only 26 public defenders available to assist the indigent, and only 102 nationwide, although 25 new positions are planned. In practice public defenders lack the resources to perform their jobs adequately.

There were no reports of political prisoners. Of the more than 45 supporters of former General Lino Oviedo who were arrested after the 1999 killings of Vice President Argana and the student protesters, 10 remained in jail awaiting trial at year's end 2001. They assert that they are being detained because of their political opposition to President Gonzalez Macchi.

In 1988 the operation of prisons was under the General Directorate of Penal Institutions, controlled by the Ministry of Justice and Labor. According to Article 65 of the Constitution, penal institutions were required to be healthful and clean and to be dedicated to rehabilitating offenders. Economic constraints made conditions in prisons austere, however, and overcrowding was a serious problem. A report by an independent bar association in the early 1980s criticized the prison system for failing to provide treatment for convicts. The National Penitentiary in Asunción was the country's principal correctional institution. Observers believed that the total population of the institution averaged about 2,000, including political prisoners. Another prison for adult males was the Tacumbu Penitentiary, located in Villa Hayes, near Asunción. Women and juveniles were held in separate institutions. Females were incarcerated in the Women's Correctional Institute under the supervision of the Sisters of the Good Shepherd. The institution offered courses in domestic science. A correctional institute for minors was located in Emboscada, which was also near the capital. It stressed rehabilitating inmates and providing them with skills that would help them secure employment when their sentences were completed.
In addition to the penal institutions in the Central Department, each of the other departments maintained a prison or jail in its capital. Many smaller communities did not have adequate facilities even for temporary incarceration, however. A suspect receiving a sentence of more than one year usually was transferred to a national penitentiary.

Today, prison facilities are deficient and prison conditions are extremely poor. Overcrowding, unsanitary living conditions, and mistreatment are the most serious problems affecting all prisoners. Tacumbu prison, the largest in Asuncion, was built to hold 800 inmates but houses over 1,500. Other regional prisons generally hold about three times more inmates than originally planned. UNICEF reported that conditions were substandard in other facilities around the country, especially in the prison in Coronel Oviedo. In December a fire and riot at the Alto Parana Regional Penitentiary in Ciudad del Este left 24 inmates dead and over 200 injured. Security is a problem in the prison system; for example, there are approximately 120 guards for over 1,500 prisoners at Tacumbu prison. The Congressional Human Rights Commission has criticized the prisons for their poor nutritional standards. Prisons generally serve one meal a day, and prisoners seldom get vegetables, fruit, or a source of meat protein unless they have individual means to purchase them. Prisons have separate accommodations for well-to-do prisoners, which ensure that those with sufficient means receive far better treatment than other prisoners. Pretrial detainees are not held separately from convicted prisoners.

At the Asuncion women's prison, Buen Pastor, there have been several reported rapes of prisoners by their guards, although the laws governing prisons forbid male guards in women's prisons. Conditions in the women's prison are better than at Tacumbu, with less overcrowding. A small number of women are housed in predominantly male facilities, where they are segregated from the male population.

In April Amnesty International issued a report criticizing conditions in the Panchito Lopez juvenile detention facility in Asuncion, citing overcrowding and substandard conditions. As of July, the prison, designed for 160 youths, housed 248 inmates. According to Amnesty International, the cells were overcrowded, overheated, and filthy, with few toilet or washing facilities. Amnesty International said that in January, 193 of the 201 prisoners were in pretrial detention; only 8 inmates had been convicted and sentenced. Panchito Lopez housed juveniles from all over the country who were awaiting trial. In its report, Amnesty International noted that prison authorities allegedly retaliated against inmates who met with Amnesty International during its investigation, although the prison director denied this. UNICEF noted that inmates at Panchito Lopez were kept in isolation cells, a practice that is not consistent with international standards. Amnesty International reported in April that five youths (Jorge Herebia, Rafael Pereira, Oscar Acuna, Die Acosta, and Jimmy Orlando Dos Santos) detained in the Panchito Lopez Juvenile Center were tortured and mistreated. In a September report, Amnesty International stated that youths were kicked, beaten on the back with a hammer, and suspended upside down, had plastic bags put over their heads, and had their feet scalded. Some children also reported being denied food, drink, or access to toilets, sometimes for several days.
With only nine guards on duty at a time, inmates at Panchito Lopez frequently set fires and caused other disturbances. In February a fire injured nine detainees, and in July a riot-related fire destroyed the institution. During the July incident, guards shot and killed one inmate. Amnesty International expressed concern when 146 of the juvenile inmates were transferred to the maximum-security prison in Emboscada, Cordillera. While the juveniles are segregated from the adult population, the Emboscada facility, which was built as a military barracks around 1903, is extremely overcrowded. In September Richard Daniel Martinez, an 18-year-old inmate at the Emboscada maximum-security prison, was killed by another inmate; both youths had been transferred to the adult facility after the closure of the Panchito Lopez facility. During temporary detention at the adult Emboscada facility, other juveniles raped at least two minors. Other juveniles were transferred to prisons nationwide. The authorities planned to transfer all juvenile detainees to a new facility for juveniles in Itagua in September; however, at year's end 2001, the facility was not complete. The facility's capacity is not sufficient to house the existing population of juvenile prisoners, and the Justice and Labor Ministry was seeking additional space at year's end 2001. In December juvenile inmates at the Itagua youth detention center rioted and set fires. Official sources acknowledged that at least 27 inmates escaped; unofficial estimates were as high as 70.

The Government permits independent monitoring of prison conditions by human rights organizations. Amnesty International and UNICEF, along with government officials and diplomatic representatives, have been granted access to prisons on announced and unannounced visits.

The most pervasive violations of women's rights involve sexual and domestic abuse, which are underreported. Spousal abuse is common. Although the Penal Code criminalizes spousal abuse, it stipulates that the abuse must be habitual before being recognized as criminal, and then it is punishable only by a fine. Thousands of women are treated annually for injuries sustained in violent domestic altercations. Citing a government survey, CODEHUPY reported that from January to August, one woman was killed every 12 days by a family member or other acquaintance. Between January and August, the Secretariat of Women's Affairs registered 533 cases of violence against women, a 25 percent increase over the same period in 2000. According to these surveys, between January and August 2000, 63 percent of the cases of violence against women were rapes. According to women's rights activists, official complaints rarely are filed, or they are withdrawn soon after filing due to spousal reconciliation or family pressure. In addition, the courts allow for mediation of some family violence cases, which is not provided for by the law. There are no specialized police units to handle complaints involving rape.

The Secretariat of Women's Affairs chairs a national committee, made up of other government agencies and NGOs, that developed a national plan to prevent and punish violence against women. Under the plan, an office of care and orientation receives reports on violence against women and coordinates responses with the National Police, primary health care units, the Attorney General's office, and NGOs.
However, in practice these services are available only in Asuncion, and women living elsewhere in the country rarely benefit from them. The Secretariat also conducts training courses for the police, health care workers, prosecutors, and others. The Women's November 25th Collective, an NGO, operates a reception center where female victims of violence can receive legal, psychological, and educational assistance. No shelters for battered and abused women are available outside the capital, Asuncion. Most imprisoned women reportedly were detained for assault, including murder, committed in the aftermath of domestic violence.

The law prohibits the sexual exploitation of women, but the authorities do not enforce the prohibitions effectively. Prostitution by adults is not illegal, and exploitation of women, especially teenage prostitutes, remains a serious problem. Law enforcement officials periodically stage raids on houses of prostitution. There were reports of trafficking in women.

Abuse and neglect of children is a problem. A local NGO attributed a rise in the number of complaints of mistreatment of children during 2000 to increased awareness of child abuse and neglect. Sexual exploitation of children also is a problem. In a survey released during 2001, the NGO AMAR identified 619 child victims of sexual exploitation, the vast majority of whom were in Asuncion and Ciudad del Este. Approximately 33 percent of the victims were under the age of 16. Trafficking in girls for the purpose of sexual exploitation is a problem. There continued to be reports of the forced conscription of underage youths. Children 14 and older are treated as adults for purposes of arrest and sentencing.

TRAFFICKING IN PERSONS

There is no specific legislation to prevent trafficking in persons, although the Penal Code prohibits sexual trafficking. There were sporadic reports of trafficking of women and girls for sexual purposes. Press reports indicate that up to 200 women may have been trafficked to Argentina in 2000 and in the early part of the year for purposes of prostitution. The reports suggest that traffickers falsely promise the women and girls jobs as models or domestic servants. In September three Argentine citizens were sentenced to prison terms in Argentina for trafficking Paraguayan women to work as prostitutes in Buenos Aires.

Paraguay is a transit country for as many as 40 metric tons of primarily Bolivian cocaine that move each year en route to Argentina, Brazil, the U.S., Europe, and Africa. It is also a source country for high-quality marijuana. Significant money laundering occurs, but it is unclear what portion is drug related. In 1998 the Government of Paraguay (GOP) signed a new bilateral extradition treaty with the United States, which includes the extradition of nationals. Election-year politics, judicial and other public corruption, the ongoing reform of the legal system, and scarce resources have frustrated the GOP's stated intentions to address counternarcotics issues. As a result, the GOP has had only limited success against major trafficking organizations, in seizures of cocaine, and in action against money laundering. Transit of cocaine through Paraguay is facilitated by Paraguay's central location in South America, an extensive river network, lengthy and undeveloped land borders, numerous unpoliced airstrips, limited resources and authority for law enforcement operations, and persistent official corruption.
A significant level of money laundering occurs in Paraguay, although it is largely the result of contraband trade, tax evasion, and capital flight rather than narcotics trafficking. In 1997 the GOP was credited with promulgating a strong anti-money laundering law, which provided the legal tools necessary to act against this criminal activity. In 1998, however, the GOP failed to implement the law, neither funding nor staffing the offices created to control money laundering. There were no money laundering prosecutions in 1998.

Despite continuing reports from a variety of sources that public officials, including judges, military officers, police, and legislators, were suspected of engaging in, encouraging, or facilitating the illicit production or distribution of illegal narcotics, or the laundering of proceeds from illegal drug transactions, no serious investigative efforts were initiated by the GOP to confirm or contradict these suspicions. Judicial corruption was suspected in the July 1998 release of a major cocaine trafficker in the Asuncion suburb of Fernando de la Mora. The National Anti-Drug Secretariat (SENAD) filed a formal complaint against the judge with the Attorney General's office, which has yet to take action on the charges.

Cannabis is the only illicit crop cultivated in Paraguay, and it is harvested throughout the year. According to a rough estimate by the GOP, 2,500 hectares are under cultivation. SENAD believes marijuana production has increased this year. The Drug Enforcement Administration (DEA) reports that marijuana is primarily cultivated in the hilly regions of eastern Paraguay near the Brazilian border. The Department of Amambay is the leading marijuana cultivation area, particularly around Captain Bado and the hills near Pedro Juan Caballero. The Department of Canindeyu is the second largest marijuana-producing area, followed by isolated crops in Caaguazu and Alto Parana. According to DEA estimates, approximately 40 metric tons of Bolivian (and some Colombian) cocaine transit Paraguay each year en route through Argentina and Brazil to the United States, Europe, and Africa.

Internet research assisted by Liliana Renteria
Standards for Mathematical Practice The Common Core’s Standards for Mathematical Practice (SMPs) focus on what it means for students to be mathematically proficient. I have heard many people say that the SMPs are the heart and soul of the Common Core State Standards for Mathematics (CCSSM). These standards describe student behaviors, ensure an understanding of math, and focus on developing reasoning and building mathematical communication. Each standard has a unique focus, but each also interweaves with the others as we put them into practice. These practices empower students to use math and to think mathematically. Our job as teachers is to help students develop these practices to become effective mathematicians.
TEACHING SKILLS ASSIGNMENT

"The object of teaching a child is to enable him to get along without a teacher" (Finch, 2003). This quote by Elbert Hubbard surely stands true when it comes to teaching English to our students; it is a tool of empowerment. For this reason, we will consider two approaches to teaching English that flowed from the Direct Method. The basic principles of this method, and consequently of the two to be discussed, are that "[a]ll teaching is done in the target language, grammar is taught inductively, there is a focus on speaking and listening, and only useful 'everyday' language is taught" (British Council, 2013). In the first section we will consider the Berlitz and Callan methods. We will then contrast these approaches in order to finally formulate an alternative approach to teaching English.

Let us begin by looking at the Berlitz Method. Maximilian Berlitz pioneered the Direct Method in 1878 by letting the traditional learning method give way to an animated process of discovery (Berlitz, 2013b). This is achieved through three stages, namely Presentation, Practice, and Performance (PPP for short). The idea is to go from a very controlled, accuracy-orientated situation to a very relaxed, fluency-orientated environment. The PPP stages are outlined in the following subsections in accordance with Berlitz (2013a).

New vocabulary or grammar is introduced by using vocabulary or grammar that students are already familiar with, moving towards the new target language through techniques such as substitution, contrasting, and elimination. The definition of a specific concept is never explicitly given, but rather conveyed through a build-up of examples. Audio clips and texts are also used.

The next step is to practice the language point introduced, firstly through controlled question-answer drills to ensure that there is sufficient practice with the new target language. Especially with beginners, questions are used in a specific "ANOK" order (ibid.); for higher levels, more open-ended questions are used. The second part of this stage is to provide "safe" practice through activities such as short skits, chain stories, and students asking each other questions.

The final stage focuses purely on fluency: students get "free" practice and the teacher "steps back" to let students use the new language combined with prior knowledge (ibid.). This step could involve a whole range of activities, including discussions, debates, role-plays, summaries, presentations, interviews, and games.

It is important to note that classes follow a specific theme based on day-to-day activities (e.g. going to the airport, dining). Thus, all language structures and vocabulary taught are within a specific context. Furthermore, no translation of any kind is allowed, correct pronunciation is enforced, and positive reinforcement is expected from the teacher (e.g. teachers are approachable and give praise). Many other teaching styles have a similar approach. One of these is the Callan Method.

In 1959 Robin Callan invented his method after teaching in a Berlitz school in Italy (A2Z, 2013). According to ABC (2013), the Callan Method is a direct method created specifically to enhance one's "comprehension and speaking abilities in a pleasant but intensive atmosphere." Its question-answer lesson format ensures that students are actively involved in hearing and using the language to the maximum (ibid.).
The majority of each class consists of quick-fire dialogue between the teacher and students, so the lessons are taught at a fast pace (JET, 2013). There is no time for students to think in their native language and then translate their response; this keeps the classes engaging and the students focused on the subject matter. It goes hand-in-hand with instant correction from the teacher, and the lessons are taught through constant repetition and revision of grammatically correct sentences (ABC, 2013). The activities that are applied are discussed under the following subsections, as outlined by Konecná (2011).

Students are drilled on using a language construct they already know (a grammar tense or lexeme) while incorporating a new aspect of the target language (be it a new grammar point or vocabulary). All questions are closed-ended to ensure that students produce an exact response.

Reading is conducted only aloud by the students while the teacher corrects their pronunciation errors immediately; thus, the objective of reading exercises is to practise pronunciation.

For dictation, the teacher reads a text out loud and the students must write down what they hear. This is deemed the best yardstick for judging a student's ability and level of English, as it shows how much the student understands of what she / he hears (Callan Method Organisation, 2004).

The Direct Method does not allow any kind of translation; however, for the Callan Method adhering to this would mean spending undesirable extra time learning. This is why the acquisition of vocabulary is mainly done through translating words from the target language into the students' native language.

This method prides itself on the fact that its students can obtain the Cambridge English First Certificate in a quarter of the standard time. This could justify why its approach is fast-paced and condensed, leading to quick results for beginners; the method is, however, rather limited for higher levels. It is therefore important to compare the methods discussed to determine which aspects are most appropriate for an optimal learning environment. The comparison can be summarised as follows:

- Role of the teacher – Berlitz: medium; the teacher gives instructions, allows students to develop their own examples and promotes self-correction (following the PPP stages from strict control to letting students freely express themselves). Callan: high; the teacher directs commands, responses and all corrections.
- Correction – Berlitz: strict correction in presentation and practice activities, less so in “less controlled” activities, and almost no correction in performance activities. Callan: every error is corrected immediately.
- Syllabus – both follow a set syllabus, which could be dated and not applied to students' immediate environment or interests.
- Homework – little homework (a maximum of 20 minutes).
- Listening – Berlitz: makes use of pre-recorded dialogues for introducing the theme, vocabulary and grammar. Callan: the teacher dictates and students write their responses down.
- Reading – Berlitz: often uses (overly) long articles and texts for discussion and comprehension as well as vocabulary and grammar. Callan: students read material out loud to practise pronunciation and intonation.
- Writing – Berlitz: students are expected to jot down ideas for discussions and write down answers for listening activities. Callan: traditional dictation is used.
- Speaking – Berlitz: controlled speech through drills and a lot of free speech through fluency exercises, with group and pair work. Callan: short responses through drills.

My personal approach to teaching would involve a combination of both the Berlitz and Callan methods, as well as principles taken from the Oxbridge System.
Preference is given to speaking over all other language skills, and teacher talking time should be kept to a minimum. Oral participation should be promoted through classes that are structured around relaxing and fun activities, since students might feel intimidated by learning a new language and afraid of looking stupid in front of classmates. Competition will be incorporated into every activity, which will contribute to positive tension (similar to when one is playing a console game). Humour should be incorporated as much as possible, using examples that students would be able to identify with. Teachers are also responsible for creating a relaxing environment in which students can express and explore themselves, allowing them to dare to make mistakes. Therefore, a teacher must always put on a friendly face, convey sincerity to students, give appropriate constructive praise and thereby build rapport when implementing the syllabus in class.

Taking the idea from the Oxbridge System, there would be a set outline for the syllabus at all levels. The type of vocabulary, grammar and topic will be defined, but teachers will be responsible for developing their own material so that they are familiar with it and have the opportunity to add their own flair to their classes. The type of material used (flashcards, slideshows, whiteboard etc.) will depend purely on the teacher's preference, but it is advised to keep it as simple as possible; a teacher should be able to walk into the classroom with only a pen and a piece of paper to teach.

As with Berlitz, each lesson will cover a specific everyday-life theme (directions, the environment etc.), incorporating the PPP stages and putting the language in a real-life context. Note that one theme could be divided into several lessons covering different aspects of the theme (e.g. “Food” could involve food preferences, dining experiences, doing groceries etc.), which will be prevalent in the lesson structure.

As a revision activity at the beginning of each lesson, Callan Method-type drills will be used to review the previous lesson. Every mistake will be immediately corrected at this stage. The next step would be to briefly introduce a vocabulary or structure point through a casual conversation, eliciting the new target language. In the case of vocabulary, five to seven words will be elicited and then practised through questions (as is done by both Berlitz and Oxbridge). At this point, all mistakes will still be corrected (as with the Callan Method). The next step would be to allow students to practise the new target language and/or grammar through mini-performance activities (skits, storytelling etc.). This section will be concluded with a final “fluency” activity such as a debate, discussion or role-play, where only the most critical errors will be corrected immediately. Teachers should make notes of the mistakes students make and give them feedback after this activity. This would be considered one cycle and should take roughly 30 minutes. A second cycle can then begin with another high-paced activity, such as a quick-fire round of naming homonyms or antonyms, but should more often practise the pronunciation of specific words. Each class will be wrapped up with another Callan Method-type drill to review the work that has been done. As with both Berlitz and Oxbridge, beginner classes will be more structured, whereas classes will become more and more fluency-based as the level of the class increases.
Finally, homework should always be set – though not at the end of a class, as that might give the impression that it is an “afterthought”. Students will be required to read a specific text, article or story set by the teacher (this could either be given by the teacher, or they might be asked to find something to read themselves). They will then be asked to write a summary, answer some questions or complete a specific activity based on what they have read. They will hand in their homework at the next lesson, which the teacher will correct after class and return in the following class. The homework should be short but condensed, so that it focuses on the newly acquired target language and is not time-intensive for the teacher to correct. This is to address reading and listening, which are not truly addressed in class, and it should be considered completely supplementary.

In this assignment we have looked at two very influential methods of English teaching: the Berlitz Method and the Callan Method. We have briefly weighed them up against each other in order to propose a possible new system of teaching. Though no approach to teaching is perfect, all of them – including the new one that has been proposed – should focus on the empowerment of students through self-realisation and -actualisation. May all teachers, then, strive towards the words of Thomas Carruthers (Greene, 2005): "A teacher is one who makes himself progressively unnecessary."

A2Z School of English. 2013. A Short History of the Direct Method. Available from: http://www.a2z-english.com/short-history-direct-method [Accessed: 14 April 2013].

Berlitz Corporation. 2013a. Berlitz Method Cheat Sheet. Princeton: Berlitz Corporation.

Berlitz Corporation. 2013b. History: The Berlitz History. Available from: http://www.berlitz.be/en/berlitz_company/tradition/history/ [Accessed: 14 April 2013].

British Council. 2013. Direct Method. Available from: http://www.teachingenglish.org.uk/knowledge-database/direct-method [Accessed: 14 April 2013].

Callan, R. 2003. Callan Method: Teacher's Book 1. Grantchester: Orchard Publishing.

Finch, A. E. 2003. Teachers – Who Needs Them? Roles and Expectations in the Language Classroom. Hong Kong: Hong Kong Polytechnic University.

Greene, M. J. 2005. Teacher as Counselor: Enhancing the Social, Emotional, and Career Development of Gifted and Talented Students in the Classroom. Bridgetown: Gifted Education International.

JET English College. 2013. What is the Callan Method. Available from: http://www.jetenglish.com/about/callan [Accessed: 14 April 2013].

Konecná, A. 2011. Callan Method under Scrutiny. Brno: Masaryk University.
The idea of a rotating wheel-like space station goes back as far as 1928 in the writings of Herman Noordung and was developed further by Wernher von Braun. Its most famous fictional representation is in the film 2001: A Space Odyssey, which also depicts spin-generated artificial gravity aboard a spaceship bound for Jupiter. The O'Neill-type space colony provides another classic illustration of this technique. However, there are several reasons why large-scale rotation is unlikely to be used to simulate gravity in the near future. In the case of a manned Mars spacecraft, for example, the structure required would be prohibitively big, massive, and energy-costly to run. A better approach for such a mission, and one being explored, is to provide astronauts with a small spinning bed on which they can lie, head at the center and feet pointing out, for an hour or so each day, so that their bodies can be loaded in approximately the same way they would be under normal Earth gravity. In the case of space stations, one of the objectives is to carry out experiments in zero-g or, more precisely, microgravity. In a rotating structure, the only gravity-free place is along the axis of rotation. At right angles to this axis, the pull of simulated gravity at a given radius varies as the square of the tangential speed. Another way to achieve Earth-normal gravity is not by constant rotation, which produces the required force through centripetal acceleration, but by steadily increasing straight-line speed at just the right rate. This is the method used in the hypothetical one-g spacecraft.
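To make the rotation numbers concrete, here is a minimal sketch (not from the original article) of the standard relation a = ω²r between spin rate, radius, and simulated gravity; the function name and the 150 m example radius are illustrative assumptions, not figures from the text:

import math

def rpm_for_gravity(radius_m, g_target=9.81):
    """Spin rate (rpm) at which centripetal acceleration
    omega^2 * r equals g_target at the given radius."""
    omega = math.sqrt(g_target / radius_m)   # angular speed in rad/s
    return omega * 60.0 / (2.0 * math.pi)

# A large station with a 150 m radius needs only a gentle spin:
print(f"{rpm_for_gravity(150.0):.2f} rpm")   # about 2.4 rpm

The same formula shows why small structures are problematic: at a 5 m radius, one g requires roughly 13 rpm, fast enough that the gravity gradient and Coriolis effects become very noticeable.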
The evidence for climate change worldwide is plentiful. Records show an average global temperature rise of over 1.5 degrees Fahrenheit since the 19th century, contributing to warming oceans, shrinking ice sheets, retreating glacier cover, decreased snowfall, and more. For years, people have argued over the root cause of these troubling changes, but that debate has lost its steam. Although it's true the Earth has experienced warming events in the past without humanity's help, the rate of change over the past 100 years is roughly ten times faster than the average rate of ice-age-recovery warming. And the rate of change over the next century is projected to be up to 20 times faster. At least 97% of actively publishing climate scientists agree that climate-warming trends over the past century are the result of human activities. But hidden within that indictment is some good news: because we created the problem, we know how to fix it, even if it won't be easy. In order to address the issue, it's necessary to have a full understanding of our role in climate change and the way each step in the wrong direction triggers a ripple effect of its own.

The primary reason for the rapid spike in average global temperature we're seeing is the increase of heat-trapping gases like carbon dioxide and methane in the atmosphere. These gases are responsible for the greenhouse effect that keeps heat from escaping into space. Humans and other creatures count on these gases to make the planet inhabitable, but it's all about balance—an overload of greenhouse gases in the atmosphere will trap too much heat and cause glaciers to melt, triggering sea level rise, warming oceans, species extinction, and other tragic side effects.

Top Three Causes of Rising Emissions

Burning Fossil Fuels
Burning coal, oil, and gas is the number one human cause of climate change. Emissions from industrial activities and the burning of fossil fuels were on track to pump an estimated 36.8 billion metric tons of carbon dioxide into the atmosphere in 2019, according to a report titled “Persistent fossil fuel growth threatens the Paris Agreement and planetary health.”

Deforestation
Forests are essentially carbon dioxide banks—they absorb it from the atmosphere and help regulate the climate. Unfortunately, forests are disappearing at an alarming rate. Between 1990 and 2016, we lost 502,000 square miles of forest, according to the World Bank. Deforestation directly contributes to climate change, with 20% of the world's emissions resulting from the clearing of tropical forests. In 2017 alone, deforestation added about 7.5 billion tons of carbon dioxide to the atmosphere. Saving the world's forests would go a long way toward healing the environment. According to one estimate, we can work toward meeting the goals set in the 2015 Paris Climate Agreement by safeguarding tropical tree cover; over the next decade, this step alone could provide 23% of necessary climate mitigation.

Livestock Farming
Livestock farming is a big contributor to human-caused climate change when you consider the combined impact of methane released by animals, deforestation for agricultural expansion, and fossil fuels burned to produce mineral fertilizers for feed production. According to a UN report titled “Tackling Climate Change Through Livestock,” an estimated 14.5% of annual global greenhouse gas emissions come from the livestock industry.
To put this number in perspective, all the transport vehicles in the world—cars, trucks, trains, boats, and airplanes—create roughly the same level of fuel emissions each year.

Ripple Effects of Climate Change
If emissions continue to increase unchecked, a number of side effects will ripple around the world. Here are just a few:

More Droughts and Heat Waves
Heat waves around the world and droughts, especially in the American Southwest, are expected to become more frequent and intense in coming years. The National Climate Assessment estimates 20-30 more days over 90 degrees Fahrenheit in most areas of the US by mid-century. Droughts and heat waves, in turn, will contribute to larger wildfires and longer fire seasons, a reality many around the world have experienced first-hand over the past few years.

Stronger and More Intense Natural Disasters
Natural disasters, especially hurricanes and typhoons, are expected to increase in frequency and severity as the world warms. Warmer air holds more water vapor, which leads to more rain, like the historic rainfall we've seen in recent hurricanes. This also means more intense storm surges, which will compound already rising sea levels. Natural disasters come with their own negative consequences, of course, including lost lives, damaged infrastructure, reduced ecosystem resilience, and contaminated groundwater due to increased sediment and pollutants following heavy downpours.

Sea Level Rise
Global sea levels have risen between 4-8 inches over the past hundred years, according to the Intergovernmental Panel on Climate Change (IPCC). Increased emissions have contributed to the fact that 390 billion tons of ice and snow melt every year, which in turn has caused global sea levels to rise. Depending on how much we're able to reduce greenhouse gas emissions in the coming years, sea levels could rise anywhere from 12 inches to 8.2 feet by 2100. This is extremely alarming information, especially for the more than 100 million people around the world who live within three feet of mean sea level.

How Population Growth Relates to Climate Change
If humans have caused climate change, it follows that rapid population growth is going to exacerbate the problem. The world's population is increasing by an estimated 81 million people per year, according to the World Population Clock. A study published in 2017 by the Universities of Lund and British Columbia suggested that having one fewer child is the single most effective measure an individual in the developed world can take to cut their carbon emissions over the long term. They calculated that having one fewer child could result in an average reduction of 58.6 tons of carbon dioxide-equivalent emissions per year in developed countries.

See How You Can Make an Impact
Because of the link between population growth and climate change, it's important that people around the world are equipped with the knowledge and resources to make informed decisions about how they choose to grow their families. At Population Media Center (PMC), our goal is to offer people that information in the form of engaging educational entertainment. We meld entertainment-industry insight with behavior theory to create entertaining hit shows that are uniquely designed to address deeply embedded personal and social issues, including environmental concerns like deforestation and endangered species preservation.
When done well, educational entertainment has the power to motivate entire societies to create long-lasting, meaningful change in their behaviors and in their relationship with the natural world. With the power of mass media, we are able to reach large audiences for relatively little cost, which means even small donations can go a long way toward solving some of today’s most pressing challenges. See how you can make an impact today.
Araminta “Minty” Harriet Tubman was born into slavery in Maryland but later escaped and became one of the main leaders of the “Underground Railroad”, which led hundreds of slaves to freedom. It is not certain when Harriet and her eight siblings were born, but it is estimated to be between 1815 and 1825. Plantation owners owned her parents, Harriet “Rit” Green and Ben Ross, and some of their children were sold to plantations in other states. Physical violence was a common occurrence for Harriet and her family, particularly in the form of whipping; Harriet carried scars on her back for the rest of her life. On one occasion, when she refused to do something, Harriet's overseer threw a two-pound weight at her head, knocking her out. This led to seizures, headaches and narcolepsy, from which she suffered for the rest of her life. On the other hand, the seizures caused her to fall into intense dream states, which she believed to be religious experiences.

Harriet's father became a free man at the age of 45; however, having nowhere to go, he remained working on the plantation in slave-like conditions. He did not feel he could leave his family, who remained in the possession of the plantation owners. Even when Harriet married John Tubman, a free man, in 1844, she was not released from slavery.

In 1849, Harriet made her first trip from South to North following a network known as the Underground Railroad. Following the death of her owner, Harriet decided to escape from slavery and run away to Philadelphia. On 17th September 1849, Harriet and two of her brothers began the long journey, but after they learnt that Harriet was being sought in the papers for a reward of $300, the boys had second thoughts and returned home. Harriet's husband had also refused to go with her and later took a new wife. Continuing alone, Harriet travelled almost 90 miles to Philadelphia, where she finally entered a Free State. “When I found I had crossed that line, I looked at my hands to see if I was the same person. There was such a glory over everything; the sun came like gold through the trees, and over the fields, and I felt like I was in Heaven.”

But this was not the end of Harriet's story. No sooner had she arrived than she returned to the South to help more than 300 people escape from slavery. Between 1850 and 1860, Harriet made 19 trips, the first being to help her niece Kessiah and her family escape the harsh conditions. Things became harder when the Fugitive Slave Law came into force, stating that escaped slaves could be arrested and returned to their owners even if they were living in Free States. Nonetheless, Harriet persevered, rerouting the Underground Railroad to Canada.

Harriet had a prophetic vision about the abolitionist John Brown, whom she later met in 1858. Although Brown advocated violence, he ultimately wanted the same result as Harriet, and they began working together. Unfortunately, Brown was arrested and executed, after which Harriet praised him as a martyr.

During the Civil War, Harriet entered the Union Army as a cook and nurse, although she ended up working as an armed scout and spy. She was the first woman to lead an armed expedition during the war, which resulted in the liberation of over 700 slaves in South Carolina. In 1859, Harriet bought a small piece of land near Auburn, New York, from fellow abolitionist Senator William H. Seward. Ten years later, she married Civil War veteran Nelson Davis, and in 1874 they adopted a baby girl called Gertie.
They lived happily in their own home; however, they were never financially secure, and friends and supporters endeavoured to raise money for her. One admirer, Sarah H. Bradford, wrote Harriet's biography and gave her all the proceeds. In 1903, Harriet gave her land to the African Methodist Episcopal Church and, five years later, the Harriet Tubman Home for the Aged opened there. Sadly, Harriet's health was not good. The physical abuse she had received as a slave caused her severe problems, resulting in brain surgery to alleviate some of the pain. She eventually died of pneumonia in 1913 and was buried at Fort Hill Cemetery with military honours.

At the end of the 20th century, Harriet Tubman was named one of the most famous civilians in American history, and she is soon to be the face on the new $20 bill. Yet outside of America Harriet remains relatively unknown. However, last year a film titled Harriet was released, documenting her life as a conductor of the Underground Railroad; A Woman Called Moses from 1978 also documented her career. So perhaps Harriet Tubman may not remain unknown in Europe for long.
Mountain hare (Lepus timidus)

Size: head and body length 46-65 cm (2); tail length 4.3-8 cm (2). Weight: 2-6 kg (2).

The mountain hare is classified as Least Concern (LC) on the IUCN Red List (1). It is not protected in the UK (3), but is listed in Annex V of the EC Habitats Directive as a species of community interest (4).

The mountain hare (Lepus timidus), also known as the blue hare, or white hare in winter, is native to Britain, unlike the brown hare (Lepus europaeus) and rabbit (Oryctolagus cuniculus), which are thought to have been introduced by the Romans (4). It has a lighter build than the brown hare, and is easily distinguished by its tail, which is completely white throughout the year, whereas in the brown hare the tail has a black upper surface (2). The ears are tipped with black, and the coat is brown in summer, turning white during winter (4). Males and females are generally similar in appearance, but females are slightly heavier (4).

In Great Britain, the mountain hare is native only to the Scottish Highlands; it was translocated to England, Wales, the Isle of Man and various Scottish islands, mainly for shooting. At present it occurs in the Scottish Highlands, where it is common, the Borders, south-west Scotland, the Peak District and the Isle of Man, but the Welsh population seems to have become extinct (5). In England, just six isolated populations are known, and the status of the species in England seems precarious (5). Outside of Great Britain, this species has a broad distribution that covers most of the Palaearctic region (1).

Throughout most of its distribution the mountain hare inhabits boreal forests; in Great Britain, however, it tends to be associated with heather moorland, especially where management for grouse is in place (4), which creates a patchwork of heather at different ages (3). It also occurs in montane grassland, new forestry plantations and dry rocky hills (5). In areas where brown hares are absent, mountain hares may inhabit pasture and arable lowlands (5).

This species is active in the evening and at night, but during the breeding season it becomes more active during the day (5). Mountain hares tend to rest during the day in forms, scrapes or burrows in the snow or soil (4). Although typically a solitary species, groups of up to 70 individuals may occasionally gather in order to feed (5). The diet consists mainly of young heather, but grasses, rushes, sedges, bilberry and herbs are also eaten (5).

The breeding season occurs between February and August (5). During this time, several males may pursue a single female, who may 'box' them away if she is not ready to mate (4). Gestation takes around 50 days (4); between one and four litters are produced each year, each consisting of one to five young, called leverets, although up to eight have been recorded (5). The leverets are born with fur and with their eyes open, and are left on their own for much of the time; the mother returns only to suckle them (4). Adult mortality is quite high (5); the main predators are foxes, birds of prey, stoats and cats (4), but adults are known to live to over nine years of age (5).

In Great Britain, the population is fairly fragmented and isolated, which makes the species particularly vulnerable: adverse weather conditions and other chance events can severely threaten small isolated populations (3). This hare relies on heather moorland managed in traditional ways for red grouse (Lagopus lagopus).
Unfortunately, both this habitat and the management techniques that benefit this species are declining (5). In some areas the mountain hare is thought of as a pest (3), as it is believed to compete with grouse for food (5); hares are therefore shot in order to control them (3). Poachers with dogs are a threat in the Peak District (5), and disturbance in areas where recreational pressures are high may also be a problem (3). The listing of the mountain hare under Annex V of the EC Habitats Directive means that a number of methods of capture are restricted or banned (4). Before direct conservation action can be undertaken, further research is needed into this species in Great Britain (4).

Glossary
- Boreal forest: the sub-arctic forest of the high northern latitudes that surrounds the pole and is mainly composed of coniferous trees.
- Gestation: the state of being pregnant; the period from conception to birth.
- Montane: of mountains, or growing in mountains.
- Palaearctic region: the region that includes Europe, the part of Asia to the north of the Himalayan-Tibetan barrier, North Africa and most of Arabia.
- Translocated: when individual living organisms from one area have been transferred and released or planted in another area.

References
(1) IUCN Red List (April 2011).
(2) Burton, J. A. (1991) Field Guide to the Mammals of Britain and Europe. Kingfisher Books, London.
(3) Morris, P. (1993) A Red Data Book for British Mammals. Mammal Society, Bristol.
(4) The Mammal Society. Mammal Factsheets (August 2002).
(5) Macdonald, D.W. and Tattershall, F.T. (2001) Britain's Mammals: The Challenge for Conservation. The Wildlife Conservation Research Unit, Oxford University.
Analysis of Spectra

Standing Waves in a String
Consider the case of a tense string anchored at fixed points P and Q (figure 4.1), of length L and mass per unit length d, stretched with a given force T that can be changed at will (say by changing the mass m of a body suspended from the string, as shown). Such a string supports standing waves only at a discrete set of resonant frequencies, the harmonics f_n = (n/2L)·sqrt(T/d) for n = 1, 2, 3, ..., so the pitch rises as the tension increases and falls as the string is made longer or heavier.

Hermann von Helmholtz (1821-1894)
Helmholtz resonators are hollow glass spheres that have two short tubular necks, diametrically opposite one another. One opening was put to the ear, the other directed at the sound source. By using a series of these, Helmholtz was able to estimate the strengths of the harmonics of a periodic sound. He also found the frequencies of inharmonic partials of bells and gongs.

Resonances have a noticeable effect on the timbre of musical sounds. For brass instruments this is controlled by the length and shape of the tubing, and by how the player constricts the lips. For woodwind instruments, the tubular structures are resonators whose resonant frequencies are controlled by opening or closing various holes. The vocal tract has several resonances that emphasise various ranges of frequency in the sound produced by the vibration of the vocal cords; by changing the shape of the vocal tract, the frequencies of these resonances, or formants, determine which vowel sound is produced. The resonances of the soundboard of a violin greatly affect its timbre, and the suppression of some partials is important for the musical quality of the violin tone.

Fourier Analysis and Fourier Synthesis
The French mathematician Jean-Baptiste Joseph Fourier (1768-1830) invented a type of mathematical analysis by which it can be proved that any periodic wave can be represented as a sum of sine waves having the appropriate amplitudes, frequencies and phases. For a periodic wave this sum is a Fourier series; for sounds that are not periodic, a sum over a continuous range of frequencies—a Fourier integral—is required. Furthermore, for harmonic spectra the frequencies of the component waves are related in a simple way: they are whole-number multiples of a single frequency (f0, 2f0, 3f0, and so on).

A square wave (a pulsed wave with a mark-to-space ratio of 1:1) requires the sum of an infinite number of sine components whose frequencies are odd whole-number multiples of the fundamental (f0, 3f0, 5f0, 7f0, ...), whose amplitudes decrease in proportion to the inverse of the harmonic number (1, 1/3, 1/5, 1/7, ...), and which have the proper phases. Most periodic sound waves contain both odd and even frequency components, although closed organ pipes and some wind instruments (e.g. clarinets) do have predominantly odd frequency components. A triangular wave also has only odd-numbered frequency components, but their amplitudes differ from those of a square wave, falling off as the inverse of the square of the harmonic number (1, 1/9, 1/25, 1/49, ...). A numerical sketch of this kind of synthesis is given at the end of this section.

A Fourier representation of a complex wave of finite duration (as musical sounds are) requires an infinite number of different harmonics. Trying to represent actual sounds as sums of true sine waves, which persist from an infinite past into an infinite future, is a mathematical artifice. Consider the nearly periodic sounds produced by musical instruments: a sum of harmonically related sine waves doesn't correctly represent such a sound, because the sound starts, persists a while and dies away.

"Noisy" sounds such as the hiss of escaping air or the "sh" or "s" sounds of speech can be represented as the sum of sine waves (a Fourier integral) that have slightly different frequencies. When the sound is repeated, the waveforms won't be exactly the same.
The power of the sound in any narrow range of frequencies will be about the same, but the amplitudes and phases of the individual frequency components won't be identical. Nevertheless, the two different "sh" sounds will sound the same; we will hear them as being identical. It may, however, for certain purposes be adequate to use fewer components.
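To make the square-wave synthesis described above concrete, here is a minimal numerical sketch (not part of the original text; the function name and example values are illustrative). It sums the odd harmonics f0, 3f0, 5f0, ... with amplitudes 1, 1/3, 1/5, ...; with the 4/pi scaling, the partial sums converge toward a square wave alternating between +1 and -1:

import numpy as np

def square_wave_partial_sum(t, f0, n_terms):
    """Fourier synthesis of a square wave from its first n_terms
    odd harmonics: sin(2*pi*n*f0*t)/n for n = 1, 3, 5, ..."""
    y = np.zeros_like(t)
    for k in range(n_terms):
        n = 2 * k + 1                             # odd harmonic number
        y += np.sin(2.0 * np.pi * n * f0 * t) / n
    return (4.0 / np.pi) * y                      # scale to a +/-1 square wave

t = np.linspace(0.0, 0.01, 1000)                  # 10 ms of signal
rough = square_wave_partial_sum(t, f0=440.0, n_terms=3)    # audibly "hollow"
better = square_wave_partial_sum(t, f0=440.0, n_terms=50)  # much squarer

Replacing the amplitude 1/n by 1/n² (and alternating the sign of successive terms) gives the triangular wave mentioned above.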
Brought to you by the American School Counselor Association

For preteens who are struggling to "fit in," diversity and tolerance can seem like foreign concepts. Kids this age often obsess about having the right hairstyle and clothes and using the same lingo as their friends. Teaching a middle schooler to respect others with regard to more serious issues of bias and discrimination presents a particular challenge for parents. Here are some helpful tips.

Watch what you say. We can't expect our children to be tolerant if we don't model respect for others. Examine your own language for times when you use statements that stereotype a group or individual. Speak out against jokes and slurs that target groups of people. Keeping silent, walking away, or not laughing doesn't show your children -- or those making the jokes -- that you won't tolerate bigoted remarks.

Provide opportunities for your kids to have friends from diverse cultures. Encourage after-school play with children from diverse cultures. The holidays also provide a perfect time for you to invite friends and co-workers from different backgrounds to experience the joy of your family traditions and customs.

Discuss the impact of prejudicial attitudes and behavior. Consider adding the topic to a family meeting. Provide as much accurate information as you can to dispel the harmful myths and stereotypes that middle schoolers often perpetuate when talking with their peers.

Plan weekend trips that will expose your family to different cultures and customs. Visit museums, libraries, street fairs, and cultural events that showcase art, dance, music, and foods of diverse cultures. Explore diverse neighborhoods in and around your community. Seek out historical landmarks and exhibits in your area that chronicle human and civil-rights struggles.

Read books written by diverse authors on diverse subjects. Read and encourage your children to read books that promote understanding of different cultures. Identify your own personal heroes in history and fiction, then challenge your kids to choose their own positive role models.

Share your family's immigration story. Involve your children in your family's history. Trace your ancestry and pass down any stories about your relatives' immigration experience. If your family was involved in the struggle for civil and human rights, be sure that your children take pride in that heritage.

Make sure that your child's school incorporates diversity programs. School systems should be actively involved in diversity training and teaching tolerance. Mediation and conflict resolution programs should be a part of your child's school. Diversity clubs, diversity ambassadors, multicultural assemblies, and cultural programs are all ways that schools are integrating the teaching of tolerance.

Stay involved in your child's life. Know how your kids spend their time when they're alone, and learn who their friends are. This is a time when peers have enormous influence, but your kids still need to hear what you have to say, even though it seems that they're not listening. Your role modeling is important: Take the time to correct your child's misperceptions by emphasizing the facts. And finally, if you reassure your kids that they can safely be themselves in any situation, they will be more likely to respect differences in others.
Antarctica has been covered with ice for thousands of years, and only recently have we been able to fully identify its underlying features. Scientists have turned the spotlight on the topography of Antarctica and revealed much detail about its rough and rocky surface. Antarctica is in fact a mountainous land, and some of the most impressive work establishing this fact has been sponsored by the British Antarctic Survey. In collaboration with international institutes, in 2012 the BAS released a precise topographic map showing the mountain ranges, valleys and plains of the Antarctic continent in stark and never-before-seen detail. This representation is called BEDMAP, and it was compiled primarily through the use of radar images and satellite readings, combined with map-making software that allows for vivid recreations of the rugged Antarctic landscape. A decade before the BAS team unveiled its map, Charles Webb from NASA's Cryosphere Science Research Center had created a fairly detailed representation of Antarctica's rock bed. However, his work was based only on ground-based measurements and was therefore limited in terms of how much land it could cover. Many people had formed their impressions about Antarctica's hidden landscape from the Piri Reis Map, which some believe survived from extreme antiquity and accurately described Antarctica the way it was before it was covered by ice.
Synapsis is the pairing of two homologous chromosomes during meiosis. This process allows the chromosomes to cross over and exchange genetic material, and this exchange leads to genetic variation in organisms that reproduce sexually.

Synapsis occurs during the first phase of meiosis, prophase I, when each chromosome—already duplicated into a pair of sister chromatids—aligns with its homologous partner. The paired chromosomes connect to one another through RNA and protein combinations, interlocking and coiling together; this is when crossover takes place. As the phase continues, the homologous chromosome pairs migrate toward either the left or right side of the cell. As the process moves into anaphase I, the synapsis ends, and as it ends, the chromosome pairs separate. In later phases, these chromosomes are partitioned into separate daughter cells.

After synapsis has ended, meiosis II occurs. In this stage, some of the processes that began in meiosis I are completed: the cells from meiosis I divide again and gametes are formed. These gametes play a significant role in genetic variability through the independent assortment of the 23 chromosome pairs. This, along with the exchange of genetic information during crossover in synapsis, leads to variability among individuals.
In Year 6, our first topic is ‘Explorers and Adventurers’ and we will be learning about various expeditions and the brave people that led them. In English we will be writing warning stories focusing on our use of figurative language to describe settings. Some book recommendations to support your child with their writing are: - The Explorer/Rooftoppers by Katherine Rundell - The Nowhere Emporium by Ross Mackenzie - Bright Storm by Vashti Hardy In Mathematics, we will be refining the speed and accuracy at which we can use written methods for addition, subtraction, multiplication and division. We will use written methods to support our problem solving and also practise ways in which we can find common factors, common multiples and prime numbers. Our Science topic is ‘Living Things and their Habitats’. We will be classifying plants and animals according to shared characteristics. In humanities we will be exploring North and South America, investigating the human and physical characteristics of the Amazon Rainforest. We will also be investigating the question, ‘What impact does exploring new territories have on habitats?’. If you have any further questions about the content of the Year 6 curriculum please contact the Year 6 Team Leader, Ms Haigh, who will be more than happy to answer any questions you may have.
New stars tend to form with disks of gas and dust around them. After a few hundred thousand years or so, the intense ultraviolet radiation from the most massive of these stars has expelled much of the gas in the outer portions of the nearby disks, and scientists think that the escaping gas takes some of the dust along with it. That dust can be seen at infrared wavelengths as cool, comet-shaped globules. Since these disks are the birthplaces of planets, the processes involved in producing globules will affect the formation and subsequent evolution of planets; hence astronomers are very interested in the diagnostic clues provided by cometary globules. The Spitzer Space Telescope, with its infrared cameras, is able to study many of these dim cometary globules for the first time. Three SAO astronomers, Xavier Koenig, Lori Allen, and Scott Kenyon, along with two colleagues, have imaged the giant star-forming region called W5 over an area of the sky about the size of four full moons, and discovered four such globules. They report in this week's Astrophysical Journal Letters that their data indicate the dust in these globules was not completely removed with the gas. Instead, the dust appears to have remained in the disk and only later was blown out by radiation pressure from the nearest massive star. The difference is important because of the new timescale it implies: rather than being removed from the disk with the gas in tens of thousands of years, as had once been suggested, the new results suggest that the dust can survive in the disk for a few million years. Planets, after they form from these disks, can migrate inward towards their star on a timescale of hundreds of thousands of years, so these new results address the environment of planets during this early phase of their evolution.
This is a week-long literacy unit geared toward first and second grade. In this unit students will read all about their amazing hands on 4 pages of reading printables. From talking, caring, protecting, building, and eating, our hands make it all possible! With this topic of hands, we can cover many standards, have fun, and learn TONS! This unit takes the fun concept of our hands and includes components to hit many of your standards through vocabulary, writing, reading, word study, and even a cute poem too! The writing activity is centered around motivating students to write a narrative including feelings and vocabulary. Students each get to choose a feeling word and then create a picture showing how they express that feeling. A pair of hands gets added to the picture to show how they "talk with their hands." It makes an adorable bulletin board and writing piece for their portfolios. Students also explore the many words that have the word hand as a root word. Fun phrases with the word "hand" can be discussed and hung to enhance your week of learning and classroom space! You'll really have the "upper hand"... Be sure to look at the preview file to see the common core standards covered and examples of each component of this unit! Tunstall's Teaching Tidbits
Nationalism means the wish of a people to govern themselves as a nation. This ideal reshaped the map of Europe in the 19th century. Later in the century, nationalism took on a second meaning—an exaggerated belief in the superiority of one's own nation.

Table 54. NEW NATIONS
1830–1831: Nationalist agitation; calls for democratic reform across Europe
1832: Greece recognized as independent from Turkey
1848: Nationalist and liberal uprisings across Europe
1871: Germany unites as an empire
1871: Italy becomes a single nation

Between 1772 and 1795, Poland was divided among Russia, Prussia, and Austria. There were nationalist uprisings against the Russians in 1830 and 1863, but independence was not regained until 1918.

Since the Middle Ages, Germany had been a patchwork of free cities and small states within the Holy Roman Empire. In the 1800s, these gradually came together, economically and then politically. In 1871, Wilhelm I of Prussia became emperor of a united Germany.

Giuseppe Garibaldi (1807-1882) dreamed of uniting Italy and freeing it from foreign rule. In 1860 he assembled 1,000 volunteers, who wore red shirts as a uniform. They sailed from Genoa to Sicily and joined an uprising against the kingdom's Bourbon rulers. They then crossed to southern Italy. Garibaldi later tried to march on Rome, and fought against Austria.

Otto von Bismarck (1815-1898) was a Prussian politician, a conservative and a royalist. He opposed the liberal nationalists who demanded democratic change in Germany in 1848, but played a major role in creating the German Empire of 1871.
A recession is a temporary downturn in economic activity, usually indicated by two consecutive quarters of falling GDP. The official NBER definition of recession (which is used to date U.S. recessions) is: "A recession is a significant decline in economic activity spread across the economy, lasting more than a few months, normally visible in real GDP, real income, employment, industrial production, and wholesale-retail sales. A recession begins just after the economy reaches a peak of activity and ends as the economy reaches its trough. Between trough and peak, the economy is in an expansion. Expansion is the normal state of the economy; most recessions are brief and they have been rare in recent decades." The start and end dates are determined by the Business Cycle Dating Committee of the National Bureau of Economic Research (NBER). It is a popular misconception that a recession is indicated simply by two consecutive quarters of declining GDP; that is true of most, but not all, recessions. NBER uses monthly data to date the starting and ending months of recessions.

In another common definition, a recession is an extended decline in general business activity, conventionally two consecutive quarters of falling real gross domestic product (though, as noted above, the NBER's formal definition is broader). A recession affects different securities in different ways. For example, holders of high-quality bonds stand to benefit because inflation and interest rates may decline. Conversely, stockholders of manufacturing firms will probably see company profits and dividends drop.

Case Study
After nearly a year of falling commodity prices, rising unemployment, increasing personal and corporate bankruptcies, falling stock prices, and declining public confidence, the National Bureau of Economic Research made it official and on November 26, 2001, declared a recession. The announcement wasn't a surprise to the hundreds of thousands of people who had lost their jobs or to the even greater number of investors who had experienced substantial losses in the stock market. The bureau's Business Cycle Dating Committee of six academic economists determined the recession commenced in March 2001, when economic activity stopped growing. Although many economists use declines in gross domestic product to define a recession, the NBER dating committee examined employment, industrial production, manufacturing and trade sales, and personal income. The country's previous recession lasted eight months and ended in March 1991. The subsequent ten-year period of uninterrupted growth between March 1991 and March 2001 was the longest in America's history.

Broadly defined, a recession is a downturn in a nation's economic activity. The consequences typically include increased unemployment, decreased consumer and business spending, and declining stock prices. Recessions are typically shorter than the periods of economic expansion that they follow, but they can be quite severe even if brief. Recovery is slower from some recessions than from others. The National Bureau of Economic Research (NBER), which tracks recessions, describes the low point of a recession as a trough between two peaks, the points at which a recession began and ended -- all three of which can be identified only in retrospect. The Conference Board, a business research group, considers three consecutive monthly drops in its Index of Leading Economic Indicators a sign of decline and potential recession up to 18 months in the future.
The Board's record in predicting recessions is uneven; it has correctly anticipated some but expected others that never materialized. Technically, a recession is two successive quarters of falling gross domestic product, as judged by the National Bureau of Economic Research, a private nonprofit, nonpartisan research organization founded in 1920. Commonly, it is a time of general economic slowdown.
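As a toy illustration (not part of any of the dictionary entries above), the informal two-quarter rule is easy to express in code; the function name and the quarterly real-GDP series here are hypothetical:

def two_quarter_downturns(gdp):
    """Indices of quarters at which the informal rule is met:
    real GDP has fallen for two consecutive quarters.
    `gdp` is a list of quarterly real-GDP figures in time order."""
    flagged = []
    for i in range(2, len(gdp)):
        if gdp[i] < gdp[i - 1] < gdp[i - 2]:
            flagged.append(i)
    return flagged

# Hypothetical data: growth, then two down quarters, then recovery.
gdp = [100.0, 101.2, 100.8, 100.1, 99.5, 100.3]
print(two_quarter_downturns(gdp))   # [3, 4]

Note how crude this is compared with the NBER approach described above, which weighs monthly employment, income, production and sales data rather than a single quarterly series.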
What is Character Encoding? Unicode?

Character encoding is one of the most important and least understood aspects of programming, and of computers in general. In its most basic sense, an encoding is the way that a computer reads and displays a file in a way that humans can understand. Every text file, no matter what language it is written in, is really just a big collection of 0's and 1's. This is called binary, and it's kind of important. But since most normal humans can't read binary, computers take those binary numbers and convert them into characters—letters and numbers and punctuation—that make sense to people. How the computer does this for any given file is called the encoding. As an analogy, think back to what you learned in algebra. The binary numbers are x, and the characters that need to be displayed are y. Encoding is the equation that connects them.

Because binary numbers don't mean any specific character on their own, an encoding also defines what set of characters the 0's and 1's will be converted to. Since computers were first developed in the United States, the first character sets only included characters used in English: a-z, A-Z, 0-9, and some common punctuation. All of these characters fit into 128 places, which conveniently took up only 7 bits. This character set is called the American Standard Code for Information Interchange, or ASCII for short.

ASCII only requires 7 bits, and computers at the time were capable of running 8. In order to display characters from their own languages, groups all over the world used the 8th bit to add an extra 128 characters, for a total of 256. The result was that every language had a different way of displaying the same file. For example, the same byte value displays one character in Windows-1250, an encoding for Central European languages, and a different character in Windows-1252, a Western European encoding. Even between encodings for the same languages, differences exist: two different standards, Windows and ISO, both include encodings for Central European, and their character sets differ in much the same way.

So this went on for a while before transferring files internationally became commonplace, and every language had its own way of reading those 0s and 1s. The first 127 characters were the same, consisting of non-accented English characters, while the rest of the characters varied wildly depending on the language. Sending files to another country, or even just to someone whose computer didn't use exactly the same encoding as yours, resulted in all of your nice, language-specific characters getting transformed into some other characters that didn't make sense in your file. Plus, there were those characters that weren't supported at all in other languages, so you'd just get rows of boxes or question marks. Don't even get started on Asian languages, which employ thousands of characters that can't possibly be represented in 256 places. (Asian languages used something called the Double Byte Character Set, in which some characters take one byte and some take two.)

Thankfully, the nice people at the Unicode Consortium came along to sort it all out by the time the Internet showed up. Unicode is an effort to pair every single character in every human language with a Unicode number, or code point. So far, they have over 100,000. That's a lot more than 256! In Unicode, an A is the same A no matter the font or style, but it is a different character from a lowercase a. Every A is identified by its Unicode code point (U+0041).
At first, people came up with the brilliant idea of representing every Unicode character with two bytes. This system was called UCS-2. It worked decently well, but since switching to UCS-2 required converting all of those old ASCII, Windows, and IBM documents into UCS-2, it didn't catch on right away.

And then UTF-8 was born. In UTF-8, every code point from 0-127 is stored in only one byte, while those above 127 are stored using two to four bytes. This has the added advantage of being identical to ASCII for English text, which means that anything written in English looks exactly the same in UTF-8 as it did in older encoding methods. Handy, right? Right. UCS-4, UTF-16, UTF-32, and UTF-7 are all different ways of encoding the Unicode code points using varying numbers of bytes.

Remember those Unicode code points? The ones that cover over 100,000 characters? Well, if you don't want to use UTF-8, you can still encode any of those code points in an older encoding system, as long as the system supports the character you're trying to use. You still can't display Russian letters in the Western European Windows-1252; you'll just get question marks and boxes. With the Unicode encodings (UTF-8, etc.), any character can be properly displayed.

No matter what encoding system you're using, it's important to let people know. The top of any HTML document should contain a meta tag that tells web browsers which encoding to use to read it. It should look something like this:

<meta http-equiv="Content-Type" content="text/html; charset=utf-8">

This meta tag needs to be the very first thing in your file after the html and head tags; otherwise the browser will get confused and display your page in whatever encoding it thinks is right.

Finally, in order to use any encoding when creating or editing documents, you need a text editor that supports that encoding. Many text editors (other than Notepad, of course) allow you to change the encoding you use for a file. And, of course, some text editors offer support for more encodings than others. Figure out what you need and find a text editor that fully supports your choice of encoding.
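Here is a minimal sketch of these ideas in Python 3 (not from the original page; the example word is an assumption). It shows UTF-8's variable-length behavior and the mojibake you get when bytes are decoded with the wrong encoding:

s = "žluťoučký"                     # a Czech word with several accented letters

utf8 = s.encode("utf-8")            # unaccented letters -> 1 byte, accented -> 2
cp1250 = s.encode("cp1250")         # legacy Central European: 1 byte per character

print(len(s))                       # 9 characters
print(len(utf8), len(cp1250))       # 13 bytes vs 9 bytes

# Decoding bytes with the wrong encoding doesn't raise an error here;
# it silently yields the wrong characters (mojibake):
print(utf8.decode("latin-1"))       # garbled accents instead of the original word

This kind of mismatch is exactly what the charset meta tag above is meant to prevent: the browser must know which decoding to apply to the bytes it receives.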
Transduction is the genetic transfer mechanism by which a virus carries genetic material from one bacterium to another. Other types of genetic transfer mechanisms include conjugation and transformation. Transduction is mediated by viruses: viruses called bacteriophages are able to infect bacterial cells and use them as hosts to make more viruses. After multiplying, these viruses assemble and occasionally package a portion of the host cell's bacterial DNA. Later, when one of these bacteriophages infects a new host cell, this piece of bacterial DNA may be incorporated into the genome of the new host.

Transduction is thus the process by which foreign DNA is introduced into a bacterial cell by a virus or viral vector. Because DNA is transferred from one bacterium to another by a virus, it is an example of horizontal gene transfer. In transduction, DNA is accidentally moved from one bacterium to another by a virus or phage. There are two main types of transduction: generalized transduction and specialized transduction. In generalized transduction, the bacteriophages can pick up any portion of the host's genome. In contrast, with specialized transduction, the bacteriophages pick up only specific portions of the host's DNA. Scientists have taken advantage of the transduction process to stably introduce genes of interest into various host cells using viruses.

MECHANISM OF TRANSDUCTION

In transduction, viruses that infect bacteria move short pieces of chromosomal DNA from one bacterium to another "by accident." The viruses that infect bacteria are called bacteriophages. Bacteriophages, like other viruses, are the pirates of the biological world: they commandeer a cell's resources and use them to make more bacteriophages. The virus infects a cell by injecting its DNA into the target bacterium (Figure 1). The bacterial DNA is fragmented and the viral DNA is replicated. New viral particles are made and exit the cell; occasionally one contains host DNA instead of viral DNA. When this virus infects a new host, it injects the bacterial DNA, which can recombine with the chromosome of the new host. Bacteria are infected by bacteriophages. Archaea are not infected by bacteriophages but have their own viruses that move genetic material from one individual to another.

Figure 1. Illustration of Transduction

TYPES OF TRANSDUCTION

Generalized transduction is the process by which any bacterial DNA may be transferred to another bacterium through a bacteriophage. It is a rare event; a very small percentage of phage particles happen to carry a donor bacterium's DNA, on the order of 1 phage in 10,000. In essence, this is the packaging of bacterial DNA into a viral capsid. It may occur in two main ways: recombination or headful packaging. If bacteriophages undertake the lytic cycle of infection upon entering a bacterium, the virus takes control of the cell's machinery to replicate its own viral DNA. If by chance bacterial chromosomal DNA is inserted into the viral capsid that is normally used to encapsulate the viral DNA, the mistake leads to generalized transduction. If the virus replicates using "headful packaging," it attempts to fill the nucleocapsid with genetic material; if the viral genome leaves spare capacity, the packaging mechanism may incorporate bacterial genetic material into the new virion. The new virus capsid, now loaded partly with bacterial DNA, then infects another bacterial cell.
This bacterial material may become recombined into another bacterium upon infection. When the new DNA is inserted into the recipient cell, it can meet one of three fates:
- The DNA is absorbed by the cell and recycled for spare parts.
- If the DNA was originally a plasmid, it re-circularizes inside the new cell and becomes a plasmid again.
- If the new DNA matches a homologous region of the recipient cell's chromosome, it exchanges DNA material, similar to the events in bacterial recombination.

Specialized transduction is the process by which a restricted set of bacterial genes is transferred to another bacterium. The genes that get transferred (donor genes) depend on where the phage genome is located on the chromosome. Specialized transduction occurs when the prophage excises imprecisely from the chromosome, so that bacterial genes lying adjacent to the prophage are included in the excised DNA. The excised DNA is then packaged into a new virus particle, which delivers the DNA to a new bacterium, where the donor genes can be inserted into the recipient chromosome or remain in the cytoplasm, depending on the nature of the bacteriophage. When this partly bacterial, partly phage DNA infects another cell and integrates as a prophage (covalently bonded into the infected cell's chromosome), the resulting cell carrying the extra bacterial genes is called a "heterogenote."

Lateral transduction is the process by which very long fragments of bacterial DNA are transferred to another bacterium. So far, this form of transduction has only been described in Staphylococcus aureus, but it can transfer more genes, and at higher frequencies, than generalized and specialized transduction. In lateral transduction, the prophage starts its replication before excision, in a process that leads to replication of the adjacent bacterial DNA. When the replicated DNA excises from the chromosome, bacterial genes located up to several kilobases from the phage can get packaged into new virus particles that are transferred to new bacterial strains. If the transferred genetic material provides sufficient DNA for homologous recombination, it will be inserted into the recipient chromosome.

Reference: Griffiths AJ, Miller JH, Suzuki DT, Lewontin RC, Gelbart WM (2000). "Transduction," in An Introduction to Genetic Analysis (7th ed.).
Cratered Mercury won't win any beauty prizes, but it sure had a colorful past. Thanks to the Messenger spacecraft, which began orbiting the world in March 2011, planetary scientists have been able to count craters over Mercury's entire surface. The crater counts help gauge the age of different terrains, because older regions have suffered more impacts. In the researchers' map, the most heavily cratered regions are colored red. By extrapolating from the Moon, where Apollo astronauts retrieved rocks that scientists dated, the researchers concluded in this week's Nature that lava flooded all of Mercury 4.0 to 4.1 billion years ago; the global volcanism ended 300 to 400 million years later. This period coincides with the Late Heavy Bombardment, a torrential rain of asteroids that pummeled the planets, suggesting the collisions may have triggered the widespread lava flows that marked Mercury's youth.

Source: ScienceNOW, the daily online news service of the journal Science.
Next week we will be focussing on subtraction. Have a look at these videos. They explain some subtraction methods which you may have used in school. What do you think? Do you have any questions? There are many ways of doing subtraction and we have to find those that work consistently for us. You need one "go to" method that you would use for most questions because it always works. There may then be other questions for which you use a different method because it is more efficient.

I'm sure that you have been studying your scripts over the past few days. Next week, we will be looking in depth at the four main characters of the play. What are Grandad, Grandma, Alice and Jack like? What have you noticed about them? What kind of people are they? How would you describe them?

Next week we begin an exciting week of drama and writing about Shakespeare's Hamlet, ending with us watching a live broadcast of the play. Over the weekend, familiarise yourself with the play and get an overview of what the play is about and which characters are in it.

We are going to begin looking at addition and subtraction next week in Maths. You need to complete at least 5 examples of addition and 5 examples of subtraction in your Learning Logs using whichever written method you feel most comfortable with. You should write each question before showing your workings. Choose numbers to add and subtract that you are comfortable using.

For those of you who are checking the blog over the holiday (as requested), you will have an opportunity to get ahead with spellings for next week. They are under Week 8 if you click on the 'Spellings' section of the black tab above.
Anyone who's made it through a basic English class can probably identify nouns, adjectives, and adverbs. These basic parts of speech form the backbone of sentences and add a little spice by allowing us to modify the other words we use. A car becomes a new little red car, and a simple shirt turns into her favorite shirt. When it comes to adjective, adverb, and noun clauses, though, many students can find themselves confused. So just what are these clauses, and how can you tell if you're using them correctly?

Before we take a closer look at these troublesome constructions, you need to make sure that you understand a few basic grammar terms. An adjective is a word that modifies a noun. In the phrase "new little red car," the words new, little, and red are all adjectives that specify what kind of car we're talking about. An adjective phrase is a group of words that together modify a noun. This phrase will include at least one adjective along with adverbs or prepositional phrases. In the sentence "The very quiet girl was afraid of snakes," the phrases very quiet and afraid of snakes are both adjective phrases that modify girl. An adverb is a word that modifies a verb or an adjective. In the phrase "The very big dog barked loudly," very (which modifies the adjective big) and loudly (which modifies the verb barked) are both adverbs. An adverb phrase is a group of words that together act as an adverb. In the sentence "She left the party quite suddenly," the phrase quite suddenly is acting as an adverb and modifies the verb left. A clause is a group of words that contains both a subject and a verb. If the clause can stand on its own as a sentence (She left the party early.), then it's called an independent clause. If the clause can't stand as its own sentence (If you leave now...), then it's called a dependent clause. A noun is a person, place, thing, or idea.

Now that we've gone over adjectives and clauses, it should be pretty easy to figure out what an adjective clause is. Like the name suggests, an adjective clause is basically a clause that acts like an adjective. These are always dependent clauses; that is, they can't stand on their own as sentences but are instead attached to independent clauses in order to modify nouns. In the sentence "The table that we bought last week is already broken," the clause that we bought last week is an adjective clause that modifies table. How can you tell if a clause is an adjective clause? It's actually pretty simple: once you have identified a dependent clause, try to identify the noun it's modifying. Adjective clauses will tell you one of several things about that noun:
- What kind?
- How many?
- Which one?

In the previous example ("The table that we bought last week is already broken"), the clause that we bought last week is answering the question which one? by telling us which table we're talking about. Here are a few more examples with the adjective clauses underlined and the modified noun in italics:

The student who gets the highest grade will receive a prize. (Which one?)
She gave her extra ticket to the girl whose ticket never arrived. (Which one?)
They drove by the house where my boss lives. (Which one?)
We need to find a car that gets better gas mileage. (What kind?)
This necklace, which is one of my favorites, will look great with that dress. (What kind?)
All the cookies that we have are stale. (How many?)

Adjective clause signifiers. You'll notice that all these adjective clauses start with the same few words.
These fall into one of two groups: relative pronouns (such as that, which, who, and whom) and relative adjectives (such as whose). Looking for these words in sentences can help you locate adjective clauses.

Punctuating adjective clauses

You may also have noticed that in some of the examples above the adjective clause is set off by commas. How can you tell if a clause needs to be punctuated or if it can be left alone? The key is to look at what role the clause plays in the sentence. If it's necessary (that is, if the sentence doesn't make sense without it), then you don't need to use commas. If we remove the adjective clause from the first example above, then we lose a necessary piece of information that changes the meaning of the sentence:

The student who gets the highest grade will receive a prize. -> The student will receive a prize.

On the other hand, when we remove the adjective clause here, the main idea of the sentence remains intact:

This necklace, which is one of my favorites, will look great with that dress. -> This necklace will look great with that dress.

When the adjective clause isn't necessary to the sentence, it should be set apart by commas. Generally, if the adjective clause is needed to clear up any ambiguity about which noun is being talked about (i.e., we need the clause in order to know which student will receive the prize), then it's essential. If we already know which specific noun we're talking about (i.e., this necklace), then the adjective clause is just adding more information and is not essential to the sentence. Often this distinction is unclear and you could make a case either way, so don't worry too much if you have trouble identifying essential and inessential clauses.

A close cousin of the adjective clause, the adverb clause functions in much the same way, except that adverb clauses modify verbs, adjectives, or other adverbs. In the sentence "I'll be working until we finish the project," the clause until we finish the project is an adverb clause that modifies the verb phrase be working. Like adjective clauses, adverb clauses can be identified because they answer several specific questions. Adverb clauses will tell you one of a few things about the verb of the main sentence:
- How?
- When?
- Where?
- Why?
- To what degree?

In the above example (I'll be working until we finish the project), the clause until we finish the project tells us when we'll be working. Here are a few more examples with the adverb clause underlined and the word being modified in italics:

My sister will come to the party even if she's tired. (How?)
I'll wash the dishes after I eat dinner. (When?)
She scrubbed the floor until it was spotless. (When?)
Because you got here late, you'll need to fill out these forms. (Why?)
Rather than buying a new car, she chose to have her old one fixed. (Why?)
Wherever you go, I'll find you. (Where?)
Alex will enjoy the movie more than his sister will. (To what degree?)

Adverb clause signifiers

Adverb clauses start with subordinating conjunctions, words that join an independent and a dependent clause while indicating which is the subordinate (or secondary) clause. Common examples, including those used above, are:
- because
- after
- until
- even if
- in order that
- so that
- rather than
- wherever
- than

Punctuating adverb clauses

Like adjective clauses, adverb clauses are sometimes set off by commas. However, in the case of adverb clauses, it's their placement in the sentence that determines how they're punctuated. Clauses that begin the sentence should be separated from the main clause with a comma, while those added at the end of the main clause do not need a comma:

Rather than buying a new car, she chose to have her old one fixed.
She chose to have her old car fixed rather than buying a new one.

Nominal or Noun Clauses

At this point you can probably guess that a noun clause is a clause that acts like a noun. Also called nominal clauses, these dependent clauses can function in a sentence just like any other noun, meaning they can be a subject, subject complement, direct object, indirect object, the object of a preposition, or an appositive. In the sentence "Why you ate all that cake is a mystery to me," the clause why you ate all that cake is acting as a noun and is the subject of the sentence. Because nominal clauses act like nouns, there's no set of particular questions they answer, since they're not modifying any other words in the sentence. Below are some examples with the nominal clauses underlined and the function of the noun in parentheses.

Where you want to go is up to you. (subject)
Whether you open the present now or later depends on when your parents get here. (subject)
Your art project can be whatever you want. (subject complement)
Give the ball to whoever asks for it first. (indirect object)
Hand whatever food you have over to the teacher. (direct object)

Nominal clause signifiers

Noun clauses start with interrogatives (question words such as who, what, where, why, and how) or expletives (introductory words such as that and whether).
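Since the identification trick above is essentially "scan for signal words," here is a toy Python sketch of that heuristic (the word lists are partial, the labels are only guesses, and real clause identification needs context that a simple lookup cannot capture):

ADJECTIVE_SIGNALS = {"that", "which", "who", "whom", "whose", "where", "when"}
ADVERB_SIGNALS = {"because", "since", "after", "until", "wherever", "although"}
NOUN_SIGNALS = {"whatever", "whoever", "whomever", "whether", "why", "how"}

def flag_clause_signals(sentence):
    # Report each word that commonly introduces a dependent clause.
    hits = []
    for word in sentence.lower().replace(",", "").rstrip(".!?").split():
        if word in ADJECTIVE_SIGNALS:
            hits.append((word, "possible adjective clause"))
        elif word in ADVERB_SIGNALS:
            hits.append((word, "possible adverb clause"))
        elif word in NOUN_SIGNALS:
            hits.append((word, "possible noun clause"))
    return hits

print(flag_clause_signals("The table that we bought last week is already broken."))
# [('that', 'possible adjective clause')]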
Critical Thinking Activities for Kids

Kids are open and willing to learn new fundamental skills as long as they are taught in a fun and entertaining manner. JumpStart's critical thinking activities are therefore a great way to engage students and encourage critical thinking and logical reasoning skills in them!

Importance of Critical Thinking Skills

Critical thinking enables kids to reason better. It helps them base conclusions on facts rather than emotions. From puzzles to activities that require analytical reasoning, there are a variety of ways to encourage kids to use and develop their problem-solving skills. Our critical thinking exercises for kids are fun and stimulate thought. They can serve as a valuable resource for homeschooling parents as well as teachers who are looking to engage the little ones with productive activities. You can also come up with simple activities of your own that can be used in class or even at home. Provoking kids to think out of the box and come up with solutions to challenging activities will always have long-term benefits.

Free Critical Thinking Activities

It is easy to find a variety of free critical thinking worksheets and activities online. Activities like these are sure to excite the little ones and teach them important reasoning and thinking skills at the same time!
Getting to Union: Navigating Differences in the Constitutional Convention

Part III. Creating Space to Address Issues Built Trust Among the Delegates. This Made it Possible to Work Through Big Problems Together.

Part III, Step 3. Three-Fifths Compromise

As the Convention debated representation in the national legislature, the delegates realized that if representation was based on population and enslaved people were fully counted in the population, it would greatly advantage the Southern states in Congress. Small states did not want this configuration. To solve this impasse, James Wilson, a delegate from Pennsylvania, introduced the three-fifths formulation for counting slaves in the population. This formula had already been accepted by the states under the Articles of Confederation for tax-raising purposes. Because the idea of representation was linked in the delegates' minds to the idea of consenting to taxation (remember "no taxation without representation!"), Wilson hoped that this formula for representation would prove acceptable to the delegates. To illustrate the arithmetic with hypothetical figures: a state with 400,000 free inhabitants and 300,000 enslaved people would be credited with 400,000 + (3/5 × 300,000) = 580,000 people for purposes of representation. On June 13, the delegates adopted a clause indicating only three-fifths of the enslaved population would be included in the population count. This secured the principle of basing representation in the national legislature on population.

Optional: Explore Further

Explore the records of June 11 in the Convention, when James Wilson introduced the three-fifths clause to the Committee of the Whole.

Prepare for Class Discussion

On your own paper, respond to the questions below.
- How did the process of agreeing to rules to explore issues make it possible to find a compromise for counting enslaved persons?
- Why was this compromise only possible later in the Convention?
The Pilgrims were a small group that left England in 1620 on the Mayflower to start a new life in the New World. They established the region's first permanent European settlement at Plymouth Harbor, although English fishermen were already familiar with some of New England's waters.

A major reason why many of the Pilgrims left their homeland was the freedom to practice their religion. England's king was also the head of the Church of England, and at the time of the Pilgrims he sought to force many of his religious ideas on the populace. Many Pilgrims had founded their own Christian group, known as the English Separatist Church, and came to the New World to practice their faith without monarchical interference.

Before building a community on land, men on the Mayflower produced the group's founding governing document, the Mayflower Compact. That first year was a serious time of trial, as a large portion of the community died during the winter. Eventual contact with nearby Native American tribes proved vital, as the natives showed the Pilgrims how to grow native crops and hunt the land. This led to a bountiful harvest in the fall of 1621 and the feast now known as Thanksgiving.
Geology of the Galapagos Islands

How did the Galapagos Islands get there? The Galapagos originated through the activity of a volcanic hot spot, which turned this place into one of the most amazing locations on Earth. The Galapagos Islands are quite young, around five million years old. The islands located to the west have the most volcanic activity and are also the youngest, at only a few hundred thousand years old. The islands follow a chain pattern in which the older islands are found in the east, while the younger islands are found in the west.

Hot spots are responsible for the formation of Hawaii, the Galapagos Islands, and other island chains. The Galapagos hot spot is currently located beneath the northwestern region of the archipelago, near Fernandina and Isabela Islands. Since the Galapagos Islands move with the Nazca Plate while the hot spot remains stationary, islands form over the hot spot and then slowly drift away from it, at about 5 cm per year, allowing more volcanoes and islands to be formed. The Nazca Plate carries the islands in an east-southeast direction, which is why the older islands are found in the southeast.

The hot spot theory is one way of explaining not only why change rules these oceanic islands, but also how wildlife has adapted to the different islands that make up this ever-changing land.
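As a rough consistency check on the numbers above (a back-of-the-envelope Python sketch, with the island age rounded purely for illustration), the drift rate multiplied by the age of the older islands should give roughly the spread of the archipelago:

# Hot-spot drift: distance traveled by an island since it formed.
rate_cm_per_year = 5
age_years = 4_000_000  # rough age assumed for the older southeastern islands

distance_km = rate_cm_per_year * age_years / 100 / 1000  # cm -> m -> km
print(distance_km)  # 200.0

At 5 cm per year, a few million years is enough to carry an island a couple of hundred kilometers away from the hot spot, consistent with the east-to-west age pattern described above.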
"The Raven" is probably Edgar Allan Poe’s most famous poem. In this poem, the narrator is a lonely, heartbroken man who is trying to recover from the recent death of Lenore, the woman he loved. As he is sitting in his study, reading, a mysterious raven suddenly appears at his window and then takes a spot directly on a sculptured bust of the goddess Pallas Athena, above the narrator’s door. The raven knows one word, "Nevermore". What is striking about the poem is how it changes in mood from the narrator’s mere curiosity about the strange bird at the beginning of the poem, to his desperate, utterly depressed and hopeless feeling of gloom at the end. To the narrator, the bird possesses the power to first amuse and tease him, and then mock and torment him with the news of a life of endless misery and pain. And all this is due to the raven’s repeating of the word, "Nevermore". The narrator attributes qualities of human thought and intentions to the raven (personification). What does the narrator believe the raven is saying or doing to him? How does the narrator feel about the bird's presence in his study? Think About It 1. Find an example of each of the following literary techniques in the poem, "The Raven": alliteration, personification, internal rhyme, and end rhyme. 2. Summarize in your own words what the poem, "The Raven" is about.
Children learn by doing. Whether playing indoors or outdoors, expressing creativity through art and music, or investigating a problem, children's hands-on learning experiences set the stage for new discoveries. Explore the resources on this page to find out how learning experiences that build on children's natural interests and curiosity about the world lead to their learning and development.

Creative Arts

The creative arts help children use their imagination to learn about the world around them. Activities that involve art, dramatic play, dance, and music help children learn across every developmental domain. These activities also foster self-esteem and confidence as children learn to express themselves and their ideas. In these resources, learn how to build the creative arts into your learning environment and how doing so will support children's development and learning.

Math and Science

Math and science learning happens naturally every day, as children explore, play, and try new things. When young children have the opportunity to investigate the world around them, they learn and experiment with new ideas, putting math and science skills into practice. Children observe, are curious, and investigate to find out more about their world. They gather information as they solve problems and use it to further their understanding of new concepts. Explore these resources to find ideas about how to encourage math and science learning in early childhood settings.
The human skeleton consists of both fused and individual bones supported and supplemented by ligaments, tendons, muscles and cartilage. It serves as a scaffold which supports organs, anchors muscles, and protects organs such as the brain, lungs and heart.

The number of bones in the human skeletal system is a surprisingly unsettled topic. Humans are born with about 300 to 350 bones; however, many bones fuse together between birth and maturity, so an average adult skeleton consists of about 206 bones. The count varies according to the method used to derive it: while some consider certain structures to be a single bone with multiple parts, others may count them as multiple bones. There are five general classifications of bones: long bones, short bones, flat bones, irregular bones, and sesamoid bones. The skeleton is a complex structure with two distinct divisions: the axial skeleton and the appendicular skeleton.

Skeletal System Functions

The skeletal system serves many important functions: it provides the shape and form for our bodies in addition to supporting, protecting, allowing bodily movement, producing blood for the body, and storing minerals. The skeletal system serves as a framework for tissues and organs to attach themselves to. It acts as a protective structure for vital organs; major examples of this are the brain being protected by the skull and the lungs being protected by the rib cage.

Located in long bones are two types of bone marrow, yellow and red. Yellow marrow consists of fatty connective tissue and is found in the marrow cavity; during starvation, the body uses the fat in yellow marrow for energy. The red marrow of some bones is an important site for blood cell production, making approximately 2.6 million red blood cells per second to replace existing cells that have been destroyed by the liver. Here all erythrocytes and platelets, and most leukocytes, form in adults. From the red marrow, erythrocytes, platelets, and leukocytes migrate to the blood to do their special tasks.

Another function of bones is the storage of certain minerals, calcium and phosphorus being the main ones stored. This storage helps regulate mineral balance in the bloodstream: when the level of these minerals in the blood is high, they are stored in bone; when it is low, they are withdrawn from the bone.
A gigantic hole has appeared in the Antarctic ice, and scientists are baffled as to why it has formed. The hole, comparable in size to Austria or Maine, appeared without warning and without any apparent reason.

Holes form in Antarctica all the time. They're called polynyas, and they usually form in the sea ice that lines the Antarctic coast. The circulation of warm water or ocean currents causes a hole to form, and the polynya usually disappears after a few months.

"This is hundreds of kilometres from the ice edge. If we didn't have a satellite, we wouldn't know it was there," said Kent Moore, an atmospheric physicist at the University of Toronto, adding that the polynya was 80,000 square kilometers (30,900 square miles) in area.

The same spot was the site of a polynya 40 years ago, according to Moore. However, that hole went largely unstudied due to the limitations of observational instruments in the 1970s. The current polynya opened back in September, Moore said. "In the depths of winter, for more than a month, we've had this area of open water. It's just remarkable that this polynya went away for 40 years and then came back."

The knee-jerk reaction for both scientists and the public is to blame the polynya on climate change, but Moore called that explanation "premature," since the hole has recurred since the 1970s, and possibly even before that. However, the polynya will affect oceans worldwide. "Once the sea ice melts back, you have this huge temperature contrast between the ocean and the atmosphere," he explained to Motherboard. "It can start driving convection," the process by which warm water rises to the surface of the ocean, "which can keep the polynya open once it starts."
Fusion energy is envisioned as a way to produce virtually unlimited power to supply the Earth's needs, but no one has yet devised a fusion process that gives out more energy than it takes in. Physicists at Lawrence Livermore National Laboratory in California said they succeeded in at least releasing more energy through a fusion reaction than is absorbed by the fuel that triggers the reaction. But that energy is still only about a hundredth of the total energy needed to set up the process in the first place, they said, most of which goes into compressing a fuel pellet where fusion takes place.

"The next necessary step would be to achieve a total gain, where energy entering the whole system is exceeded by the energy produced," the researchers said in a statement. Nonetheless, "we are closer than anyone has ever gotten" to obtaining fusion as a viable energy source, said Omar Hurricane, a researcher at the laboratory and one of the authors of the report.

The whole process took place in a space less wide than a human hair and in only the tiniest fraction of a second—150 picoseconds, to be exact. The team used inertial confinement fusion, which initiates nuclear fusion reactions by heating fuel pellets until they implode, compressing the fuel. The fuel consists of deuterium and tritium—isotopes, or variant forms, of hydrogen. When squeezed together, they merge, creating a helium nucleus and releasing energy along with a neutron, a subatomic particle. The confinement squeezes the atoms of fuel "to get them running toward each other at high velocity, which overcomes their mutual electrical repulsion," said Hurricane.

The scientists used 192 lasers to heat and compress a small pellet of fuel to the point where the fusion reactions take place. What made the process successful was that the scientists managed to initiate a process called "bootstrapping," a self-reinforcing cycle, Hurricane said. In this, "the alpha particles [helium nuclei] that come out of that reaction start leaving energy behind and causing the temperature to go up" within the tiny chamber. "When the temperature goes up, the reaction rate goes up, and when the reaction rate goes up, you make more alpha particles."

Source: http://www.world-science.net
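For a sense of the energy scale of a single deuterium-tritium reaction, here is a standard textbook calculation in Python (not taken from the Livermore paper) that converts the mass defect of the reaction into energy via E = mc²:

# Energy released by one D + T -> He-4 + n reaction, from the mass defect.
# Atomic masses in unified atomic mass units (u), standard reference values.
m_deuterium = 2.014102
m_tritium = 3.016049
m_helium4 = 4.002602
m_neutron = 1.008665

MEV_PER_U = 931.494  # energy equivalent of one mass unit, in MeV

mass_defect = (m_deuterium + m_tritium) - (m_helium4 + m_neutron)
print(round(mass_defect * MEV_PER_U, 1))  # about 17.6 MeV per reaction

Of that roughly 17.6 MeV, about 14.1 MeV is carried off by the neutron; the remaining 3.5 MeV goes to the helium nucleus, the alpha particle whose deposited energy drives the bootstrapping described above.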
Types of Assessment

Assessing in Different Ways

Assessment is a common practice in today's classrooms. It usually takes place in predictable ways in traditional formats. A wide variety of assessment options are available, however, to meet the instructional needs of teachers and the learning needs of students. Although tests and exams are not going to disappear from schools, student learning can be greatly enhanced when information from many kinds of assessment is used to inform instruction, provide feedback, and evaluate products and performances.

The kind of assessment that occurs before and during a unit of study is called formative assessment. Several strategies of formative assessment give students and teachers the kinds of information they need to improve learning:
- Strategies for gauging student needs, such as examining student work, analyzing graphic organizers, and brainstorming
- Strategies to encourage self-direction, such as self-assessment, peer feedback, and cooperative grouping
- Strategies for monitoring progress, such as informal observations, anecdotal notes, and learning logs
- Strategies to check for understanding, such as journals, interviews, and informal questioning

While formative assessments can give students and teachers information about how well students are doing while they are working on projects, at some point most teachers are required to report on student learning at the end of a particular unit or project. Students also want and need to know how well they have done. This kind of assessment, done after the fact, is called summative assessment. Summative assessments, like unit tests, can provide useful information if teachers and students take the time to examine them analytically. Teachers can find areas of weakness to address in more depth in future units and with future groups of students. Students can identify problem areas and set goals for future learning.
M. Gruntman, "The History of Spaceflight," in Space Mission Engineering: The New SMAD, eds. J.R. Wertz, D.F. Everett, and J.J. Puschell, pp. 4-10, Microcosm Press, Hawthorne, Calif., 2011.

Chapter 1.2. The History of Spaceflight

University of Southern California

The heavens have attracted the imagination of humans for millennia. Some even argue that ancient texts, including the Old Testament, described spaceships in the sky. Reaching the cosmos requires powerful rockets, so the first steps of humans toward spaceflight were in rocketry. For centuries the pursuit of spaceflight was an essentially international endeavor that attracted people from various lands who advanced the enabling science and technology.

Ancient Greeks observed the principle of jet propulsion more than 2,000 years ago. One thousand years later the first primitive rockets appeared in China and perhaps in India, later rediscovered in many other lands. A combination of charcoal, sulfur, and saltpeter—black powder—propelled the missiles. The natural abundance of saltpeter in China and India facilitated the emergence of the first war rockets in these countries. Rockets established a foothold in Europe some time in the 13th century. The word "rocket" likely originated from "rocchetta," a diminutive of the Italian word "rocca" for distaff, a staff for holding the bunch of flax or wool from which thread is drawn by spinning.

The early 19th century witnessed a major step in perfecting the rocket. A British inventor, William Congreve, turned ineffective and erratic missiles into a modern weapon system with standardized and interchangeable parts. These British war rockets, known as the Congreves (Fig. 1-3), debuted during the Napoleonic wars. Then brought across the Atlantic Ocean, the Congreves bombarded Fort McHenry near Baltimore in 1814. Francis Scott Key immortalized the deadly missiles in his famous line "...and the rockets' red glare..." in the American national anthem.

Fig. 1-3. Nineteenth-Century Rockets: Hale (front), Congreve (with the centrally mounted guiding stick), and skyrocket (back). [Scoffern, 1859; Gruntman, 2004]

War rocketry rapidly proliferated throughout Europe and reached North and South America and Asia. The young Chilean republic was among the first to employ domestically made rockets—in 1819—in the fight against its former colonial ruler, Spain. Many European countries—particularly Austria, France, and Russia—established large-scale manufacturing of war rockets. The Russian army even built, in 1834, an iron-clad submarine with a crew of 10 men that fired missiles from a submerged position.

The Mexican War, 1846-1848, advanced rocketry beyond occasional experimentation in the United States. In a short period of a few months, the Army and the Navy completed the purchase, evaluation, prototyping, and testing of a new type of spin-stabilized war rocket. (These rockets became known as the Hales, after their inventor William Hale.) The US Army formed its first missile unit, the Rocket and Mountain Howitzer Battery. The mass-produced new missiles quickly reached the rocket battery deployed in Mexico with the American expeditionary force. Thus the two military services succeeded in the 1840s in the joint procurement and fielding of a new, technologically advanced weapon system in less than one year. By the end of the 19th century, war rocketry had lost the competition to artillery with the introduction of rifled barrels, breech loading, and the Bessemer steel process.
At this time writers stepped in and replaced the men of the sword as keepers of the interest in rocketry and spaceflight. Nobody captured the public imagination with space adventures more than the French writer Jules Verne (see Fig. 1-4). His novels fired the imaginations of many young men who would, decades later, transform the dream of spaceflight into reality. Jules Verne's classic novel From the Earth to the Moon (first published in 1865) became a seminal work on spaceflight.

Fig. 1-4. Jules Verne's From the Earth to the Moon — the future express. "Yes, gentlemen," continued the orator, "in spite of the opinion of certain narrow-minded people, who would shut up the human race upon this globe... we shall one day travel to the Moon, the planets, and the stars..." [Horne, 1911; Gruntman, 2004]

Early science fiction writers sent their main characters on space voyages to satisfy their curiosity, as a bet, or to escape debts. Then an American author, Edward Everett Hale, published the novel The Brick Moon in 1870. The story described the launch of an artificial satellite into orbit along a meridian to help sailors at sea determine their longitude, in the same way the Moon aids in determining latitude. It was the first description of an application satellite.

The late 19th century brought the realization that until the rocket was perfected there would be no trips through outer space, no landing on the Moon, and no visits to other planets to meet possible inhabitants. A long period followed when isolated visionaries and thinkers, including amateurs, began practical work and sketched out the sinews of the spaceflight concept. Many "intellectuals" of the day and assorted "competent authorities" dismissed the idea of space travel as ridiculous.

A number of outstanding individuals at the end of the 19th century and the beginning of the 20th century laid the foundations of practical rocketry and spaceflight. Four visionaries in four countries, working under very different conditions, became the great pioneers of the space age: the Russian Konstantin E. Tsiolkovsky, the French Robert Esnault-Pelterie, the American Robert H. Goddard, and the German Hermann Oberth. They contributed in unique ways to advancing the concept of spaceflight.

The writings of Konstantin E. Tsiolkovsky (1857-1935) combined the development of scientific and technological ideas with a vision of space applications. While he never built rockets, Tsiolkovsky inspired a generation of Soviet rocket enthusiasts, including Sergei P. Korolev and Valentin P. Glushko, who achieved the first satellite.

An engineering graduate of the Sorbonne, Robert Esnault-Pelterie (1881-1957) first gained fame as an aviation pioneer who introduced, among other things, the enclosed fuselage, the aileron, the joystick for plane control, the four-bladed propeller, and the safety belt. His prestige brought much-needed credibility to the emerging space effort. It was Esnault-Pelterie who first published a spaceflight-related article in a mainstream archival physics journal, in 1913; he also introduced the word "astronautics" into the language of science.

With a Ph.D. in what we would today call solid-state physics, Robert H. Goddard (1882-1945) actually demonstrated the first liquid-propellant rocket engine, in 1926. Goddard achieved numerous other firsts in rocketry; one of his rockets reached a 9,000-ft (2,700-m) altitude in 1937.
Many results of Goddard's work remained largely unknown to contemporary scientists and engineers because of self-imposed secrecy, caused in part by ridicule from the ignorant and arrogant mainstream media.

Hermann Oberth (1894-1989) published a detailed design of a sophisticated rocket in his book The Rocket into Interplanetary Space [Oberth, 1923]. He introduced numerous ideas including staging, film cooling of engine walls, and pressurization of propellant tanks. Oberth played an important role in the early practical development of rocketry in Germany and provided inspiration for a generation of European space enthusiasts.

1.2.3 Building the Foundation

Powerful rockets belonged to a category of inherently complex advanced technologies in which a lone creative and gifted inventor could not succeed. Only the concerted efforts of numerous well-organized professional scientists and engineers, supported by significant resources, could lead to practical systems. The totalitarian states were the first to marshal the necessary resources and organize large-scale development of ballistic missiles.

In the Soviet Union, the military-sponsored Jet Propulsion Scientific Research Institute (RNII) employed 400 engineers and technicians in a sprawling complex in Moscow in the early 1930s. Later in the decade the Soviet program suffered from political purges; it resumed its growth after 1944.

The German Army stepped up its rocket effort in 1932 by establishing a dedicated group that included Wernher von Braun. The German program grew immensely and by 1942 produced the first truly modern ballistic missile, the A-4, better known as the V-2. The fueled A-4 weighed more than 12.5 metric tons and delivered a 1,000-kg warhead to distances up to 300 km. The German accomplishments also included mass production of the missiles. In a short period, under the tremendous difficulties of wartime, industry built 5,800 A-4's, with 3,000 fired operationally against England and liberated parts of Europe. The rocket manufacturing widely used slave labor from concentration camps and was accompanied by atrocities, especially during the construction of the underground facilities.

In the United States during WWII, rocketry concentrated on jet-assisted takeoff (JATO) of airplanes and on barrage solid-propellant missiles. The first American private rocket enterprises, Reaction Motors and Aerojet Engineering Corp., were formed in December 1941 and March 1942, respectively. After the war, several centers of rocketry emerged in industry and government under the sponsorship of the Army, Navy, and Air Force.

The US Army brought a number of captured German V-2 missiles to the United States. Military personnel and industrial contractors launched more than 60 V-2's from the White Sands Missile Range in New Mexico by 1951. Many missiles carried science payloads studying the upper atmosphere, ionosphere, solar radiation, and cosmic rays. These first rocket experiments gave birth to a vibrant experimental space science; subsequently, many government and university scientists became energetic advocates of space exploration.

The US Army followed its century-long tradition of the arsenal system with significant in-house engineering capabilities. By the early 1950s, it had concentrated the development of ballistic missiles and emerging space activities at the Redstone Arsenal in Huntsville, AL. The California Institute of Technology (Caltech) managed another important Army rocket center, the Jet Propulsion Laboratory (JPL), in Pasadena, CA.
The JPL grew out of pioneering research and development programs of Theodore von Karman's group at Caltech. The Redstone Arsenal became home to more than 100 "imported" German rocketeers, headed by Wernher von Braun. The Germans had come to work in the United States under contracts through Operation Paperclip. While von Braun's rocketeers got the most publicity, the Paperclip program brought to the United States in total more than 600 German specialists in various areas of science and technology. In contrast to von Braun's compact group, the other scientists and engineers were dispersed among various American industrial and research organizations.

The Army, the Air Force, and the Navy were carrying out essentially independent development programs in guided missiles, with some overlap, occasional cooperation, and determined rivalry. In 1956, Secretary of Defense Charles E. Wilson attempted to resolve the problem of duplication by defining the "roles and missions" of the services. Consequently, the Air Force asserted control over intercontinental warfare, with the Army's role reduced to shorter-range missiles. The fateful roles-and-missions decision did not stop a most active leader of the Army's missile program, General John B. Medaris, and von Braun from finding ways to advance their visionary space agenda. In addition to such Army achievements as the development of the operationally deployed ballistic missiles Redstone and Jupiter in the 1950s, they would succeed in launching the first American artificial satellite, Explorer I, into space. Only by the end of the 1950s did the Army finally lose its programs in long-range ballistic missiles and space, when the newly formed civilian space agency, the National Aeronautics and Space Administration (NASA), took over and absorbed the JPL and von Braun's team at Redstone.

In contrast to the Army, the Navy and especially the new service, the Air Force (formed in 1947), relied primarily on contractors from the aircraft industry in their ballistic missile programs. In the late 1940s and early 1950s, the Naval Research Laboratory (NRL) with the Glenn L. Martin Co. developed the Viking sounding rocket as a replacement for the dwindling supply of captured V-2's. This program laid the foundation for Martin's future contributions to ballistic missiles, which would include the Titan family of Intercontinental Ballistic Missiles (ICBMs) and space launchers.

In 1946, the Air Force initiated development of a new test missile, the MX-774. The Convair (Consolidated Vultee Aircraft Corp.) team led by Karel J. (Charlie) Bossart introduced many innovations in the MX-774 missiles, which reached an altitude of 30 miles. Based on this early experience, Convair later developed the first American ICBM, the Atlas. The Atlas program, including missile deployment, became a truly national effort that dwarfed the Manhattan Project of World War II. Other major ballistic missile programs initiated in the 1950s included the ICBMs Titan and Minuteman and the Intermediate Range Ballistic Missile (IRBM) Thor. The Glenn L. Martin Company, Boeing Company, and Douglas Aircraft Company, respectively, led the development of these missiles as prime contractors. Aerojet and the Rocketdyne Division of North American Aviation emerged as the leading developers of liquid-propellant rocket engines. The Navy selected the Lockheed Aircraft Corporation as the prime contractor for its submarine-launched solid-propellant IRBM Polaris.

The Soviet government made rocket development a top national priority in 1946.
Soviet rocketeers first reproduced the German V-2 and then proceeded to build larger and more capable ballistic missiles. The Soviet rocket pioneers of the early 1930s, Korolev and Glushko, emerged, respectively, as the chief designer of ballistic missile systems and the main developer of the enabling liquid-propellant engines. Both the Soviet Union and the United States pursued development of ICBMs, the R-7 and the Atlas. These large ballistic missiles called for new testing sites — the existing American White Sands and the Soviet Kapustin Yar did not meet the requirements of safety and security. Consequently, the United States established a new missile test range at Cape Canaveral in Florida in 1949 and later another site at Vandenberg Air Force Base in California in 1958. Cape Canaveral would subsequently support space launches into low-inclination orbits, while Vandenberg would send satellites into polar orbits, especially important for reconnaissance payloads. The Soviet Union initiated the construction of a new missile test site at Tyuratam (now commonly known as Baikonur) in Kazakhstan in 1955 and of another site later at Plesetsk.

1.2.4 The Breakthrough to Space

In the 1950s, spaceflight advocates scattered among various parts of the US government, industry, and academia pressed for an American satellite. National security policies would shape the path to space. The rapidly progressing development of long-range ballistic missiles and nuclear weapons threatened devastating consequences should the Cold War turn into a full-scale military conflict. The new technologies allowed no time for preparation for hostilities and mobilization and made an intelligence failure such as Pearl Harbor absolutely unacceptable. Therefore, monitoring the military developments of the adversary, with accurate knowledge of its offensive potential and deployment of forces, became a key to national survival and, by avoiding a fatal miscalculation, reduced the risk of war.

Obtaining accurate information about the closed societies of the communist world presented a major challenge. The perceived "bomber gap" and later the "missile gap" clearly demonstrated the importance of such information for national policy. Consequently, President Dwight D. Eisenhower authorized the development of overhead reconnaissance programs to be conducted in peacetime. The U-2 aircraft first overflew the Soviet Union in 1956, resolving the uncertainties of the bomber gap. Reconnaissance from space became a top priority for President Eisenhower, who considered the rare and sporadic U-2 overflights only a temporary measure because of improving Soviet air defenses. In 1956, the Air Force selected Lockheed's Missile Systems Division to build reconnaissance satellites.

The international legality and acceptability of overflights of other countries by Earth-circling satellites — freedom of space — was uncertain in the 1950s. The Eisenhower administration considered testing the principle of freedom of space by launching a purely scientific satellite critically important for establishing a precedent enabling future space reconnaissance. This was the time when scientists in many countries were preparing for the International Geophysical Year (IGY), to be conducted from July 1957 to December 1958. They planned comprehensive worldwide measurements of the upper atmosphere, ionosphere, geomagnetic field, cosmic rays, and auroras. Space advocates emphasized that artificial satellites could greatly advance such studies.
Consequently, both the United States and the Soviet Union announced plans to place artificial satellites into orbit for scientific purposes during the IGY. Both countries succeeded.

President Eisenhower insisted on a clear decoupling of American scientific satellites from military applications in order to first assert freedom of space. This national security imperative determined the publicly visible path to the satellite. In 1955, the US government selected the NRL proposal to develop a new space launch vehicle and a scientific satellite, both known as the Vanguard. The new system was chosen over the more mature technology of Project Orbiter, advocated by the Army's Medaris and von Braun. The Army had proposed to use the Jupiter C, an augmented Redstone ballistic missile. In fact, a test launch of the Jupiter C on September 20, 1956, could have put a simple satellite into orbit had the Army been permitted to use a live solid-propellant fourth stage — as it would later do in launching Explorer I — instead of an inactive one.

John P. Hagen led the Vanguard program, with the Glenn L. Martin Co. as the prime contractor for the launch vehicle and with NRL providing technical direction. The Vanguard program also built scientific satellites and established a process of calling for proposals and selecting space science experiments. In addition, it deployed a network of Minitrack ground stations to detect and communicate with the satellites, which laid the foundation for the future NASA Spaceflight Tracking and Data Network (STDN). Many optical stations around the world would also observe the satellites with the specially designed Baker-Nunn telescope tracking cameras.

The Soviet Union focused its resources on demonstrating the first ICBM. After the R-7 had successfully flown the full range, Korolev launched the world's first artificial satellite, Sputnik, into orbit on October 4, 1957. Ironically, this Soviet success finally resolved the lingering issue of space overflight rights that so concerned President Eisenhower: no country protested the overflight by the Soviet satellite, thus establishing the principle of freedom of space (see Fig. 1-5).

Fig. 1-5. Comparative Sizes and Masses of the Earth Satellites Sputnik 1, Explorer I, and Vanguard I. [Gruntman, 2004]

The second, much larger Soviet satellite, with the dog Laika aboard, successfully reached orbit on November 3, 1957. The Vanguard program had been steadily progressing but was not yet ready for launch. On November 8, the Secretary of Defense gave permission to the eager Army team led by Medaris and von Braun to also attempt launching satellites. On January 31, 1958, the Army's modified Jupiter C missile successfully placed the first American satellite, Explorer I, into orbit. Subsequently the Vanguard launch vehicle deployed the Vanguard I satellite into orbit on March 17, 1958.

Popular sentiment in the United States has sometimes blamed the Vanguard program for losing the competition to the Soviet Union. This is grossly unfair. The Vanguard program demonstrated a record-fast development of a new space launcher, with only 30 months from vehicle authorization in August 1955 to the first successful launch in March 1958. The Vanguard spacecraft remains today the oldest man-made object in orbit, and it will reenter the atmosphere in a couple hundred years. We have time to find funding to bring the satellite back to the planet Earth for a place of honor in a museum.
There was no technological gap between the Soviet Union and the United States at the beginning of the space age. Being first to launch a satellite was a matter of focus and national commitment. Fourteen months after the launch of Sputnik, the United States had placed spacecraft into orbit with 3 entirely different launchers developed by 3 different teams of government agencies and industrial contractors. (The Air Force's Atlas deployed the first communications satellite, SCORE, in December 1958.)

The last years of the Eisenhower administration shaped the structure of the American space program. The president established a new Advanced Research Projects Agency (ARPA, the predecessor of DARPA) to fund and direct the growing national space effort. The security-conscious president resisted the expansion of government programs but always supported the advancement of spaceflight in the interests of national security. Bending to powerful political forces, Eisenhower reluctantly agreed to establish a new government agency responsible for the civilian effort in space. The president signed the National Aeronautics and Space Act into law, which formed NASA on October 1, 1958. Within a short period of time, NASA subsumed the National Advisory Committee for Aeronautics (NACA), the Army's Jet Propulsion Laboratory and major elements of the ballistic missile program in Huntsville, and NRL's Vanguard group. NASA vigorously embarked on the scientific exploration of space, launching increasingly capable spacecraft to study the space environment and the Sun and creating space astronomy. Missions to fly by the Moon and, later, the nearby planets followed. These first space missions began a new era of discovery that laid the foundation for the flourishing American space science and planetary exploration of today. At the same time, NASA embarked on preparations for human spaceflight.

Rocketry Industry "Namescape" (text box)

Mergers and acquisitions have significantly changed the "namescape" of the rocket industry. Titan's prime contractor, the Martin Company, merged with Marietta in 1961, forming Martin Marietta. Convair became the Space Systems Division of General Dynamics in 1954, known as General Dynamics—Astronautics. Martin Marietta acquired General Dynamics' Space Systems Division in 1995 and then merged in the same year with Lockheed, forming the Lockheed Martin Corporation. Thus both the Atlas and the Titan families of space launchers ended up under the same corporate roof. Another important component of Lockheed Martin's rocket assets is the submarine-launched solid-propellant Trident missiles. Boeing added the Delta family of space launchers to its Minuteman missiles after acquiring McDonnell Douglas in 1997. [Gruntman, 2004, p. 253]

At the same time, the military space program focused on communications, early warning, command and control, and support of military operations. The Air Force led this effort, with the Navy engaged in selected important programs, such as space-based navigation. The Army retained responsibility for major elements of missile defense. Another national security program dealt with space reconnaissance and was directed jointly by the intelligence community and the military. In 1960, President Eisenhower established a special office in the Department of Defense (DoD), staffed by military officers and government civilians, to direct space reconnaissance, separated from military procurement and hidden by an extra protective layer of secrecy.
This organization would become the National Reconnaissance Office (NRO), overseen by the Air Force and the CIA. The image intelligence satellite Corona achieved the first successful overflight of the Soviet Union in August 1960, returning images that effectively resolved the uncertainties of the perceived missile gap.

President Eisenhower handed over to his successor in the White House a structure of the national space program that has essentially survived in its main features until the present day. NASA leads the civilian space effort. National security space consists of two main components: the services are responsible for military space, while the intelligence community and the military jointly direct the gathering and processing of intelligence information from space. While these 3 programs are sometimes viewed as separate, they all originated from the early military space effort and they all have interacted to varying degrees over the years.

The heating-up competition in space with the Soviet Union erupted into public focus when the first man, Soviet cosmonaut Yuri Gagarin, orbited the Earth on April 12, 1961. President Kennedy responded by challenging the nation to commit itself to "landing a man on the Moon and returning him safely to the Earth." The resulting Apollo program culminated with astronauts Neil Armstrong and Edwin (Buzz) Aldrin making man's first steps on the Moon in July 1969.

The late 1950s and early 1960s witnessed emerging commercial applications in space. The first transatlantic telephone cable had connected Europe and North America in 1956 to meet the increasing demand for communications. Space offered a cost-competitive alternative, and industrial companies showed much interest and enthusiasm for it, especially AT&T, RCA, General Electric, and Hughes Aircraft. The DoD supported the development of space communications on the government side. It was not clear at the time whether satellites in low, medium, or geostationary orbits would offer the best solution. While geostationary satellites provided excellent coverage, the technical challenges of building, deploying, and controlling such satellites had not yet been met. Initially, the industry invested significant resources in the development of space communications. The situation changed drastically when President Kennedy signed the Communications Satellite Act in 1962. Now the government, including NASA, became a major player in commercial space communications, with the authority to regulate and, to a significant extent, dictate its development. Consequently, the Communications Satellite (Comsat) Corporation was formed in 1963 to manage the procurement of satellites for the international communications consortium Intelsat. The Hughes Aircraft Company demonstrated a practical geostationary communications satellite with the launches of 3 test spin-stabilized Syncom satellites in 1963–1964. As the technology progressed, several companies introduced 3-axis-stabilized geostationary satellites. Since the beginning of the space age, satellite communications have dominated commercial space, with most activity today concentrated in direct-to-home TV broadcasting and fixed satellite services. Figure 1-6 demonstrates the astounding increase in the capabilities of geostationary communications satellites with the example of one family of satellites built by Hughes, now part of the Boeing Company.

Fig. 1-6. Spectacular Growth of Communication Satellite Capabilities. Example of satellites developed by Hughes/Boeing [Gruntman, 2008].
Military and reconnaissance satellites provided critically important capabilities essential for national survival. NASA missions, especially manned missions, were highly visible and reflected on the nation's international prestige, so important in the Cold War battles. As a result, National Security Space (NSS) and NASA missions had one feature in common: failure was not an option, which inevitably led to a culture of building highly reliable systems. Space missions were thus performance driven, with cost being of secondary importance. The consequent high cost of the space undertaking led, in turn, to increased government oversight, which drove schedules and costs further up. Government-regulated commercial space, dominated by the same industrial contractors, could not develop a different culture.

After landing twelve astronauts on the Moon, NASA brought us spectacular achievements in space science and in the exploration of the Solar system. Numerous space missions advanced our understanding of the Sun's activity and the near-Earth environment. NASA spacecraft have visited all planets of the Solar system with the exception of Pluto (Ed.: Pluto is now officially a dwarf planet.); the New Horizons mission is presently en route to the latter. The Soviet Union established a permanent space station, Mir, in low-Earth orbit. American human space flight concentrated on the development of the Space Shuttle and the International Space Station (ISS). The Space Shuttle carried astronauts to low-Earth orbit from 1981 to 2011. The ISS, with a mass of about 400 metric tons, has the opportunity to demonstrate what humans can do in space.

Today, space affects government, business, and culture. Many countries project military power, commercial interests, and national image through space missions. It is a truly high-technology frontier, expensive and government-controlled or government-regulated. Space has become an integral part of people's everyday lives. We are accustomed to weather forecasts based on space-based sensors. Satellites deliver TV broadcasts to individual homes. The Global Positioning System (GPS) reaches hundreds of millions of users worldwide, guiding drivers on the road, aircraft in the air, and hikers in the mountains.

After the end of the Cold War, the transformation of space from a primarily strategic asset into increasingly integrated tactical applications supporting the warfighter accelerated. NSS provides critically important capabilities in command and control, communications, reconnaissance, monitoring of international treaties, and guiding precision munitions to targets. Missile defense relies heavily on space sensors and communications for early warning and intercept guidance. NSS annually spends twice as much as NASA.

The space enterprise has become a truly international endeavor. Seven countries have joined the Soviet Union and the United States in the elite club of nations that launched their own satellites on their own space launchers: France (1965), Japan (1970), the People's Republic of China (1970), the United Kingdom (1971), India (1980), Israel (1988), and Iran (2009). The European countries have combined their efforts and today launch their satellites through the European Space Agency (ESA). Canada also conducts an active space program.
Brazil has an active space program, and it is only a question of time until it successfully launches its own satellite. South Korea also pursues the development of space launch capabilities, with Russia initially providing important parts of the launch technology. Secretive North Korea has tried to launch a satellite. In addition, numerous other countries have bought and operate various satellite systems.

MG: Since the publication of this book in 2011, North Korea (Democratic People's Republic of Korea — DPRK) and South Korea (Republic of Korea — ROK) launched their satellites in December 2012 and January 2013, respectively.

Very few countries presently match the American commitment to space exploration and space applications. "Only France (and the old Soviet Union in the past) approaches the US space expenditures in terms of the fraction of the gross domestic product (GDP). Most other industrialized countries (Europe and Japan) spend in space, as a fraction of GDP, 4 to 6 times less than the United States." [Gruntman, 2004, p. 462] The People's Republic of China and India are expanding their space programs. Highly space-capable Russia is also increasing its space activities after the decline of the 1990s.

For many years, the United States has led the world in space. The health and future of the American space enterprise depend on the national commitment — there is no limit to what we can do. President Kennedy observed that "for while we cannot guarantee that we shall one day be first [in space], we can guarantee that any failure to make this effort [in space] will make us last." [Gruntman, 2004, p. 383]
For the first time, one of the most extreme collisions in the cosmos has been observed. Galaxies are known to hide supermassive black holes in their cores, and should two galaxies collide, tidal forces will cause massive disruption to the stars orbiting the galactic cores. If the cores are massive enough, the two supermassive black holes may become gravitationally bound to each other. Do the black holes merge to form an even more massive black hole? Do the two supermassive black holes spin, recoil and then blast away from each other? Well, it would seem both are possible, but astronomers now have observational evidence of a black hole being blasted away from its parent galaxy after colliding with a larger cousin.

Most galaxies in the observable universe contain supermassive black holes in their cores. We know they are hiding inside galactic nuclei because they exert a huge gravitational dominance over that region of space, pulling in stars that orbit too close. Recent observations of galactic cores show stars rapidly orbiting something invisible. From the stars' orbital velocities, it has been deduced that the invisible body they orbit is something very massive: a supermassive black hole of hundreds of millions of solar masses. Supermassive black holes are also the source of the bright quasars seen in active, young galaxies.

Now, the same research group that made the astounding discovery of the structure of a black hole's molecular torus by analysing the emission of echoed light from an X-ray flare (originating from stellar matter falling into the supermassive black hole's accretion disk) has observed one of these supermassive black holes being kicked out of its parent galaxy. What caused this incredible event? A collision with another, bigger supermassive black hole.

Stefanie Komossa and her team from the Max Planck Institute for extraterrestrial Physics (MPE) made the discovery. This work, to be published in Astrophysical Journal Letters on May 10th, verifies something that had only been modelled in computer simulations. Models predict that as two fast-rotating black holes begin to merge, gravitational radiation is emitted through the colliding galaxies. As the waves are emitted mainly in one direction, the black holes are thought to recoil – much like the kick that accompanies firing a rifle. The situation can also be thought of as two spinning tops, getting closer and closer until they meet. Due to their high angular momentum, the tops experience a "kick", very quickly ejecting them in opposite directions. This is essentially what two merging supermassive black holes are thought to do, and now this recoil has been observed.

What's more, the ejected black hole's velocity has been measured by analysing the broad spectroscopic emission lines of the hot gas surrounding the black hole (its accretion disk). The ejected black hole is travelling at a velocity of 2650 km/s (1647 mi/s). The accretion disk will continue to feed the recoiled black hole for many millions of years on its journey through space alone. Supporting the evidence that this is indeed a recoiling supermassive black hole, Komossa analysed the parent galaxy and found hot gas emitting X-rays from the location where the black hole collision took place. Now Komossa and her team hope to answer the questions this discovery has created: Did galaxies and black holes form and evolve jointly in the early Universe? Or was there a population of galaxies that had been deprived of their central black holes?
And if so, how was the evolution of these galaxies different from that of galaxies that retained their black holes? It is hoped that the combined efforts of observatories on Earth and in space can be used to find more of these "superkicks" and begin to answer these questions. The detection of gravitational waves will also help, as collision events like this are predicted to wash the Universe in powerful gravitational waves. Source: MPE News
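The mass estimate quoted above comes from basic orbital dynamics: for a star on a roughly circular orbit of radius r and speed v around a central body, equating gravitational and centripetal acceleration gives an enclosed mass of M = v²r/G. Here is a minimal sketch of that arithmetic; the orbital radius and velocity used are made-up but representative numbers for illustration, not values from the study:

```python
# Estimate the mass of an unseen central body from a star's orbital motion.
# Assumes a circular orbit: G*M/r^2 = v^2/r  =>  M = v^2 * r / G.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30     # solar mass, kg
PARSEC = 3.086e16    # metres per parsec

def central_mass(v_km_s: float, r_pc: float) -> float:
    """Return the enclosed central mass (in solar masses) implied by a star
    orbiting at v_km_s (km/s) at radius r_pc (parsecs)."""
    v = v_km_s * 1e3      # km/s -> m/s
    r = r_pc * PARSEC     # pc -> m
    return v**2 * r / G / M_SUN

# Illustrative numbers: a star orbiting at 1000 km/s at 1 parsec implies
# a central mass of roughly 2e8 solar masses.
print(f"{central_mass(1000.0, 1.0):.2e} solar masses")
```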
An example of an atom is a single particle of any element on the periodic table. Carbon, for instance, consists solely of identical carbon atoms. Atoms are the smallest distinct unit of matter, and they cannot be broken down by chemical means. An atom consists of electrons that orbit a nucleus, which contains protons and neutrons. Atoms can be neutral or carry a net charge; charged atoms are called ions and can be either negatively or positively charged. Cations are positively charged ions, while anions are negatively charged. Compounds are made up of different types of atoms depending on their chemical formulas. For example, sodium chloride, or NaCl, is made up of sodium and chlorine atoms.
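A toy sketch of the ion rule just described: the net charge of an atom is simply the proton count minus the electron count, and the sign of that difference determines whether it is a cation, an anion, or neutral. The element examples in the comments are only illustrations:

```python
def classify(protons: int, electrons: int) -> str:
    """Classify an atom as a cation, anion, or neutral atom
    from its net charge (protons minus electrons)."""
    charge = protons - electrons
    if charge > 0:
        return f"cation (charge +{charge})"
    if charge < 0:
        return f"anion (charge {charge})"
    return "neutral atom"

print(classify(11, 10))  # sodium that lost one electron -> cation (charge +1)
print(classify(17, 18))  # chlorine that gained one electron -> anion (charge -1)
print(classify(6, 6))    # carbon -> neutral atom
```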
Diffusion is the name for the way molecules move from areas of high concentration, where there are lots of other similar molecules, to areas of low concentration, where there are fewer similar molecules. When the molecules are evenly spread throughout the space, it is called equilibrium.

Imagine half a box filled with yellow balls and the other half filled with blue ones. If you set the box on something that vibrates, the balls will start to move around randomly until the blue and yellow balls are evenly mixed up.

Think about the way pollutants move from one place to another through air, water and even soil. Or consider how bacteria are able to take up the substances they need to thrive. Your body has to transfer oxygen, carbon dioxide and water by processes involving diffusion as well.

Lots of things can affect how fast molecules diffuse, including temperature. When molecules are heated up, they vibrate faster and move around faster, which helps them achieve equilibrium more quickly than they would if it were cold. Diffusion takes place in gases (like air), liquids (like food coloring moving through water) and even solids (semiconductors for computers are made by diffusing elements into one another).

You can watch food coloring diffuse through a colloid (gelatin) at home and measure how long it takes. Gelatin is a good substance to use for diffusion experiments since it doesn't support convection, which is another kind of movement in fluids. You'll need clear gelatin (from the grocery store or Target), food coloring and water. Add 4 packs of plain, unflavored gelatin (1 oz or 28 g) to 4 cups of boiling water. Pour the liquid gelatin into petri dishes, cups, or tupperware and let it harden. Then, using a straw, poke a hole or two in the gelatin, removing the plug so that you have a hole in the jello about 1/2 inch deep. Add a drop of food coloring to the hole in the jello. Every so often, measure the circle of food coloring as it diffuses into the jello around it. How many cm per hour is it diffusing? If you put one plate in the refrigerator and an identical one at room temperature, do they diffuse at the same rate? Why do you think you see more than one color for certain shades of food coloring? What else could you try?

Here's a post on how to use this experiment to make sticky window decorations: http://kitchenpantryscientist.com/?p=4489

We made plates and did the same experiment using 2 cups of red cabbage juice, 2 cups of water and 4 packs of gelatin to see how fast a few drops of vinegar or baking soda solution would diffuse (a pigment in red cabbage turns pink when exposed to acid, and blue/green when exposed to a base!).

It's also fun to experiment with the movement of substances across a membrane, like a paper towel; the related process by which water diffuses across a selectively permeable membrane is called osmosis. Membranes like the ones around your cells are selectively permeable and let water and oxygen in and out, but keep other, larger molecules from freely entering and exiting your cells. For this experiment, you'll need a jar (or two), paper towels, rubber bands and food coloring. Fill a jar with water and secure a paper towel in the jar's mouth (with a rubber band) so that it hangs down into the water, making a water-filled chamber that you can add food coloring to. Put a few drops of food coloring into the chamber and see what happens. Are the food coloring molecules small enough to pass through the paper towel "membrane"? What happens if you put something bigger, like popcorn kernels, in the chamber?
Can they pass through the small pores in the paper towel? Do the same experiment in side-by-side jars, but fill one with ice water and the other with hot water. Does this affect the rate of osmosis, or how fast the food coloring molecules diffuse throughout the water? Think about helium balloons: if you take identical balloons and fill one with helium and the other with air, the helium balloon will shrink much faster, as the smaller helium atoms diffuse out more quickly than the larger nitrogen and oxygen molecules in air.
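The balloon comparison follows Graham's law, which says the rate at which a gas escapes through tiny pores scales inversely with the square root of its molar mass. A minimal sketch of that calculation, using standard molar masses; everything else is just arithmetic:

```python
import math

def relative_escape_rate(molar_mass_a: float, molar_mass_b: float) -> float:
    """Graham's law: rate_A / rate_B = sqrt(M_B / M_A)."""
    return math.sqrt(molar_mass_b / molar_mass_a)

M_HELIUM = 4.00   # g/mol
M_AIR = 28.97     # g/mol, average for dry air

# Helium escapes roughly 2.7x faster than air, which is why
# the helium balloon goes limp first.
print(f"{relative_escape_rate(M_HELIUM, M_AIR):.2f}x faster")
```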
Passover, in Judaism, one of the most important and elaborate of religious festivals. Its celebration begins on the evening of the 14th of Nisan (first month of the religious calendar, corresponding to March–April) and lasts seven days in Israel, eight days in the Diaspora (although Reform Jews observe a seven-day period). Numerous theories have been advanced in explanation of its original significance, which has become obscured by the association it later acquired with the Exodus. In pre-Mosaic times it may have been a spring festival only, but in its present observance as a celebration of deliverance from the yoke of Egypt, that significance has been practically forgotten. In the ceremonial evening meal (called the Seder), which is conducted on the first evening in Israel and by Reform Jews, and on the first and second evenings by all other observant Jews in the Diaspora, various special dishes symbolizing the hardships of the Israelites during their bondage in Egypt are served; the narrative of the Exodus, the Haggadah, is recited; and praise is given for the deliverance. Only unleavened bread (matzoth) may be eaten throughout the period of the festival, in memory of the fact that the Jews, hastening from Egypt, had no time to leaven their bread. Jewish law also requires that special sets of cooking utensils and dishes, uncontaminated by use during the rest of the year, be used throughout the festival. In ancient Israel the paschal lamb (see Agnus Dei) was slaughtered on the eve of Passover, a practice retained today by the Samaritans. See T. H. Gaster, Passover: Its History and Traditions (1949, repr. 1962); P. Goodman, ed., The Passover Anthology (1961).
Common walnut is often grown in the United Kingdom as an ornamental specimen tree, although walnut tree roots release an enzyme inhibitor that suppresses the growth of other plant species. The species is widely cultivated in the temperate regions of the world for its edible crop and exceptionally hard timber.

Common walnut originates from Central Asia, where the walnut forests of Tien Shan in Kyrgyzstan are considered a globally important biodiversity hotspot. There are approximately 47,000 hectares of walnut forest in Kyrgyzstan, although large areas are under threat. Walnut trees are very sensitive to climate change, despite surviving annual temperature ranges of between -24 and +36 degrees Celsius. Erratic weather conditions, particularly late spring frosts, damage shoots and flowers and can produce serious walnut crop failures. Drought and forest fires are also significant threats, both of which are exacerbated by climate change.

Following independence from the Soviet Union in 1991, Kyrgyzstan's economy crashed and the state-owned walnut forests became vital sources of income for local people, providing a valuable food crop as well as fuel, timber and land for farming. As a result, human pressure on the forests increased; few walnuts were left to germinate and young trees were damaged by grazing animals. Forest management became unsustainable. Over recent years, conservationists and local people have been involved in decision making, and long-term lease agreements have been developed. As local people have gained a greater stake in the future of these forests and their productivity, the quality of the management and the sustainability of these forest resources have increased.

Walnuts are generally associated with positive health benefits. In parts of southern France it was noted that heart disease was rare despite a diet rich in saturated fat. This was initially linked to the consumption of red wine. However, detailed investigation revealed that the low levels of cholesterol were probably due to eating daily green salads dressed with chopped walnuts and walnut oil. Walnut oil is also rich in vitamin E, omega 3 and cancer-preventing antioxidants.

As walnut trees have been part of human cultures for millennia, it is unsurprising that they are associated with many traditional beliefs. For example, the fruit is a symbol of fertility and prosperity. Through the Doctrine of Signatures, the brain-shaped seed inside a hard shell led to the belief that walnuts were good for the brain. Yet resting beneath the tree was thought to 'dull the brain' and lead to illness.
Aerodynamics, the science of flight, is highly complex, because many forces act on anything in flight. These forces include the power available for flight and the drag produced by the flying object. Each of these categories includes many additional forces that depend on the shape of the flying object, the shape and length of the wings, the speed and the altitude. This is why, for example, high-altitude planes have very long wings.

One critical force that has been under recent study is the turbulence that forms at the tips of the wings. The shorter the wing, the more energy-consuming turbulence forms at the wingtip. Different wing designs have been tried to decrease this turbulence, and engineers have had some success with winglets. You may have seen these small vertical wings on the wingtips of some airplanes.

Swiss researchers have been studying vultures in the hope of finding a better solution to this problem, because vultures have a relatively short wingspan that has proven to be surprisingly efficient. They discovered that this is because the feathers at the vultures' wingtips spread out. They then tested a wing with a finger-like cascade of blades at the end. Their new wing was more than four times more efficient than the average wing design in use today! It takes a great deal of faith in evolution to think that natural selection possesses such knowledge of aerodynamics. Clearly the vulture was designed by an intelligent Creator Who understands aerodynamics even better than we do!
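The wingtip effect described above is usually quantified as induced drag. In the standard lifting-line approximation, the induced drag coefficient is C_Di = C_L² / (π · e · AR), where AR is the wing's aspect ratio (span squared divided by wing area) and e is an efficiency factor. A small sketch with illustrative numbers only, showing why longer wings (higher AR) pay a smaller tip penalty:

```python
import math

def induced_drag_coefficient(cl: float, aspect_ratio: float, e: float = 0.8) -> float:
    """Lifting-line estimate of induced (tip-vortex) drag:
    C_Di = C_L^2 / (pi * e * AR)."""
    return cl**2 / (math.pi * e * aspect_ratio)

CL = 0.8  # lift coefficient, illustrative value
for ar in (6, 12, 24):  # from a stubby wing to a glider-like long wing
    print(f"AR={ar:2d}: C_Di={induced_drag_coefficient(CL, ar):.4f}")

# Doubling the aspect ratio halves the induced drag, which is why
# high-altitude aircraft and soaring birds favour long, slender wings.
```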
Venus shares a lot of features with Earth, including active volcanoes. Despite the clouds of sulfuric acid and crushing pressure, Venus is more like Earth than any other planet in the solar system. It has true mountain ranges, similar gravity, and volcanic activity. Even with hundreds of confirmed volcanoes on the surface, scientists aren't certain whether Venus has eruptions as often as Earth does. Evidence is mounting that it does, thanks in part to a new analysis of decades-old radar data showing a sizeable eruption at one of the planet's largest volcanoes.

Robert Herrick, a planetary scientist at the University of Alaska Fairbanks, has a soft spot for Venus, which he describes as his favorite planet. Indeed, Mars has taken the spotlight among planetary scientists for the last quarter century, as NASA and other space agencies have succeeded in landing robotic explorers on the dusty world. Unlike Venus, Mars doesn't have a crushing, ultra-hot atmosphere that annihilates landers, but it's only superficially Earth-like, according to Herrick.

NASA has two upcoming Venus missions, and investigating the planet's geological activity is high on the list of objectives. Herrick is working with the agency to develop an instrument for monitoring volcanic activity. In preparation for this, Herrick took a closer look at radar data from the Magellan spacecraft, which scanned Venus in the early 1990s. In the intervening years, computer hardware has come a long way, making it possible to shuffle through the radar images more efficiently. So Herrick went looking for evidence of volcanic activity.

The study focused on the area around Maat Mons, the tallest volcano on Venus, with a peak 26,247 ft (8,000 meters) above the mean terrain. That's just a little shorter than Mount Everest (about 29,000 feet). After months of poring over the radar data, Herrick spotted something on the volcano's north side. Two images taken eight months apart in 1991 show a lava vent changing shape dramatically: the apparent vent grows in size, and there's what appears to be surface remodeling from a lava flow. This data suggests that eruptions on Venus follow a cycle similar to Earth's; we're not talking years or decades between major volcanic events. Venus probably has them regularly.

This helps inform the design of instruments for upcoming Venus missions, and it helps scientists better understand what to expect from the planet's atmosphere. After detecting the probable volcanic activity on Venus, Herrick is confident the seismometer being planned for Venus will be able to quantify its volcanic activity. Provided, of course, that it can survive on the hellish surface. The missions, DAVINCI+ (Deep Atmosphere Venus Investigation of Noble gases, Chemistry, and Imaging) and VERITAS (Venus Emissivity, Radio Science, InSAR, Topography, and Spectroscopy), are expected to launch in the late 2020s.

The original article was published by Extreme Tech
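Herrick's comparison boils down to change detection between two co-registered radar images of the same terrain. Here is a minimal illustration of the idea with NumPy; synthetic arrays stand in for the Magellan backscatter images, and the threshold is arbitrary, so this is a sketch of the concept rather than the study's actual processing pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for two co-registered radar backscatter images
# of the same terrain, taken months apart.
before = rng.normal(0.5, 0.05, size=(100, 100))
after = before.copy()
after[40:55, 60:80] += 0.3  # simulate a vent that changed shape

# Flag pixels whose backscatter changed by more than a threshold.
diff = np.abs(after - before)
changed = diff > 0.15

print(f"changed pixels: {changed.sum()} "
      f"({100 * changed.mean():.1f}% of the scene)")
```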
THE STRUCTURE OF MATTER

Matter primarily consists of protons, neutrons and electrons. There are also a number of other building blocks, but these are not stable. All of these particles are characterized by four properties: their electrical charge, their rest mass, their mechanical momentum and their magnetic momentum. The number of protons in the nucleus is equal to the atom's atomic number. The sum of the numbers of protons and neutrons is approximately equal to the atom's mass number. This information is part of the data that can be read off from the periodic system. The electron shell contains the same number of electrons as there are protons in the nucleus, which means the atom is electrically neutral.

The Danish physicist Niels Bohr produced a theory as early as 1913 that proved to correspond with reality, in which he demonstrated, among other things, that atoms can only occur in so-called stationary states with determined energies. If the atom transforms from one energy state to another, a radiation quantum, a photon, is emitted. It is these different transitions that make themselves known in the form of light with different wavelengths. In a spectrograph they appear as lines in the atom's line spectrum.

The molecule and the different states of matter

Atoms held together by chemical bonding are called molecules. These are so small that, for example, 1 mm³ of air at atmospheric pressure contains approx. 2.55 × 10¹⁶ molecules. All matter can in principle exist in four different states. In the solid state the molecules are tightly packed in a lattice, with strong bonding. At all temperatures above absolute zero a certain degree of molecular movement occurs, in the solid state as a vibration around a balanced position that becomes faster as the temperature rises. When a substance in a solid state is heated so much that the movement of the molecules can no longer be contained by the rigid lattice, they break loose, the substance melts and transforms into a liquid. If the liquid is heated further, the bonding of the molecules is broken entirely, and it transforms into a gaseous state, expanding in all directions and mixing with the other gases in the room. When gas molecules are cooled, they lose speed and bond to each other again, and condensation starts. However, if the gas molecules are heated further still, they are broken down into individual particles and form a plasma of electrons and atomic nuclei.

The force on a square centimeter area of an air column, which runs from sea level to the edge of the atmosphere, is about 10.13 N. Therefore the absolute atmospheric pressure at sea level is approx. 10.13 × 10⁴ N per square meter. One newton per square meter is called 1 Pa (Pascal), the SI unit for pressure; a basic dimensional analysis shows that 1 bar = 1 × 10⁵ Pa. The higher above sea level you are, the lower the atmospheric pressure, and vice versa.

Thermodynamics' first main principle is a law of nature that cannot be proved, but is accepted without reservation. It says that energy can neither be created nor destroyed, and from that it follows that the total energy in a closed system is constant. Thermodynamics' second main principle says that heat can never of "its own effort" be transferred from one source to a hotter source. This means that energy can only be available for work if it can be converted from a higher to a lower temperature level.
Therefore in, for example, a heat engine, the conversion of a quantity of heat to mechanical work can only take place if a part of this quantity of heat is simultaneously led off without being converted to work.

Boyle's law says that if the temperature is constant, the product of pressure and volume is constant. The relation reads: p1 × V1 = p2 × V2. This means that if the volume is halved during compression, the pressure is doubled.

Charles's law says that the volume of a gas changes in direct proportion to the change in absolute temperature. The relation reads: V1/T1 = V2/T2.

The general law of state for gases is a combination of Boyle's and Charles's laws. It states how pressure, volume and temperature affect each other: when one of these variables is changed, at least one of the other two variables is also affected. This can be written: p × v = R × T. The constant R is called the individual gas constant and depends only on the properties of the gas. If the mass m of the gas takes up the volume V, the relation can be written: p × V = m × R × T.

Any temperature difference within a body, or between different bodies, always leads to the transfer of heat, so that a temperature balance is obtained. This heat transfer can take place in three different ways: through conduction, convection or radiation. In reality, heat transfer takes place in all three ways in parallel.

Conduction takes place between solid bodies or between thin layers of a liquid or gas. Molecules in movement pass on their kinetic energy to the adjacent molecules.

Convection can take place as free convection, with the natural movement that occurs in a medium, or as forced convection, with movement caused by, for example, a fan or a pump. Forced convection gives significantly more intense heat transfer.

All bodies with a temperature above 0 K emit heat radiation. When heat rays hit a body, some of the energy is absorbed and transforms into heat. Those rays that are not absorbed pass through the body or are reflected. Only an absolute black body can theoretically absorb all radiated energy. In practice, heat transfer is the sum of the heat transfer that takes place through conduction, convection and radiation.

Changes in state

You can follow the changes in state for a gas from one point to another in a p/V diagram. It would really require three axes, for the variables p, V and T; with a change in state you move along a curve on the surface in space that is then formed. However, you usually consider the projection of the curve onto one of the three planes, usually the p/V plane. Primarily a distinction is made between five different changes in state: the isochoric process (constant volume), the isobaric process (constant pressure), the isothermal process (constant temperature), the isentropic process (without heat exchange with the surroundings) and the polytropic process (where the heat exchange with the surroundings is stated through a simple mathematical function).

Heating a gas in an enclosed container is an example of the isochoric process. The relation for the applied quantity of heat is: Q = m × cv × (T2 − T1). An isochoric change of state means that the pressure increases while the volume is constant.

If a gas in a cylinder is compressed isothermally, a quantity of heat equal to the applied work must be gradually led off. This is practically impossible, as such a slow process cannot be realised.
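To put numbers on the isothermal case just described: for an ideal gas, the heat that must be led off during isothermal compression equals the compression work, Q = m × R × T × ln(p2/p1). The following is a minimal sketch under that textbook assumption; the mass, temperature and pressures are illustrative values, not data from this manual:

```python
import math

R_AIR = 287.0  # individual gas constant for air, J/(kg*K)

def isothermal_compression_heat(m_kg: float, t_k: float,
                                p1_bar: float, p2_bar: float) -> float:
    """Heat (J) that must be led off to compress an ideal gas
    isothermally from p1 to p2; equals the compression work."""
    return m_kg * R_AIR * t_k * math.log(p2_bar / p1_bar)

# Compressing 1 kg of air from 1 bar(a) to 8 bar(a) at a constant 20 degrees C:
q = isothermal_compression_heat(1.0, 293.15, 1.0, 8.0)
print(f"{q / 1e3:.0f} kJ must be removed as heat")  # about 175 kJ
```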
An isothermal change of state means that the temperature of the gas is constant while the pressure and volume are changed. The relation for the quantity of heat led off is: Q = m × R × T × ln(p2/p1).

Isentropic process

An example of an isentropic process is a gas being compressed in a fully insulated cylinder without heat exchange with the surroundings, or a gas being expanded through a nozzle so quickly that no heat exchange with the surroundings has time to take place. The relation for such a process is: p × V^κ = constant, where κ is the isentropic exponent. When the entropy of a gas that is being compressed or expanded is constant, no heat exchange with the surroundings takes place. This change in state follows Poisson's law.

Polytropic process

The isothermal process involves full heat exchange with the surroundings, and the isentropic process involves no heat exchange at all. In reality all processes lie somewhere between these extremes, and this general process is called polytropic. The relation for such a process is: p × V^n = constant, where the polytropic exponent n lies between 1 (isothermal) and κ (isentropic).

Gas flow through a nozzle

The gas flow through a nozzle depends on the pressure ratio across the nozzle. If the pressure after the nozzle is lowered, the flow increases, but only until the pressure before the nozzle is approximately twice as high as the pressure after it. A further reduction of the pressure after the opening does not bring about an increase in flow. This is the critical pressure ratio, and it depends on the gas's isentropic exponent (κ): (p2/p1)crit = (2/(κ + 1))^(κ/(κ − 1)). The critical pressure ratio is reached when the flow velocity equals the sonic velocity in the nozzle's narrowest section. The flow becomes supercritical if the pressure after the nozzle is reduced further, below the critical value.

Flow through pipes

The Reynolds number is a dimensionless ratio between the inertia and friction forces in a flowing medium. It is defined as: Re = (ρ × v × d)/μ, where ρ is the density, v the flow velocity, d the pipe diameter and μ the dynamic viscosity.

In principle there are two types of flow in a pipe. With Re < 2000 the viscous forces dominate in the medium and the flow becomes laminar. This means that different layers of the medium move relative to each other in good order. The velocity distribution across the laminar layers is usually parabolic in shape. With Re > 4000 the inertia forces dominate the flowing medium and the flow becomes turbulent, with particles moving randomly across the flow's cross-section. The velocity distribution across a layer with turbulent flow becomes diffuse. In the critical region, between Re = 2000 and Re = 4000, the flow conditions are undetermined: laminar, turbulent or a mixture of the two. The conditions are governed by factors such as the surface smoothness of the pipe and other disturbances.

Starting a flow in a pipe requires a specific pressure difference, or pressure drop, to overcome the friction in the pipe and its couplings. The size of the pressure drop depends on the diameter of the pipe, its length and form, as well as the surface smoothness and the Reynolds number.

Throttling

When an ideal gas flows through a restrictor, with a constant pressure before and after the restrictor, the temperature remains constant. A pressure drop does occur across the restrictor, as internal energy is transformed into kinetic energy; this is why the temperature falls. For real gases, however, this temperature change becomes lasting, even though the gas's energy content is constant. This is called the Joule-Thomson effect. The temperature change is equal to the pressure change across the throttling multiplied by the Joule-Thomson coefficient.
If the flowing medium has a sufficiently low temperature, a temperature drop occurs across the restrictor, but if the flowing medium is hotter, a temperature increase occurs. This behaviour is used in several technical applications, for example in refrigeration technology and in the separation of gases.

AIR IN GENERAL

Air is a colourless, odourless and tasteless gas mixture. It consists of many gases, but primarily oxygen and nitrogen. Air can be considered a perfect gas mixture in most calculation contexts. Its composition is relatively constant from sea level up to an altitude of 25 kilometres. Air is always more or less contaminated with solid particles, for example dust, sand, soot and salt crystals. The degree of contamination is higher in populated areas, and lower in the countryside and at higher altitudes. Air is not a chemical substance, but a mechanically mixed substance. This is why it can be separated into its constituent elements, for example by cooling.

Air can be considered as a mixture of dry air and water vapour. Air that contains water vapour is called moist air, and the air's humidity can vary within broad limits; the extremes are completely dry air and air saturated with moisture. The maximum water vapour pressure that air can hold increases with rising temperature, and a maximum water vapour pressure corresponds to each temperature. Air usually does not contain so much water vapour that the maximum pressure is reached.

Relative vapour pressure (also known as relative humidity) is the ratio between the actual partial vapour pressure and the saturation pressure at the same temperature. The dew point is the temperature at which air is saturated with water vapour; with a further fall in temperature, condensation of water takes place. The atmospheric dew point is the temperature at which water vapour starts to condense at atmospheric pressure. The pressure dew point is the equivalent temperature at increased pressure.

TYPES OF COMPRESSORS

Two basic principles

There are two basic principles for the compression of air (or gas): the displacement principle and dynamic compression. Displacement compressors include, for example, piston compressors and different types of rotary compressors. They are the most common compressors in most countries. In a piston compressor, for example, the air is drawn into a compression chamber, which is then closed off from the inlet. The volume of the chamber then decreases and the air is compressed. When the pressure has reached the same level as the pressure in the outlet manifold, a valve is opened and the air is discharged at a constant pressure, under continued reduction of the compression chamber's volume.

In dynamic compression, air is drawn into a rapidly rotating compression impeller and accelerated to a high speed. The gas is then discharged through a diffuser, where the kinetic energy is transformed into static pressure. There are dynamic compressors with axial or radial flow. All are suitable for large volume rates of flow.

Displacement compressors

A bicycle pump is the simplest form of displacement compressor, where air is drawn into a cylinder and compressed by a moving piston. The piston compressor has the same operating principle, with a piston whose forward and backward movement is accomplished by a connecting rod and a rotating crankshaft. If only one side of the piston is used for compression, the compressor is called single acting. If both the top and underside of the piston are used, the compressor is called double acting.
The difference between the pressure on the inlet side and the pressure on the outlet side is a measure of the compressor's work. The pressure ratio is the relation between the absolute pressures on the inlet and outlet sides. Accordingly, a machine that draws in air at atmospheric pressure and compresses it to 7 bar overpressure works with a pressure ratio of (7 + 1)/1 = 8.

The compressor diagram for displacement compressors

Figure A (at the bottom of this page) illustrates a theoretical compressor diagram and Figure B illustrates a real compressor diagram for a piston compressor. The stroke volume is the cylinder volume that the piston travels during the suction stage. The clearance volume is the volume that must remain at the piston's turning point for mechanical reasons, together with the volume required for the valves, etc. The difference between the stroke volume and the suction volume is due to the expansion of the air remaining in the clearance volume before suction can start. The difference between the theoretical p/V diagram and the real diagram is due to the practical design of a compressor, e.g. a piston compressor. The valves are never fully sealed and there is always a degree of leakage between the piston and the cylinder wall. In addition, the valves cannot open and close without a delay, which results in a pressure drop when the gas flows through the channels. Due to the design, the gas is also heated when it flows into the cylinder.

Dynamic compressors

A dynamic compressor is a flow machine in which the pressure increase takes place while the gas flows. The flowing gas is accelerated to a high velocity by means of the rotating blades, after which the velocity of the gas is transformed into pressure when it is forced to decelerate under expansion. Depending on the main direction of the flow, they are called radial or axial compressors. In comparison with displacement compressors, dynamic compressors have a characteristic whereby a small change in the working pressure results in a large change in the capacity. Each speed has an upper and a lower capacity limit. The upper limit means that the gas's flow velocity reaches sonic velocity. The lower limit means that the counter-pressure becomes greater than the compressor's pressure build-up, which means return flow in the compressor. This in turn results in pulsation, noise and the risk of mechanical damage.

Compression in several stages

Theoretically, a gas can be compressed isentropically or isothermally; either can take place as part of a reversible process. If the compressed gas could be used immediately at its final temperature after compression, the isentropic process would have certain advantages. In reality the gas can rarely be used directly without being cooled first. Therefore the isothermal process is preferred, as it requires less work, and in practice attempts are made to approach it by cooling the gas during compression. How much can be gained is shown by the following example: an effective working pressure of 7 bar theoretically requires 37% more work for isentropic compression than for isothermal compression.

A practical method to reduce the heating of the gas is to divide the compression into several stages. The gas is cooled after each stage and then compressed further. This also increases the efficiency, as the pressure ratio in the first stage is reduced. The power requirement is at its lowest when each stage has the same pressure ratio.
The more stages the compression is divided into, the closer the entire process comes to isothermal compression. However, there is an economic limit to how many stages a real installation can be designed with.

Comparison between displacement and centrifugal compressors

The capacity curve for a centrifugal compressor differs significantly from the equivalent curve for a displacement compressor. The centrifugal compressor is a machine with variable capacity and constant pressure; a displacement compressor, on the other hand, is a machine with constant capacity and variable pressure. Other differences are that a displacement compressor provides a high pressure ratio even at low speed, unlike centrifugal compressors, which run at significantly higher speeds. Centrifugal compressors are well suited to large air flow rates.

Basic terminology and definitions

The alternating current used, for example, to power lighting and motors regularly changes strength and direction in a sinusoidal variation. The current strength grows from zero to a maximum value, falls to zero, changes direction, grows to a maximum value in the opposite direction and then becomes zero again. The current has then completed a period. The period T is the time in seconds during which the current goes through all of its values. The frequency states the number of complete cycles per second. When speaking about current or voltage, it is usually the root mean square value that is meant. For a sinusoidal waveform, the relation between the peak value and the root mean square value of the current and voltage is: I(rms) = I(peak)/√2 and U(rms) = U(peak)/√2.

Voltage under 50 V is called extra low voltage. Voltage under 1000 V is called low voltage. Voltage over 1000 V is called high voltage. Standard voltages at 50 Hz are 230/400 V and 400/690 V.

Ohm's law for alternating current

An alternating current that passes through a coil gives rise to a magnetic flux. This flux changes strength and direction in the same way as the current. When the flux changes, an emf (electromotive force) is generated in the coil, according to the laws of induction. This emf is opposed to the connected pole voltage. The phenomenon is called self-induction. Self-induction in an alternating current circuit gives rise partly to a phase displacement between the current and the voltage, and partly to an inductive voltage drop. The circuit's resistance to the alternating current becomes apparently greater than that calculated or measured with direct current. The phase displacement between the current and voltage is represented by the angle φ. The inductive resistance (reactance) is represented by X, the resistance by R, and the apparent resistance (impedance) of a circuit or conductor by Z. For the impedance, the following applies: Z = √(R² + X²).

Three-phase system

Three-phase alternating current is produced in a generator with three separate windings. The sinusoidal voltages are displaced 120° in relation to each other. Different loads can be connected to a three-phase supply. A single-phase load can be connected between a phase and neutral. Three-phase loads can be connected in two ways: star (Y) or delta (Δ). With the star connection, the phase voltage lies between the outlets; with the delta connection, the main voltage lies between the outlets.

Power

Active power, P, is the useful power that can be used for work. Reactive power, Q, is "useless" power and cannot be used for work.
Apparent power, S, is the power that must be drawn from the mains supply to gain access to the active power. The relation between active, reactive and apparent power is usually illustrated by a power triangle, in which S² = P² + Q².

Power Factor Correction

What is power factor correction? Most loads on an electrical distribution system fall into one of three categories: resistive, inductive or capacitive. In a manufacturing plant, the most common is likely to be inductive. Typical examples include transformers, fluorescent lighting and AC induction motors. Most inductive loads use a conductive coil winding to produce an electromagnetic field, allowing the motor to function. All inductive loads require two kinds of power to operate: active power (kW), to produce the motive force, and reactive power (kvar), to energise the magnetic field. The operating power from the distribution system is composed of both active (working) and reactive (non-working) elements. The active power does useful work in driving the motor, whereas the reactive power only provides the magnetic field. The bad news is that you are charged for both!

As the power factor drops, the system becomes less efficient. A drop from 1.0 to 0.9 results in approximately 11% more current being required for the same load; a power factor of 0.7 requires approximately 43% more current; and a power factor of 0.5 requires approximately 100% more (twice as much) to handle the same load. The objective, therefore, should be to reduce the reactive power drawn from the supply by improving the power factor. If an AC motor were 100% efficient, it would consume only active power but, since most motors are only 75% to 80% efficient, they operate at a low power factor. This means poor energy and cost efficiency, because the regional electricity companies charge penalty rates for a poor power factor.

The electric motor

The most common electric motor is the three-phase, short-circuited (squirrel-cage) induction motor. This type of motor is found within all industries. Silent and reliable, it is a part of most systems, including compressors. The electric motor consists of two main parts, the stationary stator and the rotating rotor. The stator produces a rotating magnetic field and the rotor converts this energy into movement, i.e. mechanical energy. The stator is connected to the three phases of the mains supply. The current in the stator windings gives rise to a rotating magnetic field, which induces currents in the rotor and gives rise to a magnetic field there too. The interaction between the stator's and the rotor's magnetic fields creates a turning torque, which makes the rotor shaft rotate.

Rotation speed

If the motor shaft were to rotate at the same speed as the magnetic field, the induced current in the rotor would be zero. However, due to losses in, for example, the bearings, this is impossible, and the speed is always approx. 1-5% lower than the magnetic field's synchronous speed (this difference is the slip). The synchronous speed is given by: n = (2 × 60 × f)/p revolutions per minute, where f is the mains frequency and p the number of poles.

Efficiency

Energy conversion in a motor does not take place without losses. These are due to, among other things, resistive losses, ventilation losses, magnetisation losses and friction losses. For the efficiency, the following applies: η = P2/P1, where P2 is the delivered shaft power and P1 the consumed electric power. It is always the shaft power, P2, that is stated on the motor's rating plate.

Insulation class

The insulation material in the motor's windings is divided into insulation classes in accordance with IEC 85 (International Electrotechnical Commission).
The electric motor
The most common electric motor is the three-phase squirrel-cage induction motor. This type of motor can be found within all industries. Silent and reliable, it is a part of most systems, for example, compressors. The electric motor consists of two main parts, the stationary stator and the rotating rotor. The stator produces a rotating magnetic field and the rotor converts this energy into movement, i.e. mechanical energy. The stator is connected to the three phases of the mains supply. The current in the stator windings gives rise to a rotating magnetic force field, which induces currents in the rotor and gives rise to a magnetic field there too. The interaction between the stator's and the rotor's magnetic fields creates turning torque, which makes the rotor shaft rotate.
Rotation speed
If the motor shaft rotated at the same speed as the magnetic field, the induced current in the rotor would be zero. However, due to losses in, for example, the bearings, this is impossible, and the speed is always approximately 1-5% lower than the magnetic field's synchronous speed (slip). Applicable for this synchronous speed is: n = (120 × f) / p r/min, where f is the mains frequency in Hz and p is the number of poles.
Efficiency
Energy conversion in a motor does not take place without losses. These are due to, among others, resistive losses, ventilation losses, magnetisation losses and friction losses. Applicable for efficiency is: η = P2 / P1, where P1 is the electrical input power and P2 the mechanical output power. It is always the output power, P2, that is stated on the motor's rating plate.
Insulation class
The insulation material in the motor's windings is divided into insulation classes in accordance with IEC 85 (International Electrotechnical Commission). A letter corresponding to the temperature, which is the upper limit for the insulation's calculated application area, designates each class. If the upper limit is exceeded by 10°C, the service life of the insulation is shortened by about half.
Protection classes
Protection classes state, according to IEC 34-5, how the motor is protected against contact and water. These are stated with the letters IP and two digits. The first digit states the protection against contact and penetration by a solid object; the second digit states the protection against water. For example, IP23: (2) protection against solid objects greater than 12 mm, (3) protection against direct sprays of water up to 60° from the vertical. IP54: (5) protection against dust, (4) protection against water sprayed from all directions. IP55: (5) protection against dust, (5) protection against low-pressure jets of water from all directions.
Cooling methods
Cooling methods state, according to IEC 34-6, how the motor shall be cooled. This is designated with the letters IC and two digits. For example, IC 01 represents free circulation with own ventilation, and IC 41 jacket cooling with own ventilation.
Installation method
The installation method states, according to IEC 34-7, how the motor should be installed. This is designated by the letters IM and four digits. For example, IM 1001 represents: two bearings, shaft with free journalled end, stator body with feet. IM 3001: two bearings, shaft with free journalled end, stator body without feet, large flange with plain securing holes.
Star and delta connections
A three-phase electric motor can be connected in two ways, star (Y) or delta (Δ). The winding phases in a three-phase motor are marked U, V and W (U1-U2; V1-V2; W1-W2). With the star (Y) connection, the "ends" of the motor winding's phases are joined together to form a zero point, which looks like a star (Y). The motor plate can, for example, state 690/400 V. This means the star connection is intended for the higher voltage and the delta connection for the lower. The current, which can also be stated on the plate, shows the lower value for the star-connected motor and the higher for the delta-connected motor. The mains supply is connected to a three-phase motor's terminals marked U, V and W. The phase sequence is L1, L2 and L3. This means the motor will rotate clockwise seen from "D", the drive end. To make the motor rotate anticlockwise, two of the three conductors connected to the starter or to the motor are switched. Check the operation of the cooling fan when rotating anticlockwise.
Torque
An electric motor's turning torque is an expression of the rotor's turning capacity. Each motor has a maximum torque. A load above this torque means that the motor does not have the power to rotate. With a normal load the motor works significantly below its maximum torque; however, the start phase involves an extra load. The characteristics of the motor are usually presented in a torque curve.
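The synchronous speed formula given earlier lends itself to a quick worked example. A minimal sketch, assuming a 4-pole motor on a 50 Hz supply with 3% slip (illustrative figures, not from the text):

```python
def synchronous_speed_rpm(frequency_hz, poles):
    """n_sync = (120 x f) / p, in r/min."""
    return 120.0 * frequency_hz / poles

def shaft_speed_rpm(frequency_hz, poles, slip_fraction):
    """Actual shaft speed allowing for slip (typically 0.01-0.05)."""
    return synchronous_speed_rpm(frequency_hz, poles) * (1.0 - slip_fraction)

print(synchronous_speed_rpm(50, 4))   # 1500.0 r/min
print(shaft_speed_rpm(50, 4, 0.03))   # 1455.0 r/min
```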
QUANTITIES - UNITS - SYMBOLS
EXAMPLE OF A CALCULATION: DIMENSIONING AN INSTALLATION
Example of dimensioning compressed air installations
Through the following pages, some typical calculations for dimensioning a compressed air installation will be demonstrated. The intention is to show how some of the formulas and data from previous chapters are used. The example is based on the desired compressed air equipment and results in dimensioned data, based on components that can be chosen for the compressed air installation. After the example, there are a few additions that show how special cases can be handled. This chapter contains the following items:
1. Input data
2. Component selection
3. Other dimensioning
4. Addition 1 - At high altitude
5. Addition 2 - Intermittent output
6. Addition 3 - Water-borne energy recovery
7. Addition 4 - Pressure drop in the piping
Please remember
Input data
The compressed air requirements and the ambient conditions must be established before dimensioning is started. In addition to these requirements, a decision must be made as to whether the compressor shall be oil lubricated or oil-free, and whether the equipment shall be water cooled or air cooled.
Requirement
Assume that the need consists of three compressed air consumers. They have the following data:
Ambient conditions (dimensioning)
Dimensioning ambient temperature: 20°C
Maximum ambient temperature: 30°C
Ambient pressure: 1 bar(a)
Humidity: 60%
Miscellaneous
Air cooled equipment. Compressed air quality from an oil lubricated compressor is regarded as sufficient.
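The consumers' data table has not survived in this copy, but the first dimensioning step, totalling the air requirement, can still be illustrated. The sketch below uses hypothetical consumer flows, utilisation factors and a safety margin; none of these figures comes from the example itself:

```python
# Hypothetical consumer figures (the example's own data table is not reproduced here);
# flows are free air delivery (FAD) in l/s, with assumed utilisation (duty) factors.
consumers = [
    {"name": "consumer 1", "flow_ls": 12.0, "utilisation": 0.6},
    {"name": "consumer 2", "flow_ls": 30.0, "utilisation": 0.4},
    {"name": "consumer 3", "flow_ls": 8.0, "utilisation": 0.9},
]

SAFETY_MARGIN = 1.15  # assumed allowance (~15%) for leakage and future expansion

mean_demand_ls = sum(c["flow_ls"] * c["utilisation"] for c in consumers)
design_capacity_ls = mean_demand_ls * SAFETY_MARGIN
print(f"Mean demand: {mean_demand_ls:.1f} l/s")          # 26.4 l/s
print(f"Design capacity: {design_capacity_ls:.1f} l/s")  # 30.4 l/s
```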
The growing season is the part of the year during which local weather conditions (i.e. rainfall and temperature) permit normal plant growth. While each plant or crop has a specific growing season that depends on its genetic adaptation, growing seasons can generally be grouped into macro-environmental classes. Geographic conditions have major impacts on the growing season for any given area. The elevation, or height above sea level, and the temperature of a region are two of the main factors that affect the growing season. Generally speaking, the distance of a location from the equator can be a strong indicator of what the growing season will look like; however, in a high-elevation area, regardless of proximity to the equator, a shorter growing season will generally be experienced. Proximity to the ocean can also create less extreme conditions, especially in terms of temperature, which has the potential to extend the growing season in either direction. In hotter climates, particularly in deserts, despite the geographic barrier of limited water sources, people have been able to extend their growing season by diverting water from other areas and using it in their agriculture. The ability to use these irrigation methods, despite geographic challenges, has made it possible to enjoy an almost year-round growing season.
In the United States and Canada, the growing season usually refers to the time between two dates: the last frost in the spring and the first hard frost in the fall. Specifically, it is defined as the period of time between the average last date at which the overnight low temperature drops below 0 °C (32 °F) in the spring and the average date at which the overnight low first drops below 0 °C (32 °F) in the fall. These average last and first frost dates have reportedly been occurring earlier and later, respectively, at a steady rate, as observed over the last 30 years. As a result, the overall observed length of the growing season in the United States has increased by about two weeks in the last 30 years. In the cooler areas of North America, specifically the northern regions of the United States and Canada, the growing season usually runs from April or May through October. A longer season starts as early as February or March along the lower East Coast from northern Florida to South Carolina, along the Gulf coast in south Texas and southern Louisiana, and along the West Coast from coastal Oregon southward, and can continue all the way through November or December. The longest growing season is found in central and south Florida, where many tropical fruits are grown year round. These rough timetables vary significantly for areas that are at higher elevations or close to the ocean. Because several crops grown in the United States require a long period of growth, growing season extension practices are commonly used as well. These include various row-covering techniques, such as using cold frames and garden fabric over crops. Greenhouses are also commonly used to extend the season, particularly in elevated regions that only enjoy 90-day growing seasons.
In much of Europe, the growing season is defined as the average number of days a year with a 24-hour average temperature of at least 5 °C (6 °C is sometimes used). This is typically from April until October or November, although this varies considerably with latitude and altitude.
The growing season is almost year-round in most of Portugal and Spain, and may last only from June to September in northern Finland and the higher Alps. Proximity to the Gulf Stream and other maritime moderations of temperature extremes can extend the season. In the United Kingdom, the growing season is defined as starting when the temperature on five consecutive days exceeds 5 °C, and ending after five consecutive days of temperatures below 5 °C. The 1961 to 1990 average season length was 252 days (8.3 months).
Tropics and deserts
In some warm climates, such as the subtropical savanna and the Sonoran Desert, or in the drier Mediterranean climates, the growing season is limited by the availability of water, with little growth in the dry season. Unlike in cooler climates, where snow or soil freezing is a generally insurmountable obstacle to plant growth, it is often possible to greatly extend the growing season in hot climates by irrigation using water from cooler and/or wetter regions. This can in fact go so far as to allow year-round growth in areas that, without irrigation, could only support xerophytic plants. In these tropical regions, the growing season can also be interrupted by periods of heavy rainfall, called the rainy season. For example, Colombia, where coffee is grown and can be harvested year-round, does not see such a rainy season. However, Indonesia, another large coffee-producing area, does experience a rainy season, and the growth of the coffee beans is interrupted.
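The United Kingdom definition above amounts to a simple rule that can be applied to a series of daily temperatures. A minimal sketch, assuming daily mean temperatures in °C and the five-day, 5 °C rule as stated:

```python
def growing_season_bounds(daily_mean_temps_c, threshold_c=5.0, run_length=5):
    """Return (start_day, end_day) indices of the growing season, or None.

    Start: first day of the first run of `run_length` consecutive days above
    `threshold_c`. End: first day of the first later run of `run_length`
    consecutive days below the threshold.
    """
    def first_run(predicate, from_index):
        count = 0
        for i in range(from_index, len(daily_mean_temps_c)):
            count = count + 1 if predicate(daily_mean_temps_c[i]) else 0
            if count == run_length:
                return i - run_length + 1
        return None

    start = first_run(lambda t: t > threshold_c, 0)
    if start is None:
        return None
    end = first_run(lambda t: t < threshold_c, start + run_length)
    return start, end

# Toy data: the warm spell starts at day 2 and the cold spell at day 10.
temps = [3, 4, 6, 6, 7, 8, 9, 10, 9, 8, 4, 4, 3, 2, 1, 0, 1]
print(growing_season_bounds(temps))  # (2, 10)
```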
Indigenous populations are communities with a distinct cultural identity, which is intrinsically linked to the land they live on or come from. Their ancestors were often indigenous to this territory before it became occupied by colonial invaders, and before it became incorporated into a modern nation, with modern-day borders. There are an estimated 370 million indigenous people living in more than 90 countries around the world. All of these distinct groups face similar problems and challenges, with much higher rates of poverty and poor health than the majority populations of their countries. This is caused by both historical and present-day discrimination and exclusion by governments and majority populations. As a result, they often have a lower life expectancy, struggle to legally own their land, and face infringements on their human rights.
Accessing culturally appropriate healthcare
Imagine you are sick. But the nearest clinic is miles away and you don't have any transport. You take the money you had put aside for emergencies to travel the miles by bus to the nearest centre. You arrive tired, hungry and distressed. When you enter the clinic, you find no one speaks your language. You notice the staff pointing and laughing at you. You feel confused, humiliated and exhausted. This is the reality faced by many indigenous peoples and other ethnic minorities every day. These practical and cultural barriers combine to make a visit to the health centre incredibly daunting:
- Language: Indigenous groups often speak a different language to that of the mainstream population. This gets in the way of health education, building trust and communicating with health staff, and accessing health information.
- Discrimination: Many minority groups report being discriminated against, patronised, or treated harshly by health workers when engaging with health services.
- Alternative concepts of health: Many indigenous communities have a different concept of health to mainstream social groups. Often this is a holistic concept which encompasses the collective well-being of their community and ecosystem. This leads to centuries-old customs and approaches to dealing with illness, which are often not accommodated by the mainstream health system.
All of these barriers are driven by discrimination and highlight the need for culturally appropriate healthcare to ensure indigenous people feel welcome at government health facilities. At Health Poverty Action we work with governments, healthcare professionals, and indigenous groups, bringing them together to build mutual understanding, and emphasise the importance of culturally appropriate healthcare facilities.
Jews have had a Diaspora at least since the middle of the sixth century before the common era. In the early years of modern Israel, the American community provided significant resources and political support despite ambivalent American governments. During the late 1940s and early 1950s the new country absorbed hundreds of thousands of Jews who came from the remnants of the Holocaust and from Middle Eastern communities, whose governments turned against their Jews after the military embarrassments suffered by Arab armies in 1948. During the time of the Second Temple, a substantial number, perhaps even a majority, lived outside of the Judean homeland. The Jerusalem-centered Diaspora figures in Christian anti-Semitism, via the episode of Jesus overturning the tables of the money changers. The story features elements of money-grubbing Jews, or Jews whose concern for money competes with what should be sacred. The reality was that money changers were essential to the religious rites. Jews came for the three major festivals from all the known world, and were compelled to change money brought from their homelands to purchase food to sustain themselves in Jerusalem and to buy the doves or animals which they donated for sacrifice. Up until modern times there were substantial communities in what is now Iran, Iraq, Yemen, Central Asia, across North Africa, and throughout Europe. There were small communities of Jews in North America from colonial times, a sizable migration from Germany in the middle of the 19th century, and then the bulk of what became American Jewry arrived from 1880 until the onset of WW I, mostly from Eastern Europe. There was always a dribble of Jews moving to Jerusalem and other sites in the Holy Land. The tempo increased with the anti-Jewish tensions that provoked the movement westward. Jerusalem's meagre and impoverished population, described in unglowing terms by Mark Twain, has had a Jewish majority since the middle of the 19th century. Now the Jewish communities of western democracies and Israel have matured way beyond their economic, social, and political status of the 1940s, and there are Israel-Diaspora tensions associated with each community's own interests. Judaism (i.e., the religion of the Jews) is having a tough time on both sides of the divides, showing the temptations of secularism widespread just about everywhere outside of Islam. Estimates are that as many as 70 percent of non-Orthodox Jews are marrying non-Jews in the US and Western Europe, and that a majority of Israeli Jews rarely visit a synagogue, or know what to do when they do visit for a relative's Bar Mitzvah or some other occasion. Political disputes have replaced the wholehearted enthusiasm for Israel that was characteristic of the 1940s and lasted through the wars of 1967 and 1973. Concerns became prominent around Israel's "first unnecessary war," which featured the IDF reaching Beirut and beyond in 1982, and remained as settlements in the West Bank and post-1967 neighborhoods of Jerusalem came to be sources of disagreement with western governments.
Other sources of tension appear among:
- Jews who support campus programs sponsored by Palestinians, up to and including boycotts of Israeli institutions
- Jews who express concern about their own loss of status and security due to Muslims and others who oppose Israeli actions involving settlements, Arab casualties caused by the actions of the IDF and Israeli police, or what is described as the inflexibility of Israeli governments with respect to the "two-state solution" important to the US and other western governments
- unease between Diaspora Jews who support left-of-center political parties and Israelis who support right-of-center political parties
One can find in the Israeli population antipathy to "rich and spoiled Americans and Europeans" who criticize Israeli actions from their own positions of safety, even while Israelis acknowledge and seek to enhance the financial and political support received from those overseas Jews who continue to identify with Israeli concerns. With Israel's development, financial support from the Diaspora has become less important than political support, in the context of the increased activism of overseas Palestinians and their supporters. The Obama administration has moved to an extreme position, not seen since Eisenhower's pressure on Israel, Britain and France to withdraw from the Sinai in 1956, or Secretary of State James Baker's "fuck the Jews" in the context of the first Gulf War. Most prominent is the contrast between GW Bush's recognition of demographic changes that have to be taken into account in any accord, and the Obama-Kerry concern for the 1967 borders and opposition to Jewish construction in post-1967 neighborhoods of Jerusalem. Israeli Jews and the Israeli government recognize substantial losses of support among younger Jews. It is not the case that Israeli Jews and the Israeli government have given up on their former friends, but something like that is involved in Israelis' move to the right politically, while Diaspora Jews continue to support Barack Obama and the head of the British Labour Party, Ed Miliband, who is a Jew and son of Holocaust survivors and has opposed some of Israel's prominent activities. One can argue about Israel's dependence on the political influence of American Jews. For one thing, that influence is not entirely in the direction of supporting the policies of the Israeli government. J Street may not be the match of AIPAC, but it reaches the White House, most notably via Martin Indyk. For another thing, Israel's own economic and military might, along with its capacity to link itself to various politicians ascendant elsewhere, makes it a factor in its own right, able to look after itself in international politics. At some points in recent months, Israeli actions have been closer to those of Egypt than to those of the United States. And for a third thing, Israeli officials look for support across the complexity of American politics, including sectors not close to whoever currently occupies the White House. Among its points of reference have been Republicans in Congress, and leading ministers of the Christian Right. None of which is to say that Israeli officials overlook the sentiments and support they may get from American Jews. The point is that Israel is an independent actor, not tied to whatever may be ascendant among the Jews of the United States or any other Diaspora.
Academic boycotts are particularly sensitive, especially for those inclined to think that higher education has special influence over the present and future policies of western governments. As this link demonstrates, the issue pits Jews and others against one another, with no clear result other than the campaign's influence in weakening the quality of education, especially in the humanities and social sciences on some of the most prestigious campuses, which are most costly for parents wanting the best for their children. BDS and other actions against Israel and Jews on campuses and elsewhere may be making Diaspora Jews more uncomfortable than Israelis. Israelis have been aware of threats and tensions throughout their history, and rely on security forces skilled in protecting them. Diaspora Jews are encountering a wave of anti-Semitism not felt in western countries since Jews began to enjoy increased opportunities after World War II, at least partly linked to what Israel has been doing.
Social emotional learning is defined as a process for helping children gain critical social skills for life effectiveness, such as developing positive relationships, behaving ethically, and handling challenging situations effectively. The specific skills that allow kids to function and complete daily occupations (such as play, learning, participating in social situations, rest, dressing, writing, riding a bike, interacting with others…) are those social emotional skills that help children to recognize and manage emotions, interact with others, think about their feelings and how they should act, and regulate behavior based on thoughtful decision making. One piece of addressing underlying social emotional learning needs in kids is recognizing that the behaviors we see have underlying causes rooted in the regulation of emotions, decision making, and impulse control. Social emotional skills are not always a cut-and-dried aspect of development.
Social Emotional Learning
Today, I wanted to expand on that idea. So many times, we run into children on our therapy caseloads or in our classroom (or hey, even in our own homes!) who struggle with one area…or several. Remember that beneath the behaviors (troubles with transitions, acting out, irritability, sleep issues, inflexible thoughts, frustrations, etc.) there can be emotional regulation components. Let's consider some of the ways our kids may struggle with social and emotional competencies. We might see kids with difficulty in some of these occupational performance areas (occupational performance = the things we do…the tasks we perform):
- Management of stress in learning/chores/daily tasks
- Creating personal goals in school work or personal interests and following through
- Making decisions based on ethical and social norms in play, learning, or work
- Understanding/Engaging in social expectations (social norms) in dressing, bathing, grooming, etc.
- Social participation
- Conflict resolution with friends
- Empathizing with others
- Responding to feedback in school, home, or work tasks
- Making good judgement and safety decisions in the community
- Showing manners
- Understanding subtle social norms in the community or play
- Transitions in tasks in school or at home
- Ability to screen out input during tasks
- Cooperation in play and in group learning
- Considering context in communication
- Emotional control during games
Wow! That list puts into perspective how our kids with regulation concerns really may be struggling. And, when you look at it from the flip-side, perhaps some of our children who struggle with, say, fine motor issues may have sensory concerns in the mix too.
Social Emotional Learning Activities
When we equip our students with tools to identify their emotions and self-regulate, we are giving them tools for life and promoting a positive environment for learning. We can foster social emotional development through play and interactions. What might this look like at home, in online schooling, or in a classroom setting?
1. Connect emotions to behavior - Children may not have the language skills or know how to explain what they are feeling. They may need concrete examples or scenarios to help them understand how their emotions are tied to their behavior. Does a storm make them feel nervous or scared? How do they react when they feel anxious about a test or quiz? When they argue with a sibling, how do they react?
Once they are able to understand their emotions and how they are feeling, they can start using emotional regulation tools and strategies. Use this social emotional learning worksheet to help kids match emotions to behaviors and coping strategies.
2. Be flexible and patient - Flexibility is something we have all been thrown into more than usual lately. But working with children on emotional regulation and understanding their emotions takes patience and flexibility. You may need to change up how you introduce emotions, or maybe a strategy you thought would work isn't working.
3. Set the tone and share your own feelings - This may feel uncomfortable for some of us, but sharing our own feelings with our students and clients, and modeling the responses and strategies we are encouraging them to use, will have a huge impact.
4. Try specific social skills activities - Social skills activities are those that help kids build underlying emotional and regulation strategies so that making friends, managing emotions, kindness, empathy, self-awareness, self-management, and other social emotional tools are built at the foundation. A recent post here on The OT Toolbox has more ideas to develop social emotional learning by engaging in activities that foster emotional regulation and executive functioning skills. …it's ALL connected!
Another fantastic resource that can help develop social and emotional skills is the activity book, Exploring Books Through Play. This digital E-BOOK is an amazing resource for anyone helping kids learn about acceptance, empathy, compassion, and friendship. In Exploring Books through Play, you'll find therapist-approved resources, activities, crafts, projects, and play ideas based on 10 popular children's books. Each book covered contains activities designed to develop fine motor skills, gross motor skills, sensory exploration, handwriting, and more. Help kids understand complex topics of social/emotional skills, empathy, compassion, and friendship through books and hands-on play. The book Exploring Books Through Play has 50 different activities based on popular children's books. Each book is used for 5 different activities that cover a variety of areas: sensory play, crafts, gross motor activities, fine motor activities, handwriting, scissor skills, and so much more. This book is designed to address emotional regulation and connecting with kids.
What's Inside Exploring Books through Play?
We have handpicked these easy and hands-on activities to help kids develop essential social emotional learning skills. As classroom curriculum becomes more focused on academics, social and emotional development can get lost in the shuffle. This book focuses on abstract concepts of friendship, acceptance and empathy. By using children's books that foster understanding of these concepts through pictures and stories, we can help children understand and see these emotions in action. What if you could use books and interactive activities to teach friendship? What if you could read a book that centers on accepting differences and then create an activity or craft that helps children see acceptance in action? What if you could explore emotions through story and interactive play? In this book, you will find books that cover abstract concepts and use play to build social and developmental skills.
The 10 books covered include:
- A Sick Day for Amos McGee
- Boy + Bot
- Little Blue and Little Yellow
- Red: A Crayon's Story
- The Day the Crayons Quit
- Leonardo the Terrible Monster
- The Adventures of Beekle: The Unimaginary Friend
- Whoever You Are
- Penguin and Pinecone
Want to help kids learn more about complex concepts such as emotions, empathy, compassion, and differences? Exploring Books Through Play uses children's literature as a theme to engage in fun, hands-on activities that help children and adults delve deeper into the characters and lessons, bringing the stories to life and falling further in love with literature. Read a story and then bring the characters to life while learning and building skills. Each story offers unique activities designed around central themes of friendship, empathy, and compassion. Each chapter includes 5 activities for each of the 10 children's books. The activities are perfect for children ages 3-8, can be used in small groups or as a whole class, and are easily adapted to a home or classroom setting. Click here to get the Exploring Books Through Play resource. Colleen Beck, OTR/L is an occupational therapist with 20+ years of experience, having graduated from the University of Pittsburgh in 2000. Colleen created The OT Toolbox to inspire therapists, teachers, and parents with easy and fun tools to help children thrive. As the creator, author, and owner of the website and its social media channels, Colleen strives to empower those serving kids of all levels and needs. Want to collaborate? Send an email to email@example.com.
During the period of the ovum, the cells are differentiating into the tissues that will form the three germ layers of the body, the ectoderm, mesoderm, and entoderm, as well as the surrounding membranes that protect and nourish the human as it develops. One can begin to see the early features of the face developing by the embryonic age of 3 weeks. By the time the embryo is about 3 weeks old, its length measures approximately 3 to 4 mm from the top of the head to the tail area. Even at that small size, the forerunners of the structures that will become the face can be seen. Fig. 18-1 is a lateral view of the embryo, and several important features can be seen. The umbilical cord (shown cut here) attaches the embryo to the placenta embedded in the wall of the uterus. The heart bulge appears as it does because it develops in an extremely anterior position and pushes out on the upper body wall, which will later become the thorax. As the thorax and ribs develop, the heart will assume a position inside the thoracic cage and will no longer bulge outward. Finally, three ridges of tissue are visible, the pharyngeal arches, which can be seen bulging out laterally. The pharyngeal arches seen in Fig. 18-1 are actually U-shaped bars of tissue. The open end of the U faces posteriorly and surrounds the upper end of the foregut and part of the primitive oral cavity. Eventually six of these arches will develop. The fifth one, however, will degenerate and form nothing of any consequence. The fourth and sixth arches are poorly developed and are not readily seen on the embryonic surface. The ones closest to the head are the largest, and those farther down are smaller in size. For a better understanding of the structure of these pharyngeal arches, it is necessary to look at a longitudinal section through the embryo, which is divided into equal halves (Fig. 18-2). The body is relatively hollow, with the exception of a tube closed at its upper and lower ends and running through the middle of the body cavity whose lining has developed from entoderm. This tube is the developing digestive tract and is divided into three parts. The upper part is the foregut, which forms the digestive tube from the throat region to the duodenum. The middle portion is the midgut, which forms the rest of the small intestine as well as the cecum, ascending colon, and most of the transverse colon. The lower portion is the hindgut, which forms the descending colon, sigmoid colon, and rectum of the large intestine. In Fig. 18-2, you can see the foregut with the tube still closed at the top and the hindgut at the bottom as well. Eventually, at about 4½ weeks, the upper end of the tube breaks down and connects with the primitive oral cavity, which is a depression known as the stomodeum, and forms the oral cavity and oral pharynx. At about 6 to 7 weeks, the bottom end of the tube breaks down and becomes the anal and urethral openings. The point in Fig. 18-2 where the foregut region and the stomodeum share a common wall is known as the buccopharyngeal membrane. This membrane is found in the location that will become the region between the palatine tonsils and an area about two thirds of the way back from the tip of the tongue. When the buccopharyngeal membrane breaks down at about 4½ weeks, the connection between the oral cavity and the digestive tract is established. The upper two pharyngeal arches, numbered with Roman numerals I and II, are also known respectively as the mandibular and hyoid arches. 
First, the mandibular arch begins to show growth from the upper surface of the posterior end of the arch, and this growth will become the maxillary process. When that begins to happen, the arch can be subdivided into the mandibular processes below and the maxillary processes above (Fig. 18-3). The mandibular processes will form the mandible, and the maxillary processes will form the maxillae, the zygomatic bones of the cheek, and the palatine bones, which form the hard palate in the roof of the mouth. The maxillae also comprise the upper jaw. In the anterior (frontal) view of a 3-week embryo, notice the forehead area, known as the frontal prominence, the stomodeum (primitive oral cavity), and the mandibular processes of the mandibular arch (Fig. 18-4). During the fourth embryonic week, some changes can be seen. First, two small depressions form low on the frontal prominence; these are the nasal pits, the beginning of the nasal cavities. The areas on either side of these nasal pits begin to form a ridge and become the medial and lateral nasal processes (Fig. 18-5). From the side of the head, notice that the maxillary processes are starting to enlarge slightly and seem to be growing toward the midline. By the sixth week the two medial nasal processes have fused together and, along with the two maxillary processes, have formed the upper lip (Fig. 18-6). The lateral nasal process takes no part in forming the upper lip. It gets pushed up and out of the way. Also about this time the nasal pits deepen until they open into the primitive oral cavity at about 6 weeks (Fig. 18-7). The medial nasal and maxillary processes begin to fuse at their lower ends, and that connection is known as the nasal fin. This fin then develops perforations, and connective tissue flows in and fills the groove that lies between the processes. There is an increase in the connective tissue of the upper lip in the area of the groove, and the groove fills in and slowly disappears. This process is known as migration. If this migration fails, the tissues will be stretched and will break down as development continues, resulting in a separation between the medial nasal process and maxillary process known as a cleft lip. If this occurs, it takes place by about the sixth embryonic week. If the two medial nasal processes do not fuse together, the result is a midline cleft of the upper lip. The formation of the palate or roof of the mouth involves the same processes: the right and left maxillary processes and the medial nasal processes. The medial nasal processes form a block of tissue that includes the area of the maxillary central and lateral incisors as well as a small V-shaped wedge of tissue lingual to these teeth back to the incisive foramen. This is known as the primary palate or
Astronomers to Build Telescope to Explore Nearby Stars
In 2021, a spacecraft the size of a Cheerios box will carry a small telescope into Earth orbit on an unusual mission. Its task is to monitor the flares and sunspots of small stars to assess how habitable the space environment is for planets orbiting them. The spacecraft, known as the Star-Planet Activity Research CubeSat, or SPARCS for short, is a new NASA-funded space telescope. The mission, including spacecraft design, integration and resulting science, is led by Arizona State University's School of Earth and Space Exploration (SESE). "This is a mission to the borderland of astrophysics and astrobiology," said Evgenya Shkolnik, assistant professor in SESE and principal investigator for the SPARCS mission. "We're going to study the habitability and high-energy environment around stars that we call M dwarfs." She announced the mission Jan. 10, 2018, at the 231st meeting of the American Astronomical Society, in Washington, D.C. The stars that SPARCS will focus on are small, dim, and cool by comparison to the sun. Having less than half the sun's size and temperature, they shine with barely one percent of its brightness. The choice of target stars for SPARCS might seem counterintuitive. If astronomers are looking for exoplanets in habitable environments, why bother with stars that are so different from the sun? An answer lies in the numbers. To start with, M dwarfs are exceedingly common. They make up three-quarters of all the stars in our Milky Way galaxy, outnumbering sun-like stars 20 to 1. Astronomers have discovered that essentially every M dwarf star has at least one planet orbiting it, and about one system in four has a rocky planet located in the star's habitable zone. This is the potentially life-friendly region where temperatures are neither too hot nor too cold for life as we know it, and liquid water could exist on the planet's surface. Because M dwarfs are so plentiful, astronomers estimate that our galaxy alone contains roughly 40 billion — that's billion with a B — rocky planets in habitable zones around their stars. This means that most of the habitable-zone planets in our galaxy orbit M dwarfs. In fact, the nearest one, dubbed Proxima b, lies just 4.2 light-years away, which is on our doorstep in astronomical terms. So as astronomers begin to explore the environment of exoplanets that dwell in other stars' habitable zones, M dwarf stars figure large in the search.
Taking the pulse of active stars
According to Shkolnik, while M dwarf stars are small and cool, they are more active than the sun, with flares and other outbursts that shoot powerful radiation into space around them. But no one knows exactly how active these small stars are. Over its one-year nominal mission, SPARCS will stare at target stars for weeks at a time in hopes of solving the puzzle. The heart of the SPARCS spacecraft will be a telescope with a diameter of 9 centimeters, or 3.6 inches, plus a camera with two ultraviolet-sensitive detectors to be developed by NASA's Jet Propulsion Laboratory. Both the telescope and camera will be optimized for observations using ultraviolet light, which strongly affects the planet's atmosphere and its potential to harbor life on the surface. "People have been monitoring M dwarfs as best they can in visible light.
But the stars' strongest flares occur mainly in the ultraviolet, which Earth's atmosphere mostly blocks," Shkolnik said. Although the orbiting Hubble Space Telescope can view stars at ultraviolet wavelengths unhindered, its overcrowded observing schedule would let it dedicate only the briefest of efforts to M dwarfs. "Hubble provides us with lots of detail on a few stars over a short time. But for understanding their activity we need long looks at many stars instead of snapshots of a few," said Shkolnik. Capturing lengthy observations of M dwarfs will let astronomers study how stellar activity affects the planets that orbit them. "Not only are M dwarfs more active than the sun when they are old, they remain more active for longer," Shkolnik said. "By the time it was 10 million years old, the sun had become much less active and it has been decreasing steadily ever since. But M dwarfs can remain active for 300 to 600 million years, with some of the smallest M stars flaring often essentially forever."
Build local, fly global
SPARCS will follow in the footsteps of other space instruments and probes originating from SESE. Already on its way to asteroid Bennu (arrival August 2018) is the OSIRIS-REx Thermal Emission Spectrometer (OTES). In the pipeline are the Phoenix CubeSat (built by an all-student team to study the local climate effects of cities on Earth), LunaH-Map (to measure lunar hydrogen as a proxy for water), the Europa Thermal Emission Imaging System (to seek temperature anomalies on Jupiter's moon Europa), the Lucy Thermal Emission Spectrometer (to measure surface properties among Jupiter's family of trojan asteroids), and Psyche, a mission to study an asteroid made wholly of nickel and iron. Like LunaH-Map, SPARCS is a CubeSat built of six cubical units, each about four inches on a side. These are joined to make a spacecraft two units wide by three long in what is termed a 6U spacecraft. Solar power panels extend like wings from one end. "In size and shape, SPARCS most resembles a family-size box of Cheerios," Shkolnik said. The spacecraft will contain three major systems — the telescope, the camera, and the operational and science software. Along with Shkolnik, SESE astronomers Paul Scowen, Daniel Jacobs, and Judd Bowman will oversee the development of the telescope and camera, plus the software and the systems engineering to pull it all together. The telescope uses a mirror system with coatings optimized for ultraviolet light. Together with the camera, the system can measure very small changes in the brightness of M dwarf stars to carry out the primary science of the mission. The instrument will be tested and calibrated at ASU in preparation for flight before being integrated into the rest of the spacecraft. "We'll have limited radio communications with SPARCS, so we plan to do quite a bit of data processing on board using the central computer," said Jacobs. "We'll be writing that software here at ASU, using a prototype of the spacecraft and camera to test our code." After launch, Jacobs said, the team will do science operations at ASU, connecting up to SPARCS via a global ground station network. A key part of the mission plan, Shkolnik said, is to involve graduate and undergraduate students in various roles.
This will provide them with educational and training opportunities to become future engineers, scientists, and mission leaders. "The fast pace for development — from lab to launch might be as short as a couple of years — works well with student timescales," Shkolnik said. "They can work on it, start to finish, in the time they're here at ASU."
Small package, big science
Joining ASU in the SPARCS mission are scientists from the University of Washington, the University of Arizona, Lowell Observatory, the Southwest Research Institute, and NASA's Jet Propulsion Laboratory. "The SPARCS mission will show how, with the right technology, small space telescopes can answer big science questions," Shkolnik said. These include, she says, "How likely is it that we humans are alone in the universe? Where should we look for habitable planets? And can we find a new and more fruitful understanding of what makes an exoplanet system habitable?" This article has been republished from materials provided by Arizona State University. Note: material may have been edited for length and content. For further information, please contact the cited source.
When used in a graph or a map, a key, also referred to as a legend, is the part that explains the symbols used. Most commonly, keys are drawn off to the side of a graph or below it. Usually this involves drawing all the symbols used, then indicating what the symbols represent. In other words, a key includes the variables or objects used in the chart/graph/diagram along with an example (symbol) of what they look like. Below are a couple of examples.
How keys are used
Key of a line chart
In the above line chart, we can see two lines with different colors. The key for this chart uses line segments of different colors, then labels each specific color with what it represents. We could have done this in other ways too, like using a red and blue circle (or any number of other symbols, as long as we're consistent), rather than a line. From this key, we know that the red line represents boys while the blue line represents girls, specifically their heights over time.
Key of a map
In the map above, we can see a number of types of structures. The key provided is incomplete because there are structures that were not included, but in this example we can see how keys are used to label some of the different structures on the map.
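For readers who build charts programmatically, here is a minimal sketch of how such a key is typically produced, using Python's matplotlib as one common choice; the height values are illustrative, not read off the chart described above:

```python
import matplotlib.pyplot as plt

ages = [6, 8, 10, 12, 14]
boys_height = [115, 128, 139, 149, 164]    # illustrative numbers only
girls_height = [114, 127, 138, 151, 160]

# Each plotted series gets a label; legend() collects the labels into the key.
plt.plot(ages, boys_height, color="red", label="Boys")
plt.plot(ages, girls_height, color="blue", label="Girls")
plt.xlabel("Age (years)")
plt.ylabel("Height (cm)")
plt.legend(title="Key")  # drawn in a corner or off to the side, like the keys described above
plt.show()
```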
At 32 degrees Fahrenheit, or 0 Celsius, ice changes to water. This simple, unique fact dominates the climate in Earth's polar regions. Using satellites to detect changes over time, NASA researchers and NASA-funded university scientists have found that Earth's ice cover is changing rapidly near its poles. Recent studies point to new evidence of relationships between climate warming, ice changes and sea level rise. Two researchers from NASA Goddard Space Flight Center (GSFC), Greenbelt, Md., and a glaciologist from the University of Colorado's National Snow and Ice Data Center, Boulder, Colo., will discuss new findings related to Earth's ice cover at a press conference on Dec. 14 at the 2004 meeting of the American Geophysical Union in San Francisco, Calif. Waleed Abdalati, a NASA GSFC researcher, has worked with colleagues on a slew of recent papers on glaciers and ice sheets in the Northern Hemisphere. Bill Krabill of NASA Wallops Space Flight Center, Wallops Island, Va., Abdalati and others calculated that Greenland's contributions to sea level rise nearly doubled in recent years, from 0.13 millimeters (mm) (0.005 inches) per year in the mid 1990s, to 0.25 mm (0.01 inches) per year from 1997 to 2003. Krabill's study measured steady thinning in the region's lower elevations near the coasts. A recent NASA paper in Nature found that the world's fastest glacier, called the Jakobshavn Isbrae, nearly doubled its speed from 1997 to 2003. The speedy ice stream's quickening coincided with a breakup of the floating ice that extends from the glacier out into the ocean, called an ice tongue. In 2003, this one glacier added to the world's oceans an amount of water equal to about 4 percent of the estimated rate of sea level rise. Abdalati also published a paper in the Journal of Geophysical Research assessing the contributions of the Canadian ice caps to sea level rise. During the late 1990s they contributed an estimated 0.065 mm (0.002 inches) per year, which, while not as large as the contributions of Greenland and neighboring Alaska, is still quite significant. Perhaps more significant is the fact that, as in Greenland and Alaska, the rate of ice loss appears to have accelerated in recent years. Meanwhile, the Arctic Ocean's perennial sea ice, or the sea ice that lasts all year long, continues to decline. Floating sea ice blankets the ocean surface, and does not contribute to sea level rise. But it is an important part of the climate system because the expansive white ice reflects the sun's heating rays, prevents the oceans beneath it from absorbing more heat, influences ocean circulation, and regulates Earth's climate. Between 2002 and 2004, Arctic sea ice has been exceptionally low. 2002 set a record for the lowest amount of late summer sea ice since satellite measurements of the area began in 1978. Josefino Comiso of NASA's GSFC reported that between 1978 and 2000, the Arctic perennial sea ice declined by 8.9 percent per decade. The trend is now 9.2 percent per decade. These low levels continued to be sustained in 2003 and 2004. While a few abnormally cold summers would help sea ice survive the summer melt, Comiso's studies have found that, on average, during the past 22 years the Arctic warming rate is about 8 times higher than estimates of warming rates over the last 100 years.
In much of the Antarctic, a general cooling has been observed, and sea ice has mostly increased over the last 30 years. The Antarctic Peninsula, however, has been an exception: it has warmed, and rapid changes similar to those found in the northern hemisphere have been observed there. For example, in the eastern Antarctic Peninsula, very rapid climate warming began in the 1950s, causing mean temperatures to increase by about 2.5 degrees Celsius (4.5 Fahrenheit), according to Scambos. As temperatures have warmed, land and sea ice have melted. In March 2002, the Rhode-Island-sized Larsen B ice shelf collapsed, the largest in a series of such retreats that began to take place around 1985 and have steadily increased. In the aftermath of this collapse, two NASA studies, one led by Scambos, showed that glaciers flowing into the bay areas behind the Larsen B ice shelf accelerated 3- to 8-fold in just 18 months after the breakup. This finding points to mechanisms similar to those discovered by Abdalati and colleagues in Greenland's Jakobshavn ice stream. Satellite images revealed that the Antarctic glaciers' speed-up began almost immediately after the collapse of the shelf. Data from NASA's new ICESat satellite indicate that the trunk of one glacier decreased in elevation by over 30 meters in just six months.
- Quality control
- Web sites of interest
- References and further reading
In discussing results obtained from laboratory analysis, there are several important concepts relating to the quality of the results that are often confused with one another. These concepts of accuracy, precision, error and uncertainty are defined here and the differences between them described. Furthermore, some basic approaches to quality control are presented in order to assist in ensuring systematic approaches to achieving better data from your laboratory.
Accuracy refers to the agreement between a measurement and the true or accepted value. Accuracy alone might or might not say something about the quality of the measuring instrument. The instrument might be of high quality and still disagree with the true value.
- If you used a balance to find the mass of a known 100.00 g standard, and you obtained a reading of 80.25 g, your measurement would not be very accurate
- A stopped clock is accurate at least once each day
An important distinction between accuracy and precision is that accuracy can be determined by only one measurement, while precision can only be determined with multiple measurements. Precision refers to the repeatability of measurement and describes how close together a group of measurements actually are to each other. Precision has little to do with the true or accepted value of a measurement, so it is quite possible to be very precise and totally inaccurate. If each day for several years a clock reads exactly 11:47 am when the sun is at the zenith, this clock is very precise because it is giving a highly repeatable result. Since there are more than thirty million seconds in a year, this device is more precise than one part in one million. A dartboard analogy is often used to help explain the difference between accuracy and precision. Imagine a person throwing darts, trying to hit the bullseye. The closer the darts land to the bullseye, the more accurate the throws are. If the person misses the dartboard with every throw, but all of their shots land close together, they are still very precise. (source: University of Adelaide, School of Chemistry) In many cases, when precision is high and accuracy is low, the fault can lie with the instrument. If a balance or a thermometer is not working correctly, it might consistently give inaccurate answers, resulting in high precision and low accuracy.
Error refers to the disagreement between a measurement and the true or accepted value. A source of error is a limitation of a procedure or an instrument that causes an inaccuracy in the quantitative results of an experiment. A 'human error' is not considered a source of error under this definition. Laboratories should strive to identify, understand and limit sources of error in their procedures whenever possible. Some examples include reading accuracy on burettes and other volumetric dispensing apparatus, colour-change interpretation at titration end points, variation due to sampling, reagents that are out of date or have deteriorated, and the digital display resolution on, for example, balances. Sampling errors can be a very common and large source of error. This can be because the material being sampled might not be completely homogeneous and, therefore, each sample drawn might be very slightly different. In some cases, mislabelling or even drawing samples from the wrong batch can lead to serious misinterpretations.
It is extremely important to remember that the test results can only be useful if the samples truly represent the material. As with accuracy, you must know the true or correct value to discuss your error. In practice, however, we do not know the true value ahead of time, so it is not possible to determine the error; if an error occurs, we simply will not know it. The true value has not yet been established and there is no other guide. It is worth remembering that 'human error' is rarely the sole source of measurement problems. Many problems producing bad laboratory results are due to analysis or calculation errors subsequent to the measurement! Look there first.
It is important to remember that there is no such thing as a perfect measurement. Each measurement contains a degree of uncertainty due to the limitations of instruments and the people using them. Uncertainty, rather than error, is the important term in expressing results of chemical or any other type of measurement. The uncertainty of a measured value is an interval of confidence around that value, such that any repetition of the measurement is certain to produce a new result that lies within this interval. This uncertainty interval is assigned for a specific test procedure by following established principles of estimation of uncertainty. The estimation of uncertainty can in itself be a complex task, and several publications exist to assist in the determination (EURACHEM/CITAC 2000, ISO 1993, NIST 1994). Uncertainties should also be stated along with a probability. Usually, the measured value is stated to lie within a defined confidence interval with a corresponding probability. Commonly, a 95% confidence interval is used, which means an interval of twice the standard deviation (i.e. the standard deviation [sd] for the results from a series of tests made on the same sample). This means that if the measurement is repeated, 95% of the time the new measurement will fall in this interval (i.e. the confidence interval). Therefore, the format for expressing results is "value plus or minus uncertainty (95% confidence)".
- The width of a standard piece of A4 paper is measured using a ruler and the result is stated as 210 ± 0.5 mm. By stating the uncertainty to be 0.5 mm you are claiming with confidence that every reasonable measurement of this piece of paper by other experimenters will produce a value not less than 209.5 mm and not greater than 210.5 mm.
- The concentration of free sulfur dioxide in a wine sample is measured and reported as 25 ± 3 mg/L (95% confidence), the interval of 3 mg/L having been determined as that for 95% confidence.
Therefore it is always possible to present results in a completely certain format. In the worst case, where the uncertainty is large relative to the result value, the measurement might be nearly useless whilst still being completely certain! For example, if the distance to an archery target is estimated as 100 metres with an uncertainty of 50 metres, the statement is very certain but might be of little value to the archer. Don't let this put you off laboratory testing - just keep in mind the concept of uncertainty and be aware of it when interpreting your test results. Always keep in mind the context in which the results are being used and ensure that the results are suitable for the intended use. Therefore, good analysts will endeavour to develop procedures whose confidence intervals (the uncertainty) are as small as possible.
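As a concrete illustration of the wine example above, the sketch below applies the simple two-standard-deviations rule from the text to a set of replicate results. The replicate values are assumed for illustration, and a rigorous uncertainty estimate would follow the cited guides rather than this shortcut:

```python
import statistics

def report_with_uncertainty(replicates):
    """Report mean +/- 2*sd, the approximate 95% confidence interval described above."""
    mean = statistics.mean(replicates)
    sd = statistics.stdev(replicates)  # sample standard deviation of the replicate results
    return f"{mean:.1f} +/- {2 * sd:.1f} (95% confidence)"

# Hypothetical replicate free-SO2 results in mg/L (values assumed, not from the text):
free_so2 = [24.1, 25.6, 25.9, 24.7, 24.9, 25.4]
print(report_with_uncertainty(free_so2))  # 25.1 +/- 1.3 (95% confidence)
```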
Every measurement should be considered along with a confidence interval, which is then assigned to the measurement as its uncertainty. The aim of quality control is to ensure that the results generated are of a quality that has been defined as acceptable. Quality control can be implemented in various ways, including the following:

- Testing of a known standard as a check on accuracy (made up in-house, or a certified reference material).
- Regular testing of a control sample (e.g. a cask wine previously analysed) to check on the equipment and the analytical techniques. It is an extremely useful and insightful practice to plot the results of such control samples on a chart, as this can give a rapid indication of any problems as they occur.
- Including replicates and comparing them with pre-determined acceptance/rejection criteria for repeatability (precision).
- Interlaboratory proficiency testing: analysing the same samples at several laboratories (e.g. The Interwinery Analysis Group Inc.) using the same testing procedures and comparing the group results. This can help to identify systematic errors and deficiencies in analytical procedures.
- Internal auditing: presenting replicate samples 'blind' and comparing the results against the precision criteria.

When using any of the quality control steps above, it is important that criteria are set in order to determine whether the results are of acceptable quality. If the results of the quality control tests fail to meet the criteria, there must be clear and unambiguous corrective action procedures in place to identify the cause of the failure and prevent recurrence. Actions to identify the cause can include tracing back records, or resubmitting test samples under 'blind' conditions to one or several operators. Corrective actions to address the causes can include calibration of instrumentation, training of operators, and repairs and maintenance of instrumentation or equipment. By documenting these procedures you will be establishing the foundation of a quality management system for your laboratory. Once established, it will serve well as the basis for a fully documented quality management system that could, at some time in the future, be certified to international standards such as ISO 17025 (Standards Australia 1999).

The World Wide Web is a very useful source of information and there are many sites that can provide relevant material. Using any one of the many available search engines is a good way to start. Suggested key words for a search include, amongst others: laboratory, accuracy, precision, chemistry, wine, analysis.

References and further reading:
- EURACHEM/CITAC QUAM:2000.P1. Quantifying uncertainty in analytical measurement. Ellison, S.L.R.; Rosslein, M.; Williams, A. (eds). EURACHEM/CITAC Guide; 2000.
- International Organization for Standardization. Guide to the expression of uncertainty in measurement. ISO, Geneva; 1993 (ISBN 92-67-10188-9).
- Standards Australia. AS ISO/IEC 17025-1999. General requirements for the competence of testing and calibration laboratories. Standards Australia International Ltd, PO Box 1055, Strathfield, NSW 2135, Australia.
- NIST Technical Note 1297. Guidelines for evaluating and expressing the uncertainty of NIST measurement results. Taylor, B.N.; Kuyatt, C.E. National Institute of Standards and Technology, Gaithersburg, MD 20899-0001; 1994.
John Quincy Adams, c. 1844. Oil on canvas. National Portrait Gallery, Washington, DC, USA.

His political career began in 1794, when George Washington appointed him ambassador to the Netherlands. John, who hadn't actually sought political involvement, initially wanted to decline the position, but was convinced to accept by his parents. This happened again in 1796, when he was appointed ambassador to Portugal. By this time John was bent on turning down the offer, but was touched and changed his mind upon hearing that Washington had praised his diplomatic abilities most highly, going so far as to call him the "most valuable official" that the United States had abroad. Adams was thirty when his father became president, and was appointed ambassador to Prussia at Washington's recommendation. While abroad he finally married his wife, Louisa Catherine Johnson Adams, to whom he'd proposed nearly five years earlier on one of his previous excursions to Europe.

Upon the termination of his diplomatic duties and his subsequent return to the United States, he ran for and was elected to the Massachusetts State Senate in 1802. The following year he was selected to serve in the US Senate by the General Court of Massachusetts, resigning in 1808. John continued his diplomatic career under James Madison, who appointed him the first United States Ambassador to Russia in 1809. When James Monroe became president, he selected Adams to serve as his Secretary of State in 1817. Adams' most prominent achievements in this position include the Florida Treaty of 1819, the Treaty of 1818 and the Monroe Doctrine, which was introduced in 1823. His term as Secretary ended in 1825, when he assumed the presidency after winning the election of 1824. During his time in office he focused primarily on domestic affairs and improvements, such as building roads and other infrastructure.

After his presidential term expired, Adams served in the House of Representatives from 1831 until his death, winning reelection eight consecutive times. While in Congress, Adams was a strong abolitionist and proponent of the rights of enslaved people, on at least one occasion even arguing for the freedom of African captives who had mutinied and commandeered an illegal Spanish slave ship. Adams died on 23 February 1848, in the Capitol Building in Washington, D.C., two days after suffering a severe cerebral hemorrhage.
Sources: Globe, Clker (http://bit.ly/1CVSonk); Stick Figure, Clker; iRubric screenshot (2:30, http://bit.ly/1AZsa1O); Data Chart (3:22), provided by the author; Schoology Quiz (3:42), provided by the author; Schoology Quiz (4:04), provided by the author.

Hi everyone and welcome. My name is Gino Sangiuliano, and today we're going to continue with how to use scales and rubrics to assess standards-based assignments. Let's take a look at this.

We'll start by defining the term proficiency scales. Proficiency scales are scales running from low to high that are used to measure competency on a specific skill, and they typically include a wide range of score levels. Proficiency scales can be created for any subject area and at any grade level. You start with what it is you want students to know, or be able to do, and work from there.

Here's an example of creating a proficiency scale. (For more information you can look up the work of Robert Marzano on his website, marzanoresearchlaboratories.com.) Step one, you create a learning goal. This is something you can take right from the Common Core State Standards. For this example I'll use Common Core State Standard ELA-Literacy W.4.2.e, which reads, "provide a concluding statement or section related to the information or explanation presented." Step two, you place that learning goal right in the middle of your scale. In this case we're going to use three. Based on that goal, you create a higher-level goal and place it on the scale as four. And then you create a simpler one on the scale at two. One and zero do not have any goals associated with them.

So here's what it might look like. As you can see, for number three you have "provide a concluding statement or section related to the information or explanation presented." I bumped it up to a four: "provide a concluding statement or section related to and enhancing the information or explanation presented." And I bumped it down a level for number two, for a student who just provides a concluding statement. For number one, there's no evidence of performance. And as you can see, I've also added the 0.5, 1.5, 2.5, and 3.5 levels, to give a little wiggle room in between those descriptors. On Robert Marzano's website you'll find a proficiency scale bank that he encourages teachers to pull from and use or modify.

Now, standards-based rubrics measure multiple competencies, and they're typically scored on a four-point rubric. There are a lot of tools out there, like Goobric and iRubric. In fact, the screenshot is of the iRubric home. As you can see, they have a gallery of rubrics. You can search for a specific rubric. You can build your own and save them into My Rubrics. And you can also find and create assessments as well. Here's an example of a standards-based rubric taken from iRubric. As you can see, it has the four points: advanced, proficient, basic, and below basic. This rubric measures the student's ability to write a persuasive paragraph, and you can see the competencies that are listed: introduction, content, organization, style, and language and conventions.

When you walk into many classrooms you're likely to see charts with colored stickers and graphs tracking student growth. Teachers have long held this information for themselves. But what we are beginning to realize, through the work of such researchers as Grant Wiggins and John Hattie, is that when students track and own their progress it helps them to focus on what they need to do to improve their work.
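For teachers comfortable with a little scripting, the four-point scale described above maps naturally onto a small lookup table. The following is a minimal Python sketch (the descriptor wording is paraphrased from the example in this tutorial, and the function name is my own); it shows how whole-number and half-point scores could be resolved against the descriptors.

```python
# Level 3.0 is the learning goal (CCSS ELA-Literacy W.4.2.e);
# half-point scores fall between two neighbouring descriptors.
proficiency_scale = {
    4.0: "Provides a concluding statement or section related to AND "
         "enhancing the information or explanation presented.",
    3.0: "Provides a concluding statement or section related to the "
         "information or explanation presented.",  # the learning goal
    2.0: "Provides a concluding statement.",
    1.0: "No evidence of performance.",
}

def describe(score):
    """Return the descriptor for a score; half-point scores
    (e.g. 2.5) report the two levels they sit between."""
    if score in proficiency_scale:
        return proficiency_scale[score]
    lower = max(s for s in proficiency_scale if s < score)
    upper = min(s for s in proficiency_scale if s > score)
    return f"Between level {lower} and level {upper}."

print(describe(3.0))  # the learning goal descriptor
print(describe(2.5))  # "Between level 2.0 and level 3.0."
```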
This is an example of a class consensogram on informational text. There are numerous digital tools that can help you track student progress: for example Aspen, Google Sheets, and Sophia, just to name a few. Here's an example from Schoology. This is an online quiz for a ninth-grade geometry class. Students take this online and submit their work to the teacher. Students get instant feedback that looks like this, sent to them, telling them what they got right and wrong. This is another example of students owning the data and then doing something with it. Some systems, like TenMarks, also allow teachers to create specific playlists based on the needs of their students, and they also have video tutorials to explain what students got wrong. From a teacher's perspective, the results get organized and automatically sent to them in the form of a spreadsheet or a graph, making it easier to target students in need of interventions or enrichment.

Let's go ahead and summarize what we covered in today's lesson. We defined proficiency scales and looked at some examples. We defined standards-based rubrics and looked at an example. We talked about the importance of owning the data. We also mentioned some tools that track progress digitally for students and their teachers.

Here's some food for thought. Visit YouTube or TeacherTube and search for one of the tools that we mentioned in this lesson. You're sure to find videos explaining their use in greater detail. For more information on how to apply what you learned in this video, please view the additional resources section that accompanies this presentation. The additional resources section includes links useful for applications of the course material, including a brief description of each resource. As always, thanks for watching. We'll see you next time.

(00:14-02:19) Proficiency Scales
(02:20-03:06) Standards-Based Rubrics
(03:07-03:33) Owning the Data
(03:34-04:28) Digital Tools
(04:50-05:23) Food for Thought

Standards-Based Learning Teacher Handbook
This comprehensive handbook instructs teachers on how and why to use standards-based grading. Instructions for the development of proficiency scales and rubrics are included in this document. In addition, there are templates for ease of teacher use.

Formative Assessment & Standards-Based Grading: The Journey for Dansville Schools
This presentation illustrates the connections between standards, formative assessment strategies, and increased student motivation and achievement. Included in this presentation are clear process steps for teachers to follow as they implement standards-based instruction and aligned formative assessment practices.
Millions of years ago, slow changes in the Earth's orbit changed the climate in East Africa dramatically. Every 20,000 years or so, the region vacillated between very dry and very wet periods. These extreme changes may have played a vital role in driving human evolution, according to what's called the pulsed climate variability hypothesis. A new paper from the researcher who first introduced the idea in 2009, University College London geography professor Mark Maslin, links periods of change in the East African Rift Valley, namely the increase in freshwater lakes, with evidence of human evolution.

"It seems modern humans were born from climate change," Maslin explained in a press statement, "as they had to deal with rapid switching from famine to feast—and back again." This, he says, was what drove the evolution of new species with bigger brains, and later forced them to migrate out of East Africa, moving down toward South Africa and north to Europe and Asia.

Along with University of Manchester research fellow Susanne Shultz, Maslin compared all lakes known to have existed in the East African Rift System over the last 5 million years with climate records and records of human evolution. Major events in human history, including when humans first started to migrate out of East Africa, happened during wetter periods. Around 1.9 million years ago, for example, when a number of deep freshwater lakes appeared, early Homo erectus arrived on the scene, at the same time as quite a few new species. The researchers believe the new species developed as a direct result of the changing ecology in the region.

Maslin says "the climate of East Africa seems to go through extreme oscillations from having huge deep freshwater lakes surrounded by rich lush vegetation to extremely arid conditions—like today—with sand dunes in the floor of the Rift Valley." More water would have forced early humans to migrate by increasing the availability of food and water (and tributaries to follow as they travelled) while decreasing the amount of space in the valley they could inhabit.

Lake Tanganyika from space, 1985.

"The occurrence of deep freshwater lakes would have forced expanding hominin populations both northwards and southwards," the researchers write, generating "a pumping effect pushing them out of East Africa." In other periods, the lakes would dry up, forcing the hominins to adapt to survive, which may have helped them evolve to be more flexible. This could be linked to other changes in hominin populations, such as bigger brains, increased use of throwing projectiles, and changes in social behavior.

The link between bigger brains and the changing environment is a little more tenuous. Homo erectus represented an 80 percent increase in brain size over previous species, at the same time that there was the greatest amount of water covering the region—potentially one huge lake covering something near 1,000 miles of territory. But other periods of brain expansion occurred during very dry periods, suggesting that some brain size increases were driven by aridity. The researchers allow that their hypothesis may not capture the whole of human evolution.

The study appears today in PLOS ONE.
This document established a nameless part of New South Wales as the Northern Territory of South Australia. The land from the western border of Queensland (138 degrees east longitude) to the eastern border of Western Australia (129 degrees east longitude) was thus given a name, and placed under South Australian administration. These original Letters Patent, sent to the Governor of South Australia in 1863, defined and named the area, laid the legal basis for government and ensured that its citizens had the same rights to political representation as 'old' South Australians.

In 1787 Governor Arthur Phillip's Commission established as his domain all land from the east coast charted by Lieutenant James Cook to 135 degrees east longitude (covering the eastern one-third of the future Northern Territory). In 1824, during Governor Brisbane's term of office, the British Government decided to set up a military and trading post on the north coast of Australia. Fort Dundas, on Melville Island, was some 5 degrees west of the defined boundaries of New South Wales, so the Colonial Office included in the Commission issued to the next Governor of New South Wales, Ralph Darling, an extension of the western boundary of New South Wales to 129 degrees east longitude. In 1829 this became the permanent boundary of the new Colony of Western Australia.

The three British military/trading posts set up on the north coast (Fort Dundas, 1824–1829; Fort Wellington, Raffles Bay, 1827–1829; and Victoria, Port Essington, 1838–1849) marked Britain's claim to the whole of the Australian continent. In practice each was mainly concerned with British commercial and strategic interests in the Indian Ocean. They failed as trading posts and by the 1840s it was clear that no other country would challenge Britain's claim to the whole of Australia. In 1846 the British Government produced Letters Patent formally creating a colony of 'North Australia' including the present Northern Territory; but this was revoked in December 1846 and all plans for settlement abandoned.

The remaining boundaries of the future Northern Territory were the result of fixing the northern boundary of South Australia in 1836 and the western boundary of Queensland in 1859. Nominally, the 'left-over' area remained part of New South Wales, though after 1862 no part of it touched the border of that Colony. In that year, on the sixth attempt over four years, John McDouall Stuart crossed the continent from Adelaide to the Arafura Sea. He travelled through the entire length of the Northern Territory and earned a £2000 reward offered by the South Australian government.

The South Australian government used the work of explorers such as Stuart to justify a claim to include the area within the boundaries of their Province. South Australia had a European population of 140,000 and half the existing area of the Colony was still unexplored. The Duke of Newcastle, at the Colonial Office, had well-founded doubts about the ability of the South Australian colonists to develop such a vast northern extension, but no other Colony sought responsibility for the 'left-over' area. South Australia's persistence wore down the resistance of a British government that believed any development to be better than none, so long as it cost the home country nothing. On 6 July 1863 these Letters Patent, revocable at Britain's will, were issued, annexing the Northern Territory to South Australia.
This document was the legal basis for South Australian occupation and ensured that people in the Northern Territory had the same rights to political representation as other South Australians. These rights were given effect from 1882, when Territorians gained the vote as the Northern Territory was added to the South Australian electorate of Flinders. From 1888 they possessed two members of their own in the South Australian Legislative Assembly and voting rights for the Legislative Council. The few white women in the Northern Territory shared the franchise with their South Australian sisters when that Colony became the first to grant full voting rights to women in 1894. After Federation, South Australia automatically included all white adult Territorians in Commonwealth electorates.

Long Title: Letters Patent annexing the Northern Territory to South Australia 1863
No. of pages: 1
Measurements: 55 x 20 cm
Features: Decorative illumination on borders
Location & Copyright: State Records of South Australia
Reference: SRSA: GRG 224/131
The idea of pronouns can be hard for young children to grasp, but learning pronouns in Italian should also help your child gain a better understanding of English grammar, what pronouns mean, and how they are used. This is the second lesson in the Teaching Children Italian series. Teaching children to express their likes and dislikes in Italian is a fun and easy way to practice new vocabulary. Talking about likes and dislikes uses ideas that are relevant to kids’ thoughts, feelings, and interests, and will let them feel successful at communicating in Italian. If you only know a few words of Italian, having the ability to be polite will make those few words go a lot farther because people will appreciate your effort to be courteous and respectful in their language. This lesson focuses on some of these common Italian greetings and courtesy phrases. This is an Italian conversation lesson designed for beginning learners of any age. In this lesson, students will be introduced to Italian job words, and conduct short interviews in pairs. This is a lesson designed for beginning learners of Italian of any age. Using words for food, learners will practice articles and the ‘there is/there are’ construction in a short role-play. A media file containing food and drink vocabulary is attached. This lesson will help teach your child Italian words for items associated with school. This lesson will help you prepare your child for learning the different parts of the house in the Italian language. Here we will look at worksheets, index cards, and audio files which can help you with this lesson. This lesson will consist of teaching children parts of the body in the Italian language. Try these fun activities, worksheets, index cards, and audio files in your classroom. In this lesson we will look at different ways to teach children the names of the family members in Italian. Children will listen to audio samples as well as use index cards and worksheets to help them grasp this portion of the Italian language. Children love learning about animals! In this lesson plan, teach children Italian vocabulary for animals with activities, worksheets, index cards, and audio files.
Explosive volcanoes such as Indonesia's Merapi (erupting here in 2006) have the potential to shift rain patterns. Credit: NASA Earth Observatory.

Scientists have long known that large volcanic explosions can affect the weather by spewing particles that block solar energy and cool the air. Some suspect that extended "volcanic winters" from gigantic blowups helped kill off the dinosaurs and the Neanderthals. In the summer following Indonesia's 1815 Tambora eruption, frost wrecked crops as far off as New England, and the 1991 blowout of the Philippines' Mount Pinatubo lowered average global temperatures by 0.7 degrees F—enough to mask the effects of manmade greenhouse gases for a year or so.

Now, scientists have shown that eruptions also affect rainfall over the Asian monsoon region, where seasonal storms water crops for nearly half of earth's population. Tree-ring researchers at Columbia University's Lamont-Doherty Earth Observatory showed that big eruptions tend to dry up much of central Asia, but bring more rain to southeast Asian countries including Vietnam, Laos, Cambodia, Thailand and Myanmar—the opposite of what many climate models predict. Their paper appears in an advance online version of the journal Geophysical Research Letters.

The growth rings of some tree species can be correlated with rainfall, and the observatory's Tree Ring Lab used rings from some 300 sites across Asia to measure the effects of 54 eruptions going back about 800 years. The data came from Lamont's new 1,000-year tree-ring atlas of Asian weather, which has already produced evidence of long, devastating droughts; the researchers have also done a prior study of volcanic cooling in the tropics. "We might think of the study of the solid earth and the atmosphere as two different things, but really everything in the system is interconnected," said Kevin Anchukaitis, the study's lead author. "Volcanoes can be important players in climate over time."

Large explosive eruptions send sulfur compounds high into the atmosphere, where they turn into tiny sulfate particles that deflect solar radiation. The resulting cooling on earth's surface can last for months or years. (Not all eruptions will do it; for instance, the continuing eruption of Indonesia's Merapi this fall has killed dozens, but this latest event is probably not big enough by itself to effect large-scale weather changes.) As for rainfall, in the simplest models, lowered temperatures decrease evaporation of water from the surface into the air, and less water vapor translates to less rain. But matters are greatly complicated by atmospheric circulation patterns, cyclic changes in temperatures over the oceans, and the shapes of land masses. Up to now, most climate models incorporating known forces such as changes in the sun and atmosphere have predicted that volcanic explosions would disrupt the monsoon by bringing less rain to southeast Asia, but the researchers found the opposite.

The researchers studied eruptions including one in 1258 from an unknown tropical site, thought to be the largest of the last millennium; the 1600–1601 eruption of Peru's Huaynaputina; Tambora in 1815; the 1883 explosion of Indonesia's Krakatau; Mexico's El Chichón in 1982; and Pinatubo. The tree rings showed that huge swaths of southern China, Mongolia and surrounding areas consistently dried up in the year or two following big events, while mainland southeast Asia got increased rain.
The researchers say there are many possible factors involved, and it would be speculative at this point to say exactly why it works this way. "The data only recently became available to test the models," said Rosanne D'Arrigo, one of the study's coauthors. "Now, it's obvious there's a lot of work to be done to understand how all these different forces interact."

For instance, in some episodes pinpointed by the study, it appears that strong cycles of the El Niño-Southern Oscillation, which drives temperatures over the Pacific and Indian oceans and is thought to strongly affect the Asian monsoon, might have counteracted eruptions, lessening their drying or moistening effects. But it could work the other way, too, said Anchukaitis; if atmospheric dynamics and volcanic eruptions come together with the right timing, they could reinforce one another, with drastic results. "Then you get flooding or drought, and neither flooding nor drought is good for the people living in those regions," he said.

Ultimately, said Anchukaitis, such studies should help scientists refine models of how natural and manmade forces might act together in the future to shift weather patterns—a vital question for all areas of the world.
Fossil fuels are fuels formed from plant and animal matter that decayed and was transformed over millions of years. Fossil fuels include coal, oil (petroleum), and natural gas. These fuels have been the primary energy source for over 200 years, but they will eventually have to be replaced by other sources because the world supply of fossil fuels is finite. The amount of each type of fossil fuel that is left is debated, but it is clear that the current rate of use for any fossil fuel cannot be sustained. Today, fossil fuels supply approximately 80% of the energy consumed in the United States and about 87% worldwide, according to the BP Statistical Review of World Energy.

An interesting tool for visualizing the sources of energy and their ultimate outcome is the so-called spaghetti diagram, which was developed and used by Lawrence Livermore National Laboratory. Figure 1 shows the diagram, with line width proportional to the percentage of each source, for a recent year in the United States. It clearly shows the dependency on fossil fuels. The left side shows the various sources and the right side shows the demand sectors. Electricity is an intermediate step, not a demand in itself. Notice that the largest source of energy is petroleum; approximately 72% of petroleum is used in the transportation sector. In the case of electricity, 48% of electrical energy comes from coal-fired plants and, as you can see, the process creates a lot of rejected heat. Rejected heat is due to the inefficiencies in generating and distributing electrical power. Notice that electricity production is the primary user of coal. The best electrical generation plants are less than 50% efficient, and 5 to 10% of the energy is lost in transmission lines. It is important to be aware of the inefficiencies in the generating and distribution process because they affect other demand areas.

Renewable energy sources—biomass, hydroelectric, geothermal, wind, and solar—constitute only a small percentage of the overall mix at this time. All are covered in more detail throughout this text, but a quick overview of the current mix of renewable sources in the United States is shown in Figure 2. The mix has been changing rapidly in recent years, with significantly more wind energy. Biomass, the largest source, includes a variety of materials, including wood, waste, garbage, and even plants that are grown for fuel. Hydroelectric is the second largest source today because of the huge infrastructure of dams and power plants. The other sources (wind, geothermal, and solar) account for 14% of renewable use in the United States. Worldwide consumption of all fossil fuels is continuing to rise.

Figure 1: Energy Flow for the United States

Coal has been used as a fuel for more than 2,000 years, and historical records document its usefulness to the early Greeks, Romans, and Chinese, as well as other cultures. Large-scale mining of coal was brought on by the Industrial Revolution, with the development of the steam engine and improvements in steel making. Coal is a combustible rock that is composed mostly of carbon and hydrocarbons. A hydrocarbon is a molecule containing only hydrogen and carbon (but not all hydrocarbons are fuels). Coal is derived from ancient plant life, mainly trees. It is believed that these ancient terrestrial forests were flooded rapidly and eventually sank, and that layer upon layer of dying plants was covered by sediments.
Mild heat and pressure condensed the organic material into peat in a process called diagenesis. If enough heat and pressure are supplied, the organic material undergoes physical and chemical change to form coal. This process takes several million years and turns the peat gradually into coal, which is found in sedimentary layers. Figure 3 shows a coal seam in a sedimentary rock formation.

Figure 2: Renewable Energy Sources in the United States

Figure 3: A Coal Seam in Sedimentary Rock

Depending on the conditions and the amount of carbon in the original materials, different types of coal formed. Coal is classified into four main categories based on energy content: anthracite, bituminous, sub-bituminous, and lignite (anthracite has the highest energy content but is not as common as the other types). Most coal today is used for generating electricity, but a smaller percentage is used by industry for making steel and other products.

Petroleum is also a nonrenewable fossil fuel, one that formed in the distant past in a two-step process. The process starts when aquatic organic sediments are compacted and broken down, with the aid of mild heat and microbes, into a waxy material known as kerogen and a black tarlike hydrocarbon called bitumen. (Bitumen can occur naturally or as a product of refining petroleum.) Kerogen can undergo further chemical and physical change in a process called catagenesis if it is compacted and buried deeper underground, where temperatures and pressures are higher. In this case, water is squeezed out and the kerogen breaks down into hydrocarbon chains by a process that is aided by the presence of certain minerals in marine deposits. This is equivalent to cracking, a term used by refineries when crude oil is converted to gasoline and other products. At the highest temperatures, natural gas forms. If the temperature is lower, oil forms. If the temperature is lower still, the kerogen remains unaltered.

Carbon, with four electrons in its outer shell, has the ability to bond to other carbon atoms and to form long chains and complex atomic arrangements with hydrogen; such structures are the fundamental chemical basis of both petroleum and natural gas. The density of oil and natural gas is lower than that of the rock layers in which they are buried, so these substances would normally migrate to the surface. Instead, they are trapped by a layer of impervious rock called cap rock, which is typically shale. The cap rock traps the gas and oil in porous sedimentary rock formations. Large volumes of natural gas and viscous liquid oil are trapped in these underground regions, called reservoirs. The natural gas is under pressure and escapes when the formation is drilled. Figure 4 illustrates how oil, gas and sometimes water are trapped underground by the cap rock. There are various types of oil and natural gas traps, but the common feature is that oil moves through the porous rock layer and is trapped by an impervious layer in the underground reservoir. Reservoirs that contain very hot water under pressure are useful as a geothermal heat source.

Figure 4: Underground Reservoir. The particular reservoir shown is an anticline; other types have been identified by geologists.

The principal use for petroleum is in the transportation sector. In the United States, approximately 72% of all petroleum is used to make gasoline, diesel, and other products for vehicles. One important use for petroleum that is often overlooked is as a chemical feedstock.
It is also used in the manufacture of many products including lubricants, waxes, solvents, asphalt, hydraulic fluid, and vinyl, to name a few.

Carbon Dioxide and Methane

Carbon dioxide emissions result from the burning of fossil fuels and other fuels, as well as from certain other chemical reactions, such as those that occur during the production of cement. An example of a naturally occurring emission is a volcanic eruption. Methane enters the atmosphere primarily from the production of fossil fuels (coal, oil, and natural gas), livestock, and the decay of organic waste. The Environmental Protection Agency (EPA) estimates that one-half of all methane enters the atmosphere from human activities.

The primary constituent of natural gas is methane, the simplest hydrocarbon. Although natural gas is nonrenewable, methane can also be produced by various processes, such as the decomposition of waste in landfills and the anaerobic (without oxygen) decay of organic matter such as manure or biomass. The chemical formula for methane is CH4. This formula indicates a single carbon atom bound to four hydrogen atoms. When methane burns, it reacts with oxygen in the air to release energy and form carbon dioxide and water as products. The basic chemical reaction for burning methane is:

CH4 + 2O2 → CO2 + 2H2O + energy

The reactants (methane and oxygen) are written on the left side and the products of the reaction (carbon dioxide and water) are written on the right side. This equation not only shows the reactants and products, it also shows the ratios of molecules involved: for each molecule of methane, two molecules of oxygen combine to release one molecule of carbon dioxide and two molecules of water, plus energy. As shown by the chemical equation, a by-product of this reaction is carbon dioxide, which typically escapes into the atmosphere. In addition to methane, natural gas also contains other, more complex hydrocarbons, as well as some undesirable materials, including sulfur.

Carbon dioxide and methane are each considered to be a greenhouse gas. Greenhouse gases contribute to the greenhouse effect by absorbing infrared energy radiated from the Earth's surface and reradiating it, partly back toward the surface. Other greenhouse gases include nitrous oxide (N2O), fluorinated gases (CFCs, etc.), and water vapor (H2O).

From the chemical equation for burning methane, the ratio of masses of the reactants and products can easily be determined from their molecular weights. The atomic weights are given on the periodic table of the elements and in other sources. In a chemical reaction, mass is always conserved; that is, the mass of the reactants is equal to the mass of the products. To determine the relative weights of the reactants and products, look up the atomic weight of each atom, then determine molecular weights by multiplying the number of atoms of each type by its atomic weight. The following example illustrates the idea.

What is the mass of carbon dioxide (CO2) that is released to the atmosphere if 1,000 kg of methane (CH4) is burned? (One thousand kg is approximately 2,200 pounds, or 1.1 tons.) The equation shows that each molecule of CH4 reacts with 2 molecules of O2 to produce 1 molecule of CO2 and 2 molecules of H2O. Start by determining the molecular weight of each reactant and product. By expressing the molecular weights in grams, you obtain the relative masses of the reactants and products. The atomic weight of carbon is 12.0 g, hydrogen is 1.0 g, and oxygen is 16.0 g.
CH4 = 1 carbon and 4 hydrogen = 1(12.0 g) + 4(1.0 g) = 16.0 g
2O2 = 2 molecules of oxygen, each with 2 atoms = 4(16.0 g) = 64.0 g
CO2 = 1 carbon and 2 oxygen = 1(12.0 g) + 2(16.0 g) = 44.0 g
2H2O = 2 water molecules, each with 2 hydrogen and 1 oxygen = 4(1.0 g) + 2(16.0 g) = 36.0 g

Thus, 16.0 g of CH4 reacts with 64.0 g of O2 to produce 44.0 g of CO2 and 36.0 g of H2O. You can check that the mass on each side is the same (80.0 g), as it must always be in chemical reactions. From this you can set up a proportion:

(16 g CH4)/(1,000 kg CH4) = (44 g CO2)/(X kg CO2)

Solving for X (the unknown quantity of CO2) gives X = 2,750 kg CO2, which is approximately 3 tons. This illustrates that burning methane creates a weight of carbon dioxide that is 2.75 times the weight of the original methane gas.

Applications for natural gas include heating for homes and businesses. Most of the heat from the reaction is available for heating; a small fraction escapes with the flue gases. Natural gas is also used as an alternative to gasoline for automobiles and trucks, and is widely used in electrical power generation. As an energy source for electrical power, it emits the lowest amount of carbon dioxide of any fossil fuel per unit of energy produced. Natural gas turbines are used to supplement renewable sources when those sources are not available, because the turbines have quick start-up times and can be brought online rapidly. Another application of natural gas is as a fuel source for some fuel cells; it may find more widespread use when fuel cell vehicles become more common in the future.
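For readers who like to verify such calculations, here is a minimal Python sketch that reproduces the worked example above. The dictionary-based helper and its name are illustrative additions, not part of the original text.

```python
# Atomic weights (g/mol) used in the worked example above
ATOMIC_WEIGHT = {"C": 12.0, "H": 1.0, "O": 16.0}

def molecular_weight(atoms):
    """Sum atomic weights for a molecule given as {element: count}."""
    return sum(ATOMIC_WEIGHT[el] * n for el, n in atoms.items())

ch4 = molecular_weight({"C": 1, "H": 4})  # 16.0 g/mol
o2 = molecular_weight({"O": 2})           # 32.0 g/mol
co2 = molecular_weight({"C": 1, "O": 2})  # 44.0 g/mol
h2o = molecular_weight({"H": 2, "O": 1})  # 18.0 g/mol

# CH4 + 2 O2 -> CO2 + 2 H2O: mass is conserved (80.0 g on each side)
assert ch4 + 2 * o2 == co2 + 2 * h2o

# Mass of CO2 released by burning 1,000 kg of methane
methane_kg = 1000.0
co2_kg = methane_kg * (co2 / ch4)  # ratio 44/16 = 2.75
print(f"{co2_kg:,.0f} kg of CO2")  # prints: 2,750 kg of CO2
```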
Q&A – What Does 'Open-Ended Play' Actually Mean?

Fiona Bland, early years adviser at NDNA, clears up the misconceptions surrounding open-ended play…

What is open-ended play?

Children are naturally curious and explore the world around them through play experiences. Open-ended play can be described as play that has no pre-determined limitations and no fixed answer – children simply follow their imagination to allow the play to go in any direction their creativity takes them. As there are no set outcomes, there is no 'right' or 'wrong' with open-ended play.

What is the difference between open-ended resources and made-for-purpose toys?

Made-for-purpose toys are often single-purpose – think puzzles or games which have a predetermined conclusion (a puzzle is completed, a game has a winner). In contrast, in a child's hands a stick can be anything they want it to be – a horse, a guitar, a magic wand, a tool to make marks with in the sand or soil, or part of a den-building project. Open-ended resources are multi-use and encourage a child to use their imagination and creativity. There are no rules, no expectations, no specific problems to solve, and no pressure to produce a finished product when engaging freely in open-ended play.

An open-ended resource is any item that can be used in a range of ways – this could include things such as wooden blocks, a range of fabrics, a lump of clay, milk crates, shells, paper and a range of mark-making tools, pebbles and stones, water, cardboard boxes – the list is endless. Blocks are particularly good resources because of their ability to be turned into anything the children want to create. They support children to develop motor skills; communication and language; mathematical vocabulary and concepts; self-control and concentration – as well as creativity, imagination and exploration. They can also be incorporated into all areas of play, including with water, sand and role play.

Is it play 'on the cheap'?

Not really. All types of resources have their place, but due to the nature of open-ended play, the resources you need can be sourced from the outdoor environment – from twigs to a tree full of conkers – or recycled from homes and the community. For example, nurseries can ask parents or local suppliers for donations of items they would otherwise throw away, like crates, wood, fabric, buttons or plastic bottles destined for the recycling box. This means that open-ended play can be managed on any size of budget – but while the resources may be cheap, the play is invaluable.

It allows children to express themselves freely and creatively, not bound by pre-set limitations. Children can follow their own interests and fascinations, and act out personal experiences. It enables learning in a holistic way, through active play with diverse materials. Children can explore the look and feel of the materials and objects – what they can do, how they move. The creative nature of open-ended play also enhances cognitive skills, such as working memory, cognitive flexibility, self-regulation and self-discovery. Children can focus on creating based on inner inspiration and motivation. In contrast, closed-ended activities have a determined outcome, a right answer and a restriction on individual differences.

Should there be an element of risk involved?

Managing risk is an essential life skill for children to learn, enabling them to keep themselves safe and to manage future risks they encounter in life.
It is important to look at the benefits of any activity as well as the risks involved. Talk with the children about the risks, discuss the issues together and share ideas on how you can make activities safe.

Does open-ended play always have to be outside?

It can be explored both indoors and outdoors, but being outdoors will allow children to explore the world around them and access natural resources and materials to include in their play. You can incorporate each season, and the resources and opportunities they provide for open-ended learning – for example, using snow as a canvas for painting.

Won't all those loose parts mess up the nursery?

Cleaning and tidying following any activity should be incorporated as part of your daily routines. Having clearly identified spaces, drawers, boxes or containers will make this process easier, as children will be able to access and return the items they use.

What topics are covered in NDNA's training course, Open up to Play?

This course covers a range of topics, including:
- Open-ended resources in learning and development
- Enhancing the learning environment
- Open-ended resources versus made-for-purpose toys
- Block play
- Encouraging parents to collect open-ended resources
- Saving money using open-ended resources

NDNA can deliver the course as a full day for your staff team.
What Is the Formula for Calculating Momentum?

The formula for calculating momentum is mass multiplied by velocity: an object's momentum equals its mass times its velocity (p = mv). Momentum is measured in kilogram-meters per second (kg·m/s), which are standard metric units.

Momentum is a property of every moving object, and it depends on both the object's mass and its velocity. An object at rest does not have momentum because it has zero velocity. When calculating momentum, one must also account for the direction in which the object is moving, since momentum is a vector quantity. This means that if an object is moving backwards, it may have negative momentum. However, the negative sign refers only to the object's direction of motion, not to its speed.
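As a quick illustration of the formula and its vector nature, here is a minimal Python sketch (the mass and velocity values are chosen arbitrarily; the sign of the velocity carries the direction):

```python
def momentum(mass_kg, velocity_m_s):
    """Momentum p = m * v, in kilogram-meters per second (kg*m/s).
    The sign of the velocity encodes the direction of motion."""
    return mass_kg * velocity_m_s

print(momentum(5.0, 3.0))   # 15.0 kg*m/s, moving forward
print(momentum(5.0, -3.0))  # -15.0 kg*m/s, same speed, opposite direction
print(momentum(5.0, 0.0))   # 0.0, since an object at rest has no momentum
```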
In the early 1800s, European nations held just a little land in Africa, mostly in areas along the coast. In the mid-1800s, though, Europeans renewed their interest in Africa. This rose, in part, from a desire to create overseas empires, a movement called imperialism. European nations wanted to control lands that had the raw materials they needed for their industrial economies. They also wanted to open up markets for the goods they made. Nationalism fed the drive for empires as well: a nation often felt that gaining colonies was a measure of its greatness. Racism was another reason, as Europeans thought that they were better than Africans. Finally, Christian missionaries supported imperialism, believing that European rule would end the slave trade and help them convert native peoples.

As a result of these factors, the nations of Europe began to seize lands in Africa. Technology helped them succeed. Steam engines, railroads, and telegraphs let them penetrate deep into Africa and still keep contact with the home country. Machine guns gave them a weapon of far greater power than any African peoples possessed. Finally, the discovery of quinine gave doctors a weapon against malaria, which struck many Europeans. They were also helped by the lack of unity among African peoples.

The events called the European "scramble for Africa" began in the 1880s. The discovery of gold and diamonds in Africa increased European interest in the continent. So that they would not fight over the land, the European powers met in Berlin in 1884–85. They agreed that any nation could claim any part of Africa simply by telling the others and by showing that it had control of the area. They then moved quickly to grab land. By 1914, only Liberia and Ethiopia remained independent of European control.

Luis Rafael Acta 10A

Before European domination, Africa had hundreds of ethnic and linguistic groups, each with its own traditions, religion and culture. As early as the 1450s, Europeans started to have contact with Africans. However, for over 400 years Africans could keep Europeans out of most of Africa, and Europeans held only 10 percent of the land, because Africans had powerful armies. Later on, Europeans started to compete with each other over Africa because travel books and newspapers showed them facts about Africa that stirred their desire for exploration and adventure. After some planned trips to Africa, Europeans saw how rich Africa was, so they decided to go back and do more research. This brought them a sense of imperialism, which is the desire to take control of a country seen as inferior. Europeans believed that they were superior to the Africans because of their skin color; this is known as racism, the belief that one race is superior to another. Other factors encouraging imperialism were the Europeans' technological superiority, and the diverse languages and ethnic groups of the Africans, which made them easier to colonize because they were more divided and less united. Interest in Africa rose in the 1880s after gold was discovered in South Africa. By then, Europeans fought more over Africa because of the rich resources that benefited them. An agreement was made at the Berlin Conference to divide Africa between the Europeans. Later on, as European control grew, two groups in southern Africa stood out for defending their own interests: the Zulu and the Boers.

Saluna Chow 10a

European forces dominated Africa in the mid-1800s.
Before this domination, Africans were divided into many different linguistic and ethnic groups. In the early 1450s, Europeans and Africans first had contact, and for 400 years Africans could keep Europeans out of most African land; in the late 1880s, Europeans still held only 10 percent of it. Newspapers and travel books about Africa created curiosity among Europeans and Americans, who pushed for exploration, navigation, and adventure through Africa. After they saw how rich the continent was, they researched it further and wanted to take control of it (imperialism). A main reason Europeans became imperialistic was racism: they believed they were superior to the Africans because of their skin color. Other factors that promoted the colonization of Africa were that Europeans were technologically superior to the Africans, and that Africans had many different languages and ethnic groups, which made them more divided. After diamonds were discovered in South Africa in 1867 and gold in 1886, Africa became highly desired and Europeans were interested in it again. A treaty made at the Berlin Conference in 1884–85 said that Africa was going to be divided between the European countries. When Africans saw that all this was getting out of their control, two groups in particular organized to defend themselves: the Zulus and the Boers.

Gabriela Elias :D

In the early 1800s, Africans controlled their own trade networks and provided the trade items. During this period Europeans took major expeditions into the interior of Africa. Most of the Europeans who went there were missionaries or explorers who did not agree with African slavery, and who mainly went to discover the mysteries and adventures of Africa. In the mid-1800s, however, Africans came under European domination. The people were divided into hundreds of ethnic and linguistic groups, speaking more than 1,000 different languages. The European countries had lots of power because of the Industrial Revolution, and this was a time in which Europeans were extremely racist and tried to gain control of parts of Africa. In 1884 to 1885 the Berlin Conference was held to lay down the rules for the division of Africa; no African ruler was invited to attend these meetings. European countries began colonizing because the goods they were selling still needed raw materials from Africa so their businesses could continue. During this period three groups clashed over South Africa in what became known as the Boer War, mainly a war between the British and the Boers, which Britain won. This caused the Europeans to make a big effort to change the political and social life of the people they had conquered.

the scramble for africa~

Industrialization increased the need for raw materials and new markets. Nations wanted more land to colonize because they needed more resources for their industries. Stronger countries dominated the political, economic, and social life of weaker countries, as usual. The Industrial Revolution sparked a desire to own more lands, and Europeans viewed an empire as a measure of national greatness. Europeans were also racist, and they were technologically superior.
In addition, they had the machine gun, the steam engine, and railroads; they had quinine against malaria; and Africans were not unified, which Europeans used to play rival groups against each other.

The process of industrialization had increased the demand for raw materials and for markets in which to sell finished products. The way to achieve this was by controlling new territories, and this is exactly what the European nations began doing in Africa. Another factor that made Europeans turn their heads towards Africa was their desire to build empires and show their national greatness; they knew they could achieve this by controlling other territories and peoples. Many Europeans believed that they belonged to a superior race, and that they therefore had the right to conquer other peoples such as the Africans. Colonizing Africa did not prove difficult for the Europeans. They aided themselves in their task of conquest with the newly invented machine gun, steamboat, and locomotive. The rivalries that existed between the different ethnic groups in Africa created a disunited Africa, so it was easier for the Europeans to fight the Africans. The colonization of Africa became a race: each country wanted to gain more territory than the others. To prevent conflict, several European countries met at the Berlin Conference from 1884 to 1885. At this conference the nations agreed that they would claim African land by notifying other nations of their claims and demonstrating their control of the claimed areas.

Africa at this time was very rich in resources and raw materials. This attracted a lot of interest from the Europeans, because they were in a period when they wanted to expand. In the mid-1800s, Europe started to gain control over African territory. It wasn't so simple at first, because Europeans controlled only 10% of Africa, mainly on the coast, and couldn't control the inner parts of the continent because of the people there. Little by little they started to gain control of more and more land, because Africans were not really able to stop the Europeans, and the Europeans were more advanced than they were. They wanted to gain total control of Africa to show that they were superior and great, and unfortunately for Africa they were on their way to achieving this. At this time, Africa was like the main focus of Europe; many European countries started to claim land wherever possible. Two factors intensified the interest of Europeans in Africa: diamonds were discovered in 1867, and then gold in 1886. Because more and more Europeans claimed African lands, they created the Berlin Conference in 1884–1885, where they basically divided Africa between them, and where a country could claim land but had to notify the others so disputes weren't created.

-Because of the Industrial Revolution, many of the European countries were ambitious. They wanted more resources to fuel their industrial production. Many of the countries looked to Africa as a source of raw materials and as a market for industrial products, so they went to Africa and imperialized it. Before the European invasion, Africa was controlled by Africans, only 10 percent was European, and the Africans' trade networks were not interfered with. All that changed when the Europeans started seizing land across Africa, waging war with other European nations to gain more territories or to protect them, and killing the Africans in those areas.
With the Berlin Conference the Europeans divided Africa and set down rules to prevent wars between European countries. This made the Africans furious, and three groups clashed: the Africans, the Dutch (Boers), and the British went to war with one another during this period.

Kendrick Abreu Grullon

European countries sought to take over African lands in order to control their resources. Imperialism took its toll, and Europe agreed it was for the best. Not wanting to fight over the lands in Africa, the European nations held the Berlin Conference, which divided African lands among the European nations involved. Although this was agreed upon, a struggle developed for the control of South Africa. Africans, British, and Dutch fought over South Africa, and that conflict is basically the foundation of the structure of South Africa today.
Acute respiratory distress syndrome (ARDS) occurs when fluid builds up in the small, flexible air sacs (alveoli) of the lungs. The fluid keeps the lungs from filling with enough air, so less oxygen reaches the blood, leaving organs without the oxygen they need to keep functioning. ARDS usually occurs in people who are already critically ill or who have significant injuries. Severe shortness of breath (dyspnea), the leading sign of ARDS, typically develops within a few hours to a few days after the injury or infection. Many people with ARDS do not survive; the risk of death rises with age and with the severity of the illness. Some people who survive ARDS recover fully, while others live with lasting lung damage.

Signs and symptoms of ARDS usually develop within 24 to 48 hours. Often, people who develop ARDS are so ill that they cannot even report their symptoms.

Medical experts are still studying the condition to identify the causes of ARDS and to learn more about it, and the cause of an individual case is not always clear. As mentioned above, most people who get ARDS are already in a medical center for another illness. Some of the causes of ARDS are as follows:

Sepsis: This occurs when a person gets an infection in the bloodstream and the immune system overreacts, causing inflammation and, eventually, blood clots.

Accidents: Injuries from a car accident or a fall can damage the lungs or the part of the brain that controls breathing.

Breathing in harmful substances: Thick smoke or fumes from chemicals can cause ARDS.

ARDS has other possible causes as well. There is no single test for diagnosing it; a full assessment is needed to identify the underlying cause and rule out other conditions.

The main goal of treating ARDS is to supply enough oxygen to prevent organ failure. The medical practitioner delivers oxygen through a mask, and a mechanical ventilator may be used to push air into the patient's lungs and reduce the fluid pressure in the air sacs. The doctor may apply a method called positive end-expiratory pressure (PEEP) to control the pressure in the lungs; a higher PEEP setting, delivered through the ventilator, can improve lung function and reduce lung damage.

Managing fluid intake is another part of treating ARDS. The aim is an adequate fluid balance: too much fluid in the body can lead to a gradual buildup of fluid in the lungs, while too little can strain the heart and other organs. To manage side effects, doctors also usually prescribe medications for ARDS patients.

Some people recovering from ARDS may need pulmonary rehabilitation, a program to strengthen the respiratory system and increase lung capacity. These programs may include exercise training, lifestyle classes, and support groups to help the patient recover from ARDS.

A person with ARDS can develop other medical problems during hospitalization. The most common are these: prolonged hospitalization and bed rest increase the risk of infections such as pneumonia, and mechanical ventilation adds to that risk. Another complication is pneumothorax, a condition in which air or gas collects in the space around the lungs. This can cause one or both lungs to collapse, and the air pressure from the ventilator can bring it on.
The star upsilon Andromedae was discovered to be hosting a short-period planet in 1996; in this case, the orbital period was 4.6 days. Two years later, it was revealed that this star is also orbited by two further planets, with orbital periods of 58 days and 437 days respectively. The upsilon Andromedae system is thus one of the growing number of multiple-planet systems known around other stars.

The star is located in the northern constellation Andromeda, not far (in angular extent!) from the famous Andromeda Galaxy (the nearest large galaxy to the Milky Way). It lies about 44 light years from Earth and is one of the brightest stars known to host a planet. It has a low-mass stellar companion about 750 astronomical units away, far enough that it does not have a significant gravitational influence on the planetary orbits today.

The upsilon Andromedae system has been monitored enough that we know the innermost planet does not cross the face of the star as viewed from Earth, i.e. it does not transit. Thus, we know that the planet's orbit is not aligned exactly edge-on with our line of sight. There are also indications that the outer two planets are not on orbits that lie face-on to the line of sight. This comes from the fact that radial velocity measurements do not measure the mass directly, but only the product of the mass and a function of the inclination angle (see the relation sketched below). If the outer planet orbits were almost face-on, the measured quantities would imply that the true planet masses were quite a bit larger. They cannot be too large, because then the gravitational interactions between the planets would make the planetary system unstable and we would not see the configuration we see today. We do not know for certain whether the innermost planet (the one we study here) is aligned with the outer two, but that is the expectation based on the idea that planets form from disks of gas left over from the formation of the star. Thus, the most likely orientation for the planet's orbit lies somewhere in between the two extremes of face-on and edge-on.

There is one interesting piece of evidence for some level of star-planet interaction in this system. Shkolnik and collaborators report evidence that some of the lines in the stellar chromosphere vary with the orbital period of the innermost planet. They interpret this as evidence for starspot activity induced on the star by interaction with the planet. We considered whether this mechanism was responsible for our observation too, but it seems unlikely, because the energy required to explain our observations is a lot larger. As a result, if one were to invoke the same mechanism, the planet would have spiralled into the star a long time ago. The reason it does not do so in our proposed model is that the energy powering the variation we see comes from the radiation of the star, whereas in the starspot model it comes from the energy of the planet's orbit.

The full specifications of the system can be found here. A picture of the ups Andromedae field can be found here. Brad Hansen
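As an editorial aside not taken from the page above, the standard radial-velocity relation makes the inclination dependence explicit. For a planet of mass $m_p$ on a circular orbit of period $P$ around a star of mass $M_*$, the semi-amplitude $K$ of the star's reflex velocity is

$$K = \left(\frac{2\pi G}{P}\right)^{1/3} \frac{m_p \sin i}{\left(M_* + m_p\right)^{2/3}},$$

where $i$ is the orbital inclination ($i = 90^\circ$ for an edge-on orbit, $i = 0^\circ$ for face-on; an eccentric orbit adds a factor of $1/\sqrt{1 - e^2}$). A measured $K$ and $P$ therefore constrain only the combination $m_p \sin i$. The true mass, $m_p = (m_p \sin i)/\sin i$, grows without bound as $i \to 0$, which is why nearly face-on orientations for the outer planets would imply much larger true masses, and why dynamical stability arguments can rule such orientations out.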
Participation in World War II had a profound influence on the United States. Although no fighting took place on the American mainland, the war engulfed the nation and became the focus of all its activity between 1942 and 1945. It demanded intense military and diplomatic efforts, at unprecedented levels, to coordinate strategy and tactics with other members of the Grand Alliance. It required a monumental productive effort to provide the materials necessary to fight. And it resulted in a meaningful reorientation of social patterns at home. The United States became a major force in the war. Although its entrance into the struggle came late—more than two years after hostilities began—America had been increasingly committed to the Allied cause even before the Japanese attack on Pearl Harbor in December 1941 elicited a declaration of war. United States naval convoys had already been protecting shipments of military and economic aid for the beleaguered overseas democracies, exports that American factories had been working hard to produce. While the nation's formal entrance into hostilities merely ratified a process already underway, active involvement gave the United States a vested interest in the outcome of the conflict and validated the enormous effort being undertaken by the American people. In making that ultimately successful effort, American society changed. Ravaged by the Great Depression, the United States re-