Every year, October is designated as American Archive Month. While many people may think "archive" means only dusty books and letters, there are, in fact, many other types of important archives. These include the archives of major telescopes and observatories like NASA's Chandra X-ray Observatory. The Chandra Data Archive (CDA) plays a central role in the mission by giving the astronomical community, as well as the general public, access to data collected by the observatory. The primary role of the CDA is to store and distribute data, which it does with the help of powerful search engines. The CDA is one of the legacies of the Chandra mission that will serve both the scientific community and the public for decades to come. To celebrate and support American Archive Month, we have selected images of eight objects from the CDA to be released to the public for the first time. These images represent the observations of thousands of objects that are permanently available to the world thanks to Chandra's archive.

G266.2-1.2 was produced by the explosion of a massive star in the Milky Way galaxy. A Chandra observation of this supernova remnant reveals the presence of extremely high-energy particles produced as the shock wave from this explosion expands into interstellar space. In this image, the X-rays from Chandra (purple) have been combined with optical data from the Digitized Sky Survey (red, green, and blue).

Jets generated by supermassive black holes at the centers of galaxies can transport huge amounts of energy across great distances. 3C 353 is a wide, double-lobed source in which the galaxy is the tiny point at the center, and giant plumes of radiation can be seen in X-rays from Chandra (purple) and radio data from the Very Large Array (orange).

A region of glowing gas in the Sagittarius arm of the Milky Way galaxy, NGC 3576 is located about 9,000 light years from Earth. Such nebulas present a tableau of the drama of the evolution of massive stars, from their formation in vast dark clouds, through their relatively brief (a few million years) lives, to their eventual destruction in supernova explosions. The diffuse X-rays detected by Chandra (blue) are likely due to the winds from young, massive stars blowing throughout the nebula. Optical data from ESO are shown in orange and yellow.

This image provides a view into the central region of a galaxy that is similar in overall appearance to our own Milky Way, but contains a much more active supermassive black hole within the white area near the top. This galaxy, known as NGC 4945, is only about 13 million light years from Earth and is seen edge-on. X-rays from Chandra (blue), overlaid on an optical image from the European Southern Observatory, reveal the presence of the supermassive black hole at the center of this galaxy.

When radiation and winds from massive young stars impact clouds of cool gas, they can trigger new generations of stars to form. This may be happening in the Elephant Trunk Nebula (officially IC 1396A). X-rays from Chandra (purple) have been combined with optical (red, green, and blue) and infrared (orange and cyan) data to give a more complete picture of this source.

3C 397 (also known as G41.1-0.3) is a Galactic supernova remnant with an unusual shape. Researchers think its box-like appearance is produced as the heated remains of the exploded star, detected by Chandra in X-rays (purple), run into cooler gas surrounding them. This composite of the area around 3C 397 also contains infrared emission from Spitzer (yellow) and optical data from the Digitized Sky Survey (red, green, and blue).

The details of how massive stars explode remain one of the biggest questions in astrophysics. Located in the neighboring galaxy of the Small Magellanic Cloud, the supernova remnant SNR B0049-73.6 provides astronomers with another excellent example of such an explosion to study. Chandra observations of the dynamics and composition of the debris from the explosion support the view that the explosion was produced by the collapse of the central core of a star. In this image, X-rays from Chandra (purple) are combined with infrared data from the 2MASS survey (red, green, and blue).

NGC 6946 is a medium-sized, face-on spiral galaxy about 22 million light years from Earth. In the past century, eight supernovas have been observed to explode in the arms of this galaxy. Chandra observations (purple) have, in fact, revealed three of the oldest supernovas ever detected in X-rays, giving more credence to the galaxy's nickname, the "Fireworks Galaxy." This composite image also includes optical data from the Gemini Observatory in red, yellow, and cyan.
Source: http://www.chandra.harvard.edu/photo/2013/archives/
Many people, even health care professionals, have trouble functioning well as patients, whether limited by knowledge, emotional or clinical state, socioeconomic factors, cultural background, or language differences. The television show ER portrayed this problem in an episode in which a Spanish-speaking woman misunderstood the directions for taking isoniazid (INH). The prescription label stated to take the medication "once" daily. In the Spanish language, however, "once" means "eleven." In the show, the patient died from taking such an excessive dose. A similar, real-life problem occurred when a Spanish-speaking mother applied oxiconazole 1% cream (Oxistat) to her baby's inflamed rash up to 11 times each day. The mother was simply following prescription label directions that stated, half in English and half in Spanish, "Aplicarse once cada dia til rash is clear." The problem, again, is that "once" means "eleven" in Spanish. Fortunately, this was a topical medication, and while the inflammation got worse, no permanent harm resulted. Had this been an oral medication, however, the outcome could have been much more serious.

When a pediatric patient with seizures was discharged from the hospital, the physician wrote the following prescription: "phenytoin suspension 30 mg/5 mL, take 5.8 cc three times a day." Since the patient and his family spoke only Spanish, the nurse gave the patient's mother the written prescription and an oral syringe marked with tape at the 5.8 mL mark. Because phenytoin suspension is no longer commercially available in the 30 mg/5 mL concentration, however, the pharmacy where the mother took the prescription filled it with phenytoin 125 mg/5 mL. The prescription was labeled correctly and stated that the patient was to be given 1.3 mL 3 times a day. The pharmacist, who did not speak Spanish, could not counsel the patient's mother. As a result, the mother used the syringe the nurse had given her, and she administered 145 mg 3 times a day instead of 34.8 mg 3 times a day. A few days later, the patient was readmitted to the hospital intensive care unit nearly comatose with phenytoin toxicity. The child recovered and was discharged.

In another example, a physician prescribed "Amoxicillin 200 mg/5 mL" with instructions to administer 5 mL tid to a 3-year-old child. The pharmacy carried only a 250 mg/5 mL strength, so the pharmacist changed the directions to "Take 4 cc (4/5 teaspoonful) by mouth 3 times a day." The child's father misunderstood the directions, as English was his second language. He did not know what "cc" meant, but upon seeing "4/5 teaspoonful," he thought he should give his child 4.5 teaspoons of the medication. After 5 doses, he brought his child to the emergency department with severe diarrhea. The use of 2 abbreviations, "cc" and a slash mark (/), contributed to the error; the child's father did not interpret either abbreviation as intended. Inadequate patient counseling also played a role. Although he had been given a 10 mL measuring device for oral solutions marked in mL and teaspoons, specific directions for measuring each dose were not reviewed with the father when he picked up the prescription.

Patient counseling is always important, and especially so if a pharmacist must use a different concentration of a drug than originally prescribed, because the directions the physician originally gave the patient may then differ from the directions on the prescription label. If the patient (or the family, in the case of a pediatric patient) does not speak English, however, counseling is difficult. If you have a lot of patients who speak another language, consider having patient information brochures already translated into that language.
While oral and written instructions together are definitely preferred, for patients who speak other languages, translated written brochures may be the only practical way to provide counseling.
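The arithmetic behind the phenytoin error is worth making explicit: the same syringe mark delivers a very different dose when the concentration changes. A minimal sketch (the function name and structure are illustrative, not from the source):

```python
def dose_mg(volume_ml: float, mg_per_5ml: float) -> float:
    """Dose (mg) delivered by a given volume of an oral suspension
    whose strength is stated per 5 mL, as on the labels above."""
    return volume_ml * mg_per_5ml / 5.0

# Intended: 5.8 mL of the 30 mg/5 mL suspension the physician assumed.
intended = dose_mg(5.8, 30)    # ~34.8 mg
# Actual: the same 5.8 mL syringe mark, but a 125 mg/5 mL stock.
actual = dose_mg(5.8, 125)     # ~145 mg

print(f"intended {intended:.1f} mg, administered {actual:.1f} mg")
```

The ratio of the two strengths (125/30, slightly more than 4) is exactly the ratio by which the child was overdosed, which is why verifying the measured volume, not just the labeled dose, matters during counseling.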
Source: http://www.pharmacytimes.com/publications/issue/2007/2007-09/2007-09-6770
The ABA's Model Rules of Professional Conduct provide that lawyers need to keep abreast of changes in the law and its practice, including the benefits and risks associated with relevant technology (Rule 1.1: Competence). So having knowledge of relevant technology is now part of what a lawyer should do. There are five simple steps that might help lawyers in this respect; the ABA's Law Practice Management Section lists them. If you are interested, read on.

A further step is to ponder how a law school can teach some of these skills. Many law schools now offer classes such as law practice and technology, and clinics are using case management software. Some law schools even go as far as teaching students how to program, i.e., how to write code. The goal is not to make law students into technology geeks, but to produce lawyers with enough tech skills to thrive in this difficult era.
Source: http://blogs.stthomas.edu/lawtechnologycenter/2013/06/20/five-simple-strategies-to-keep-current-with-technology-for-lawyers/
Forging the National Dream
William R. Morrison, University of Northern British Columbia, 2003

Although many Canadians under the age of forty have never travelled by train, railways were once as vital to this country as highways and airplanes are today. Not only did they make modern travel and trade possible, they were also at the heart of Canada's growth as a nation. From the country's first railway, the Champlain & St. Lawrence, to the establishment of the Canadian National Railways in 1918, railways were the steel that bound this country together. And it was the railway that made modern Canada possible.

The Champlain & St. Lawrence, opened in 1836 between Laprairie and St-Jean, Quebec, linked Montreal to the Hudson River Valley and New York City via the Richelieu River and Lake Champlain, in the process greatly improving trade and travel. The Grand Trunk Railway, opened in 1856, joined Toronto and Montreal and made it possible to travel between the two cities in a matter of hours. Prior to this, a trip by sleigh could take more than a week.

Canada's most important railroads, however, had as much to do with nation building as with trade. The building of the Intercolonial Railway, for example, was one of the conditions under which the Maritime Provinces agreed to Confederation. Stretching some 1100 kilometres, it was completed in 1876 and linked Nova Scotia and New Brunswick with Quebec and Ontario. Even more vital to the new country was the Canadian Pacific Railway, begun in 1875 but built mainly between 1881 and 1885 from Ontario to the Pacific Coast. Without the promise of a transcontinental railway, British Columbia would not have entered Confederation, and it would have been impossible to settle what are now the provinces of Manitoba, Saskatchewan and Alberta. The political careers of men such as John A. Macdonald, George-Étienne Cartier, and Francis Hincks (who once said "railways are my politics") depended largely on the success of their railway projects.

More than just a line of steel, the Canadian Pacific Railway was an integral part of the federal government's national policy in the years after Confederation. The company was heavily involved in the sale of prairie land, as well as in tourism and the hotel business. It also built a steamship fleet that carried passengers and trade goods from Canada to Japan, China and other countries. In the 20th century, it expanded into the airline and telecommunications sectors, to name just two. Clearly, then, it is the railways that made the "national dream" of a united Canada a reality. This tour will trace the building of the Canadian Pacific Railway from central Canada to the west coast, showing how it achieved the national dream of joining the country together "from sea to shining sea".
Source: http://www.mccord-museum.qc.ca/en/keys/webtours/GE_P2_4_EN
What are peptides and why are they important? This article seeks to demystify what peptides are and the role they play in everyday life and chemistry.

Peptides are short chains of amino acid monomers chemically linked by amide bonds. Amide bonds are also widely known as peptide bonds, and the two terms are often used interchangeably. Peptides can also be described as assemblies of natural amino acids that act as building blocks for enzymes and proteins. These peptides are folded and bonded to create a 3D structure, with the type of peptide formed during this process determining the type of protein produced. Each peptide form behaves differently, and continued research is revealing the full potential of these important compounds.

The formation of a peptide bond involves the elimination of one molecule of water during the condensation of two amino acids. Either end of the resulting dipeptide molecule can then react to give rise to a tripeptide, which in turn condenses with another amino acid, and so on. This process gives rise to long chains of amino acids. A good example of this is insulin, which consists of 51 amino acid units in two linked chains. Peptides are so numerous and varied because they can be synthesized into very long chains.

Though proteins and peptides are similar in structure, there are noticeable and distinct differences between them. A protein is a polymer of amino acids: a product of complex peptides containing many amino acids linked together. There are several known types of peptides. The ones with the shortest chains are dipeptides, which consist of two amino acids joined by a single peptide bond after the loss of one water molecule.
The synthesis of a peptide bond requires an input of free energy, yet peptide bonds are kinetically very stable. Perhaps the most interesting fact about peptides is that in an aqueous solution, in the absence of a catalyst, a peptide bond has a lifetime of close to a thousand years.

Another type of peptide is the tripeptide, the most essential and well-known example of which is glutathione, a compound found in significant concentrations in virtually all tissues. It contains glycine, glutamic acid and cysteine. In glutathione, the side-chain carboxyl group of the glutamic acid forms part of the backbone of the peptide, quite different from the usual peptide backbone; the normal carboxyl group acts as the side chain in this case.

A polypeptide chain consists of a regularly repeating part, called the main chain or backbone, which has a very high hydrogen-bonding potential, and a variable part made up of the distinctive side chains. Each residue of the polypeptide chain contains a carbonyl group, which is a good hydrogen-bond acceptor, and, with the exception of proline, an NH group, which is a good hydrogen-bond donor. These two groups readily interact with each other and with the functional groups on the side chains, and these interactions are of great importance in stabilizing polypeptide chains. Most naturally occurring polypeptide chains contain between 60 and 2,000 amino acid residues; these are what are commonly referred to as proteins. Small polypeptide chains are called oligopeptides, or just peptides.
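Because each peptide bond forms by the loss of one water molecule, the mass of a peptide can be estimated from the masses of its free amino acids. A minimal sketch (the mass table is an abbreviated, illustrative subset using approximate average masses):

```python
# Approximate average masses (g/mol) of a few free amino acids.
AA_MASS = {"G": 75.07, "A": 89.09, "C": 121.16, "E": 147.13}
WATER = 18.02  # one H2O is lost for each peptide bond formed

def peptide_mass(seq: str) -> float:
    """Mass of a peptide: sum of free amino acid masses,
    minus one water per peptide bond (len(seq) - 1 bonds)."""
    return sum(AA_MASS[aa] for aa in seq) - (len(seq) - 1) * WATER

# Glutathione (Glu-Cys-Gly): two condensations, so two waters lost.
print(round(peptide_mass("ECG"), 2))  # close to glutathione's molar mass of ~307.3 g/mol
```

Note that glutathione's unusual gamma-glutamyl linkage does not change this arithmetic: each condensation still releases exactly one water molecule.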
Thorough examination of the geometry of these polypeptides reveals a number of outstanding and previously unexplored characteristics, some of which are linked to the properties of the peptide bonds forming them. Peptide bonds have been found to be essentially planar: they have considerable double-bond character, which prevents rotation about the bond, and this inability to rotate accounts for their planarity. Both the trans and cis configurations are possible for a normal peptide bond.
Source: http://www.newportcongregationalchurch.org/
Early Christian Art
Facts and interesting information about Medieval Art, specifically Early Christian Art, during the Middle Ages.

The History of Early Christian Art: Early Christian art and religious iconography began about two centuries after the death of Jesus Christ. It was originally based on the classical art styles and imagery used by the Ancient Greeks and the Ancient Romans. In the period encompassing medieval art, iconography began to be standardised and to relate more closely to the texts found in the Bible, and it became the basis for many of the images found in Early Christian art.

Early Christian Art - Symbolism and Icons: A Christian sign or icon is an object, character, figure, or color used to represent abstract ideas or concepts - a picture that represents an idea - and is fundamental to understanding the icons and images found in Early Christian art. A religious icon is an image or symbolic representation with sacred significance. The meanings, origins and ancient traditions surrounding Early Christian art symbols date back to early times, when the majority of ordinary people were not able to read or write and printing was unknown. Many Early Christian art symbols or icons were 'borrowed' or drawn from earlier pre-Christian traditions.

Early Christian Art - Symbols and Icons of the Evangelists: Matthew, Mark, Luke and John were the authors of the four New Testament books telling the story of the life of Christ; they were called the Evangelists. The Evangelist symbols in early Christian art were found in manuscripts, sculpture and wall paintings. Their symbols were as follows: Matthew: Angel (man).

Early Christian Art Symbols - Anchor: The Anchor is an emblem of hope.

Early Christian Art Symbols - Ankh: The word Ankh is an Egyptian word.
It is a tau cross with a loop at the top, used as an attribute or sacred emblem symbolizing generation or enduring life, and is also called the 'crux ansata'. The emblem was adopted by Christians from the Egyptians as a symbol of eternal life.

Early Christian Art Symbols - Apple: The Apple, also referred to as the Forbidden Fruit, symbolizes sin. The association between 'apple' and 'sin' possibly derives from the similarity of the two Latin words malum and malus, meaning "apple" and "sin" respectively. This led to the apple's connection with the fruit eaten by Adam and Eve in the Garden of Eden; the eating of the Forbidden Fruit from the Tree of the Knowledge of Good and Evil brought sin into the world.

Early Christian Art Symbols - Ashes: To cover the head with ashes was a token of self-abhorrence and humiliation. The first day of Lent is called Ash Wednesday; the Roman Catholic Church marks it by having ashes placed on a person's forehead by a priest.

Early Christian Art Symbols - Books: In religious art, books in the hands of saints show they were well educated in the Scriptures.

Early Christian Art Symbols - Bread and Wine: The Lord's Supper is symbolized by a chalice of wine with the bread rising above it. The bread is referred to as the host and is a thin, round wafer used for Holy Communion. The host usually bears the letters I.N.R.I., an acronym of the Latin phrase IESVS NAZARENVS REX IVDÆORVM, which translates into English as "Jesus the Nazarene, King of the Jews". It also has rays coming from it, symbolizing that the Real Presence of Christ is in the host.

Early Christian Art Symbols - Butterfly: The Butterfly represents the Resurrection. The butterfly has three phases during its life. The caterpillar, which just eats, symbolises normal earthly life, in which people are preoccupied with taking care of their physical needs.
The chrysalis or cocoon resembles the tomb. The butterfly represents the resurrection into a glorious new life free of material restrictions.

Early Christian Art Symbols - Cedar of Lebanon: The Cedar of Lebanon (Cedrus libani) is an evergreen tree found in Lebanon and north-western Syria that attains great age and height. It is representative of Christ and is associated with eternal life.

Early Christian Art Symbols - Circle: The Circle represents eternity, as it has no beginning or end. Because of this, many early Christians believed that there was something divine in circles. Astronomy and astrology were connected to the divine for most medieval scholars, and the circular shapes of the sun, moon and the planets were related to God's act of Creation.

Early Christian Art Symbols - Coins: In Christian art, coins are often shown numbering thirty, representing the betrayal of Jesus by Judas Iscariot.

Early Christian Art Symbols - Crown: A crown is a royal headdress or cap of sovereignty, worn by emperors, kings and princes. A Latin cross with a king's crown resting on top of it symbolizes eternal life. The Crown represents royal authority and is often used for Christ, the King of Kings. The image of the crown of thorns is often used symbolically to contrast with earthly monarchical crowns.

Early Christian Art Symbols - Eye: The Eye represents the "all-seeing eye" of God the Father, the all-knowing and ever-present God. In later examples of Christian art the eye was pictured in a triangle with rays of light, to represent the infinite holiness of the Trinity.

Early Christian Art Symbols - Fish or Ichthus: Ichthus is the Greek word for fish (ΙΧΘΥΣ).
The letters of the word Ichthus are also used as a Christian acronym: IChThUS stands for "Jesus Christ, God's Son, Savior".

Early Christian Art Symbols - Fleur-De-Lis: The Fleur-De-Lis, through its association with the lily, represents purity, and in turn the Virgin Mary. As the Fleur-De-Lis is composed of three petals and three sepals, it also symbolises the Trinity. In Christian art the Fleur-De-Lis is also the attribute of the archangel Gabriel, notably in representations of the Annunciation.

Early Christian Art Symbols - Gate: In Christian art an open gate symbolizes the entrance to Heaven, a closed gate symbolizes death or exclusion, and broken gates symbolize the powers of hell.

Early Christian Art Symbols - Goat: The Goat represents oppressors, wicked men and demonic forces. The goat also symbolises unrepentant sinners who will be separated from God on judgment day.

Early Christian Art Symbols - Harp: The Harp represents music, instruments, joy and worship in praising God.

Early Christian Art Symbols - Keys: Pictures of Saint Peter show him holding keys. Two keys represent the dual authority to open heaven to repentant sinners and to lock heaven to the unrepentant - the ability to grant or withhold salvation. This power is believed to be bestowed upon every pope who leads the Roman Catholic Church.

Early Christian Art Symbols - Lamb: The Lamb represents Jesus Christ.

Early Christian Art Symbols - Olive Branch: The Olive Branch is an emblem of peace. This is because a dove returned to Noah with an olive branch to let him know that the flood waters had abated, and that the Great Flood of God's judgment was over.

Early Christian Art Symbols - Palm: A branch or leaf of the palm, anciently borne or worn as a symbol of victory or rejoicing.
Early Christian Art Symbols - Pearls: Pearls represent the word of God and the kingdom of heaven. Jesus warned his disciples not to cast pearls before swine.

Early Christian Art Symbols - Rock: That which resembles a rock in firmness: a defense, a support, a refuge. The Rock represents Jesus Christ. A rock can also symbolize obedience to Christ, illustrated by Saint Peter, whose name means "rock".

Early Christian Art Symbols - Scales: The Scales represent justice, corresponding to the metaphor of matters being "held in the balance", and may be used to represent the final judgment.

Early Christian Art Symbols - Tower: The Tower represents God our Refuge.
Source: http://www.medieval-life-and-times.info/medieval-art/early-christian-art.htm
Art Of Singing: Interpretation And Expression (Originally Published Early 1900's)

It is difficult to express a thought without first having it well pictured or "imagined." This relates as much to tone as to interpretation. Too many singers and pupils give their audience the very clear sensation that they are thinking, not of what they are singing, but of their voice, their vocal cords, their breath, their support, their vocal apparatus - not of the object of their song. Instead of our ears understanding more or less clearly the spirit of "Oh, Heavenly Aida," our brain only comprehends "diaphragm, breath, timbre, voice in the mask, the high notes pointed, the focus between the eyes, the size of the mouth, etc.," things which do not captivate us. The majority of singers neglect the arduous training which is necessary to develop the will to express. They think of everything but the thing they sing, and prevent us from thinking of it also. Many pupils do everything that their teachers have done, but do not feel it; they copy. They enunciate poorly because they do not conceive clearly. Often they do not conceive the very thing they intend to express. If they thought of that which they ought to give, if they felt it, they would awaken in themselves the desired means of expressing it, and so would rise to the demands of the author and the public. Nothing will assist the development of the power of expression more than a careful study of mimicry and gesture. Success in the business of entertaining others is sometimes said to be due to personal magnetism. And yet the secret of personal magnetism is the absolute effacement of self. In artistic lines the best work is always inspirational.
The vocal expression or interpretation must be felt by the singer, who, for the time, completely forgets himself and his lessons. The art of expression is a special talent and one which can only be cultivated and developed with the aid of the imagination. To express well means to imagine well, imagination being the basis of creation. The powers of expression are aided by good habits of accentuation and pronunciation.

Accentuation.—There are three accents in the voice: the accent of intensity, the accent of height, and the accent of timbre. In reality, the accent is only the making evident of one of the qualities of the sound. Generally the three qualities assert themselves together, but one of them may be more accented than the others. In one phrase it may be the force of the syllable which is important; in another, the intonation which expresses a significance in the melody; in yet another, the vocal timbre, with its tone color. Here still it is the thought which gives the expression. Nothing is easier than to accentuate, but it is necessary not to allow ourselves to be absorbed by the machinery of the voice; rather, we must modulate the voice upon the objective conception, upon the exterior realization of the sonorous forms ideally evoked at a distance, in a way to give to the song the employment of a vast, sonorous gesture, filling the hall and fixing the attention of its auditors upon its sonority.

Pronunciation.—Articulation is the distribution of the sonorous accents of pronunciation at a distance. There are two articulations: the glottic articulation, or vocalization, and the buccal articulation, or verbalization. The latter is the word sung, the speaking in the song. In vocalization, the vowel matters little; it is necessary to vocalize upon all. The work upon "distance" is, here as everywhere, the first condition of good buccal articulation. It is necessary to pronounce largely in proportion as you intend to sing afar off.
In no case should the vocal form be carried away from the verbal form, nor the note go beyond the syllable. Wherever the voice carries, it ought to take a verbal form and have, besides, a syllabic character. For that result it is necessary that the syllable come out clearly articulated in the forward part of the mouth. If it must be formed in the rear, as when we pronounce the gutturals, the palatals and vowels like "ei" or "ou," it ought to be held longer in the forward part of the mouth, and not allowed to go out except by a vibrant orifice. It then carries its syllabic timbre, which will not leave it until it arrives at its destination. It is essential above all that the accentuation carry upon every syllable and not only upon the vowel. The ear of the singer, which watches from afar the force, the intonation, the timbre, ought equally to watch the pronunciation and its verbal significance. The term "pronunciation" admirably sums up the physiology of the act; many singers, by contrast, practice retronunciation.

It goes almost without saying, so evident is it, that each pupil must learn to understand, to compare, to appreciate, to feel the sentiment of the song, and to conform the tone thereto. Whoever does not comprehend a beautiful sound will never be able to reproduce it. Nor can one give the proper reading to a poetic theme without in a measure feeling the emotion to be expressed. Natural talent and passion are gifts, impossible of manufacture when not possessed; but like diamonds in the rough, these essential attributes need refining. The pupil must learn to like what he is singing, thus adding interest to study which is in itself beautiful, and causing the student to forget the hard work involved. Again, it is ruinous to adopt for all students a uniform program of study, akin to the inflexibility of a mechanical system.
There must be room for all the elasticity that may be necessary for adaptation to all circumstances, to all types, to all characters; to adjust itself to the physical strength of every pupil, to the limits of his voice or breathing ability, to the special aptitudes and to the different faults, natural, or acquired by practice or totally erroneous study. The Mouth.—The most important factor is the proper opening of the mouth. The slightest deviation from its correct position will lead to more or less dangerous contraction of the muscles. Any stiffening of the muscles of one part of the vocal machinery is automatically imparted to and shared by other parts, thus throwing out of gear the entire vocal apparatus. In the old Italian school of singing, much attention was paid to the position of the mouth. In a work about the voice, published in the second half of the nineteenth century, the author, whose name I do not now recall, quotes Tosi, Bernacchi, Gervasoni, Florimo, and other great vocal teachers of the same period, on the position of the mouth when singing. All those celebrities agreed that one of the greatest difficulties confronting a vocal teacher is to obtain from pupils a natural, or, as they call it, "right" opening of the mouth; and Bernacchi even went so far as to state emphatically (in which he was endorsed by a group of wonderful singers, his pupils) that it is impossible to produce a correct tone without assuring correct positions of the tongue and mouth. Undoubtedly the failure of many singers is due to a stiff, forced, constrained action of the articulating organs. There is no sound in the human voice (except a grunt) that can be made independently of the mouth. The mouth regulates pitch, quality, intensity. The unruly tongue, hard lips, smiling cheek, stiff lower jaw draw upon the muscles of the throat, which in turn press upon the larynx, thus interfering with the right action of the voice, and preventing free, natural, beautiful tone. 
Seeking to improve the vocal tone without regarding as most important the natural articulation and correct pronunciation will inevitably result in the malformation of the mouth, and consequently of the voice. A tone that comes from a constricted mouth is not a really human tone, but partakes rather of the instrumental. Correct speaking will lead to correct singing, as speaking and singing are modulations of the same function. The Phonograph.—Just as conceit hinders progress, so any kind of fair criticism tends to assist in the development of artistic work. The phonograph, while not in any sense a substitute for a vocal teacher, is still of great value to the singer. It reveals at once his defects as well as his strong points. I cheerfully grant that the quality of a voice is lost in its phonographic reproduction, but everything else remains, and faulty breathing and errors of diction, interpretation, tone-placing, etc., are distinctly revealed. It is often claimed by singers who have been unsuccessful in obtaining engagements as phonographic artists, that not every good voice is suitable for recording purposes. As a general proposition, the singers who make such claims are badly mistaken, for the phonograph, when properly handled, gives back very nearly what it receives. It is a fact known to phonograph experts, however, that many records are poor through no fault of the singer, such as failure to use proper recorders (of just the right sensitiveness) for the voice in question, so as to correspond with the voice quality, also improper size of horn used — either of which factors, if overlooked, will ruin the record of the greatest artist in the world. It is possible, therefore, that a failure may be due, not to vocal inadaptabilities for recording, but simply to a poor selection of instruments for that particular voice. 
But as a general thing the phonograph is a fairly accurate mirror of the human voice, and the singer with many faults in his production who claims his poor record is the fault of the instrument and not his own has a very difficult claim to substantiate. It is my personal belief, after a great deal of experience with phonograph singing and singers, as well as the recording instruments themselves, that phonograph recording is an almost invaluable aid to the student in his endeavor to succeed. Speaking about phonographs, I will describe in a few words the process of manufacturing records. The singer, surrounded by the orchestra, sings into the horn of a recording machine. The width and length of horn have a great deal to do with the success or failure of the venture, as has also the proper selection of the recording instrument. The latter consists of special castings, in general form similar to the ordinary phonograph reproducer, on which is set a round glass plate, with a diamond needle attached for the purpose of cutting the groove on the wax blank. The thickness of the glass and the manner of setting the needle upon it have much to do with the sensitiveness of the completed recorder. As I have said, the selection of suitable horn and recorder for each singer is a problem upon which all depends. After the sound vibrations are recorded upon the wax blank, the blank is taken to the galvanoplastic bath, where from the copper solution a negative form of the original wax blank is made. This process is repeated several times until the copper stamper, as it is called, is completed. The stamper, after being nickeled and backed, is then placed with the record stock under a hydraulic pressure of thousands of pounds, which converts the stock into a finished record.
NEW YORK (CBSNewYork) — Out of nowhere, sinkholes can swallow up the earth, sometimes taking cars and people with them. Now, NASA says it may be able to predict where the next big sinkhole will occur, CBS 2’s Vanessa Murdock reported. One sinkhole in Rockville Centre, Long Island, nearly swallowed a car whole, and in Bay Ridge, Brooklyn, another sinkhole left a car teetering on the edge. A sinkhole in Tampa, Fla. opened up without warning while a man slept in his bed. His body was never recovered from the abyss that measured 30 feet deep. NASA now says in some cases, warnings are possible. The agency has demonstrated the ability to predict where a sinkhole might form using a specialized radar attached to the belly of a jet. Researchers can measure ground movement by scanning the surface from 40,000 feet, over and over again. One month before a sinkhole took out trees in Los Angeles, NASA detected movement of nearly a foot. “That Bayou Corne one is several acres now. It’s huge,” said William Branford, associate professor of Geology at Queens College. Branford says sinkholes of that magnitude just don’t happen in the greater New York area. “Most are the size of potholes. Some are much larger, maybe take a house down,” Branford said. Because of the size of the sinkholes and the way they form — usually as a result of water main or sewer pipe breaks — Branford says he believes NASA’s technology wouldn’t work well in predicting sinkholes in the Tri-State area. “I just don’t know how much lateral movement you’re going to see,” he said. Branford does, however, see use for it in storm surge modeling. “If you have a really good idea of the surface, you can see where the surge is going to come on up,” he explained. The storm surge from Superstorm Sandy led to sinkholes along the Jersey Shore. 
Lightning triggered a complex of forest fires in Northern California’s Klamath National Forest on June 21, 2008. As of July 14, this group of fires, known as the Siskiyou Complex, had grown to affect an estimated 35,400 acres and was about 16 percent contained, according to the National Interagency Fire Center. Firefighters were preparing for at least another month of battle with the blaze. This false-color image was made from visible and infrared data collected by the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) on NASA’s Terra satellite on July 13. The image centers on the largest of the fires in the Siskiyou Complex, which is the Dark Three Fire. The burned area is charcoal-colored, while surrounding forest and other vegetation is red. Water is dark blue. The western perimeter of the fire is hidden by smoke. Several landmarks and features are labeled. The burned area stretches all the way from Bear Peak in the north to beyond the Gasquet-Orleans Road in the southwest (commonly called the “G-O Road.”) Some areas appear to have been more heavily damaged than others. Western areas seem more uniformly dark, while eastern areas have more red mixed with charcoal colors, indicating not all vegetation burned. (This effect, however, could be due to shadows cast by the topography or the smoke in the western part of the burned area.) The fires are burning in very rugged and steep terrain with few roads and with stands of large trees. Although the area is not very populated, the forest is the site of significant Karuk and Yurok Tribal cultural and religious sites, which are at risk. - USDA Forest Service. (2008, July 14). National Interagency Coordination Center Incident Management Situation Report Monday, July 14, 2008–0530 MDT (pdf). National Interagency Fire Center Website. Accessed July 14, 2008. (Reports are not archived online.) - USDA Forest Service. (2008, July 13). Siskiyou Complex Fire Incident. InciWeb.org. Accessed July 14, 2008.
The latest news from academia, regulators, research labs and other things of interest Posted: July 12, 2010 Organic nanowires open up possibilities for nanoelectronics (Nanowerk News) Swiss and German materials scientists have created simple networks of organic nanowires for future electronic and optoelectronic components. The successful approach synthesises the complex and incredibly thin nanowire structures, and joins them to electrically conducting links (essentially creating an electronic circuit). The result is a culmination of work that began in 2006 under the PHODYE ('New photonic systems on a chip based on dyes for sensor applications scalable at wafer fabrication') project, which received EUR 1.92 million in funding under the 'Information society technologies' (IST) Thematic area of the EU's Sixth Framework Programme (FP6). The PHODYE project was initiated by Dr Angel Barranco from the Instituto de Ciencia de Materiales de Sevilla in Spain, who invited his former colleagues from the Swiss Federal Laboratories for Materials Testing and Research (Empa) to become involved. Empa is one of eight academic and industrial partners from four European countries (Belgium, Spain, Sweden and Switzerland) currently working on the project. The aim is to develop a new family of sensor devices that combines dye sensor films and photonic structures. These incredibly sensitive gas sensors (made up of thin films that change colour and fluoresce on contact with certain gas molecules) could eventually be used to monitor vehicle emissions or to provide warnings of the presence of poisonous substances. It was during their work on PHODYE that Empa's Ana Borras, Oliver Gröning and Pierangelo Gröning, and Jürgen Köble from Omicron Nanotechnology in Germany created the unique methodology for connecting organic nanowires. 
The result is a step towards the manufacture of cheaper and more flexible sensors, transistors, diodes, and other components, ranging from the micro all the way to the nano scale. The physicists developed a new vacuum deposition process for synthesising organic nanowires and discovered how to manufacture nanowires with largely varying characteristics by appropriately selecting the starting molecule and the experimental conditions. Their method is particularly unusual and surprising because it has generated a perfectly monocrystalline structure by precisely controlling the substrate temperature, molecule flow and substrate treatment. The team soon discovered that the new process was not only able to provide nanowires for the gas sensors needed under PHODYE, but it opened the door to creating complex 'nanowire electric circuits' for electronic and optoelectronic applications (e.g. solar cells). The reason is that the range of nanowires can be used together (as required) to form networks with broadly varying properties. The secret lies in decorating the nanowires growing on the surface with silver nanoparticles using a sputter-coating process. Thanks to these particles, more nanowires can be grown that are in electrical contact with the original wires - the foundation of an electrical circuit on the nanoscale. Dr Gröning explained that it may be possible to manufacture organic semiconductor materials, which are very attractive candidates for the manufacture of inexpensive, large-area and flexible electronic components. The team has presented the results of their findings in the journal Advanced Materials. The PHODYE project formally concludes in October 2010.
Another Da Vinci mystery: Is a newfound 500-year-old painting his? The previously unknown painting of a Renaissance noblewoman, which appears to be based on a Da Vinci sketch hanging in the Louvre, was found in a Swiss bank vault. Rome — The steady gaze and beatific smile are reminiscent of the world’s best known painting, the Mona Lisa. Those clues, along with carbon dating and other scientific tests, have led Italian experts to claim that they have found the holy of holies of the art world: a previously unknown Leonardo da Vinci masterpiece. Experts believe that the newly discovered painting is the full-blown oil version of a pencil sketch drawn by Leonardo of a Renaissance noblewoman, Isabella d’Este, which now hangs in the Louvre Museum in Paris. Leonardo completed the sketch when he was staying in the city of Mantua, in the northern Lombardy region, in 1499 or 1500. Pleased with her portrait, the Marquesa d’Este then sent letters asking him to produce a new, more elaborate version in colored oils. According to historical records, she never received a reply. Art historians speculated that the Renaissance master had moved on to grander, more lucrative commissions, including the Mona Lisa, which is believed to be a portrait of Lisa Gherardini, the young wife of a rich Florentine merchant. Now, it appears, Leonardo did indeed paint the oil portrait, perhaps when he met d’Este, one of the most influential female figures of her day, in Rome in 1514. The oil painting was discovered recently in a Swiss bank vault, part of a collection of 400 works owned by an Italian family who have asked not to be identified. Measuring 24 inches by 18 inches, it bears a striking resemblance to the Leonardo sketch held by the Louvre – the woman’s posture, her hairstyle, her striped dress, and the way she holds her hands are almost identical. 
“There are no doubts that the portrait is the work of Leonardo,” Carlo Pedretti, a professor emeritus of art history and an expert in Leonardo studies at the University of California, Los Angeles, told Corriere della Sera newspaper on Friday. “I can immediately recognize Da Vinci's handiwork, particularly in the woman's face." Scientific tests have shown that the type of pigment in the portrait was the same as that used by Leonardo, as was the primer used to treat the canvas. Carbon dating, conducted by a mass spectrometry laboratory at the University of Arizona, has shown that the portrait was painted between 1460 and 1650. Professor Pedretti, a recognized expert in authenticating disputed works by Da Vinci, said more analysis was required to determine whether certain elements of the portrait – notably a golden tiara on the noblewoman’s head and a palm leaf held in her hand like a scepter – were the work of Leonardo or one of his pupils. But as with any new-Leonardo-da-Vinci-discovered story, doubts were expressed by some eminent experts. Martin Kemp, professor emeritus of the history of art at Oxford University, and one of the world’s foremost authorities on da Vinci, said if the find was authenticated it would be worth “tens of millions of pounds” but raised doubts about whether the painting was really the work of Leonardo. The portrait found in Switzerland is painted on canvas, whereas Leonardo favored wooden boards, he said. And Leonardo gave away his original sketch to the marquesa, so he would not have been able to refer to it later in order to paint a full oil version. It was more likely to have been produced by one of the many artists operating in northern Italy who copied Leonardo’s works. "They'd take a Madonna head from one work and then pick the figure of John the Baptist from another, and produce a sort of pastiche. It was a sort of early version of Photoshop," he says. 
Only around 15 works have been reliably attributed to Leonardo, including the Mona Lisa, which also hangs in the Louvre. If these latest claims are backed up by other leading da Vinci experts, that number will jump to 16.
Facts about this animal The red-crested pochard is a somewhat atypical diving duck with a narrow bill and large head, less inclined to dive and more frequently seen ashore than the Aythya species. The body-weight of males is about 1.1 kg, of females about 1 kg. Nest sites are variable, often close to water or on floating mats of aquatic vegetation. 6 to 12 greenish eggs are laid, which are incubated by the female alone for 26-28 days. The food of the red-crested pochard consists mainly of plant material, but they also take some molluscs, aquatic insects and small fish. Did you know that red-crested pochards are the only waterfowl to engage in ritualised courtship feeding, a behaviour apparently restricted to mated pairs, which may serve to reinforce the pair bond? While the female waits on the surface, the male dives and brings her food offerings and, sometimes, even inedible items.
Name (Scientific): Netta rufina
Name (English): Red-crested pochard
Name (French): Nette rousse
Name (Spanish): Pato colorado
Local names: Czech: Zrzohlávka rudozobá; Italian: Fistione turco; Romansh: Anda cotschna; Swedish: Rödhuvad dykand
CITES Status: Not listed
CMS Status: Appendix II (as Anatidae spp.); included in AEWA
Range: North Africa: Algeria, Egypt, Morocco. Asia: Afghanistan, Bangladesh, Bhutan, China, India, Iran, Iraq, Israel, Japan, Kazakhstan, Kyrgyzstan, Mongolia, Myanmar, Nepal, Pakistan, Saudi Arabia, Syria, Tajikistan, Thailand, Turkmenistan. Europe: Albania, Armenia, Austria, Belgium, Bosnia and Herzegovina, Bulgaria, Croatia, Cyprus, Czech Republic, Denmark, Estonia, France, Germany, Greece, Hungary, Italy, Latvia, Macedonia (former Yug. Rep.), Moldova, Montenegro, Netherlands, Poland, Portugal, Romania, Russian Fed., Serbia, Slovakia, Slovenia, Spain, Switzerland, Turkey, Ukraine, United Kingdom, Uzbekistan. There are vagrants in a series of other, mainly European and Mediterranean countries.
Wild population: The global population is estimated at 350,000 to 440,000 individuals by Wetlands International (2002).
Zoo population: 1192 reported to ISIS (2006).
How this animal should be transported: For air transport, Container Note 18 of the IATA Live Animals Regulations should be followed.
Why do zoos keep this animal: The red-crested pochard is not a threatened species. Zoos keep them for educational purposes and as an ambassador species for wetland conservation. In the past, red-crested pochards bred by zoos have occasionally been used for (re-)introduction projects.
The United States Government has recognized over 150 Native American Indian Tribes, ranging from the Absentee-Shawnee Tribe of Indians of Oklahoma to the Zuni Tribe of the Zuni Reservation, New Mexico. They have been granted the tribal right of self-governance, with the ability to set their own priorities and goals for the welfare of their nations. One of these rights, only recently exercised, is the issuance of coins. The Sovereign Nation of the Shawnee Tribe (Oklahoma), recognized by the United States under the Shawnee Tribe Status Act of 2000, authorized the historic first American Indian coins - featuring Shawnee heroes Chief Tecumseh and his brother Tenskwatawa, "The Prophet." These were followed by coins picturing Lewis and Clark, and the Indians who guided them. In 2004, Tribal Chairman Eddie L. Tullis authorized the issuance of coins to commemorate the 20th Anniversary of Recognition of the Sovereign Nation of Poarch Creek Indians by the United States. These coins feature notable Tribal members such as Chief Menawa and Tchow-ee-pu-kaw, as well as enthusiastic Pow-Wow dancers.
Growing microgreens is a recent trend that has gained high popularity among home gardeners. In the earlier days, microgreens were mainly grown by chefs who used them in different salads and soups. Their popularity grew, and today these healthy microgreens are considered to be the hottest food trend. They can be grown indoors in small containers throughout the year. Are Microgreens Similar to Sprouts? Microgreens are small edible plants, a bit older than sprouts. Since they are small in size, they are often confused with sprouts. Sprouts are germinated seeds that are grown in water and eaten whole. When we eat sprouts we are actually consuming their seeds, stems and roots. Microgreens grow in soil or even on a sterile growing mat such as a fiber mat. You can grow them indoors with little exposure to sunlight. They are harvested just after the development of the first ‘true’ leaves. They are quite easy to grow and can be grown from almost any plant variety, but the most popular are mustard, radish and beet. The seed density when planted is also lower than with sprouts, which makes sure that they have plenty of space to grow. Microgreens also have fewer contamination problems than sprouts. Lastly, unlike sprouts, they are harvested without their roots. High in nutritional value, microgreens are an easy way to create a healthier diet for yourself. Growing Microgreens in Your Garden Home gardeners are now showing great interest in growing microgreens, as these can be grown indoors and need very little space. They can be grown in a wide range of varieties to add that spicy ‘zing’ to any salad or main course. The common types of microgreens grown in home gardens are cabbage, cilantro, celery, radish and mustard. The best part is that they are completely ready to be harvested in two weeks or less, which means that if you are really fond of these greens, you can produce at least 20 crops in a year and enjoy their freshness and nutritional value every day. 
Tips For Growing Microgreens Seeds for Microgreens – You will need untreated seeds to grow microgreens at home, preferably organic. The same seeds are used in growing full-size plants. The plants grow quite close to one another – you will always need more seeds to grow microgreens than in the case of other crops when you are planting them in your garden. Sprinkle several seeds if you are trying to grow them in containers. Soil Needed – Potting soil is good for growing microgreens. Adding natural nutrients will surely help further. You need to remember that microgreens grow well in moist soil, but it shouldn’t be too soggy or it will damage them completely. Harvesting Time – Plants emerge within 5 days and they are completely ready to harvest when the leaves have unfolded and they are about 2 inches tall. Watch out for the second set of leaves, often referred to as ‘true leaves’. This is the harvesting time, though many growers like to let them grow a week more. You need to pick them at the base, close to the dirt.
Parents underestimate prevalence of cyberbullying As teens become increasingly connected to each other and the outside world online, parents are often in the dark about their kids' digital interactions and commonly underestimate their kids' exposure to cyberbullying. Among other goals, good social science fills in the numbers to back (or disprove) popular perceptions. It's easy to feel as though our kids are running wild online, and that we don't know the half of it. According to a Cornell University paper entitled "Peers, Predators, and Porn: Predicting Parental Underestimation of Children’s Risky Online Experiences" and published in the Journal of Computer-Mediated Communication, not knowing the half of it is almost literally true. Parents reported a number of potentially dangerous behaviors (their kids being cyberbullied, cyberbullying others, or being approached by a stranger online) at about half the rate at which their kids reported the same experiences. When parents had an inaccurate view of their kids' behavior, they generally underestimated rather than overestimated. Ten times as many parents who were inaccurate about their kids being cyberbullied underestimated the fact, for example, and a similar ratio underestimated their kids' cyberbullying others. The "Peers, Predators, and Porn" study touches on an intriguing observation about parenting styles relevant to how online socializing is regulated within a household. Parents are generally categorized as inclined toward one of three parenting styles – permissive, authoritarian, and authoritative (Baumrind, 1991; Maccoby & Martin, 1983). Permissive parents tend to be more lenient and indulgent in order to avoid confrontation with their children – allowing considerable self-regulation; in some households, children set the rules. In contrast, authoritarian parents expect high levels of obedience, sometimes without explanation, and provide strict nonnegotiable rules. 
Authoritative parents juggle being responsive to their children’s thoughts and ideas, yet firm about expectations in the household (Baumrind, 1991). The implications are that both permissive and authoritarian parenting styles can lead to unpleasant online outcomes for children – the former because a parental avoidance of conflict leads to an underestimation of risky behavior, and the latter because: ...authoritarian parents expect higher levels of obedience, [and so] their children are more likely to conceal risky online experiences, leading the parents to underestimate them. The authors do offer one suggestion that might prove useful to parents – simply move the computer into a public area of the home. Parents are urged to be aware that if their children are online in a private place, it probably indicates that they do not know exactly what they are doing. Moving the computer to a public place in the home seems prudent, however this strategy is difficult to enforce, as the more strictly a parent controls Internet use, the more likely the youth are to find their way around such rules (Byrne & Lee, 2011) All of this feels particularly relevant amid the controversy surrounding the death of Florida 12-year-old Rebecca Sedgwick, who was reportedly bullied by a 14-year-old and a 12-year-old, the former of whom was arrested after supposedly posting on Facebook that she had bullied Sedgwick and didn't care about the consequences. The parents of the 14-year-old are now publicly stating that she can't be Rebecca's cyberbully – they monitored her behavior online, they maintain. True or not, it nods toward a common situation among parents: feeling as though they've got their kids all figured out while missing an entire secret life.
Periapical (root-tip) Abscess A periapical (root-tip) abscess is a pocket of infection at the base of a tooth's root. The tooth becomes abscessed after the pulp (nerve) of the tooth becomes infected. A periapical abscess is usually caused by deep decay or an accident (trauma to the tooth involving nerve damage). A periapically abscessed tooth will require either Root Canal Therapy or an Extraction. In some cases an antibiotic will also be prescribed. A lateral abscess is similar to a periapical abscess, but develops along the lateral surface of the tooth's root. In this case, the infection comes from outside the tooth instead of from within. A lateral abscess can either be gingival (located near the gum line) or periodontal (located deeper in the periodontal tissues). Since most cases of lateral abscess are due to periodontitis (gum disease), treatment is part of an overall periodontal (gum) treatment program. An abscessed tooth is usually sensitive or painful. The discomfort is what normally alerts the patient to the problem. Occasionally, an abscess may be detected on an x-ray and treated before the patient experiences any discomfort. Left untreated, an abscess may compromise the immune system and in some cases may become life-threatening.
http://www.todayssmile.net/abscessed.htm
31 October is the 304th day of the year (305th in leap years), known for being the date of Hallowe'en. There are 61 days remaining in the year.

- 1492: Sir Nicholas de Mimsy-Porpington is executed after accidentally growing a tusk on Lady Grieve.
- 1981: Lord Voldemort murders James and Lily Potter at Godric's Hollow and is defeated for the first time when he fails to murder Harry Potter. The First Wizarding War ends.
- 1991: The first-year Charms class topic is Wingardium Leviosa, the Levitation Charm. Later, the Hallowe'en feast is interrupted by the arrival of a mountain troll. Harry Potter and Ron Weasley save Hermione Granger from the troll in the girls' bathroom. Professor Quirrell attempts to steal the Philosopher's Stone, but is thwarted by Severus Snape, who is attacked by Fluffy as a result. At some point during the day, a Slytherin student asks Harry if he has seen his magic hamster.
- 1992: Sir Nicholas de Mimsy-Porpington's 500th Deathday Party takes place in one of the roomier dungeons at Hogwarts, and it is attended by many ghosts, Harry Potter, Ron Weasley, and Hermione Granger. Ginny Weasley, under the influence of Tom Riddle's diary, opens the Chamber of Secrets for the first time in fifty years. The basilisk that is released from the Chamber petrifies Mrs Norris.
- 1993: Sirius Black enters Hogwarts Castle and attacks the Fat Lady when she refuses to give him passage to Gryffindor Tower. For security, all the students sleep in the Great Hall while the castle is searched for any sign of Black.
- 1994: At breakfast time, Fred and George Weasley fail to pass Dumbledore's Age Line around the Goblet of Fire with an Ageing Potion. Later that day, Harry Potter, Ron Weasley, and Hermione Granger visit Rubeus Hagrid and discover that he has a crush on Olympe Maxime.
The Hallowe'en feast takes place at dinnertime and after it, the Goblet of Fire chooses the Triwizard champions — Viktor Krum for Durmstrang; Fleur Delacour for Beauxbatons; Cedric Diggory for Hogwarts and an unexpected fourth champion — Harry Potter.
http://harrypotter.wikia.com/wiki/October_31
The law is the result of Kepler's long attempt to relate musical harmony to planetary motion and, more specifically, to find a "musically harmonious" relation between the distances of the planets from the sun. The distances vary as the planets revolve around the sun, and so Kepler's first calculations were based on the greatest and least distances. When calculations in distances failed to produce a concord, Kepler turned his attention to the angular velocities of the planets. (Velocities and distances are related, since the closer a planet approaches the sun, the greater its angular velocity with respect to the sun.) Kepler associated the varying angular velocity of each planet with a musical interval, letting the two outer notes of the interval represent the greatest and least velocities. Then he put each planet's interval into a different pitch register, which was determined by the planet's average distance from the sun. Next he tried to relate the average angular velocities to the average distances from the sun. When this approach did not disclose God's harmony, Kepler substituted period of revolution for the average angular velocity. Here he found the relation he had been seeking.

Kepler's third law is usually stated as a mathematical formula: T^2 / D^3 = K, where T is the planet's period of revolution, D is the average distance from the sun and K is a constant. The values of T and D are known for the earth: T is one year and D is 93 million miles. Therefore K can be computed, so that one can compute any other planet's average distance from the sun if the period of revolution is known, or the period of revolution if the average distance is known. The formula looks antiseptic, but like so much of mathematics it springs from the aesthetic power of the natural order. Essentially it is this same aesthetic power that lies at the root of music's relation to number."
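The arithmetic described above can be sketched in a few lines of Python. Normalizing units so that Earth's period is 1 year and its distance 1 astronomical unit makes K = 1; the Mars figure below is a standard value used for illustration and is not taken from the passage itself.

```python
# Kepler's third law: T^2 / D^3 = K, the same constant for every planet.
# With Earth's values (T = 1 year, D = 1 AU), K = 1, so D = T**(2/3).

def distance_from_period(t_years):
    """Average distance from the sun (in AU) for a given orbital period."""
    return t_years ** (2.0 / 3.0)

def period_from_distance(d_au):
    """Orbital period (in years) for a given average distance (in AU)."""
    return d_au ** 1.5

# Mars: period of about 1.881 years
print(round(distance_from_period(1.881), 3))  # ~1.524 AU
```

Either direction of the computation the passage mentions — distance from period, or period from distance — follows from the same normalized formula.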
(Note: Another method was found to calculate planet positions by use of harmonic numbers found and derived from Mayan astronomical calculations. See David Wilcox, Part 11, and Maurice Chatelain's discovery of The Nineveh Constant (David Wilcock).)

68.4 Diurnal Movements of the Planets

"The seventeenth-century astronomer Johannes Kepler continued and expanded upon earlier demonstrations of actual musical relationships among the spheres. By computing the theoretical musical intervals according to the angular velocities of the planets from the sun, he proposed the following proportions as demonstrations of world harmony, of which the musical harmony is the expression:

68.5 Apparent Diurnal Movements

Employing these diurnal movements of the planets in their orbits, which, Kepler points out, are apparent from the viewpoint at the sun, we then arrive at the harmonies between two planets:

- Saturn-Jupiter: diverging a/d = 1/3; converging b/c = 1/2
- Jupiter-Mars: diverging c/f = 1/8; converging d/e = 5/24
- Mars-Earth: diverging e/h = 5/12; converging f/g = 2/3
- Earth-Venus: diverging g/k = 3/5; converging h/i = 5/8
- Venus-Mercury: diverging i/m = 1/4; converging k/l = 3/5

Kepler noted, moreover, that because of the eccentricities of the orbits there were variations in the derived orbits, and, consequently, he presumed from the ratios given in the first section of this chart (which has been separated into three parts for clarity of presentation, the note being held until the last part) that certain concordances were present between the extremes of the apparent movements of the single planets.
By this means he arrived at the third section of his calculations, the harmonies between the movements of each planet itself – that is, the harmony within the movement of each planet between its aphelion and perihelion extremes of distance from the sun":

- Saturn: 1' 48'' : 2' 15'' = 4 : 5, a major third
- Jupiter: 4' 35'' : 5' 30'' = 5 : 6, a minor third
- Mars: 25' 21'' : 38' 1'' = 2 : 3, a fifth
- Earth: 57' 28'' : 61' 18'' = 15 : 16, a semitone
- Venus: 94' 50'' : 98' 47'' = 24 : 25, a diesis
- Mercury: 164' 0'' : 394' 0'' = 5 : 12, an octave and a minor third

(Note: typo – 394 possibly should be 384 in the table above.)

Since, in the course of a revolution, a planet's velocity would change, this theoretical tone would run through the entire range of the interval between the extremes already shown. Venus' almost circular orbit Kepler showed as possessing a single note, whereas the much more elliptical course of Mercury produces a much wider progression. These theoretical harmonies are given in modern musical notation by Mr. Elliott Carter, Jr., as shown below:

It goes beyond our needs in this study to attempt to delve into the complicated astronomical and mathematical calculations by which Kepler arrived at the basis for his theories. These are to be found in most of his works passim, and they are particularly developed in the Epitome of Copernican Astronomy and in The Harmonies of the World."

The Celestial Journey and the Harmony of the Spheres. Impossible Correspondence Index. © Copyright Robert Grace, 1999.
http://greatdreams.com/grace/50/68kepmusphere.html
A virtual private network is a network that uses encryption to provide a secure connection through an otherwise insecure network, typically the Internet – in other words, "private data travelling over public IP infrastructure". This is not a new concept, but it is becoming increasingly important as people need to access their IT resources when away from home, or when using external ISP providers. In this particular case, the connection is a virtual private network connection to the CUDN. You will need:
- A working network connection. The VPN software transmits the data securely over an existing network connection.
- The correct software installed or configured for the version of OS X in use.
http://www.ucs.cam.ac.uk/support/mac-support/macvpn
Placenta Stem Cell Therapy Abroad

One of the most important factors in newborn development is the placenta, the "sac" that protects and provides a liquid-filled home for a developing fetus. During pregnancy, the placenta serves the protection, growth, development and nourishment (via the umbilical cord) of vital organs such as the lungs, liver, kidneys, immune system and digestive system while those organs are developing within the fetus itself.

With new scientific technology, unique multipotent stem cells have been discovered within the placenta. Cells found within placental tissues have been shown to treat diseases such as, among many others: Parkinson's and Alzheimer's disease, auto-immune disorders, stroke, lupus, muscular dystrophy and cerebral palsy.

While the stem cell controversy is still up for debate, many find the use of placenta stem cells morally acceptable and ethical because newborn babies no longer need them after birth; placental tissues and afterbirth have traditionally been discarded. In addition, acquiring placental stem cells causes no harm to the fetus, embryo or mother. The placental stem cells are harvested after the baby is born via cesarean section so that the placenta is not contaminated during a traditional birth.

Placenta Stem Cell Therapies and Treatments

Placenta stem cell treatments are uniquely determined based on medical need or condition. A physician will review the patient's medical record and decide where to place the stem cells during a patient consultation. Determining where to put the stem cells is vital to offer the greatest impact on the patient's condition and generate the best possible results. Placenta stem cell therapy has been utilized for more than twenty years in Europe for the treatment of multiple disease processes, with promising results. Once the administration sites are determined, the patient is given a local anesthetic at the implant site.
Many patients find the treatment painless. One such technique to implant placenta stem cells was devised by Dr. Omar Gonzalez of the Integrative Medical Center in Mexico. Once the anesthesia has begun to work, the physician makes small incisions in which to place the placental stem cell implants. The placental tissue is placed within the incision, which is then closed with a suture. Results from the stem cell implants are seen anywhere from 24 hours on, though it takes about six months to one year for the placental tissue to be fully absorbed by the body. The physician might also decide to give stem cell injections at different locations on the body to increase the positive effects of the implant procedure.

Benefits of Placenta Stem Cell Therapies

Undergoing stem cell therapy is ideal for anyone who suffers from numerous medical issues or ailments and has not seen results from other treatment plans. Painless and effective, placental stem cell therapy has been shown to drastically improve many medical conditions. For those who might find the use of stem cells to be a debatable issue, many ethicists and physicians agree that placental stem cells are ethical, since the newborn baby no longer needs the placenta and it is traditionally discarded after birth. Stem cells offer a renewable source of new cells for the body, to replace damaged cells and treat many medical issues such as those listed above.

Cost of Placenta Stem Cell Therapies

Stem cell therapy treatments are not yet approved in the United States, so individuals wishing to enjoy the benefits of any type of stem cell technology must venture outside American borders. Such treatments in the U.S. are projected to cost $25,000-$30,000 in the future, depending on the number of implants, while in China prices are generally about $40,000 and placenta stem cell treatments in India are approximately $25,000. Costs in Mexico are competitive.
For those who wish to undergo stem cell therapy through medical tourism, costs usually include travel expenses and any additional medical attention needed.

Choosing a Doctor for Stem Cell Therapy Treatments

Many facilities worldwide offer stem cell therapy, but it's important to locate accredited and experienced physicians who deal with placenta stem cell treatments. Check references and resources to determine whether physicians or specialists are trained, experienced and accredited in placenta stem cell therapy in their country of origin, and that facilities provide state-of-the-art technology and equipment and have a well-maintained, trained and educated staff. Check out some of the stem cell doctors we collaborate with. If you want to find out where you can get placenta stem cell therapy, or you want to ask us a question, let us know by using the button below.
http://www.placidway.com/subtreatment-detail/treatment,31,subtreatment,99.html/Placenta-Stem-Cell-Therapy-Treatment-Abroad
One of the things mothers worry about when taking care of their baby is having enough breast milk. It's a natural fear, as a baby's diet consists purely of breast milk for the first 4 to 6 months of life. A mother would be increasingly concerned if her baby is a voracious eater.

Certain circumstances can cause your breasts to decrease milk production. These can include stress, illness, lack of nutrition, dehydration, or going back to school or work. The best way to maintain and increase breast milk is through constant feeding and pumping. Normally, breast milk is produced when your body and your baby are in sync. You will notice that you experience breast tenderness whenever you see your baby, whenever your baby cries, or (if your baby is on a regular schedule) when it is time to feed.

The more you feed and pump, the more milk your breasts will produce. Try to pump your breasts in between feedings. If you have time, pump each breast for 5-10 minutes after each feeding. A hospital-grade, double-motor breast pump is perfect for this, as it pumps your breast at the same frequency as your baby. If you're at work, steal time to pump your breasts so that when you go home your breasts are still able to produce milk for your baby.

Eat right and drink right. Proper nutrition and hydration ensure a plentiful milk supply. A nursing mother needs to consume about 1,800-2,200 calories per day. Empty calories (i.e. doughnuts, junk foods) are not advisable. What you need are foods rich in calcium, vitamins, and protein. As much as possible, unless otherwise indicated by a medical condition, drink 2 liters or more of water. The more water you consume, the more milk your breasts will be able to produce. Eating oatmeal daily also increases milk supply. Exactly how oatmeal increases milk production is still unknown; however, a lot of mothers have found success with eating oatmeal daily.

Try to relax before and during breast feeding.
Tension and stress can lessen milk production. Lounge on your sofa, watch TV, read your favorite book or listen to music to relax your muscles. If you're sleep-deprived, have naptimes with your baby. Consider co-sleeping so you and your baby will get adequate sleep.

Don't give your baby water or juice in between feedings. If your baby's stomach is full to capacity, your baby will not spend much time getting milk from your breast, and this would result in decreased milk production. Similarly, don't give your baby too much sucking time with a pacifier; let your breasts serve as one if your milk supply is dwindling. Because of the increased sucking time, your breasts will be stimulated to produce more milk.

If all the above options fail, make an appointment with a lactation consultant. Bring your baby along so that she can assess whether feeding technique is the problem (positioning, latching). Latching problems can come from using artificial nipples, alternating feeding between breast and bottle, or having too much sucking time with a pacifier.

There are herbal remedies that are claimed to help increase milk supply. Fenugreek is increasing in popularity as an herbal supplement said to increase milk supply on both a short- and long-term basis. Caution is advised when taking this supplement, and a doctor's approval is needed before you undergo the regimen. Mothers with diabetes or taking anti-coagulants are not advised to take this herbal supplement.
http://newsarticlesonhealth.com/review-on-increasing-breast-milk-supply/
Making Eye Contact

While students with Autism Spectrum Disorder (ASD) are thought to be visual learners, they nevertheless, or perhaps because of their heightened visual processing abilities, find establishing and maintaining appropriate eye contact or eye gaze with communication partners extremely difficult. Anecdotal evidence suggests that people with ASD may find eye contact difficult when engaging with a communication partner because processing speech and facial expression at the same time results in sensory overload. Therefore it should not be assumed that because a person with ASD is not looking, they are not attending to and processing what is being said.

TED'S ICE CREAM ADVENTURE

This game teaches the child that they have to look at someone to communicate, and that if someone is looking at them, they should respond correctly by looking back. The game does not force eye contact by insisting 'look at me' but makes it a gradual process. The large-eyed teddy bear characters are super cute and non-confrontational. The game reinforces the following keywords that parents and teachers can generalize into real-world situations: look, looking, eyes.
http://www.autismgames.com.au/game_eyecontact.html
Egypt and Islamic Art

With the advent of Islam to the country, Egyptians fell in love with Islamic art. One outstanding advantage of Islam is that it is both a spiritual and civic religion. In other words, besides religious issues, Islam addresses and organizes various walks of life. Architecture in general, and urban architecture in particular, is the physical receptacle of community life. The principles, values and teachings of Islam clearly define the appropriate urban and architectural patterns.

Unity and Diversity

As a result, Islamic architectural and town-planning styles and patterns have shown striking similarities all over the Islamic world in general, with specific variations to suit different environmental conditions. Islamic art is therefore characterized by both unity and diversity, which can be clearly seen in the common style of mosques, houses and residences, and in the similar architectural designs and titles of residential districts and market places in Islamic towns. For example, Cairo's markets of coppersmiths, jewelers, glass-makers, and spice and silk dealers are echoed in Tunis, Fez, Damascus, Baghdad and other cities in both the Mashreq and Maghreb. As Islam is a religion of peace and harmony, so is Islamic art. When Egyptians embraced Islam, they also loved Arabic, the medium of the Holy Quran.

Music of Language

Influenced by the beauty of the Arabic language, the Egyptian artist made use of its intrinsic music in his artistic creations. A close scrutiny of the ground marble at Sultan Hasan's Mosque in Cairo, for example, will reveal the contrast and harmony of colors, in the same pattern as paronomasia and antithesis in figurative literary language. Sometimes, an artist created rhythms of calligraphy and formations. Although the Egyptian artist in the Islamic era inherited a system of interlaced block masonry, he introduced his own system of color distribution.
Impressed by harmony and music, together with a long history, the Egyptian artist, using such distributions, created a plethora of lively works of art. The Qibla at the mosque itself chants, in colors, calligraphic distributions and verses of the Holy Quran. With an entrenched sense of civilization, the Egyptians had recognized that humans yearn for tunes and harmony. If a noble meaning is coupled with fine melody, it will soon be heart-felt. That is why Egypt was the first Islamic country to know melodious recital of the Holy Quran.

Art of Engraving

As the Egyptian Muslim artist inlaid and adorned, he recalled earlier experience of stone sculpture, gilding, forming, painting, etc. Hence came the Egyptian Mishkahs, as though formed out of the light of rare gems. In addition to its ancient arts of sunken and embossed engraving, filigree and enamel coating, Islamic Egypt introduced the art of inlaying, which was adapted but never matched by Europe, and Italy in particular. In the Fatimid era, deemed by historians to be a turning point in the history of Egypt in terms of religion, the art of deep engraving, earlier created by the ancient Egyptians, re-emerged. This exquisite formation can be seen in the double minbar (pulpit) at the Goos mosque, the mihrab of Sayyda Roqayya and panels of the minor Fatimid palace on display at the Islamic Museum. With its geometric and star-shaped ornamentation, Egypt had outrun the most famous Islamic antiquities in the world. This is evidenced in the mausoleums of Imam al-Hussein and Imam al-Shafiie and in the minbar of Tolon's mosque. Decoration is indeed an ancient Egyptian art, as reflected in its antiquities and inscriptions. The magnitude of such decorations indicates that they were not simply created with the sole purpose of adornment but rather inspired by the Egyptian vitality and intimate desire to express the deep rhythms of life in a visible way. Thanks to Egypt, wooden lattice work, wood assembly and lathe-turning have spread all over the world.
These styles, commonly known as Arabesque, had been adapted by the Arabs and later copied by Europe through Andalusia. Egypt had presented to Islamic art al-Jamie al-Aqmar, whose facade is a genuine piece of fine art. In this mosque there appeared, for the first time, stalactites (Muqarnasat), which later became a unique product of Islamic art. Another Egyptian innovation was gilded mosaic, used as coating for the dome of King as-Saleh Najm-Eddin Ayyoub. Egypt was the first to use the vaulted ceiling, and it upgraded the dome, which later became one of the most significant features of Islamic art. This art rose to a peak during the Mameluke era. In Egypt, the dome was the pyramid's cap. In the hands of the Muslim Egyptian artist, lines became more curved and softer as a side effect of the new lenient religion. Egypt also introduced glazed ceramic and roof tiles. It further developed mosque architecture, particularly minarets, which were a natural extension of Pharaonic obelisks. Hence, it affirmed the importance of both the cultural and religious dimensions in urban architecture and construction. Undoubtedly, the pre-Islamic cultural heritage had a clear impact on Islamic urban architecture, particularly in view of the fact that Islam as a religion was highly responsive to generally acknowledged, beneficial practice.

Religious Architecture

Islamic traditional and particularly religious monuments are still extant and functioning, while civil and military monuments such as gatehouses, walls, towers and castles are now deserted and unpopulated remains. Fatimid Cairo houses such a great number of gorgeous Islamic monuments that it deserves to be called an open museum. Egypt has always been keen to maintain and safeguard its wealth of Islamic heritage. However, an overall, unprecedented facelifting scheme for the Fatimid Cairo area is underway, in a bid to restore the beauty of the once prosperous district of Cairo.
Mishkah: A Marvelous, Dazzling Islamic Art

With the help of a rich heritage of well-established traditions of craftsmanship and artisanship dating back millennia, the handicraft of painting on glass flourished during the Mameluke era. The glazing industry in Egypt prospered during the 16th Century BC and further progressed over time. Muslim contributions to this industry added many experiences in both art and application. However, the glazing industry reached particularly high peaks in the Mameluke era, when Egyptian artisans developed a variety of processing techniques such as blowing, printing, gilding and coloring. Through the Ayyubid and Mameluke eras, Egyptian artisans inherited and further advanced these artistic and technical traditions. Particularly in the Mameluke era, artisans pushed the limits of the art of Mishkah-making.

A Mishkah is a glass housing for lanterns, used both to protect candle or torch light against air currents and to diffuse light evenly over the place. The lantern is fixed inside the light housing with wires pegged to the edges. The Mishkah itself was hung from the ceiling of mosques with chains of silver or brass tied to handles around its body. There still exist intact about 300 Mishkahs, of which a collection of the largest in number and finest in value and artistic beauty is kept at the Cairo Museum of Islamic Art. Almost all Mishkahs on display belong to the Mameluke state, where Mishkah-making reached its peak, particularly in the 8th Century AH (14th Century AD).

Value of the Mishkah

This art flourished because it was badly needed to light and adorn huge religious facilities. Mameluke sultans, princes and gentry vied for acquiring Mishkahs in an attempt to gain Allah Almighty's favor. As a matter of fact, most of the Islamic buildings that still exist in plenty in the older parts of Cairo date back to the Mameluke era.
These include, inter alia, mosques of all types, Bimarstans (hospitals) for medical treatment and the study of medicine, khanegas and zawayas (prayer and accommodation rooms for sufists and ascetics), mausoleums, vaults, etc. Due to the pressing need for Mishkahs, they were in high demand. As a result, this industry highly prospered and there emerged a large number of artisans.

Mishkahs were adorned with a variety of ornaments, chief of which was Arabic calligraphy. In terms of content, calligraphy on Mishkahs ranged from religious to memorial and historical inscriptions. While religious inscriptions often comprised some Quranic verses, memorial ones contained historical and social data often of high importance, such as the names, titles and positions of the principal, who could be a sultan, prince, employee, etc., followed by some appropriate supplications. The writings could also indicate the place where the Mishkah was destined to be installed, such as the sacred chamber of Prophet Mohammad or other mosques or schools. A Mishkah could also show the name of the artisan who made it, such as the signature of Ali Ibn Mohammad of Makkah that appears on a Mishkah presently on display at the Cairo Museum of Islamic Art.

Calligraphy on Mishkahs

Passages of calligraphy often circumscribed, in wide bands, the neck, base or the whole body of a Mishkah. The style most commonly used in handwriting on Mishkahs was one known to scholars of arts and antiquities as the Mameluke style of calligraphy, characterized by elegant curvature and flowing, proportionate letters. Since the 12th Century AD, the Naskhi calligraphic style (the common Arabic cursive script) was used as a substitute for the Kufic style as a monumental one. Naskhi-style handwritings on Mishkahs were in many cases inscribed against a background of floral ornaments consisting of harmonious clusters of plant stems from which leaflets and flowers branched. This is a common style of ornamental Arabic calligraphy.
It reflected a happy harmony between Kufic calligraphy, with its curved and straight lines, and floral decorations with clusters and arches. However, using the same background together with the Naskhi style sometimes led to interlocking between letters and ornamental plant stems, unless the enamel colors were different for the two elements. Nevertheless, Egyptian Muslim artists could, in many instances, achieve a fine harmony between Naskhi calligraphy and its floral background. In most cases, a decorator had to offset the horizontal extension of the arrangement of calligraphy bands around the Mishkah by placing in between, at equal intervals, round decorative elements containing, in addition to floral ornaments, supplications in favor of the Sultan. A Mishkah often showed the owner's name and logo, drawn in a gorgeous decorative style.

Mishkahs and Logos

A logo is a specific sign or emblem a person takes up exclusively for himself. A logo normally consisted of a drawing of a specific object or creature, such as an animal, bird or flower, or more than one at the same time. Logos were usually inscribed on all personal property, including buildings, garments, utensils, metalwork, etc. Logos were widely used particularly during the Mameluke era, when they became a formal tradition strictly maintained and cherished by their holders. They ultimately turned into an exclusive prerogative of the Sultan and princes. Probably, the logo format was relevant to the vocation or position of the owner upon being appointed and granted his logo. For example, a Saqi would be given a logo shaped like a cup, and a keeper of the ink-pot a logo shaped like an ink-pot or stylus holder. Sometimes, the logo of a prince reflected the meaning of his name, as was the case with prince Aqoush (which means a white bird). Some people would take up logos in the form of animals or birds known for their power, such as lions, eagles or cocks, symbolizing their might or greatness.
Art museums abound in many gorgeous Mishkahs with their splendid decorations. Striking examples of this art can be seen in the Mishkahs still existing at Sultan Hassan's Mosque in Cairo.

Coptic Art: Where All Civilizations Converge

Upon first coming across the phrase "Coptic art", one may think it means Christian art. However, Christian art refers to the art of Christians worldwide, while Coptic art means the Egyptian Christian art. Originally, the word "Coptic" is derived from the ancient Egyptian word Ha-Ka-Petah, associated with the Petah temple at Memphis. The name of the capital was used figuratively to denote the whole country, then called Egiptos. The name was converted into Arabic from Greek as Gipt, or Copt as it was later known in English, as a specific denomination for Egyptian Christians.

Influences and Effects

Initially, Coptic art was influenced by some features of the ancient Egyptian civilization, then by the Greek and Roman interloping civilizations, due to the distinguished location of Alexandria, and finally by the Islamic civilization. However, Coptic art is characterized by its ability to adapt to whatever would establish its identity. Despite the brevity of its evolution, it is an authentic and highly distinct art with a marked ability to adapt to different creeds, beliefs and traditions. Coptic artists were inspired by some Egyptian, Grecized, Greco-Roman, Persian, Byzantine and Indian forms. But despite these several contributions, Coptic art has remained Coptic after all. It has never abandoned its indigenous heritage or the traditional principles of folk arts. No wonder, then, to see models of Coptic art adorning many museums all over the world. For instance, the Louvre Museum in Paris has a special department for Coptic art, as do the Berlin, Metropolitan, London, and Brussels museums.
These contain important pieces of sculpture, marble, textile, ivory, metal and ecclesiastical tools for which the Copts were famous from the second century AD to the sixth century AD. World-famous universities have taken a deep interest in this art. Some, such as Leiden in Holland and Münster in Germany, provide specialist academic and post-graduate studies in the field, while others, like the universities of Warsaw and Paris, send missions to study Coptic art on site. Moreover, Coptic art was able to respond to contemporary artistic concerns. It not only handled subjects from its own perspective, but also attended, in line with modern trends, to decorations that enhance the aesthetic value of works.

Themes of Coptic Art

Historians mention many subjects tackled by Coptic art. It is interesting to note that St. Luke the Apostle was himself an adept painter. He is believed to have drawn the Virgin Mary holding the infant Jesus Christ, a drawing that later became a stereotype in all churches. The historian Father Vancelip mentioned that during his visit to the Cathedral of Alexandria he witnessed an icon of the angel Michael drawn by the hand of St. Luke himself. Among the witnesses to Coptic art in its early centuries were the catacombs, a series of underground tunnels used for burying the poor, often in engraved boxes. The walls of these tunnels were full of symbolic drawings, like the picture of a fish that stands for Jesus Christ. Coffins made of marble and carved stone were used for burying the rich. Since Christianity, albeit widespread, was officially recognized only in the third century AD, pagan and Coptic arts overlapped, with the same artist decorating both Christian and pagan coffins. Gradually, coffins came to be made in the Coptic style, with inscriptions showing the miracles of Jesus Christ. Chests, too, were used to keep things as a source of blessing.
The chest was covered with drawings of the Virgin Mary, the nativity of Jesus Christ and an icon of baptism, each drawing depicting a place associated with the holy visit. One such chest, dating back to the end of the fifth century AD, is kept at the Vatican Museum. Another witness is a group of vessels sold in Jerusalem and the holy places during the fifth and sixth centuries. They were filled with water from the holy river for visitors to the holy places to take back home, and were adorned with drawings of the cross on one side and of the Resurrection and seven holy scenes on the other. These vessels were named after the city of Monza in Italy, where a large collection of them was found.

Evolution of Coptic Art

The evolution of Coptic art can be traced in three stages:

1- Stage of Awakening
By the end of the third century AD, Coptic art started to emerge, deriving effective elements of its awakening from the history of ancient Egyptian art. However, Coptic art was much more concerned with the idea than with the contrast between mass and space, which it believed would overshadow the idea. Coptic art in fact sought spiritual satisfaction and the perception of the invisible through symbols. It had initially been influenced by Greek and Roman mythology: unaware at first of Christian traditions, Coptic artists drew nude figures in the style of Greek mythology, and only when they grasped the principles of Christianity did they start to make drawings expressing its solemn traditions.

2- Stage of Consummation
During the mid-fifth century, several drawings appeared expressing a blend of pagan and Christian beliefs. Then, gradually, the pagan drawings disappeared and only the Christian ones, with their purely Coptic symbols, persisted. During this period, Coptic decoration, unlike Greek and Roman art, no longer had an architectural function.
Coptic sculptors chose decorations specifically for ornamentation and beautification. During this period, Coptic art created great works expressive of innermost feelings, tending toward idealism, and translated reality into levels more elevated than realism and expressionism.

3- Stage of Proliferation
This stage of wide dissemination ran from the eighth to the twelfth century. As Coptic works of art continued to adhere to Hellenized trends, researchers concluded that Coptic art would remain unchanged, and thus arbitrarily disregarded the bulk of the great Coptic production made after the Islamic conquest. They later came to realize, however, that Coptic art was able to grow and evolve even under Islamic rule. Coptic art had progressed, in its stages of boom, from a folk art to an ornamental art, associating itself with the traditions of Pharaonic art. Despite the Hellenistic occupation, it emerged as the real heir to Pharaonic art, and it was able to inspire Nubian art with some of its elements and to lend Islamic art some of its main distinctive features, inside and outside Egypt.
305 - Accounting - BACC AY 2014-2015

Student Learning Outcomes
- Students will use accounting systems to prepare accounting information.
- Students will evaluate and interpret accounting information.
- Students will collect and summarize evidence to 1) justify a position or opinion or 2) support a course of action using accounting standards, regulations, theory, laws, or accounting records.
- Students will understand the fundamental principles of financial and cost accounting, taxation, and auditing.
Scientific Name: Acipenser nudiventris
Species Authority: Lovetsky, 1828
Red List Category & Criteria: Critically Endangered A2cde ver 3.1
Assessor(s): Gesner, J., Freyhof, J. & Kottelat, M.
Reviewer(s): Pourkazemi, M. & Smith, K.

Justification: The species is known from the Black, Aral and Caspian seas. However, it is extirpated from the Aral Sea, nearly extirpated in the Black Sea basin, and there are only occasional records from the lower Volga. The only remaining populations occur in the Ural River (Russia, Kazakhstan) and possibly the Rioni (Georgia - last recorded in 1997 through bycatch; there are no recent surveys) and the Safid Rud (seven individuals recorded in 2002) in Iran. In Europe, few individuals are thought to survive in the Danube; indeed the species is considered possibly extinct there. Even though there is no catch data, the species is suspected to have undergone a population decline of more than 90% over the past three generations (estimated at 45 years), which is expected to continue. It is believed to be on the verge of global extinction. The largest population, in Lake Balkhash, was introduced for commercial reasons and lies outside the species' natural range.

Range Description: This species has been recorded from the Black, Azov, Caspian and Aral Seas, and some rivers (Danube up to Bratislava, Volga up to Kazan, Ural up to Chkalov, Don and Kuban, Rioni). It was introduced to Lake Balkhash (Kazakhstan), to the upper Ili River in China, and to the Syr-Darya River (Aral basin) in the 1960s.

Native: Azerbaijan; Georgia; Hungary; Iran, Islamic Republic of; Kazakhstan; Russian Federation; Serbia (Serbia); Turkey
FAO Marine Fishing Areas: Mediterranean and Black Sea

Population: It is currently known from the Caspian Sea, where it ascends only the Ural River (where it reproduces naturally) and the Sefid Rud River (where there is no natural reproduction), in which 5 fish were caught in 2002 (Parandavar et al. 2009). In the Black Sea, it ascends the Rioni (last recorded in 1997 through bycatch; Zarkua pers. comm.). In the Danube it was last recorded in 2003 in Serbia at Apatin (released alive) and in 2005 in the Mura in Hungary (killed); both fish were males (Simonovic et al. 2003; Streibel pers. comm.). In Romania, according to a fishermen survey carried out between 1996 and 2001, 15 individuals were caught by Romanian fishermen (the species was last scientifically recorded there in the 1950s) (Suciu et al. 2009). Little catch data is available. It has not been caught in Ukraine for the past 30 years. In Kazakhstan, 12 tonnes were caught in 1990 and 26 tonnes in 1999; in Iran, 1.9 tonnes were caught in 1990, 21 tonnes in 1999 (CITES Doc. AC.16.7.2) and 1 tonne in 2005/6, with 0.5-1% of the total sturgeon catch in Iran over the past 20 years belonging to this species (Pourkazemi pers. comm.). According to the Caspian Aquatic Bioresource Commission (CAB), since 2001/2 the export quota for caviar has been zero for all Caspian range states.

Current Population Trend: Decreasing

Habitat and Ecology: At sea, close to shores and estuaries; in freshwater, deep stretches of large rivers, with juveniles in shallow riverine habitats. The species spawns in strong-current habitats in the main courses of large, deep rivers on stone or gravel bottoms. It is anadromous (spending at least part of its life in salt water and returning to rivers to breed), with some non-migratory freshwater populations. Males reproduce for the first time at 6-15 years, females at 12-22, with an average generation length of 15 years (but in the Danube the average population age has now increased, while in the Caspian Sea it is decreasing because of overharvesting). In most drainages there are two migration runs, in spring and autumn; individuals migrating in autumn remain in the river until the following spring to spawn. Females reproduce every 2-3 years and males every 1-2 years, in March-May and at temperatures above 10°C. Most juveniles move to sea in their first summer and remain there until maturity, though some individuals remain in freshwater for a longer period. It feeds on a wide variety of benthic fishes, molluscs and crustaceans, and has the highest relative fecundity of any sturgeon species (Chebanov pers. comm.).

Generation Length (years): 12-22
Movement Patterns: Full Migrant

Use and Trade: The skin is used as leather and the caviar for cosmetic and medicinal purposes. The cartilage has medicinal uses, the intestine is used in sauces and to produce gelatine, and the swim bladder is used as glue. Natural and ranched individuals are used from Kazakhstan, ranched individuals from Iran, and captive-bred individuals from Russia. All international trade is historical, as trade was banned from 2001, although some illegal trade does exist. Overharvesting, bycatch and illegal fishing (poaching), along with dams, water abstraction and drought, have led to the loss of spawning habitats and grounds and caused massive population declines. In the Caspian Sea and Sea of Azov, the illegal sturgeon catch for all species was estimated to be 6 to 10 times the legal catch (CITES Doc. AC.16.7.2). A. stellatus transferred from the Caspian Sea, carrying a nematode parasite, were introduced to the Aral Sea in the late 1960s and, along with increasing salinity, helped cause the extirpation of A. nudiventris there within a few years (Gessner, J. pers. comm.). The Allee effect is also a potential threat to the species (Gessner, J. pers. comm.). Hybridisation between this species and other sturgeons, especially A. stellatus, occurs naturally in freshwater (Chebanov pers. comm.).

Conservation Actions: There is a zero export quota for caviar (CAB), but there is still a catch for domestic use. Iran and Russia have established gene bank conservation for this species, for both live specimens and cryopreservation, with DNA and tissue samples. In 2004, progeny were produced from captive-bred individuals and juveniles were released into the Don and Kuban rivers; there are between 15 and 20 'farms' in Russia (Chebanov pers. comm.). In Iran, 80,000 to 1 million fingerlings (3-5 g each), from ranched individuals, are released annually into the Caspian Sea (Pourkazemi pers. comm.). The species was listed on CITES Appendix II in 1998.

Citation: Gesner, J., Freyhof, J. & Kottelat, M. 2010. Acipenser nudiventris. The IUCN Red List of Threatened Species 2010: e.T225A13038215. Downloaded on 27 June 2016.
Betsy Franco does a fantastic job here of explaining a math concept in a way that feels like poetry. Zero is not necessarily an easy concept for young children - you can't count to zero. Franco demonstrates zero in ways children can easily understand: zero is the leaves on the tree in autumn, zero is the bikes in the bike rack on the last day of school, zero is the sound snowflakes make on your mittens. The simple watercolor illustrations and spare text make for a book that is beautiful to read and that enhances a mathematical concept.

We were reading about zero because David and I have started using Life of Fred as a math supplement. If you don't know Fred, it's a funny, quirky approach to math. David adores it. The concepts presented are sometimes deceptively simple, and the elementary books can be read through very fast. However, one of the things I've liked about Fred is that it's opened the door to talking about math and playing with math on our own. One of the concepts presented in Life of Fred: Apples was the idea that sets of zero are the most common sets. One exercise was to play a game where one person thought of a set of one and the other had to think of three sets of zero to go with it.

Set of one: Mommy has one nose.
Sets of zero: Mommy has zero purple polka-dotted noses. Mommy has zero elephants sitting on her nose. Mommy has zero roses growing out of her nose.

You get the idea. Much hilarity for a five year old. For more great juvenile nonfiction check out Nonfiction Monday at Rasco from RIF.
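For the programming-inclined, the "sets of zero" game translates directly into code: a set of zero is just an empty set. This is only an illustrative sketch; the set contents echo the examples above and are not from the book itself.

```python
# One set of one, matched with three sets of zero (empty sets).
set_of_one = {"Mommy's nose"}

sets_of_zero = [
    set(),  # purple polka-dotted noses
    set(),  # elephants sitting on Mommy's nose
    set(),  # roses growing out of Mommy's nose
]

# A set of zero has no members at all -- which is why they are so common.
assert len(set_of_one) == 1
assert all(len(s) == 0 for s in sets_of_zero)
print(f"{len(set_of_one)} set of one, {len(sets_of_zero)} sets of zero")
```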
Prosthetic limbs are artificial devices that provide a portion of the functions normally provided by natural arms and legs. Often employed when a limb is lost to an accident or birth defect, prostheses make it possible for individuals to enjoy more mobility and a better quality of life. A prosthetic limb may be a simple device that is functionally efficient, or an enhanced limb configured to have an appearance and range of motion very similar to those of a natural limb.

The concept of prosthetic limbs is found in many ancient cultures. Early prosthetic legs were carved from wood or other materials to provide warriors who had lost a portion or all of a leg in battle with at least some degree of mobility. In some cases, the design was very similar to that of a classic peg leg, essentially fitting to the knee joint or hip and making it possible to walk with the aid of a cane or staff.

Today, artificial legs are often sophisticated devices that function with the use of a power source and sometimes sensors that make it possible to create a form of communication between the device and the individual. Unlike the legs of centuries past, these newer prosthetic limbs often feature hydraulics that allow the leg to bend with the aid of an artificial knee joint, as well as balance on a prosthetic foot that mimics the actions of a human foot. Cosmetically, some replacement legs are covered with synthetic materials that have an appearance and feel similar to those of natural skin.

Prosthetic arms are also more sophisticated than in times past. Different models can be used to replace a forearm and hand, or an entire arm. Sensors mounted on the device at the point where the artificial arm fits onto the body help the wearer achieve more control of the limb, making it easier to operate the mechanism and simulate the natural movements of a human arm.
The latest generation of artificial arms is often equipped with hands configured with digits that provide some ability to grip and perform functions in a manner similar to that of a human hand.

In general, prosthetics are used to replace a lost limb and restore some range of motion and mobility. However, there are also customized prosthetic limbs specifically configured to enhance a particular function. One example is prosthetic leg attachments that bear little resemblance to a natural foot but provide excellent balance. Some of these designs also enhance the ability to run as well as walk, allowing athletically minded people to continue enjoying physical activities even after losing an arm or leg.

While prosthetic limbs in years past were not often built to the specifications of an individual, customized limbs are much more common today. This makes it possible to acquire an artificial limb that fits more comfortably, is easier to operate, and in general provides an enhanced level of service to the wearer. While many models are expensive, others are more affordable and can often be secured with the use of insurance coverage or programs designed to assist people with physical disabilities. It is not unusual for amputees to own several prosthetic limbs: some may be used for particular tasks, such as participating in sports, while others may be constructed to cosmetically resemble human limbs and are worn in social situations.
Blinking. It's not just about lube...

Why Do We Blink So Frequently?

We all blink. A lot. The average person blinks some 15-20 times per minute, so frequently that our eyes are closed for roughly 10% of our waking hours overall. Although some of this blinking has a clear purpose (mostly to lubricate the eyeballs, and occasionally to protect them from dust or other debris), scientists say that we blink far more often than necessary for these functions alone. Blinking is thus a physiological riddle: why do we do it so darn often? In a paper published today in the Proceedings of the National Academy of Sciences, a group of scientists from Japan offers a surprising new answer: briefly closing our eyes might actually help us to gather our thoughts and focus attention on the world around us.

The researchers came to the hypothesis after noting an interesting fact revealed by previous research on blinking: the exact moments when we blink aren't actually random. Although seemingly spontaneous, studies have revealed that people tend to blink at predictable moments. For someone reading, blinking often occurs after each sentence is finished, while for a person listening to a speech, it frequently comes when the speaker pauses between statements. A group of people all watching the same video tend to blink around the same time, too, when the action briefly lags. As a result, the researchers guessed that we might subconsciously use blinks as a sort of mental resting point, briefly shutting off visual stimuli to allow us to focus our attention.

To test the idea, they put 10 volunteers in an fMRI machine and had them watch the TV show "Mr. Bean" (they had used the same show in their previous work on blinking, showing that blinks came at implicit break points in the video). They then monitored which areas of the brain showed increased or decreased activity when the study participants blinked.
Their analysis showed that when the Bean-watchers blinked, mental activity briefly spiked in areas related to the default network, the areas of the brain that operate when the mind is in a state of wakeful rest rather than focusing on the outside world. Momentary activation of this alternate network, they theorize, could serve as a mental break, allowing for increased attention capacity when the eyes are opened again.

To test whether this mental break was simply a result of the participants' visual input being blocked, rather than a subconscious effort to clear their minds, the researchers also manually inserted "blackouts" into the video at random intervals that lasted roughly as long as a blink. In the fMRI data, though, the brain areas related to the default network weren't similarly activated; blinking is something more than temporarily not seeing anything. It's far from conclusive, but the research demonstrates that we do enter some sort of altered mental state when we blink: we're not just doing it to lubricate our eyes. A blink could provide a momentary island of introspective calm in the ocean of visual stimuli that defines our lives.
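The "10% of waking hours" figure quoted at the top is easy to sanity-check from the blink rate alone. The average blink duration used below (about 0.3 seconds) is an assumed value, since the article gives only the rate, not the duration.

```python
# Rough check: 15-20 blinks/min at ~0.3 s per blink gives the share of
# waking time spent with eyes closed. Blink duration is an assumption.
blinks_per_minute = 20      # upper end of the quoted 15-20 range
blink_duration_s = 0.3      # assumed average length of a single blink

closed_fraction = blinks_per_minute * blink_duration_s / 60
print(f"eyes closed for about {closed_fraction:.0%} of waking time")
# → eyes closed for about 10% of waking time
```

At the lower end of the range (15 blinks per minute) the same arithmetic gives roughly 7-8%, so "roughly 10%" is consistent with the quoted rate.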
There are a number of frameworks available for developing cross-platform applications; JUCE distinguishes itself with a combination of consistency, ease of use, and breadth of functionality. JUCE is a C++ toolkit for building cross-platform applications on PC, Mac, Linux, iOS, and Android. It encourages you to write consistent code and is particularly good for complex, customized GUIs and audio/MIDI processing; it also includes a vast range of classes to help with all your day-to-day programming tasks.

Getting Started with JUCE is a practical, hands-on guide to developing applications using JUCE, covering many of the core aspects of the library, from the installation of basic tools to developing examples using many of its classes. The book guides you through the installation of JUCE and covers the structure of the source code tree, including some of the useful tools available for creating JUCE projects. It will take you through a series of practical examples that show you how to create user interfaces, illustrating the key features. You will also learn how to deal with files, text strings, and other fundamental data using the JUCE library. In particular, you will learn how to create user interfaces both in code and with the Introjucer tool, which lays out and configures user interface functionality. You will also manipulate image and audio data and learn how to read and write common media file formats. With this book, you will learn everything you need to know to understand some of the additional helpful utilities offered by JUCE and how to use the JUCE documentation to get started with such classes.

This book is a fast-paced, practical guide full of step-by-step examples which are easy to follow and implement.

Who this book is for

This book is for programmers with a basic grasp of C++.
The examples start at a basic level, making few assumptions beyond fundamental C++ concepts. Those without any experience with C++ should be able to follow and construct the examples, although you may need further support to understand the fundamental concepts.
What is introspection? Nothing! Or rather, almost everything. A long philosophical tradition, going back at least to Locke, has held that there is a distinctive faculty by means of which we know our own minds -- or at least our currently ongoing stream of conscious experience, our sensory experience, our imagery, our emotional experience and inner speech. "Reflection" or "inner sense" or introspection is, in this common view, a single type of process, yielding highly reliable (maybe even infallibly certain) knowledge of our own minds. Critics of this approach to introspection have tended to either: (a.) radically deny the existence of the human capacity to discover a stream of inner experience (e.g., radical behaviorism); (b.) attribute our supposedly excellent self-knowledge of experience to some distinctive process other than introspection (e.g., expressivist or transparency approaches, on which "I think that..." is just a dongle added to a judgment about the outside world, no inward attention or scanning required); or (c.) be pluralistic in the sense that we have one introspective mechanism to scan our beliefs, another to scan our visual experiences, another to scan our emotional experiences.... But here's another possibility: Introspective judgments arise from a range of processes that is diverse both within-case (i.e., lots of different processes feeding any one judgment) and between-case (i.e., very different sets of processes contributing to the judgment on different occasions) and yet also allows that introspective judgments arise partly through a relatively direct sensitivity to the conscious experiences that they are judgments about. Consider an analogy: You're at a science conference or a high school science fair, quickly trying to take in a poster. You have no dedicated faculty of poster-taking-in. 
Rather, you deploy a variety of cognitive resources: visually appreciating the charts, listening to the presenter's explanation, simultaneously reading pieces of the poster, charitably bringing general knowledge to bear, asking questions and listening to responses both for overt content and for emotional tone.... It needn't be the same set of resources every time (you needn't even use vision: sometimes you can just listen, if you're in the mood or visually impaired). Instead, you flexibly, opportunistically use a diverse range of resources, dedicated to the question of what are the main ideas of this poster, in a way that aims to be relatively directly sensitive to the actual content of the poster. Introspection, in my view, is like that. If I want to know what my visual experience is right now, or my emotional experience, or my auditory imagery, I engage not one cognitive process that was selected or developed primarily for the purpose of acquiring self-knowledge; rather I engage a diversity of processes that were primarily selected or developed for other purposes. I look outward at the world and think about what, given that world, it would make sense for me to be experiencing right now; but also I am attuned to the possibility that I might not be experiencing that, ready to notice clues pointing a different direction. I change and shape my experience in the very act of thinking about it, often (but not always) in a way that improves the match between my experience and my judgment about it. I have memories (short- and long-term), associations, things that it seems more natural and less natural to say, views sometimes important to my self-image about what types of experience I tend to have, either in general or under certain conditions, emotional reactions that color or guide my response, spontaneous speech impulses that I can inhibit or disinhibit. Etc. 
And any combination of these processes, and others besides, can swirl together to precipitate a judgment about my ongoing stream of experience. Now the functional set-up of the mind is such that some processes' outputs are contingent upon the outputs of other processes. Pieces of the mind stay in sync with what is going on in other pieces, keep a running bead on each other, with varying degrees of directness and accuracy. And so also introspective judgments will be causally linked to a wide variety of other cognitive processes, including normally both relatively short and relatively circuitous links from the processes that give rise to the conscious experiences that the introspective judgments are judgments about. But these kinds of contingencies imply no distinctive introspective self-scanning faculty; it's just how the mind must work, if it is to be a single coherent mind, and it happens preconsciously in systems no-one thinks of as introspective, e.g., in the early visual system, as well as farther downstream. [For further exposition of this view, with detailed examples, see my essay Introspection, What?]
While REDD+ is aimed at reducing emissions from forests, its effectiveness will depend on how much its benefits trickle down to those living closest to the forest. These same rural households are also best placed to provide local evidence of what works and what doesn't, to influence decisions on REDD+ architecture at the national and international level.

REDD+ (Reduced Emissions from Deforestation and Forest Degradation), an international mechanism that aims to provide incentives to conserve and restore forests, manage them sustainably and enhance forest carbon stocks, faces challenges that go beyond saving or planting trees. Yes, the ultimate goal of any REDD+ programme is to mitigate climate change by curbing the vast deforestation that has occurred in recent history. But without tackling the social issues, REDD+ will not go very far. As Prof. Dr. Niels Elers Koch highlighted at CIFOR's Forest Day, including socioeconomic goals upfront will increase the likelihood of achieving carbon and biodiversity goals. A first step is to find out what has the potential to work and where, and this includes whether it will reflect what people want.

Last Thursday, IIED held a workshop [PDF] at the 18th UNFCCC conference in Doha to explore the challenges and potential of a pro-poor REDD+ approach. One big issue is the costs. If REDD+ is going to work for the poor and genuinely compensate them for lost opportunities, what sort of price are we talking about? It turns out the answer is very different depending on where you are looking. Take Vietnam: REDD+ will never be able to compete with high-value crops in the country. But Adrian Enright from SNV Vietnam showed that in some areas of low-return smallholder agriculture in Lam Dong province, REDD+ compensation wouldn't have to be high to work. Restricting further expansion through forest clearing could be adequately compensated, and bring improvements to the livelihoods of the communities there.
We also heard from Gene Birikorang of Hamilton Resources, Ghana, who showed how the ‘plus’ activities of REDD+ – in this case, tree planting on farms – could in theory be attractive for cocoa farmers. This comes with a caveat, however. Without support to cover the upfront costs of planting the trees, this would be out of reach of the majority of farmers, as the initial tree planting costs would be over 90% of the average annual household income. In such circumstances, it would have more chance of working if combined in the initial years with alternative livelihoods, such as beekeeping, to help finance the transition.

And yet these opportunity costs are only part of the considerations to be taken into account. These analyses assume that benefits from REDD+ will be channelled to those who are bearing the costs. Supporting structures and safeguards are needed to make sure this happens, and they can be costly. SNV have been working on preliminary estimates of a benefit distribution system, designed to ensure that the livelihoods of the poor are not compromised. These early results suggest that the upfront costs will be high but that, once running, operational costs would be much lower. Over time, costs could come down as experience is gained and economies of scale achieved.

But even where it is clear that a REDD+ intervention can be financially attractive for small farmers and forest-dependent communities, it is critically important to design the intervention in ways that respond to their concerns and wishes when it comes to things like the type of compensation or who will manage the funds. For example, researchers at Sokoine University of Agriculture, Tanzania, found that households in the area of the Kilosa REDD+ pilot project prefer other forms of compensation, such as increased employment and better social services, over direct payment.
Researchers at Makerere University, Uganda, studied the preferences of people in another proposed REDD+ pilot project, the Ongo Community Forest, and found that the government was the least-favoured organisation to manage the REDD+ compensation scheme.

IIED’s workshop highlighted some different models for pro-poor REDD+ in practice. The Bolsa Floresta programme (meaning ‘Forest Allowance’), run by the Amazonas Sustainable Foundation in forest reserves in Amazonas state, Brazil, rewards communities who are not deforesting by distributing payments to families and community associations. The scheme combines cash payments to each household with support for community-level investments in social services and development of local income-generating activities. The Micaia Foundation in Mozambique pursues an inclusive business approach in which communities have a share of the equity in an ecotourism enterprise and a honey business. Micaia intends to build on this experience to apply it to REDD+ in Mozambique as a more sustainable approach, as it does not rely on payments in the long term.

REDD+ was conceived to respond to market failure and to improve decisions about forest and land use. But decisions on the design of REDD+ also need to be adequately informed to ensure that social goals are taken into account. The challenge is that there are many layers of decision-making. Decisions taken at the international and national levels can constrain the choices at the local, community and household level, and determine how much REDD+ can help improve the livelihoods of poor households. There is often a lack of knowledge among policymakers at the higher levels of decision-making about the wishes, interests and concerns of the communities, households and minorities living at the local level.
For REDD+ to work for poor forest-dependent communities, the voices of rural households – and local evidence of what works and what doesn’t – need to influence decisions on REDD+ architecture at the national and international level. And the interests of the urban poor, who often depend on forest resources for their energy needs, should also be considered. That is why it is critical to keep researching these two key questions: what do people want from REDD+, and, if they get it, will it work?

The workshop and research formed part of a Norwegian Agency for Development Cooperation (Norad) funded project on poverty and sustainable development impacts of REDD+ architecture: options for equity, growth and the environment. Read the baseline survey results from the project.
I really feel ashamed to ask this question, but I don't have time for revision. I'm also not a native English speaker, so excuse my lack of math vocabulary.

I am writing a program that requires assigning probabilities to variables and then selecting one randomly. Imagine that I have a coin: I would like to assign a probability of 70% to heads and 30% to tails, so that when I toss it there is a 70% chance that heads appears and a 30% chance of tails. A dumb way to do it is to create an array of cells, insert heads into 70 cells and tails into 30, shuffle them and select one randomly. I would like to be pointed to how I can do it mathematically, without creating the array.

Edit 1: I would also like to point out that I am not limited to 2 variables. For example, let's say that I have 3 characters to select between (*, \$, #) and I want to assign the following probabilities to each of them: * = 30%, \$ = 30%, and # = 40%. That's why I did not want to use the random function and wanted to see how it was done mathematically.
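The standard mathematical answer is the cumulative-sum (inverse-transform) trick: draw a single uniform number in [0, 1) and walk through the cumulative probabilities until it is exceeded. A minimal Python sketch of the idea, using the three-character example from the question (function and variable names are my own):

```python
import random

def weighted_choice(probs):
    """Pick a key from a {value: probability} mapping.

    Draw one uniform number r in [0, 1) and return the first value
    whose cumulative probability exceeds r. Probabilities must sum to 1.
    """
    r = random.random()
    cumulative = 0.0
    for value, p in probs.items():
        cumulative += p
        if r < cumulative:
            return value
    return value  # guard against floating-point round-off at the top end

# The three-character example: * = 30%, $ = 30%, # = 40%
probs = {"*": 0.30, "$": 0.30, "#": 0.40}
counts = {c: 0 for c in probs}
for _ in range(100_000):
    counts[weighted_choice(probs)] += 1
```

Over many draws the observed frequencies converge on the assigned probabilities, and the same function handles the 70/30 coin with `{"heads": 0.7, "tails": 0.3}`. (In Python 3.6+, `random.choices(population, weights=...)` implements exactly this.)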
The topography of Savai’i (background) and Upolu (foreground), the two large islands of the Independent State of Samoa, is well shown in this color-coded perspective view generated with digital elevation data from the Shuttle Radar Topography Mission (SRTM). The Samoan Islands are a product of volcanism, which is primarily evidenced by numerous volcanic cones, many of which are seen in this view. Tropical rainfall has deeply eroded parts of these islands, but most of the land surface is depositional: the product of lava flows, some of which have occurred in historic times. The total area of these islands is about 2,800 square kilometers (about 1,000 square miles). The highest point in Samoa is Mauga Silisili on Savai’i (1,858 meters, or 6,096 feet). On September 29, 2009, a tsunami generated by a major undersea earthquake located about 200 kilometers (120 miles) south of Samoa inundated villages on the southern coast of the islands with an ocean surge perhaps more than 3 meters (10 feet) deep. It also impacted the more heavily populated northern coasts with a surge measured at nearly 1.5 meters (4 feet) at the capital city Apia (on Upolu). Scores of casualties have been reported. Digital topographic data such as those produced by SRTM aid researchers and planners in predicting which coastal regions are at the most risk from such waves, as well as from the more common storm surges caused by tropical storms and even sea level rise. Two visualization methods were combined to produce the image: shading and color coding of topographic height. The shaded image was derived by computing topographic slope in the northeast-southwest direction, so that northeast slopes appear bright and southwest slopes appear dark. Color coding is directly related to topographic height, with green at the lower elevations, rising through yellow and tan, to white at the highest elevations. 
The image was then projected using the elevation data to produce this perspective view, with the topography exaggerated by a factor of two. Elevation data used in this image were acquired by the Shuttle Radar Topography Mission aboard the Space Shuttle Endeavour, launched on Feb. 11, 2000. SRTM used the same radar instrument that comprised the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) that flew twice on the Space Shuttle Endeavour in 1994. SRTM was designed to collect 3-D measurements of Earth's surface. To collect the 3-D data, engineers added a 60-meter (approximately 200-foot) mast, installed additional C-band and X-band antennas and improved tracking and navigation devices. The mission is a cooperative project between NASA, the National Geospatial-Intelligence Agency (NGA) of the U.S. Department of Defense and the German and Italian space agencies. It is managed by NASA's Jet Propulsion Laboratory, Pasadena, Calif., for NASA's Science Mission Directorate, Washington, D.C. Location: 14 degrees South latitude, 172 degrees West longitude Orientation: Northwest perspective view Size: approximately 150 by 75 kilometers (100 by 50 miles) SRTM Data Acquired: February 2000
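The two-step recipe described here – directional slope shading multiplied into a height-based color ramp – can be sketched with NumPy. The slope scaling and the crude green-to-white ramp below are my own stand-ins for illustration, not JPL's actual processing:

```python
import numpy as np

def hillshade_ne(dem, scale=1.0):
    """Shade a DEM by its slope along the NE-SW axis: northeast-facing
    slopes come out bright, southwest-facing slopes dark (values in [0, 1])."""
    dz_dy, dz_dx = np.gradient(dem.astype(float))  # rows run N->S, cols W->E
    ne_slope = (dz_dx - dz_dy) / np.sqrt(2.0)      # slope component toward NE
    return 0.5 + 0.5 * np.tanh(ne_slope / scale)

def colorize(dem, shade):
    """Blend a crude green -> yellow -> white height ramp with the shading."""
    h = (dem - dem.min()) / (dem.max() - dem.min() + 1e-9)  # height in [0, 1]
    rgb = np.stack([h, 0.5 + 0.5 * h, h ** 2], axis=-1)     # low = green, high = white
    return rgb * shade[..., None]                           # modulate by shading
```

On a surface that rises toward the northeast, every pixel shades bright; flip the ramp and it shades dark, which is the effect described for the Samoa view.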
NASA has turned off its Galaxy Evolution Explorer (GALEX) after a decade of operations in which the venerable space telescope used its ultraviolet vision to study hundreds of millions of galaxies across 10 billion years of cosmic time. "GALEX is a remarkable accomplishment," said Jeff Hayes, NASA's GALEX program executive in Washington, D.C. "This small Explorer mission has mapped and studied galaxies in the ultraviolet, light we cannot see with our own eyes, across most of the sky." Operators at Orbital Sciences Corporation in Dulles, Virginia, sent the signal to decommission GALEX at 3:09 p.m. EDT Friday, June 28. The spacecraft will remain in orbit for at least 65 years, then fall to Earth and burn up upon re-entering the atmosphere. GALEX met its prime objectives and the mission was extended three times before being cancelled. Highlights from the mission's decade of sky scans include: - Discovering a gargantuan, comet-like tail behind a speeding star called Mira - Catching a black hole "red-handed" as it munched on a star - Finding giant rings of new stars around old, dead galaxies - Independently confirming the nature of dark energy - Discovering a missing link in galaxy evolution — the teenage galaxies transitioning from young to old The mission also captured a dazzling collection of snapshots, showing everything from ghostly nebulae to a spiral galaxy with huge, spidery arms. In a first-of-a-kind move for NASA, the agency in May 2012 loaned GALEX to the California Institute of Technology in Pasadena, which used private funds to continue operating the satellite while NASA retained ownership. Since then, investigators from around the world have used GALEX to study everything from stars in our own Milky Way galaxy to hundreds of thousands of galaxies 5 billion light-years away. In the space telescope's last year, it scanned across large patches of sky, including the bustling, bright center of our Milky Way. 
The telescope spent time staring at certain areas of the sky, finding exploded stars, called supernovae, and monitoring how objects, such as the centers of active galaxies, change over time. GALEX also scanned the sky for massive, feeding black holes and shock waves from early supernova explosions. "In the last few years, GALEX studied objects we never thought we'd be able to observe, from the Magellanic Clouds to bright nebulae and supernova remnants in the galactic plane," said David Schiminovich of Columbia University, a longtime GALEX team member who led science operations over the past year. "Some of its most beautiful and scientifically compelling images are part of this last observation cycle." Data from the last year of the mission will be made public in the coming year. "GALEX, the mission, may be over, but its science discoveries will keep on going," said Kerry Erickson, the mission's project manager at NASA's Jet Propulsion Laboratory in Pasadena, California.
Surprising new evidence which overturns current theories of how humans colonised the Pacific has been discovered by scientists at the University of Leeds, UK. The islands of Polynesia were first inhabited around 3,000 years ago, but where these people came from has long been a hot topic of debate amongst scientists. The most commonly accepted view, based on archaeological and linguistic evidence as well as genetic studies, is that Pacific islanders were the latter part of a migration south and eastwards from Taiwan which began around 4,000 years ago. But the Leeds research - published today in The American Journal of Human Genetics - has found that the link to Taiwan does not stand up to scrutiny. In fact, the DNA of current Polynesians can be traced back to migrants from the Asian mainland who had already settled in islands close to New Guinea some 6-8,000 years ago. The type of DNA extracted and analysed in this kind of study is that stored in the cell's mitochondria. Mitochondrial DNA (mtDNA) is passed down the maternal line, providing a record of inheritance which goes back thousands of years. The scientists look for genetic signatures which enable them to classify the DNA into different lineages and then use a 'molecular clock' to date when these lineages moved into different parts of the world. Lead researcher, Professor Martin Richards, explains: "Most previous studies looked at a small piece of mtDNA, but for this research we studied 157 complete mitochondrial genomes in addition to smaller samples from over 4,750 people from across Southeast Asia and Polynesia. We also reworked our dating techniques to significantly reduce the margin of error. This means we can be confident that the Polynesian population - at least on the female side - came from people who arrived in the Bismarck Archipelago of Papua New Guinea thousands of years before the supposed migration from Taiwan took place." 
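At its simplest, the "molecular clock" step is division: the age of a lineage is estimated from the average number of mutations its members have accumulated since their common founder, multiplied by a calibrated number of years per mutation. A toy illustration (the rate and mutation count below are illustrative round numbers, not the study's actual calibration):

```python
def lineage_age_years(avg_mutations, years_per_mutation):
    """Molecular-clock estimate: average mutational distance of sampled
    genomes from the lineage's founder sequence, times the calibrated
    number of years one mutation takes to appear."""
    return avg_mutations * years_per_mutation

# Illustrative: if a whole mtDNA genome accumulates roughly one substitution
# every ~3,600 years, a lineage whose members average 2 mutations from
# their founder is about 7,200 years old.
age = lineage_age_years(2.0, 3600)
```

The "reworked dating techniques" mentioned above amount to refining that calibration and its error bars, which is what let the team distinguish a 6,000–8,000-year-old arrival from a 4,000-year-old one.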
Nevertheless, most linguists maintain that the Polynesian languages are part of the Austronesian language family which originates in Taiwan. And most archaeologists see evidence for a Southeast Asian influence on the appearance of the Lapita culture in the Bismarck Archipelago around 3,500 years ago. Characterised by distinctive dentate stamped ceramics and obsidian tools, Lapita is also a marker for the earliest settlers of Polynesia. Professor Richards and co-researcher Dr Pedro Soares (now at the University of Porto), argue that the linguistic and cultural connections are due to smaller migratory movements from Taiwan that did not leave any substantial genetic impact on the pre-existing population. "Although our results throw out the likelihood of any maternal ancestry in Taiwan for the Polynesians, they don't preclude the possibility of a Taiwanese linguistic or cultural influence on the Bismarck Archipelago at that time," explains Professor Richards. "In fact, some minor mitochondrial lineages back up this idea. It seems likely there was a 'voyaging corridor' between the islands of Southeast Asia and the Bismarck Archipelago carrying maritime traders who brought their language and artefacts and perhaps helped to create the impetus for the migration into the Pacific. "Our study of the mtDNA evidence shows the interactions between the islands of Southeast Asia and the Pacific was far more complex than previous accounts tended to suggest and it paves the way for new theories of the spread of Austronesian languages." The study, which involved researchers from the UK, Taiwan and Australia, was mainly funded by the British Academy, the Bradshaw Foundation and the European Union.
SANTA BARBARA, Calif., Jan. 28 (UPI) -- The increased frequency of drought conditions in Eastern Africa for the last 20 years is likely to continue while global temperatures rise, researchers say. Frequent or prolonged drought poses increased risk to millions of people in Kenya, Ethiopia and Somalia, who currently face potential food shortages, researchers at the University of California, Santa Barbara, and the U.S. Geological Survey say. They say warming of the Indian Ocean causing decreased rainfall in eastern Africa is linked to global warming, a UCSB release reported Friday. "Global temperatures are predicted to continue increasing, and we anticipate that average precipitation totals in Kenya and Ethiopia will continue decreasing or remain below the historical average," Chris Funk, a USGS scientist, says. "The decreased rainfall in Eastern Africa is most pronounced in the March to June season, when substantial rainfall usually occurs." The research is part of an effort to identify areas of potential drought and famine, to target food aid and help inform agricultural development, environmental conservation and water resources planning. "Forecasting precipitation variability from year to year is difficult, and research on the links between global change and precipitation in specific regions is ongoing so that more accurate projections of future precipitation can be developed," Park Williams, a postdoctoral fellow in the UCSB Department of Geography, says.
A report says New Hampshire’s coastal communities’ exposure to future flood risks is significant, and now’s the time to plan to minimize that. The New Hampshire Coastal Risk and Hazards Commission spent 2 1/2 years looking at areas vulnerable to extreme precipitation, projected storm surge and rising sea levels. The report released on March 18 says the communities avoided the extreme impacts of Tropical Storm Irene and Superstorm Sandy, but they have experienced other events, like a nor’easter in February 2013 and the October snowstorm of 2011. “These types of storms will have even greater impacts as sea levels continue to rise, and floods will continue to worsen as extreme rain events intensify,” the report said. Based on available data, sea levels in New Hampshire have been rising by an average of 0.7 inches per decade since 1900. The rate of sea-level rise has increased to about 1.3 inches per decade since 1993. Using 1992 levels as a baseline, New Hampshire sea levels are expected to rise between 0.6 and 2 feet by 2050 and between 1.6 and 6.6 feet by 2100, the report said. The data show that as of last year, the state’s 17 coastal communities were home to about 12 percent of the state population and host over 100,000 jobs. The report says the region is growing at nearly three times the rate of the state as a whole. The commission established goals in the areas of science, assessment, implementation and legislation. For example, it recommends the Legislature authorize a state agency to convene a science and technical advisory panel to review and evaluate the current state of climate change science. It said the vulnerability of buildings, cultural, natural and historic resources should be assessed, with state agencies and communities developing long-term strategies to protect them. 
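A quick back-of-envelope check on the figures quoted above: extrapolating the post-1993 rate (1.3 inches per decade) from the 1992 baseline reproduces the low end of the 2050 projection (~0.6 feet) but falls well short of even the low 2100 projection (1.6 feet) – the projected range assumes the rate keeps accelerating, not that the current trend simply continues. The sketch below assumes a constant rate, which the report itself does not claim:

```python
# Extrapolate the post-1993 rate of sea-level rise (1.3 in/decade)
# from the 1992 baseline, purely as a back-of-envelope comparison
# against the report's projected ranges.
RATE_IN_PER_DECADE = 1.3

def linear_rise_feet(baseline_year, target_year):
    decades = (target_year - baseline_year) / 10
    return decades * RATE_IN_PER_DECADE / 12  # convert inches to feet

rise_2050 = linear_rise_feet(1992, 2050)  # ~0.6 ft, matching the low 2050 bound
rise_2100 = linear_rise_feet(1992, 2100)  # ~1.2 ft, below the low 2100 bound of 1.6 ft
```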
The recommendations are primarily directed to the Legislature, state agencies, and municipalities, but successful implementation of the recommendations will require collaboration between the public and private sectors and among many stakeholder groups. “The report emphasizes that early and consistent collaboration between state and local governments can result in solutions which in turn increase our preparedness and resiliency,” said Democratic State Sen. David Watters, one of the 36 members of the commission. The report also brings attention to the efforts of individual communities, such as the town of Newmarket, which updated its “vision statement” in becoming more resilient against flooding through local land use policies and regulations. Public comments are being accepted on the report through June 30. Public meetings on the report have been scheduled for May 26 in Greenland and June 1 in Rye.
A research team led by Igor Efimov at Washington University in St. Louis has developed a stretchy, custom-fitted, implantable device that can give doctors feedback about life-threatening irregularities occurring inside someone’s heart. This photo shows sensors embedded in the silicon membrane that could provide stimulation to the surface of the heart. (Source: Washington University/St. Louis)

Good question, Cabe. I think the connection would probably be secured somehow to avoid such scenarios, but in truth I don't really know. Something for me to follow up on with the researchers. Thanks for asking!

The medical uses for the membrane are certainly remarkable. I assume the information collected would be sent over a Wi-Fi connection for doctors to review. So my question is, wouldn't that make it vulnerable to being hacked? I say this because heart defibrillators and pacemakers can be hacked to overvolt or dump their medicine, which would be detrimental to the patient.

That is a very interesting question, a2. I suppose when any information is sent wirelessly there are security issues, but I can't imagine they would not be addressed before these devices were used on patients. But these are good questions to be asking before the technology comes out of the lab.

Elizabeth, in terms of cancerous cells, we can say that chemotherapy is the solution for cancer but it is very hazardous as well, so I guess 3D technology should do something, or introduce a technology that acts as a replacement for chemotherapy to reduce the side effects.

The other question is about connections, of which none are visible in the photo. It certainly is an interesting concept, and more details about the actual printing process would be both educational and potentially useful. The point about this being a stretchy design makes it quite unique indeed. Most designs are rigid, typically, or a bit flexible at best. So flexible and stretchable is something quite new.
I don't know about the process, William K., I would have to look into it further. Yes, the operation certainly would be risky, as all surgeries are, especially when the heart is exposed. I imagine this type of thing would only be used in patients that really needed constant monitoring and for whom it would be more beneficial to have potentially dangerous surgery than not. Or perhaps there is a low-invasive way to insert the membrane. I will try to do some digging and get back to you.
Nuclear weapons have not been used in war since 1945. In 1989, the Berlin Wall began its long fall, and the border between East and West Germany opened. And in 2010, when I was 19, the presidents of the United States and Russia signed the New START treaty, vowing to reduce the number of deployed nuclear warheads. I cannot remember a time when the threat of nuclear weapons seemed real. In school, my classmates and I studied World War II. We discussed the ethics of dropping atomic bombs, and I even wrote a paper on how using those weapons enabled the United States to avoid a much more costly and lethal ground invasion of Japan. Our teachers did not shy away from these topics, but we thought of nuclear weapons as history. My generation grew up believing that the problem of nuclear weapons had been solved. The United States’ main nuclear opponent, the Soviet Union, is no more. Our president has agreed to reduce the nuclear arsenal, and we no longer practice hiding under our desks in case the bombs drop. We have no context for the kinds of danger that these weapons present. Civil rights revisited. Today’s nuclear non-proliferation activists face some of the same challenges as civil rights activists: Although society still has a long way to go in terms of eliminating racism, sexism, and classism, many people believe those issues to be solved. Look at the response to the Black Lives Matter movement, for example. Those who fight most ardently against it claim that racism is a thing of the past, and that the proponents of the movement are bringing race into issues where it does not belong. When people think a problem is solved, the last thing they want to do is revisit it. Civil rights activists themselves have usually experienced or witnessed the effects of discrimination. 
Yet despite the fact that my generation has a clear context for racism—we have witnessed the protests in Ferguson, and the wave of demonstrations that followed over the killings of black men by police officers—movements like Black Lives Matter have had to fight and scrape for every gain. Consider, then, the difficulty that nuclear non-proliferation activists face: Nuclear weapons still exist, but for most Americans they only exist in the abstract. We see them in movies or in games. Sometimes we talk about nuclear weapons in school, but almost always in history class. We talk about them as if they are fictional, or past threats that no longer apply. My generation has no context for nuclear weapons. We do not have the fear that our parents confronted; we do not have the stakes that our parents grew up with; and for most Americans, nuclear weapons seem like a non-issue. Things have yet to go terribly wrong, so why should we worry? Attracting attention. I do not know how to garner the attention of young people. I would love to claim that my generation can be easily plied by this or that tactic, but I cannot. Without a context, how can young people hope to understand the issue of nuclear weapons? We see racism, sexism, classism, environmental destruction, immigration issues, refugee issues, economic issues, and so many more all around us. These are the issues that occupy my generation, because these are the things that we are most afraid of. Maybe it comes down to fear. Perhaps that is what context provides for us, along with a spark to move toward a better future. I do not think there is any easy fix to make us fear nuclear weapons in the way we should. I have studied this issue, have worked with activists on this issue, and yet I still find myself more preoccupied with other problems. Even after all my work, I do not have the context. Perhaps creating context should not be the goal. 
Our understanding of the world’s issues is built on years of experience, and trying to fabricate that experience seems like an exercise in futility. Our parents’ generation had to practice “duck and cover,” and heard about the possibilities of nuclear annihilation almost daily. My generation did not. We learned about different issues throughout our childhoods. We cannot build that level of context with articles and acts of protest, but we can catch attention—and while we have that attention, we can educate. Blood and hammers. Do actions such as pouring blood on nuclear weapons attract the attention of my generation, as they did for an earlier generation described by longtime activist Paul Magno in this month’s issue of the Bulletin? To quote Magno: “We must begin somewhere.” So many people of my generation are apathetic about the threat of nuclear weapons not because they are lazy, but because they are unaware. I might not have the context to fear nuclear weapons, but I do now have the knowledge. I can think about these issues, and I can talk about the dangers that these warheads represent. The actions Magno discusses are ones that can garner that kind of attention. If I see a nun pouring human blood on a nuclear warhead, you bet I will pay attention. The real trick is following that up with education—using that brief window of attention as an opportunity to educate. During the Cold War, everyone understood what the blood and hammers meant. Nowadays, without the context, it might just look like another viral video. In my work with N Square—a two-year pilot program of the Ploughshares Fund intended to “ignite the public imagination” and spark new ideas—we commissioned some demonstrations to show how virtual-reality technology might help generate awareness of the threat of nuclear weapons. 
One of our demos relied on the shock value of experiencing a nuclear blast firsthand to catch the viewer’s attention, and then followed that up with a brief explanation of the nuclear issues we face today. Another demo allowed the viewer to navigate the nuclear-materials black market in order to learn how easy it would be for someone to acquire the pieces of a nuclear warhead. These projects focused on teaching people about nuclear threats that they do not see in history class or in fiction. It might be impossible to give my generation the context to fear nuclear weapons, but it is not impossible to teach my generation, and to tie that knowledge to what we care about. We say we care about the environment, for example, but how can a person call himself or herself an environmentalist and not recognize the dangers that nuclear weapons pose to the natural world? How can someone care about improving the lives of the down-and-out without fearing the effects a nuclear catastrophe would have on the people with the least power? We have no context for fearing nuclear weapons, but we can learn about them, and about how they relate to the issues for which we do have a context. So, “we must begin somewhere.” And then we must follow up with education.
Immigrants in Antebellum America A demand for labor and a supply of immigrants in search of economic opportunity coincided in the Antebellum period. As the manufacturing sector in the Northeast expanded rapidly, managers sought large pools of unskilled or semi-skilled labor. Meanwhile, in Ireland, beginning in 1845, a disastrous blight fell on the staple crop, potatoes. This created a major famine that forced millions of Irish people into starvation. From 1847 to 1854, over 1.25 million Irish people moved to the United States to escape the famine, some as a result of American labor recruiters in Ireland. The Irish immigrants of the 1840s and 1850s were largely poor and unskilled, unlike the more socio-economically diverse Irish immigrants of previous decades. Swindlers took advantage of immigrants coming into port cities. In 1855, however, New York City, the major port of entry for immigrants, established facilities in lower Manhattan which registered and processed immigrants, and provided them with information on finding their relatives and traveling to final destinations in the United States. This helped reduce the amount of trickery to which immigrants were subjected. Many of these new immigrants, especially the women, worked in factories and as domestics in the homes of the wealthy. While wages were low, the opportunities and pay were greater in the United States than in Europe. Others, largely men, worked in mines or mining towns. Life in the mining towns was full of hardships. Working in a mine was a major health hazard, and mine owners generally did not take adequate health precautions. Wages were low for long days of hard labor. Laborers and their families were required to live in company towns and purchase food and other items in company stores, which took shameful advantage of the unfortunately powerless immigrants. 
Fear of losing their jobs and the autocratic power of managers and owners prevented or squashed any attempts to protest the unfair working and living conditions. Although a large portion of antebellum immigrants were Irish, there were also immigrants from other locations. German immigrants were another major group. Political revolts and revolution attempts in the 1840s across Europe, but especially in German-speaking areas (Germany, the Austro-Hungarian Empire, etc.), caused many to flee to the United States for asylum or to escape the turmoil on the continent. Unlike Irish immigrants of the same period, however, German-speaking immigrants came from different social and economic classes, and many were skilled laborers or professionals. This made it somewhat more difficult for native-born Americans to stereotype Germans than Irish, although ethnic stereotypes did exist. In addition, the larger number of Irish immigrants made native-born Americans feel more economically threatened by them than by German immigrants. Thus, the German antebellum immigrants generally assimilated into the United States more easily than the Irish. It was not until the Civil War, in which many Irish Americans served prominently, that the new Irish immigrants were able to achieve a sense of belonging. The other major immigrant group introduced to the American tapestry in the antebellum period was the Chinese immigrant of the West. In the late 1840s, Chinese immigrants began arriving in the United States in significant numbers. By the early 1880s, about 250,000 Chinese and Chinese Americans lived in the United States, most of whom were located in California or other western territories and states. For the first few years, Chinese immigrants, mostly men, were the objects of curiosity, but relatively little social attack. Few knew English, and most worked for one of the Six Companies, which were Chinese organizations in the United States that governed the actions of Chinese immigrants. 
These companies took the place of village governments and patriarchal associations, and had their own laws, independent of American laws. Anyone disobeying the rules was quickly punished, regardless of relevant American laws. Living in fear, many Chinese immigrants were completely dependent on these companies, and interacted little with native-born Americans. As the California Gold Rush of the late 1840s and 1850s brought more people to the state, Chinese immigrants found their attempts to pan for gold restricted by the racial prejudice of some of their fellow fortune-hunters. The more numerous and visible the Chinese were, the more they became the subjects of discrimination and ridicule. A large number worked in mines or on the construction of railroads, although relatively little mining and railroad construction occurred in the West until after the Civil War. Many Chinese immigrants and Chinese Americans had to turn to jobs that white men did not want to do themselves, such as making clothing, washing laundry or cooking for others. Nevertheless, through hard work and careful financial planning, many Chinese immigrants and Chinese Americans were able to make their businesses successful. By 1860, the Chinese were responsible for almost all the manufacturing of shoes, shirts, underwear, cigars and tin products in California. Many others owned their own laundries, restaurants and hotels, catering to the many single men living in the West. When the West experienced economic difficulties, Chinese Americans were among the first to be attacked, verbally and physically. In addition, territories and states passed discriminatory laws. For example, Chinese people were the only group which had to pay an annual $20 tax required of foreign miners in California. White westerners began creating stereotypes of Chinese Americans, depicting them as belonging to a retrogressive and inferior race. 
Popular culture readily adopted the cruel stereotypes and ridiculed Chinese Americans in pictures, verbal expressions and myths. Chinese Americans were soon categorized with Native Americans and African Americans as inferior peoples. While they were not strictly enslaved, they were not strictly free.
The so-called Aroostook War was marked by a series of bloodless skirmishes on the border between Maine and Canada. This border had never been clearly defined and thus was disputed by both sides. President Van Buren sent General Winfield Scott to negotiate a deal. Scott was able to arrange a truce.
The demands of today’s global economy continue to stimulate public discourse and policy initiatives concerning the competitiveness of the United States in science, technology, engineering, and mathematics (STEM). There is growing recognition that community colleges can play a unique role in leading efforts that bring together industry leaders and educators to resolve the complex challenges of educating and training students taking STEM courses in the U.S. According to the National Science Board (2006), changing workforce requirements mean that new workers will need even more sophisticated skills in science, mathematics, engineering, and technology. Scientific and engineering-related occupations are expected to continue to grow more rapidly than occupations in general. In fact, long-term growth in STEM occupations has far exceeded that of the general workforce. This project is intended to help ensure the rigor of mathematics and science concepts taught in the six STEM-related clusters at the community college level and encourage the pursuit of STEM-related careers. The Center for Occupational Research and Development (CORD) was contracted to conduct the “STEM Transitions” project, funded by the U.S. Department of Education Office of Vocational and Adult Education under cooperative agreement with the League for Innovation in the Community College. Work on the project began November 2007 and concluded at the end of 2008. The main focus of the project was the development of context-based instructional materials that demonstrate the convergence of technical and academic concepts within STEM-related clusters. The intent is to provide community college faculty with teaching resources that offer both academic and career-related skills. 
CORD, in conjunction with 40 faculty conferees from community colleges across the country, has developed over 60 integrated curriculum projects for use in math, science, and technical courses in the six STEM-related clusters—health science; information technology; manufacturing; transportation; science, technology, engineering and mathematics; and agriculture. The projects are intended to aid in student mastery of essential mathematics and science concepts while motivating students to pursue STEM-related careers. CORD was assisted by its project partners, the College and Career Transitions Initiative and the States’ Career Cluster Initiative, in 1) the identification of math, science, and cluster standards to be addressed by the projects, 2) selection of topics for development, 3) review of project drafts, and 4) dissemination of completed teaching resources. From September 2 through October 31, 2008, the integrated project materials were evaluated by interested faculty members as part of the External Review Phase of the STEM Transitions project. Revisions to the projects were completed in December 2008. An evaluation form will continue to be available within each integrated project throughout the life of this website. We invite community college faculty members to share comments and ask questions of our staff as they implement projects in their classrooms. Below are project materials for download and dissemination:
In Michigan, the unemployment rate is 9.3 percent – almost three times what it was in 2000. And since 2000, 1.82 million residents – 20 percent of the state population – are now on some form of public assistance. The New York Times reports, “In the first nine months of this year, some 130,000 Michigan residents who had lost their jobs remained out of work so long that they ran out of regular unemployment benefits. By the middle of this month, 63,000 people (who had already run out of their ordinary maximum benefit — as many as 26 weeks, at as much as $362 a week) also ran out of an extension authorized by Congress.” Michigan’s economic crisis, compared to other states, has only been made worse by the failure of the auto industry. As Congress and President-elect Obama consider bailing out the failing auto industry, other proposed stimulus measures look to put the unemployed back to work. In Michigan, many of the state’s unemployed are hoping for just that. As old-style manufacturing jobs have been downsized or lost to workers overseas, American workers want to go back to school to be trained for technical, medical, green and infrastructure jobs. But some 1.7 million Michigan residents, according to The New York Times, “have ‘basic skill challenges,’ like poor English or no high school diploma. As far as higher education, the state ranks 35th, below the national average, in college graduates.” There is a workforce ready to work, in Michigan and certainly throughout the rest of America; however, that workforce, in addition to needing more job opportunities with livable wages, will need extensive training to be able to work in these new fields.
We have the knowledge, the evidence and the strategies to improve the plight of those who fall between the cracks and stay there for long periods of time. But every night, the shelters continue to fill up, and every day, the many people who are homeless on our streets watch as we pass by with eyes averted. At this moment, Canada has an opportunity to take action and reduce homelessness dramatically by expanding strategies we already know can work. The federal government launched the National Homelessness Initiative in 1999, after a significant rise in homelessness. This initiative allocated more than $1 billion to funding solutions such as community programs and beds in shelters. Programs such as these play an important role, but have not measurably reduced the number of homeless people country-wide. For that reason, the current government has sought evidence on the cost-effectiveness of alternative options, such as "Housing First." Housing First is based on the principle of providing housing to those in need before they're deemed "ready" to re-enter society. To qualify for housing, individuals don't need a job or a stable lifestyle, and they don't need to enter rehab, though once they get a home, many of them will accomplish all of these things and more. Canada will soon finish the largest randomized trial of its kind on Housing First in the world. Overseen by the Mental Health Commission of Canada with funding from Health Canada, At Home/Chez Soi, where we are both investigators, has housed about 1,000 people with mental illness in five cities across Canada. Each participant was given a choice of apartments to live in, a rent subsidy and an assigned case worker for support. The study randomly assigned 990 participants to a control group of people who only received the services already available in their cities. About 85 per cent of participants who were housed are still in the first or second apartment they chose. Not only that -- many of them are thriving. 
Many are volunteering and enrolling in school. Many participants have accepted professional help for their mental illnesses. Results from this study will help governments invest cost-effectively in the reduction of chronic homelessness, and in doing so will radically improve people's lives. For every two dollars spent on Housing First, the system saved a dollar by reducing the costs of police detentions, hospital services and shelters. For those who used services the most, those savings were even greater, with three dollars saved for every two dollars spent. Homelessness is more than a social issue; it's a health issue. The participants in At Home/Chez Soi all live with mental illness, and they are at a much higher risk of physical illness than most Canadians. Getting appropriate health care is just one of the things that community support teams help participants with. A chronic lack of affordable housing and stable employment opportunities that pay a living wage for low-skilled workers are often the reason people end up homeless in the first place. It's a game of musical chairs, and when the music stops, often those who need support the most are left standing outside the circle. But once they have a decent place to live, they can begin to reconnect with friends and rejoin the community. The At Home/Chez Soi model is a wise investment in addressing the inequalities faced by those with complex illness. This is ground-breaking research with the potential to help governments drastically improve Canada's approach to homelessness, social policy and our entire health care system. Continued support for At Home/Chez Soi and similar Housing First programs will help ensure we don't lose the crucial ground we've gained in improving the lives of Canadians.
Himalayan neighbours to give the tiger more room
Monday, October 27, 2008
India, Nepal plan to connect forest reserve areas to give big cats a new lifeline and save them from extinction
LUCKNOW: “All along the edge of the Himalayas, from Saharanpur and the Jumna River in the north-west, to Gorakpur and the Gandak River to the south east, is a belt of forest varying in width from twenty to fifty miles, which is home to many species of animals,” wrote celebrated wildlife conservationist Col RW Burton in his diary in January 1924. Little would he have known then that over eight decades later, this entire area would be developed into one linear strip for the protection and conservation of wildlife, especially the tiger. The Wildlife Institute of India has drafted an ambitious project to connect the forest reserve areas in India with those in Nepal through a wildlife corridor. The unique “Terai Arc” project is to be completed in five years. “Connectivity with the Royal Chitwan Park in Nepal would mean a new lifeline for our tigers,” says Qamar Qureshi of the Wildlife Institute of India (WII). Otherwise, in a few years, tigers may cease to exist in habitats like Sohagibarwa in Maharajganj and Suhelwa in Balrampur district of UP, which are under enormous anthropogenic (human related) pressures and have few tigers left, he says. Tiger population in India has dwindled to an all-time low of 1,411 in 2007 as per the latest report of the National Tiger Conservation Authority (NTCA) and WII. Though conservationists say this figure is much lower than a realistic 2,000, the 2002 tiger census figure of 3,642 was also considered too optimistic. The Indian portion of the Terai Arc Landscape (TAL) is about 42,700 sq km with a forest area of about 15,000 sq km stretching from the Yamuna in the west to Bihar’s Valmiki Tiger Reserve in the east. 
It spreads across Himachal Pradesh, Haryana, Uttarakhand, UP and Bihar along the Shivaliks and the Gangetic plains, and has nine distinct tiger habitat blocks, the largest one being the Corbett Tiger Reserve in Uttarakhand. Thirteen corridors that potentially connect these nine blocks have been identified. “Without corridors with Nepal, Indian tigers will not survive,” says Ravi Singh, who heads WWF in India. Once the corridor is completed, wild animals would be able to amble freely through the jungles of UP, Bihar, Uttarakhand and Nepal without any hindrance or conflict with human beings, he explains. Human population increase, ever-growing habitat encroachments, poaching, firewood extraction and collection of bhabar grass for rope-making, stealing of tiger and leopard kills, and boulder-mining are causing enormous disturbances, says a WII report. “Cross border co-operation between India and Nepal is a must to ensure the long-term conservation of tiger and its habitat,” it adds. Three of India’s 27 tiger reserves are located in this area: Corbett in Uttarakhand, Dudhwa in UP and Valmiki tiger reserve in Bihar. “Big cats need a large area to exist. They can move over a stretch of 150 km in 30 days. For better conservation, tigers need long stretches of forest area unhindered by humans,” says famous wildlife expert Mike Pandey. He also asserts that common corridors between distinct tiger habitats would mean a genetically strong species. “Movement over large areas would prevent inbreeding and genetic anomalies,” he said.
Passiflora incarnata, also known as the Purple Passionflower or Maypop, is a fruit-bearing perennial vine common to the southeast United States. Though often considered a weed in its native habitat, the plant is used in horticultural applications due to its fast-growing vines and uniquely beautiful flowers. These white-to-purple summer-blooming flowers have a very interesting structure, including a showy corona, and grow to 2-3 inches in diameter. The fruit, also referred to as a Maypop for the sound it makes when stepped upon, is approximately the size of a chicken egg, and turns from green to orange as it ripens. The vines can grow from 6 to 25 feet long, but generally don’t climb higher than 8 feet. Passionflowers are primary producers native to temperate deciduous forests. They are pollinated by bees, and are self-sterile. The fruit is commonly eaten by animals, including songbirds, which helps to distribute the seeds. The fruits are also an important larval food for some species of butterfly. Maypop fruits are similar to their relative the Passion Fruit, P. edulis, and can be eaten raw or made into jam or jelly. Native Americans traditionally used P. incarnata for its sedative and anxiolytic properties, in addition to eating the sweet fruit. These medicinal uses have been expanded upon in recent years, with P. incarnata extracts being shown to treat withdrawal symptoms and exhibit an aphrodisiac effect, in addition to the plant’s traditional effects being confirmed through various scientific means. Further research is being done to uncover the specific compounds responsible for these properties, so that they can be more effectively used for pharmaceutical applications.
There are several striking features of the composition and the formation of the sandstone. The layers in the rock are tilted at over eighty degrees. If they were less than 30 degrees, most likely old earthers would claim it to be petrified sand dunes, and that would be the end of the subject. Because of this great angle another formation story is required. Actually the angle of the sandstone "layers" found here would probably never be confused by geologists as being due to sand dune formation even if under 30 degrees. Eolian sandstones are largely identified by diagnostic cross-bedding, usually between 20-30 degrees, which is set at an angle to the bedding you describe here as "layers." Also the grains in these sandstones are poorly sorted, very unlike windblown deposits. There are also conglomerates, and I doubt "old earthers" would try and say that these were windblown. Winds that can carry pebbles do not allow for the creation of sand dunes. The grains are not frosted and are angular. This also is not typical of windblown sediment. Geologists are not trained to jump to conclusions using only one line of evidence, ignoring other evidence in doing so, although I do see creationists do this rather frequently. The angle of inclination of the beds really has nothing to do with the composition of the actual sandstones or the nature of the depositional environment in which they formed. The origin is generally thought to be a leftover of eroded mountains--thus the high angles are the result of orogeny, followed by erosion--a common uniformitarian (as opposed to "actualist," which claims to acknowledge catastrophe) approach to unconformities and potential "flies in the ointment." Once again, the "high angles" of the beds have little or nothing to do with the presumed provenance or source of the sediments. There is not a direct relationship as you imply here. 
The source is thought to have been eroded granitic rocks still found in the remnants of eroded mountains to the south after the mountains were created in the Petermann Orogeny. This basically is more about the proper study of the sandstones than holding to uniformitarianism. It is the composition of the sandstones that ties the rocks to their source, not the high angles of the beds in Ayers Rock. The inclined bedding would have come by different orogenic activity than that which produced the sediments. Inclined beds are not "flies in the ointment" any more than unconformities are in terms of the application of the science of geology. These are flies in the ointment for YEC ideas. I notice that you have now generally shifted away from using the term uniformitarianism for a concept you want to attack in favor of actualism. Unlike the inclination of the bedding, your attempted slam of geologists who hold to uniformitarian concepts falls flat. There is essentially no difference between the modern concept of "uniformitarianism" as used in the 20th or 21st centuries and "actualism"; in fact, I made it all the way through grad school studying geology before anyone started to use a term other than uniformitarianism for the general nature of how we use present processes active on the planet to interpret past earth history. The presence of occasional catastrophic occurrences has been well known in modern earth environments and taken to be something that occurred in the past, sort of like punctuation to the general slow and uniform processes at work. We have not held to the slow and very uniform rates that Sir Charles Lyell tended to embrace in the 19th century. We have known that the name was not the best for the concept, but it had long usage and it didn't bother most of us, since earth processes are far more uniform than the alternative discredited "catastrophic" school, which fails miserably to explain much of the evidence found in the rocks. 
I do not use the term actualist out of past habit, but do accept that I hold to a uniformitarian viewpoint. Basically, in geology after the 19th century, uniformitarianism = actualism, and all that occurred was a name change used by some for greater clarity. Some are sticklers for keeping the historical context of uniformitarianism as it was held by Lyell when using the word; I am not one of them. If you find a geologist today who claims to accept uniformitarianism, it will most likely be the same concept as actualism. But there is plenty of evidence to build a water transport deposition, accompanied by following diagenesis and/or metamorphism because of the volcanism that many creationists believe accompanied the breaking up of the "fountains of the deep." So is there other evidence of water? Yes, the rock is classified as arkose. Arkose contains calcite, which acts as an initial cement. This would have most likely come from the ocean's diatoms and planktonic organisms that mixed with the sand underwater. Also there are ripples and cross bedding found in the rock. So because many creationists believe volcanism accompanied the "great fountains of the deep" you conclude that metamorphism is present? I would think a better approach would be to only cite what evidence has actually been found, not to conclude it is there because your model predicts it should be there. Arkoses do not usually contain calcite as a primary clastic component. Although calcite cement is commonly found, it is not an essential ingredient for a rock to be called an arkose. They are sometimes informally called "dirty sandstones" due to their large non-quartz content. While there is ample evidence that the sediments were water-lain, there is no evidence I have seen that shows these rocks were deposited by ocean waters. As far as I know these rocks do not contain much in the way of fossils, which is typical of terrigenous rocks. 
Diatoms are not present as far as I have heard, but if so they would leave behind silica and not calcium carbonate (calcite). Material from planktonic organisms mixing with the sand underwater during a catastrophic flood? There is no evidence of this at all. Calcite cements are not exclusive to sediments deposited by oceanic waters. They are often not primary either, but part of the diagenetic process. Ripples and cross-bedding are often found in fluvial deposits. I see you have invoked the now nearly ubiquitous creationist usage of the "great fountains of the deep," which are commonly cited to fill in several holes in creationist flood geology. They are used to provide great volumes of water, they are used to provide tectonism, and they also are sometimes linked to volcanism. Did they also slice and dice rocks like a Vegamatic? Did they emit gases that cured the common cold as well? I'm sure with a little imagination other phenomena can be explained by invoking "the great fountains of the deep," whatever they actually were. I don't want to deviate into alluvial fans, but is the 18,000 feet of sandstone a result of water run off for a canyon??? That is what they are suggesting here! It is enough to make me sick--sorry! Don't you think you may be exaggerating a little in terms of the thickness involved? Although I have seen estimates of multiple kilometers of thickness, the present Ayers Rock is a bit over 1,100' and, although the exact depth the beds reach below the surface is unknown, I think it is most commonly speculated that 1/2 to 2/3 cannot be seen. That would put the thickness closer to 4,000' or less. I would recommend to stop being sick and do some more research into the geology involved with alluvial fans, one of the more common methods of accumulating great thicknesses of sediment. 
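The thickness estimate above follows from a simple proportion: if a fraction of the section is hidden below the surface, the total thickness is the visible height divided by the visible fraction. A minimal sketch, using only the figures quoted in this exchange (about 1,100 ft visible, 1/2 to 2/3 hidden), not measured values:

```python
# Total thickness = visible height / visible fraction.
visible_ft = 1_100  # approximate visible height of Ayers Rock, as quoted above

for hidden_fraction in (1 / 2, 2 / 3):
    total_ft = visible_ft / (1 - hidden_fraction)
    print(f"{hidden_fraction:.0%} hidden -> about {total_ft:,.0f} ft total")

# With 1/2 hidden the total is about 2,200 ft; with 2/3 hidden, about 3,300 ft.
# Either way the figure is "closer to 4,000 ft or less," far short of 18,000 ft.
```

This makes the respondent's point concrete: even the most generous hidden-fraction assumption quoted leaves the section less than a quarter of the 18,000-foot figure being argued against.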
Your problem seems to be that your "worldview," as you would term it, does not allow for you to add the time element that is part and parcel of such thick accumulations. Several "layers" are present, and they would have been formed by repeated pulses of sedimentation on individual alluvial fans or coalescing fans accumulating through time. The sedimentation on these is often in sheet floods or related flash flooding events, so you are basically arguing against a straw man that you have constructed about slow flow through canyons. Go out and look at the sediment accumulated on such a fan. The grains will not likely be very rounded, as you seem to be saying in a couple of places. Alluvial fan deposits are more typically angular. I once was hiking on Miyajima, a small Japanese island, and arrived just after some heavy rainfall. The streams were literally choked with "granite wash" and it appeared that there was almost as much sediment as water in the flow. Lots of pink and white feldspar. This would have formed a rather thick bed of sand down and off the slope. Nothing all that slow about it, and certainly nothing catastrophic about it. I survived my hike without even getting my feet wet. Although finding many sands in a section is generally a good thing in an oil and gas well since it provides reservoirs, we sometimes drill into sequences like this for thousands of feet that lack traps and seals. We term failed wells of this sort "sand piles"; a couple of exploration wells drilled here a few years ago failed for just such a reason. Oil and gas had been generated and migrated through the sands present, but it kept on going updip to traps formed by faults and sealed by shales across the boundary line of the concession. Since by common sense we can rule out any possibility of canyon water run off, we must conclude by the sheer volume of arkose, and the upturn, that we are looking at the results of cataclysm, involving both high regime current, earth movement, and volcanism. 
This is consistent with the Biblical account that ALL the fountains of the deep were broken up. Common sense favors the possibility shown by all the evidence that can be used in making conclusions. The evidence does not lead in the direction of cataclysm, but toward more mundane geological processes such as those found in river and alluvial fan deposition. Being consistent with something nobody can really explain? Those very useful "fountains of the deep" put in yet another appearance. Volcanism is cited again with no real evidence of any being involved in the tectonic tilting of the beds. Some minor amounts of basalt have been found in the sediments, but that is all. Actually, common sense after an actual study of the rocks would not rule out alluvial fans at all, since the sediments involved are rather diagnostic of such features. Ruling something out just because you do not understand it is not a very good scientific approach. In fact it is one of the more dangerous ways to approach science if a correct conclusion is the goal. What common sense rules out is deposition of 18,000' of sandstone in one world-wide flooding event over a few weeks' time. Trying to conceive of that makes me dizzy, not sick. First of all you have to produce all that sediment through erosion. The action of even the most rapid flood waters would not allow for even a fraction of the sediment to be produced from the erosion of the granites within a year's time. Then the flood waters would be required to transport it all and deposit it with bedding surfaces and, as you have pointed out, ripples and cross-bedding. Not much chance of strong currents after all the mountaintops are covered, so I would guess we are limited to when waters are raging down due to heavy rainfall, and then when the waters receded. What does the Bible give us as a time frame for all of this? Well, the waters decreased enough that by the seventh month the ark came to rest on ground, and decreased steadily until the tenth month. 
Then after another month Noah sent out the bird to test for dry ground. So by the furthest stretch as I can see it, all of this had to be accomplished in eleven months. About 330 days. If we say all the sediment was eroded already by the start of the flood, which is not likely due to a lack of the action of rains, etc., and the fact that the sediment could not have been accumulating for a long time prior to deposition due to the unweathered nature of the grains, then deposition would have to average about 55' a day. Ripple marks and cross-bedding are very hard to preserve in the high flow regimes that would be necessary. So the presence of these bedforms goes against your hypothesis of deposition through a massive flooding event. The arkose also contains 25% conglomerate rock, which would lend secondary support to turbidity currents. No it wouldn't. I don't think the conglomerates present are part of a graded-bedding Bouma sequence that would be found in a turbidite. The evidence of volcanism is high, by the presence of feldspar (50%), and mica. Also it contains pieces of basalt--indicating extrusive lava that was then covered by sand. Basalt is what is on the oceanic ridges, and formed the Hawaiian islands. There is no talus (broken eroded sediment) at the foot of Ayers Rock, because there are no joints (cracked openings), indicating the rock is not only cemented by calcite, but in part welded by the contact metamorphism caused by the volcanic heat within the sandstone. I think evidence of some regional volcanism is present, but not evidence for the creation of the actual rocks in Ayers Rock, which is shown by only small amounts of basalt fragments within the sandstones. Other than that, the arkosic sands are mostly typical of a granite wash in terms of grain size and sorting. With rather rare exception, feldspar or mica grains from a volcanic source would be much smaller than seen here. Basalt also commonly forms on land. 
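The 55-feet-per-day figure above is a straightforward back-of-the-envelope division. A rough sketch, assuming only the numbers quoted in the discussion (18,000 ft of section and roughly 330 days inferred from the Genesis timeline):

```python
# Rough check of the deposition-rate argument made above.
# Both figures are taken from the discussion itself, not from measurement.
total_thickness_ft = 18_000  # claimed thickness of the sandstone section
flood_days = 330             # eleven months of the Genesis timeline, approximately

rate_ft_per_day = total_thickness_ft / flood_days
print(f"Required average deposition: {rate_ft_per_day:.1f} ft/day")
# About 54.5 ft/day, matching the "about 55' a day" figure in the text.
```

The point of the arithmetic is that even granting every favorable assumption (all sediment pre-eroded, deposition running the full eleven months), the required rate is still tens of feet per day, a flow regime under which ripple marks and cross-bedding are very unlikely to be preserved.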
So if sand covered the lava, where is the lava now? Was it ever really present, or is it that your model would predict it should be there so you assume it is there? Where did you find a reference to metamorphism in the sandstones? I have never seen any mentioned, nor any volcanism causing alteration. You would plainly see this in “baked zones,” yet Ayers Rock has none that I have ever seen reported. The unweathered arkoses are grey in color, not showing the typical red color caused by the heat of nearby igneous activity. In desert environments such as on the Colorado Plateau you can see a similar situation with talus. Some studies indicate that if the weathering away of existing talus outpaces the creation of new talus, not much will be found at the base of cliffs. Joints have been noted, and small caves appear to form near them. But having joints or not is not evidence of calcareous cementation. This is also not evidence of metamorphism. As I mentioned before, I will do so again for the purposes of this question. If the massive sub-surface deposit at Uluru is the result of eroded mountains, where are the signs of the erosion? There should be talus around Ayers Rock. Metamorphism welded the rock to make it a unit, so that there are no joints. This is a sign of a LACK of erosion--and an indication that the rock is relatively young. You seem to be confusing two erosional events. One would be the erosion of granitic mountains located some 60 km away, as shown by the included sediment types. The erosion of the beds that formed into rocks would be a separate issue. The pictures you have provided very plainly show features caused by erosion, including what appear to be ventifacts from wind erosion. The ribs show differential erosion rates of the beds present. The very color of Ayers Rock is evidence of erosion. Again you lead with metamorphism. Please supply a citation that such is present or I will continue to assume that you are simply jumping to a conclusion.
Severe Uplift Would Cause Joints and Faults
I am not saying there were no earth movements at all, but if bedding angles are caused by severe uplift and turning from orogenic events, as claimed by standard geology, why does the large Ayers Rock contain no faulting or joints? You would think that if the layers are strata laid down originally horizontal, and then turned nearly vertical on such a gargantuan scale, there would be faulting and joints--but there is none! So you have studied Ayers Rock to the point that you know that there are no faults or joints in it? Geologists have found faults and joints associated with Ayers Rock; it is just that they are far less common than usual. I have heard it is bounded by major faults on some sides and that the material on the other sides of the faults was less resistant to erosion and has since vanished. There are sheeting joints where blocks of the sandstone are separating from the dome. Some caves are thought to be associated with joints.
Evidence of Transport
The grains are jagged, supporting rapid transport, rather than slow run-off through a canyon, which would tend to produce more eroded, rounded grains. The angularity of the grains is evidence of deposition following relatively little transport. This fact does not come about due to the speed of the transport per se. If a stream carries such grains at a rate of 1 meter a minute and deposits them relatively close to where they were eroded, you will still see this angularity. But once again you try and force fit mainstream geologic work into an unrealistic uniformitarian straw man. By the way, the photomicrograph shows no signs of metamorphism that I can detect, and I have looked at metamorphic rocks in thin sections. There is no binding at grain contacts at all. There is no undulation to the surface of the grains.
Bladder Cancer (cont.) Surgery is by far the most widely used treatment for bladder cancer. It is used for all types and stages of bladder cancer. Several different types of surgery are used. Which type is used in any situation depends largely on the stage of the tumor. Many surgical procedures are available today that have not gained widespread acceptance. They can be difficult to perform, and good outcomes are best achieved by those who perform many of these surgeries per year. The types of surgery are as follows: - Transurethral resection with fulguration: In this operation, an instrument (resectoscope) is inserted through the urethra and into the bladder. A small wire loop on the end of the instrument then removes the tumor by cutting it or burning it with electrical current (fulguration). This is usually performed for the initial diagnosis of bladder cancer and for the treatment of stages Ta and T1 cancers. Often, after transurethral resection, additional treatment is given (for example, intravesical therapy) to help treat the bladder cancer. - Radical cystectomy: In this operation, the entire bladder is removed, as well as its surrounding lymph nodes and other structures that may contain cancer. This is usually performed for cancers that have at least invaded into the muscular layer of the bladder wall or for more superficial cancers that extend over much of the bladder or that have failed to respond to more conservative treatments. Occasionally, the bladder is removed to relieve severe urinary symptoms. - Segmental or partial cystectomy: In this operation, part of the bladder is removed. This is usually performed for solitary low-grade tumors that have invaded the bladder wall but are limited to a small area of the bladder. As the name implies, radical cystectomy is major surgery. Not only the entire bladder but also other structures are removed. - In men, the prostate and seminal vesicles are removed. 
(The seminal vesicles are small glands near the prostate that produce much of the fluid in semen.) This operation stops production of semen and may affect your sexual function. However, nerve-sparing techniques can spare erectile function in some men after surgery. - In women, the womb (uterus), ovaries, and part of the vagina are removed. This permanently stops menstruation, and you can no longer become pregnant. The operation may also interfere with sexual and urinary functions. - Removal of the bladder is complicated because it requires creation of a new pathway for urine to leave the body. This is called urinary diversion. Some people wear a bag outside the body to collect urine. Others have a small pouch made inside the body to collect urine. The pouch is usually made by a surgeon from a small piece of the intestine. Most patients (both men and women) are candidates for continent (internal, controlled) urinary tract reconstruction so that volitional (voluntary) voiding may be restored. - Surgeons and medical oncologists are working together to find ways to avoid radical cystectomy. A combination of chemotherapy and radiation therapy may allow some patients to preserve their bladder; however, the toxicity of the therapy is significant, with many patients requiring surgery to remove the bladder at a later date. If your urologist recommends surgery as treatment for your bladder cancer, be sure you understand the type of surgery you will have and what effects the surgery will have on your life. Even if the surgeon believes that the entire cancer is removed by the operation, many people who undergo surgery for bladder cancer receive chemotherapy after the surgery. This "adjuvant" (or "in addition") chemotherapy is designed to kill any cancer cells remaining after surgery and to increase the chance of a cure. Some patients may receive chemotherapy before radical cystectomy. This is called "neoadjuvant" chemotherapy and may be recommended by your surgeon and oncologist.
Neoadjuvant chemotherapy can kill any microscopic cancer cells that may have spread to other parts of the body and can also shrink the tumor in your bladder before surgery. - If it has been decided that you need chemotherapy in conjunction with your radical cystectomy, the decision to elect neoadjuvant or adjuvant chemotherapy will be made together on a case-by-case basis by the patient, medical oncologist, and urologic oncologist. Medically Reviewed by a Doctor on 8/17/2015
Pilgrim Token with Image of Saint Symeon Stylites the Younger Made in Syria H: 2 3/8 in. (6 cm); diam: 2 3/16 in. (5.5 cm) Stiftung Preußischer Kulturbesitz, Staatliche Museen zu Berlin—Skulpturensammlung und Museum für Byzantinische Kunst, Berlin (32/73) Not on view Stylites were ascetics who lived on platforms atop columns. This movement had practitioners into the nineteenth century, from Mosul in today’s northern Iraq to Gaul in France. Syria was home to large numbers of stylites, including the first stylite, Symeon Stylites the Elder (ca. 389–459). Saint Symeon Stylites the Younger ended his life on the Wondrous Mountain, named after the miracles he worked there. Like Qal‘at Sem‘an, the mountain functioned as a pilgrimage site until the arrival of the Arabs and experienced a revival during the Byzantine reoccupation of the area from 969 to 1074. This lead medallion, inscribed in Greek and based on the iconography of sixth- and seventh-century clay tokens, was produced during this period. Inscription: [in Greek, around the edge:] Eulogia [blessing] of Saint Symeon Thaumatourgos [miracle worker] Amen
Selecting the right temperature sensor depends on the process being measured, the temperature range stipulated, the response time desired, the accuracy required, and the operating environment encountered. Another important factor to consider is price, which varies with the accuracy rate and the mounting style of the device. Temperature sensors generate output signals in one of two ways: through a change in output voltage or through a change in resistance of the sensor's electrical circuit. Thermocouples and IR devices generate voltage output signals. RTDs and thermistors output signals via a change in resistance. There are two methods of temperature sensing: contact and noncontact. Contact sensing brings the sensor in physical contact with the substance or object being measured; you can use this approach with solids, liquids, or gases. Noncontact sensing reads temperature by intercepting a portion of the electromagnetic energy emitted by an object or substance and detecting its intensity; you can apply this technology to solids and liquids. To figure out which method you should use, just follow this rule of thumb: if the object or medium being heated moves, has an irregular shape, or would be contaminated by contact with a sensor, then you should use IR sensing.
Contact Sensors
The three basic types of contact temperature sensors are thermocouples, RTDs, and thermistors. Thermocouples. These sensors have the widest operating range and are best suited for high temperatures. Thermocouples of noble metal alloys can be used for monitoring and controlling temperatures as high as 3100ºF. These devices are also best for applications requiring miniature sensor designs. The inherent simplicity of the devices enables them to withstand extreme shock and vibration. Thermocouples can be configured in small sizes to offer near-immediate response to temperature changes. Thermocouples come in a variety of shapes and sizes. Here are the most common types. Insulated wire products.
These special wire-formed metal alloys are covered with insulating material, which provides both physical and electrical isolation between thermocouple wire alloys. The insulating materials are operative in temperatures as high as 2300ºF. This kind of thermocouple is cost-effective for short-term measurements. Junctions and instrumentation connections are easily fabricated on site. Mineral-insulated metal-sheathed thermocouples. Specialty thermocouple alloys are encased in a metal tube containing magnesium oxide for electrical isolation. These are general-purpose products suitable for measuring many liquids, solids, or gases. A large variety of metal coverings is available to protect the thermocouple alloys from corrosive environments. The devices have a long life in applications involving rapid temperature cycling. Protected-element thermocouples. An assortment of formed metal and ceramic tubes are used to protect the thermocouple sensing element from harsh process conditions. These thermocouples are long-lasting and durable, and you can replace them without shutting down the process. RTDs. These are precision temperature-sensing devices. They're the ones to use when applications require accuracy and long-term electrical stability. The sensing element in RTDs is typically a fine platinum wire winding or thin metallic layer applied to a ceramic substrate. The platinum resistance thermometer is the primary interpolation instrument used by the National Bureau of Standards in applications with operating temperature ranges from 436ºF to 1135ºF. Precision thermometers can be manufactured with stability of 0.0025ºC per year. However, industrial models typically drift < 0.1ºC per year. RTDs with platinum and copper elements follow a more linear curve than thermocouples or most thermistors. Unlike a thermocouple, an RTD uses copper wire products for instrument connection and requires no cold junction compensation. As a result, system cost is often lower.
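The cold junction compensation mentioned above can be illustrated with a short sketch. The linear ~41 uV/°C type K sensitivity used here is a rough approximation valid only near room temperature, and the function name is invented for illustration; real instruments apply the NIST ITS-90 polynomial tables instead.

```python
# Sketch: cold junction compensation for a type K thermocouple.
# Uses a linear ~41 uV/degC approximation; real instruments use
# NIST ITS-90 reference polynomials.

SEEBECK_UV_PER_C = 41.0  # approximate type K sensitivity, uV/degC

def hot_junction_temp_c(measured_uv, cold_junction_c):
    """Estimate hot-junction temperature from the measured EMF.

    A thermocouple's EMF reflects the temperature *difference*
    between its hot and cold junctions, so the cold-junction
    (terminal block) temperature must be added back in.
    """
    return measured_uv / SEEBECK_UV_PER_C + cold_junction_c

# 4100 uV measured with the terminals at 25 degC implies ~125 degC:
print(hot_junction_temp_c(4100.0, 25.0))  # -> 125.0
```

This difference measurement is also why plain copper instrument wiring suffices for an RTD but not for a thermocouple: the RTD signal is a resistance, so no junction correction is needed.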
Although point measurements are often desirable, they can cause errors. An RTD element can be spread over a large area, improving control with area averaging, an impractical technique with thermocouples. The voltage drop across an RTD provides a much larger signal than thermocouple voltage output. The drawbacks to this sensing technology are slower response time (due to large element size), sensitivity to shock and vibration, small resistance change (low sensitivity) for temperature variations, and low base resistance. Low base resistance and small resistance change for corresponding temperature change become a concern when long lead lengths are required because the leads create additional resistance. When added to the resistance of the RTD element, the lead resistance can result in measurement errors. To overcome lead-length problems, you should use 3- or 4-wire lead circuitry; this allows the effect of a bridge circuit to measure the resistance change based on temperature. Wire-length errors are minimized because the resistance change occurs at the RTD sensing point. Accuracy of the measurement is primarily dependent on the accuracy of the signal conditioning circuit in the controller or measuring device. Thermistors. These sensors are sensitive to small temperature changes. These devices are best for low-temperature applications over limited ranges. The element is small--thermistor beads can be the size of a pinhead. Base resistance can be several thousand ohms. This provides a larger voltage change than RTDs with the same measuring current, negating leadwire resistance problems. You must be careful, though, to limit measuring current because small thermistors are more susceptible to self-heating than RTDs. Many newer thermistor models are trimmed to tight tolerances over limited temperature ranges, but they are priced accordingly.
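The 2-wire lead-length error described above can be estimated with a short sketch. The Pt100 base resistance and the 0.00385 coefficient are the common nominal values; the lead resistances are invented for illustration.

```python
# Sketch: lead-resistance error for a 2-wire Pt100 RTD, using the
# simple linear approximation R(T) = R0 * (1 + ALPHA * T).

R0 = 100.0       # Pt100 resistance at 0 degC, ohms
ALPHA = 0.00385  # nominal sensitivity, ohm/ohm/degC

def temp_from_resistance(r_measured):
    """Linear approximation of temperature from measured resistance."""
    return (r_measured - R0) / (R0 * ALPHA)

element_r = 119.25  # actual element resistance (about 50 degC)
lead_r = 0.5        # assumed resistance of each lead wire, ohms

four_wire = temp_from_resistance(element_r)              # leads cancelled
two_wire = temp_from_resistance(element_r + 2 * lead_r)  # leads included

print(round(four_wire, 1))             # -> 50.0
print(round(two_wire - four_wire, 1))  # -> 2.6 degC error from 1 ohm of leads
```

A 3- or 4-wire hookup removes the lead terms from the measurement, which is the bridge-circuit effect described above.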
The drawbacks you will encounter when using thermistors are the result of the sensors' fragile nature, limited temperature span, initial element drift, and decalibration at higher temperatures. Thermistors are generally interchangeable and, unless additional instrument circuitry is added, will not provide a fail-safe condition if the element should open. Thermistors also do not have the same level of established industry standards as thermocouples and RTDs.
Noncontact Temperature Sensors
An IR device intercepts heat energy emitted by an object and relates it to the product's known temperature. An IR sensor offers many advantages and can be applied where contact sensors cannot be used. Some IR sensors can also be interfaced with special IR temperature controls. These provide a closed-loop, noncontact temperature-control system with options for serial data communications and data logging.
Making the Right Selection
When it comes time to select a sensor, consider these questions:
Arthur Volbrecht is a Product Manager at Watlow Gordon, 5710 Kenosha St., Richmond, IL 60071; 815-678-2211, fax 815-678-3961.
Researchers at the National Institute of Standards and Technology (NIST) and Wesleyan University have used computer simulations to gain basic insights into a fundamental problem in material science related to glass-forming materials, offering a precise mathematical and physical description of the way temperature affects the rate of flow in this broad class of materials - a long-standing goal. [Image caption: Battery acid, plastic containers and windowpanes are among the many glassy materials whose molecular properties the new study quantifies. Application of the findings could help manufacturers improve the design of such materials from the ground up. Images ©Shutterstock/collage K. Talbott] Manufacturers who design new materials often struggle to understand viscous liquids at a molecular scale. Many substances including polymers and biological materials change upon cooling from a watery state at elevated temperatures to a tar-like consistency at intermediate temperatures, then become a solid "glass" similar to hard candy at lower temperatures. Scientists have long sought a molecular-level description of this theoretically mysterious, yet common, "glass transition" process as an alternative to expensive and time-consuming trial-and-error material discovery methods. Such a description might permit the better design of plastics and containers that could lengthen the shelf life of food and drugs. A fundamental question is why many materials behave differently when temperature changes. In some "fragile" glass-forming materials, a modest variation in temperature can make the material change from highly fluid to extremely viscous, while in "strong" fluids this change in viscosity is much more gradual. This effect influences how long a manufacturer has to work with a material as it cools. "For decades, material scientists have heavily relied on empirical rules of thumb to characterize these materials," says NIST theoretician Jack Douglas.
"But if you want to design a material that does precisely what you want, you need a molecular understanding of the underlying physical processes involved." According to Douglas, the increasingly viscous nature of glass-forming liquids is related to molecules that move together in long strings around other atoms that are almost frozen in their motion. The growth of these snake-like structures leads to an increase in the viscosity of the liquid: the lower the temperature, the longer the chains, and the more viscous the fluid. The team found that the rate at which these spontaneously organizing snake-like strings grow in size as the material cools is quantitatively related mathematically to the fluid fragility - confirming intuitive arguments made nearly half a century ago by physicists G. Adam and J.H. Gibbs, but now bolstering them with a firm computational underpinning. Douglas and his collaborator Francis Starr of Wesleyan University achieved a large variation of fluid fragility through use of a computer model, which mimics a polymer fluid that includes tiny nanometer-sized particles. Portraying the addition of various amounts of nanoparticles and varying their interaction with the polymers, Starr says, gave the team a sort of "knob to tweak" to reveal how the fluidity changed with temperature and how the motion of the clusters was quantitatively related to changes in the fluid's properties. This tuning of cooperative motion in glass-forming liquids and fragility should be crucial in material design, Douglas says.
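The strong/fragile contrast described in the article is often summarized with the empirical Vogel-Fulcher-Tammann (VFT) relation for viscosity. The sketch below uses invented parameter values, not numbers from the NIST/Wesleyan study, purely to show the shape of the effect.

```python
import math

# Sketch of the Vogel-Fulcher-Tammann (VFT) form commonly used to
# describe how viscosity rises as a glass-forming liquid cools:
#   eta(T) = eta0 * exp(B / (T - T0))
# All parameter values here are illustrative assumptions.

def vft_viscosity(temp_k, eta0=1e-3, b=600.0, t0=150.0):
    """Viscosity (Pa*s) from the VFT relation; temperatures in kelvin."""
    return eta0 * math.exp(b / (temp_k - t0))

# A "fragile" liquid is one whose working temperature sits close to T0,
# so a modest cooling step produces an enormous rise in viscosity:
moderate = vft_viscosity(300.0)    # well above T0
near_glass = vft_viscosity(200.0)  # approaching T0
print(near_glass / moderate > 1000)  # viscosity grows >1000-fold
```

In this picture a "strong" liquid corresponds to T0 far below the working range (the exponential grows gently), while a "fragile" one has T0 nearby, giving the abrupt fluid-to-tar change the article describes.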
The Bennett scale, also called the DMIS (for Developmental Model of Intercultural Sensitivity), was developed by Dr. Milton Bennett. The framework describes the different ways in which people can react to cultural differences. Organized into six “stages” of increasing sensitivity to difference, the DMIS identifies the underlying cognitive orientations individuals use to understand cultural difference. Each position along the continuum represents increasingly complex perceptual organizations of cultural difference, which in turn allow increasingly sophisticated experiences of other cultures. By identifying the underlying experience of cultural difference, predictions about behavior and attitudes can be made and education can be tailored to facilitate development along the continuum. The first three stages are ethnocentric, as one sees one's own culture as central to reality. Moving up the scale, the individual develops a more and more ethnorelative point of view, meaning that one experiences one's own culture in the context of other cultures. At the next stage these ethnocentric views are replaced by ethnorelative views. Developmental Model of Intercultural Sensitivity - Denial of Difference - Individuals experience their own culture as the only “real” one. Other cultures are either not noticed at all or are understood in an undifferentiated, simplistic manner. People at this position are generally disinterested in cultural difference, but when confronted with difference their seemingly benign acceptance may change to aggressive attempts to avoid or eliminate it. - Defense against Difference - One’s own culture is experienced as the most “evolved” or best way to live. This position is characterized by dualistic us/them thinking and frequently accompanied by overt negative stereotyping.
People at this position are more openly threatened by cultural difference and more likely to be acting aggressively against it. A variation at this position is seen in reversal, where one’s own culture is devalued and another culture is romanticized as superior. - Minimization of Difference - The experience of similarity outweighs the experience of difference. People recognize superficial cultural differences in food, customs, etc., but they emphasize human similarity in physical structure, psychological needs, and/or assumed adherence to universal values. People at this position are likely to assume that they are no longer ethnocentric, and they tend to overestimate their tolerance while underestimating the effect (e.g., “privilege”) of their own culture. - Acceptance of Difference - One’s own culture is experienced as one of a number of equally complex worldviews. People at this position accept the existence of culturally different ways of organizing human existence, although they do not necessarily like or agree with every way. They can identify how culture affects a wide range of human experience and they have a framework for organizing observations of cultural difference. - Adaptation to Difference - Individuals are able to expand their own worldviews to accurately understand other cultures and behave in a variety of culturally appropriate ways. Effective use of empathy, or frame of reference shifting, to understand and be understood across cultural boundaries. - Integration of Difference - One’s experience of self is expanded to include the movement in and out of different cultural worldviews. People at this position have a definition of self that is “marginal” (not central) to any particular culture, allowing this individual to shift rather smoothly from one cultural worldview to another.
- ↑ While this level may initially be interpreted as a higher level of sensitivity, it is actually consistent with the dualistic thinking characterized by this stage, where one culture is seen as good and another culture as bad. In this case, however, it is one’s own culture that is seen as bad and another’s culture that is seen as good; neither culture is valued in its own right. Bennett, M. J. (2004). Becoming interculturally competent. In J. S. Wurzel (Ed.), Toward multiculturalism: A reader in multicultural education. Newton, MA: Intercultural Resource Corporation. (Originally published in The diversity symposium proceedings: An interim step toward a conceptual framework for the practice of diversity. Waltham, MA: Bentley College, 2002.) Additional information at www.idrinstitute.org. Bennett, M. J. (1993). Towards ethnorelativism: A developmental model of intercultural sensitivity (revised). In R. M. Paige (Ed.), Education for the intercultural experience. Yarmouth, ME: Intercultural Press. Bennett, M. J. (1986). A developmental approach to training intercultural sensitivity. In J. Martin (Guest Ed.), Special issue on intercultural training, International Journal of Intercultural Relations, Vol. 10, No. 2, 179-186.
For the past five years, UVA’s neurotrauma laboratory has gathered the school’s top doctors to study what has been called a “signature wound” in American military conflicts in Iraq and Afghanistan. Traumatic Brain Injuries—known as TBIs—have afflicted 200,000 soldiers since the start of Operation Enduring Freedom in 2001. In an effort to develop TBI testing fit for the battlefield, the Department of Defense provided the UVA neurotrauma lab with $6 million to create portable ultrasound machines that can evaluate brain injuries during combat. Dr. James Stone is currently developing a hand-held ultrasound unit that could be used to detect Traumatic Brain Injuries outside of the hospital, including in conflict zones. Dr. James Stone, an assistant professor of radiology and medical imaging at the University’s School of Medicine, is working with neurological surgeon Dr. Greg Helm to develop a hand-held ultrasound unit. However, before the technology can be built, Stone and Helm must validate the theory the entire project rests on—that ultrasound measurement of tissue stiffness can actually detect TBI. According to Stone, the basis for the ultrasound research stems from a collaborative project with Temple University investigators, which “showed alteration in tissue stiffness in a measurable fashion following experimental brain injury.” If Stone can prove that tissue stiffness is correlated with TBI, brain injury diagnosis could become more reliable. Current military tests fail to catch nearly 50 percent of brain trauma cases, according to an NPR report. “Although we have spent the better part of a century exploring how TBI occurs, we still have much to learn,” admits Stone. The Centers for Disease Control and Prevention lists TBI as a contributing factor in one-third of all injury-related deaths, with 1.7 million Americans suffering from brain trauma each year.
In military conflicts in Afghanistan and Iraq, hundreds of thousands of soldiers have experienced head injuries ranging from mild to life threatening, usually caused by mines and improvised explosive devices. Military helmets offer protection against bullets, but still face challenges from roadside bombs. “In terms of resulting clinical symptoms, injuries can be mild and manifest as mood disturbances, difficulty sleeping, headaches, and memory loss,” explains Stone. Patients with more severe brain trauma have experienced lifelong debilitation, coma, and death. A study from Vanderbilt University reveals that 30 percent of TBI patients will develop clinical depression—three times greater than the national average. In 2007, UVA recognized the urgent need to better detect and understand instances of TBI on the battlefield. For the past six years, says Stone, UVA’s neurotrauma lab “has been entirely focused upon exploring questions related to combat [inflicted] TBI.” In addition to Stone and Helm, there are several other UVA doctors developing applications for the DOD. Dr. George Rodeheaver directs the University’s Wound Healing Lab, where his highly successful burn treatment gel caught the government’s attention. Last November, Rodeheaver’s company PluroGen signed an $8.6 million federal contract to fund the regulatory approval process and increase manufacturing of the gel. Despite the death of al-Qaeda figurehead Osama bin Laden last week, the country remains enmeshed in the war on terror. President Obama’s 2012 budget devotes $80 million to Department of Defense research development, which may help programs like UVA’s neurotrauma laboratory further limit the deadly effects of a decade-long war.
Monday, May 28, 2007 NPR aired an excellent piece this morning on Flagstaff, Arizona geologist-pilot-physician-photographer Michael Collier - read it at http://www.npr.org/templates/story/ Collier's latest book (#13) is Over the Mountains, An Aerial View of Geology (Mikaya Press), described as "the first in a new series of picture books focused on the evolution of landscapes." He also has an exhibit of 45 photographs, entitled Stones from the Sky, that will be shown at the AAAS headquarters in Washington, D.C., running from June 7 to Sept. 14, 2007. The UA Press published a number of Collier's books - http://www.uapress.arizona.edu/catalogs/author_books.php?id=1590 Among many other awards, Collier is the 2006 recipient of AGI’s Outstanding Contribution to Public Understanding of Earth Sciences award. Monday, May 21, 2007 As recently as 2000, the price of the element tellurium averaged $3.82 per pound, according to the USGS (http://minerals.usgs.gov/minerals/pubs/commodity/selenium/). Last year it went to $96 and this year it hit $100. So, what’s up? An investment report released last month by Jack Lifton at ResourceInvestor.com says that this fall, Intel and Samsung "will introduce flash memory replacements…that can be used, erased, and used again indefinitely, but, rather than being crystalline silicon technology based, are made from tellurium based glasses composed of germanium, antimony, and tellurium.” They offer the promise of low-cost reliable electronics including smart cell phones, according to Lifton. Lifton also briefly mentioned the military’s use of tellurium and selenium in solar energy conversion. Coincidentally, Phoenix-based First Solar Inc., which went public last fall, produces photovoltaic solar panels (i.e., electricity-producing panels) using cadmium tellurium as the absorption layer.
First Solar says in their annual report that, "Cadmium telluride…has the potential to deliver competitive conversion efficiencies with approximately 1% of the semiconductor material used by traditional crystalline silicon solar modules." This reduces the cost of solar panels, making them more competitive. [Note - in the interest of disclosure - I own stock in First Solar].

Why does all this matter? Well, it turns out there are no primary tellurium mines in the However, at $100 per pound and the potential for major new demands on the mineral, it seems possible that tellurium could follow in the footsteps of molybdenum, another 'worthless' mineral that now is a major contributor to mining company bottom lines and the state's economy.

Monday, May 14, 2007

A lawsuit filed against the federal government by four trade associations could prohibit geologists, geographers, and many other professionals from making maps using federal funds, according to the American Association of Geographers (http://www.aag.org/help/links.html). MAPPS (Management Association for Private Photogrammetric Surveyors), the American Society of Civil Engineers (ASCE), the National Society of Professional Engineers (NSPE), and the Council on Federal Procurement of Architectural and Engineering Services (COFPAES) are suing the U.S. Government over regulations for federal procurement of architectural and engineering services, including surveying and mapping, with regard to the qualifications-based selection (QBS) process in the Brooks Act (40 USC 1101 et seq.).
http://www.mapps.org/QBSlawsuit.asp The MAPPS plaintiffs filed suit to force the US Government's Federal Acquisition Regulatory (FAR) Council to "define 'survey and mapping' so as to include contracts and subcontracts for services for Federal agencies for collecting, storing, retrieving, or disseminating graphical or digital data depicting natural or man made physical features, phenomena and boundaries of the earth and any information relating thereto, including but not limited to surveys, maps, charts, remote sensing data and images and aerial photographic services." [emphasis added]

My reading of this section of the complaint is that geologists would be prohibited from using Federal funds for almost everything we do, unless we work under the direction of a licensed engineer or surveyor. AAG filed an amicus brief this spring, stating in part: "…a victory for plaintiffs would not only insulate all federal mapping contracts from price competition, but also exclude everyone else – that is, anyone and everyone other than licensed engineers and surveyors – from even being eligible to receive a federal mapping contract, even where engineers and surveyors lack the training and subject matter expertise needed to perform the contract." People following this lawsuit say we can expect a ruling from the judge at any time.

All but about a dozen of the 58 stations in the EarthScope (http://www.earthscope.org) USArray Transportable Array (TA) of broadband seismic stations are installed in Arizona at present. A map of the stations was published in "Arizona Geology" last year (http://www.azgs.az.gov/Spring_06.pdf). [right: a broadband seismic station in New Mexico, similar to those in Arizona. Courtesy EarthScope] Prof.
Matt Fouch at ASU reports that “many of the stations are returning better data than permanent station installations, and data from the TA have already been extremely useful in helping constrain crustal thickness and other regional structure. The TA operations facility has detected earthquakes and mine blasts at the magnitude 1-2 threshold. It opens up the issue of potential seismic hazard estimates in other regions of the state besides the Flagstaff region, which has typically been the only area with any sort of station coverage. Data from the TA have also already been extremely useful for helping reform ideas about things like Basin and Range extension and Colorado Plateau uplift, but the array is still new enough that we need to wait for more events before we can really make a fundamental contribution to some of these issues.” There is now an unanticipated opportunity to keep some of these broadband seismometers in the state after the array moves east (currently scheduled to move 12-24 months from now). Oregon and Washington raised $500,000 from private foundations to purchase key TA stations, while Nevada is seeking a Congressional earmark to acquire stations in that state and have the USGS maintain them. There are already 2 stations (at Organ Pipe and Petrified Forest National Monuments) that the TA plans to leave in the state, and they will operate and archive the data in near real-time for the next ~10 years. We have an opportunity to acquire more of these stations and essentially instantly build a very high-quality seismic network in the state if we can identify funds to pay for replacement stations as the Array moves eastward. 
Friday, May 11, 2007

The Arizona Daily Star last month reported that: "a 480-acre retired farm on the

"Because of its closeness to the San Pedro and the amount of water the retired farmland once used, the Sandlin property is crucial to the river and, in turn, neighboring Fort Huachuca, which is under court order to cut water consumption in and around the river. 'The fort's future and the future of the San Pedro are inextricably linked,' said Col. Jonathan Hunter, the fort's garrison commander."

Bill Hess of the Sierra Vista Herald/Review wrote yesterday that, "The land involved in the swap has been put back on the market. [emphasis added] Some believe that if it is sold to a developer or to an agricultural business, water will again be pumped to the detriment of the river and the partnership's goal of finding ways to conserve water." The Herald reports that the water deficit in the Upper San Pedro Basin - the difference between what is being pumped from the aquifer and what is being recharged - is 10,800 acre-feet, instead of the previously measured 7,700 acre-feet, according to the U.S. Geological Survey. The higher number is due to better quality measurements. This double whammy will make it harder to meet the region's water goals.

Another twist on these complex deals is a recent Wall Street Journal story that described the land deal's role in the Resolution copper mine: "North America's largest copper lode is believed to be buried more than a mile beneath Apache Leap, the stark red cliffs that loom above this storied Old West town about an hour east of

In exchange for supporting the bill, the local congressman, Rick Renzi, a Republican, insisted on something in return: He wanted Resolution to buy, as part of the land swap, a 480-acre alfalfa field near his hometown of

Resolution executives refused. For starters, they thought the land was overpriced, people close to the deal say. More troubling, they discovered it was owned by Mr.
Renzi's former business partner, these people say. Resolution wasn't the only party troubled by the congressman's demands. His chief of staff resigned and began cooperating secretly with the Federal Bureau of Investigation, according to witnesses and others close to the case. The FBI began a preliminary inquiry that was first reported in October, just before Mr. Renzi was elected to a third term."

Thursday, May 10, 2007

I refer you to a new website on communicating science - "Speaking Science 2.0" - www.scienceblogs.com/speakingscience/ - put together by Matt Nisbet and Chris Mooney. Chris is Wash. DC correspondent for Seed magazine, author of "The Republican War on Science" and the about-to-be-released book "Storm World: Hurricanes, Politics, and the Battle over Global Warming." He blogs at www.scienceblogs.com/intersection. Matt is professor of communications at American University in Wash. DC and is widely recognized for his work on framing science messages (also the name of his blog - http://scienceblogs.com/framing-science/). They recently published op-ed pieces in Science and the Washington Post that attracted widespread national attention. They are starting a nationwide lecture tour today on this topic. A synopsis of their talk follows below:

Speaking Science 2.0

"We do not see the world as it is. We see the world as we are." -- Talmud scripture
"It's not what you say, it's what people hear." -- Republican strategist Frank Luntz

Over the past several years, the seemingly never-ending fights over evolution, embryonic stem cell research, global climate change, and many other topics have led to a troubling revelation. Scientific knowledge, alone, does not always suffice when it comes to winning political arguments, changing government policies, or influencing public opinion. Put simply, the media, policymakers, and the public consume scientific information in a vastly different way than do the scientists who generate it.
As a result, scientists and their organizations repeatedly face difficult challenges in explaining their knowledge to diverse groups of citizens. As issues at the intersection of science and politics gain more and more attention, something beyond pure science--beyond "getting the facts out there"--will be necessary to break through to the public. But what are the new directions? It's time to question some central assumptions and focus on fresh ideas.

**A conversation about new directions in science communication.** In this joint presentation, journalist Chris Mooney and communication professor Matthew Nisbet explain how scientists and their allies can "reframe" old debates in new ways, while taking advantage of a fragmented media environment to connect with a broader American public. Drawing on case studies from the battles over stem cell research, evolution, global warming, hurricanes, and other subjects, a key point of emphasis will be that scientists must adopt a language that emphasizes shared values and has broad appeal, avoiding the pitfall of seeming to belittle fellow citizens or attacking their

Innovative strategies for public engagement could not be more urgent: Science will figure, as never before, in the 2008 presidential campaign and beyond. Scientific "facts" will increasingly be pulled into fraught political contexts, and bent and twisted in myriad ways. This political environment can seem perplexing to scientists, or worse. But it's one to which they must adapt if they want their hard-won knowledge to play its necessary role in shaping the future of

Wednesday, May 02, 2007

This month's AAPG Explorer magazine quotes Vince's recent talk to the AAPG annual meeting in

The list includes energy resources such as oil and natural gas, coal, and uranium.
The global demand for copper is responsible for the record prices of the last year or so, which has helped make

Vince expects the increasing competition for resources to bring:
· Shortages of raw materials
· Pressures to develop more resources in the states
· Potential conflicts with multinational corporations in the states

The bigger problem may be that Vince doesn't see any obvious solutions to what he views as a pending crisis. What comes to mind immediately are the number of wars that have been fought over access to or control of natural resources. Are we destined for similar instabilities in coming years?

The long-enduring debate over collecting fossils on federal lands is continuing. The bill's sponsor, Rep. James McGovern (D-MA), testified that it "provides stiff penalties for crimes involving the theft and vandalism of Fossils of National Significance (FONS) in order to deter the illegal collection of these resources on public lands. And, it is important to note that the bill seeks only to penalize those who knowingly violate the law and seek to illegally profit from these public resources. It does not place any new restrictions on amateur collectors who by and large respect the value of these fossils. It is limited to public lands, and will in no way affect private land-owners. Furthermore, this bill mandates that all such fossils taken from federal land be curated at museums or suitable depositories. Lastly, it standardizes the permitting practices for excavation on public lands to ensure that fossils are not needlessly damaged."

The bill was opposed by Peter L. Larson, President of the Black Hills Institute of Geological Research, Inc., a commercial fossil company based in South Dakota. Larson argued that the bill would limit collecting to academics only and that amateurs and commercial companies unearth fossils that would otherwise be lost to weathering or never found by the scientific community.
Larson also argued against allowing scientists and federal land managers to keep secret the locations of significant finds. This debate, which has played out over many years, comes down to a few basic battles:
Perhaps at no time has it been more important for students, especially high school students, to be as prepared as possible for assessments of 21st-century math learning standards, as well as for college aptitude exams such as the SAT and ACT. Math proficiency is especially vital. High school math may begin with algebra, but the learning curve extends to higher mathematics and to sciences such as chemistry and physics. While every teacher tries to provide the attention every student needs, there is not enough time in the day and there are too many students in each class. Resources to help high school students stay on track and get through more difficult areas of study are available in math websites, homework databases and lesson plan directories. In addition, many sites offer games, videos and testing to help with high school math.

High school math websites

Math websites can range in design from simple problem-solving to advanced theories with additional site references. They can be divided by subject and grade level so that students are not just visiting a page with numbers, but can focus on the type of mathematics where they may be experiencing difficulty, such as algebra, trigonometry, geometry and calculus. If a student is advanced and still looking for homework and math help at a higher level, there are also sites that cater to advanced placement mathematics and pre-college level math.

The developers of this site have been helping high school students achieve greater results in mathematics for 17 years. They have received numerous awards for their work and continue to provide a site that lends assistance in all areas of math. From explanations of theory to problem-solving equations, the information on this site is completely free. They also have a page titled Cyberexam that offers quizzes and testing for each area of mathematics learning.
It does not have a catchy name, but Math.com has everything needed in a website that assists students with high school math. This site is easy to navigate, with a drop-down menu on the side that contains choices for every subject of mathematics studied at the high school level. The site also includes a section where teachers may find information for lesson plans and a multitude of classroom resources. There is another section for parents who need help from the website's homework directory for their child.

Owned and operated by Discovery Education, this website provides teachers with helpful resources to captivate each and every student in the classroom. As a top math help website, Webmath.com is a free student resource in which students can enter any math-related question and get a detailed solution on the spot. The unique aspect of Webmath.com is that in the "Math for Everyone" section, there are very clear examples of everyday expenses in which math is necessary. You can learn how to determine a tip at your favorite restaurant or figure out the odds of winning that million-dollar lottery. Whatever the instance, this website has all the secrets for math success!

Another great site for help in high school math is mathplanet.com. This site is also well designed and offers the lessons, examples and explanations that every high school math student needs. One unique attribute of this site is that many areas of mathematics have video lessons. In addition, Mathplanet.com offers SAT and ACT tests with separate downloadable answer keys.

These are just a few of the better websites that provide help in high school mathematics. Each one provides lessons, explanations and resources that will assist students, parents and teachers in achieving success and prepare them for the future.
February 12, 2013

New World Record Efficiency For Thin Film Silicon Solar Cells

A 10.7 percent conversion rate has been achieved using less than 2 micrometers of raw material

The Photovoltaics Laboratory (PV-Lab) of EPFL's Institute of Microengineering (IMT), founded in 1984 by Prof. Arvind Shah and now headed by Prof. Christophe Ballif, is well known as a pioneer in the development of thin-film silicon solar cells, and as a precursor in the use of microcrystalline silicon as a photoactive material in thin-film silicon photovoltaic (TF-Si PV) devices. A remarkable step was achieved by the team led by Dr. Fanny Meillaud and Dr. Matthieu Despeisse with a new world-record efficiency of 10.7% for a single-junction microcrystalline silicon solar cell, independently confirmed at the Fraunhofer Institute for Solar Energy Systems (ISE CalLab PV Cells) in Freiburg, Germany. "Deep understanding has been gained these last years in material quality, efficient light-trapping and cell design, which in combination with careful process optimization led to this remarkable world-record efficiency," says Simon Hänni, PhD student at IMT Neuchâtel. Importantly, the employed processes can be up-scaled to the module level.

While standard wafer-based crystalline silicon PV technology implements absorber layers with a thickness of about 180 micrometers for module conversion efficiencies of 15 to 20%, 10.7% efficiency was reached here with only 1.8 micrometers of silicon material, i.e. 100 times less material than for conventional technologies, and with cell fabrication temperatures never exceeding 200°C. Thin-film silicon technology indeed offers the advantages of saving on raw material and offering a low energy payback time, thus allowing module production prices as low as 35 /m2, reaching the price level of standard roof tiles.
The reported progress is of paramount importance for further increasing the efficiency and potential of TF-Si PV devices, as at least one microcrystalline silicon junction is systematically used in combination with an amorphous silicon junction to form multiple-junction devices for a broader use of the solar spectrum. The reported record efficiency clearly indicates that the potential of TF-Si multi-junction devices can be extended to >13.5% conversion efficiency with a minimum usage of abundant and non-toxic raw material at low cost (TF-Si PV modules implementing, in their simplest form, two glass sheets and a few microns of zinc and silicon for easy recycling).
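The material savings claimed in the release are easy to verify from the absorber thicknesses it gives (a quick sketch; the figures are those quoted above):

```python
# Silicon usage: conventional wafer vs. the record thin-film cell (figures above).
wafer_um = 180.0    # typical crystalline silicon absorber thickness, micrometers
thin_film_um = 1.8  # absorber thickness of the record microcrystalline cell

ratio = wafer_um / thin_film_um
print(f"The thin-film cell uses about {ratio:.0f} times less silicon")
```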
@Cubbi: what is libc++ then? Compared to libstdc++?

libstdc++ is the GNU C++ standard library that accompanies the GNU g++ compiler. It relies on the GNU C library for the shared features and supports whatever platforms g++ supports. It's missing a lot of minor and several major C++11 requirements. It doesn't directly support other compilers, but because it is ubiquitous, other compilers implement whatever g++ quirks are needed to be used with it (clang++ and intel did that, for example).

libc++ is a complete (I think the only complete one so far) C++11 standard library implementation written from scratch for Apple. It supports MacOS and relies on the BSD C library, although there is work ongoing to support Linux and Windows (it mostly works, but some features are missing). It expects clang++ as the compiler.

llvm is good in general. It can be used for on-the-fly or JIT compilation, and it can be used as a backend and optimizer targeting several different platforms for a compiler or VM. It's just... I love it. I would probably marry it if it were legal in my state.

Back on topic, libc++ is co-developed by Apple because Clang is big on Mac OSX. I don't think they created it, although don't quote me. It still uses extensions heavily for optimization, and its main goal was licensing differences. libstdc++ works perfectly fine with Clang (as I use it interchangeably with gcc) and supposedly vice versa (I don't have a Mac nor do I care to test on Linux).
September 16, 2009

Northwestern United States Could Face More Tamarisk Invasion

Models show habitat of the aggressive invasive plant likely will expand as temperatures warm

If the future warming trends that scientists have projected are realized, one of the country's most aggressive exotic plants will have the potential to invade more U.S. land area, according to a new study published in the current issue of the journal Invasive Plant Science and Management. The study found that tamarisk - prevalent today in some parts of the region, but generally limited to warm and dry environments - could expand its range into currently uninvaded areas.

"Results of our study suggest that a little over 20 percent of the Northwest east of the Cascade Mountains supports suitable tamarisk habitat, but less than one percent of these areas is currently occupied by the species," said Becky Kerns, a research ecologist with the Western Wildland Environmental Threat Assessment Center (WWETAC) who led the study. "That means the remainder is highly vulnerable to invasion right now, with the situation potentially getting worse as favorable conditions for tamarisk may expand under climate change."

These findings translate into a two- to ten-fold increase in highly suitable tamarisk habitat in Oregon, Washington, and Idaho by the end of the century. Tamarisk, also known as "saltcedar," is a deciduous shrub or small tree that grows quickly, reproduces profusely, and tolerates drought and salty conditions, making it capable of easily displacing native species. It also sheds flammable leaves that serve as potential fuel, significantly increasing an area's wildfire risk. The plant was intentionally introduced to the West in the 1800s as an ornamental, windbreak, shade, and erosion-control species, and today can be found growing prolifically in the Northwest in the central Snake River Plain, Columbia Plateau, and Northern Basin and Range.

"Tamarisk is not a newcomer to the Northwest," Kerns said.
"But most people are surprised that it is found here and that it forms extensive stands along certain portions of our arid waterways."

In the study, Kerns and her Forest Service and Oregon State University colleagues compiled distribution data for all species of tamarisk in the region and used the information to develop habitat suitability maps, which helped to identify the areas most susceptible to invasion. They then projected differences in habitat resulting from a changing climate to determine how the plant's habitat and distribution may change in the future. Their projections indicated that, although most of the region maps as low habitat suitability for tamarisk, suitable and unoccupied habitat prone to invasion exists. Large, relatively uninvaded areas - including the Columbia, Okanogan, Yakima, upper John Day, Deschutes, lower Salmon, upper Owyhee, and lower Snake Rivers and their tributaries - appear to be especially vulnerable to infestation from adjacent populations.

"It's important to acknowledge that considerable uncertainty exists surrounding future climate change," Kerns said. "But our results provide a useful starting point for discussing the emerging threat of this highly invasive species in relation to climate change."

On the Net:
- USDA Forest Service, Pacific Northwest Research Station
- To read a summary of the study online, visit http://wssa.allenpress.com/perlserv/?request=get-abstract&doi=10.1614%2FIPSM-08-120.1
Biosphere Reserve Information

Vodlozersky Biosphere Reserve is located in the extreme north-west of the country. Mid-latitude and northern taiga landscapes are preserved here, and all kinds of terrestrial and aquatic ecosystems typical for this zone are represented - boreal coniferous forests (spruce Picea abies, P. obovata and pine Pinus sylvestris), mires, and lake and river ecosystems. The most important task for the Vodlozersky Biosphere Reserve is the improvement of the social and economic conditions of life for the local population, along with conservation and development of traditional, ecologically safe kinds of nature management and land use (fisheries, haymaking, pasture and arable lands). Approximately 4,560 permanent residents (seasonally up to 5,600) live in the biosphere reserve (2001). The cultural heritage dates back to Mesolithic archaeological sites and to the 15th century, when Russians started to colonize Vodlozerje, which initiated the forming of an isolated cultural region in the Russian North. Apart from churches and chapels, a very old cultural landscape and local customs have also survived to date. Within the biosphere reserve, scientific and ecological expeditions are conducted to study and inventory the nature diversity and territory resources. Monitoring of boreal, mire and aquatic species is also carried out. There are programmes on the revival of traditional land-use practices and cultural traditions. At present, the territory is a model for working out a strategy for tourism development as a promising branch of economy in the Onega and Pudozh regions.

Major ecosystem type: Boreal needleleaf forests or woodlands
Major habitats & land cover types: North-taiga pine open wood shrubby forests characterized by Pinus sylvestris, Vaccinium myrtillus, Calluna vulgaris etc.; north-taiga spruce shrubby forests dominated by Picea abies, P. obovata, Pinus sylvestris etc.; middle-taiga greenmoss-blueberry-spruce forests including Picea abies, P. obovata, Dicranum sp. etc.; middle-taiga pine forests with lichen and greenmoss, including Pinus sylvestris, Betula pendula, B. pubescens etc.; upper oligotrophic sphagnum mires with ridges and pools characterized by Pinus sylvestris, Picea abies, Betula nana and Andromeda polifolium; agroecosystems; forestry systems
Coordinates: 62°07’ to 63°35’N, 36°15’ to 37°30’E; central point 62°51’N, 36°52’E
Transition area(s) when given: 450,000
Altitude (metres above sea level): +136 to +317
Administrative authorities: Management of Vodlozersky National Park - Biosphere Reserve 'Vodlozersky', reporting to the Federal Forest Service of the Russian Federation
Last updated: 01/03/2007
After taking a fresh look at the data from two previous genome-wide scans for MS-related gene variations, the International MS Genetics Consortium (IMSGC) concluded that MS risk is governed by a cumulative effect of dozens of allelic variants throughout the genome, probably involving as many as 100 genes (N Engl J Med. 2007;357(9):851-862; N Engl J Med. 2010 Apr 9;86(4):621-5). To see the full collection of published genome-wide MS studies, go to www.msgene.org

The role of genetics in MS

As the role of genetics in MS continues to be explored, the following statements can be used to help frame conversations with patients about MS risk (Compston & Coles, 2008):
- The risk of developing MS in the general population is approximately 0.1%.
- The risk for a child with one parent who has MS is approximately 2%.
- The risk for a child with two parents who have MS is approximately 12.2% (Ebers et al, 2000).
- The risk for a dizygotic twin and other siblings is approximately 5%.
- The risk for monozygotic twins is approximately 25% (Willer et al, 2003).
- The risk for second-degree and third-degree relatives is approximately 1%.

Genetic differences between African-Americans and Northern Europeans with MS

Research has demonstrated that MS occurs in most ethnic groups, including African-Americans, Asians and Hispanics/Latinos. Susceptibility rates vary among these groups, with recent findings suggesting that African-American women have a higher than previously reported risk of developing MS. In 2013, a nationwide team of researchers reported the largest genetic study of people with MS of non-European ancestry. Investigators obtained DNA samples from 1,162 African-Americans with MS and 2,092 African-Americans without MS, as well as 577 white Americans with MS and 461 white Americans without MS. The team looked for similarities and differences in 128 gene variants that have been associated with MS.
They confirmed associations of key immune-response genes (HLA) with MS among African-Americans. However, among 73 non-HLA genes that were associated with MS among white Americans, only 8 were associated with MS among African-Americans. The authors concluded that MS genetic risk in African-Americans only partially overlaps with that of Europeans, and could help explain the difference in MS prevalence between populations. Conversations about the genetic contribution to MS risk should include mention of the complex interaction with environmental factors that appears to exist.
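The familial gradient in the figures cited above is easier to appreciate when expressed as relative risk versus the general population (a sketch using only the percentages quoted from Compston & Coles, 2008, and related studies):

```python
# Relative MS risk vs. the general population, using the figures cited above.
general_population_pct = 0.1  # approximate background risk, percent

risks_pct = {
    "one affected parent": 2.0,
    "two affected parents": 12.2,
    "dizygotic twin / other sibling": 5.0,
    "monozygotic twin": 25.0,
    "second/third-degree relative": 1.0,
}

for group, pct in risks_pct.items():
    relative = pct / general_population_pct
    print(f"{group}: about {relative:.0f}x the general-population risk")
```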
Every man over the age of 65 could be screened for dangerous swelling of the main artery from the heart.

Men are more likely to suffer an aortic aneurysm

Government experts are investigating whether the NHS should screen patients for aortic aneurysms. This potentially fatal condition occurs when the major blood vessel from the heart - the aorta - swells and can rupture. Men are six times more likely to have an aneurysm than women. If an aneurysm ruptures, the chances of survival are low, with half of patients never reaching hospital. Currently, the only effective treatment is to repair them by surgery.

The UK National Screening Committee is considering whether men over the age of 65 should be routinely screened for the condition. It follows a study by the Medical Research Council, published last year, which suggested that screening could save thousands of lives each year. Researchers screened 67,800 men aged 65 and older for the condition. Men who had aneurysms larger than 3cm were closely monitored, and underwent surgery when it was thought necessary. As a result, the death rate among this group was 52% lower than among men who were not screened. These men were also more likely to survive after undergoing surgery.

The National Screening Committee has commissioned a study to examine the cost implications of introducing an aneurysm screening programme. The researchers carrying out that study are expected to report their findings next year. Committee members will then decide whether or not to recommend the introduction of a screening programme to ministers. A Department of Health spokeswoman said: "The National Screening Committee has commissioned a further study on the feasibility of the proposals, opportunity costs, and the likely costs of performing screening in those centres not involved in the studies and will make recommendations in the early half of 2004."
Mr Alan Scott, who headed the Medical Research Council study into the effectiveness of a screening programme, welcomed the move. "The work that we have done has shown that screening is cost effective and beneficial," he told BBC News Online. Mr Scott suggested that screening all men over the age of 65 for aortic aneurysms would cost just £23 per scan and £63 if treatment is included. "The plan would be to screen men once at 65. This should reduce their risk of serious problems for 10 years," he said. "I would hope they would introduce screening in the near future."
<urn:uuid:270e7eac-4152-4133-bb39-91849492c0a0>
CC-MAIN-2016-26
http://news.bbc.co.uk/2/hi/health/3052322.stm
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783393146.70/warc/CC-MAIN-20160624154953-00113-ip-10-164-35-72.ec2.internal.warc.gz
en
0.970069
506
2.703125
3
Short Radius Centrifuge A view of a NASA-provided Short Radius Centrifuge at UTMB in Galveston is reminiscent of similar equipment from 40 years ago when astronauts were training for the initial giant leap to the Moon. With the national impetus to return humans to the Moon and make trips to Mars and beyond, new artificial gravity studies are about to begin to find answers to a number of questions regarding human beings' ability to withstand the rigors of such travel. A major undertaking in artificial gravity research will begin this summer at the University of Texas Medical Branch at Galveston. Overseen by NASA's Johnson Space Center, the centrifuge will be used to protect normal human test subjects from deconditioning when confined to strict bed rest in UTMB's National Institutes of Health-sponsored General Clinical Research Center. This study, which supports NASA's Artificial Gravity Biomedical Research Project, will allow researchers to study for the first time, on a systematic basis, how artificial gravity might be used as a multi-system countermeasure against the effects of prolonged microgravity on the human body. + View high-resolution image (1 Mb)
<urn:uuid:e6ea4f5f-cb82-458a-883c-d19d7a4e794a>
CC-MAIN-2016-26
http://www.nasa.gov/vision/space/preparingtravel/human_centrifuge_08315.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395620.9/warc/CC-MAIN-20160624154955-00102-ip-10-164-35-72.ec2.internal.warc.gz
en
0.883303
233
3.21875
3
While there are plenty of settings where wearable tech like Google Glass is rather inappropriate and can make you, well, look like a tool, it's becoming more clear that one place it could find a home is in the medical community. Doctors with Glass? Makes sense. Last June, Google Glass was used during surgery for the first time ever, and since then more and more hospitals and medical schools have been experimenting with the technology. Now, one school is looking to make Google Glass an integral part of its curriculum. UC Irvine School of Medicine has announced intentions to become the first med school in the country to fully integrate Google Glass into its four-year program - from anatomy courses to rotations, UC Irvine wants to equip its students with the wearable tech. "I believe digital technology will let us bring a more impactful and relevant clinical learning experience to our students," said Dr. Ralph V. Clayman, dean of medicine. "Our use of Google Glass is in keeping with our pioneering efforts to enhance student education with digital technologies - such as our iPad-based iMedEd Initiative, point-of-care ultrasound training and medical simulation. Enabling our students to become adept at a variety of digital technologies fits perfectly into the ongoing evolution of healthcare into a more personalized, participatory, home-based and digitally driven endeavor." The benefits of basically having a hands-free computer on your face are obvious from the doctors' point of view. Glass, pull up medical records on Jane Smith. Glass, record this procedure from my perspective so I can use it in instruction later. UC Irvine wants to use Google Glass in the OR, ED, ICU, anatomy labs, medical simulation center, ultrasound institute, and even in lecture halls. It appears that this will be a huge part of going to med school at UC Irvine. 
But here's something interesting - having clinical patients don the wearable tech for teaching purposes: "The most promising part is having patients wear Glass so that our students can view themselves through the patients' eyes, experience patient care from the patients' perspective, and learn from that information to become more empathic and engaging physicians," said Dr. Warren Wiechmann, assistant clinical professor of emergency medicine and associate dean of instructional technologies at the school. In other Google Glass news, it's now available to anyone and everyone in the U.S. - provided you have $1,500 burning a hole in your pocket. Image via YouTube
<urn:uuid:726e6ce6-8855-43b6-8a10-779dad58ff74>
CC-MAIN-2016-26
http://www.webpronews.com/med-school-promises-full-google-glass-integration-2014-05/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783399117.38/warc/CC-MAIN-20160624154959-00123-ip-10-164-35-72.ec2.internal.warc.gz
en
0.953389
511
2.859375
3
With all of the attention being paid lately to the skyrocketing prices at the pump, the sudden rise in the cost of natural gas has gone practically unnoticed. But like its counterpart, oil, the price of natural gas has gone through the roof too - jumping 97% since last August. I mean... take a look at this natural gas futures chart. It is as ugly as it gets. Even so, big price jumps like these never managed to make the front pages anywhere. Of course, I guess if the price of natural gas was posted on nearly every corner in America - much like gas is - it would have been plastered all over the place, but it's not. The net result, though, is the same - more money out of our pockets every month for the energy we need to consume. So quietly now, the price of natural gas has nearly reached heights not seen since the months after Hurricane Katrina. So what is the culprit? Well, it's not the weather this time. Remarkably, the Gulf of Mexico has been quiet for two years now. Instead, it's an extremely tight supply and demand situation, similar to what is going on with oil. We simply consume too much natural gas in comparison to what we actually bring out of the ground. The Marcellus Shale Formation But as we discovered recently here at home with the Bakken oil find, the massive gas reserves confirmed within the Marcellus Formation may be part of the answer to those increasingly inadequate supplies. All that needs to be done now is to successfully develop those reserves. That was part of what was on President Bush's mind on Tuesday in a press conference that touched on rising energy prices. America, he suggested from the Rose Garden, should take a hard look in the mirror when it comes to them. In fact, he said that "If Congress is truly interested in solving the problem, they can send the right signal by saying we're going to explore for oil and gas in the US territories, starting with the Arctic National Wildlife Refuge." 
And while he didn't exactly mention the massive natural gas formation by name, his implication was crystal clear: America desperately needs to develop the resources within its own borders. The development of the Marcellus formation, of course, is a big part of that task. That's how huge it is. The shale formation actually covers an area of roughly 54,000 square miles, making it bigger than the state of Pennsylvania. It runs from upstate New York, across Pennsylvania into eastern Ohio and across most of West Virginia. In fact, in terms of the scale of the development, the Marcellus gas formation would add up to 20% to the known natural gas reserves within the U.S. That dwarfs another important piece of the natural gas puzzle, the Barnett Shale. The Marcellus formation, however, isn't new at all. In fact, they have been drilling for natural gas in those same Appalachian Mountains for years now. The holes are everywhere. But what is completely new to the region are the estimates of exactly how much natural gas there is within Marcellus. Gas that can now be recovered with deeper wells and better technology. That's because in January, two geosciences professors rocked the natural gas industry when they estimated the Marcellus Shale reservoir contains 168 trillion cubic feet of natural gas and could contain as much as 516 trillion cubic feet. That, according to conservative estimates, would yield as much as 51 trillion cubic feet of recoverable natural gas. By comparison, the entire U.S. production of natural gas is only some 30 trillion cubic feet per year. Moreover, at the top end of the range, the Marcellus formation would be twice as big as the Barnett Shale, the busiest natural gas field in the country. The "New" Super Giant Marcellus Spurs a Land Rush The Marcellus formation, in other words, could now be classified as a "Super Giant." That is a complete reversal of a U.S. 
Geological Survey from only six years ago that estimated there may only be 1.9 trillion cubic feet of product within the area. The result has been a virtual land rush by numerous natural gas companies looking to cash in on those reserves, even though the expensive horizontal drilling needed to tap the resources has been limited so far. Nonetheless, these companies are making big bets that the natural gas pulled out of the Marcellus will mean billions of dollars to their companies and their shareholders in the long run. So here is the bottom line on the Marcellus shale formation: it has gone from a complete has-been to a game changer in only five months. That is turning heads on Wall Street, especially with the price of natural gas headed to new highs. In fact, the Marcellus upgrade is probably the biggest energy story that you will never read about in your favorite newspaper. But that's the story in natural gas: we don't really notice it until the bill rolls in. Next week, I'll take a look at some of the people who have already hit it big in this newfound land frenzy and the companies that have been more than happy to pay them. After all, that's where the payoff for investors is to be found. Your natural-gas-loving analyst, Chief Investment Analyst The Wealth Advisory P.S. Speaking of the big news in the Bakken oil formation, it is definitely not too late to make an energy investment in this massive field. To find out the best way to play it, click here
<urn:uuid:252b7abb-381d-4632-8614-0a9d069d32c0>
CC-MAIN-2016-26
http://www.wealthdaily.com/articles/marcellus-formation-natural+gas/1284
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397567.28/warc/CC-MAIN-20160624154957-00044-ip-10-164-35-72.ec2.internal.warc.gz
en
0.963226
1,143
2.546875
3
If you have any curiosity, at all, about the 1906 Earthquake (especially a morbid one), the Mission District is probably the most interesting place to look. Here are the top 3 reasons history nerds should take a closer look in the Mission. Turns out that when you fill a marsh in with sand and debris, build lavish 3 & 4 story buildings on that sand and debris, then shake the ground for half a minute, those buildings pretty much sink right down into the ground. Sinking buildings were built over what was once a lake or marsh. Guests on the 4th floor of the Valencia St. Hotel (top) simply stepped out of the window onto the street. Those sleeping on floors 1-3 weren't so lucky. Most of the buildings destroyed by the earthquake were wiped out by fire. But this block of victorians on South Van Ness (below) survived 3 days of fires to become a tourist attraction. South Van Ness between 18th & 19th. 2. FIRE LINE - The blocks in red were leveled by the fire that spread from downtown. The fires burned out in the Mission, leaving a dramatic contrast between prosperity and homelessness (just like today!), thriving commerce and total annihilation (just like today!), Victorian architecture and Edwardian. Walk down 20th street from Dolores Park to Valencia paying attention to the architecture on the North side (post 1906) vs. the south side (pre 1906). Much of the commercial hub in the Mission District survived. There weren't many places left in the city where you could buy anything, so thousands flocked to the Mission for goods and services in the days, weeks, and months after the fires. 3. DOLORES PARK At the corner of 20th and Church remains one of the few fire hydrants in the city that was functioning after the city's water mains had burst. This hydrant is credited with helping stop the fire from pushing forward and is painted gold on April 18th each year. Dolores was also the temporary home for some of the quarter of a million refugees (more than half of the city's population). 
A handful of these Army built earthquake shacks remain in the city. Next week Mission Bicycle Company begins hosting 1906 Earthquake bike tours which include a theatrical simulation of the 46 seconds of the earthquake, 10 stops with before and after pictures, little known stories, a few surprises, lunch and a rental bike (more info).
<urn:uuid:92f0754e-d768-42cb-b6c8-71576f5fa593>
CC-MAIN-2016-26
http://www.missionmission.org/category/history/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783398869.97/warc/CC-MAIN-20160624154958-00089-ip-10-164-35-72.ec2.internal.warc.gz
en
0.963434
505
2.765625
3
Lesson 4: Danger Zone Trouble [Heb tsar—narrow, a tight place; root tsarar—to cramp, oppress, vex, trouble] vividly portrays the concept of being confined in a tight place, crowded by an opponent or adversary, afflicted, experiencing anguish, distress, sorrow, or tribulation. Troubles are often inflicted by external circumstances and enemies, producing profound physical, mental, and emotional anguish and grief, yet they are a strong motivation to draw near and cry out to God. God hears his children’s cries in times of trouble, will preserve them, is a refuge and source of strength, will rescue and deliver them in His perfect timing, and ultimately will cause all things to work together for good (Rom 8:28). Present trials are a prelude to future testimonies and triumphs! Questions for Group Discussion Reflection: What aspect or lesson from last week’s lesson or lecture most encouraged or challenged you? Why? This week we continue to focus on the Psalms for the danger zone. These Psalms of lament caused by external circumstances produce passionate petition for help from God. Trials provide an opportunity and motivation to express our emotions honestly to God and to grow in our freedom and intimacy with Him. 1. What have you learned about Lament Psalms from observing them last week and this week? 2. Read 1 Samuel 20, 21, 22:1. These verses describe the circumstances David faced after he had been anointed by Samuel as the future king and before he hid himself in the cave described in Psalm 142. A. How do you think David might have been feeling when he wrote this Psalm? Who was his enemy? B. How have you ever been confused about God’s timing in your own life? 3. According to 1 Samuel 22:2, who joined David in the cave? Were they there to comfort David or for themselves? 4. Read Psalm 142. A. List as many words as you can from the first four verses that indicate deep and intense emotions. B. Which word or words best reflects your feelings this week? C. 
How does God view our emotions? What additional insights do you gain from the cross references on tears and weeping in the Optional Studies for Personal Enrichment? 5. List on the following chart the various characteristics of God noted by David in this Psalm, and note why they might be important to David. Aspect of God’s Character Importance to David 6. Which characteristic of God is most encouraging to you this week? How could you apply it to your present circumstances? 7. Reread Psalm 142. A. List as many specific complaints of David as you find in this Psalm. B. List the specific requests that David makes of God. 8. From Psalm 142, list the verses where David directly addresses the LORD [YAHWEH]. 9. From Psalm 142, which verses indicate David’s dependence upon God for deliverance? 10. Three important characteristics of effective prayer are (1) intensity, (2) specificity, and (3) direct address toward God—based on a personal relationship with Him. Write out a prayer about a personal situation you are facing using these three elements. 11. What one insight or lesson do you want to remember from this week’s lesson? Note it below and on the journal page entitled “Songs for My Soul” at the back of the workbook. Choose one verse from this week’s lesson to memorize. Write it here and meditate on it. Can you recite from memory all four verses you have chosen so far in this study of the Psalms? If not, use this next week (a Fellowship Week with no lesson due) to review them so when trouble comes you have the resources to stand strong in faith. Optional Studies for Personal Enrichment Psalms: Songs for the Soul - Danger Zone In every life there is a time to laugh, a time to mourn, and a time to dance. Troubles, while bringing believers to the throne of God in prayer, are often a time of expressing emotional anguish with tears and weeping. Trace the trail of tears in the related scriptural cross references. 
As you record your insights, briefly note who is weeping and their circumstances. What is God’s response to the tears of His beloved children? Tears and Weeping 1 Sam 1:8, 10 2 Sam 12:21; 18:33 2 Ki 20:2–3, 5; Isa 38:5 Jn 11:33, 35 Jn 20:13, 15 Rev 7:17 ; 21:4 Today’s tears provide the heartfelt irrigation for tomorrow’s harvest of joy! Related Topics: Curriculum
<urn:uuid:dc3d2174-dda0-4afa-81ab-fff32e60b232>
CC-MAIN-2016-26
https://bible.org/seriespage/lesson-4-danger-zone
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397111.67/warc/CC-MAIN-20160624154957-00192-ip-10-164-35-72.ec2.internal.warc.gz
en
0.92262
1,018
3.234375
3
The mergesort algorithm is based on the classical divide-and-conquer paradigm. It operates as follows. DIVIDE: Partition the n-element sequence to be sorted into two subsequences of n/2 elements each. CONQUER: Sort the two subsequences recursively using mergesort. COMBINE: Merge the two sorted subsequences of size n/2 each to produce the sorted sequence consisting of n elements. Note that recursion "bottoms out" when the sequence to be sorted is of unit length. Since every sequence of length 1 is in sorted order, no further recursive call is necessary. The key operation of the mergesort algorithm is the merging of the two sorted sequences in the "combine" step. To perform the merging, we use an auxiliary procedure Merge(A,p,q,r), where A is an array and p, q and r are indices numbering elements of the array such that p <= q < r. The procedure assumes that the subarrays A[p..q] and A[q+1..r] are in sorted order. It merges them to form a single sorted subarray that replaces the current subarray A[p..r]. Thus finally, we obtain the sorted array A[1..n], which is the solution.
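The divide/conquer/combine steps above can be sketched in Python. This is a generic illustration (not the applet's own code) using zero-based, inclusive indices, with `merge(A, p, q, r)` assuming A[p..q] and A[q+1..r] are each already sorted:

```python
def merge(A, p, q, r):
    """Merge sorted subarrays A[p..q] and A[q+1..r] in place (inclusive indices)."""
    left, right = A[p:q + 1], A[q + 1:r + 1]
    i = j = 0
    for k in range(p, r + 1):
        # Take from whichever subarray has the smaller front element;
        # once one side is exhausted, copy the remainder of the other.
        if j >= len(right) or (i < len(left) and left[i] <= right[j]):
            A[k] = left[i]
            i += 1
        else:
            A[k] = right[j]
            j += 1

def mergesort(A, p, r):
    """Sort A[p..r] by divide and conquer; recursion bottoms out at length 1."""
    if p < r:
        q = (p + r) // 2          # DIVIDE: split into A[p..q] and A[q+1..r]
        mergesort(A, p, q)        # CONQUER: sort each half recursively
        mergesort(A, q + 1, r)
        merge(A, p, q, r)         # COMBINE: merge the two sorted halves

data = [5, 2, 4, 7, 1, 3, 2, 6]
mergesort(data, 0, len(data) - 1)
print(data)  # [1, 2, 2, 3, 4, 5, 6, 7]
```

Copying the two halves into `left` and `right` before writing back into A[p..r] is what lets the merge run in a single left-to-right pass.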
<urn:uuid:5315d5f5-025c-4d1e-b23f-ef3e8bedb661>
CC-MAIN-2016-26
http://www.cse.iitk.ac.in/users/dsrkg/cs210/applets/sortingII/mergeSort/merge.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783394987.40/warc/CC-MAIN-20160624154954-00108-ip-10-164-35-72.ec2.internal.warc.gz
en
0.799994
278
4.21875
4
Algebra word problem A farmer wants to enclose a rectangular field with a fence and divide it into two smaller rectangular fields by constructing another fence parallel to one side of the field. The farmer has 3000 yards of fencing. Find the dimensions of the field so that the total enclosed area is a maximum. (Hint: let h be the height and w be the width.) Then 3h+2w=3000. You want to maximize the area hw. If you solve for h in terms of w and then substitute into the expression hw, you get a quadratic function (you could just as well solve for w in terms of h). Find the maximum of the quadratic using one of three techniques: complete the square and read off the vertex; take the derivative, set it to zero, and check that the second derivative is negative so you have a maximum; or graph the function and locate the absolute maximum.
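As a quick check on the algebra above (not part of the original thread), here is a short Python sketch that substitutes h = (3000 - 2w)/3 into A = hw and locates the vertex of the resulting quadratic:

```python
# Constraint: 3h + 2w = 3000  =>  h = (3000 - 2w) / 3
# Area: A(w) = w * h = -(2/3) * w**2 + 1000 * w, a downward-opening parabola.

def area(w):
    return w * (3000 - 2 * w) / 3

# Vertex of A(w) = a*w**2 + b*w lies at w = -b / (2a).
a, b = -2 / 3, 1000
w_max = -b / (2 * a)
h_max = (3000 - 2 * w_max) / 3

print(w_max, h_max, area(w_max))  # 750.0 500.0 375000.0
```

So the field should be 750 yards wide by 500 yards high, enclosing a maximum area of 375,000 square yards; note 3(500) + 2(750) = 3000 yards of fencing, as required.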
<urn:uuid:d5279666-119c-4cfb-bd5a-a8d6250969aa>
CC-MAIN-2016-26
http://mathhelpforum.com/algebra/33461-algebra-word-problem-print.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783398075.47/warc/CC-MAIN-20160624154958-00140-ip-10-164-35-72.ec2.internal.warc.gz
en
0.90192
188
3.171875
3
The vast majority of us are constantly being exposed to toxic chemicals of one form or another – and not just from obvious culprits like car exhaust, cigarette smoke, and pick your favorite industrial pollutant. Household items and products, including some common foods and even tap water, harbor hazardous levels of toxins that have very real health effects over time. And experts are increasingly worried about one health issue in particular: endocrine disruption. Early studies have linked a whole host of common chemicals, including bisphenol-A (commonly called BPA), phthalates, and flame retardants, to possible endocrine system damage. What is particularly insidious about these toxins is that they mess with the body's natural hormone production, which can lead to a host of varied complications affecting everything from the thyroid to testosterone to sperm count. To combat the problem, the Environmental Working Group (EWG) just released its Dirty Dozen List of Endocrine Disruptors, outlining 12 of the worst offending chemicals. Odds are high that many of these toxins are lurking in your house and office now. Here's where you'll find them and how to limit your exposure. In an effort to make homes safer, chemical fire retardants have been added (often due to federal mandate) to scores of products, from foam couch cushions to desk chairs to carpet pads, and draperies to mattresses. There are many different types of retardants, but PBDEs and TDCPP are the most concerning. "PBDEs aren't used very often anymore, but they're definitely still around," says Dr. Gina Solomon, deputy secretary for science and health at the California Environmental Protection Agency. "PBDEs harm thyroid function and brain development, while TDCPP is a listed carcinogen. The other big problem with PBDEs is they can accumulate in the environment and concentrate in human fat, which lets them linger in the body long after exposure." 
Sharp says newer fire retardants coming to market may be just as hazardous, but fortunately laws are changing, and soon furniture manufacturers won't be required to use so many of them. The bad news is you really have no way of knowing what chemicals are used on home goods. So unless you ditch your La-Z-Boy for a metal chair, they are tough to avoid. Here's what you can do, though: Clean well and clean often. "People are mostly exposed because these chemicals accumulate in household dust," Solomon says. "So vacuum regularly with a good vacuum cleaner that has a HEPA filter." Also be sure to dust everything – including electronics, which gather gobs of dust – at least once a week. It's also possible to find companies that manufacture flame retardant-free furnishings. Click here for the Green Policy Institute's list, or check out eco-blogs and forums, which often have suggestions. Credit: Getty Images
<urn:uuid:909d60ad-c20a-4455-8422-253dedd86b7e>
CC-MAIN-2016-26
http://www.mensjournal.com/expert-advice/the-most-toxic-household-products-20140113/home-furnishings
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395679.18/warc/CC-MAIN-20160624154955-00080-ip-10-164-35-72.ec2.internal.warc.gz
en
0.967799
587
2.921875
3
Authors: Dmitri Rabounski One might think that the General Theory of Relativity is a fossilized science, all achievements of which were reached decades ago. In part this is right - the mathematical apparatus of Riemannian geometry, being the base of the theory, remains unchanged. At the same time the mathematical technique has many varieties: general covariant methods, the tetrad method, etc. By developing these techniques we can create new possibilities in theoretical physics, unknown before. Comments: recovered from sciprint.org [v1] 25 Feb 2007 Unique-IP document downloads: 55 times
<urn:uuid:f5253dfe-2925-4b86-bef2-a6242d26ebd3>
CC-MAIN-2016-26
http://vixra.org/abs/0702.0036
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783403823.74/warc/CC-MAIN-20160624155003-00139-ip-10-164-35-72.ec2.internal.warc.gz
en
0.897183
182
2.734375
3
The influence of edge effects on evapotranspiration of fragmented woodlands. Der Einfluss von Randeffekten auf die Verdunstung fragmentierter Waldbestände Herbst, M.; Roberts, J.M.; Gowing, D.D. 2007 The influence of edge effects on evapotranspiration of fragmented woodlands. Der Einfluss von Randeffekten auf die Verdunstung fragmentierter Waldbestände. Berichte des Meteorologischen Institutes der Universität Freiburg (16). 117-122. Before downloading, please read NORA policies. The water use of forests has been the subject of many studies in the past decades. They were mostly carried out in extensive areas of woodland and achieved consistent results. However, there is as yet a large uncertainty about the role of fragmented woodlands in the catchment water balance, since water losses from small patches of woodland have rarely been measured. In the framework of the "Lowland Catchment Research" (LOCAR) programme, a 7-month field measurement campaign has been carried out in southern England in order to measure the transpiration of a mixed deciduous forest at various distances from the forest edge by means of the sap flux technique. The annual transpiration per unit ground area near the forest edge equalled potential evaporation and was about 60% higher than in the forest interior and similar to the transpiration of hedgerows as determined in a corresponding study. Interception evaporation was not affected by the proximity to an edge. Based on these results it is shown that the edge effect dominates the water use of small forests (<10 ha) and becomes negligible only for woodlands larger than 100 ha. |Item Type:||Publication - Article| |Programmes:||CEH Programmes pre-2009 publications > Water| |CEH Sections:||Harding (to July 2011)| |Additional Information. Not used in RCUK Gateway to Research.:||In German| |NORA Subject Terms:||Ecology and Environment| |Date made live:||25 Jan 2008 11:18|
<urn:uuid:45d66505-6ab2-4f60-8272-76980db596bd>
CC-MAIN-2016-26
http://nora.nerc.ac.uk/2174/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396147.66/warc/CC-MAIN-20160624154956-00197-ip-10-164-35-72.ec2.internal.warc.gz
en
0.844538
452
2.671875
3
Savouring each mouthful can help you eat slower A talking, computerised weighing device that tracks how quickly food is gobbled off the plate could be a solution to childhood obesity, researchers say. The Mandometer keeps tabs during meal times and tells the user if they are wolfing down meals too fast - a habit experts have linked to weight gain. In a trial with 106 obese children the gadget showed promising results, the British Medical Journal reports online. After 12 months of use the children weighed less and ate smaller portions. Their speed of eating was reduced by 11% compared with a gain of 4% in a comparison group. Experts believe eating too fast can interfere with an inbuilt signalling system that tells the brain to stop eating when the stomach becomes full. But early in life, with instructions like "make sure you eat it all up", children are taught to override these signals. Scientists at the Karolinska Institute in Stockholm set out to design a device to pace eating, primarily to help patients with the eating disorder bulimia, who tend to eat quickly. The Mandometer plots a graph showing the rate at which food disappears from the plate, compared with an "ideal" graph programmed in by a food therapist. Take your time over mouthfuls A meal should last for at least 10 minutes Avoid distractions like the TV when eating And if the user is eating too quickly, the talking machine will tell them. Inspired by this work, researchers at Bristol Royal Hospital for Children and the University of Bristol, decided to try out the device on their young, obese patients. Lead researcher Professor Julian Hamilton-Shield said: "It really did seem to help them." He said the children learned how to eat more slowly and, as a result, felt full sooner and ate less. "Their portion sizes decreased by a seventh. Even though this may not sound a lot, it is enough to make a difference. "And the improvement seems to be durable because it continued six months after the trial finished." 
He said people should aim to take at least 10 minutes to eat meals, ideally sitting at a table rather than in front of the TV. "What tends to happen when we eat alone or while watching the TV is we eat more quickly. Then we miss the signals that tell us we are full up and to stop eating." Tam Fry of the National Obesity Forum said a Mandometer was useful but should be superfluous. "Parents should be able to teach their children to do this themselves. The tragedy is they do not. "We have far too many children eating far too much and piling on the pounds, causing future problems not only for themselves but also for the NHS."
<urn:uuid:618c95ef-08e8-4c85-9113-471a591622cd>
CC-MAIN-2016-26
http://news.bbc.co.uk/2/hi/health/8440193.stm
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391766.5/warc/CC-MAIN-20160624154951-00105-ip-10-164-35-72.ec2.internal.warc.gz
en
0.96714
560
3.203125
3
- The croupy cough is tight, low-pitched, and barky (like a barking seal). - The voice or cry is hoarse (laryngitis). - Stridor is a harsh, raspy sound heard with breathing in. Loud or continuous stridor means severe croup. All stridor needs to be treated with warm mist. - Cause: usually a parainfluenza virus Call 911 Now (your child may need an ambulance) If - Severe difficulty breathing (struggling for each breath, unable to speak or cry because of difficulty breathing, making grunting noises with each breath) - Child has passed out or has bluish lips - Croup started suddenly after taking a medicine or allergic food - Child is drooling, spitting or having great difficulty swallowing (EXCEPTION: drooling due to teething) Call Your Doctor Now (night or day) If - Your child looks or acts very sick - Child choked on a small object that could be caught in the throat - Difficulty breathing (age < 1 year old) not relieved by cleaning the nose - Difficulty breathing (age > 1 year old) present when not coughing - Ribs are pulling in with each breath (retractions) - Stridor (harsh noise with breathing in) is present or has occurred today - Child can't bend the neck forward - Fever > 104°F (40°C) at any age - Age less than 12 weeks with fever > 100.4°F (38°C) rectally - Severe chest pain Call Your Doctor Within 24 Hours (between 9 and 4) If - You think your child needs to be seen - Continuous (nonstop) cough - Age less than 1 month (EXCEPTION: coughs a few times) - Age 1 to 3 months with a cough for > 3 days - Earache is also present - Fever present > 3 days Call Your Doctor During Weekday Office Hours If - 
You have other questions or concerns - Croup is a recurrent problem - Barky cough present > 10 days Parent Care at Home If - Mild croup with no complications and you don't think your child needs to be seen First Aid Advice for Stridor (also use for any respiratory distress or severe coughing) - Inhale warm mist in a foggy bathroom with the shower running, from a wet washcloth held near the face, or from a humidifier (add warm water) for 20 minutes. - If that fails, inhale cool air from breathing near an open refrigerator or taking outside for a few minutes. Home Care Advice for Croupy Cough - Humidifier: If the air is dry, run a humidifier in the bedroom. (Reason: dry air makes croup worse.) - Coughing Spasms: For coughing spasms, give warm fluids to relax the airway (warm apple juice or caffeine-free tea) (Avoid if < 4 months old) - Cough Medicine for Mild Coughs: These are less helpful than warm mist. After age 1, use corn syrup 2 to 5 ml whenever needed as a homemade cough medicine. It can thin the secretions and loosen the cough. After age 4, use cough drops or hard candy. Remember, coughing up mucus is very important for protecting the lungs from pneumonia. - Cough Suppressant for Severe Coughs: Use dextromethorphan (DM). Do not use under 1 year old. See dosage chart. - Fever Medicine: For fever > 102°F (39°C), give acetaminophen or ibuprofen. - Observation During Sleep: Sleep in the same room with your child for a few nights. (Reason: croup can suddenly become severe at night.) - Avoid Tobacco Smoke: Active or passive smoking makes coughs much worse. - Contagiousness: Your child can return to day care or school after the fever is gone and your child feels well enough to participate in normal activities. For practical purposes, the spread of croup and colds cannot be prevented. - Expected Course: Croup usually lasts 5 to 6 days and becomes worse at night. 
Call Your Doctor If - Stridor (harsh raspy sound) occurs - The croup lasts > 10 days - Your child becomes worse or develops any of the "Call Your Doctor Now" symptoms
What does TRIAC stand for? What does TRIAC mean? - TRIAC, from Triode for Alternating Current, is a genericized tradename for an electronic component that can conduct current in either direction when it is triggered, and is formally called a bidirectional triode thyristor or bilateral triode thyristor. TRIACs belong to the thyristor family and are closely related to silicon-controlled rectifiers (SCRs). However, unlike SCRs, which are unidirectional devices, TRIACs are bidirectional, so current can flow through them in either direction. Another difference from SCRs is that TRIACs can be triggered by either a positive or a negative current applied to their gate electrodes, whereas SCRs can be triggered only by currents going into the gate. In order to create a triggering current, a positive or negative voltage has to be applied to the gate with respect to the MT1 terminal. Once triggered, the device continues to conduct until the current drops below a certain threshold, called the holding current. This bidirectionality makes TRIACs very convenient switches for AC circuits, and also allows them to control very large power flows with milliampere-scale gate currents. In addition, applying a trigger pulse at a controlled phase angle in an AC cycle allows one to control the percentage of current that flows through the TRIAC to the load, which is commonly used, for example, in controlling the speed of low-power induction motors, in dimming lamps and in controlling AC heating resistors.
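The phase-angle control mentioned at the end of the definition can be made concrete. For a purely resistive load on an ideal sine supply, delaying the TRIAC's trigger to a firing angle alpha within each half-cycle delivers a predictable fraction of full power. The sketch below is our own illustration, not from the source; the function name and the resistive-load assumption are ours:

```python
import math

def power_fraction(alpha_deg):
    """Fraction of full power a resistive AC load receives when the
    TRIAC fires at angle alpha (degrees) into each half-cycle.
    Derived by integrating sin^2 over [alpha, pi]."""
    a = math.radians(alpha_deg)
    return (math.pi - a + math.sin(2 * a) / 2) / math.pi

# Firing at the start of each half-cycle gives full power;
# firing at 90 degrees gives exactly half power.
print(round(power_fraction(0), 3))   # 1.0
print(round(power_fraction(90), 3))  # 0.5
```

Dimmer circuits exploit exactly this relationship: shifting the trigger pulse later into the half-cycle smoothly reduces the average power reaching the lamp or motor.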
Early Conveyor Belt It is uncertain whether this photo is of a McIntyre mine; however, it is from one of the Coal Company mines. It illustrates a conveyor belt system that was used in early mining operations. Photo: courtesy Archives Department, Rochester and Pittsburgh Coal Company Records, Indiana University of Pennsylvania.
Games To Play With Children Since you are probably familiar with the above games, we will list a couple of lesser-known games for you to try. Both games can be played with children of different ages, so they are perfect for a daycare environment or whenever a group of children are together. Preschoolers and elementary-aged children will both have fun playing these games! Preparation for this game depends on the age of the children playing. Younger children may need to be taught how to make complete flying movements. And you may also need to discuss with younger children which animals can and cannot fly. As the adult, it is best for you to be the first leader of the game. Afterwards, let the children have turns being the leader. The children should stand in a group facing the leader. Make sure each child has enough room to flap their arms in a flying motion. The leader faces the group and calls out "Ducks Fly!" "Owls Fly!" "Pigs Fly!" and so on. When an animal is named which does fly, the children should be flapping their arms; when an animal is named which doesn't fly, they should not be "flying". When a child makes a mistake, they get a point. There are various ways to handle the point system. Some teachers allow a child to get 1, 2, or 3 points and then they are out. Some play that once a child gets 1, 2, or 3 points they become the leader. Either way is fine. The problem with the first method is that it eliminates children from the group activity. The problem with the second method is that some children will make mistakes on purpose because they want to be the leader. This is a fun game. Children enjoy playing on their own. But when an adult is the leader it presents an opportunity to teach children about animals they may not know. The adult can call out uncommon animals and insects such as an Emu, Orangutan, Cicada, Slug, etc., which are sure to elicit mistakes from the group.
Then the leader can teach the children about the animal, such as "yes, an Emu is a bird, but it is a bird which doesn't fly!" Who Am I? One child is chosen from the group to be the Guesser. The rest of the children make a circle around the Guesser. The Guesser is blindfolded and spun around clockwise by the adult a few times. As the Guesser is being spun, the circle of children are told to move around the Guesser in the opposite direction. The Guesser then points towards the circle and names an animal. The child being pointed to must then make a sound like the animal. The Guesser must guess the name of the player. If the Guesser is correct, the Guesser gets another turn being the Guesser. If the Guesser does not name the correct child, then that child becomes the Guesser! As a variation of this game, some schools play that the selected child does not make an animal sound but disguises his/her voice (one good way for children to disguise their voice is by holding their nose as they speak) and then asks, "Who Am I?" and again, the Guesser must guess which child was selected.
Evangelical Church of the Augsburg and Helvetic Confessions in Austria
Based in: Austria
WCC member since: 1948
(Evangelische Kirche Augsburgischen und Helvetischen Bekenntnisses / A.u.H.B. in Österreich) For purposes of legal recognition by the government, the Evangelical Church of the Helvetic Confession (Reformed) and the Evangelical Church of the Augsburg Confession in Austria (Lutheran) form together an ecclesiastical entity called the Evangelical Church of the Augsburg and Helvetic Confessions, a designation which provides for cooperation in certain areas but leaves the two groups fully independent in matters of confessional identity and administration. The dual legal entity both churches agreed to, with the obligation to follow in the way of the reformers, is the basis of Austrian Lutheran and Reformed participation in the WCC, an arrangement akin to that of Germany's Lutherans and Reformed, participating under the Evangelical Church in Germany (EKD). Austrian Lutherans are members of LWF through their own ecclesiastical body, the ECAC, just as the Reformed are members of WCRC through their own Reformed Church of Austria. The Reformation reached the area of what is today called Austria very early. By the end of the 16th century two thirds of the population were touched by it. At that time a systematic counter-reformation was started in the Habsburg empire. Evangelical preachers had to leave the country, churches were destroyed, books and writings burned. Citizens as well as farmers had to choose between emigration or return to the Catholic Church. For more than three generations they had to decide for their faith or their home country. Thousands chose to emigrate, many turned back to Catholicism and some stayed Evangelicals in their heart.
A "secret Protestantism" was able to survive for decades, mainly by withdrawing to the hardly accessible valleys in the mountains of Carinthia and Upper Austria. During this period of persecution evangelical worship services were allowed only in Vienna. In 1781 the emperor issued a Deed of Tolerance, and in 1861 the Protestants were granted complete freedom of confession and public practice of religion. During the "Free from Rome" movement in the late 19th and early 20th century many Catholics joined the Protestant churches. The ideological trends of the 1930s brought about a severe time of testing for Protestants. A few men and women raised their voices and offered resistance, but the church as a whole did not follow them. The influx of German refugees from central and eastern Europe after World War II, and of Hungarians after the uprising of 1956, once again increased the membership of the Protestant churches. Lutheran and Reformed Christians in Austria have been working together in areas of spiritual and administrative concern for many years. One of the reasons is that in both churches membership is mixed, i.e. the Lutheran Church has Reformed members, and the Reformed Church has Lutherans. The law for Protestants passed in 1961 by the Austrian parliament provided for the legal autonomy of the churches and public support for the Protestant faculty of theology, for religious instruction in schools, and for military chaplaincies and church-related welfare services. Finding itself in the very centre of Europe, the church makes great efforts to promote dialogue with various Christian communities in neighbouring nations, in particular in the Czech Republic, Slovakia, Hungary, Slovenia, Croatia, Serbia and Montenegro, Italy, Germany and Switzerland. Through its theological faculty in Vienna it contributes to the ongoing European theological debate.
Ukraine's Ticking 'Time Bomb': Old Pesticides A haz-mat team containing a chemical stockpile. Photo via Obsolete Pesticides. When you think of dangerous stockpiles in the former Soviet Union, nuclear and chemical weapons are probably what come most readily to mind. But a single stash of old pesticides in Ukraine poses a major threat to some 7 million people -- and that's just the tip of the icky iceberg.The environmental blog Twilight Earth has the scoop on the latest meeting of the International HCH and Pesticides Association (IHPA), which called on the European Union to immediately disarm the "biggest chemical time bomb of Europe." 10,000 Tons of Carcinogenic Fungicide According to the IHPA, the former Kalush factory in western Ukraine contains at least 10,000 tons of hexachlorobenzene (HCB), a fungicide previously used on wheat that has been banned globally under the Stockholm Convention on persistent organic pollutants. The toxic chemical is a confirmed animal carcinogen and a probable contributor to cancer in humans as well; it was outlawed in the United States in 1966. The Kalush chemicals pose a particular danger to human health and the environment because the factory is located along the Dniester river, Ukraine's second largest. "A single flood and the high concentrations of poison would pollute the natural habitat of some 7 million people in the west of Ukraine and Moldavia," the IHPA says. Former USSR a Chemical Hot Spot Between 178,000 and 289,000 tons of obsolete pesticides are estimated to be stockpiled throughout the European Union, Southeast Europe, and the former Soviet Union, with Ukraine having one of the highest totals for an individual country: 30,000 tons in 4,500 storage locations. "The substances have been prohibited since 2001. As a rule the packaging only lasts five to ten years," the IHPA says. "If nothing happens in that time, then the substances could simply end up in the soil or in the water." 
Pesticides Illegally Exported by Organized Crime Rural populations are particularly at risk from such contamination of soil, groundwater, surface water, and air, which can be caused not only by the chemicals themselves, but also by the old sprayers, empty packages and containers, and building materials they have come into contact with, as well as the earth around the storage sites. The collection points in the former Soviet Union, known as "Polygons" or burial sites, are especially problematic due to the dissolution of the USSR, the IHPA explains: The landfills were commonly fenced and guarded, and all amounts have been accurately registered. However, with the collapse of the Soviet Union's central control system, Polygons were abandoned, fences were torn down, and pesticides were illegally excavated, repackaged, and sold onto local markets or exported by organized crime. Polygons -- by the sheer nature of the concept -- comprise a limited number of very large sites, often in combination with other hazardous waste. Despite their concentration in certain areas, pesticide stockpiles have a worldwide impact. As highly stable chemicals, they persist in the environment -- and in people and animals' bodies. Cleanup to Cost 1 Billion Euro The IHPA is working with national governments and other international organizations such as Green Cross to stabilize or safely destroy all current stocks of superfluous pesticides, an effort it estimates will cost 1 billion euros. So far, it has executed three pilot projects in Moldova, Georgia, and Kyrgyzstan to raise awareness and clean up and safely contain pesticide inventories. 
There are two essential components to all telephone calls. The first, and most obvious, is the actual content—our voices, faxes, modem data, etc. The second is the information that instructs telephone exchanges to establish connections and route the “content” to an appropriate destination. Telephony signaling is concerned with the creation of standards for the latter to achieve the former. These standards are known as protocols. SS7 or Signaling System Number 7 is simply another set of protocols that describe a means of communication between telephone switches in public telephone networks. They have been created and controlled by various bodies around the world, which leads to some specific local variations, but the principal organization with responsibility for their administration is the International Telecommunication Union, or ITU-T. Signalling System Number 7 (SS#7 or C7) is the protocol used by the telephone companies for interoffice signalling. In the past, in-band signalling techniques were used on interoffice trunks. This method of signalling used the same physical path for both the call-control signalling and the actual connected call. This method of signalling is inefficient and is rapidly being replaced by out-of-band or common-channel signalling techniques. To understand SS7 we must first understand something of the basic inefficiency of previous signaling methods utilized in the Public Switched Telephone Network (PSTN). Until relatively recently, all telephone connections were managed by a variety of techniques centered on “in band” signaling. A network utilizing common-channel signalling is actually two networks in one: 1. First there is the circuit-switched "user" network which actually carries the user voice and data traffic. It provides a physical path between the source and destination. 2. The second is the signalling network which carries the call control traffic. It is a packet-switched network using a common channel switching protocol.
The original common channel interoffice signalling protocols were based on Signalling System Number 6 (SS#6). Today SS#7 is being used in new installations worldwide. SS#7 is the defined interoffice signalling protocol for ISDN. It is also in common use today outside of the ISDN environment. The primary function of SS#7 is to provide call control, remote network management, and maintenance capabilities for the inter-office telephone network. SS#7 performs these functions by exchanging control messages between SS#7 telephone exchanges (signalling points or SPs) and SS#7 signalling transfer points (STPs). The switching offices (SPs) handle the SS#7 control network as well as the user circuit-switched network. Basically, the SS#7 control network tells the switching office which paths to establish over the circuit-switched network. The STPs route SS#7 control packets across the signalling network. A switching office may or may not be an STP. SS7 Protocol Layers: The SS7 network is an interconnected set of network elements that is used to exchange messages in support of telecommunications functions. The SS7 protocol is designed to both facilitate these functions and to maintain the network over which they are provided. Like most modern protocols, the SS7 protocol is layered. Physical Layer (MTP-1) This defines the physical and electrical characteristics of the signaling links of the SS7 network. Signaling links utilize DS–0 channels and carry raw signaling data at a rate of 56 kbps or 64 kbps (56 kbps is the more common implementation). Message Transfer Part—Level 2 (MTP-2) The level 2 portion of the message transfer part (MTP Level 2) provides link-layer functionality. It ensures that the two end points of a signaling link can reliably exchange signaling messages. It incorporates such capabilities as error checking, flow control, and sequence checking.
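Two of the MTP Level 2 duties listed above, error checking and sequence checking, can be illustrated with a toy link receiver. This is a hypothetical sketch, not the actual MTP-2 signal-unit format: the frame layout and the checksum function are simplified stand-ins, though the 7-bit forward sequence number (FSN) mirrors the real protocol.

```python
def checksum(payload: bytes) -> int:
    """Toy 16-bit checksum standing in for MTP-2's real CRC."""
    return sum(payload) & 0xFFFF

def receive(frames, expected_fsn=0):
    """Accept frames in order, discarding corrupt or out-of-sequence ones.
    Each frame is a (fsn, payload, check) tuple, loosely modeled on the
    forward sequence number carried by MTP-2 signal units."""
    accepted = []
    for fsn, payload, check in frames:
        if check != checksum(payload):   # error checking: drop corrupt frames
            continue
        if fsn != expected_fsn:          # sequence checking: drop out-of-order frames
            continue
        accepted.append(payload)
        expected_fsn = (expected_fsn + 1) % 128  # FSN wraps at 7 bits
    return accepted

good = [(0, b"IAM", checksum(b"IAM")),
        (1, b"ACM", 0),                  # corrupted check value, dropped
        (1, b"ACM", checksum(b"ACM"))]   # retransmission arrives intact
print(receive(good))  # [b'IAM', b'ACM']
```

A real link also runs flow control and retransmission timers; here the corrupted frame is simply dropped and its retransmitted copy accepted in its place.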
Message Transfer Part—Level 3 (MTP-3) The level 3 portion of the message transfer part (MTP Level 3) extends the functionality provided by MTP level 2 to provide network layer functionality. It ensures that messages can be delivered between signaling points across the SS7 network regardless of whether they are directly connected. It includes such capabilities as node addressing, routing, alternate routing, and congestion control. Signaling Connection Control Part (SCCP) The signaling connection control part (SCCP) provides two major functions that are lacking in the MTP. The first of these is the capability to address applications within a signaling point. The MTP can only receive and deliver messages from a node as a whole; it does not deal with software applications within a node. While MTP network-management messages and basic call-setup messages are addressed to a node as a whole, other messages are used by separate applications (referred to as subsystems) within a node. Examples of subsystems are 800 call processing, calling-card processing, advanced intelligent network (AIN), and custom local-area signaling services (CLASS) services (e.g., repeat dialing and call return). The SCCP allows these subsystems to be addressed explicitly. ISDN User Part (ISUP) The ISDN user part (ISUP) defines the messages and protocol used in the establishment and tear down of voice and data calls over the public switched network (PSN), and to manage the trunk network on which they rely. Despite its name, ISUP is used for both ISDN and non–ISDN calls. In the North American version of SS7, ISUP messages rely exclusively on MTP to transport messages between concerned nodes. Transaction Capabilities Application Part (TCAP) TCAP defines the messages and protocol used to communicate between applications (deployed as subsystems) in nodes. It is used for database services such as calling card, 800, and AIN as well as switch-to-switch services including repeat dialing and call return.
Because TCAP messages must be delivered to individual applications within the nodes they address, they use the SCCP for transport. Operations, Maintenance, and Administration Part (OMAP) OMAP defines messages and protocol designed to assist administrators of the SS7 network. To date, the most fully developed and deployed of these capabilities are procedures for validating network routing tables and for diagnosing link troubles. OMAP includes messages that use both the MTP and SCCP for routing.
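The transport relationships spelled out across these subsections can be collapsed into a small table. The sketch below is our own summary structure, not part of any SS7 specification; OMAP is omitted because, as noted above, it uses both the MTP and SCCP for routing:

```python
# Which lower layer each SS7 part relies on for transport,
# per the descriptions in the text above.
RELIES_ON = {
    "MTP-2": "MTP-1",
    "MTP-3": "MTP-2",
    "SCCP": "MTP-3",
    "ISUP": "MTP-3",   # ISUP messages rely exclusively on MTP
    "TCAP": "SCCP",    # TCAP uses SCCP to address subsystems within a node
}

def transport_chain(part):
    """Walk a part down the stack to the physical layer (MTP-1)."""
    chain = [part]
    while chain[-1] in RELIES_ON:
        chain.append(RELIES_ON[chain[-1]])
    return chain

print(transport_chain("TCAP"))
# ['TCAP', 'SCCP', 'MTP-3', 'MTP-2', 'MTP-1']
```

The chain makes the text's point visible: a TCAP query to an 800-number database traverses two more layers than an ISUP call-setup message, which rides directly on MTP.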
For generations societies predominantly on the Indian subcontinent have employed sharply demarcated caste systems, a measure of social standing generally based on one's genealogy, which in turn traditionally determined the rest of the course of one's life. While such social stratification is largely now seen as discriminatory and divisive (and thus almost obsolete), it looks like the modern digital age is not without its own caste system, but this time based on how we interact with technology. In his keynote address at last week's Mobile World Congress, Google executive chairman Eric Schmidt spoke of connectivity and the spread of the digital revolution across the globe. He noted that while an overwhelming majority of the world's population has a mobile device, only about 2 billion people are online, meaning the age of connectivity has yet to reach 5 billion others. But as smartphone prices drop and wireless technology becomes ubiquitous around the world, we'll soon start to see a different sort of technological and cultural divide begin to grow, one based not on social standing or on the difference between connected and unconnected, but defined by our level of connectivity. While certainly not as controversial as social stratification, the caste system that will emerge in the digital age, according to Schmidt, will nevertheless have its own sharply demarcated categories, clearly separating the "haves" from the "have-nots." Schmidt noted that as this divide emerges we'll see at the top of the pile the "ultra-connected," the "lucky few" of the digital age who find themselves on the bleeding edge of wireless advancement, enjoying a veritable digital paradise, "where bandwidth is plentiful, devices are affordable, everything is on the cloud and technology becomes as invisible as electricity is now." As Schmidt explained, "It [the Internet] will just be there.
The Web will be everything and at the same time nothing." In the middle will sit the so-called "connected contributors," a veritable digital middle class. Schmidt described this particular group as those who can use technology to change their lives, made up of developers and "educated consumers, supporting the creations of the 10 percent [the ultra-connected]." Finally there are the other 5 billion, the "aspiring majority," who will find the web more accessible and usable in years to come as wireless networks spread and technology becomes more affordable and accessible. While this vision of the digital class system has the same sort of sharp demarcation we've seen in past social caste structures, Schmidt was quick to point out one vital difference: in this class system "technology is a leveller — the weak will be strong, and those with nothing will have something." His point: regardless of one's connectivity, the benefit of the digital age will be that all (or most) will be connected, and all will have access to the same opportunities the Internet provides. "Technology is power by its very nature," Schmidt said, waxing hopeful, "and by ensuring access we can create a global community of equals." In the end, however, I remain sceptical of Schmidt's altruistic utopian vision, seeing instead a converse dystopian future where one's place in the connected class system will continue to be exploited for financial gain and technological superiority will continue to be used to keep people down, perhaps even by companies who proclaim "Don't be Evil" as their corporate motto.
Definitions for coll This page provides all possible meanings and translations of the word coll: to hug or embrace. A medieval English short form of the male given name Nicholas; very rare today. Origin: From French coler, acoler ‘accoll, throw arms round neck of’, ultimately Latin ad + collum ‘neck’. Origin: [OF. coler, fr. L. collum neck.] Coll is a small island, west of Mull in the Inner Hebrides of Scotland. Coll is known for its sandy beaches, which rise to form large sand dunes, for its corncrakes, and for Breachacha Castle. Chambers 20th Century Dictionary: kol, v.t. (Spens.) to embrace or fondle by taking round the neck.—n. Coll′ing, embracing. [Fr. col—L. collum, the neck.] The numerical value of coll in Chaldean Numerology is: 7. The numerical value of coll in Pythagorean Numerology is: 6.
Thrantas are a large, muscular reptavian species native to Alderaan. The most frequently witnessed is a sub-species referred to as the common thranta. In addition to their muscular frame, the common thranta have a wingspan of approximately six meters. This allows them to climb to great heights and, using their upper jaw, catch sky plankton, which is the main staple of their diet. After being trained by a handler for several weeks, the creatures can be purchased as a domesticated ride. It is common for a small saddle and compartment to be sold with them and mounted on their backs. Purchasers are typically local transport and gaming businesses that are looking to get an edge in difficult areas. As the interest in hunting them and using them as domesticated transport spread beyond Alderaan, there was an attempt to establish common thranta populations on more heavily populated worlds, such as Coruscant and Cathar. Due to the industrialized nature of these planets and those like them, the populations suffocated from the thick pollution and quickly died out. However, zoologists have been successful in placing populations on other moderately or sparsely industrialized planets, such as Kiffex and Togoria.
The fossilised remains of a crocodile that ruled the oceans 140 million years ago have been discovered in Patagonia. The giant crocodile was a fearsome predator (Image: National Geographic) Scientists have nicknamed the creature Godzilla, because of its dinosaur-like snout and jagged teeth. The US-Argentine team of researchers believes the animal was a ferocious predator, feeding on other marine reptiles and large sea creatures. The species is formally known as Dakosaurus andiniensis and has been unveiled in the journal Science. Unlike modern crocodiles, it lived entirely in the water, and had fins instead of legs. It measured 4m (13ft) from nose to tail and its jaws alone were a third of a metre (1ft) long. Crocodiles became widespread during the Cretaceous Period (146 to 65 million years ago). Other marine crocodiles alive then had long, slim snouts and needle-like teeth, which they used to catch small fish and molluscs. But this creature had a dinosaur-like snout and large, serrated teeth. "These sorts of features are also present in carnivorous dinosaurs like Tyrannosaurus rex," said co-researcher Diego Pol, of Ohio State University in Columbus, US. "It shows a really unexpected morphology that nobody thought could be present in a marine crocodile." Palaeontologist Zulma Gasparini, of the National University of La Plata, Argentina, first came across a "Godzilla" specimen in 1996 in the Neuquen Basin, once a deep tropical bay of the Pacific Ocean. But it was little more than a fragment and provided few clues to the creature's nature and habits. Prof Zulma Gasparini and the skull (Image: Marta Fernandez) However, two further specimens have recently been discovered, including a complete fossilised skull. Computer analysis of the bones shows D. andiniensis belongs on the family tree of crocodiles. Scientists believe it evolved a different feeding strategy from its contemporaries. 
The shape and size of its jaws and teeth suggest it hunted large marine vertebrates such as the giant marine reptile, Ichthyosaurus, rather than small fish.
Take a piece of dark construction paper, and draw a large X shape on it with a pencil. Then draw a straight line through the middle of the X, so that you end up with a six-sided snowflake outline. Give the paper to your child along with some glue and some small white poms. Have your child glue white poms all along the six sides to create a large snowflake. Variation 1): Instead of poms, have your child glue on white pasta pieces. Use uncooked pasta and place it in a plastic zipper bag, along with some white paint. Remove pieces from the bag and let them dry overnight. Variation 2): Give your child some white chalk and have her draw her own snowflake on top of the six pencil lines. PAPER SNOWFLAKES Young children love cutting paper snowflakes. Give your child a paper circle. Next, show him how to fold the circle in half. Then show him how to snip triangles out along the folded edges. Snowflakes can be made from a variety of papers. Doilies make the prettiest ones. Small coffee filters are the easiest for 3 and 4 year olds to make. Other paper circles can be made from tissue paper, waxed paper, typing paper and construction paper. You can also help your child make snowflakes to hang in your window out of Q-tips. For each snowflake your child will need three Q-tips cut in half, some glue and a small square of aluminum foil. First, have your child squeeze a small amount of glue in the middle of the piece of aluminum foil. Next, show him how to arrange six Q-tip halves coming out of the glue to resemble a snowflake. Let the glue dry, then carefully lift the dried glue snowflake off the foil. Tie a piece of thread or yarn around one of the arms of the snowflake and hang it in a window or other viewing spot. Cut out a picture of a house from a magazine. Give the picture, some glue, and a piece of light or dark blue paper to your child. Have him glue the house picture onto the blue paper. Next, set out some white paint on a small saucer.
Give your child a pencil with a good eraser and show him how to dip the eraser into the paint and then press it onto his paper to make snowflake dots. Have your child make snowflakes all over his paper for a Snowy Skies picture. Variation: Cut out different pictures from magazines for your child to glue onto his paper. (C) 2001 - 2016 Jean Warren www.preschoolexpress.com
<urn:uuid:8fbd666c-3899-48fd-b0ce-5f1516a39eba>
CC-MAIN-2016-26
http://www.preschoolexpress.com/art-station08/snowflake-art-jan08.shtml
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783398075.47/warc/CC-MAIN-20160624154958-00186-ip-10-164-35-72.ec2.internal.warc.gz
en
0.906665
661
3.15625
3
ST. LOUIS—The first humans to spread across North America may have been seal hunters from France and Spain. This runs counter to the long-held belief that the first human entry into the Americas was a crossing of a land-ice bridge that spanned the Bering Strait about 13,500 years ago. The new thinking was outlined here Sunday at the annual meeting of the American Association for the Advancement of Science.

The tools don't match

Recent studies have suggested that the glaciers that helped form the bridge connecting Siberia and Alaska began receding around 17,000 to 13,000 years ago, leaving very little chance that people walked from one continent to the other. Also, when archaeologist Dennis Stanford of the Smithsonian Institution places American spearheads, called Clovis points, side-by-side with Siberian points, he sees a divergence of many characteristics. Instead, Stanford said today, Clovis points match up much more closely with Solutrean-style tools, which researchers date to about 19,000 years ago. This suggests that the American people making Clovis points made Solutrean points before that. There's just one problem with this hypothesis—Solutrean toolmakers lived in France and Spain. Scientists know of no land-ice bridge that spanned that entire gap.

The lost hunting party

Stanford has an idea for how humans crossed the Atlantic, though—boats. Art from that era indicates that Solutrean populations in northern Spain were hunting marine animals, such as seals, walrus, and tuna. They may have even made their way onto the floating ice chunks that unite immense harp seal populations in Canada and Europe each year. Four million seals, Stanford said, would look like a pretty good meal to hungry European hunters, who might have ventured onto the ice floes much the same way that the Inuit in Alaska and Greenland do today. Inuit use large, open hunting boats constructed from animal skins for longer trips or big hunts.
These boats, called umiaq, can hold a dozen adults, as well as several children, dead seals or walruses, and even dog-sled teams. Inuit have been building these boats for thousands of years, and Stanford believes that Solutrean people may have used a similar design. It's possible that some groups of these hunters ventured out as far as Iceland, where they may have gotten caught up in the prevailing currents and been carried to North America. "You get three boats loaded up like this and you would have a viable population," Stanford said. "You could actually get a whole bunch of people washing up on Nova Scotia." Some scientists believe that the Solutrean peoples were responsible for much of the cave art in Europe. Opponents of Stanford's work ask why, then, these people would stop producing art once they made it to North America. "I don't know," Stanford said. "But you're looking at a long distance inland, 100 miles or so, before they would get to caves to do art in."
<urn:uuid:588b1258-12f9-4cb5-a3f3-32e810e68566>
CC-MAIN-2016-26
http://www.livescience.com/7043-americans-european.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783400031.51/warc/CC-MAIN-20160624155000-00033-ip-10-164-35-72.ec2.internal.warc.gz
en
0.963762
709
3.828125
4
What is the first-trimester screening for birth defects?

Near the end of the first 3 months of pregnancy (first trimester), a woman can have two types of tests to show the chance that her baby has a birth defect. When the results are combined, these tests are known as the first-trimester screening. They also may be called the combined first-trimester screening or the combined screening. These screening tests help your doctor find out the chance that your baby has certain birth defects, such as Down syndrome or trisomy 18. These tests can't show for sure that your baby has a birth defect. You would need a diagnostic test, such as chorionic villus sampling, to find out if there is a problem. Did You Know? Under the Affordable Care Act, many health insurance plans will cover prenatal services, including screening tests and breastfeeding support, at no cost to you. The first-trimester screening combines the results of two tests.

Nuchal translucency test. This test uses ultrasound to measure the thickness of the area at the back of the baby's neck. An increase in the thickness can be an early sign of Down syndrome. The test is not available everywhere, because a doctor must have special training to do it.

First-trimester blood tests. These tests measure the amounts of two substances in your blood: beta human chorionic gonadotropin (beta-hCG) and pregnancy-associated plasma protein A (PAPP-A). Beta-hCG is a hormone made by the placenta. High levels may be related to certain birth defects. PAPP-A is a protein in the blood. Low levels may be related to certain birth defects.

First-trimester screening also may be done as part of an integrated screening test. This combines the results of the first-trimester tests with those of second-trimester screening (a blood test called the triple or quad screening). You would get the results after the second-trimester test is done.

How are the tests done?
For the nuchal translucency test, your doctor or an ultrasound technologist spreads a gel on your belly. Then he or she gently moves a handheld device called a transducer over your belly. Images of the baby are displayed on a monitor. The doctor can look for and measure the thickness at the back of the baby's neck.
<urn:uuid:e85b3842-39d2-47d0-9124-38198336d2c2>
CC-MAIN-2016-26
http://www.webmd.com/baby/tc/first-trimester-screening-for-birth-defects-topic-overview
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783394605.61/warc/CC-MAIN-20160624154954-00120-ip-10-164-35-72.ec2.internal.warc.gz
en
0.945624
499
3.109375
3
By comparison with the average 15-year weight gain in participants with infrequent (less than once a week) fast-food restaurant use at baseline and follow-up, those with frequent (more than twice a week) visits to fast-food restaurants at baseline and follow-up gained an extra 4·5 kg of bodyweight and had a two-fold greater increase in insulin resistance. Fast-food consumption has strong positive associations with weight gain and insulin resistance, suggesting that fast food increases the risk of obesity and type 2 diabetes.

A 10 lb difference in weight gain over 15 years may not seem like much, but believe me, for this type of study, that's a massive association. It's a rate of weight gain that would make a lean person overweight in about 30 years. Fast food, like all industrial convenience food, is professionally designed to maximize reward value and is therefore exceptionally fattening and unhealthy in my opinion. Most studies of this type measure specific dietary components, like fat, carbohydrate, fiber, meat, dairy, and vegetable intake. What those studies miss-- which I think is a critical factor-- is the form in which those nutrients/foods are consumed. This study addressed that by looking directly at the consumption of industrially processed food, and it found a striking outcome. Lately, I've been collecting data on how the US diet has changed, qualitatively, over the last 200 years or so. I have some graphs that are very telling. I'll be gradually releasing them on this blog and in my upcoming talks. A gentleman named Jeremy Landen, who I'll introduce in more detail later, has been collaborating with me on this. Here's one of my favorite graphs, as a sneak preview. It represents spending on food consumed at home (green), food consumed away from home (blue and red), and fast food (red), as a percentage of total food expenditures:

- 93 percent of food was consumed at home in 1889, and most of that was homemade from scratch.
- In 2009, barely half (51%) of food was consumed at home, the rest was consumed in either full-service or fast food restaurants. Probably a high proportion of what was consumed at home was actually processed food. - Fast food was not a significant expenditure before 1960, after which it rapidly gained in popularity. Today, fast food accounts for 18 percent of total food expenditures.
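The post's back-of-the-envelope claim (an extra 4.5 kg every 15 years would make a lean person overweight in about 30 years) can be checked directly. A minimal sketch, assuming a hypothetical lean adult of height 1.75 m starting at BMI 22, since the post specifies neither:

```python
# Back-of-envelope check of the "overweight in about 30 years" claim.
# Height and starting BMI are assumed for illustration; the study only
# reports the extra 4.5 kg gained over 15 years.

def years_to_overweight(height_m: float, start_bmi: float,
                        extra_kg: float = 4.5, per_years: float = 15.0) -> float:
    """Years for the extra weight-gain rate to push BMI from start_bmi to 25."""
    rate_kg_per_year = extra_kg / per_years            # 0.3 kg per year
    kg_needed = (25.0 - start_bmi) * height_m ** 2     # weight to reach BMI 25
    return kg_needed / rate_kg_per_year

print(round(years_to_overweight(height_m=1.75, start_bmi=22.0), 1))
```

With those assumptions the answer comes out just over 30 years, matching the figure in the post.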
<urn:uuid:a1b7a01e-bc4f-49ea-8e4c-b4e4cc3e6e7a>
CC-MAIN-2016-26
http://wholehealthsource.blogspot.com/2011/05/fast-food-weight-gain-and-insulin.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783392099.27/warc/CC-MAIN-20160624154952-00138-ip-10-164-35-72.ec2.internal.warc.gz
en
0.970742
489
2.546875
3
There’s a lot of debate about how much to use an English Language Learner’s native language in studying English. Certainly, a straight translation methodology all of the time is not the way to go. However, I’ve found that, particularly with newcomers, providing them with access to an Internet resource that provides some native-language support can be a real confidence-booster. Pretty quickly, though, they often move away from those sites of their own accord. There are quite a few newer “learn-a-language” sites that provide multilingual support from a social network. There are others that offer translation help for a smaller number of languages. My intent behind creating this latest “The Best…” list was to identify sites that provide teacher-created content; do not require any registration; are free; and, except in one instance (where I identify what my students and I think is the best bilingual English/Spanish site), provide resources in many languages, including ones that are not widely-used. You can also find links to the sites on this list, as well as to many others, on the Bilingual Exercises page on my website. Unlike some of my other lists, I am not identifying them in order of preference. I think they’re all pretty equal. Here are my choices for The Best Multilingual & Bilingual Sites For Learning English: Bilingual Quizzes From Activities For ESL Students is a project of the Internet TESL Journal. You can find a ton of quizzes in many languages here. Goethe Tests covers vocabulary and language tests in twenty-five different languages. For an English/Spanish site, there’s no question that Pumarosa, created by teacher Paul Rogers, is by far the best resource for Spanish speakers. The Cultural Orientation Resource Center has put their extraordinary collection of refugee phrasebooks online and free for download.
Here’s how they describe this incredibly useful resource: These phrasebooks are designed to supply refugees with the appropriate English phrases and supplementary vocabulary for use in the daily activities of American life (rather than simply word-to-word translations, as in a dictionary). Phrases contained in the books have been selected for their directness, brevity and relevance to the needs of newly arrived residents of the United States. Among the nineteen units included are sections on “Giving Information About Yourself,” “Recognizing Signs,” “Dealing With Money,” “Health,” “Food,” “Clothing,” “Housing,” and “Jobs.” Each phrasebook is approximately 140 pages and can be downloaded for free. They are available in these languages: Bosnian/Croatian/Serbian, Cantonese, Czech, Farsi, Haitian Creole, Hmong, Hungarian, Khmer, Lao, Polish, Russian, Somali, Spanish, and Vietnamese. GCF Learn Free’s reading site has been on several “The Best..” lists for its simple reading instruction, which is excellent for English Language Learners and new readers. They’ve kept that site, and have also added several multilingual features to specifically help ELL’s. You can visit their Learn English site here. They plan on adding many new activities there in the coming months. Duolingo looks like it’s a pretty decent language-learning site. Lingo Hut seems like a pretty impressive site for beginning learners of many different languages, including English. Using a drop-down menu, you can easily select your native language and the language you want to learn, and then progress through a well-designed series of exercises including reading, listening and speaking. Vocapic is a simple and free bilingual site that lets you learn either English, Spanish or French vocabulary. First click on the top right to indicate the language you speak now, then move down to indicate the language you want to learn. 
You can then choose word categories where you are provided a word in audio and text and then asked to choose the correct response among three images. It works very well if you are a Spanish speaker and want to learn English. There’s a section on the left of the home screen which indicates that if you click on it you will learn simple words. However, that doesn’t appear to work for learning English words. As always, please feel free to provide feedback in the comments section. You might want to consider subscribing to this blog for free if you’ve found the post useful.
<urn:uuid:0d823505-95e9-4f9a-aa7d-3552d10db847>
CC-MAIN-2016-26
http://larryferlazzo.edublogs.org/2008/05/25/the-best-multilingual-bilingual-sites-for-learning-english/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783398216.41/warc/CC-MAIN-20160624154958-00151-ip-10-164-35-72.ec2.internal.warc.gz
en
0.94844
968
2.6875
3
More than 2 billion people are living under online censorship

This is more than enough reason for the CyberGhost team to continue our fight for the free internet in 2016, with even more power and over 650 servers worldwide. But the news from the Web Index Report is not great: “The internet is less free, more unequal, and web users are at increasing risk of indiscriminate government surveillance”. In 2013, the report showed that over 30% of Web Index countries were blocking politically or socially sensitive Web content to a moderate or extreme degree. In 2014 that figure went up to 38%. Here is a good definition of content filtering, as a form of online censorship, from the Electronic Frontier Foundation (EFF): “Many governments, companies, schools, and public access points use software to prevent Internet users from accessing certain websites and Internet services. This is called Internet filtering or blocking and is a form of censorship. Content filtering comes in different forms. Sometimes entire websites are blocked, sometimes individual web pages, and sometimes content is blocked based on keywords contained in it. One country might block Facebook entirely, or only block particular Facebook group pages—or it might block any page or web search with the words “falun gong” in it.” And in recent days, other massively used apps, such as WhatsApp, have been blocked for short periods in Brazil and the United Arab Emirates.

The current censorship situation around the world

At the same time, there is a fragile legal framework to support online freedom, with 84% of countries having no effective laws and practices to protect the privacy of online communications. China tops the list of countries blocking and filtering web content. According to the same report, Uruguay allows its citizens the most online freedom.
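The keyword-based blocking that EFF describes can be illustrated with a toy filter: a censor drops any request whose URL or page text contains a listed term. Everything in this sketch (the blocklist, the URLs) is invented for the example; real national filters are vastly more sophisticated.

```python
# Toy illustration of keyword-based content filtering as described by EFF:
# any request whose URL or page text contains a blocked term is dropped.
# The blocklist entries are invented for the example.

BLOCKED_KEYWORDS = {"falun gong", "example-banned-topic"}

def is_blocked(url: str, page_text: str = "") -> bool:
    """Return True if the censor's keyword filter would block this request."""
    haystack = (url + " " + page_text).lower()
    return any(keyword in haystack for keyword in BLOCKED_KEYWORDS)

print(is_blocked("https://example.com/news"))                          # prints False
print(is_blocked("https://example.com/articles",
                 "a history of falun gong"))                           # prints True
```

A VPN defeats this kind of filter because the censor sees only an encrypted tunnel, never the URL or page text it would need to match keywords against.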
When governments really cross the line, violating fundamental rights, as Turkey did during the “Big Ban” of March 2014, when the government blocked several web pages and access to YouTube, Twitter and Soundcloud, international organizations and companies react strongly. Several countries pressured Erdogan’s government to lift the ban, and CyberGhost offered 30,000 Premium keys to Turkish citizens so they could use the internet unrestricted. So there is hope. Out of the 45 Web Index countries with extensive constraints on speech, only seven (about 16%) seem to censor more heavily online than offline.

What can you do?

Use a trusted VPN to safely access censored content and protect your online privacy. In some cases, such as for journalists in conflict areas, it can even protect your life. Here’s a video that explains how hiding your IP helps you unblock restricted websites and banned apps. If you want to learn more about how to avoid being tracked online, read these 3 essential tips. Does your government block access to an app or website? Tell us which ones and we will fight to help you unblock them with CyberGhost. So we just have to keep reporting the abuses and demanding better laws to sustain online privacy for everybody, while using encrypted online connections.
<urn:uuid:e31ce089-19c8-43c1-8554-de6c2e400a0b>
CC-MAIN-2016-26
https://blog.cyberghostvpn.com/tag/censorship/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395620.56/warc/CC-MAIN-20160624154955-00082-ip-10-164-35-72.ec2.internal.warc.gz
en
0.924677
621
2.75
3
1796 – Born on August 21st in Maplewood, New Jersey. American painter of the Hudson River School. 1823 – His reputation as an engraver was firmly established with the publication of his engraving after John Trumbull’s Declaration of Independence. 1821-1831 – Durand helped found the New York Drawing Association, the National Academy of Design, and the Sketch Club. – He formed a partnership with his brother, Cyrus, and Charles C. Wright which specialized in the production of bank notes. 1832 – Durand dissolved his profitable engraving business and entered into a short, successful period as a portrait painter. 1837 – A financial panic combined with encouragement from Thomas Cole led him to try landscape painting. 1845 – He became the second president of the National Academy of Design. 1847 – He helped found the Century Association. 1857 – Durand spent the rest of his life painting in New York City. 1886 – Died on September 17th.
<urn:uuid:70e11eaa-a85d-4935-b127-9b86efc1c0a3>
CC-MAIN-2016-26
http://www.s9.com/Biography/Durand-Asher-Brown
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395160.19/warc/CC-MAIN-20160624154955-00053-ip-10-164-35-72.ec2.internal.warc.gz
en
0.97511
212
3.046875
3
Graphite, pen and brown ink, H. 0,260 m ; L. 0,390 m Annotated by Delacroix’s hand, towards the center : cloud that hides the end of the sea, the sunbeams show through its obscurity The most detailed part of this sheet presents a study for the female figure in Greece on the Ruins of Missolonghi (1826, Bordeaux, Musée des Beaux-Arts). Oddly enough, this depiction of the sufferings of the Greek people oppressed by the Turks was Delacroix’s inspiration for the allegorical figure in Liberty Leading the People (1831, Paris, Musée du Louvre) illustrated in other sketches also on this sheet of paper. The left-hand section of this sheet, which was purchased in the studio sale of the painter Félix Buhot in 1982, presents a preliminary step for the painting Delacroix made in 1826 (Bordeaux, Musée des Beaux-Arts). It pays tribute to the heroic resistance of the 4,000 defenders of the city of Missolonghi, surrounded by 35,000 men supported by the Turkish fleet in 1825. So that they wouldn’t fall into the hands of their enemies, the last fighters blew themselves up with their wives and children. Rather than creating a literal depiction of one of the bloodiest episodes in the Greek war of independence against the Turks, Delacroix opted for allegory. His first drawings show that he initially planned an almost square composition, centered on a female figure bent under the weight of pain. When he got the idea of an emblematic figure rising from the ruins, the axis of the scene shifted, and the final composition took on the form of a monumental banner. The other sketches around the main subject, on the other hand, refer to the painting exhibited at the 1831 Salon, Liberty Leading the People (Paris, Musée du Louvre), which was the unexpected result of the studies inspired by Delacroix’s philhellenic interests: the sheet includes a figure blowing a trumpet, another collapsed on a cannon, and a kneeling man with an outstretched arm. 
Hélène Toussaint, La Liberté guidant le peuple de Delacroix, catalogue exposition, Paris, musée du Louvre, 1982, n°3, repr. Arlette Sérullaz et Vincent Pomarède, Eugène Delacroix, La Liberté guidant le peuple, Paris, 2004, p. 30.
<urn:uuid:8febc889-a539-4104-bbb0-590a17720d29>
CC-MAIN-2016-26
http://www.musee-delacroix.fr/en/the-collection/works-on-paper/studies-for-greece-on-the-ruins-of-missolonghi-and-liberty-leading-the-people
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391519.2/warc/CC-MAIN-20160624154951-00090-ip-10-164-35-72.ec2.internal.warc.gz
en
0.902765
545
2.828125
3
Earlier this month, Collins Kaumba, a World Vision communicator in Zambia, shared his experience visiting a camp for internally displaced persons (IDPs) in Sudan. His words were jarring: "Indelible memories of the suffering I saw in Darfur have followed me since the day I left Sudan...I have seen suffering and poverty in Zambia and other places in Africa -- but not of the magnitude I saw when I visited Darfur’s camps..." A reader in Israel commented on the post and made note of the thousands of Sudanese refugees there who are watching the situation in their homeland as the South prepares for its independence in just a few weeks. Years of conflict in this African country have caused millions to flee for their safety -- not just to other places within Sudan, but internationally as well. This is one global hotspot recognized as an origin of refugees. But the problem is much larger. A UNHCR report (pdf) from last year notes that there were 43.3 million people forcibly displaced by conflict at the end of 2009, the highest number since the mid-1990s. The same report provides some other staggering numbers: - One out of four refugees in the world is from Afghanistan. - Developing countries were host to four-fifths of the world’s refugees in 2009. - Women and girls represent 47 percent of refugees and asylum-seekers, and half of all IDPs and former refugees. - In 2009, 41 percent of refugees and asylum-seekers were children under the age of 18. - More than half of the world's refugees reside in urban areas. Natural disasters around the world, such as flooding, also displace increasing numbers of people -- 42 million in 2010. In Darfur, an internally displaced person waits at a World Vision supplementary feeding center where she gets nutritious food for her child. ©2011 Mohamad Almahady/World Vision On a personal level, it's difficult for me to imagine being forced to flee my home (and perhaps my country) because of violence or a natural disaster.
I've simply never faced crisis of that magnitude. So when I think of the millions worldwide for whom this is a reality, the idea becomes nearly incomprehensible. The most vulnerable refugees cross international borders with little more than the clothes on their backs -- no money, no food, no medical care, no safe shelter, no education for their children, and often no legal recognition or protection. How does one survive under such circumstances? In observance of World Refugee Day, UN Secretary-General Ban Ki-moon writes the following: "Let us reaffirm the importance of solidarity and burden-sharing by the international community. Refugees have been deprived of their homes, but they must not be deprived of their futures." These words accurately reflect the circumstances faced by refugees across the globe. As Collins walked among war-weary displaced families at the camp in Sudan, he lamented the effects of war on the children he met -- boys and girls who have been deprived of childhood innocence and the opportunity to go to school. Indeed, they have lost their homes. But as Christians and humanitarians, we must work to ensure that they don't also lose their futures. June 20 is World Refugee Day. Download the UNHCR full report (pdf), "2009 Global Trends: Refugees, Asylum-seekers, Returnees, Internally Displaced and Stateless Persons," and consider making a monthly gift to support World Vision's work with children affected by war and conflict around the world.
<urn:uuid:e8cea47e-0c48-4fe4-a5f6-911f299352db>
CC-MAIN-2016-26
https://blog.worldvision.org/causes/world-refugee-day?vnc=Hh9Yb71sMMIDJgZxk8trH41plKsaj1WEhf2bqoChP8w&vnp=40
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783398516.82/warc/CC-MAIN-20160624154958-00060-ip-10-164-35-72.ec2.internal.warc.gz
en
0.961402
720
2.609375
3
Bezruč, Petr (pĕtˈər bĕzˈrōch), pseud. of Vladimir Vašek (väshˈĕk), 1867–1958, Czech poet, called the bard of Silesia. Bezruč's fame rests solely on the Silesian Songs (1903, enl. ed. 1909). In these 88 stark, moving verses the poet protests the suppression by the Austrians of the Slavic peoples living between Silesia and Moravia. Bezruč was an admirer of Whitman, but his work belongs to no school. After World War II the Czech government granted him a pension. The Columbia Electronic Encyclopedia, 6th ed. Copyright © 2012, Columbia University Press. All rights reserved.
<urn:uuid:422dbcc9-2ea4-433a-ac49-c524e9ff9738>
CC-MAIN-2016-26
http://www.factmonster.com/encyclopedia/people/bezruccaron-petr.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783392069.78/warc/CC-MAIN-20160624154952-00118-ip-10-164-35-72.ec2.internal.warc.gz
en
0.892641
168
2.578125
3
Sound waves turn natural gas into liquid

(NC&T/LANL) The thermoacoustic natural gas liquefier converts heat into sound waves and then converts the hot sound-wave energy into cold refrigeration using highly pressurized helium contained in a network of welded steel pipes. First, the system combusts a small fraction of the natural gas to heat one end of the steel pipe network. Then, the resulting acoustic energy refrigerates the opposite end of the network, which cools the rest of the natural gas. At minus 160 degrees Celsius the natural gas liquefies - rendered dense enough for economical transport. This technology requires no moving parts, contributing to its economy of operation. According to a study done by the United States Government Accountability Office, every year about 3.3 trillion cubic feet of natural gas is flared or vented - burned wastefully or released into the atmosphere - across the globe, enough to meet the natural gas needs of France and Germany for a year. In addition, some 5,000 trillion cubic feet of undeveloped and unused natural gas deposits exist around the world in well fields that are too expensive to develop due to their size or location. "Using this wasted or dormant clean energy resource will address environmental concerns and help solve the world's energy problems," said Greg Swift, one of the Los Alamos inventors of thermoacoustic technology. While Swift LNG bears his name, Swift remains a Laboratory scientist who owns no interest in the company. "Today, capturing natural gas requires costly ultracold natural-gas liquefiers the size of oil refineries," said Swift. "But our thermoacoustic liquefier should be economical at a smaller size, useful for remote corners of the world where smaller gas fields are available. I'm especially eager to capture the associated gas that often comes out of the ground as a byproduct of oil production."
Swift LNG plans to have the commercial thermoacoustic liquefaction system ready for use by 2010.
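A rough energy balance shows why burning only a fraction of the feed gas can drive the whole process. Every number below (heating value, heat load, and both efficiency figures) is an assumed, illustrative value for a back-of-envelope sketch, not data from Swift or Los Alamos:

```python
# Back-of-envelope energy balance for a heat-driven natural gas liquefier.
# All property values and efficiencies are rough assumptions for illustration.

LHV_METHANE = 50.0e6          # J/kg, approximate lower heating value of methane
HEAT_TO_REMOVE = 0.92e6       # J/kg, approx. heat removed to cool and condense
T_HOT, T_COLD = 300.0, 113.0  # K, ambient and liquefaction temperatures

carnot_cop = T_COLD / (T_HOT - T_COLD)   # ideal refrigeration COP, about 0.60
cop = 0.30 * carnot_cop                  # assume the cooler reaches 30% of Carnot
engine_eff = 0.30                        # assumed heat-to-acoustic-power efficiency

acoustic_work = HEAT_TO_REMOVE / cop     # J of acoustic work per kg liquefied
fuel_heat = acoustic_work / engine_eff   # J of combustion heat per kg liquefied

# Fraction f of incoming gas burned so that the remainder (1 - f) is liquefied:
# f * LHV = (1 - f) * fuel_heat  =>  f = fuel_heat / (LHV + fuel_heat)
f = fuel_heat / (LHV_METHANE + fuel_heat)
print(f"fraction of feed gas burned: {f:.0%}")
```

Under these assumed efficiencies only a quarter or so of the feed gas is sacrificed as fuel, which is the economic point of a heat-driven, no-moving-parts liquefier.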
<urn:uuid:eb0c0852-f482-4974-bd49-f34c44c5c5de>
CC-MAIN-2016-26
http://www.theallineed.com/engineering/07032207.htm
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783394937.4/warc/CC-MAIN-20160624154954-00015-ip-10-164-35-72.ec2.internal.warc.gz
en
0.899009
445
2.984375
3
The TRIUMF-ISAC Gamma-Ray Escape Suppressed Spectrometer (TIGRESS) is a state-of-the-art gamma-ray spectrometer designed for a broad program of nuclear physics research with the accelerated radioactive ion beams provided by the ISAC-II superconducting linear accelerator at TRIUMF. The radioactive ion beams delivered by ISAC-II are accelerated to energies (continuously variable between 1.5 and 6.5 MeV/nucleon for heavy nuclei and up to 15 MeV/nucleon for light nuclei) sufficient for them to undergo Coulomb excitation, nucleon transfer, and/or nuclear fusion reactions in thin foils supported in a reaction chamber at the centre of the TIGRESS spectrometer. The high-frequency photons, or gamma rays, emitted by the excited atomic nuclei produced in these reactions are measured by TIGRESS to study the structure of the nucleus and the forces that hold it together. TIGRESS design, development, and installation was supported by an $8.06M Research Tools and Instruments grant awarded by NSERC in 2003 to a collaboration of researchers from across Canada (the University of Guelph, Université Laval, McMaster University, Université de Montréal, Simon Fraser University, University of Toronto, and TRIUMF), with leadership by the University of Guelph. The TIGRESS spectrometer (Figure 1) is now fully operational and being used in a wide range of experiments at ISAC-II.

Figure 2: Schematic of the 4 high-purity germanium (HPGe) crystals of a TIGRESS "clover" detector, showing the 8-fold segmentation of the outer electrical contacts of each crystal.
Figure 3: A single TIGRESS 32-fold segmented HPGe clover-type detector in the testing laboratory.

The "heart" of the TIGRESS spectrometer is an array of 32-fold segmented high-purity germanium (HPGe) gamma-ray detectors. As shown in Figure 2, each of the four large single crystals of germanium in a TIGRESS detector has an outer electrical contact with 8 separate segments, for a total of 32 such outer contacts per detector.
In addition to the electronic signals from the inner core contacts, which give high-resolution measurements of the total energy deposited by gamma rays in each crystal, signals from the 32 outer segment contacts provide information on the locations of the gamma-ray interactions within the detectors. All of the TIGRESS detector signals are continuously digitized 100 million times per second (100 MHz) in custom-designed 14-bit 10-channel (TIG-10) waveform digitizer modules (Figure 4). The shapes of these digitized waveforms, from segments in which gamma rays interact as well as the induced signals in neighbouring segments, depend on the exact locations of the gamma-ray interactions within the HPGe crystals. The detailed analysis of these digitized waveforms thus allows the gamma-ray interaction locations to be determined with much finer resolution than the physical size of the detector segments. An average position sensitivity for single gamma-ray interactions of 0.44 mm has been achieved by these techniques (see Ref. 2 below). The ability to determine gamma-ray interaction locations within the TIGRESS detectors enables accurate correction of the measured gamma-ray energies for the Doppler shifts inherent in experiments with ion beams accelerated to several percent of the speed of light, while allowing each HPGe crystal to subtend a large solid angle about the reaction point at the centre of the spectrometer. TIGRESS thereby attains the excellent gamma-ray energy resolution that is the defining feature of HPGe detectors, while simultaneously achieving the very high gamma-ray detection efficiency required for experiments with accelerated radioactive ion beams.
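The Doppler correction described above follows directly from the beam velocity and the measured emission angle, which is why sub-millimetre interaction positions matter: they fix the angle. A minimal sketch using the standard relativistic formula, with the 5 MeV/nucleon beam energy chosen only as an example within ISAC-II's range:

```python
import math

AMU_MEV = 931.494  # atomic mass unit in MeV/c^2

def beta_from_energy(mev_per_nucleon: float) -> float:
    """Ion velocity v/c from kinetic energy per nucleon."""
    gamma = 1.0 + mev_per_nucleon / AMU_MEV
    return math.sqrt(1.0 - 1.0 / gamma**2)

def doppler_correct(e_lab_kev: float, beta: float, theta_rad: float) -> float:
    """Gamma-ray energy in the rest frame of the emitting nucleus.

    e_lab_kev : energy measured in the laboratory
    theta_rad : angle between the beam axis and the gamma-ray direction,
                inferred from the interaction position in the HPGe crystal
    """
    return e_lab_kev * (1.0 - beta * math.cos(theta_rad)) / math.sqrt(1.0 - beta**2)

beta = beta_from_energy(5.0)   # roughly 0.10 for a 5 MeV/nucleon beam
print(round(beta, 3))
print(round(doppler_correct(1100.0, beta, math.radians(90.0)), 1))
```

Because the correction depends on cos(theta), an uncertainty in the interaction position translates directly into an energy-resolution penalty; the 0.44 mm position sensitivity quoted above is what keeps that penalty small at these beam velocities.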
Figure 5: Schematic of the TIGRESS HPGe crystals surrounded by their modular front, side, and back Compton suppression shields.
Figure 6: The maximum efficiency (left) and optimal suppression (right) configurations of the TIGRESS spectrometer.

The TIGRESS HPGe clover detectors are surrounded by Compton suppression shields constructed from the high-efficiency scintillators bismuth germanate (BGO) and cesium iodide (CsI). These shields detect gamma rays that scatter out of the HPGe crystals without depositing their full energy, and the subsequent vetoing of these unwanted events significantly improves the signal-to-background ratio in the gamma-ray spectra recorded by TIGRESS. As illustrated in Figures 5 and 6, the TIGRESS Compton suppression shields, each of which contains 20 optically isolated scintillator segments, have a modular design that enables rapid reconfiguration of the entire spectrometer between a "maximum efficiency" configuration, in which the HPGe detectors are close-packed at 11.0 cm radius from the reaction centre, and an "optimal suppression" configuration, in which the HPGe detectors are withdrawn to 14.5 cm and the BGO front shields are inserted radially to form a full Compton suppression shield around each HPGe detector. In both configurations of the TIGRESS array, an inner sphere of 11.0 cm radius is available to accommodate the auxiliary detection systems necessary to detect reaction products in coincidence with the gamma rays measured by the surrounding TIGRESS detectors. The initial Coulomb excitation, inelastic scattering, and transfer reaction experiments with TIGRESS at ISAC-II were performed with the segmented Si CD detectors of the Bambino array (Figure 7) developed by collaborators at Lawrence Livermore National Laboratory and the University of Rochester in the United States.
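The veto and addback logic can be pictured with a toy event filter (illustrative Python, not the actual TIGRESS DAQ or analysis software; the event layout and energies are invented for the example): an event is discarded whenever the surrounding shield fires in coincidence, and otherwise the energies deposited in the crystals of a clover are summed to recover gamma rays that scattered between crystals.

```python
def addback_with_suppression(events):
    """Toy Compton-suppression and addback filter.  Each event is a dict:
      'hpge'         - list of energies (keV) deposited in the clover's
                       HPGe crystals
      'shield_fired' - True if any segment of the surrounding BGO/CsI
                       shield fired in coincidence
    Shield-coincident events are vetoed (the gamma ray scattered out
    without leaving its full energy); for accepted events the crystal
    energies are summed ("addback"), recovering gamma rays that
    Compton-scattered between crystals of the same clover."""
    accepted = []
    for event in events:
        if event['shield_fired']:
            continue  # Compton-suppressed: discard this event
        accepted.append(sum(event['hpge']))
    return accepted

events = [
    {'hpge': [661.7], 'shield_fired': False},         # full-energy deposit
    {'hpge': [300.0, 361.7], 'shield_fired': False},  # scatter between crystals
    {'hpge': [450.0], 'shield_fired': True},          # escaped into the shield
]
spectrum = addback_with_suppression(events)  # two counts near 661.7 keV
```

The first two events both land in the full-energy peak, while the third, which would otherwise contribute only to the continuous background, is removed: this is precisely the trade the "optimal suppression" configuration is built for.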
The Silicon Highly-segmented Array for Reactions and Coulex (SHARC) (Figure 8), developed by collaborators from the University of York in the United Kingdom and Louisiana State University and the Colorado School of Mines in the United States, has been used for (d,pγ) transfer reaction studies with TIGRESS at ISAC-II. The TIGRESS Integrated Plunger (TIP), developed by collaborators from Simon Fraser University, provides a powerful system for nuclear excited-state lifetime measurements with TIGRESS (Fig. 9). Most recently, the Spectrometer for Internal Conversion Electrons (SPICE) (Fig. 10) has been employed in the first in-beam internal conversion electron measurements with TIGRESS. Auxiliary detectors that can be located downstream of TIGRESS at ISAC-II include the DESCANT neutron detector array developed at the University of Guelph and the ElectroMagnetic Mass Analyser (EMMA) currently being developed at TRIUMF. These combined systems provide a powerful facility to pursue nuclear structure, nuclear astrophysics, and nuclear reactions research with the high-quality accelerated radioactive ion beams from ISAC-II through the detection of conversion electrons, light charged particles, neutrons, and heavy ion recoils in coincidence with the gamma rays measured by TIGRESS. Additional technical details related to TIGRESS are given in the publications listed below, while the most recent physics results from our research programs with TIGRESS can be found in our publications and theses lists.

TIGRESS Technical Publications
1. TIGRESS: TRIUMF-ISAC Gamma-Ray Escape-Suppressed Spectrometer
2. Position Sensitivity of the TIGRESS 32-Fold Segmented HPGe Detector
3. TIGRESS Highly Segmented High-Purity Germanium Clover Detector
4. Measured and Simulated Performance of Compton-Suppressed TIGRESS HPGe Clover Detectors
5. Optimization of Compton-Suppression and Summing Schemes for the TIGRESS HPGe Detector Array
6. Compton-Suppression and Addback Techniques for the Highly-Segmented TIGRESS HPGe Clover Detector Array
7. The TRIUMF Nuclear Structure Program and TIGRESS
8. The TRIUMF-ISAC Gamma-Ray Escape-Suppressed Spectrometer (TIGRESS): A Versatile Tool for Radioactive Beam Physics
9. The TIGRESS DAQ/Trigger System
10. SHARC: Silicon Highly-segmented Array for Reactions and Coulex used in conjunction with the TIGRESS γ-ray spectrometer
11. The TIGRESS Integrated Plunger Ancillary System for Electromagnetic Transition Rate Studies at TRIUMF
12. Simulated Performance of the SPICE In-Beam Conversion-Electron Spectrometer
13. The TRIUMF-ISAC Gamma-Ray Escape Suppressed Spectrometer, TIGRESS
The new XC90 is said to boast two world firsts, namely an automatic braking function that works if the vehicle is turning at a junction, plus what Volvo calls Safe Positioning. Along with its other systems, the new features are said by Volvo to give the XC90 "the most comprehensive and technologically sophisticated standard safety package available in the automotive industry". The autonomous braking at junctions is designed to prevent accidents where one vehicle crosses in front of another. Upon detecting a potential crash, the XC90 brakes automatically in order to avoid it or at least minimise the damage. It forms part of the XC90's City Safety auto braking system, which can also bring the car to a stop if it senses a collision with a pedestrian or cyclist. Safe Positioning is part of what Volvo calls run-off road protection, and is designed to minimise the impact of those accidents where a car leaves the road as a result of driver distraction, fatigue or poor weather. Volvo says that in the United States half of all traffic fatalities are road departure accidents. So as well as technology that helps drivers to avoid leaving the road in the first place (such as lane keeping assist and an alert that warns the driver if it senses they are being inattentive), the XC90 can tighten the front seat belts if it senses control has been lost. In conjunction with energy-absorbing materials in the seat, this reduces the forces on a person's spine in a hard landing. Volvo's autonomous emergency braking system can detect pedestrians and cyclists, as well as other vehicles. Other safety features which help make the XC90 what Volvo describes as "one of the safest cars ever made" include rear-facing cameras that can detect if the car is about to be hit from behind and thus tighten the seat belts to protect the occupants, plus a system that senses if the car is going to roll over and is able to brake individual wheels in an attempt to prevent it from doing so.
Volvo’s aim is that nobody will be killed or seriously injured in one of its new vehicles by 2020, and the new XC90 will show off its latest technology in this regard. “Committing to safety is not about passing a test or getting a ranking,” said Prof. Lotta Jakobsson, a specialist at Volvo Cars Safety Centre. “It is about finding out how and why accidents and injuries occur and then developing the technology to prevent them.” Volvo has also revealed what engines the XC90 will be available with, including a petrol-electric hybrid alongside the traditional diesel options. The XC90, which is Volvo’s largest SUV to date, goes on sale in 2015.
Tuesday, December 11, 2012 International Human Rights Day: Ways to Take Action International Human Rights Day, which commemorates the day that the Universal Declaration of Human Rights (UDHR) was adopted, was celebrated on Monday, December 10th. The UDHR sets out a broad range of fundamental rights and freedoms to which all people are entitled without discrimination. It has been accepted by almost every government and has become the foundation on which protection and advocacy of human rights is based. Despite officially adopting the universal declaration in 1948, the Cuban regime has continuously and systematically violated the human rights, freedoms, and dignity of its population. Human Rights Watch has consistently accused the island's government of torture, arbitrary detention, corrupt trial procedures, and extrajudicial execution, in addition to calling out the limits Cuban law imposes on freedom of expression, association, assembly, movement, and press. For more details on human rights in Cuba, visit Human Rights Watch or Amnesty International. Below are a handful of ways that you can take action this week (and always) to help defend the rights of the Cuban people! Be a Loudspeaker for Cuban Voices Get involved in translating the dozens of blogs coming from Cuba. Cuban bloggers are yearning for their ideas to be expressed outside of their borders and to breed dialogue inside and outside of the island. Check out TranslatingCuba.com and read this interview with its founder, Mary Jo Porter, to understand the value and promise that translating blogs holds. Support Freedom of Expression and Access to Information Donate cell phones or USBs for Roots of Hope to refurbish and send to Cuba. These devices are the primary way that information spreads like wildfire among young Cubans, including artists, students, bloggers, and nascent entrepreneurs. Hear how technology helps.
Use social media to make your #VoiceCount Follow @RootsofHope on Twitter, Instagram, or Facebook and share your ideas for supporting youth in Cuba and promoting their freedoms and rights. Draw Attention to Jailed Activists in Cuba Email or tweet Amnesty International to draw attention to the dozens of political prisoners in Cuba and ask for their inclusion in their letter writing and awareness campaigns. For examples, read about activists who are still jailed such as Calixto Ramon of Hablemos Press, and Marcos Máiquel Lima Cruz, or those who have been freed in recent years, such as Ricardo González. If you would like to write letters or organize a letter writing campaign for jailed activists in Cuba, email us at email@example.com
- Who invented the first computer?
- Who made the first personal computers?
- What was the first laptop?
- What was the first computer bug and when did computer viruses first appear?
- Who invented the mouse?

Who invented the first computer?
It all depends on your definition of computer. Charles Babbage in the 1830s devised a plan for the first stored-program mechanical computer, using data modeled after the punched card templates used in industrial (Jacquard) looms. The first electronic digital computer was the ABC, built by John V. Atanasoff and Clifford Berry in 1940 at Iowa State University. Several of its ideas were incorporated into the ENIAC, which ran from 1945 to 1955 and is considered the first functionally useful electronic digital computer. The patent for the ENIAC was awarded to Atanasoff by court order in 1974. The first commercially available electronic computer in the USA was the UNIVAC I, bought by and delivered to the US Census Bureau in 1951.

Who made the first personal computers?
One of the earliest personal computers was the Intellec 4 by Intel, using their first commercially produced microprocessor - the four-bit 4004. The Altair built by MITS was the first commercially successful personal computer, and the computer for which Gates and Paul Allen wrote Microsoft's (then known as Micro-Soft) first software product - "BASIC for the Altair".

What was the first laptop?
The Model 100 comes as close to it as you will ever get in 1983. Running on 4 AA batteries, it was popular with newspaper reporters and made filing stories from the field much easier, provided they had access to a telephone line.

What was the first computer bug and when did computer viruses first appear?
Grace Hopper found that a moth had landed in between the solenoid contacts in the Mark II Calculator, a massive electromechanical computer at Harvard University designed by Howard Aiken. She removed the squashed moth and annotated her log book with an entry (along with the moth fastened to the page) that referred to the first "computer bug."
Computer viruses became a problem in the 1980s as DOS-based computers began to proliferate. The file structure utilized by these machines made it relatively easy in 1986 for the maker(s) of the "Brain" virus to play havoc with the boot sectors on floppy disks. In 1988, an anti-virus program was devised to remove Brain from infected floppies.

Who invented the mouse?
Douglas Engelbart, then at the Stanford Research Institute (SRI), demonstrated a mouse-keyboard component at a computer conference in 1968 in San Francisco.
For an organism (think single-celled) living in just the right kind of environment, it might just be possible to survive using only facilitated diffusion, at least as far as small molecules are concerned. This kind of organism would have to maintain exactly the right concentration of the molecules it wants to keep or get rid of, based on the external concentration of those molecules. For example, if an organism was living in a glucose-rich environment, it might be able to fulfill its glucose requirements by (1) making the membrane itself tightly impermeable to glucose and (2) tightly controlling the number of passive glucose transporters. It would be nigh-impossible to achieve this for every solute in the environment, especially those that are able to permeate the membrane. If there were such a thing, it might be found in organisms that live in very low energy aquatic regimes, such as anaerobic methane oxidizers, where the solute concentration in the environment doesn't really change on short time scales. Even in such environments, the number of small ions in the environment is likely to be high enough that some active transport would be required to maintain the osmotic balance within the cell.
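The limitation described above can be made concrete with a toy carrier model (hypothetical parameter values; a generic symmetric Michaelis-Menten carrier, not a model of any particular transporter): because facilitated diffusion supplies no energy, the net flux is strictly downhill and vanishes when the inside and outside concentrations match.

```python
def facilitated_flux(c_out, c_in, j_max, km):
    """Net inward flux through a saturable passive carrier, modelled as a
    symmetric Michaelis-Menten term on each face of the membrane.  With
    no energy input, the net flux is zero when the concentrations are
    equal: the carrier can never move solute uphill."""
    return j_max * (c_out / (km + c_out) - c_in / (km + c_in))

# Glucose-rich surroundings (5 mM outside, 1 mM inside): net inward flux.
inward = facilitated_flux(5.0, 1.0, j_max=100.0, km=2.0)   # > 0
# Equal concentrations: no net movement, so the cell cannot accumulate
# glucose above the external level by facilitated diffusion alone.
balanced = facilitated_flux(2.0, 2.0, j_max=100.0, km=2.0)  # exactly 0
```

Tuning the number of transporters in the text's scenario corresponds to scaling j_max; what the cell can never do with this mechanism is make the flux point against the gradient, which is exactly why ion homeostasis ends up requiring active transport.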
In early February, 1971, downtown Wilmington, N.C. was a war zone. Shots rang through the streets, traffic was blocked, and citizens were barricaded in a church. Although it took only a couple of days to restore peace and order, the actions of those few days and nights would bring worldwide attention to North Carolina, and would resonate for decades to come. In the late 1960s and early 1970s, African Americans in North Carolina were frustrated by the slow pace of school desegregation and other reforms promised by federal legislation and court decisions. Many young people, rejecting the commitment of the Civil Rights pioneers of the 1950s to non-violent tactics, looked for new ways to make themselves heard. There were prominent cases of arson against white-owned businesses in Charlotte and in Oxford, N.C., and many North Carolina cities erupted in violence after the assassination of Martin Luther King, Jr. in 1968. The largest demonstration following the assassination of King took place in the historic port city of Wilmington. Race relations there had worsened following the desegregation of the city's high schools at the beginning of the 1969/70 school year. There were frequent clashes between white and African American students resulting in a number of arrests and expulsions. The hostilities reached a boiling point in late January 1971 when Wilmington's African American students announced a boycott of the city's schools. Ben Chavis, an experienced activist from Oxford, N.C., was called to Wilmington to organize the boycott. Shortly after Chavis's arrival, two downtown businesses were burned, and there was evidence of other arson attempts. African American activists were blamed for the incidents. Members of the Ku Klux Klan and a group called The Rights of White People began to patrol downtown Wilmington armed and openly hostile to the boycotting students and their leaders. 
On the night of February 6, 1971, several fires were set, and a small downtown grocery store was firebombed. When firemen reported to the scene, they were shot at by snipers on the roof of the Gregory Congregational Church, in which Chavis and a number of students were barricaded. Two people were killed and several were injured during the battle that raged that night and into the next day. Finally, on February 8, National Guardsmen forced their way into the church only to find it empty. Based on the testimony of two African American men who claimed to have been in the church the night of February 6, ten people - nine African American men and one white woman - were arrested, tried, and convicted on charges of arson and conspiracy to fire upon firemen and police officers. The "Wilmington Ten" were sentenced to a combined 282 years in prison. In the years following the violence in Wilmington, the case became known around the world. The Wilmington Ten were perceived to be political prisoners by individuals and groups who believed that they were prosecuted not for the actions of February 6, 1971 - and about which there were still conflicting reports - but for their beliefs. Amnesty International took up the case of the Wilmington Ten in 1976, causing an embarrassment for both the North Carolina and federal governments. As the administration of President Jimmy Carter accused the Soviet Union of human rights abuses, it was especially sensitive to charges of mistreatment of American citizens. The case did not go away. In early 1977, the CBS news program 60 Minutes ran a feature on the Wilmington Ten, suggesting that the evidence against them had been fabricated. After higher courts refused to dismiss the charges, Governor Jim Hunt was under great pressure to pardon the prisoners. In January 1978, in an address broadcast throughout the state, Hunt refused to release the Wilmington Ten, but did reduce their sentences. In 1980 the U.S. 
Fourth Circuit Court of Appeals threw out the convictions, citing prosecutorial misconduct and denials of due process. Although the Wilmington Ten were freed from prison, the charges against them remained for another three decades. On December 31, 2012 North Carolina governor Beverly Perdue issued a full pardon for each member of the group. The action meant that the state no longer believed that any of the 10 committed a crime. In explaining her pardon, Perdue pointed to the recent discovery of notes suggesting the prosecutor's efforts to select a jury based on race. She wrote, "These notes show with disturbing clarity the dominant role that racism played in jury selection. The notes reveal that certain white jurors believed to be Ku Klux Klan members were described by the prosecutor as 'good' and that at least one African American juror was noted to be an 'Uncle Tom type.'" At the time of the pardon, four members of the Wilmington 10 had already died and several were in ill health.

"Free the Wilmington 10 Now!" ca. 1971-1981; 4.4 cm. North Carolina Collection. Gallery Accession No: CK.999.21

References and additional resources:
Grimsley, Wayne. 2003. James B. Hunt: A North Carolina Progressive. Jefferson, N.C.: McFarland & Company.
King, Wayne. 1978. "The Case Against the Wilmington Ten." New York Times Magazine, 3 December 1978.
Thomas, Larry Reni. 1993. The True Story Behind the Wilmington Ten. Hampton, Va.: U.B. & U.S. Communications Systems.
Tyson, Timothy. 2004. Blood Done Sign My Name. New York: Crown.

1 February 2005 | Graham, Nicholas
Entry from British & World English dictionary

five-eighth
noun, Rugby, chiefly Australian/NZ
A player positioned between the scrum half and the three-quarters.
- He thinks Dan Carter is one of the best first five-eighths in the world.
- He is definitely a five-eighth because he likes to glide across the field rather than take on the opposition and he also possesses a strong left-foot kicking game, which is an advantage in any team.
- The five-eighth was able to take full advantage of his forward pack's size and strength to dominate the fringes of the ruck and score three tries.

For editors and proofreaders - line breaks: five-eighth