This information was published in 1966 in An Encyclopaedia of New Zealand, edited by A. H. McLintock. It has not been corrected and will not be updated. Up-to-date information can be found elsewhere in Te Ara.

PROVINCE AND PROVINCIAL DISTRICT

When the provincial boundaries were first delineated in 1853, some 2 million acres in the western North Island were assigned to the “Province of New Plymouth”, the smallest of the original six provinces. In 1858 the name of the province was changed by Act of the General Assembly to “Taranaki”, the Maori name for Mount Egmont. For most of the period of provincial government Taranaki had a smaller population than that of any other province, and in the 30 years after the landing of the first colonists in 1841, European settlement had not spread more than a few miles beyond the town of New Plymouth.

At the beginning of the nineteenth century a dense Maori population occupied the coastlands from Mokau to Patea and was particularly concentrated on the fertile undulating lands between Urenui and Waitara. The clearing of land for crops had pushed back the coastal forest and the first Europeans found a fringe of fernland, from 2–4 miles deep, extending as a great arc around the forest of the interior. These fringe lands were dotted with numerous pa and kumara plantations and were intersected with wooded streams giving a parklike appearance which greatly attracted the first European visitors.

In the 1820s a large-scale exodus of the Taranaki peoples took place to the Cook Strait district in the face of a threatened assault by Waikato tribesmen. In 1832 the Waikatos, equipped with firearms, did invade North Taranaki and overwhelmed the remnants of the Ngati Awa peoples except at Otaku pa (New Plymouth) where a spirited and successful defence was put up with the assistance of a group of English whalers.
Thus, by the mid-1830s, the coast-lands of Taranaki were almost deserted and the survivors of the former inhabitants were living in slavery in the Waikato or as exiles in the Horowhenua and Cook Strait districts. Into this temporarily vacated district the first English immigrants stepped ashore from surfboats in 1841.

Foundations of European Settlement

The settlement was planned by a subsidiary of the New Zealand Company – the Plymouth Company – which was to take over some of the New Zealand Company's land, sell it in the west of England, select colonists, and organise a settlement. The Plymouth Company was absorbed in the parent organisation in 1841 after selling some £12,000 worth of land at 30 shillings per acre. Late in 1839 an advance party of the New Zealand Company, led by Col. William Wakefield, had landed at the Sugar Loaf Islands from the vessel Tory and purchased 60,000 acres of land from the sparse remnant of Ngati Awa peoples still living on the coast between Mokau and Patea. Early in 1841 the surveyor F. A. Carrington selected Taranaki as the site for the Plymouth Company's settlement after inspecting several alternative areas in Queen Charlotte Sound and Tasman Bay which the New Zealand Company claimed to have purchased.

New Plymouth was the only one of the organised settlements in New Zealand without a natural harbour. The shortcoming was a severe handicap for many decades, but Carrington thought good land was more important for the proposed agricultural settlement than a good harbour. He observed that in central New Zealand a good harbour and good land seldom went together and, in justifying the choice of the Sugar Loaves as the site for New Plymouth, wrote that “the next generation will erect a commodious breakwater”. Carrington was to be present at the laying of the first stone for the breakwater, but that was in 1881, 40 years later.
The first seven ships brought nearly 2,000 immigrants from the west of England counties of Cornwall, Devon, Dorset, and Hampshire, the majority being agricultural labourers and miners. The first colonists had barely established themselves before the former Maori owners, knowing nothing of the company's purchase of their lands, returned from slavery and exile. They objected to the extent of the land sales and their claims were substantially upheld when Governor FitzRoy allowed the New Zealand Company only 3,500 acres of its Taranaki land purchase, and that only in the immediate vicinity of New Plymouth. The initial area of the settlement was gradually extended by Government purchase from the Maoris to a total of 63,000 acres by 1858, mainly to the south and south-west of the town. The first farms were on fern country and, although the settlers looked enviously at the wide expanse of open, fertile plains near Waitara, the Maoris firmly refused to sell. Denied the chance of extending their farms into this open country, some settlers turned reluctantly to the heavier labour of hewing out farms in the bush, a task, however, in which they made use of the skill of Maori workmen with the axe and their knowledge of burning off. By 1850, although scarcely 4,000 acres were in crop, Taranaki had earned the title “The Garden of New Zealand” and agricultural produce was being shipped to other settlements. Wheat, oats, barley, and potatoes were the chief crops and, although Taranaki later developed into a highly specialised dairying area, in 1850 flour accounted for 67 per cent of the value of exports and butter only 12 per cent. Because of its isolation and limited opportunities, few immigrants came to Taranaki in the later 1840s and the 1850s and the population grew more slowly than in any other province. There were 1,091 people in 1843, 1,985 in 1853, and only 2,650 in 1858. 
By 1860, of 63,000 acres of land purchased by the Government for European settlement, only 13,000 acres were in cultivation. Twenty years after initial settlement the original colonists and their children formed a much higher proportion of the population than in any other province: in 1861, 84 per cent of the overseas-born came from England, the highest proportion for any district in New Zealand. Another inheritance of the west-of-England origin of the early settlers was the high proportion of Wesleyan Methodists – 22 per cent in Taranaki, compared with 8 per cent for New Zealand. Since that time Taranaki has consistently recorded a higher proportion of Methodists than that of any other province.

The Maori Land Question

European settlement in Taranaki was more intimately affected by the Maori land question than elsewhere. During the 1850s the Maori became increasingly reluctant to part with his land. The Europeans, on their narrow coastal foothold, fretted against some 1,700 Maoris whom they regarded as shutting up from settlement some 2 million acres of virtually unused land. The settlers' determination to occupy the good open land near Waitara caused them to take risks and urge the Government to abandon its policy of land purchase by patient negotiation. In 1860, amidst a situation of general confusion and misunderstanding, European and Maori drifted into a war which was to last intermittently for 10 years. During the course of the conflict, European forces generally held the disputed Waitara lands, but elsewhere the Maoris moved where they wished, plundering and burning farms and killing some outsettlers. Military actions were generally inconclusive and desultory guerilla fighting and ambuscades continued until 1866 when General Chute marched a large force through the bush from Patea to New Plymouth, thence around the coast via Cape Egmont to Wanganui, destroying all pas and plantations in his way.
One million acres of land were confiscated and military settlers were established on the forest-free Waimate Plains west of Patea in South Taranaki. In 1868 the Maoris took heart once more and attacked the Waimate Plains settlements. A colonial force of Forest Rangers (including Major von Tempsky) was raised and, after a series of European reverses, the war finally petered out in 1869. Of some 1 ¼ million acres of land confiscated by the Government up to 1870, 95,000 acres (mainly in open country) were laid out in 50-acre grants for military settlers, 91,000 acres were reserved to “friendly natives”, while much of the remainder (mainly bushland) was given back to the “rebel” owners, eventually to be purchased by the Government for settlement in the eighties and nineties. Before the abolition of the provincial governments in 1876 a “Provincial Government Forest Reserve” had been demarcated in a 6-mile radius from the summit of Mt. Egmont – the area which in 1900 was to be constituted the Egmont National Park. During the 1870s Taranaki stirred from its long stagnation. The European population increased from 4,500 in 1871 to 5,465 in 1874 (see map) and to some 15,000 in 1881. Assisted immigration brought 2,100 people to the province by 1879, a mere 2 per cent of all the assisted immigrants to New Zealand in the seventies but amounting to almost half of Taranaki's population in 1871. The frontiers of settlement spread in three directions: south-west of New Plymouth between Oakura and Okato; south from New Plymouth and Waitara into the forests of the Inglewood district; and on the open country of South Taranaki between Patea, Hawera, and Manaia. The construction of the railway line between Hawera and New Plymouth opened up the fertile and rolling bushland to the east of Mt. Egmont, and settlers came close at the heels of the construction gangs. Inglewood was reached by rail in 1877, Stratford in 1879. 
By 1880 the available open lands had all been occupied and pioneering in Taranaki was henceforth a matter of bushfelling. The western fringe of the province between Opunake and Okato was settled comparatively late by Europeans, and this area still has a high proportion of Maori land. In 1885 New Plymouth's long isolation was relieved by the completion of the rail link to Wellington, and until the completion of the Main Trunk Railway in 1908 the town was the transhipment point on the combined rail-sea journey between Wellington and Auckland.

Development of Dairying

Despite the impetus given to development after the Maori Wars, at least 20 years passed before the growth of dairying gave the smallholder a secure income from a type of farming adapted to Taranaki conditions. The pioneer farmer of the 1870s and 1880s spent much of his time on roading, bush-felling, and sawmill work. Amongst the stumps of his bush section he practised a part-time, semi-subsistence economy, growing wheat, oats, grass seed, and potatoes and tending livestock. Many were glad of the cash received by the export to China of an edible fungus collected from tawa and mahoe trees. The first cooperative dairy factories were opened at Inglewood and Opunake in 1885 and their success led to a revolution in farming. A Chinese merchant, Chew Chong, who had organised the export of fungus, played a major part in the establishment of creameries and factories and in organising marketing facilities. Although cooperative dairy factories had been established earlier in other parts of New Zealand, the “ring plain” of Taranaki, encircling Mt. Egmont, became the first specialised dairying region in the country. The reasons are probably to be found in the small size of holdings and the heavy rainfall conditions, both of which made fat-lamb farming a less attractive alternative than it was in many other districts.
The 1890s were the “boom” years of Taranaki, when the population grew at a rate faster than in any other province. The lowlands were completely occupied and the farming frontiers advanced finger-like into the valleys of the tangled mass of hill country to the east of Stratford. The number of farms almost doubled from 2,500 to 4,235; the number of cattle doubled from 108,000 to 211,000; and the area in sown grasses increased from 300,000 acres to 700,000 acres. In 1896 there were 46 dairy factories. Five years later there were 95 butter factories and 21 cheese factories, as well as some 40 sawmills cutting into the fast-retreating bush. At the turn of the century, however, when most of the technical and organisational problems of the young dairy industry were being overcome, a new menace appeared in the declining fertility of the soil and in the rapid spread of blackberry and ragwort over the pastures. The solution was found in the removal of stumps and logs, followed by ploughing and resowing in improved strains of grasses, the liberal application of superphosphate, and the use of chemical weedkillers. By 1925 this consolidation phase was almost complete on the lowlands; and hedges of boxthorn and barberry gave a neatly enclosed appearance to a well ordered countryside. The supplementary feed crops of oats and turnips soon disappeared from the Taranaki scene as all-grass farming became securely established. The early specialisation on butter production gave place during the First World War to an emphasis on cheese; by 1920 there were 116 cheese factories and only 26 butter factories.

Problems of Hill-country Farming

In the steep hill country of eastern Taranaki, settlement was less successful and in some cases disastrous. By the mid-1890s the tide of settlement pressed inland from Waitara, Stratford, and Patea.
As part of the Liberal Government's policy of granting “the land for the people”, men of limited capital were placed on small bush sections at Whangamomona and Ohura far in advance of roads and railways, and they set to work with a confidence born of experience in the fertile lowland bush country. The trees were felled and burned, grass was sown amongst the debris, and Lincoln sheep and Shorthorn cattle were turned out to graze. But on the steep hill slopes and under the high rainfall, reversion to secondary growth and severe soil erosion often resulted. Surveyors, accustomed to lowland concepts of an economic size of farm, made many properties too small for hill country. Renewed impetus to settlement came with the high wool prices after 1918, but the sharp slump of 1922 caused widespread abandonment. The highest acreage of sown grasses recorded in Taranaki was 1,237,000 acres in 1923. Since then the farming frontiers have retreated, rapidly at first to 950,000 acres in 1937 and more slowly since then. Both road and rail construction lagged far behind settlement – partly because of the difficulty of securing suitable roading metal in this predominantly mudstone country. The railway from Stratford to the Main Trunk line, begun in 1901, was not completed until 1932, and a suitable all-weather road to Taumarunui was not completed until 1945. The railway made it possible to open up the sub-bituminous coal deposits of the Ohura-Tangarakau area and helped to arrest the decline of settlement by allowing the application of fertiliser to the lower valley lands. Nevertheless, much of the steeper and remoter hill country reverted through fern and manuka to secondary forest within a generation of its first occupation.
The advent of aerial topdressing in the 1950s has permitted a modest if selective improvement in the central and northern part of the uplands where rolling terrace country, capped with volcanic soils, offers sites for landing fields as well as cultivable land for winter feed crops. Since 1925 Romney sheep have entirely replaced the Lincolns of the pioneer phase, and Polled Angus cattle the Shorthorns. In 1911 the European population of Taranaki was 51,569 and in that year the provincial district had just over 5 per cent of the Dominion's population – the greatest relative share it has had at any census. Net migration into Taranaki ceased after 1911 and there has been outwards migration ever since, especially to the Auckland provincial district. Many sons of pioneer Taranaki dairy farmers moved north to become pioneers themselves in the Waikato, Northland, and Bay of Plenty. Similarly, Taranaki's earlier maturity as a dairying region enabled it to supply many of the stud Jersey cattle which built up the dairy herds of the Auckland district. The provincial population grew to 94,109 in 1956 and 99,774 in 1961, of which 7 per cent were Maoris. In common with most other long-settled farming districts in New Zealand the largest urban centre has absorbed most of the population growth in the past 50 years. Thus New Plymouth has increased more than fivefold since 1911, whereas smaller market towns, such as Stratford, Waitara, and Hawera, have merely doubled, and many townships have remained stable or declined. Taranaki's high birth rate has resulted in a rate of natural increase of population that has been exceeded on occasions only by Marlborough and Westland. Nevertheless, the actual increase of population between 1956 and 1961 was only 6 per cent, the lowest for the North Island and only half the national rate of growth. 
The prosperity of Taranaki has depended in the past mainly on its resources of soil and climate, supplemented for a period by its native timbers and, more recently, by coal in the eastern hill country. Two types of mineral deposits – oil and ironsand – have raised high hopes from time to time for at least a century. The first attempt to smelt the iron-bearing beach sands was made in 1848, but a commercially satisfactory process has yet to be devised. The first oil bore, at Moturoa near New Plymouth in 1865, produced a few gallons of oil, some gas, and much enthusiasm, and in recent years a small-scale oil-processing plant has operated at New Plymouth. The dramatic discovery in 1961 of a large source of natural gas beneath the dairy lands of South Taranaki has given a new complexion to New Zealand's power and fuel problems and raises possibilities of local industrial developments which could be as significant to Taranaki as was the rise of the dairy industry in the 1890s.

by Murray McCaskill, M.A., Ph.D., Reader in Geography, University of Canterbury.

See also Dairying, New Plymouth, Mount Egmont, etc.

- An Account of the Settlement of New Plymouth, Hursthouse, C. (1851)
- Taranaki, Seffern, W. H. J. (1896)
- From Plymouth to New Plymouth, Wood, R. G. (1959)
- N.Z. Journal of Agriculture, Vol. 96 (April and May 1958)
New Technology Shows Promise in Water, Energy and Money Savings

Bruce MacKenzie was curious. Was it really possible to eliminate most of the multi-step process of wastewater treatment and come out with more, cleaner water at less cost? That was the promise of a new technology developed by Bay Area-based Sylvan Source Inc. MacKenzie, a technical specialist who oversees water treatment at Southern California Edison, was tasked with finding out if Sylvan Source’s technology could deliver on that claim. If the technology worked, it could be a major breakthrough not just for utilities, but for any industry that treats its wastewater. This would be especially valuable in a state like California that is facing an ongoing and historic drought.

MacKenzie, working in collaboration with the Electric Power Research Institute, oversaw a demonstration project of Sylvan Source’s technology from late August to mid-October. The project took place at SCE’s Mountainview Generating Station in Redlands, where wastewater from its steam turbines and cooling towers is treated. The big difference between Sylvan Source’s technology and conventional wastewater treatment is how heat is transferred during the distillation process. Conventional wastewater treatment uses a heat exchanger to produce the steam, while Sylvan Source relies on a proprietary thermal transfer mechanism. That change to the process reduces the need for high-flow pumps to circulate the water, which means less power is needed to operate the system. “The energy is simply used much more efficiently,” MacKenzie said. If a Sylvan Source system replaced the process SCE currently uses, MacKenzie said it would eliminate the need for many of the chemicals the company now employs to clean and soften the water. It would also reduce the cost of maintenance and replacement parts.
Sylvan Source says its technology cuts capital expenditure and operating costs in half compared with conventional wastewater treatment processes, while recovering 90 percent of the wastewater. The institute, which conducts research for the electric utility industry, provided most of the funding for the pilot project and will release an independent analysis of the results next year. Among other things, the institute studies technologies that have the potential to assist power plants with their water treatment needs. Laura Demmons, Sylvan Source’s chairman and CEO, praised SCE and the institute for being open to a new technology and willing to undertake an independent test of it. “SCE and EPRI clearly demonstrate global leadership in their ability to both identify truly game-changing technologies and then to comprehensively validate that these technologies are very real through rigorous field testing with live feed water streams,” Demmons said.
What is antiperspirant for?

From the name itself, you can pretty much guess it's against ("anti") perspiration, or more simply, sweat. So you know the main fact about antiperspirant. But there are other things you probably didn't know about that little stick that you throw into your gym bag. Read on to find out more.

How do antiperspirants work?

Antiperspirants contain compounds made of either aluminum or methenamine. These compounds work to plug the sweat ducts. Your body senses that your sweat ducts are plugged, so it sends a message to reduce sweat flow. When you apply antiperspirants topically to the skin, you're essentially preventing sweat from reaching the skin's surface, helping to prevent wetness and odour.

What's the difference between antiperspirant and deodorant?

Deodorants don't work the same way as antiperspirants. Deodorants help reduce body odour by killing off odour-causing bacteria (body odour comes from sweat mixing with bacteria that live on our skin) or including fragrances to mask odour. However, they do not help reduce sweating. So if you're trying to combat sweat, your best bet is to use an antiperspirant. If both sweat and smell worry you, then you may want to choose a product that is both an antiperspirant and a deodorant. Some antiperspirants are also scented to keep you smelling fresh.

Are antiperspirants effective?

Yes, antiperspirants are effective at reducing sweat.

When and how should I apply antiperspirant?

Before applying antiperspirants, make sure the area you are applying to is completely dry. Then apply a light layer of the antiperspirant to the skin, making sure to cover the skin where you tend to get wet. Antiperspirants work best when applied at night to the affected areas. Before bed is the best time because people sweat less at night, so when you apply the antiperspirant, little or none of it gets washed away by sweat.
When you sweat less, it also means more of the antiperspirant is absorbed into the skin, allowing more sweat ducts to be plugged. Wash the antiperspirant off in the morning to help prevent skin irritation. If irritation isn't an issue, you can just leave it on; the antiperspirant should work throughout the day even after a morning shower.

Will antiperspirant irritate my skin?

Skin irritation and discomfort may occur with antiperspirants. To help prevent this:
- Wash off strong antiperspirants in the morning after you've left them on overnight.
- Do not apply antiperspirant on recently shaved skin; wait 24 to 48 hours.
Story: Otago region The rare Otago skink and grand skink – both of which can grow to 30 centimetres in length – are unique to Central Otago. They live in mountain tussocklands, but occupy only one-tenth of their original range. The skinks are endangered by habitat loss and introduced predators, among other threats. About this item Department of Conservation Photograph by Mike Aviss This item has been provided for private study purposes (such as school projects, family and local history research) and any published reproduction (print or electronic) may infringe copyright law. It is the responsibility of the user of any material to obtain clearance from the copyright holder.
Brain Wave Study Predicts Mastery Over Video Games

Brain waves can predict who will improve most on an unfamiliar video game. Using electroencephalography (EEG), the researchers looked at the electrical activity in the brains of 39 subjects before they trained on Space Fortress, a video game specially designed for cognitive research. None of the subjects were daily video game players. According to the researchers, the subjects whose brains oscillated most powerfully in the alpha spectrum (about 10 times per second, or 10 hertz) when measured at the front of the head tended to learn at a faster rate than those whose brain waves oscillated with less power. According to Kyle Mathewson, lead researcher and a postdoctoral researcher at the University of Illinois, the EEG signal was a robust predictor of improvement on the game. "By measuring your brain waves the very first time you play the game, we can predict how fast you'll learn over the next month," Mathewson said. The EEG results predicted about half of the difference in learning speeds between study subjects, he said. Mathewson said the waves of electrical activity across the brain reflect the communication of millions or billions of neurons. "These oscillations are the language of the brain, and different oscillations represent different brain functions," he said. The researchers noticed that learning to play the game improved subjects' reaction time and working memory, two skills that are important in everyday life. "We found that the people who had more alpha waves in response to certain aspects of the game ended up having the best improvement in reaction time and the best improvement in working memory," Mathewson said. One earlier analysis, led by Beckman Institute director Art Kramer (an author on this study as well), found that the volume of specific structures in the brain could predict how well people would perform on Space Fortress.
That study used magnetic resonance imaging (MRI) to measure the relative sizes of different brain structures. With EEG, researchers can track brain activity fairly inexpensively while subjects are engaged in a task in a less constricted, less artificial environment, whereas expensive MRI techniques require subjects to lie immobile inside a giant magnet. This new study offers clues to the mental states that appear to enhance one's ability to perform complex tasks. "You can get people to increase their alpha brain waves by giving them some positive feedback," Mathewson said. "And so you could possibly boost this kind of activity before putting them in the game." The researchers describe their findings in a paper in the journal Psychophysiology.
Scientists have been granted permission to screen test tube embryos for an inherited form of cancer. Embryo screening is already used to detect some other disorders. The Human Fertilisation and Embryology Authority (HFEA) approved the screening following a request from couples seeking IVF treatment. The watchdog said there was a strong chance of the genetic bowel cancer being passed from parent to child. Scientists in London hope using the controversial technique could help to wipe out this type of cancer. A spokeswoman for the authority said: "We can confirm that we have issued a pre-implantation genetic diagnosis licence for that particular condition." A team at University College London has been granted the licence to screen embryos for the gene that causes familial adenomatous polyposis (FAP). If a parent is a carrier of the gene there is normally a 50% chance it will be passed on to their children. The gene can lead to the development of rectal or colon cancer in early teenage years. Embryos created using IVF can be screened using the pre-implantation genetic diagnosis process. Then only embryos free of the gene will be implanted. One of the couples to win the right to have their IVF embryos screened said they were delighted with the decision. They told The Times newspaper: "We are overjoyed to have been given this chance, not only to do as much as possible to make sure our children don't have the gene, but to stop them passing it on." The technique is already used in screening for other disorders such as cystic fibrosis. But this is thought to be the first time it has been used for a disease that does not affect the sufferer until early adulthood. Dr Mohamed Taranissi, director of the Assisted Reproduction and Gynaecology Centre in London, said the latest decision should have been "put to a wider audience".
He told BBC Radio 4's Today programme: "We are still talking here about medical conditions that have serious implications, but we are talking about conditions that are not going to be there at the time of birth. "These are conditions that may or may not develop 20, 30, 40 years down the line. Is this the right thing to do? "It is not up to the HFEA or three members of the HFEA or even a clinician like myself to make these kinds of decisions. "This is an issue that needs to be debated properly." Dr Taranissi has an application for a licence to test for breast cancer genes being considered by the HFEA. Josephine Quintavalle of Comment on Reproductive Ethics said: "It's a very big ethical step forward. "The HFEA has yet again taken a big ethical decision without consulting the public." She said it was moving down a slippery slope from what had started as intervention only for diseases that threatened the viability of the embryo to diseases that might appear in adulthood. "We should be looking for medicines that cure, not medicines that kill," she said.
Lucerne - King of fodder crops
Last update: April 3, 2012

G.C. de Kock
Head: Agronomy Section, Agricultural Research Institute, Grootfontein

LUCERNE was probably the first crop to be cultivated for hay. It is indigenous to Mesopotamia. It spread with the Persian armies from Media to Greece in 470 B.C., where it was known as "Medic". Lucerne was brought from Greece to North Africa and thence to Italy, and in 100 B.C. from Italy to the rest of Europe. Apart from other parts of Europe, it was cultivated in the Lucerne region of Italy, from whence it obtained its common name. It spread from Spain to the Americas, where it is known as Alfalfa, an Arabic word meaning "best fodder". It came to South Africa from South America in 1861 and was first grown in the Worcester district. From there it spread to the Little Karoo and Karoo, mainly as grazing for ostriches. Lucerne is today the most important fodder crop grown under irrigation in the Karoo and, because of the high yields obtained, its palatability and high feeding value, is known as the King of fodder crops.

Lucerne is known botanically as Medicago sativa, belonging to the family Leguminosae. There are three species of the genus Medicago known as Lucerne, namely:
- Medicago falcata - Siberian yellow-flowered lucerne.
- Medicago sativa var. media - Variegated lucerne with various flower colours. (These two species are found mainly in the colder parts of the world.)
- Medicago sativa - Common purple-flowered lucerne, of which there are a large number of cultivars in the world. Some cultivars are hybrids of the three species.

Lucerne has a wide adaptation ability provided sufficient water is available. Temperature plays a very minor part in the regional distribution of lucerne. Moisture is, however, the limiting factor.
Lucerne can be cultivated under dry land conditions where the annual summer rainfall is higher than 400 to 500 mm and in winter rainfall areas where the annual rainfall is higher than 350 to 400 mm. Lucerne requires large amounts of water for optimal production. (From 750 to 800 kg of water is required for every kilogram of dry matter produced.) Due to its deep root system the lucerne plant can withdraw water from deeper soil levels. An important advantage of such a deep root system lies in the fact that the plant can survive long dry periods once it has been established.

Lucerne is adapted to a range of soil conditions provided the following requirements are complied with:

1. The soil should not be acid. The poor growth of lucerne on acid soils (pH < 5,5) is usually due to manganese and/or aluminium toxicity, reduced availability of molybdenum and phosphate, and inefficient nitrogen fixation. The application of lime to raise the pH to 6,0 and higher is a prerequisite for lucerne production on acid soils. The lime requirement of a soil is determined by soil analysis.

2. The soil should not be brack or alkaline brack (pH > 8,3). Lucerne is reasonably brack resistant, but the growth and development of seedlings is retarded by high concentrations of salts. The availability of other plant nutrients can also be influenced by a high pH and the presence of brack salts in the soil. The high pH and brack salt content can be rectified by the application of agricultural gypsum. The actual amount should be determined by soil analysis.

3. The soil profile is also very important. With its well-developed root system, lucerne prefers a deep soil. The depth of the soil profile can be limited by impenetrable layers or by compaction of the subsoil or a plough sole. Under such conditions a temporary water table can develop after irrigation or during a wet season, causing the lucerne roots to rot or be penetrated by disease organisms.
Lucerne therefore requires a deep, well-drained soil. When lucerne is to be established under irrigation, it is especially desirable to prepare a fine, firm seedbed. As the soil will be irrigated for a number of seasons, well-laid-out beds save costs and ensure high yields. Since efficient weed control is practically impossible in newly established lucerne lands, the soil must be prepared so that the seedbed is practically weed free at the time of establishment. It is good practice, especially with new lands, to cultivate another crop for at least a season before establishing lucerne. This enables the farmer to control the weeds and to ensure that the slope of the irrigation beds is corrected before lucerne is planted. High patches in lands cause lucerne to suffer from drought, while it becomes waterlogged in low patches. It must always be kept in mind that lucerne usually remains on the land for a number of years and that good soil preparation before establishment is of the utmost importance, because this will eliminate future problems. As fertility affects the yield and longevity of lucerne, fertilisers must be applied during the preparation of the soil. An application of superphosphate at this stage is recommended.

In the Karoo, lucerne can be sown under irrigation at any time of the year except June and July. The best time is during late summer (February to April). If sown in spring, August and September are the best months.

The time and money spent on the preparation of the seedbed can be wasted through the use of poor seed. Apart from the fact that an unsatisfactory stand is obtained, there is also the danger that weeds, especially dodder, can be introduced. It is thus desirable to use certified seed. Lucerne seed should be inoculated with the correct nodule bacteria before sowing in order to ensure that nitrogen fixation takes place.
Traces of Rhizobium bacteria are usually found on host plants, but even where lucerne has been in a crop rotation system on the land for years, the seed should still be inoculated. High seeding rates (15-20 kg of seed per hectare) are required under irrigation, intermediate rates in high rainfall areas and low seeding rates in low rainfall areas. Lucerne seed should preferably not be sown with a so-called cover crop. The seed can be broadcast by hand, seed drill or cyclone spreader. The seeding depth varies with soil type and the moisture content of the soil: the optimum depth is 10 to 20 mm on sandy loam soils and 5 to 10 mm on clay soils. The seed should only be lightly covered. The land can be irrigated immediately after seeding. Unless the seedbed is well prepared beforehand, weeds often damage the young crop. Combat the weeds by cutting with a mower set high above the ground before any weed seed is produced.

Lucerne is a perennial crop which occupies the land for a number of years, so the necessary nutrients must be applied to the soil before establishment. Lucerne requires a large amount of phosphate, which is closely associated with the utilisation of nitrogen by the plant; it stimulates root growth and is also of importance in the flowering and seed-setting processes. Superphosphate should be applied during soil preparation and worked into the soil. A shortage of nitrogen is seldom experienced in lucerne. It may, however, be necessary to apply a small amount of nitrogen at a young stage on soils where nitrogen is limited. The application of potassium is only essential on certain sandy soils and can be supplied as potassium sulphate. Fertiliser applications should preferably be based on the results of soil analyses. Lucerne requires large amounts of moisture but is at the same time sensitive to over-irrigation, especially where the latter occurs on badly drained soils, causing the accumulation of large amounts of free water.
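The seeding rates given above (15-20 kg of seed per hectare under irrigation) translate directly into a seed order. A minimal sketch, assuming a hypothetical 2.5 ha land; the field size and function name are illustrative, not from the article:

```python
# Seed-requirement sketch using the irrigated seeding rates above
# (15-20 kg/ha). The 2.5 ha field is a made-up example.

def seed_required_kg(area_ha, rate_kg_per_ha):
    """Kilograms of seed needed for a land of the given area."""
    return area_ha * rate_kg_per_ha

low = seed_required_kg(2.5, 15)   # lower end of the rate range
high = seed_required_kg(2.5, 20)  # upper end of the rate range
print(f"{low}-{high} kg of certified seed for 2.5 ha")
```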
The first irrigation is applied before establishment of the lucerne; the soil is finally cultivated and sown after this irrigation. Irrigation immediately after sowing can cause the formation of a crust, which is detrimental to germination. Germination can sometimes be encouraged by a light overhead irrigation if the equipment is available and the irrigation water is of good quality. If the soil does not tend to form a crust, the seed can be sown in dry soil and then irrigated. Root development does not take place in dry, loose soil.

It is not necessary to irrigate established lucerne frequently. Lucerne can use up to 80 per cent of the available soil moisture without any detrimental effect. The amount of water that should be applied depends on the length of the growing season, the number of cuts, climatic factors such as temperature, evaporation and rainfall, and the rate of infiltration of water into the soil. Lucerne is mainly flood irrigated by means of the bed system. The width and length of irrigation beds depend on factors such as the strength of the irrigation stream, the type and depth of the soil and the slope of the land. Two irrigations of 75 mm each will give better yields on shallow soils, or on soils with a poor water-holding capacity, than one irrigation of 150 mm. Even on deep alluvial soils it has been found that an irrigation of 75 mm immediately after cutting and another irrigation of 75 mm approximately 14 days later produced the most satisfactory yields. Where irrigation water is limited, lucerne can be irrigated by means of a sprinkler system. This method is, however, not always advisable in the Karoo, where the brack problem often occurs.

In South Africa seed is at present only produced from South African Standard lucerne. This is a landrace cultivar which originated by natural selection. Weeds not only reduce the yield of lucerne, but also impair the quality of hay and seed.
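The 75 mm and 150 mm applications discussed above can be expressed as volumes using the standard identity that 1 mm of water over 1 hectare (10 000 m²) equals 10 m³. A small illustrative sketch; the helper name is our own:

```python
# Converting an irrigation depth (mm) to a volume per hectare (m^3).
# 1 mm over 10 000 m^2 = 10 m^3, i.e. 10 000 litres.

def irrigation_volume_m3(depth_mm, area_ha=1.0):
    """Cubic metres of water delivered by the given depth over the area."""
    return depth_mm * 10 * area_ha

print(irrigation_volume_m3(75))       # one 75 mm application per hectare
print(2 * irrigation_volume_m3(75))   # two split applications...
print(irrigation_volume_m3(150))      # ...deliver the same total as one 150 mm
```

The split schedule delivers the same total volume; the advantage on shallow soils is that each application stays within the root zone instead of draining past it.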
Weed control should therefore receive attention before establishment. The following measures should be applied:
- Use certified weed-free seed.
- Prepare the seedbed correctly.
- Sow the correct amount of seed.
- Fertilise the lucerne correctly.
- Avoid waterlogging of the soil.
- Do not graze young lucerne.
- Practise thorough weed control.

Management and Utilization

The best stage to cut lucerne for hay is determined by the quality of the hay and the effect of cutting on the lifespan of the stand. By cutting lucerne continuously at a stage younger than the 10 per cent flowering stage, the plants become weaker and their lifespan is shortened. This is due to a reduction of the root reserves, which are built up during flowering and later stages. The detrimental effect of harvesting at a young stage can be offset by letting the lucerne come into flower once or twice in late spring and autumn. Insignificant differences are found in hay yields from lucerne harvested at the 10, 50 or 100 per cent flowering stage, but a reduction in the percentage of leaf and an increase in the fibre content are experienced when lucerne is harvested after the 10 per cent flowering stage. Considerable weed infestation also occurs when lucerne is regularly harvested at a young stage.

As lucerne becomes older, the stand becomes thinner and growth poorer, a result of various factors such as exhaustion of the soil's fertility and a gradual loss of plant vigour due to regular cutting and perhaps grazing during the active growing season. It then becomes necessary to reactivate growth by tilling the lands. A thorough tilling during late winter, i.e. before the active growing season, has various advantages, the most important of which are the improvement of the soil structure and the destruction of many annual and even perennial weeds. It also gives the opportunity to replenish the nutritional requirements of the soil.
A medium application of phosphate before tilling usually stimulates production.

A great advantage of lucerne as a fodder crop lies in the many forms in which it can be utilized. It can be fed as ordinary hay, as leaf meal and stem meal, or as lucerne cubes. When weather conditions are unfavourable for haymaking, a good-quality silage can be made from lucerne. Although pre-eminently a hay crop, lucerne can also be utilized with great success as a grazing crop. As has been pointed out, however, great care should be taken not to overgraze lucerne lands, because this could be detrimental to future production. Such harmful effects can be reduced by cutting two crops at the 10 per cent flowering stage in spring and autumn.

Karoo Agric 1 (1)
There are some techniques you can use to reduce eyestrain, improve your reading comprehension, and make reading more pleasurable.

A halo is a thin white line perceived around black when black is placed against white. People with clear vision perceive halos. If you learn to see halos, you can relieve eyestrain and clear your vision.

Goal – Learn to perceive halos.

Steps –
- Place a white card against a piece of black felt.
- Close your eyes, relax, take a deep breath, and open your eyes. Do you see a thin white line along the edge in the white area?
- Look along the top or bottom edge of a line of black type against a white background.
- Close your eyes, relax, take a deep breath, and open your eyes. Do you see a thin white line along the bottom or top of the black letters on the card?
- Close your eyes, take a few deep breaths, relax, and open your eyes again. Do you see a thin white line around the black letters on the card?
- Read by looking at the halo at the bottom or top of the line of type, or the halo around the letters. Practice seeing halos with different sizes of type.

Explanation – Your eyes see size, shape, and color by contrast, and the contrast creates the illusion of a halo. When you consciously perceive halos, your mind unconsciously follows the halo around the letters in a relaxed state free of eyestrain.

Hints – Coax your mind to think white. Close your eyes and imagine something brilliant and white, or scan across the black print and notice the white behind the letters. If you cannot see the halo, do not strain. Practice other relaxation techniques and try this later when you are more relaxed. If seeing halos is too difficult, scan across the middle of the print, noticing the white background. The goal is not to hold onto or grab at the letters and words when you read, but to shift across the line and let the letters and words flow into your mind. After a while, you can shift your focus to the bottom of the print and notice the white at the bottom.
The white becomes brighter and looks like a thin halo with practice.

You achieve reading comprehension by letting the words flow through your eyes and the meaning flow into your mind without holding onto or grabbing at the meaning.

Goal – Practice reading comprehension.

Steps – Let your eyes follow the thin white line, or shift back and forth over two or three words at a time while you read.

Explanation – Shifting back and forth helps to keep your mind and eyes coordinated. When you lose eye and mind coordination, eyestrain and blurry vision result.

Hints – It sometimes helps to break the strain if you read out loud while practicing this technique, or have a partner read out loud while you read the print to yourself (you will both need a copy of the same page). Do not let your eyes move on while your mind is staying on an idea. Make up pictures when you read to help you become more interested in the subject matter and gain greater comprehension. You can catch a glint of light on the edge of a bent card and mentally place it next to the line of print to help see the thin white line.

Make Up Pictures

Goal – Create pictures when you read.

Steps – Read short passages of text, or have a partner read short passages of text to you, and make up pictures as you go. If you are working with a partner, describe the pictures to each other.

Explanation – Not everyone makes up pictures when they read, but if you learn to, it can help you achieve a relaxed state of mind because it increases your interest in the material.

Hints – If you have trouble making pictures, do the following:
- If you are right-handed, look to the left when you construct the picture and to the right when you retrieve it.
- If you are left-handed, look to the right when you construct the picture and to the left when you retrieve it.
Impulse reading teaches you to accept visual images of letters and words as they occur and to be immediately ready for the images that follow, without grabbing at or holding onto any one letter or word or its meaning.

Goal – Immediately see images on cards as they flash in front of you.

Steps – This technique is more easily practiced with a partner handling the cards.
- Quickly place one card at a time face up in front of you.
- Say the names of the cards as you see them.
- Do not stop on a card you do not see, but go to the next card immediately.
- Vary the distance by moving closer to the cards (for farsight) or farther away (for nearsight).

Explanation – Impulse reading teaches you to rely more on your visual sense because there is no time to employ other senses.

Reading at the Computer

Adjust your monitor so the print is black against a white background. This provides the most contrast for reading. If your monitor has a low number of dots per inch (dpi), the black lettering will not be very black and the white might have a slight tint, making it unlikely you will be able to see halos on your computer screen. However, the other principles of reading apply.
- Let your eyes travel along the bottom of the letters when you read.
- Think of bright white.
- Notice motion as your eye moves along the line.
- Give your eyes rest by taking breaks, palming and swinging.
- Make up pictures as you read.

Make sure your monitor has good resolution (dpi) and does not flicker. The flickering of a monitor can make your mind tired and create tension in your eyes. Position the computer to minimize glare, and use full-spectrum lighting in your work area if you do not sit near a window.
Political Masterclass: Constantine and Christianity, III

Scholarly opinion on the genuineness of Constantine's fathering of the Christian religion and soon-to-be church rests at every point on the scale, from a true and deep religious conversion at one extreme to the calculated moves of a statesman consolidating his empire at the other, along with all points in between. However, regardless of how deeply genuine his conversion may or may not have run, Constantine could not have helped looking at Christianity through the eyes of a sovereign – a sovereign educated in the belief that the success of the Roman Empire was due at least in part to the unifying influence of the official state cult. As Christianity continued to spread and gain in numbers, he would have been hard-pressed not to see it as a potential replacement for the old Imperial cult, if only its issues with internal unity could be resolved.

With his goal of a united church in mind, Constantine's control over the Council of Nicaea in 325 AD was much tighter than in his earlier attempts at Arles. In the interests of sustaining unity and peace throughout his empire, he dominated both the proceedings and their outcome, viewing himself as an integral part of the new, unified church being formed. Indeed, the association of Constantine the Roman Emperor and the Christian church was a unilateral one at the Emperor's discretion; the Emperor defined the terms of the alliance and set them forth for the church to accept, which it was more than happy to do, given the great opportunities implicit within for the religious organization. The opportunities were still strictly limited, however: Constantine was an absolutist emperor who had no intention of allowing an increasingly influential Church to operate independently of the state's guidance and oversight. Yet Constantine's meddling in the internal affairs of the emerging Christian religion was not yet finished.
He did not intend to have a church limited to a small, pristine body of the elect few. Such an organization would be useless in encouraging the unity of the state as a whole. Instead, he envisioned his state-supported Church as more of an 'umbrella' organization, able to encompass all the differing beliefs and factions that made up the whole of the Christian faithful under an over-arching mutual interest – a goal which, more than anything else, reveals how little he understood his chosen religion.

The sought-after religious unity, which would in theory foster political unity once the church was tied to the state, was a goal of Constantine's going into the council. He was not, however, willing to impose such unity by force, which would only have served to increase tensions and aggression between the factions. Instead, he subtly used interpretations of Christian texts, including quotes attributed to Christ, to isolate the extremists who were clamouring for such coercion and rebuke them before the larger, more moderate mass of the faithful. (As a quick and dirty example, one could cite Jesus' commands to "turn the other cheek" and "love your enemies" to support a policy of non-intervention with non-believers, or instead bring up the tale of Jesus driving the money-changers from the temple as support for coercion.) Constantine was able to make his own point by taking advantage of ambiguities in the text and interpreting the message to his own ends, as he did in his "Oration to the Saints," where among other things he related the story of Christ's arrest to put forward the interpretation that Jesus made a decision "to choose rather to endure than to inflict injury, and to be ready, should necessity so require, to suffer, but not to do, wrong." These themes, viewed in the context of public policy, push forward Constantine's ideal of a tolerant and diversified Church operating under the "umbrella" of mutual interest.
The Emperor may have claimed to be Christian, but he resisted the pressure to use coercion to enforce belief. Indeed, there is some basis on which to claim that Constantine did not particularly care which side in the Arian controversy proved victorious at the council (recall, if you will, his earlier opinion on the schism), only that the victor could be shown to have the majority of the support of the lay worshippers. As long as he was satisfied on that count, he could – in theory – be assured of harmony within the Church and, by extension, his Empire. As it turned out, the doctrine of Arianism was defeated and declared heresy, and the Nicene Creed, reaffirming belief in the orthodox viewpoint (most importantly that Christ was fully divine, of one essence with the Father within the Holy Trinity of the Father, the Son, and the Holy Ghost), was put into widespread use across the Empire, primarily for baptisms, but also taking on different usages in local churches depending on local customs and symbols. Unfortunately for the ambitious Emperor, his decisions were to have some unintended consequences…
Researchers at UT Southwestern Medical Center report the identification of a new cellular source for an important disease-fighting protein used in the body’s earliest response to infection. The protein interferon-gamma (IFN-γ) keeps viruses from replicating and stimulates the immune system to produce other disease-fighting agents. Neutrophils, the newly identified cellular source of the protein, are the major component of the pus that forms around injured tissue. The researchers also report that the neutrophils appear to produce IFN-γ through a new cellular pathway independent of Toll-like receptors (TLRs): the body’s early warning system for invasion by pathogens. This finding indicates that mammals might possess a second early-alert system - the sort of built-in redundancy engineers would envy, said Dr. Felix Yarovinsky, assistant professor of immunology and senior author of the study published online in the Proceedings of the National Academy of Sciences in June. “We believe our mouse study provides strong evidence that neutrophils, white blood cells created in the bone marrow, produce significant amounts of IFN-γ in response to disease,” Dr. Yarovinsky said. “The finding of a new and essential cellular source for IFN-γ challenges a long-held belief in the field and is significant because neutrophils are the most common kind of white blood cell.” Two pathogens were used in this study: the parasite Toxoplasma gondii - which can cause brain damage in humans and other mammals that have compromised immune systems - and a type of bacterium that causes gastroenteritis, Salmonella typhimurium. Innate immunity is the body’s first line of defense against pathogens, including those that it has never before encountered. Adaptive immunity is the secondary system that battles pathogens to which the body has previously been exposed and to which it has developed antibodies. Textbooks list natural killer (NK) cells and T cells as the body’s significant sources of IFN-γ. 
Although large numbers of neutrophils have long been observed to congregate at the site of a new infection, they were commonly thought to be first responders or foot soldiers in the battle against disease, rather than the generals this study indicates they are, Dr. Yarovinsky explained. About 20 years ago, clinical reports in humans and animals suggested that neutrophils might produce IFN-γ, but the idea was largely ignored by the scientific community until the last decade, he said. Since then, studies at UT Southwestern and elsewhere have found that mice lacking NK and T cells, and therefore expected to be unable to produce IFN-γ, somehow continued to withstand infections better than mice genetically unable to make any IFN-γ. These observations suggested the possibility of an unknown source of the protein, he explained. In a series of experiments, the UT Southwestern researchers identified neutrophils as the major source of IFN-γ in mice lacking NK and T cells. "Based on what we know about neutrophils, their large numbers and rapid deployment to the site of infection should provide an important means of very early, robust, and rapid elimination of disease-causing agents," the researchers wrote. Although neutrophil-derived IFN-γ alone is insufficient to achieve complete host protection, the protein significantly extended the survival of mice in this study, Dr. Yarovinsky said. In related news, the Burroughs Wellcome Fund in June announced that Dr. Yarovinsky had been selected for its 2013 Investigators in the Pathogenesis of Infectious Disease Award to further investigate mechanisms of host defense against various infectious diseases mediated by IFN-γ produced by neutrophils. The award will provide $500,000 over five years to pursue this line of research.
Wed February 29, 2012

Dartmouth Study Says Alcohol in Movies Compels Teens to Drink

A new study from researchers at Dartmouth Hitchcock Norris Cotton Cancer Center found that the more movies with images of alcohol teenagers watch, the more likely they are to start drinking. The study also found that an increase in movie watching was a major risk factor for teens who already drink to start binge drinking. Researchers surveyed 6,500 randomly selected American teenagers ages 10-14 over a period of two years. The teens were asked which blockbuster films they had seen and how many of these films they had watched over that period. Researchers measured how many times each film had a scene with alcohol or an alcohol product placement. The study found that on average teens were exposed to alcohol in movies for a total of 4.5 hours; some teens were exposed to a total of 8 hours of alcohol-saturated images in films. Teens who watched the most films were twice as likely to start drinking as those who watched fewer films. The study found that 80 percent of the blockbuster films watched by teenagers have drinking scenes and 65 percent of the films have product placement of alcoholic beverages. The researchers looked at many risk factors, including whether the teens' parents or friends drank. Exposure to alcohol in films, the study concludes, is a major risk factor for teens to start drinking or to binge drink. Dr. James Sargent, a professor of pediatrics at Dartmouth Medical School and a lead author of the study, says the film industry needs to be more aware of how alcohol in films can negatively affect the teens who watch these movies. "This study shows that exposure to movie depictions of alcohol predicts alcohol onset and progression to binge drinking during adolescence and argues for greater attention to both smoking and drinking in movie ratings," says Sargent.
Mark Blazis Outdoors
Published Friday, January 18, 2013, at 6:00 a.m.

Reader John Savasta shared that on Dec. 9 he took a very old buck that likely wouldn't have made it through the winter because its teeth were totally ground down. Deer abrade their teeth at a fairly constant rate because of the hard minerals inadvertently chewed while browsing. That abrasion can vary slightly, region by region, depending on soil types. Nevertheless, the extent of tooth wear is a deer biologist's most reliable measure for aging. By age 10, an age few deer achieve, all teeth will have abraded to the gum line, dooming the animal to starvation. Deer would certainly live much longer if they had a good dentist.

Savasta's old buck's rack, though an impressive 21 inches wide, had atrophied to only four points. Just a few years back, it must have been massive, as antlers typically peak in size during a buck's prime — between 4-1/2 and 7-1/2 years old.

Savasta wondered why this over-the-hill buck was still making a territorial scrape on the ground when he spotted it a full month after the rut. Outside of the breeding season, fresh scrapes and rubs are very uncommon, as they have little, if any, function. Even old bucks, however, continue to display dominance and territorial behaviors throughout their lives. It's not at all surprising that this old geezer made his scrape exactly a month after the November rut. That behavior was likely a response to a hot doe's monthly cycle, which continues regularly until she gets pregnant. Although the vast majority of does had bred during the primary early-November rut, a small number that didn't mate ovulated again 28 days later, attracting and exciting nearby bucks, all quite ready, able and always eager to mate.
With so few does still in breeding condition then, highly competitive bucks resorted to all resources and strategies — including once again marking their territory, first clearing the ground with their hooves, then peeing and stepping in their scrapes. By now, there are few if any mature does that haven't gotten pregnant. If there were, they would continue ovulating every 28 days until they did get pregnant. But getting pregnant this late in the season is never desirable. Late-conceived fawns have the extreme disadvantage of being born late, which means far less time to develop sufficiently to survive the challenges of the following winter. The genes of does with a tendency for very late breeding consequently get quickly eliminated from the gene pool, reinforcing normal November mating, which affords the best chances for fawn survival.

Many local hunters reported doing well in Vermont this season. The once-great deer hunting in the Green Mountain State is finally coming back. According to state deer biologist Adam Murkowski, the herd, which had diminished alarmingly in recent decades, now numbers about 125,000 — larger than Massachusetts' herd. Scientific management, the mildest winter in four decades and an early green-up resulted in good productivity and fawn survival this past season. Archers benefited, tagging 20 percent more deer this year than the previous three-year average. In all, 13,850 were harvested in 2012 — about 11 percent of the population.

Vermont moose hunters had a fair season, reflecting a population that is no longer beyond healthy carrying capacity. During the one-week Oct. 1-7 archery season, 50 bow hunters harvested 17 moose, a 34 percent success rate. During the regular season from Oct. 20-25, 385 rifle hunters took 205 moose, a 53 percent success rate.
The low harvest numbers are attributable to unseasonably warm temperatures during moose season and lower overall population densities, which, on the positive side, make for better habitat, healthier moose and fewer collisions with vehicles.

In contrast, bear and turkey hunters did exceptionally well in 2012. Vermont's fall turkey harvest of 1,365 birds was up 53 percent from the previous three-year average, while its harvest of 621 bears was up 20 percent compared with that same period. The big increases were due to large populations and the weather. In 2011, a huge crop of nuts and seeds kept turkeys and bears contentedly deep in the forests. In 2012, a poor crop of nuts and seeds sent both species foraging out of the forest into open corn and grass fields, where hunters had much greater luck finding them.

Vermont's 2013 nonresident licenses are relatively inexpensive and provide an opportunity we don't have in Massachusetts — Sunday hunting. An archery deer license is only $75. A regular hunting license that includes a November rifle-season buck tag is just $100. Muzzleloader licenses are only an additional $40. Vermont is proving a nonresident hunter's bargain.

Today — Worcester County League of Sportsmen's Clubs meeting at the Singletary Rod & Gun Club, 300 Sutton Ave., Oxford. Info: (508) 987-8783.

Today through Sunday — Fly Fishing Show, Best Western Plaza Hotel, Marlboro. International Fly Fishing Film Festival, 6:30 tonight. Cost: $15, or $10 with admission to the show. Info: www.flyfishingshow.com.

Saturday — Massachusetts-Rhode Island Trout Unlimited Banquet, Best Western Plaza Hotel (adjacent to Fly Fishing Show). Cocktails at 5:30 p.m., buffet dinner at 6:30.

Saturday — Singletary Rod & Gun Club Game Supper, 300 Sutton Ave., Oxford. Join to participate. Info: (508) 987-8783.

Saturday and Sunday — The 14th annual Rutland Sportsman's Club ice fishing derby has been postponed because of unsafe ice. It has been rescheduled for Feb. 9-10.
Info: Ronnie Howe at (774) 696-6465 or the club at (508) 886-4721.

Tuesday — "Taking on Invasives: Battles Won and Lost and Their Lessons," Walden Woods Project stewardship lecture series, 7 p.m., Thoreau Institute, 44 Baker Farm Road, Lincoln. Speaker: Tim Simmons, MassWildlife restoration ecologist. Free. Reservations: (781) 259-4707.

Thursday — Opening of the CMTA Hartford Boat Show, Connecticut Convention Center, Hartford. Info: www.hartfordboatshow.com.

Thursday — World Fishing & Outdoor Exposition, Suffern, N.Y. Info: www.sportshows.com/suffern/index.html.

Feb. 8-10 — New England Fishing & Outdoor Expo, DCU Center, Worcester. New leaders and seminars. Meet and learn from local and national authorities on hunting and fishing; meet outfitters to book trips; top equipment at discounted prices. Info: www.newenglandfishingexpo.com, firstname.lastname@example.org or (774) 243-1442.

Contact Mark Blazis at email@example.com
This is a lesson on units of measurement for learners in year 5. In this lesson, learners will be able to:
- solve problems involving the calculation and conversion of units of measure; and
- use, read, write and convert between units of length, mass and volume.

They will also be able to convert between miles and kilometres. This lesson follows the National Curriculum requirements for year 5 mathematics. The lesson can be used for whole-class teaching by teachers and at home by learners. It includes lots of drag-and-drop activities with instant feedback.

To access the lesson online, choose the multiscreen.html file. To access it offline, choose the swf file.

Click here to view other year 5 topics.

Please leave a review if you find our resources helpful, and be sure to follow us if you wish to be kept up to date with when we upload new and exciting resources.
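The miles-kilometres conversion covered by the lesson is commonly taught with the classroom approximation 5 miles ≈ 8 km. A small sketch of that arithmetic; the helper names are our own, not from the resource itself:

```python
# Miles <-> kilometres using the common teaching approximation
# 5 miles ~ 8 km (the exact factor is 1.609344 km per mile).

def miles_to_km(miles):
    """Approximate kilometres for a distance in miles."""
    return miles * 8 / 5

def km_to_miles(km):
    """Approximate miles for a distance in kilometres."""
    return km * 5 / 8

print(miles_to_km(10))   # 16.0
print(km_to_miles(24))   # 15.0
```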
Mid-Eighteenth Century Pennsylvania Records Used to Tell the Story of Black clouds rested heavily on the southern horizon and foretold of an unusually severe storm ... all port-holes and hatches were closed and fastened, the upper yards were lowered and the sails furled ... Soon after 8 o'clock a hurricane broke loose, far more terrible than we dreamed an ocean could be ... winds howled, roaring waves ran mountains high ... All passengers were gathered in the cabins and a solemn stillness reigned ... about 10 o'clock there was a terrible shock ... the side of the ship against which my wife was leaning was now the bottom and the bottom had become one of the sides of the cabin and we realized the ship had capsized ... a cry was raised for axes to cut away the masts ... the Captain bravely climbed the main mast, and under his blows it parted and went over. Instantly, the ship righted itself and floated on even keel! The foregoing is an account of a voyage recorded by a Moravian minister traveling from Germany to Bethlehem, Pennsylvania. It offers the kind of detail most family historians would like to find on their eighteenth-century ancestors. Yet, how often can any historian find this kind of detail? Seldom, if ever! Details concerning what an ancestor may have done on any given day can be difficult, if not impossible, to find. But, details concerning shared or common experience have been recorded, and that information is useful in gaining insight into eighteenth-century life in rural Pennsylvania. Shared experience is an analytical tool used by historians to research, interpret, and analyze the past. Men and women of all generations have shared experience, such as our contemporary habit of purchasing food in a grocery store. Details of that experience include the day of the week, the time of day of the purchase, and the name of the store. The fact that most of us purchase food in a store provides an experience we all share in common.
Immigrants Had Shared Experience The same was true of all eighteenth-century Pennsylvania immigrants. All newcomers had to journey there on a ship, an experience shared in common. The specifics of each voyage were unique to that journey and to the passengers who traveled on that particular ship. But, on that ship and others, people had shared experiences as well. Ships' captains carried out similar or routine sailing maneuvers on each and every trip across the Atlantic. As a sailing vessel approached the North American continent, for example, the captain of the ship would have ordered a member of his crew to start sounding for the bottom. The crewmember dropped a rope with a heavy lead weight over the side of the ship to test the depth of the water. He was trying to find the bottom. If the weight touched bottom at eight fathoms, that meant the ocean was only forty-eight feet deep. (One fathom equals six linear feet.) That indicated the ship was approaching land. Testing for the bottom was especially important if the ship approached the coast of New England or New York in a fog bank, a common occurrence. An account of one voyage noted, "No land was seen even though the ship had proceeded to eight fathoms. When at 10 a.m. the mist lifted, America was seen for the first time." A 1742 account of another voyage noted that the captain found the bottom at 35 fathoms or 210 feet. On May 19 a cold, thick fog covered the sea. The captain of this particular ship dropped anchor, as he wanted to send a small boat ashore to find a local navigator, another common experience. If a ship's captain was unfamiliar with his present location or his destination port, he waited until he could arrange with a local expert who could pilot the boat into the harbor with some degree of safety. Depending on the distance to shore and the condition of the passengers and crew, the captain may have sent a smaller boat ashore for other reasons: to get fresh water or to bury the dead.
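The sounding arithmetic in these accounts (one fathom equals six linear feet) is simple enough to check directly; a minimal sketch:

```python
FEET_PER_FATHOM = 6  # one fathom equals six linear feet

def fathoms_to_feet(fathoms):
    return fathoms * FEET_PER_FATHOM

# The depths mentioned in the voyage accounts above:
print(fathoms_to_feet(8))   # eight fathoms = 48 feet, a sign of approaching land
print(fathoms_to_feet(35))  # 35 fathoms = 210 feet, as in the 1742 account
```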
A record kept of one crossing noted that a boat went ashore near New London, Connecticut, to bury an infant born en route to Pennsylvania. While ashore they encountered a resident who commented on how fit they appeared after such a long voyage. He noted that passengers on most ships usually got a fever and many often perished. He went on to say, "They [the dead] were placed in scores in large ditches near the shore and covered with sand." Statements similar to this one suggest that the remains of many immigrants were, perhaps, similarly buried on the beaches of New England, Long Island, New Jersey, and Delaware.
Health experts warned Wednesday that Australia's life expectancy could be sent into reverse after a new study found alarming levels of obesity among teenagers. Nearly a quarter of 13-to-18-year-olds are overweight or obese, according to the survey of 12,000 secondary school students, which said Australia was facing a "chronic disease time-bomb". "If ever there was a wake-up call for Australians, this is it," said Professor Ian Olver from the Cancer Council of Australia, which commissioned the National Secondary Students' Diet and Physical Activity survey. "As obese kids move into adulthood, the heightened risk of chronic diseases like cancer means previous gains in life expectancy may be reversed. "We may see today's teenagers die at a younger age than their parents' generation." The study found an "excessive prevalence of overweight and obesity among students", with 23.7 percent of the teenagers above their healthy weight. Just 15 percent met national guidelines for an hour's physical activity every day, with girls more lax than boys and exercise levels diminishing with age. Almost one in three students said they drank four or more cups of sugary or sports drinks per week, and 43 percent ate fast food or takeaways at least once a week. Only 14 percent ate the recommended amount of fruit and vegetables. Nearly half (47 percent) of students had three to four televisions in their home, with a further 17 percent reporting five or more TVs and 47 percent saying they had a set in their bedroom. Only one percent had no TV at all. A majority (71 percent) engaged in "small-screen recreation" -- watching television and DVDs, playing video games and using computers -- for more than two hours on an average school day, exceeding national health guidelines. 
"This piece of research confirms what we've feared for some time -- that the high school students of today will grow up to be the heart attack victims of tomorrow," said Lyn Roberts, head of the National Heart Foundation, which co-commissioned the research. Australia is one of the world's fattest nations, with the most recent National Health Survey classifying 25 percent of people aged 18 or older as obese, and 37 percent as overweight. The total cost of obesity, including health and productivity costs, was estimated to be around Aus$58 billion ($58 billion) a year in 2008, the most recent available figures.
In a development offering great promise for additive manufacturing, Princeton University researchers have created a method to precisely create droplets using a jet of liquid. The technique allows manufacturers to quickly generate drops of material, finely control their size and locate them within a 3D space. Although both 3D printers and traditional manufacturers already use droplets to carefully add material to their products, the new jet method offers greater flexibility and precision than standard techniques, the researchers said. For example, delivering droplets with jets allows for extremely small sizes and allows designers to change droplet sizes, shapes and dispersion, as well as patterns of droplets, on the fly. “A key aspect is the simplicity of the method,” said Pierre-Thomas Brun, an assistant professor of chemical and biological engineering at Princeton and the lead researcher. “You draw something on the computer, and you can create it.” In an article published Oct. 28 in the Proceedings of the National Academy of Sciences, the researchers describe how to control the dispersion of drops from a thin jet of liquid. They were able to inject calibrated droplets of glycerin into a liquid polymer to demonstrate placement over three dimensions – a key requirement for manufacturing. By curing the polymer, the researchers were able to affix the droplets in desired locations. Although the researchers used glycerin for the experiment, they said the method would work with a wide variety of substances commonly used in manufacturing and research. The method is scalable and can be adjusted to work with a wide range of printing patterns, the researchers said. The jets can be controlled to disperse drops in lines or in sinusoidal wave patterns, creating flexibility in manufactured forms. 
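As a rough illustration of the line and sinusoidal-wave patterns described above, the target coordinates for a train of droplets can be generated from a simple parametric rule. This is purely a sketch of the geometry, not the researchers' control software; the function name and parameters are invented for illustration.

```python
import math

def droplet_positions(n, spacing, pattern="line", amplitude=1.0, wavelength=10.0):
    """Return (x, y) target coordinates for a train of n droplets.

    A geometric sketch of the 'line' and 'sinusoidal wave' patterns
    described in the article -- hypothetical, not the actual method.
    """
    points = []
    for i in range(n):
        x = i * spacing
        if pattern == "sine":
            y = amplitude * math.sin(2 * math.pi * x / wavelength)
        else:
            y = 0.0
        points.append((x, y))
    return points

line_drops = droplet_positions(5, 2.0)                  # evenly spaced along a line
wave_drops = droplet_positions(5, 2.5, pattern="sine")  # same count along a sine wave
```

Changing the pattern, spacing or amplitude arguments is the "on the fly" flexibility the article describes: the same rule yields a different dispersion without retooling.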
The researchers said that the technique could be applied in areas including biomedical scaffolding, acoustic materials and bioreactors, as well as standard 3D manufacturing. The researchers said the method also relieves designers of the need to constantly adjust and fine-tune their machines to create varied shapes and sizes. Because the mathematics controls the dispersion of the droplets, it is easy to make changes to fit a project’s requirements. “Our approach is robust in the sense that all we do is draw the jet and the drops arrange themselves,” Brun said. “I think it is easier and maybe more versatile than current methods.” Other authors of the paper include: Lingzhi Cai, a graduate student in chemical and biological engineering at Princeton; and Joel Marthelot, a former postdoctoral research associate in Brun’s lab who is now at Aix-Marseille University in France.
John Houbolt: NASA’s Unsung Hero: Very few people have ever heard of John Houbolt, but if not for John we would never have been able to land on the Moon in 1969, and probably not for another three years, or more. John Cornelius Houbolt (April 10, 1919 – April 15, 2014) was an aerospace engineer credited with leading the team behind the lunar orbit rendezvous (LOR) mission mode, a concept that was used to successfully land humans on the Moon and return them to Earth. This flight path was first endorsed by Wernher von Braun in June 1962, after two years of dedicated and career-damaging advocacy by John, and was chosen for the Apollo program in July 1962. The critical decision to use LOR, “The Way To The Moon” proposed by John Houbolt, was viewed as vital to ensuring that Man reached the Moon by the end of the decade as proposed by President John F. Kennedy. In the process, LOR saved time and billions of dollars by efficiently using existing technology. The book “The Man Who Knew the Way to the Moon” tells John’s story.
By Storm Dunlop Author and Fellow of the Royal Meteorological Society Dorset has seen some extreme weather over the years Because it lies on the south coast, Dorset, like the neighbouring counties of Devon and Somerset, is often subject to heavy rainfall, often brought by warm, moist winds from the sea to the south and southwest. The most extreme rainfall occurred at Martinstown, near Dorchester, on 18 July 1955, when 279mm of rain fell in a single day (0900 to 0900 GMT, 18-19 July), mostly during a 15-hour period, setting a British record. In fact, this record was only recently surpassed by the 317mm recorded in 24 hours at Seathwaite in Cumbria on 19-20 November 2009. Martinstown - formerly known as Winterbourne St Martin, where "Winterbourne" means that the small river flows mainly during the winter - suffered extensive flooding some hours after the peak downpour. Later analysis by the Meteorological Office suggested that even heavier, unrecorded, rainfalls of over 305mm probably occurred at Winterbourne Steepleton and Winterbourne Abbas, upstream of Martinstown. A particularly severe storm hit the coast at Portland on 13 February 1979. At high tide, freak waves between 12m and 18m high battered houses 180m or more from the shore, destroying homes and rendering others uninhabitable. Even lorries were swept down the road. The Royal Naval helicopter base was flooded, the principal gas main was ruptured, and the only road from the Isle of Portland to the mainland was rendered impassable. Although the winter of 1978-1979 was severe across the whole country, Dorset (with Wales and the West Country) suffered particularly badly in February 1978. A combination of meteorological circumstances brought a series of depressions carrying plentiful moisture-laden air, which encountered bitterly cold easterly winds that had originated in Arctic Russia. The result was a phenomenal amount of snow across the county, bringing it to a standstill.
Considerable snowfall occurred on February 15 and 16, but the real blizzard began on February 18 and continued for about 30 hours. Some 46cm of snow fell, but the high winds created drifts of over 9m in places. All minor roads and nearly all major roads and railway lines were blocked. Twenty-five thousand houses were without electricity, 100,000 premises without water, and 10,000 households and businesses were without telephones. Although the weather conditions eased almost immediately, some people were stranded for days, and even a week later many services were still disrupted. On 5 June 1983 a series of violent thunderstorms swept eastwards from Lyme Bay across Weymouth, Poole, and Bournemouth, and on into Hampshire and Sussex. They were accompanied by torrential rain (74mm fell at Winfrith) and heavy hail with stones the size of golf balls (43mm) in the Poole and Bournemouth areas, with one stone measured at 65mm across. The earliest system had associated tornadic activity that raised material from the ground, possibly from Hamworthy, Poole, because large amounts of coke fell at Poole and Bournemouth, together with some stones, and a piece of coal (and a crab in Sussex). Some hailstones were found that had formed around what appeared to be roof chippings. Two hailstorms affected Dorset on 7 June 1996, one of which had a long track from Portland to north-eastern Oxfordshire, along much of which the hail exceeded 30mm in diameter. Stones about 50mm across fell at Crossways, northeast of Weymouth. Rainfall was about 40mm northeast of Blandford, but the peak (72 mm) occurred near Wantage in Oxfordshire. Wild Weather of the South - BBC One 1930 BST Monday 20 September
There is apparently some confusion about the terms "courts of record" and "courts not of record" and what is meant by recording a decision. I am sorry if my post was unclear. Unfortunately, when you speak "attorney" you are not always understood. There are two separate and distinct uses of the word "record" in conjunction with court cases. The first use of the word is very general, meaning anything recorded by the court during the litigation. The second use of the word, the one I am referring to, is very restrictive and refers only specifically to the decision made by the judge on an appeals case. There are two general distinctions made between the types of courts and decisions made by the judges. In trial courts, the legal matters are heard by a judge or arbitrator. When the judge or arbitrator makes a decision, that decision applies only to the case before the judge. No other case can use the decision of the judge or arbitrator as a legal precedent, that is, as an authority of the law for a similar case. No "record" is made of the judge's decision other than the general case file containing the ruling of the court in the form of a Minute Entry or a Judgment. A minute entry is a ruling from the court on a case that is part of the file. A judgment is a formal document signed by the judge making a decision in the case either in favor of the plaintiff or the defendant. The judgment is said to be "enforceable," meaning the winning party can collect on the judgment if that is part of the award. However, the case file, containing all the pleadings, could be called a court record, but that is not what is meant by a "court of record." The confusion comes from the use of the word record to mean a transcription of what goes on in the trial court. You may have a multipage ruling from the trial judge explaining his thought process in deciding the case, but that is not a formal "record of the decision" in the law because it is not binding on any other case or any other parties.
If one or more of the parties disagree with the court's ruling, they can usually appeal the judgment or other appealable ruling. The case file containing all of the pleadings filed in the court and the rulings of the judge are sent to the appeals court where the record of the case is reviewed. The parties are asked to explain why they think the court below (the judge) was wrong or right, and the parties file appellate briefs, long documents explaining their legal positions. The appeals court then may or may not hear oral argument of the case (in court before the appeals judges) and then the case is decided. The appeals court judges write a formal opinion expressing the reasons for their decision and then either issue a formal decision or a memorandum decision (binding only on the parties). If the parties still disagree, they can appeal to yet another, higher court such as the various state supreme courts or the U.S. Supreme Court. The formal written decisions of the various courts of appeal are collected by the National Reporter System. So, the National Reporter System has only those decisions coming from "courts of record," that is, courts that write a formal, legally binding, precedent-setting decision. The record spoken of has nothing to do with the "case record," meaning all the stuff produced in the trial court. If you find a case in the National Reporter System you will have a case number and a court to look to for a full record of the trial and other documents produced in the trial court. I have noted some confusion in the non-legal definitions online that makes the issue even more difficult to understand. This is probably the reason that so few genealogists avail themselves of court records: the cases are so confusing. By the way, they are often very confusing to the lawyers and the judges also. It usually does no good to argue with an attorney, since we are always right even when we are wrong.
From a genealogist's standpoint, the trial record is much more interesting and helpful than the case decisions in the National Reporter System, but finding a case in the National Reporter System indicates that further investigation is important. You should always try to find the original case file if it is still available. There are hundreds of thousands, perhaps millions of cases reported and it is a valuable resource.
The security flaw means that thieves could potentially steal someone's details just by standing close to them, and then run up large bills. Contactless payment cards have been available for five years. Customers use them to pay for items costing less than £20 by simply swiping the card, without the need to key in a Pin (personal identification number). A card can be used up to five times a day before a Pin is required. Fraud victims would know something was wrong only when they checked their bank accounts, and if they do not do that regularly they could find huge sums of money missing. The stolen details could also be used to run up huge bills at online retailers such as Amazon, which for many purchases do not require the three-digit security code. Martin Emms, from Newcastle University's Centre for Cybercrime and Computer Security, who has published a report on contactless card flaws, said: "We have produced a phone which speaks the same language as the cards and used this to obtain data from them. With it, we have been able to strip contactless cards of the account-holder's name, 16-digit number and expiry date. "In some cases, we have even been able to obtain the last 10 purchases, which is one of the security questions asked by banks. With this information we have been able to make purchases. It is alarming because the information provides the basis that, with a little more research, could see thieves strip a bank account." There are more than 32 million users of contactless payment cards in Britain. In January, banks began to adopt measures to prevent fraud on new cards. But according to the UK Cards Association, they are giving customers the new cards only when their old ones expire, leaving customers vulnerable for up to two years.
Those trouble-making physicists are at it again. During a panel discussion at the SETIcon II conference in Santa Clara, Calif., over the weekend, scientists discussed the Big Bang and whether there was a requirement for some divine power to kick-start the Universe 13.75 billion years ago. Unsurprisingly, the resounding answer was: No. “The Big Bang could’ve occurred as a result of just the laws of physics being there,” said astrophysicist Alex Filippenko of the University of California, Berkeley. “With the laws of physics, you can get universes.” However, Filippenko, a speaker on the “Did the Big Bang Require a Divine Spark?” panel, stopped short of saying there is no god — he’s merely pointing out that the birth of the Universe didn’t require an intervening omnipotent being to get the whole thing started. The laws of physics, pure and simple, sparked universal creation. He then meandered into a classic chicken-and-egg argument: “The question, then, is, ‘Why are there laws of physics?’ And you could say, ‘Well, that required a divine creator, who created these laws of physics and the spark that led from the laws of physics to these universes, maybe more than one.’ “The ‘divine spark’ was whatever produced the laws of physics. And I don’t know what produced that divine spark. So let’s just leave it at the laws of physics.” British astrophysicist and author Stephen Hawking, on the other hand, cares little for society’s belief in supernatural beings (or subtlety for that matter). In his 2010 book, “The Grand Design,” Hawking said, “Because there is a law such as gravity, the Universe can and will create itself from nothing. Spontaneous creation is the reason there is something rather than nothing, why the Universe exists, why we exist.” A “spontaneous Big Bang” is something SETI Institute astronomer Seth Shostak, also a speaker at the SETIcon II panel, agrees with. “Quantum mechanical fluctuations can produce the cosmos,” said Shostak. 
“If you would just, in this room, just twist time and space the right way, you might create an entirely new universe. It’s not clear you could get into that universe, but you would create it. “So it could be that this universe is merely the science fair project of a kid in another universe. I don’t know how that affects your theological leanings, but it is something to consider.” Whenever leading scientists get embroiled in the debate about the existence of God or a god’s involvement in the Big Bang, I cringe. There’s little doubt that there’s a debate to be had, but until physicists stumble across a bona fide theory of everything, or theologians find physical proof of a god, discussions such as this get stuck in an infinite feedback loop. Last year, Hawking went “all in” and sparked a wave of controversy when he said that there is no God and there is no heaven. In an interview with the Guardian newspaper, Hawking didn’t hold back: “I regard the brain as a computer which will stop working when its components fail. There is no heaven or afterlife for broken down computers; that is a fairy story for people afraid of the dark.” Filippenko is deliberately vague on whether or not god (or, indeed, heaven) exists. “I don’t think you can use science to either prove or disprove the existence of God,” he said. Hawking would likely disagree. As humans, we naturally hold onto our instincts and beliefs to make sense of the universe we live in. Our capacity to do this no doubt helped us evolve, but in a modern age of incredible scientific discovery, science and faith are increasingly at odds. The fact that we are gradually revealing the true complexity of the quantum world and the unfathomably huge scale of the cosmos tells me that science, not belief in an omnipotent being, will eventually give us the answers we are ultimately looking for.
But that doesn’t mean science has (or will have) all the answers, it just means that the Universe cares little for our faiths — the Universe, as far as we can experience it, is powered by physical laws, not mythical gods. So, for now, this is one philosophical debate that will keep generating headlines, but will remain stuck in that infinite feedback loop. Image credit: NASA
Vast Martian glaciers of water ice under protective blankets of rocky debris persist today at much lower latitudes than any ice previously identified on Mars, says new research using ground-penetrating radar on NASA's Mars Reconnaissance Orbiter. Because water is one of the primary requirements for life as we know it, finding large new reservoirs of frozen water on Mars is an encouraging sign for scientists searching for life beyond Earth. The concealed glaciers extend for tens of miles from edges of mountains or cliffs and are up to one-half mile thick. A layer of rocky debris covering the ice may have preserved the glaciers as remnants from an ice sheet covering middle latitudes during a past ice age. "Altogether, these glaciers almost certainly represent the largest reservoir of water ice on Mars that's not in the polar caps. Just one of the features we examined is three times larger than the city of Los Angeles, and up to one-half-mile thick, and there are many more," said John W. Holt of The University of Texas at Austin's Jackson School of Geosciences, lead author of a report on the radar observations in the Nov. 21 issue of the journal Science. "In addition to their scientific value, they could be a source of water to support future exploration of Mars," said Holt. The gently sloping aprons of material around taller features have puzzled scientists since NASA's Viking orbiters revealed them in the 1970s. One theory contended they were flows of rocky debris lubricated by a little ice. The features reminded Holt of massive ice glaciers detected under rocky coverings in Antarctica, where he has extensive experience using airborne geophysical instruments such as radar to study Antarctic ice sheets. The Shallow Radar instrument on the Mars Reconnaissance Orbiter provided an answer to this Martian puzzle, indicating the features contain large amounts of ice. 
"These results are the smoking gun pointing to the presence of large amounts of water ice at these latitudes," said Ali Safaeinili, a shallow-radar instrument team member with NASA's Jet Propulsion Laboratory in Pasadena, Calif. The radar's evidence for water ice comes in multiple ways. The radar echoes received by the orbiter while passing over these features indicate that radio waves pass through the apron material and reflect off a deeper surface below without significant loss in strength, as expected if the aprons are thick ice under a relatively thin covering. The radar does not detect reflections from the interior of these deposits as would occur if they contained significant rock debris. Finally, the apparent velocity of radio waves passing through the apron is consistent with a composition of water ice. Developers of the Shallow Radar had the mid-latitude aprons in mind, along with Mars' polar-layered deposits, long before the instrument reached Mars in 2006. "We developed the instrument so it could operate on this kind of terrain," said Roberto Seu of Sapienza University of Rome, leader of the instrument science team. "It is now a priority to observe other examples of these aprons to determine whether they are also ice." The buried glaciers reported by Holt and 11 co-authors lie in the Hellas Basin region of Mars' southern hemisphere. The radar has also detected similar-appearing aprons extending from cliffs in the northern hemisphere. "There's an even larger volume of water ice in the northern deposits," said the Jet Propulsion Laboratory's Jeffrey J. Plaut, whose paper on that discovery has been accepted for publication by the journal Geophysical Research Letters. "The fact that these features are in the same latitude bands--about 35 to 60 degrees--in both hemispheres points to a climate-driven mechanism for explaining how they got there." 
The rocky-debris blanket topping the glaciers has apparently protected the ice from vaporizing as it would if exposed to the atmosphere at these latitudes. "A key question is 'How did the ice get there in the first place?'" said James W. Head of Brown University. "The tilt of Mars' spin axis sometimes gets much greater than it is now, and climate modeling tells us that ice sheets could cover mid-latitude regions of Mars during those high-tilt periods," said Head. He believes the buried glaciers make sense as preserved fragments from an ice age millions of years ago. "On Earth," said Head, "such buried glacial ice in Antarctica preserves the record of traces of ancient organisms and past climate history."
Two dwarf galaxies thought to be our Milky Way's longtime companions are actually relative newcomers to our neighborhood that are just passing through, according to a new study. The surprising finding is a celestial curveball of sorts, sending astronomers back to the clubhouse in order to rework theories that were based on long-lasting interactions between the Milky Way and the dwarf galaxies, called the Large and Small Magellanic Clouds. “We have known about the Clouds since the time of Magellan, and a single measurement has thrown out everything we thought we understood about their history and evolution,” said the study's lead author, Gurtina Besla of the Harvard-Smithsonian Center for Astrophysics in Massachusetts. For instance, some astronomers thought a blazing trail of hydrogen gas extending from the Clouds, called the Magellanic Stream, formed due to tidal interactions between the Clouds and the Milky Way. Others explained the gas trail as the result of hydrogen being stripped from the Clouds by gas pressure as they plunged through the gas halo around our galaxy. Both scenarios are false if the galaxies are indeed just passing through. Located about 160,000 light-years from Earth, the Large Magellanic Cloud (LMC) is only one-twentieth the diameter of our galaxy and contains one-tenth as many stars. The Small Magellanic Cloud resides 200,000 light-years from Earth and is about 100 times smaller than the Milky Way. Earlier this year, astronomers making the most detailed measurements yet of the 3-dimensional velocities of the Magellanic Clouds found they are flying through space twice as fast as previously thought. Besla's team incorporated the new estimates into computer models, finding that both galaxies had extremely parabolic orbits, indicating they had entered our neighborhood for the first time between 1 billion and 3 billion years ago. “The problem is [the LMC] is moving at a velocity that would correspond to a parabolic orbit,” Besla explained. “It's just moving too fast.
If there were no other effects involved, it would just slingshot away.” She added that friction forces from the Milky Way's gas halo and an observed loss of mass in the form of the Magellanic Stream slow down the galaxies. Even still, with such elongated orbits, the galaxies are unlikely to boomerang back toward the Milky Way any time soon. “It will go out really far before it comes back around again and it will take an extremely long time ... on the order of like 8 billion years and beyond,” Besla told SPACE.com. One answer, many questions The results have implications for at least two astrophysical phenomena. Theories put forward to explain the Magellanic Stream involved a lengthy interaction between the Clouds and our galaxy. An alternative mechanism must be at work, Besla said. The researchers suggest a type of stellar feedback. “As stars form they start losing a lot of material through stellar winds and they also explode and that blows out material,” Besla said. “It's possible some of that material gets puffed out and then other effects like 'ram pressure' and tidal effects can then remove this really loosely bound stuff.” Tidal effects between large objects (such as the moon and Earth, or two galaxies) cause one side of an object to be tugged more than the other side, stretching it. In addition, the LMC and SMC have served as laboratories for understanding how stars evolve. Unlike the Milky Way, which is continually churning out stars, the Magellanic Clouds have undergone several bursts of star formation followed by quiet periods. “Those bursts had typically been linked to multiple passages around the Milky Way,” Besla said. “Now that doesn't fly.”
The original cash value instrument
In response to market pressures, actuaries conceived of an insurance policy with level premium payments that were higher than those of traditional term insurance contracts. These contracts would offer a "cash value", designed as a cash reserve that would build up against the policy's death benefit. These policies would also credit interest to the cash value account, and upon maturity of the contract (usually at age 95, 100, or even 120), the cash value would equal the death benefit. This benefited both the policy owner and the insurance company. By guaranteeing the death benefit, the policy owner was assured that insurance coverage would be in force when the insured died. The insurance company benefited because a relatively large percentage of every premium payment was profit, affording the insurer the ability to absorb an increase in the cost of insurance while allowing premiums to remain the same.

Types of whole life
There are several types of whole life insurance policies, defined by traditional forms which may vary slightly from state to state. As mentioned, other jurisdictions may classify them differently, and not all companies offer all types. There are as many types of insurance policies as can be written in their contracts while staying within the law's guidelines. All values related to the policy (death benefits, cash surrender values, premiums) are usually determined at policy issue, for the life of the contract, and usually cannot be altered after issue. This means that the insurance company assumes all risk of future performance versus the actuaries' estimates. If future claims are underestimated, the insurance company makes up the difference. On the other hand, if the actuaries' estimates of future death claims are high, the insurance company retains the difference.
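The maturity mechanics above (interest credited to the cash value until it equals the death benefit at maturity) can be sketched with hypothetical numbers. The face amount, the 4% credited rate, and the ages below are illustrative assumptions only; real policy pricing also includes mortality charges and expenses.

```python
# Minimal sketch (hypothetical numbers, not an actuarial model): find the level
# annual amount credited to the cash value so it equals the death benefit at maturity.
death_benefit = 100_000.0   # assumed face amount
rate = 0.04                 # assumed credited interest rate
issue_age, maturity_age = 35, 100
n = maturity_age - issue_age

# Ordinary-annuity future value: FV = P * ((1 + i)^n - 1) / i, solved for P
level_deposit = death_benefit * rate / ((1 + rate) ** n - 1)

# Verify by accumulating year by year
cash_value = 0.0
for _ in range(n):
    cash_value = cash_value * (1 + rate) + level_deposit
print(f"annual deposit: ${level_deposit:,.2f}, value at {maturity_age}: ${cash_value:,.2f}")
```

The year-by-year loop confirms the closed-form annuity formula: under these assumptions the cash value reaches the death benefit exactly at the maturity age.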
In a participating policy, the insurance company shares the excess profits (dividends or refunds) with the policyholder. Typically these refunds are not taxable because they are considered an overcharge of premium. The greater the overcharge by the company, the greater the refund/dividend. For a mutual life insurance company, participation also implies a degree of ownership of the mutuality. NOTE: the distinctions between a Mutual Life insurance company and a Public Company can be of great fundamental significance. As stated before, a Mutual Life Insurer shares both profitability and risk with the insured, in effect playing a role of co-ownership (or skin in the game), while a Publicly Traded Company may have different priorities related to shareholders and its board of directors. This is not to say that a Publicly Traded Company does not have a client's best interest at heart; the paradigm is simply a different one. It is illegal to market a Life Insurance policy, even that of a Publicly Traded Company, as a securities instrument, although variable policies are treated as securities and require the appropriate licensing and risk disclosure. Similar to non-participating, except that the premium may vary year to year. However, the premium will never exceed the maximum premium guaranteed in the policy. A blending of participating and term life insurance, wherein a part of the dividends is used to purchase additional term insurance. This can generally yield a higher death benefit, at a cost to long-term cash value. In some policy years the dividends may be below projections, causing the death benefit in those years to decrease. Similar to a participating policy, but instead of paying annual premiums for life, they are only due for a certain number of years, such as 20. The policy may also be set up to be fully paid up at a certain age, such as 65 or 80. The policy itself continues for the life of the insured.
These policies would typically cost more up front, since the insurance company needs to build up sufficient cash value within the policy during the payment years to fund the policy for the remainder of the insured's life. A form of limited pay, where the pay period is a single large payment up front. These policies typically have fees during early policy years, should the policyholder cash it in. This type is fairly new, and is also known as either excess interest or current assumption whole life. The policies are a mixture of traditional whole life and universal life. Instead of using dividends to augment guaranteed cash value accumulation, the interest on the policy's cash value varies with current market conditions. Like whole life, the death benefit remains constant for life. Like universal life, the premium payment might vary, but never above the maximum premium guaranteed within the policy.

Simplified Issue or Final Expense
Insurance companies have in recent years developed products to offer to niche markets, most notably targeting the senior market to address the needs of an aging population. Many companies offer policies tailored to the needs of senior applicants. These are often low to moderate face value Whole Life insurance policies, allowing a senior citizen purchasing insurance at an older issue age an opportunity to buy affordable coverage. Simplified Issue may also be marketed as Final Expense insurance, implying coverage of funeral expenses. Simplified Issue insurance policies are Whole Life policies that, although available at any age, are usually offered to older applicants. This type of insurance is designed to cover funeral expenses when the insured person dies. In some cases, the applicant may even sign a pre-funded funeral arrangement with a funeral home at the time the policy is applied for. The death proceeds are then guaranteed to be directed first to the funeral services provider for payment of services rendered.
Most contracts dictate that any excess proceeds will go either to the insured's estate or a designated beneficiary. Some life insurance companies offer products at lower face amounts that skip many of the physical underwriting steps. There is also usually a shorter application, and the policy usually takes less time to issue. Keep in mind, however, that although a Simplified Issue policy may not require a physical exam, this implies higher than normal risk to the insurance company, which is translated into a slightly higher premium. However, the face amounts (death benefits) are relatively small, so the added cost is generally inconsequential.

Some of the Advantages of a Simplified Issue Policy
- No medical exam
- Instant approval for qualified applicants
- Generally smaller face amounts

Requirements for Whole Life
Whole Life insurance typically requires that the owner pay premiums for the life of the policy. There are some arrangements that let the policy be "paid up", meaning that no further payments are ever required, in as few as 5 years, or even with a single large premium. Typically, if the payer doesn't make a large premium payment at the outset of the life insurance contract, he or she is not allowed to begin making them later in the contract's life. However, some Whole Life contracts offer a rider which allows a one-time, or occasional, large additional premium payment to be made as long as a minimal extra payment is made on a regular schedule. In contrast, Universal Life insurance generally allows more flexibility in premium payment.

Guarantees and Liquidity
The company generally will guarantee that the policy's cash values will increase regardless of the performance of the company or its experience with death claims (in contrast to Universal Life insurance and Variable Universal Life insurance, where costs can increase and cash values can decrease).
Cash values are considered liquid enough to be used for investment capital, but only if the owner is financially healthy enough to continue making premium payments. Single Premium Whole Life policies avoid the risk of the insured failing to make premium payments and are liquid enough to be used as collateral. Single Premium policies require the insured to pay a one-time premium that tends to be lower than the total of split payments, since the lump sum is paid all at once. Because these policies are fully paid at inception, they have no financial risk and are liquid and secure enough to be used as collateral under the insurance clause of collateral assignment. Cash value access is tax free up to the point of total premiums paid (referred to as basis), and the rest may be accessed tax free in the form of policy loans. If the policy lapses, taxes would be due on outstanding loans. If the insured dies, the death benefit is reduced by the amount of any outstanding loan balance. Internal rates of return for participating policies may be much worse than those of Universal Life and interest-sensitive Whole Life (whose cash values are invested in the money market and bonds) because their cash values are invested in the life insurance company and its general account, which may be in real estate and the stock market. Variable Universal Life insurance may outperform Whole Life because the owner can direct investments in sub-accounts that may do better. Then again, they may not; Variable Universal Life policies carry the inherent risk of market investment and are generally not a conservative position like Participating Whole Life.
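The basis-first access rule described above can be illustrated with a small, hypothetical helper. The function name and figures are invented for illustration; this is a sketch of the rule as stated in the text, not tax advice.

```python
def withdrawal_tax_treatment(amount, basis_remaining):
    """Sketch of the basis-first treatment described above: cash value access
    is tax free up to remaining basis (total premiums paid); the excess is
    potentially taxable if withdrawn, or may be taken as a policy loan instead.
    Hypothetical helper for illustration only, not tax advice."""
    tax_free = min(amount, basis_remaining)
    excess = amount - tax_free          # amount above basis
    return tax_free, excess, basis_remaining - tax_free

# E.g., accessing $30,000 from a policy with $25,000 of premiums paid:
print(withdrawal_tax_treatment(30_000, basis_remaining=25_000))  # (25000, 5000, 0)
```

Here $25,000 comes out tax free against basis, and the remaining $5,000 of gain would typically be accessed as a policy loan to stay tax free, per the passage above.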
Written by Dr. Soteres in the Starfit Kids Newsletter – www.starfitkids.org
Food allergy is of increasing concern in the United States. The most common food allergens are peanut, egg, milk, wheat, soy, tree nuts, fish, and shellfish, but any food can trigger an allergic reaction. Food allergy is different from food sensitivity. Food allergies are mediated by a specific branch of the immune system and can be identified by tests for IgE (Immunoglobulin E), such as the allergy skin test or a blood test. Food allergy reactions usually occur within minutes of an exposure and can be life threatening. In general, food sensitivities are not associated with life threatening reactions. Food allergy may trigger symptoms that affect the respiratory, gastrointestinal, skin, and/or cardiovascular systems. Severe reactions can occur at any age and on first known exposure to a food. The greatest risk for a fatal allergic reaction appears to be in adolescents and young adults with asthma and a known food allergy to peanut, tree nut, or seafood. The natural history of food allergy varies. Of children with allergy to egg, milk, wheat, and soy, about 80 percent will outgrow the allergy by around age 5. For peanut allergy, only 20 percent will outgrow it. There is no cure for food allergy. However, some clinics and research studies have had success desensitizing patients with severe food allergy. In this procedure, the patient is introduced to tiny amounts of the allergenic protein. Over the course of several weeks the immune system is trained to tolerate exposures. Successful desensitization seems to reduce the risk of a life threatening reaction due to an accidental exposure. Much research remains to gain a better understanding of the procedure and the risks. The cornerstones of managing food allergy are Evaluation, Education, Avoidance, and Preparation. Evaluation and diagnosis of food allergy should start with a thorough history followed by appropriate testing.
The best screening test for food allergy is the allergy skin prick test. However, both the skin prick test (SPT) and the serum IgE test (RAST Immunocap IgE) may be used. They are highly sensitive but only modestly specific; these tests are therefore most useful when suspicion of a food allergy is high. Both the skin test and the blood test, RAST Immunocap, can identify food allergies. However, the double-blind placebo-controlled oral food challenge is the gold standard for diagnosing food allergy. Food challenges can be time consuming and dangerous. It is often proposed that the IgG test for foods can identify a food sensitivity; however, this test is not validated, and making dietary changes based on IgG testing is not advised. Referral to a board-certified allergist with expertise in food allergy is therefore vital. The primary treatment for food allergy is avoidance. Elimination of the causal foods from the diet is difficult to accomplish. One resource for this information is the Food Allergy & Anaphylaxis Network (www.foodallergy.org). A local resource is the support group MOSAIC (Mothers of Severely Allergic Infants and Children), which meets monthly. Exposure to a potentially life threatening food allergen is a fact of life for those who live with food allergy. The single most important and potentially life-saving therapy is the use of injectable epinephrine for a life threatening reaction. Delayed administration of epinephrine is a risk factor for poor outcomes, including death, from an allergic reaction. In general, having more than one injector is advised. Approximately 30 percent of patients with anaphylaxis will relapse, and the effect of epinephrine lasts only about 15 minutes; therefore, if epinephrine must be used, be prepared to call emergency services. Living with food allergy can be a source of great distress, especially when the patient is a child.
Many studies confirm the high degree of anxiety that parents and caregivers experience when their children are allergic to foods. Evaluation, Education, Avoidance and Preparation are the key factors in learning to live with food allergy. Seek out guidance from a board-certified allergy/immunology specialist for more information. Information Provided By: Asthma and Allergy Associates
Water is a critical resource for any business facility, and securing a reliable, sustainable supply is vital for smooth operations. In recent years, a growing number of businesses have turned to commercial water boreholes as a dependable solution to their water needs. These boreholes provide a self-sufficient and environmentally friendly source of water, offering numerous benefits for businesses of all sizes. One of the key advantages of commercial water boreholes is their ability to provide a constant and reliable water supply. Unlike relying solely on municipal water supplies, which can be subject to disruptions or shortages, boreholes offer a secure water source that businesses can count on. With a well-maintained borehole, businesses can ensure a continuous water supply, which is especially important for industries that depend heavily on water, such as agriculture, manufacturing, and hospitality. Cost savings are another significant advantage of commercial water boreholes. While the initial drilling and installation costs may be higher than connecting to a municipal supply, the long-term savings can be substantial. Businesses can reduce or eliminate water bills, which can be a considerable expense, particularly for industries with high water usage. Additionally, companies may be eligible for tax incentives or rebates for investing in sustainable water solutions, further enhancing the cost-effectiveness of boreholes. Environmental sustainability is a pressing concern in today's world, and businesses are increasingly seeking ways to minimize their environmental impact. Commercial water boreholes offer an eco-friendly alternative to traditional water sources. By utilizing groundwater, businesses can reduce their reliance on surface water, which is often limited and more vulnerable to pollution and the effects of climate change.
Additionally, boreholes use energy-efficient pumps and systems, further reducing their environmental footprint. Finally, commercial water boreholes can enhance a business's reputation and consumer appeal. With growing consumer awareness of, and preference for, environmentally friendly practices, businesses that prioritize sustainability gain a competitive advantage. Demonstrating a commitment to sustainable water management through the use of boreholes can attract environmentally conscious customers, strengthen brand image, and even open new marketing opportunities. In summary, commercial water boreholes offer businesses a sustainable and dependable water solution. With benefits ranging from a consistent supply and cost savings to environmental sustainability and an enhanced reputation, it is clear why more businesses are choosing to invest in boreholes. By tapping the earth's natural resources, businesses can ensure long-term water security while reducing their impact on the environment.
When looking around a college campus, it's easy to assume half of its doctoral candidates are women. This assumption would be right; women are awarded nearly half of all doctoral degrees annually, according to 2013 statistics from the National Science Foundation (NSF). Among full-time professors, however, the number is much lower. Similar patterns are seen among minorities. This is why UCLA has made a commitment to realizing diversity by creating an environment that's inclusive of everyone. And this year, it inspired a student diversity group. Scientific fields see some of the starkest contrasts between the makeup of students and that of professionals in the field. That's what STEM-PLEDGE — short for Science, Technology, Engineering, and Mathematics Providing Leadership & Enhancing Diversity in Graduate Education — hopes to change. The new organization is open to any graduate and postdoctoral students at UCLA who want to help address the barriers that prevent the full participation of groups often underrepresented in the university's STEM graduate programs.

Challenges in diversity
"There's a leaky pipeline, particularly among women, but also among other groups historically underrepresented in science," said Dr. Lynn Gordon, Senior Associate Dean of Academic Diversity at the David Geffen School of Medicine at UCLA. Improving diversity means addressing the makeup of professionals in order to ensure adequate representation of different races, ethnicities, disabilities, religions and sexual orientations, as described in federal affirmative action guidelines. One mission of STEM-PLEDGE is to improve the culture by reaching out to researchers and informing them of how diverse teams can provide a more complete scientific perspective.

STEM retention and career development
Above all, the group wants to help its members improve their careers and promote the ways in which diversity can improve scientific research.
"Collaboration is a growing theme of science," says Dennis Montoya, PhD, STEM-PLEDGE co-chair. "Everyone needs to collaborate to survive, so we want to talk about this collaboration in light of diversity, while also improving the scientific careers of our members." Dr. Montoya, a graduate of the David Geffen School of Medicine at UCLA who co-chairs the group with Salemiz Sandoval, PhD, helped found the "Scientific Excellence through Diversity" seminar series. This event at UCLA features women and minority professors from around the country who speak about their work and how it has driven their career's progression. It's a great opportunity for learning and networking. Recently, the diversity group cosponsored a campus event aimed at improving scientific writing, collaboration and team science. The symposium, cosponsored by the Clinical and Translational Science Institute, invited well-known speakers to talk about how diversity contributes to research as well as their own personal and professional experiences. Most events are open to the entire academic community, and there's a lot to learn from their accomplished speakers — even for those who aren't female or don't identify as a minority. Reaching the next generation STEM-PLEDGE understands it's not only important to support and retain those currently enrolled in STEM programs, but also to encourage a wide range of people to continue joining these groups. This is why the group also has such a large outreach component. Members don't just reach out to college students to increase their commitment to scientific careers. They also speak at high schools to interest the next generation of scientists and encourage them to start along these rewarding paths. By Patricia Chaney
Blood pressure goes up and down all the time, but having consistently high blood pressure (hypertension) can potentially lead to heart disease and other health complications. It has a number of causes, including smoking, excess sodium intake, increased stress, sleep deprivation, obesity and other factors. Luckily, there are ways to reduce your blood pressure. Here are some tips on how to lower blood pressure in minutes. The right tunes can help bring your blood pressure down, according to Italian research. Researchers asked 29 adults who were already taking BP medication to listen to soothing classical, Celtic, or Indian music for 30 minutes daily while breathing slowly. When they followed up with the subjects six months later, their blood pressure had dropped by an average of 4 mmHg. When you get a high blood pressure reading at the doctor's office, it might be tough to understand exactly what impact those numbers have on your overall health, since high blood pressure has no unusual day-to-day symptoms. But the truth is, having high blood pressure is a serious health risk—it boosts the risks of leading killers such as heart attack and stroke, as well as aneurysms, cognitive decline, and kidney failure. What's more, high blood pressure is a primary or contributing cause of more than 1,000 deaths a day in the United States. In most cases where high blood pressure is diagnosed, the cause remains unclear. One review paper in the International Journal of Hypertension explains that this accounts for around 90% of all hypertension diagnoses and is usually referred to as essential hypertension. Secondary hypertension is diagnosed when a cause can be identified. Potential causes of secondary hypertension may include: Knowing how to lower blood pressure fast is very important. Uncontrolled high blood pressure can cause irreversible damage to internal organs and shorten your life.
When starting anything new, please consult your primary care physician. With natural ways to lower your blood pressure, always check whether they will interfere with any current medication you are taking. You can speak with your local pharmacist. If you're eating more fruits and vegetables, you're already taking a positive step toward reducing salt and enhancing potassium intake. Sodium can be found in abundance in processed foods – anything that comes in a package, a can, or especially from a fast food restaurant. If you're over 50, or at higher risk, aim for no more than 1,500 mg/day. Check your food labels for sodium content. If you see that a food has more than 400-500 mg in a serving, see if there's a lower-sodium option. Sleep: short and poor-quality sleep are both associated with raised blood pressure (40). On the other side of the spectrum, excessively long sleep may also be harmful. One study found increased blood pressure in those who got fewer than five hours of sleep per night and in those who averaged more than nine hours, when compared with people who slept around seven hours (41). I suspect that it's not the long sleep itself that is the problem, but that some underlying condition is both increasing sleep requirement and raising blood pressure.
Although various authorities have found that sexual orientation and gender identity have no relationship to workplace performance, during the past four decades a large body of research using a variety of methodologies has consistently documented high levels of discrimination against lesbian, gay, bisexual, and transgender (LGBT) people at work. This chapter reviews recent research regarding such discrimination as well as regarding the effects of such discrimination on LGBT people. The latter research shows that discrimination has negative effects on LGBT people in terms of health, wages, job opportunities, productivity in the workplace, and job satisfaction. Widespread and continuing employment discrimination against LGBT people has been documented in scientific field studies, controlled experiments, academic journals, court cases, state and local administrative complaints, complaints to community-based organizations, and in newspapers, books, and other media. Further, federal, state, and local courts, legislative bodies, and administrative agencies have acknowledged that LGBT people have faced widespread discrimination in employment. Results from all of these sources are discussed below.
Microbes can metabolize more chemical compounds than any other group of organisms. As a result, their metabolism is of interest to investigators across biology. Despite the interest, information on metabolism of specific microbes is hard to access. Information is buried in text of books and journals, and investigators have no easy way to extract it out. Here we investigate if neural networks can extract out this information and predict metabolic traits. For proof of concept, we predicted two traits: whether microbes carry one type of metabolism (fermentation) or produce one metabolite (acetate). We collected written descriptions of 7,021 species of bacteria and archaea from Bergey’s Manual. We read the descriptions and manually identified (labeled) which species were fermentative or produced acetate. We then trained neural networks to predict these labels. In total, we identified 2,364 species as fermentative, and 1,009 species as also producing acetate. Neural networks could predict which species were fermentative with 97.3% accuracy. Accuracy was even higher (98.6%) when predicting species also producing acetate. Phylogenetic trees of species and their traits confirmed that predictions were accurate. Our approach with neural networks can extract information efficiently and accurately. It paves the way for putting more metabolic traits into databases, providing easy access of information to investigators. Most information about microbes and their traits is buried in text of books and journals. Investigators who need information on many species are thus doomed to long literature searches. Investigators could avoid this fate, however, if they had a way to extract information from text computationally. We introduce an approach that can extract information with neural networks, a form of machine learning. For proof of concept, we use our approach to predict two metabolic traits for 7,000 species of microbes. 
This approach was accurate, and it could be used to construct accurate phylogenetic trees of microbes and traits. The work paves the way to large databases of metabolic traits and other information, helping investigators working with big data. Citation: Hackmann TJ, Zhang B (2021) Using neural networks to mine text and predict metabolic traits for thousands of microbes. PLoS Comput Biol 17(3): e1008757. https://doi.org/10.1371/journal.pcbi.1008757 Editor: Morgan Langille, DAL, CANADA Received: October 7, 2020; Accepted: February 2, 2021; Published: March 2, 2021 Copyright: © 2021 Hackmann, Zhang. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. Data Availability: Articles from Bergey’s Manual used in this study are accessible only with a license. Users with a license can download articles and prepare text with code at https://github.com/thackmann/MicroMetabolism. Code for neural networks is available at the same site. All other relevant data are within the manuscript and its Supporting Information files. Funding: This work was supported by Hatch Project Accession 1019985 (TJH) and 1024983 (TJH) from the United States Department of Agriculture National Institute of Food and Agriculture. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. Competing interests: The authors have declared that no competing interests exist. Microbes are everywhere and can metabolize a huge array of chemical compounds. This makes their metabolism important to nutrient cycling in the environment [1–3]. Their metabolism is also important to symbiotic relationships with other organisms [4,5] and for synthetic biologists in the lab [6,7]. As such, information on microbial metabolism is of value to investigators throughout biology. 
Despite the value, information on microbial metabolism is hard to access. Books and journals are filled with this information, but it remains buried in text. Bergey's Manual of Systematics of Archaea and Bacteria, for example, reports metabolic traits for thousands of microbes, but in the form of long written descriptions. Looking up information for a few species is feasible, but in the era of big data, investigators often need information on many species. Information on metabolic traits would be more useful if extracted from text and summarized in a database. To date, there is no fast and accurate way of extracting this type of information. One method is to employ teams of curators to read articles and extract information manually [9–11]. This method is slow, and information is likely incomplete. Another method is to use machine learning and extract information computationally. This method is fast, but accuracy has not been high enough to be adopted by database curators (see ref. ). The field of machine learning has advanced, and it may now have the accuracy needed to extract metabolic information. Neural networks, one form of machine learning, perform well in extracting other kinds of information from scientific literature [13–17]. When given medical abstracts, for example, neural networks can recognize and extract out names of diseases [14,15]. Their success with other tasks suggests use in extracting information, such as metabolic traits, from the microbiology literature. Here we use neural networks to analyze written descriptions of over 7,000 species of microbes and predict their metabolic traits. For proof of concept, we predicted two traits: whether microbes carried out one type of metabolism (fermentation) or produced one metabolite (acetate). Accuracy in predicting these traits was high (>95%). Our approach paves the way to building large databases of metabolic traits, helping investigators working with big data.
Collecting text and labels for thousands of microbes
Our general approach to predicting metabolic traits is outlined in Fig 1. We obtained text (written descriptions of microbial species) from Bergey's Manual. From this text, we manually labelled metabolic traits. These labels, along with the written descriptions, served as training data for the network. After training with labels and text, we used the network to predict metabolic traits. From Bergey's Manual, we obtained written descriptions for a total of 7,021 species (see list in S1 Table). To accomplish this, we downloaded the full text of all genus-level articles (n = 1,503). We extracted out species names, then located relevant sections of text for each species. This extraction was an involved process because names and text for each species were scattered through articles (see Methods). We assembled the text into coherent species descriptions. From these descriptions, we manually labelled species as positive or negative for two metabolic traits. The first trait was general: whether microbes carried out one type of metabolism (fermentation). We searched species descriptions for the keyword "ferment". A total of 4,349 descriptions contained the keyword, and we read these descriptions in full. After reading, we labeled species as positive or negative for the trait. Labels (including justifications) are given in S1 Table. The second trait was more specific: whether fermentative species produced one metabolite (acetate). We searched for keywords ("ferment" plus "acetate" or "acetic"), read matching descriptions (n = 3,987), then labeled species as positive or negative (see S1 Table). Using this approach, we labeled 2,364 species as positive for fermentation, of which 1,009 were also positive for producing acetate. These labels, along with species descriptions, served as training data for the neural network.
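The keyword screening step above can be sketched as follows. The species names and descriptions here are invented placeholders, not text from Bergey's Manual, and the helper function is our own illustration of the screen.

```python
# Sketch of the screening step described above: flag species whose descriptions
# contain the keywords, so only those need manual reading and labeling.
# The descriptions below are invented placeholders, not text from Bergey's Manual.
descriptions = {
    "Species A": "Strictly aerobic. Does not ferment carbohydrates.",
    "Species B": "Ferments glucose with production of acetic acid and CO2.",
    "Species C": "Chemolithotrophic; oxidizes ammonia.",
}

def matches(text, keyword_groups):
    """True if the text contains at least one keyword from every group."""
    t = text.lower()
    return all(any(k in t for k in group) for group in keyword_groups)

# First trait screen: keyword "ferment"
ferment_candidates = [s for s, d in descriptions.items() if matches(d, [["ferment"]])]
# Second trait screen: "ferment" plus "acetate" or "acetic"
acetate_candidates = [s for s, d in descriptions.items()
                      if matches(d, [["ferment"], ["acetate", "acetic"]])]
print(ferment_candidates)  # ['Species A', 'Species B']
print(acetate_candidates)  # ['Species B']
```

Note that Species A matches the keyword even though it is negative for the trait ("does not ferment"), which is exactly why the matching descriptions were then read and labeled manually rather than labeled automatically.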
Neural networks accurately predict metabolic traits

After obtaining species descriptions and training data, we trained neural networks and evaluated their performance in predicting metabolic traits. Training was done using TensorFlow as described in the Methods. Evaluations were done with data independent from training. We found neural networks could predict the first metabolic trait (fermentative metabolism) with high accuracy (Fig 2A). Accuracy increased with the amount of training data, and descriptions for 1,000 species were enough to achieve 95.3% accuracy. Besides high accuracy, predictions from neural networks achieved a high F1 score, precision, and sensitivity (Fig 2A). Example predictions (from one training with data for 1,000 species) are shown in S1 Table.

(A) Predictions for the first trait (fermentative metabolism). (B) Predictions for the second trait (acetate production). (C) Architecture of the model. Values are means ± SEM of five replicates (independent trainings of the network). Some values for precision are missing because they were undefined (one or more replicates had no false or true positives). For clarity, the number of units depicted in the neural network layers is fewer than actual. Units in the embedding and hidden dense layers had a dropout rate of 0.2.

Neural networks achieved similarly high accuracy when predicting the second trait (acetate production) (Fig 2B). In sum, neural networks could accurately predict both general and specific traits. Few computational resources were required to train the networks and predict metabolic traits. When descriptions for 1,000 species were used, for example, these steps required less than 1 min and 1.5 GiB of memory to complete (S1 Fig). This result shows that the networks were not only accurate but also easy to deploy. The results above are for the best-performing type of neural network: a convolutional network with the architecture shown in Fig 2C.
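The performance metrics reported in Fig 2 are standard functions of confusion-matrix counts (their formulas are given in the Methods); a minimal sketch, with illustrative counts that are not values from the paper:

```python
def accuracy(tp, tn, fp, fn):
    return (tp + tn) / (tp + tn + fp + fn)

def f1_score(tp, fp, fn):
    # Equivalent to 2 * precision * sensitivity / (precision + sensitivity)
    return tp / (tp + 0.5 * (fp + fn))

def precision(tp, fp):
    return tp / (tp + fp)  # undefined when nothing is predicted positive

def sensitivity(tp, fn):
    return tp / (tp + fn)

# Illustrative confusion-matrix counts (not from the paper)
tp, tn, fp, fn = 90, 95, 5, 10
print(accuracy(tp, tn, fp, fn))          # 0.925
print(round(f1_score(tp, fp, fn), 3))    # 0.923
print(round(precision(tp, fp), 3))       # 0.947
print(round(sensitivity(tp, fn), 3))     # 0.9
```

As the paper notes for Fig 2, precision is undefined (division by zero here) when a replicate has no predicted positives at all.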
We tried other types of networks, and a long short-term memory (LSTM) network also performed well (Fig 3). When little training data was used, its performance equaled or even exceeded that of the convolutional network. However, it was overtaken by the convolutional network when more training data was used.

As Fig 2, except the type of network is LSTM. Units in the LSTM layer had a dropout rate of 0.2.

Performance depended not only on the type of network, but also on how the text was processed before being input into the network. The highest performance (shown in Fig 2) was achieved when the text (species description) was winnowed down to sentences matching keywords (e.g., “ferment”). If the full text was used, much more training data was needed (S2 Fig), and performance was never as high. We have thus taken several steps to optimize the network and ensure predictions of metabolic traits are as accurate as possible.

Predictions from neural networks yield accurate phylogenetic trees

We evaluated neural networks further by constructing phylogenetic trees with their predictions. First, we made a phylogenetic tree of all species in Bergey’s Manual (Fig 4A). Next, we highlighted species predicted to have the first trait (fermentative metabolism) (Fig 4B). In a separate tree, we highlighted species observed (manually labeled) to have the trait (Fig 4B). These predicted and observed trees appeared similar, meaning the species predicted to have the trait closely matched those observed to have it. Further, the UniFrac distance between the predicted and observed trees was small (S3 Fig), confirming their similarity. We found similar agreement between trees for the second trait (acetate production) (Figs 4C and S3). For both traits, we used training data for 1,000 species.

(A) All species in Bergey’s Manual with available sequences. (B) Species with the first trait (fermentative metabolism). (C) Species with the second trait (acetate production).
To generate the predicted tree, traits were predicted with a convolutional neural network and training data for 1,000 species. The predicted and observed trees shown are representative of five replicates (independent trainings of the network). Trees were constructed with concatenated ribosomal protein sequences as described in Methods.

In sum, predictions from neural networks were not just accurate in a statistical sense. They produced phylogenetic trees that were close to the actual ones, showing they are accurate biologically.

Databases reporting metabolic traits are incomplete

Some information on metabolic traits can already be found in databases, but it is not clear how complete it is. Our work identified two traits for a number of species, and so it can help assess how complete these databases are for these two traits. As mentioned, our work identified 2,364 species that carried out fermentation. By comparison, the best database identified 1,584 species, or 67% of our number (Fig 5). For species that also produce acetate, the best database identified 1.2% of our number. Some databases (e.g., FAPROTAX) were not designed to identify species that produce acetate, explaining the low completeness for this trait.

Species in FAPROTAX were counted in two different ways. First, we used it strictly as a database; we counted species in the database packaged with the tool. Second, we used FAPROTAX as a search tool, inputting into it the n = 7,021 species from Bergey’s Manual used in the current work. See Methods for more details on FAPROTAX and the other databases.

Our own numbers of species are incomplete, and thus the situation is worse than it first appears. We obtained descriptions for 7,021 species, yet the total number of species validly published in the literature is 20,038 (see ref. ) and is increasing by 600 per year. In total, our results suggest that databases reporting the two metabolic traits we investigated are incomplete.
Negative labels for traits are reliable

When we labeled a species as negative for fermentation, it was often because the species description made no mention of this trait (see S1 Table). It is possible that some of these species were fermentative, but their descriptions in Bergey’s Manual were incomplete. To see if this was a problem, we compared descriptions from Bergey’s Manual with those from the primary literature (journal articles). We did so for 64 species of fermentative bacteria from the cattle rumen (S2 Table), many of which we study in our lab [21–24]. We found that descriptions from Bergey’s Manual and the primary literature agreed closely (Fig 6 and S4 Table). If a description was available in Bergey’s Manual, it always reported the species as positive for fermentation. These results suggest that species descriptions in Bergey’s Manual are reliable, and so are our labels for metabolic traits. If we labeled a species as negative for fermentation in S1 Table, the species likely has not been described as fermentative before. We found similar agreement between Bergey’s Manual and the primary literature for the second trait (acetate production) (Fig 6 and S4 Table). Bergey’s Manual reported two species as negative for this trait, even though the primary literature reported them as positive. With few exceptions, our negative labels for acetate production also appear reliable.

Microbial metabolism cuts across many fields of biology, yet information on metabolic traits is still hard to access. The information is locked away in the text of books and articles. Several attempts have been made to extract this information and make it available in databases [9–11,25,26]. However, the information collected so far, at least for the two traits we investigated, is incomplete. Most attempts to extract information have done so manually, using teams of curators [9–11]. To provide more complete information, a faster method is needed.
We propose neural networks as a fast (and accurate) method to extract information and predict metabolic traits. We provide proof of concept by predicting two metabolic traits for thousands of microbes with >95% accuracy. This level of performance was high enough to create an accurate phylogenetic tree of these species, and it should be useful for other applications. The performance of our networks represents an improvement over other types of machine learning used to predict metabolic traits of microbes. Mao et al., for example, predicted traits with a support-vector machine. This approach gave 59% precision and 66% sensitivity when predicting metabolites produced during fermentation. With neural networks, we achieved 93.9% precision and 96.1% sensitivity for a similar prediction (see Fig 2). Despite the promise of our approach with neural networks, there are still areas that need to be explored. We need to explore, first, sources of species descriptions other than Bergey’s Manual. Though Bergey’s Manual gave us descriptions for over 7,000 species, this represents only ~1/3 of all species validly published in the literature. We need to explore, second, how well our methods work with rare traits. Both metabolic traits we investigated were relatively common (found in over 1,000 species). Once these uncertainties are resolved, neural networks can be deployed at an even larger scale to predict metabolic traits of microbes. They would enable the building of databases of metabolic traits larger than previously imagined. These databases, in turn, will be key to opening up the study of microbial metabolism and bringing it fully into the era of big data.

Preparation of text

To obtain written descriptions of species, articles from Bergey’s Manual were downloaded and read into R. Names of species were extracted from the full text, then the appropriate sections of the full text were assembled into the description. Articles in Bergey’s Manual were downloaded as html files.
This was done using article URLs on the Browse A-Z page of Bergey’s Manual and the download.file() function in R. Only genus-level articles (containing “gbm” in their url) were retained. The html files were read into R. The full text of each article was then obtained using the html_nodes() function and css selectors. The names of each species were extracted from the full text. For a given article, the genus name was extracted using css selectors. Names of species were then found under the List of Species of the Genus section using the genus name and regular expressions. We reviewed the list of names manually, identified errors, and refined the regular expressions (using different expressions to accommodate the varying format of articles). Our list also included names of subspecies, biovars, pathovars, and genomospecies, which we treated as equivalent to species. We used a similar approach (css selectors and regular expressions) to extract other taxonomic ranks and strain IDs. The full text was parsed to give a written description of each species. The full text typically consisted of 1) Abstract, 2) Further Descriptive Information and other sections about the genus, 3) List of Species of the Genus, and 4) References. These sections were identified using regular expressions. For a given species, we combined text from sections (2) and (3). For (3), we selected only text belonging to the given species, excluding text for other species within the genus. This text was selected using regular expressions for the species name.

Labeling of metabolic traits

We labeled species as positive or negative for two metabolic traits. Using R and regular expressions, we searched the species descriptions for keywords. For the first trait (fermentative metabolism), the keyword was "ferment". For the second trait (acetate production), the keywords were "ferment" plus "acetate" or "acetic". The regular expression allowed matches not just to the keyword itself, but to any word containing it.
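This matching rule can be sketched with a regular expression; the sketch below is in Python (the original analysis used R), with hypothetical example sentences:

```python
import re

# Match any word containing the keyword "ferment", case-insensitively;
# hyphens are word boundaries, so "non-fermentative" matches on "fermentative"
pattern = re.compile(r"\w*ferment\w*", re.IGNORECASE)

for text in ["Glucose is fermented to lactate.",   # hypothetical descriptions
             "Strictly non-fermentative rods.",
             "Metabolism is respiratory."]:
    print(bool(pattern.search(text)))  # True, True, False
```

Note that a match only flags the description for reading; as described above, the label itself was assigned after reading the description in full.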
For the keyword “ferment”, the words “ferment”, “fermenter”, and “non-fermentative” would all match. When there was a match to the keyword, we read the species description in full before labeling the species as positive or negative for the trait. We have experience in reading and labeling species descriptions for these two particular traits. If there was no match, the species was labeled as negative.

Construction of neural networks

Neural networks were built and trained with TensorFlow. TensorFlow was run in RStudio using the Keras library. The written description of each species was prepared for input into the network. Sentences matching the keywords were kept, and the others were discarded. For the first trait (fermentative metabolism), the keyword was "ferment". At least one sentence had to match "ferment" for any to be kept. For the second trait (acetate production), the keywords were "ferment" plus "acetate" or "acetic". Duplicated sentences were discarded. The remaining sentences were joined together and truncated at 25,000 characters. Afterwards, the text was tokenized using the text_tokenizer(), fit_text_tokenizer(), and texts_to_sequences() functions with num_words of 3,000. The tokenized text was then input into the network as a list with one element per species. The average number of tokens (words) in the input text was 102 for the first trait and 120 for the second trait. Labels of metabolic traits were input as a vector with one element per species. The elements were 1 (trait positive) or 0 (trait negative). The networks had the architectures shown in Figs 2 and 3. They were trained with the binary_crossentropy loss function and the adam optimizer. The networks were trained with a batch size of 32 for 10 epochs. For small amounts of training data, more epochs (up to 40) were needed to minimize the loss function. The amount of training data was as specified in Figs 2 and 3. All data not used for training were used for evaluating predictions.
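The winnowing and tokenization steps just described can be approximated in a few lines. The sketch below is a stdlib Python stand-in for the Keras text_tokenizer()/texts_to_sequences() workflow used in R; the species description is hypothetical:

```python
import re
from collections import Counter

# Hypothetical species description (not from Bergey's Manual)
description = ("Cells are Gram-positive rods. Glucose is fermented. "
               "Acetate and lactate are the main products of fermentation. "
               "Catalase negative.")

# 1) Winnow: keep only sentences matching the keyword, drop duplicates,
#    and truncate (the paper truncated at 25,000 characters)
sentences = re.split(r"(?<=[.!?])\s+", description)
kept = [s for s in sentences if re.search(r"ferment", s, re.IGNORECASE)]
text = " ".join(dict.fromkeys(kept))[:25000]

# 2) Tokenize: index words by frequency and keep the num_words most common,
#    a rough stand-in for Keras tokenization with num_words = 3,000
num_words = 3000
words = re.findall(r"[a-z']+", text.lower())
vocab = {w: i + 1 for i, (w, _) in enumerate(Counter(words).most_common(num_words))}
tokens = [vocab[w] for w in words if w in vocab]
print(len(kept), len(tokens))  # 2 sentences kept, 12 tokens
```

The token sequence (one list element per species in the real pipeline) is what the network consumes; details such as Keras's reserved indices are omitted here.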
Predictions were evaluated using accuracy, F1 score, precision, and sensitivity. Accuracy was calculated as (TP+TN)/(TP+TN+FP+FN), where TP = true positive, TN = true negative, FP = false positive, and FN = false negative. F1 score was calculated as TP/[TP+1/2(FP+FN)]. Precision was calculated as TP/(TP+FP). Sensitivity was calculated as TP/(TP+FN). Computational resources for training and prediction were determined using the time utility in Ubuntu 20.04 LTS. The resources measured were run time and maximum memory. Measurements were completed using all six threads of an Intel Core i5-8500T processor and with 16 GiB of RAM.

Construction of phylogenetic trees

We constructed a phylogenetic tree of genomes belonging to species from Bergey’s Manual. The construction followed the general approach of refs. [28,29] and used sequences of 14 ribosomal proteins. First, we used the strain IDs of each species to find genome sequences. Specifically, we used the strain ID to find a GOLD organism ID, GOLD project ID, and IMG/M genome ID (genome sequence) (see S1 Table). Though we could have searched IMG/M directly with the strain ID, this approach was slow. Some strain IDs were generic (e.g., numbers like “238”) and could match multiple GOLD organism IDs. To make matches more specific, we also required the species or genus name to match. We identified genome IDs for a total of 2,925 species. Next, we downloaded amino acid sequences of the ribosomal proteins from IMG/M. We did this using KO IDs for the respective genes (S2 Table) along with the IMG/M genome IDs. We discarded sequences that were short (<75% of the average length for a given ribosomal protein). We used the aligned and concatenated sequences to create a phylogenetic tree. The tree was calculated using maximum likelihood with RAxML on the CIPRES web server. The parameters are listed in S3 Table. Final analysis and visualization were done in R. The consensus tree and branch lengths were calculated using phytools.
The tree was visualized using ggtree. A total of 2,501 species had genomes with protein sequences that could be included in the final tree. In the full tree, we highlighted branches belonging to species predicted or observed (labeled) to have a metabolic trait. These predictions were made using the convolutional neural network in Fig 2C and training data for 1,000 species. Species that were part of the training data were not highlighted, even if they had the trait. The resulting trees are the predicted and observed trees in Fig 4. We calculated UniFrac distances between these trees using phyloseq.

Completeness of databases reporting metabolic traits

We investigated the completeness of information in three databases: FAPROTAX, BacDive, and IMG. We did not investigate the IJSEM database because its information has been subsumed by BacDive. We also did not investigate the MACADAM database because its information is in the FAPROTAX and IJSEM databases. For the three databases, we counted the number of microbial species they report as having a fermentative metabolism. For FAPROTAX (v. 1.2.3), we counted species in two ways. First, we used FAPROTAX as a database, counting the number of species in the database packaged with the tool. Only entries containing both genus and species names were counted. Second, we used FAPROTAX as a search tool. We inputted into FAPROTAX the n = 7,021 species from Bergey’s Manual used in the current work. This method led to a higher count of species because it uses all of FAPROTAX’s entries, not just those with genus and species names. For BacDive, we used Advanced search > Morphology and physiology > Metabolite (utilization). We set Kind of Utilization to “fermentation” and Utilization activity to “+”. For IMG/M, genomes with information on metabolism were displayed using Genome Search > Advanced Search Builder > Metabolism. We searched the output for the keyword “ferment” and then read each description in full.
We also counted the number of species the databases reported as producing acetate. For FAPROTAX, we counted no species because no functional group indicated both fermentative metabolism and acetate production. For BacDive, we entered the same settings as for the first trait (fermentative metabolism). Additionally, we set Metabolite (production) to “acetate” and Production to “yes”. For IMG/M, we manually searched the output for the keywords “acetate” and “acetic”, then read each description in full.

Species descriptions from the primary literature

We compared species descriptions in Bergey’s Manual with those from the primary literature for 64 species of bacteria from the rumen. To be included in the comparison, a species had to
- Appear in the List of Prokaryotic names with Standing in Nomenclature;
- Have a type strain isolated from the rumen;
- Be described in at least one peer-reviewed journal article;
- Be fermentative;
- Have products of fermentation reported for at least one substrate.

S1 Table. Metabolic traits and other information on species from Bergey’s Manual.
S2 Table. Ribosomal proteins and database IDs searched.
S3 Table. Parameters for calculating the phylogenetic tree in RAxML.
S4 Table. Information on species of rumen bacteria found in the primary literature.
S1 Fig. Few computational resources were required to train neural networks and predict metabolic traits. As Fig 2, except values shown are run time and memory required for training and prediction. Training included tokenization of text.
S2 Fig. Performance of neural networks when inputting full text. As Fig 2, except the full text, not just sentences containing keywords, was inputted. Before tokenization, sentences were truncated to 200,000 instead of 25,000 characters. During tokenization, num_words was set to 5,000 instead of 3,000. The average number of tokens (words) in the input text was 5,817, and it was the same for both traits.
S3 Fig.
Low distances between the predicted and observed trees in Fig 4 confirm these trees are similar. For comparison, we calculated distances between random trees and the observed trees; these distances are high. We constructed random trees by randomly choosing branches from the tree of all species in Fig 4. We ensured that random and predicted trees had the same number of branches. Values are means ± SEM of five replicates (trees generated by independent trainings of the network). One replicate corresponds to the trees shown in Fig 4, and four additional replicates correspond to trees that, for brevity, are not shown in Fig 4. P-values correspond to a t-test.

References

- 1. Falkowski PG, Fenchel T, Delong EF. The microbial engines that drive Earth’s biogeochemical cycles. Science. 2008;320(5879):1034–9. pmid:18497287
- 2. Kuypers MMM, Marchant HK, Kartal B. The microbial nitrogen-cycling network. Nat Rev Microbiol. 2018;16(5):263–76. pmid:29398704
- 3. Fenchel T, Blackburn H, King GM, Blackburn TH. Bacterial biogeochemistry: the ecophysiology of mineral cycling. 3rd ed: Academic Press; 2012.
- 4. Duperron S. Microbial symbioses: Elsevier; 2016. https://doi.org/10.1155/2016/2824802 pmid:27123354
- 5. Atlas RM. Microbial ecology: fundamentals and applications. 4th ed: Pearson; 1998.
- 6. Agapakis CM, Boyle PM, Silver PA. Natural strategies for the spatial optimization of metabolism in synthetic biology. Nat Chem Biol. 2012;8(6):527–35. pmid:22596204
- 7. McCarty NS, Ledesma-Amaro R. Synthetic biology tools to engineer microbial communities for biotechnology. Trends Biotechnol. 2019;37(2):181–97. pmid:30497870
- 8. Whitman WB, editor. Bergey’s manual of systematics of archaea and bacteria: Wiley; 2020.
- 9. Reimer LC, Vetcininova A, Carbasse JS, Sohngen C, Gleim D, Ebeling C, et al. BacDive in 2019: bacterial phenotypic data for high-throughput biodiversity analysis. Nucleic Acids Res. 2019;47(D1):D631–D6. pmid:30256983
- 10. Barberan A, Caceres Velazquez H, Jones S, Fierer N.
Hiding in plain sight: mining bacterial species records for phenotypic trait information. mSphere. 2017;2(4):pii: e00237–17. pmid:28776041
- 11. Louca S, Parfrey LW, Doebeli M. Decoupling function and taxonomy in the global ocean microbiome. Science. 2016;353(6305):1272–7. pmid:27634532
- 12. Mao J, Moore LR, Blank CE, Wu EH, Ackerman M, Ranade S, et al. Microbial phenomics information extractor (MicroPIE): a natural language processing tool for the automated acquisition of prokaryotic phenotypic characters from text sources. BMC Bioinformatics. 2016;17(1):528. pmid:27955641
- 13. Marshall IJ, Wallace BC. Toward systematic review automation: a practical guide to using machine learning tools in research synthesis. Syst Rev. 2019;8(1):163. pmid:31296265
- 14. Beltagy I, Lo K, Cohan A. SciBERT: a pretrained language model for scientific text. arXiv. 2019:1903.10676.
- 15. Lee J, Yoon W, Kim S, Kim D, So CH, Kang J. BioBERT: a pre-trained biomedical language representation model for biomedical text mining. Bioinformatics. 2020;36(4):1234–40. pmid:31501885
- 16. Schmitt C, Walker V, Williams A, Varghese A, Ahmad Y, Rooney A, et al. Overview of the TAC 2018 Systematic Review Information Extraction Track. Proceedings of the Eleventh Text Analysis Conference; 2018.
- 17. Cohan A, Feldman S, Beltagy I, Downey D, Weld DS. SPECTER: document-level representation learning using citation-informed transformers. arXiv. 2020:2004.07180.
- 18. Abadi M, Agarwal A, Barham P, Brevdo E, Chen Z, Citro C, et al. TensorFlow: large-scale machine learning on heterogeneous distributed systems. arXiv. 2016:1603.04467.
- 19. Parte AC, Sardà Carbasse J, Meier-Kolthoff JP, Reimer LC, Göker M. List of Prokaryotic names with Standing in Nomenclature (LPSN) moves to the DSMZ. Int J Syst Evol Microbiol. 2020. pmid:32701423
- 20. Parte AC. LPSN—List of Prokaryotic names with Standing in Nomenclature. Nucleic Acids Res. 2014;42(Database issue):D613–6. pmid:24243842
- 21.
Tao JY, Diaz RK, Teixeira CRV, Hackmann TJ. Transport of a fluorescent analogue of glucose (2-NBDG) versus radiolabeled sugars by rumen bacteria and Escherichia coli. Biochemistry. 2016;55(18):2578–89. pmid:27096355
- 22. Tao J, McCourt C, Sultana H, Nelson C, Driver J, Hackmann TJ. Use of a fluorescent analog of glucose (2-NBDG) to identify uncultured rumen bacteria that take up glucose. Appl Environ Microbiol. 2019;85(7). pmid:30709823
- 23. Zhang B, Bowman C, Hackmann T. A new pathway for forming acetate and synthesizing ATP during fermentation in bacteria. bioRxiv. 2020.
- 24. Dai X, Hackmann TJ, Lobo RR, Faciola AP. Lipopolysaccharide stimulates the growth of bacteria that contribute to ruminal acidosis. Appl Environ Microbiol. 2020;86(4). pmid:31811042
- 25. Chen IA, Chu K, Palaniappan K, Pillay M, Ratner A, Huang J, et al. IMG/M v.5.0: an integrated data management and comparative analysis system for microbial genomes and microbiomes. Nucleic Acids Res. 2019;47(D1):D666–D77. pmid:30289528
- 26. Le Boulch M, Déhais P, Combes S, Pascal G. The MACADAM database: a MetAboliC pAthways DAtabase for Microbial taxonomic groups for mining potential metabolic capacities of archaeal and bacterial taxonomic groups. Database. 2019:pii: baz049. pmid:31032842
- 27. Hackmann TJ, Ngugi DK, Firkins JL, Tao J. Genomes of rumen bacteria encode atypical pathways for fermenting hexoses to short-chain fatty acids. Environ Microbiol. 2017;19(11):4670–83. pmid:28892251
- 28. Castelle CJ, Banfield JF. Major new microbial groups expand diversity and alter our understanding of the tree of life. Cell. 2018;172(6):1181–97. pmid:29522741
- 29. Hug LA, Baker BJ, Anantharaman K, Brown CT, Probst AJ, Castelle CJ, et al. A new view of the tree of life. Nat Microbiol. 2016;1:16048. pmid:27572647
- 30. Mukherjee S, Stamatis D, Bertsch J, Ovchinnikova G, Katta HY, Mojica A, et al. Genomes OnLine Database (GOLD) v.7: updates and new features. Nucleic Acids Res. 2019;47(D1):D649–D59.
pmid:30357420
- 31. Sievers F, Wilm A, Dineen D, Gibson TJ, Karplus K, Li W, et al. Fast, scalable generation of high-quality protein multiple sequence alignments using Clustal Omega. Mol Syst Biol. 2011;7:539. pmid:21988835
- 32. Bodenhofer U, Bonatesta E, Horejš-Kainrath C, Hochreiter S. msa: an R package for multiple sequence alignment. Bioinformatics. 2015;31(24):3997–9. pmid:26315911
- 33. Hackmann TJ. Accurate estimation of microbial sequence diversity with Distanced. Bioinformatics. 2020;36(3):728–34. pmid:31504180
- 34. Stamatakis A. RAxML version 8: a tool for phylogenetic analysis and post-analysis of large phylogenies. Bioinformatics. 2014;30(9):1312–3. pmid:24451623
- 35. Miller MA, Pfeiffer W, Schwartz T, editors. Creating the CIPRES Science Gateway for inference of large phylogenetic trees. 2010 Gateway Computing Environments Workshop (GCE); 2010: IEEE.
- 36. Revell L. phytools: an R package for phylogenetic comparative biology (and other things). Methods Ecol Evol. 2012;3(2):217–23.
- 37. Yu G, Smith DK, Zhu H, Guan Y, Lam TTY. ggtree: an R package for visualization and annotation of phylogenetic trees with their covariates and other associated data. Methods Ecol Evol. 2017;8(1):28–36.
- 38. McMurdie PJ, Holmes S. phyloseq: an R package for reproducible interactive analysis and graphics of microbiome census data. PLoS One. 2013;8(4):e61217. pmid:23630581
The Morgan-Monroe State Forest (MMSF) AmeriFlux tower became operational in February 1998 and stands 48 meters tall. The site is part of the AmeriFlux and FLUXNET networks of sites and regularly contributes data to both the AmeriFlux and FLUXNET databases. The MMSF AmeriFlux project is a joint assessment of the role of forest ecosystems as net carbon sinks by micrometeorological (eddy-covariance, EC) and biometric (carbon-pool increment) methods. These data, and the published analyses derived from them, are increasingly used by network-wide synthesis activities to assess, model, and/or up-scale biosphere-atmosphere exchange of carbon on regional to global scales. The research has been supported by the U.S. Department of Energy (USDOE). For more details about the MMSF AmeriFlux tower and site, click here.
The Power of Thick Eyebrows Eyebrows are often overlooked in the realm of beauty and attraction, but recent research has shed light on the fascinating link between thick eyebrows and attractiveness. Thick, well-groomed eyebrows have become increasingly popular in recent years, but their appeal goes beyond just being a current trend. This article explores the science behind why thick eyebrows are considered attractive, the evolutionary biology that may explain this preference, and the impact of cultural and media influences on eyebrow aesthetics. The Evolutionary Biology Behind Thick Eyebrows From an evolutionary perspective, thick eyebrows may be seen as a sign of good health and vitality. Research suggests that eyebrows serve as a protective barrier, preventing debris, sweat, and other foreign particles from entering the eyes. Thicker eyebrows may have been favored by evolution as they provide better protection for the delicate eyes, enhancing an individual’s chances of survival in the wild. Facial Symmetry: A Key Indicator of Attractiveness Facial symmetry plays a crucial role in determining attractiveness, and eyebrows contribute to this symmetry. Thick eyebrows can help to balance the face and create a symmetrical appearance. Studies have shown that individuals with more symmetrical faces are perceived as more attractive, possibly because facial symmetry is associated with good health and genetic fitness. The Influence of Thick Eyebrows on Facial Expressions Thick eyebrows have a significant impact on facial expressions and can enhance emotional communication. Research has found that individuals with more prominent eyebrows are better at expressing emotions such as surprise, anger, and fear. This may be because thicker eyebrows provide a more defined frame for the eyes, giving individuals greater control over their facial expressions. Thick Eyebrows as a Sign of Confidence Thick eyebrows are often associated with confidence and assertiveness. 
Psychologists have found that individuals with thicker eyebrows are perceived as more dominant and self-assured. This perception may be rooted in the fact that thicker eyebrows draw attention to the eyes, which are crucial in non-verbal communication. The enhanced eye contact created by thick eyebrows can convey confidence and engage others in social interactions. Cultural Differences in Eyebrow Preferences While thick eyebrows may be universally attractive, there are cultural variations in eyebrow preferences. Different societies have diverse beauty ideals, which impact the perception of attractiveness. For example, in Western cultures, thin and well-defined eyebrows have been traditionally favored. However, in recent years, there has been a shift towards thicker, more natural-looking eyebrows. In contrast, some Asian cultures still prefer thinner eyebrows as a symbol of femininity and youthfulness. The Role of Media in Shaping Eyebrow Trends The media plays a significant role in shaping eyebrow trends and influencing perceptions of attractiveness. Popular culture, including films, television shows, and magazines, often showcases celebrities with well-groomed and thick eyebrows. This exposure to celebrities with thick eyebrows contributes to the growing desire for fuller eyebrows among the general population. Celebrities and Their Impact on Eyebrow Fashion Celebrities have a significant impact on eyebrow fashion and trends. Iconic figures like Cara Delevingne and Audrey Hepburn have popularized the trend of thick, bold eyebrows. Their influence has inspired many people to embrace and enhance their natural eyebrow shape, resulting in a shift in beauty standards and preferences. The Science of Eyebrow Enhancements Advancements in cosmetic procedures have made it possible to enhance and shape eyebrows according to individual preferences. 
Techniques such as microblading and eyebrow transplants offer semi-permanent or permanent solutions for those looking to achieve thicker eyebrows. These procedures have gained popularity as individuals seek to achieve the desired aesthetic without relying solely on makeup or natural growth. Enhancing Thick Eyebrows Naturally: Tips and Tricks For those looking to enhance their eyebrows naturally, there are several tips and tricks that can help. Regularly grooming and shaping eyebrows can give the appearance of fullness. Additionally, using brow serums and oils can promote hair growth and improve the overall health of eyebrows. Proper nutrition, including a balanced diet rich in vitamins and minerals, can also contribute to thicker, healthier eyebrows. Eyebrows and Personality Traits: The Connection Recent research suggests that eyebrows may provide subtle cues about an individual’s personality traits. For example, people with thicker eyebrows are often perceived as more outgoing, assertive, and emotionally expressive. On the other hand, individuals with thinner eyebrows may be seen as more introverted and reserved. These perceptions are subjective but offer an intriguing link between eyebrows and personality. The Future of Eyebrow Trends: What to Expect As beauty standards and trends continue to evolve, it is difficult to predict the future of eyebrow aesthetics. However, it is likely that the emphasis on natural-looking, thick eyebrows will persist. With advancements in cosmetic procedures and a growing interest in enhancing natural features, more people may turn to semi-permanent solutions for achieving their desired eyebrow look. Additionally, as diversity and inclusivity become more valued, we may see a celebration of all eyebrow shapes and styles. The link between thick eyebrows and attraction goes beyond mere aesthetics. 
From an evolutionary perspective to cultural influences and media trends, eyebrows play a significant role in shaping perceptions of beauty and attractiveness. Whether naturally thick or enhanced through cosmetic procedures, eyebrows have the power to transform facial expressions, boost confidence, and communicate personality traits. As we continue to explore the fascinating world of eyebrows, it is evident that they are more than just a small strip of hair above our eyes – they are a powerful tool in the realm of attraction.
- Scalars are quantities which are fully described by a magnitude alone.
- Magnitude is the numerical value of a quantity.
- Examples of scalar quantities are distance, speed, mass, volume, temperature, density and energy.
- Vectors are quantities which are fully described by both a magnitude and a direction.
- Examples of vector quantities are displacement, velocity, acceleration, force, momentum, and magnetic field.

Categorize each quantity below as being either a vector or a scalar: speed, velocity, acceleration, distance, displacement, energy, electrical charge, density, volume, length, momentum, time, temperature, force, mass, power, work, impulse.

- electrical charge
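The scalar/vector distinction above can be captured in a short lookup. This is an illustrative sketch, not part of the original notes: the `classify` helper and its quantity sets are hypothetical, with time, power, work, and electrical charge treated as scalars and impulse as a vector, per standard physics convention.

```python
# Hypothetical helper illustrating the scalar/vector classification in the notes.
# Scalars: magnitude only. Vectors: magnitude and direction.

SCALARS = {
    "speed", "distance", "energy", "electrical charge", "density",
    "volume", "length", "time", "temperature", "mass", "power", "work",
}
VECTORS = {
    "velocity", "acceleration", "displacement", "momentum", "force", "impulse",
}

def classify(quantity: str) -> str:
    """Return 'scalar' or 'vector' for a known quantity name."""
    q = quantity.lower()
    if q in SCALARS:
        return "scalar"
    if q in VECTORS:
        return "vector"
    raise ValueError(f"unknown quantity: {quantity}")

print(classify("speed"))     # scalar: magnitude alone
print(classify("velocity"))  # vector: magnitude and direction
print(classify("impulse"))   # vector: force times time, inherits direction
```

Note that speed/distance are scalar while their directional counterparts velocity/displacement are vectors, the pairing the exercise is designed to test.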
Having more types of bacteria on your skin might not sound very desirable, but it could save you from a mosquito bite, according to a study published in PLoS ONE on Dec. 28. Human skin is host to a wide range of microorganisms, many of which metabolize components in sweat to produce compounds that make up an individual’s specific body odor. The research findings suggest that how attractive you are to mosquitoes depends on your unique scent, or in other words, the diversity of your skin microbes. The results might have profound implications in the field of malaria prevention. Niels Verhulst of Wageningen University in the Netherlands and colleagues collected scents from the feet of a group of male volunteers and exposed the odors to Anopheles gambiae, a major species of mosquitoes that carry the malaria parasite. The researchers also cultured bacteria samples from the participants and found that individuals who were more attractive to the mosquitoes had a higher bacteria load but lower microbial diversity. Participants with a higher count of Pseudomonas or Variovorax bacterial species or altogether higher microbial diversity were less attractive. The team postulates that individuals with a wider range of microbes are more likely to host specific bacterial types that produce compounds that somehow interfere with the skin’s attractiveness to this African mosquito species. “The discovery of the connection between skin microbial populations and attractiveness to mosquitoes may lead to the development of new mosquito attractants and personalized methods for protection against vectors of malaria and other infectious diseases,” wrote the study authors in their paper.
Whenever and wherever organic farming is discussed these days, a set of odd-sounding words comes along with it. We take them for granted, since organic farming invariably seems to involve Pseudomonas, Trichoderma, Azospirillum, Azotobacter and many more. At the same time, some say that through organic farming we are going back to our roots in agricultural tradition. But was there a Pseudomonas anywhere in the long tradition of local agriculture? Never. This is the new normal age, and the organic farming of these days is new normal organic farming; the old normal cannot be taken as the touchstone or reality check for the new, we have to admit. Still, are Pseudomonas and Trichoderma not to be located somewhere in the history of non-toxic agriculture? This brief note traces the roots of one word now thoroughly familiar in organic farming: Pseudomonas. If Pseudomonas has a story, how far back must we go to find its beginning? Our story fits comfortably within the twentieth century plus the few years of the present one, and it divides into stages: origin, development and modern times. In the absence of a fossil record or any carbon-dating process for words, we must restrict ourselves to their appearance in the literature over time. The bacterium itself, of course, has existed through millions of years like any other; we can only chart its known coexistence with mankind. The first known reference to Pseudomonas appears in the writing of the German professor Migula of the Karlsruhe Institute at the very end of the nineteenth century. His description of the new genus was short and inaccurate, but even so it is worth quoting. It reads: ‘Cells with polar organs of motility.
Formation of spores occurs in some species, but it is rare (for instance: Pseudomonas violacea)’. That was all. We now know that Pseudomonas strains do not produce spores, and it seems possible that Migula was observing refractile granules of reserve materials, which often look like spores. Unfortunately, Migula never clarified the etymology of the word Pseudomonas in any of his writings, but later taxonomists suggested a direct derivation from the Greek monas, or unit. We owe this verbal lineage to Bergey's Manual of Determinative Bacteriology of 1957. The next known description of the remarkable versatility of Pseudomonas species appears in a 1926 thesis by L. E. den Dooren de Jong, the noted Dutch mycologist and bacteriologist. He was the last student of the famous microbiologist Beijerinck and was assigned the project of examining the soil microflora with respect to the degradation of organic compounds, as part of the process of carbon mineralization. The results were dramatic: bacteria of the genus Pseudomonas proved able to decompose a large variety of organic molecules, including many that are toxic to microorganisms of other groups and to higher organisms. The chaotic condition of Pseudomonas taxonomy was much aggravated by the enormous number of species assigned to the genus; a rough estimate put more than 800 named species in Pseudomonas towards the middle of the twentieth century. Studies conducted meanwhile, particularly in America, on the RNA of several Pseudomonas variants mostly brought promising results. In the old times, the names assigned to new species usually referred to some striking set of observable characteristics such as growth requirements, colour production, colony appearance, etc.
As is quite evident, nobody has so far been able to connect any observable character of Pseudomonas with the roots of the word that forms its name. Still, the darkness that shrouds the etymology of the name in no way hinders the various applications of this bacterium in modern-age organic farming. Among the thousands of strains of Pseudomonas, one species, fluorescens, opens up a whole world of possibilities in agriculture. This single organism has literally rewritten the destiny of organic farming in contemporary times. Heaven knows how many more variants of Pseudomonas will be unearthed in future research, and how great their impact will be on the prospects of agriculture on this planet. Let us wait and see.
Is Sustainable-Labeled Seafood Really Sustainable? Part one of a three-part series by Daniel Zwerdling and Margot Williams. Rebecca Weel pushes a baby stroller with her 18-month-old up to the seafood case at Whole Foods, near ground zero in New York. As she peers at shiny fillets of salmon, halibut and Chilean sea bass labeled "certified sustainable," Weel believes that if she purchases this seafood, she will help protect the world's oceans from overfishing. But some leading environmentalists have a different take: Consumers like Weel are being misled by a global program that amounts to "greenwashing" — a strategy that makes consumers think they are protecting the planet, when actually they are not. At Whole Foods, the seafood counter displays blue labels from the Marine Stewardship Council (MSC), an international, nonprofit organization. The MSC is a prime example of an economic trend: Private groups, not the government, are telling consumers what is good or bad for the environment. The MSC says its label guarantees that the wild seafood was caught using methods that do not deplete the natural supply. It also guarantees that fishing companies do not cause serious harm to other life in the sea, from coral to dolphins. The idea is spreading fast throughout the food industry. Megachains like Target, Costco and Kroger are selling seafood with the MSC label. McDonald's says you are munching on "certified sustainable" wild Alaskan pollock every time you eat a Filet-O-Fish sandwich. The fast-food company has used MSC-certified fish since 2007 in the U.S., and as of February, they are putting the MSC logo on their fish sandwich boxes. Consumers like Weel say the labels help them feel better about the products they buy. "I want to feel that I'm doing the right thing," says Weel, a pediatrician, as her 4 ½-year-old daughter bolts into the vegetable aisle in neon-colored boots. 
When Weel shops for seafood, she says, she wants to make choices "that will help preserve the wild fish populations in the oceans." Executives at Whole Foods say they are helping consumers do exactly that, by pledging in recent years to sell as many MSC-certified products as possible. Seafood is the last major food that people catch in the wild, and "we can't just go out and find more fish to catch," says Carrie Brownstein, global seafood quality standards coordinator for Whole Foods. Brownstein cites a 2012 United Nations report that warned that almost 30 percent of the world's wild fisheries are "overexploited," and more than 57 percent of wild fisheries are "at or very close" to the limit. Other groups have devised ranking systems for seafood. The Monterey Bay Aquarium labels products like a traffic light — green, yellow or red — to urge shoppers to buy or avoid a particular fish. The Blue Ocean Institute has a similar system. The MSC reports it has labeled roughly 8 percent of the global seafood catch, worth more than $3 billion. That makes it the most widespread and best-known rating scheme around the world. A recent survey of 3,000 Americans, conducted on behalf of NPR, suggests that a majority of consumers want to feel good about the seafood they buy. The poll by Truven Health Analytics found that almost 80 percent of the people who eat seafood regularly said it is "important" or "very important" that their seafood is sustainably caught. If they buy MSC-labeled seafood, they may be paying a premium. Brownstein says Whole Foods charges more for some of its seafood labeled "certified sustainable," although she wouldn't give numbers. Some fishing industry executives told NPR that they are getting roughly 10 percent more for their MSC-labeled products than for seafood that's not certified sustainable. 
That's one reason why many environmentalists who supported the MSC in the past say you might be troubled to know what the MSC and supermarkets like Whole Foods are not telling you: "We would prefer they didn't use the word sustainable," says Gerry Leape, an oceans specialist at the Pew Environment Group, one of the major foundations working on oceans policies. Leape has supported the MSC for more than a decade as a member of its advisory Stakeholder Council. But he and other critics say that the MSC system has been certifying some fisheries despite evidence that the target fish are in trouble, or that the fishing industry is harming the environment. And critics say the MSC system has certified other fisheries as sustainable even though there is not enough evidence to know how they are affecting the environment. When a customer sees the MSC's sustainable label at the supermarket, "the consumer looks at the fish and says, 'Oh, it has the label on it, it must be sustainable,' " Leape says. "And in some fisheries that the MSC has certified, that's not necessarily the case." Biologist Susanna Fuller, co-director of marine programs at Canada's Ecology Action Centre, agrees. "We know ... that blue stamp doesn't mean that you're sustainable," she says. When asked if consumers should choose MSC-labeled seafood, Fuller pauses. "It's a gamble," she says. Still, even the MSC's sharpest critics say they support the broad ideas behind the organization and its stated goals. "Originally I thought it was a good idea," says Jim Barnes, director of the Antarctic and Southern Ocean Coalition, a network of dozens of environmental groups around the world. "The world needed something like this to help steer consumer decisions, and so I wasn't against it at all at the beginning. And I'm not totally against it now." But Barnes worries that the MSC is straying from its mission and needs a dramatic overhaul. "It can be a force for good. 
If it continues on the path that it's on, however, and doesn't solve a lot of these issues that have been raised," he says, "I don't think it will be." Protecting The Oceans And The Bottom Line The MSC was born because of a crisis. Michael Sutton, one of its founders, says that he and his colleagues dreamed up the idea after the cod industry collapsed off the Nova Scotia coast in 1992. Cod fishing had been the foundation of the region's economy and culture, worth an estimated $700 million each year. But when the cod population plunged to a fraction of previous levels, the Canadian government banned cod fishing — putting thousands of people out of work. "It was so bad in some of these coastal communities, the government had to send in suicide-prevention teams," recalls Sutton, who was then vice president of the World Wildlife Fund. "We were not only trashing our marine environment, but we were ruining the character of coastal communities that had existed on fisheries for centuries," Sutton says. Sutton and other environmental advocates, and many scientists, warned that the cod collapse taught the world a sobering lesson: Government agencies that were supposed to monitor and regulate fishing were often doing a lousy job. Cod weren't the only fish in trouble. Studies showed that populations of major species like swordfish, marlin and tuna were plunging too. "So we needed to do something drastic," Sutton says. He and colleagues decided to convince industry executives that protecting the oceans would also protect their bottom line. Sutton made a pilgrimage to the Unilever conglomerate, then one of the largest producers of frozen seafood — including fish sticks. "My pitch to Unilever was, 'The future of their frozen fish business is at stake,' " Sutton remembers. "Overfishing is not only bad for the environment, but it's really bad for business, because it means that they're not going to have fish in the future the way they have them today." 
Unilever and the World Wildlife Fund joined hands in 1997, and set up the MSC. Unilever eventually sold its seafood subsidiary and left the program, but the founding partner left its mark: From the day the MSC opened its doors in London, it has been a balancing act between industry and the environment. Today, the MSC has more than 100 employees worldwide, including about 60 at its headquarters in a renovated building down the street from St. Paul's Cathedral. "MSC has a global vision," says Rupert Howes, the organization's chief executive officer. "We want to see the global oceans transformed onto a sustainable basis." MSC's System Of Certification Here's the MSC's basic idea: Executives of a growing number of food companies want to be "green." Some genuinely want to protect the environment; others may be mainly seeking a marketing edge. But when it comes to seafood, those executives don't have the time or knowledge to figure out which fishing companies are plundering the ocean and which ones are doing a good job. So the MSC does the work for them. The MSC does not certify fisheries itself. Instead, a fishery that wants the label hires one of roughly a dozen commercial auditing companies to decide whether its practices comply with the MSC's definition of "sustainable." The MSC's standard for sustainability includes dozens of items, but they're designed to assess whether the population of a fishery's target species is healthy; if the fishing practices don't cause serious harm to other life in the sea — including by accidentally catching other animals, which is called bycatch; and if the fishery has good management. If the commercial auditors give the fishery a passing score, then the fishery gets the right to use the blue "Certified Sustainable Seafood" label. It can be a long and expensive process. Some certifications have taken years, and the fisheries have paid the auditing firms up to $150,000 or more. 
Howes says that when a store sells MSC-certified seafood, the label announces to consumers, "We care where our fish comes from." He adds that as a growing number of food companies sell MSC-labeled seafood, executives of fisheries that don't have it are motivated to join the program. That catalyzes "real and lasting change in the way the oceans are fished," Howes says. During the MSC's first decade, there wasn't much demand for sustainable seafood by the U.S. food industry, and the MSC "almost went bankrupt," Sutton says. And that put the spotlight on the MSC's financial model. The way that executives structured it, MSC's budget comes partly from foundation grants. But some revenue comes from the licensing fees that MSC charges businesses for the right to sell seafood with the MSC label. So as long as many supermarket chains were not promoting it, the MSC wasn't getting much money. Then, in 2006, everything changed. The MSC and its supporters had sent a series of delegations to Bentonville, Ark., world headquarters of Wal-Mart. The delegations helped convince Wal-Mart executives to promise that all the seafood they sell in the U.S. would be MSC-certified by 2012. "We had to get Wal-Mart," Sutton says. "The significance of their commitment, of course, is that once Wal-Mart made a commitment to the Marine Stewardship Council, every other major retailer had to follow suit, because none of them wanted to be less progressive than Wal-Mart." Sure enough, other discount chains promised to go sustainable, too. "Overnight, the demand far outstripped the supply," says Sutton, "and so the suppliers had to catch up." Since Wal-Mart made its pledge in 2006, the MSC system has certified seven times as many fisheries as it did during the same period before, according to NPR's analysis. Still, the MSC system has not been able to certify enough seafood for Wal-Mart to meet its 2012 deadline, according to Bob Fields, a senior buyer for Wal-Mart and Sam's Club.
The explosion in sales of MSC-labeled products at leading chain stores has transformed the organization's finances. The year that Wal-Mart pledged to promote MSC-labeled seafood, the MSC received most of its income from foundation grants — 75 percent, according to the MSC annual report. Meanwhile, it received only 7 percent of its income from label licensing fees. Today, those licensing fees generate more than half of the MSC's revenue. And since Wal-Mart executives embraced sustainable seafood, the MSC has also received millions of dollars in grant money from the Walton Family Foundation, which was created by Wal-Mart's founder and is governed by his descendants. The Walton Family Foundation has become one of the MSC's largest donors, according to financial reports. The director of the foundation's environment programs, Scott Burns, served on the MSC's board of directors before he went to Walton. Critics say that the day Wal-Mart embraced sustainable seafood, it was a blessing for the MSC system — and a curse. The critics charge that the MSC system has compromised its standards to keep up with the booming demand from Wal-Mart and other chains that followed suit. Fuller, of the Ecology Action Centre, says she has watched the MSC system "struggling with meeting the demands of the system that they helped create ... They have ended up having to lower the bar." When ocean specialist Daniel Pauly, a fisheries professor at the University of British Columbia, talks about the MSC today, he sounds dispirited. Pauly took part in early meetings in London that helped create the MSC and now says he has lost faith in the system. "The MSC is doing the business of the business community," Pauly says, not the environment. Balancing 'Sustainable' Swordfish With At-Risk Sharks Some environmentalists and scientists say if you want to understand why they're losing faith in the MSC, look at the battle over certifying Canadian swordfish. 
Next time you buy swordfish at a store like Whole Foods, it might come from a controversial fishery off the coast of Nova Scotia. Fishermen have known for ages that when they go swordfishing in some parts of the Atlantic, they will accidentally catch sharks — lots of sharks, says Steve Campana, who runs the Canadian government's Shark Research Laboratory, near Halifax, Nova Scotia. When NPR caught up with Campana one morning, he and his research crew were heading into the Atlantic on a 34-foot trawler, the Dig It. They were planning to attach sophisticated satellite transmitters to blue sharks. "On average, from what we've seen over the years, the swordfishermen catch about five blue sharks for every one swordfish," Campana said, holding onto a metal strut as the Dig It bounced through the waves. Add it up, studies suggest, and Canada's long-line swordfish boats — so named because they typically let out 30 or 40 miles of fishing line, dangling more than 1,000 hooks — accidentally catch tens of thousands of sharks every year. This touches on one of MSC's three fundamental rules, even though studies show swordfish are plentiful. The second rule says that a fishery is not sustainable if it does not maintain "the integrity of ecosystems" — which means, in part, that it's not sustainable if there is too much bycatch. The Committee on the Status of Endangered Wildlife in Canada, which is funded and appointed by the Canadian government, has warned that the main kinds of sharks that swordfishermen accidentally catch are "threatened" or "endangered" or "of special concern." Swordfishermen generally release the sharks. But there had been few studies on what happens to those sharks after fishermen let them off the hooks — until Campana and his colleagues came along. About six years ago, they started tagging sharks with satellite transmitters before fishermen set them free. 
During one outing, the crew showed how they do it: They snagged a 5-foot blue shark on a hook baited with mackerel, reeled it in, and then pinned the thrashing shark against the boat's broad, flat railing. They jabbed a satellite transmitter, which looks like a turkey baster with a barb on one end, into the shark's leathery skin. And then they let the shark go, the transmitter protruding like an unsightly growth. The device is equipped with a computer chip that records data every 10 seconds, including where the shark goes, how deep it goes, and how long it stays there. After about 10 months, the tube pops off the shark and floats to the surface, beaming all the information via satellite to Campana. When the transmitter shows that a shark went to the deepest part of the sea and just stayed there, Campana knows when and where the shark died. Campana and his colleagues published some of their first findings based on these studies in July 2009, in the journal Marine Ecology Progress Series. Their studies showed that up to 35 percent of the sharks caught by swordfish boats die, either right on the hook or within days after the fishermen set them free. The findings suggested that Canadian swordfish boats accidentally kill almost two sharks for every swordfish they catch. Campana says that when you put these findings in context, it is troubling. Other studies suggest that the populations of major kinds of sharks in the North Atlantic have plunged as much as 40 to 60 percent in just the past few decades. "Any time you see consistent declines like that, and the fact that all of these large sharks seem to have declined all over the world," Campana says, "it's just a worrisome pattern." The president of Canada's swordfish industry, the Nova Scotia Swordfishermen's Association, dismisses Campana's conclusions. Campana's report on shark deaths could not have come at a worse time for Canada's swordfish industry.
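The figures quoted above can be checked with simple arithmetic: roughly five blue sharks hooked per swordfish, with up to 35 percent of hooked sharks dying, gives about 1.75 shark deaths per swordfish, consistent with the reported "almost two sharks for every swordfish they catch." A minimal back-of-the-envelope check:

```python
# Back-of-the-envelope check of the figures quoted in the article:
# ~5 blue sharks hooked per swordfish, and up to 35% of hooked sharks die
# (either on the hook or within days of release).

sharks_hooked_per_swordfish = 5   # Campana's observed bycatch ratio
mortality_rate = 0.35             # upper-bound death rate from the tagging study

sharks_killed_per_swordfish = sharks_hooked_per_swordfish * mortality_rate
print(f"{sharks_killed_per_swordfish:.2f} sharks killed per swordfish")  # 1.75
```

Both inputs are the article's own numbers; the 35 percent figure is an upper bound, so the true kill ratio implied by the study is at most about 1.75.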
Only months before the report was published, the association, which catches most of Canada's commercial swordfish, had applied to the MSC for certification. The industry sells much of its swordfish to Whole Foods and other stores in the U.S. Those conclusions "were not close to what the industry felt was reality," Troy Atkinson, president of the association, says while sitting in his store, crammed with giant spools of plastic fishing line and boxes of heavy metal hooks. He runs the main business that supplies equipment to Canada's swordfishing fleet. "We're sometimes portrayed as a bunch of cowboys out to harvest the last buffalo," he says. "We're portrayed as some of the worst in the world. And it's just not correct." Atkinson cites reports by other researchers that conclude that the population of blue sharks off the coast of Canada is healthy – especially reports by the International Commission for the Conservation of Atlantic Tunas (ICCAT), which represents dozens of governments whose nations fish the Atlantic. So, Atkinson says, Canada's swordfishermen could catch and kill even more sharks without hurting the environment. Other studies suggest the evidence is contradictory, and that scientists don't know for sure what is happening to sharks across the Atlantic. For example, the optimistic ICCAT researchers whom Atkinson cites acknowledge that their conclusions are "highly uncertain" because they're based on unproven assumptions and incomplete data. However, studies showing that blue sharks have sharply declined focus on a limited region. So scientists and environmentalists were dumbfounded in early 2012 when the MSC system decided that Canada's swordfish industry can use the label "Certified Sustainable Seafood." "That is absolutely the kind of fishery that should not be certified," says Leape of Pew Environment Group. "That fishery is outrageous." Certifying Canadian swordfish "is the worst thing they can do," says Fuller, of the Ecology Action Centre.
"That is not at all the way it should go." A Program Based On 'Science And Evidence' The Ecology Action Centre and dozens of other environmental groups denounced the MSC. The groups said in a letter to the MSC system that roughly 10 percent of Canada's swordfish are caught with harpoons — a method environmentalists support because there is hardly any bycatch. But the long-line boats that supply most of the swordfish catch a "staggering" number of sharks, as the environmentalists put it. "Certifying [Canada's long-line swordfish boats] compromises the credibility of the MSC," the groups warned, "and the sustainable seafood movement as a whole." Howes, from the MSC, disagrees. He says the controversy over Canadian swordfish "illustrates a key feature of the MSC program, which is the fact that the program is premised on science and evidence. That fishery has met the MSC standard." The analysts who evaluated the fishery for the MSC system agreed that the swordfish boats do kill large numbers of sharks. They acknowledged that the optimistic studies on sharks that the swordfish industry cites are uncertain, but they concluded that the weight of evidence suggests it is "highly likely" there are plenty of blue sharks left in the sea. The analysts also stressed that, by all accounts, other countries kill far more sharks than Canada's swordfishermen do. So, they said, Canada causes only a small part of the bycatch problem. "We are not saying that shark bycatch doesn't matter," says Howes. "What we're saying implicit within the labeling of that fishery is, the shark bycatch of that unique individual certified fishery is safe. It's within ecological limits." Barnes, of the Antarctic and Southern Ocean Coalition, says the controversy over Canadian swordfish illustrates why the booming demand for sustainable seafood actually threatens to hurt the movement more than help it. 
"The bottom line is that there are not enough truly sustainable fisheries on the earth to sustain the demand," Barnes says. "The retailers and wholesalers all want access to this kind of label because they're trying to ... make money with their consumers. There's nothing wrong with that; that's how the world works." But Barnes charges that the MSC is labeling some fisheries as sustainable — even when they are not — partly to fill the seafood counters at Wal-Mart and other large chains. "I'm not down on Wal-Mart at all, don't get me wrong," he says. "But to get on line with big chains as your goal leads you down a path that I don't think the originators of the MSC intended." Howes could hardly disagree more. "If you really want to contribute to the transformation of our economic systems more generally, you've got to engage with the big guys. And therefore, I absolutely welcome Wal-Mart's commitment," he says. "That will drive change." Howes continues: "Will that overload the MSC system? No." He argues that there's no way the MSC could label problem fisheries sustainable just to satisfy demand, because, he says, the certifiers evaluate each fishery based only on scientific evidence. But he adds, "We want to see oceans fished sustainably forever. We're not going to achieve that by becoming a small niche organization that engages with a handful of perfect fisheries." Researcher Barbara Van Woerkom contributed to this story. MELISSA BLOCK, HOST: This is ALL THINGS CONSIDERED from NPR News. I'm Melissa Block. ROBERT SIEGEL, HOST: And I'm Robert Siegel. In this part of the program, we're going to hear about seafood, specifically about the two words that are changing the way many of us buy our seafood: Certified Sustainable. BLOCK: The movement to sell seafood that can be fished without doing harm to the species or the environment is no longer limited to high-end grocery stores. 
Wal-Mart, Kroger's, even McDonald's are getting into the act, selling wild caught seafood with those magic words: Certified Sustainable. SIEGEL: The Marine Stewardship Council gives its seal of approval to fisheries that they say help protect the oceans. The MSC is also a prime example of an important trend: private groups, not the government, telling consumers what's good or bad for the environment. BLOCK: Many environmentalists say the MSC is a good idea. But as NPR's Daniel Zwerdling reports, even some of its supporters warn you don't always get what you pay for. DANIEL ZWERDLING, BYLINE: Come with me for a moment to a supermarket and you'll see why this program for sustainable seafood is so compelling. You'll also learn that shopping for so-called sustainable seafood is trickier than you might like. I'm meeting an executive from Whole Foods at their store near Ground Zero in New York. It's like a temple for food. There are mountains of fruits and vegetables and chocolates and olive oils. And look at the seafood counter; there are shiny whole fish and fragile filets and mussels and clams and shrimps - they're all nestled in beds of ice. CARRIE BROWNSTEIN: I'm Carrie Brownstein. I'm the Global Seafood Quality Standards coordinator. And we're here to talk about seafood sustainability. ZWERDLING: She says companies like Whole Foods have to move to sustainable seafood. Fish and shellfish caught in the sea are the only major foods left that people still get in the wild. Brownstein cites a recent study from the United Nations. It warns that almost 30 percent of the world's fisheries are over-exploited, which means that people are fishing them faster than they can rebound. And most of the other fisheries are near their limit. So, push them any more and they could decline. BROWNSTEIN: We can't just go out and find more fish to catch. ZWERDLING: You're convinced people need to be worried about seafood and the oceans. BROWNSTEIN: Yeah, people need to be careful.
People need to be careful. And I think that retailers like Whole Foods and other companies can play a huge role in making a difference for the oceans. ZWERDLING: So, Whole Foods has promised to sell as much seafood as it can that the Marine Stewardship Council says is sustainable. There are other groups that decide which seafood's good or bad for the environment. For instance, one rates them green, yellow or red, like traffic lights. But the MSC system is the most extensive. They say they've certified more than $3 billion worth of seafood. Brownstein points to the fish with the blue MSC logo. BROWNSTEIN: We have a number of MSC-certified fish here right now. We've got halibut, king salmon, Chilean sea bass... ZWERDLING: The logo is an abstract fish with a check mark. BROWNSTEIN: ...and swordfish. ZWERDLING: And the logo proclaims the words... BROWNSTEIN: Certified sustainable by the Marine Stewardship Council. ZWERDLING: In other words, the MSC promises that if you buy this seafood, you won't contribute to over-fishing. And you won't be killing off other life in the sea, whether it's dolphins or coral. A mother has just parked her baby stroller at the seafood counter. Rebecca Weel says she depends on the MSC and Whole Foods to tell her what to buy. REBECCA WEEL: I want to feel that I'm doing the right thing without putting too much effort into figuring it out on my own. ZWERDLING: Now, if you see two fish, one says MSC-certified sustainable and the other one doesn't... WEEL: I would definitely choose the sustainable one over the not-sustainable one. ZWERDLING: Are you willing to pay any more for fish that's sustainably certified? WEEL: I would personally, yes. ZWERDLING: Brownstein wouldn't give details, but she said that Whole Foods does charge more for some seafood that's labeled by the MSC. So, ocean specialists say you might be troubled to hear something that Whole Foods and the MSC are not telling you. 
GERRY LEAPE: In our view, we would prefer they didn't use the word sustainable. ZWERDLING: Gerry Leape helps oversee oceans programs for the Pew Charitable Trusts. He's worked with the MSC for more than 10 years on their official advisory council. And I asked him... When one of our listeners goes to the supermarket and they see that MSC label - this seafood is sustainable - can they believe it's true? ZWERDLING: That's a long pause. LEAPE: It is a long pause. You can't believe across the board that it's necessarily sustainable. ZWERDLING: Or ask a biologist named Susanna Fuller. She co-directs the Marine Program at the Ecology Action Center in Canada. When I go to the supermarket and I see there's fish with a blue label - certified sustainable - and other fish that aren't, should I buy the MSC fish? SUSANNA FULLER: You know, I - you - you know, you're - it's a gamble. ZWERDLING: To understand why many environmentalists say it's a gamble, I joined a research trip one morning off the coast of Nova Scotia. (SOUNDBITE OF AN ENGINE) ZWERDLING: Remember the MSC swordfish, back at Whole Foods? The chain gets some of it from the swordfish industry right here in Canada's waters. Just last spring, the MSC labeled it certified sustainable. ART: My name is Art. My first mate here is Kyle. If you're going to get sick, if you're going to get sick, come out here and throw up out here. Do not throw up inside. ZWERDLING: We're on a 34-footer called Dig It. And we're heading straight into the Atlantic Ocean. The scientist who's leading this expedition is Steve Campana. He doesn't study swordfish. He runs the Canadian government's Shark Research Laboratory, near Halifax. His studies are one reason why many environmentalists around the world say you can't trust the MSC label. Most swordfish boats are called longliners. They let out up to 40 miles of fishing line, dangling with a thousand hooks. And studies show those hooks accidentally catch tens of thousands of sharks every year. 
Campana says, especially blue sharks. They're called bycatch. STEVE CAMPANA: On average, from what we've seen over the years, the swordfishermen catch about five blue sharks for every one swordfish. So that suggests that it's not really a swordfish fishery that happens to catch sharks, it's a shark fishery that happens to catch swordfish. ZWERDLING: That's one way of putting it. This should be crucial information for the Marine Stewardship Council. Their rules say that a fishery is not sustainable if it's depleting the population of target fish. That's not the case here; studies suggest that the swordfish themselves are in good shape. But the MSC rules also say that a fishery is not sustainable if it causes too much damage to other animals in the sea. The Canadian government appoints and finances an agency that studies disappearing wildlife. And the agency warns that the kinds of sharks the swordfishermen accidentally catch are threatened or endangered or of special concern. I told my mother the other day that I was going to come out on this boat. And my mother said, why do I care about sharks? I like eating the swordfish. CAMPANA: Sharks are, well, they're, they're the king of the food chain. So they are the equivalent of the lions on the Serengeti Plains of Africa. And if you suddenly wiped out all of the lions, undoubtedly, you would find very strange things happen to the ecosystem there, probably unpredictable things. ZWERDLING: So, a lot of scientists and environmentalists told us, we don't get it - how can the MSC say the swordfish industry is sustainable? We'll come back to that question in a moment and to this research boat. But first, a quick history of the MSC. Let's go back to 1992. (SOUNDBITE OF ARCHIVED NEWS CLIP) UNIDENTIFIED WOMAN #1: Good evening. The news was expected but that didn't make it any less devastating. 
It's a moratorium on fishing on Northern Cod, a ban that will affect about 20,000 people and gut the backbone of the Atlantic fishery... ZWERDLING: One of the most important fisheries in the world had collapsed. Canada's cod industry had been worth an estimated $700 million a year. CBC Television announced their government was dealing with the crisis by making cod fishing illegal. (SOUNDBITE OF ARCHIVED NEWS CLIP) UNIDENTIFIED WOMAN #2: And with that, fishermen stormed the doors of John Crosbie's news conference. (SOUNDBITE OF A CROWD AND YELLING) UNIDENTIFIED WOMAN #2: Security guards locked the doors and frantically called for help. MICHAEL SUTTON: Thousands of people thrown out of jobs, the closure of the cod fisheries. ZWERDLING: Mike Sutton ran the oceans programs back then for the World Wildlife Fund. SUTTON: I mean, it was so bad in some of these coastal communities, they had to - the government had to send in suicide prevention teams. ZWERDLING: Sutton says the collapse of the cod industry shocked people. They finally realized the oceans are in big trouble. And Sutton says the crisis proved something else: the government agencies around the world that were supposed to protect the oceans were often doing a lousy job. SUTTON: We were not only trashing our marine environment, but we were ruining the character of coastal communities that had existed on fisheries for centuries. So we needed to do something drastic. ZWERDLING: Sutton and his colleagues said, we have a solution. Since government officials aren't protecting the oceans, let's convince industry they have to do it. Sutton met with executives at Unilever. They were one of the biggest seafood suppliers in the world. SUTTON: My pitch to Unilever was, the future of their frozen fish business is at stake. Overfishing is not only bad for the environment but it's really bad for business because it means that they're not going to have fish in the future the way they have them today. 
ZWERDLING: And in 1997, Unilever and the World Wildlife Fund joined hands. They set up the Marine Stewardship Council. So, the MSC was a balancing act between industry and the environment from the day they opened their doors. The headquarters are down the street from St. Paul's Cathedral in London. RUPERT HOWES: MSC has a global vision. We want to see the global oceans transformed onto a sustainable basis. ZWERDLING: That's the man who runs the MSC, Rupert Howes. These offices could be a software company. They are designer chic, lots of glass, hardly anybody wears a tie. The MSC is a registered non-profit. HOWES: Okay. MSC employs about 100 people around the world. We have 12 offices globally. On this floor, we have over here our fundraising team... ZWERDLING: And here's the MSC's basic idea. More and more food companies want to look green, right? Some want to help the environment. Some want to attract customers. But when it comes to seafood, those executives don't know which fishing companies are plundering the ocean and which ones are doing a good job, so the MSC system studies it for them. All food companies have to do is offer seafood with a blue MSC label, certified sustainable - and they can probably tell consumers, hey... HOWES: We care where our fish comes from. We care how it was fished. ZWERDLING: And as more and more food companies sell seafood with the MSC label, fisheries that don't have it will think we better get certified, too, or we're going to lose business to our competitors. Howes calls this the MSC's theory of change. HOWES: The theory of change is very, very simple. If we could use a certification and labeling program to create an incentive for fisheries to improve the way they fish the oceans, we could catalyze real and lasting change in the way the oceans are fished. ZWERDLING: But it turned out that during the MSC's early years, the American food industry wasn't excited about sustainable seafood. 
In fact, Michael Sutton, who helped create it, says the MSC almost went bankrupt. The MSC's budget comes partly from foundation grants and partly from licensing fees from businesses that sell seafood with the MSC label. So as long as many supermarket chains weren't promoting it, the MSC wasn't getting much money. But starting in 2005, the MSC and its supporters sent a series of delegations to Bentonville, Arkansas, headquarters of Wal-Mart, and everything changed. (SOUNDBITE OF VIDEO) UNIDENTIFIED WOMAN: Wal-Mart is one of the largest retail purchasers of wild-caught and farm-raised seafood. ZWERDLING: That's a Wal-Mart company video. Mike Sutton and the other delegates helped convince Wal-Mart's executives to make a promise. Wal-Mart would sell as much seafood as possible that's certified sustainable in all their American stores. (SOUNDBITE OF VIDEO) UNIDENTIFIED WOMAN: The goal of the seafood network is to have our wild-caught fisheries certified by the MSC, or Marine Stewardship Council, an... SUTTON: We had to get Wal-Mart and the significance of their commitment, of course, is that once Wal-Mart made a commitment to the Marine Stewardship Council, every other major retailer had to follow suit because none of them wanted to be less progressive than Wal-Mart. ZWERDLING: Target promised to go sustainable. Kroger and Costco promised, too. The problem was the MSC had not labeled enough seafood sustainable. SUTTON: Overnight, the demand far outstripped the supply and so the suppliers had to catch up. ZWERDLING: Listen to this fact. Since Wal-Mart embraced sustainable seafood in 2006, the MSC system has certified at least seven times as many fisheries as it did during the same period before. And critics say they've compromised their standards to do it. And that brings us back to Steve Campana. He is the government scientist on the research boat off the coast of Halifax. CAMPANA: We are hoping to get some satellite tags onto blue sharks. 
ZWERDLING: Remember, studies showed that the Canadian swordfish boats accidentally catch five times as many sharks as swordfish. The swordfishermen generally release most of those sharks, but until Campana came along, nobody had really studied what happened to the sharks after the fishermen let them off the hooks. So about six years ago, Campana started tagging a random sample of sharks with a satellite gizmo before the swordfishermen let them go. On this particular outing, Campana's not on a commercial swordfish boat. He's on a charter that does sport fishing. He's also studying how that affects sharks. The captain has tossed out a fishing line with a chunk of mackerel on the hook. CAMPANA: We have a shark. Yeah, baby. ZWERDLING: The shark's really thrashing. They haul a blue shark up on the boat. It's about five feet long, wild eyes. Two men hold it down. CAMPANA: The flipper just knocked his hat off. ZWERDLING: They record the vitals. CAMPANA: Sex? Female. ZWERDLING: Female. Then Campana jabs a tube into the skin. It looks like a turkey baster with a barb on the end, which attaches to the shark. CAMPANA: They're not like mammals. They're not like people. They don't feel pain the same way. ZWERDLING: The tracking device will record where the shark goes, how long it stays there. And then, about 10 months from now, the tube will pop off the shark and float to the surface and it'll beam the information by satellite to Campana. CAMPANA: All right. So the satellite tag's in. Good to go here. ZWERDLING: And they ease her back into the ocean. This shark looks okay for now, but Campana's studies show that up to 35 percent of the sharks caught by swordfish boats die within days after they're caught. Add it all up and Campana's findings suggest that Canadian swordfish boats accidentally kill almost two sharks for every swordfish they catch. 
Other studies estimate that the populations of major kinds of sharks in the North Atlantic have plunged 40 percent, even 60 percent in just the past few decades. CAMPANA: Any time you see, you know, consistent declines like that and the fact that all of these large sharks seem to have declined all over the world, for them all to be declining, it's just a worrisome pattern. ZWERDLING: Campana published some of his major findings back in 2009 and they couldn't have come at a worse time for Canada's swordfish industry because only a few months before, the industry's president had gone to the Marine Stewardship Council. He said, we want Canadian swordfish to be certified sustainable. Now, here was a troubling study about bycatch by one of the government's top researchers. The industry president didn't buy it. TROY ATKINSON: I had a disagreement with the results. They were not close to what the industry felt was reality. ZWERDLING: You're basically rejecting the findings of, you know, one of the most respected shark scientists. ATKINSON: Yep, I am. ZWERDLING: Troy Atkinson runs the Nova Scotia Swordfishermen's Association. It's based in Halifax. They catch most of the swordfish exported from Canada. Atkinson says, there are some studies out there that conclude that blue sharks in the North Atlantic are doing great. ATKINSON: We're sometimes portrayed as a bunch of cowboys out to harvest the last buffalo, you know. We're portrayed as, you know, some of the worst in the world. And it's just not correct. ZWERDLING: You're saying that your swordfish fleet could be catching or killing more blue sharks and the oceans would still be healthy. ATKINSON: Correct. Yes, that is indeed the fact. ZWERDLING: Actually, the evidence is contradictory. It suggests that scientists don't know for sure what's happening to sharks across the Atlantic. 
For instance, the optimistic researchers Atkinson cites, who say that blue sharks are doing great, acknowledge that their conclusions are highly uncertain - their words - because they're based on all kinds of assumptions and incomplete data. On the other hand, the studies that show that blue sharks have sharply declined focus on a limited region. So some scientists and environmentalists were dumbfounded last year when the Marine Stewardship Council added up all this confusion and proclaimed Canada's swordfish industry certified sustainable. Is it sustainable? FULLER: No, no. It's not at all. ZWERDLING: That's Susanna Fuller again, from the Ecology Action Center in Halifax. The Center joined forces with other groups like the Sierra Club, Oceana, the Shark Research Institute and they denounced the MSC. They said a small amount of Canada's swordfish is caught with harpoons and we're all for that. But 90 percent of the swordfish comes from longline boats, which studies show catch tens of thousands of sharks. How can the MSC call that sustainable? FULLER: It's so egregious. We said, you know, why are you guys doing this because we're trying to actually help get some trust behind that label? And by certifying this fishery, you are just, like, undermining a whole bunch of consumer trust. ZWERDLING: The analysts who evaluated the fishery for the MSC system agreed that the swordfish boats do kill large numbers of sharks and they agreed that the optimistic studies on sharks are uncertain. Still, they concluded that when you put all the evidence together, it is highly likely, their words, that there are plenty of blue sharks left in the sea. The analysts also said that other countries kill way more sharks than Canada's swordfishermen do, so Canada's only a small part of the bycatch problem. We asked the head of the MSC in London what he thinks about bycatch. Rupert Howes. 
Suppose you said to your family, listen guys, every time we eat a swordfish here in our house, the fishermen who caught it killed two sharks and just dumped their bodies in the ocean. How would your family react to that? HOWES: It's a very good question and I think it illustrates a key feature of the MSC program, which is the fact that the program is premised on science and evidence. That fishery has met the MSC standard. We are not saying that shark bycatch doesn't matter. What we're saying implicit within the labeling of that fishery is, the shark bycatch of that unique individual certified fishery is safe. It's within ecological limits. ZWERDLING: Back on Dig It, off the coast of Canada, I ask the scientist who is studying the sharks, Steve Campana, given all your studies that show that the swordfish industry is accidentally killing huge numbers of sharks, how can the Marine Stewardship Council say that this swordfish industry is sustainable? CAMPANA: That's an excellent question and I don't have the answer to that. JIM BARNES: Well, I don't know why he ducked that one. Well, I think the answer's obvious. ZWERDLING: Jim Barnes is a lawyer. He runs an international coalition of 30 environmental groups. He says ever since the Wal-Marts of the world said they want sustainable seafood, it's been a blessing for the sustainable movement and a curse. BARNES: The bottom line is that there are not enough truly sustainable fisheries on the earth to sustain the demand. The retailers and wholesalers all want access to this kind of label because they're trying to, you know, make money with their consumers. Again, there's nothing wrong with that. That's how the world works. ZWERDLING: Jim Barnes, are you saying that the MSC system is rushing to certify seafood as sustainable when they know it's not, at least partly because they need to fill the seafood counters of Wal-Mart and other huge chains? BARNES: Yes, in some cases, they're doing that. 
I'm not down on Wal-Mart at all, don't get me wrong. But to get on line with big chains as your goal leads you down a path that I don't think the originators of the MSC intended. HOWES: If you really want to contribute to the transformation of our economic systems more generally, you've got to engage with the big guys. ZWERDLING: The MSC's president, Rupert Howes. HOWES: And therefore, I absolutely welcome Wal-Mart's commitment. That will drive change. And so your question about will that overload the MSC system? No. ZWERDLING: Howes says there's no way the MSC could label problem fisheries sustainable just to satisfy demand, because, he says, the certifiers evaluate every fishery based only on scientific evidence. But Howes says, you also have to realize, this is the real world. There is no such thing as perfection. HOWES: We want to see oceans fished sustainably forever. We're not going to achieve that by becoming a small niche organization that engages with a handful of perfect fisheries. ZWERDLING: Even the biggest critics told me that the Marine Stewardship Council has accomplished some important things, and they say maybe here's the most important: the MSC has helped get people to think about sustainable seafood. Back when the MSC got started, not many people did. NPR conducted a survey for this story with a company called Truven Health Analytics. The results show that almost 80 percent of Americans who eat seafood regularly said it is important or very important that their seafood is sustainably caught. Daniel Zwerdling, NPR News. SIEGEL: Our story was co-reported by NPR's Margot Williams. Tomorrow, critics say the MSC can make a positive difference when it honors its original goals. Meanwhile, you can learn more about the pros and cons of sustainable seafood at NPR.org. Transcript provided by NPR, Copyright NPR.
There’s still more than enough science to figure out in regard to how we’ll successfully make it to Mars, and survive once we get there. But some scientists are working to make sure we don’t kill each other on the way. A NASA-funded study is set to crank up soon, and will assign six volunteer “astronauts” to a faux Mars base in Hawaii for a full year to see how they handle working together in tight quarters for such an extended amount of time. The mission is the culmination of previous four-month and eight-month missions at the same site. The study will be held at the Hawaii Space Exploration Analog and Simulation (HI-SEAS). Kim Binsted, HI-SEAS principal investigator and a professor at University of Hawaii at Mānoa, had this to say about the mission goals: “The longer each mission becomes, the better we can understand the risks of space travel. We hope that this upcoming mission will build on our current understanding of the social and psychological factors involved in long duration space exploration and give NASA solid data on how best to select and support a flight crew that will work cohesively as a team while in space.” These folks will spend all their time in the habitat, and any time they go outside, they’ll have to wear faux spacesuits. Though it seems almost insane to give up a full year of your life to perform fake tests and handle fake crises on a fake base station in Hawaii, these brave volunteers will almost certainly provide valuable data that could come in handy once NASA starts putting together a team for the Red Planet. Considering a Mars mission will certainly be the longest, and most challenging, human journey in history, figuring out a way to make sure everyone can get along will be a major part of the equation. The last thing we need is a case of space madness that far from Earth. Godspeed, fake astronauts.
NNSA's Los Alamos National Laboratory undergoes 10-year effort to completely rebuild key nuclear weapons component LOS ALAMOS, NM -- The Department of Energy's National Nuclear Security Administration (NNSA) today recognized the production of the first replacement pit in 18 years for a nuclear weapon. An essential piece of every U.S. nuclear weapon, the pit is typically made of plutonium and acts as a trigger, allowing a weapon to function. The pit was built for the W88 nuclear warhead ahead of schedule and under budget by NNSA's Los Alamos National Laboratory, with support from other sites in the nuclear weapons complex. "Having this capability means that we can maintain the safety, security and reliability of the W88 nuclear weapon without having to conduct underground nuclear tests," said NNSA's Acting Administrator Bill Ostendorff. "This achievement could not have been possible without the tremendous scientific and technical expertise at NNSA's Los Alamos National Laboratory and the very important contributions from the rest of the nuclear weapons complex." After the Rocky Flats Plant in Colorado stopped production in 1989, the country lacked the capability to manufacture pits, also called primaries, for the stockpile. However, under its Stockpile Stewardship Program to ensure the reliability, safety and security of the weapons without underground nuclear testing, NNSA regularly takes apart and examines weapons. Most of the weapons are reassembled and returned to the stockpile through this process, but some of the inspections are so thorough that the pit and other components must be destroyed. An insufficient number of W88 pits were manufactured at Rocky Flats to allow for the necessary destructive evaluations. Thus, replacement pits are needed to provide continued confidence in the current W88 stockpile. 
Because the Los Alamos laboratory had the only remaining complete plutonium manufacturing capability in the country, it was tasked in 1996 with recreating the W88 pit that was built at Rocky Flats. This led to a 10-year effort to plan, develop, design, build, qualify and guarantee a replacement W88 pit process at Los Alamos. In 2000, NNSA established an office and project schedule to build and certify the first pit by September 2007. The effort culminated with NNSA's recent certification of the W88 pit for inclusion in the stockpile. "This success is due to dedication and hard work by the people of Los Alamos National Laboratory," said Michael Anastasio, laboratory director. "Meeting technical, scientific, and manufacturing challenges is what this laboratory is all about. I am very proud of the work and of everyone involved in this important accomplishment." By using small-scale plutonium experiments, data from past underground nuclear tests, groundbreaking materials science, extensive statistical analysis and adapted computer codes, Los Alamos was able to guarantee the structural and thermal integrity of the pit, ensuring it will have the same reliability and performance as the original pit. More than 900 laboratory personnel contributed to the effort, which included developing and qualifying 100 different production processes and installing more than 20 major pieces of manufacturing equipment. In addition to Los Alamos, other parts of NNSA's nuclear weapons complex contributed to the efforts. The Lawrence Livermore National Laboratory supplied radiographic inspection capabilities, produced small scale plutonium samples for testing and provided engineering evaluations and technical peer reviews. The Kansas City Plant provided engineering support and tooling manufacture expertise to streamline the project. Sandia National Laboratories supported manufacturing and calibration efforts. 
The W88 pit has been sent to NNSA's Pantex Plant for installation into a warhead. Los Alamos is on schedule to make 10 W88 pits each year. In order to keep up with the certification demands of the stockpile, the laboratory is on track to demonstrate that its pit manufacturing capability can produce 30 to 50 pits per year by the end of the 2012 to 2014 timeframe. Established by Congress in 2000, NNSA is a separately organized agency within the U.S. Department of Energy responsible for enhancing national security through the military application of nuclear science. NNSA maintains and enhances the safety, security, reliability and performance of the U.S. nuclear weapons stockpile without nuclear testing; works to reduce global danger from weapons of mass destruction; provides the U.S. Navy with safe and effective nuclear propulsion; and responds to nuclear and radiological emergencies in the United States and abroad. NNSA Public Affairs (202) 586-7371
Extensive research has made it clear that the Mediterranean diet is one of the healthiest ways to eat. A growing number of people are discovering the advantages of this method, which is good for various systems of the body. Residents of countries along the Mediterranean Sea region have long consumed certain foods that ward off illnesses and diseases while promoting longevity. Their cuisine consists primarily of plant-based foods, along with fish and whole grains. Saturated fats and salt are avoided, and red meat and white grains are severely limited. The Mediterranean plan placed first on a list of healthiest diets that U.S. News & World Report compiled. The publication also rated the diet No. 1 for diabetics, people wishing to bolster their cardiovascular system, and those looking for an “easy to follow” culinary program. Reasons to Switch to the Diet Mediterranean nutrition leads to much lower levels of low-density lipoprotein, the dangerous kind of cholesterol that damages arteries. As a result, those who consume the foods in this diet are less vulnerable to dying from heart disease, according to a study of more than 1.5 million people. They experience fewer heart attacks and strokes, and tend to maintain more stable blood pressure, cholesterol and triglyceride levels. A review of many studies revealed that the risk of developing cancer is 13 percent lower for people on the Mediterranean diet. Breast cancer is less common for women because of the regimen’s emphasis on nuts and olive oil. This manner of eating also has been found to deter prostate, liver, colorectal, gastric and other cancers. Mediterranean dieters are 40 percent less likely to suffer from dementia. There are far fewer cases of Alzheimer’s and Parkinson’s disease, as well as depression. Experts also believe the diet fosters bone strength, fends off osteoporosis, and helps people with Type 2 diabetes control their blood sugar. Ingredients of the Mediterranean Diet It all starts with vegetables and fruits. 
Experts recommend 10 daily servings. The best choices are green, leafy veggies – especially kale, collard greens, spinach and cabbage. Fresh, organic produce is preferable. Fish, chicken and turkey are much leaner than red meats such as beef and pork. Salmon, albacore tuna, trout, herring and sardines contain extremely beneficial omega-3 fatty acids. Many nutritionists suggest eating fish twice a week. Grill or bake it instead of frying. Bread, pasta, rice and cereal should consist of whole grains. They leave people less susceptible to hypertension, heart disease and diabetes. Refined grains contribute to obesity and inflammation. Beans and other legumes, as well as nuts and seeds, provide protein. While high in fat, they do not have saturated or trans fats. Try almonds, cashews, pistachios and walnuts. Limit consumption to about one handful of nuts daily. Mediterranean cooks traditionally use olive oil, though organic canola oil is an acceptable alternative. Olive oil offers antioxidants and anti-inflammatories that lessen the risk of heart attacks, strokes, high cholesterol and blood-sugar levels, and Alzheimer’s disease. The extra-virgin type is the best. Most other oils are packed with saturated fats. The same is true of butter and margarine. In addition to olive oil, alternatives include nut butter, coconut oil, a clarified butter called ghee, Greek yogurt, avocados, mashed pumpkin or bananas, and applesauce. Salt raises blood pressure and causes other health problems. The Mediterranean diet features herbs and spices instead, which can be purchased individually or in seasoning mixes. The foods in this style of eating are often eaten with heart-healthy wine, which has numerous qualities that enhance well-being. However, moderation is important since over-consumption of alcohol causes multiple health problems. 
Foods to Limit or Avoid Among the things not to eat (in addition to those previously mentioned) are processed meats like hot dogs and sausage, anything with added sugars, and processed and packaged foods. Most restaurant meals come with unwanted ingredients such as salt and unhealthy fats. Dairy is not a significant element of Mediterranean cuisine. Experts advise only small amounts of low-fat milk, cheese and yogurt. However, beware of all types of products claiming to be “diet,” “low-fat” or “natural.” Read the labels to get the facts. This article originally appeared on our sister site, Flyost.com © 2019 Diversify Media, Inc. All rights reserved.
Rank: 1st Sgt. Regiment: 15th Connecticut Infantry Regiment, Co. D (1862-1865). Service: July 31, 1862 - March 26, 1865.
Rank: Pvt. Regiment: 27th Connecticut Infantry Regiment, Co. A (1862-1863).

New Haven grocer Alexander Storer and his wife had four children: Justus (b. ca. 1837), Mary, George, and Henry. Justus enlisted as a 4th sergeant in the 15th Connecticut Infantry, Company D, on July 31, 1862. He was captured on March 8, 1865, at Kinston, North Carolina, and paroled on March 26, 1865. He mustered out as a 1st sergeant on June 27, 1865. His brother George enlisted as a private in Company A, 27th Connecticut Infantry, which was organized at New Haven in October 1862. He fought at Fredericksburg and was taken prisoner at Chancellorsville, but survived the war. Henry Storer received notice that he had been drafted on July 18, 1863, but apparently did not enroll in any Connecticut regiment.
Forensics, DNA Fingerprinting and Human Origins

This week we take a foray into forensics, as Detective Inspector Alan Cook from Essex Police joins us to talk about how DNA is used to solve crimes, Professor Sir Alec Jeffreys from Leicester University helps us brush up on how DNA fingerprinting works, Dr Tamsin O'Connell from the University of Cambridge describes how archaeologists extract DNA from old bones and how DNA can help us track down our human origins, and in Kitchen Science we have the first ever radio DNA fingerprinting race, in which schools will battle it out to find out which of the Naked Scientists is the foul-footed felon with the criminally smelly feet...

Download as mp3

We have a picture of my wife's father in the police force in 1947. He's wearing a hat and holding a truncheon, which we now have in our possession; it has been kept safely in a box. Would it be possible to extract DNA from the truncheon to...

How can someone extract and sequence DNA from something that's been buried for centuries?

When you have DNA left on a piece of clothing, how do modern techniques match the DNA with the guilty suspect?

Matthew has emailed us with an addition to an answer last week.

I've heard that there's a single origin for the emergence of Homo sapiens. Is there any scientific background to this claim?

What is the possibility of an erroneous DNA match? There was a case I saw about 8 years ago in which a man was convicted. When the evidence was re-examined at a later date, the bars of the DNA fingerprint did not match. What's the probab...

I've heard that bone marrow transplants can cause havoc with DNA testing. Is this the case, and if so, how can the problem be resolved?

Is there any such thing that could erase or change DNA?
Think about the structure you had as a child in school, grades K-12. You had teachers, parents and a system designed to move you up and through it. The very design of the American school system is one of incremental learning progression, with promotion to a higher grade eventually leading to graduation from the system. If there had been no system in place for your education, would you have had the inclination or the discipline to get the education you got? Why is it that school provides a system to get you through the education process, then leaves you alone without a basic structure or system for success beyond the basics? Why are people going through our school systems not taught about their potential? Why are our children not given the tools to understand their potential and to make the most of it as they grow and develop? The sad truth is that after the education we are forced into, most people stop learning.

How to Learn

If you want to be successful in any endeavor, you must commit to learning. Most people look at learning as what they did in school; after graduation, they feel their learning is over. Yet never at any time in history has so much free knowledge been available on any subject. From libraries to the internet, there is enough information available to earn the equivalent of a doctorate in any chosen field. Why is it, then, that most people resist learning and any system of learning outside of their formal education? Commencement is a ceremony that most go through at graduation. Both the words graduation and commencement carry meanings of continuing on: commencement quite literally means beginning or originating, while graduating implies moving to another level. There are so many things to learn and, with that knowledge, so many ways to improve and enrich our lives. Knowledge, education and training could not be a more powerful lever for applying Incremental Advantage to your life.
The Japanese have a term for the process of never-ending improvement: kaizen. The problem with knowledge is that most people have a built-in resistance to such a commitment, placing greater emphasis on entertainment and escape than on improvement. Improving yourself is a daily and incremental activity. Look for things that pique your curiosity and new things to learn. Adapt your learning to small changes and improvements. Continue to learn and apply knowledge in small ways to further improve, then build on those improvements. This positive kaizen cycle is the bedrock of creating incremental improvement in your life and achieving your goals. The principles we have discussed are known by many yet applied by few. Being part of the group that applies them sets you apart and allows you to separate yourself from the crowd.

This post is an excerpt from the book Stack the Logs, written by Kahuna Business Group’s Founder/CEO Frank F. Lunn. Interested in learning more about how Kahuna Accounting can help you grow your business? Schedule a strategy call with Kahuna Accounting and we can discuss best practices for bookkeeping, profit, and growth!
A fable is a very short story with a lesson to teach, called a moral. Fables are fun to tell, and they help teach good behavior as well. Aesop was a storyteller. He lived around 2,500 years ago in ancient Greece. Aesop's favorite stories to tell were fables, because they were short and they were fun. Some scholars think Aesop never existed. Others believe he was a slave in ancient Greece. Since nobody knows for sure, we prefer to believe there was a storyteller in ancient Greece named Aesop. There is no record that Aesop ever wrote anything down. He probably just remembered the stories he told, and told them orally. About 2,000 years later, a monk wrote down these wonderful tales of talking animals and little morals. They have been known as "Aesop's Fables" ever since. Here is one of our favorite fables told long ago by an ancient storyteller named Aesop.

The Fox and the Goat

Once upon a time, a long time ago, a fox fell down a well. He was stuck there for quite a while. Finally, a goat wandered by. "What are you doing?" asked the goat curiously. "Stay away," snarled the fox. "This is my water." "That's not fair," snapped the goat. "Why should you get all the water?" Before the fox could say another word, the goat jumped in the well. Quick as a flash, the fox leaped on the goat's back and out of the well. He ran happily off, leaving the goat stuck in the well.

THE MORAL OF THIS STORY: Do not always believe what you hear from someone who is in trouble.
The powerful imagery and psychological intensity of Ward's wordless novels have elicited comparisons to the writings of Hawthorne, Melville, and Poe, and they continue to influence modern graphic novelists such as Frank Miller. This 1930 work tells a gripping tale through imagery alone, consisting solely of 128 hauntingly rendered woodcuts.

HILDEGARDE H. SWIFT (1890-1977) wrote several books for children. Best known for “The Railroad to Freedom,” which was cited for a Newbery Honor, Ms. Swift spent her life recording the lives of heroic Americans. “The Little Red Lighthouse and the Great Gray Bridge” is her most popular picture book.

LYND WARD (1905-1985) illustrated more than two hundred books for children and adults throughout his prolific career. Winner of the Caldecott Medal for his watercolors in “The Biggest Bear,” Mr. Ward was also famous for his wood engravings, which are featured in museum collections throughout the United States.
In 1912, an unsinkable ship, the Titanic, sank when it hit an iceberg. In modern-day learning situations, unwitting trainers sink training programs when they announce, "We're going to start with an icebreaker." Most of the participants, in response, experience a sinking feeling. And they may be right. The human brain is wired to preserve the person it inhabits. Consequently, it has two primary goals: survival and pleasure. Survival is obviously important. Without survival, the bra
Metal detecting has become a popular pastime and hobby, especially for people interested in local history and culture. Over the past few years, it has also become a way to earn money, since there have been instances of people finding massive amounts of treasure. In 2009, a man with a basic metal detector found the largest Anglo-Saxon treasure hoard discovered to date; it included thousands of gold ornaments. Before that, in 1977, a metal detecting enthusiast found a 4.9 kg chunk of gold in California. Despite these instances, you have to understand that it is very rare to find something so unique and expensive. Most of the time you will end up with lost coins, buried cans, scraps of metal, and so on. At this point, you need to understand that metal detecting is not only about treasure hunting. You might not win a jackpot or a lottery like the other person. Metal detecting is more like a hobby that helps people build peace of mind and spend their time productively. Unfortunately, a common misconception in our society is that metal detecting is for old people. It is not; it is just like any other hobby you can take up. Now that we have been introduced to how things work, let's move on to the step-by-step guide.

7 Important Terms to Know About Metal Detection

Before moving on to the basics of metal detecting, let's look at some of the technical terms that you will regularly come across and should know.

1. Ground Balance: The adjustment made to the metal detector based on the type of soil and the type of ground it is being used on.

2. Target Separation: The ability of the metal detector to detect and identify targets or finds that are close to each other. However, this feature is rarely needed.
3. Discrimination: By changing the settings of the metal detector, users can differentiate between different types of metals and objects detected. This helps users decide where to dig and where not to dig.

4. Ferrous vs. Non-Ferrous: This refers to the type of metal. Ferrous metals contain iron, while non-ferrous metals do not contain iron and are non-magnetic. The latter include brass, gold, silver, nickel, tin, lead, etc.

5. Frequency: You might have heard this term before, but it means something else when it comes to metal detecting. The frequency of every metal detector varies depending on the rate of magnetic pulses sent into the ground. It is measured in kilohertz, and different metals react to different frequencies. Hence, the frequency settings can help you discriminate among types of metals.

6. Mineralization: This term refers to the quantity of iron and salt already present in the ground, which can interfere with the working of the metal detector. Hence, in places like black-sand beaches (due to iron) or red-dirt areas, be prepared that you might not get the desired results.

7. Interference: Last but not least is interference. Metal detectors use magnetic pulses to find metal pieces in the ground; hence, anything with an electromagnetic field can interfere with the detector's operation.

Do Your Treasure Hunting Research

Now that you have decided you want to get into the world of metal detecting, it's time for some research. Before looking into the equipment and the things you need for this new endeavor, it is important to research the types of areas in your locality that are ideal for metal detecting. Make sure you familiarize yourself with the terminology, local and common practices, and the proper method of metal detecting. The best way to do this is by going to a local library and finding books on metal detecting.

Metal Detection Equipment Requirements

Moving on to the type of equipment you will need.
The market is loaded with entry-level, mid-range and premium or flagship metal detectors. However, since you will just be stepping into the field, it is advised that you don't invest a lot of money in the gear. Before spending large amounts of money, make sure you have learned the art of metal detecting; what's the point of spending money on gear you are not yet good at using?

As far as the equipment required for metal detecting is concerned, there are no bounds. For starters, however, you will need an entry-level metal detector, some gloves, a scoop, pouches and bags to carry your finds, some towels, and extra search coils. As far as price is concerned, you can easily find a good entry-level metal detector for between $200 and $500. Just so you know, premium-level metal detectors can cost as much as $10,000. Some of the most renowned brands for good-quality, authentic metal detectors in each segment are:

Finding the best metal detector is a tricky job and depends heavily on the type of detecting you want to do. However, some of the all-rounders on the market that have been labeled fan favorites are:

Garrett Ace 300: This detector has a five-item search indicator and costs only about $250. It can be used to detect coins, relics, pieces of jewelry, etc., though it does not come with discrimination options. This is a waterproof metal detector that, despite being affordably priced, provides users with top-notch, premium-quality features.

Bounty Hunter TK4 Tracker: Even though experienced detectorists don't usually recommend Bounty Hunter metal detectors, this one is pretty good at what it does. It can set the ground balance automatically and costs somewhere around $100, which is why many detectorists like this tracker. This metal detector can be used by adults and children alike; it comes with top-notch discrimination and is very lightweight.
It will set users back about $170.

Teknetics EuroTek PRO: It is lightweight and has good discrimination abilities, yet it is one of the most underrated metal detectors available.

Best Places to Go for Metal Detecting

As mentioned, proper research is the key to finding the best spots for metal hunting. One thing is certain: you will only have luck finding treasure when you go to old and unused places. Areas with construction around them usually don't house any treasure; the only thing you will detect is metal in the garbage. While looking for the best places to hunt, make sure you don't go to off-limits areas (private property). In most cases, historical parks, monuments and historical sites are off-limits for metal detecting. Some of the most popular places people go for metal detecting are:

Old Homesteads: Many detectorists have found amazing things and treasures at old homesteads. Some found buried treasure, while others found boxes and jars filled with money and jewelry. If you know someone who lives on an old farm or has bought an old house, this might be your golden chance to get your hands on something fun and maybe expensive.

Campgrounds: A campground is vast and is usually an area where people lose things. If you are interested in finding things like pocket knives or coins, going to a campground might be a good option. You won't be allowed into army campgrounds, but you can always check out children's summer campgrounds.

Swimming Areas: As strange as it sounds, people usually lose their precious jewelry and similar items when they go to the beach to swim. Looking for metallic objects in these swimming areas during low tide might help you find something precious. If you can find the owner, returning a lost item brings an inner satisfaction of its own; if you cannot, the find is yours.
The above-mentioned areas are the most common for finding things; apart from these, you can also explore abandoned areas around your city. People have been known to find amazing things in areas considered haunted, since most people don't go there. You can also look into towns built around waterways. Abandoned buildings, schools, large playgrounds, lakes and local gathering grounds are some of the best places for metal detecting.

Metal Detection Rules and Regulations

Where metal detecting and the law are concerned, there are certain rules you must follow. There is a code of conduct that every metal detecting enthusiast needs to observe. Some of the most basic rules are:

1. No Trespassing: As mentioned, trespassing should be avoided at all costs. Don't forget that every piece of land has an owner, and before you set out to find any buried metal, make sure you have the owner's permission. This is important to avoid arguments and disagreements.

2. Protected Sites: Some areas are protected by law, and nobody is allowed to detect metal at these places. They mostly include sites of scientific interest and historic sites.

3. Handling of Artifacts: Even though the chances are low, you might find a precious artifact or something of historical importance. You should know how to handle such finds and keep them safe to prevent degradation.

4. Reporting Findings: If you end up with a historical find, make sure you report it to the landowner or tenant, as well as to the local body that deals with artifacts.

Apart from these basics, some ethics of metal detecting are:

- Don't ruin crops or contaminate water areas.
- If you come across garbage, pick it up. Keep a separate garbage bag for such things.
- Make sure you have the right digging tools so that damage to the property is minimal.
- Don't disturb the animals roaming the area.
- Don't damage cemeteries.
- If you find a buried deadly weapon, notify the police immediately.
- Don't dig in dry-soil grounds.
- Do not leave trash around.
- Make sure you fill in the holes you make.

Keep in Touch With the News

If, after trying out metal detecting, you decide you like it, make sure you stay updated about things happening around you. News of newly found artifacts is usually broadcast in the media. People also go on metal detecting marathons together once a year or once every two years; being part of such activities not only builds your confidence but also helps you learn new ways to detect metals. Hence, make sure you are well aware of current happenings in the area.

Metal detecting is more of a hobby than a business. Based on a common misconception, people believe that they will come across a huge treasure that will make them rich. The reality is quite the opposite: you will have to invest a fair amount of money before you even start finding ordinary coins and metal pieces. Metal detecting has also been reported to improve practitioners' thinking ability and peace of mind. Hence, it is advised that you treat the activity as a hobby rather than as a business. In any case, make sure you know and comply with the local rules regarding metal detecting; you don't want any lawsuits filed against you. Overall, metal detecting is a fun activity and can be taken up by adults, children and teenagers alike. So keep having fun, enjoy your time metal detecting, and keep researching.
Diesel engines have been popular in commercial vehicles and heavy-duty trucks for decades, and they are becoming more common in passenger vehicles as well due to their efficiency and power. However, diesel engines have different maintenance needs and repair procedures than gasoline engines. In this blog, we’ll take a closer look at diesel auto repair. Diesel Engine Maintenance Diesel engines require regular maintenance to ensure optimal performance and longevity. Some common maintenance tasks for diesel engines include: - Regular oil changes: Diesel engines typically require more frequent oil changes than gasoline engines due to their higher operating temperatures and combustion pressures. - Fuel filter replacement: Diesel engines require clean fuel to operate properly, so it’s important to replace the fuel filter regularly to prevent contaminants from entering the engine. - Air filter replacement: Diesel engines require a steady supply of clean air to operate efficiently, so the air filter should be checked and replaced as needed. - Coolant system maintenance: Diesel engines generate a lot of heat, so it’s important to maintain the coolant system to prevent overheating and engine damage. Diesel Engine Repair When it comes to diesel engine repair, it’s important to work with a qualified technician who has experience working with diesel engines. Some common diesel engine repairs include: - Fuel system repair: Diesel engines have a complex fuel system, and issues with the fuel injectors, fuel pump, or fuel lines can cause a variety of problems. - Turbocharger repair: Many diesel engines are equipped with a turbocharger, which can fail due to issues with the bearings, turbine, or wastegate. - Glow plug replacement: Glow plugs are used to preheat the engine during cold weather, and they can wear out or fail over time. 
- Compression testing: Diesel engines rely on high compression to ignite the fuel, so if the engine is experiencing issues with starting or performance, a compression test may be necessary to diagnose the problem. Diesel Engine Repair Costs The cost of diesel engine repair can vary depending on the make and model of the vehicle and the specific repairs needed. In general, diesel engine repairs can be more expensive than gasoline engine repairs due to the complexity of the engine and the specialized knowledge and equipment required for repair. Diesel engines are becoming more common in passenger vehicles, and they offer many benefits, including improved fuel efficiency and power. However, diesel engines have different maintenance needs and repair procedures than gasoline engines, so it’s important to work with a qualified technician who has experience working with diesel engines. Regular maintenance and timely repairs can help ensure optimal performance and longevity for your diesel vehicle.
Children are influenced by their surroundings. It’s important to start young when teaching them habits, especially considering that children develop habits by age nine. Habits, both bad and healthy, can be hard to break. The earlier we introduce healthy habits, the more likely these habits will stick for years to come.

Do It Now, Not Later

Take advantage of your child’s formative years and show them the most important habits to follow, so you can make sure they’ll continue them as they mature. Don’t postpone this until you feel like your child can understand what you’re talking about — they probably process and pick up on more than you realize. Look at handwashing, for instance. It’s an easy habit you can teach very early on. Lead by example, and make it a point to emphasize how you wash your hands because you know it’s important. Introduce the habit by telling children each time you’re washing your hands. Ask them to join you, and sing a song together to make sure you’ve washed your hands long enough. This turns the habit into a game and allows children to associate feelings of fun and positivity with handwashing. When your child grows old enough to master handwashing, you can start explaining when they must wash their hands, such as before eating, after playing, after sneezing or coughing, and after using the restroom.

Help Children Learn Self-Control

Even as adults, it’s difficult to make the right decisions every time. Now think of how difficult it must be for a child to prioritize eating healthful snacks over sugary treats. It’s easy not to consider this an issue, but children who learn self-control at an early age are set up for more success later in life. Again, it’s important to lead by example and help children develop self-control while they’re still young. Teach children about healthy eating habits and how to incorporate more fruits and vegetables into their diet. Turn dinnertime into a bonding experience.
Give kids age-appropriate tasks to help out in the kitchen as you make dinner. Make healthy snacks easily accessible for kids, too. One of the reasons they opt for sugary treats is that those are easy to grab on the go. Pre-sliced vegetables and fruit in snack-sized containers — on a shelf in the fridge that the kids can reach — make healthy eating more convenient and appealing. There are also many children’s books available that teach about healthy eating habits, as well as other good habits you want your children to learn. Make a day of visiting the library and finding these books together.

Breaking Bad Habits Is Difficult

Habits are actions and behaviors we perform subconsciously, and they’re insanely difficult to break. This is because when we form and repeat habits, the chemical dopamine is released in the brain, causing a feeling of pleasure and a strengthened habit. This is why it’s so important for your children to develop healthy habits from an early age. Make it easy for your child to form healthy habits. Maybe every day when they come home from school, they head straight to the fridge for those sliced fruits and veggies, or they’re enrolled in a weekly gymnastics or sports program. Another healthy habit could be playing outside for 30 minutes before starting on homework. Whatever the habit, make it easy to adopt and continue.
Article By: Amanda D. Stein

In recent years, the Department of Defense has increasingly integrated Unmanned Air Systems (UAS) into a number of missions, from aerial surveillance to data collection. For men and women in theater, specially designed UAS are lightweight, portable and capable of saving lives. In the growing field of unmanned systems, NPS researchers continue to explore ways to maximize the efficiency and capabilities of UAS, and to use them to give the warfighter a leg up on the adversary.

For Systems Engineering students Marine Corps Captains Derek Snyder and Dino Cooper, the mission is to make a back-packable UAS a multi-dimensional tool, capable of fulfilling more than intelligence, surveillance and reconnaissance (ISR) operations. Their research focuses on equipping the UAS with retrofitted, consumable kits that could assist in a variety of missions, from enemy tracking to carrying small arms to counter snipers.

“Concealment provides opportunities for adversaries to coordinate attacks,” explained Cooper. “New technologies must be utilized to improve the current ISR capabilities to ascertain combatants in unfamiliar environments. Effective systems to tag, track, locate and identify non-state adversaries must be developed and employed.”

Although the two men completed separate thesis projects, their work began to converge when Snyder realized that, although his intention was to develop a UAS small-arms kit, the same concept could be applied to delivering chemical tags, which paralleled Cooper’s project. In both projects, the use of air vehicles to deliver taggants means the warfighter can keep a safe distance from the adversary and leave little trace of ever having been there – a component critical to many operations in theater.

“The goal is that the adversary won’t know that you’ve been there,” explained Snyder. “The airplane doesn’t touch the ground. There won’t be any footprints.
You don’t want there to be any convoy activity, because whenever anyone goes out, there is a lot of activity and the adversary can see that. So, if a small, battery-powered airplane delivers the taggant, they really won’t know that it happened.”

Taggants can take many forms, so delivery methods, particularly for combat scenarios, are generally not something that companies focus on during development. There, both students saw potential to adapt tools the Armed Forces already have readily available – UAS – to deliver these taggants. Cooper explored methods to quietly and safely deliver Perfluorocarbon Tracer (PFT) pellets, a form of taggant, from UAS. The chemical tags are odorless, colorless, and virtually undetectable without the appropriate equipment, but light enough to be delivered by an RQ-11 Raven B. Cooper enlisted the help of his co-advisor, Mechanical and Aerospace Engineering Professor Dr. Kevin Jones, who built a prototype of the pocket delivery system to be attached to the Raven.

U.S. Marine Corps Capt. Derek Snyder prepares to hand-launch the FQM-151 Pointer Small Unmanned Air System, armed with a non-lethal payload, while conducting experiments at Camp Roberts, Calif.

Once deployed, the Raven would release the payload, creating a boundary of pellets that, if crossed, would mark an individual with the taggant, leaving little question that the person had been somewhere they should not have been – such as setting a roadside IED or tampering with equipment. That taggant could then be detected at checkpoints or inspections using special detecting equipment.

The Skate small UAS performs a taggant payload dispersal test in flight during a series of experiments conducted at Camp Roberts, Calif. The test is part of Marine Corps Capt. Dino Cooper’s thesis project on UAS taggant delivery.

For Snyder’s thesis, testing included attaching a modified paintball gun to the UAS and testing its capabilities as a system for countering snipers.
After hearing a story about a Navy SEAL who had lost two men, pinned down by enemy sniper fire, Snyder saw an opportunity for a SUAS to, at the very least, fire non-lethal rounds that would distract the adversary. Although his project went only as far as delivering non-lethal force, he hopes that the lethal concept will be explored further by future students.

“Right now the smallest operational weaponized UAS is the Hunter, which is still around 2,000 lbs. and requires a protected runway for launch and recovery,” said Snyder. “No others are operational yet. So I wanted to see if there was a way to get something that had an ISR capability, the ability to be back-packable, and could be affordably weaponized.”

After ironing out the logistics of getting a three-pound paintball gun down to a single pound, Snyder tested the attachment on the Quadrotor and Raven UAS. As the project progressed, he began to look at how his thesis intersected with Cooper’s.

“I think the main purpose for gaining autonomy is to have the ability to multi-task in situations where it is definitely needed,” explained Cooper. “There is always that situation where there is a convoy pinned down and you are trying to track down your adversary, who has the ability to blend right in to the environment.

“You need the ability to covertly fulfill your mission, to tag or to take out the sniper, and you need to be able to do this in such a way that you avoid collateral damage … the more collateral damage you cause, the more fuel the adversary has to turn against young men out there.”

One of the complexities of working with UAS is the number of restrictions placed on testing them. Because of airspace and safety regulations, they can only be tested in designated locations.
Through NPS’ Field Experimentation (FX) Program, directed by Information Sciences Professor Ray Buettner, both students were able to utilize the Tactical Network Testbed (TNT) at Camp Roberts, Calif., to test their projects in a controlled environment. “If you didn’t have a place like Camp Roberts and TNT to go and test these things, you really are just stuck. I don’t know where else you could go,” Snyder said. The combination of FX support and the expertise of NPS’ students and faculty helped Snyder and Cooper not only test their projects, but receive invaluable feedback on the applications of their designs. “From the companies that develop these products, you’ve got engineers who are right out of school, never been in the military, and don’t have the experience to understand the real-world applications,” explained Snyder. “And they are really begging for valuable feedback. “We had Army operators provide feedback to us,” he continued. “I am a pilot but not a UAV guy, and these guys may not be UAV guys either, but they have been under enemy fire. So the feedback from active duty operators who have experience with things like this is huge for us.” Buettner praised the efforts of both students, noting that their projects represent just a fraction of the valuable, hands-on research being done through the FX programs. Between the resources of the program and the interest of sponsors, the technologies being developed at NPS have incredible potential to make it to the battlefield and into the hands of the warfighter. “We are very excited about their work,” explained Buettner. “Snyder and Cooper are very good examples of NPS thesis work that has immediate payoff. In terms of their individual capabilities, they will be more likely to get to the battlefield rapidly because they are at NPS and we cooperate with the U.S. Special Operations Command in evaluating and exploring new technologies.
So not only is it a good idea, it’s a good idea in the right place.” Snyder and Cooper have completed their theses and are scheduled to graduate in September. But they hope that the research on UAS will continue, giving the warfighter the option to utilize UAS for more than just ISR capabilities. “With the technology available to us, we have to be equally as innovative as our adversaries. They are adapting car alarms to set off IEDs,” explained Cooper. “They have the ability to look at technology in such a simple way and use it to their advantage. “With our investments in technology, we should be able to do that on our end as well,” he continued. “We can look at what’s readily available. Rather than reinventing the wheel and developing a UAS from the ground up for a specific task, why can’t we capitalize on what we have already and make slight modifications to adapt to new situations?”
Posted July 25, 2011
Marine Corps Capt. Dino Cooper demonstrates the ease of mounting a specially designed taggant pocket, built by NPS Professor Dr. Kevin Jones, to the RQ-11 Raven.
Natural Selection Vocabulary
Transcript of Natural Selection Vocabulary
Real, Cast, & Trace
The structures generally just carry out the same function, like for birds and dragonflies, but they may show evidence of a common ancestor! Example: Polar bears have black skin, but their white fur helps them blend in! Example: The King Snake isn't poisonous, but the Coral Snake is! Most animals avoid the king snake for fear it's really a
The Jarawa Tribe
Transcript of The Jarawa Tribe
Emily Klepczarek & Rhiannon Smith

Life before colonisation
Before the 19th century the Jarawa were located in the southeast part of South Andaman. When white people settled in the area the numbers of the Jarawa people decreased, mostly from disease. They live in approximately 650 square km of land. The surrounding sea was rich in marine life, while the forests are devoid of large wildlife. Migration of the tribes on the Andamans has been by crossing the sea, using the islands as stepping stones. Different tribes on the islands traded fish for wild berries and honey and interacted with the other tribes. The Jarawa tribe believed that they belonged to the land and were free to fish and catch what they needed. Before colonisation the Jarawa tribe was free to roam the Andaman Islands without worrying about illnesses and diseases, but that all changed in 1789. The Jarawa people are a small society of hunters and gatherers who live on the isolated Andaman Islands in the Bay of Bengal, India. Their present population is estimated at between 250 and 350 people. The ancestors of the Jarawa and the other tribes of the Andaman Islands are thought to have been part of the first successful human migrations out of Africa. Several hundred thousand Indian settlers now live on the islands, vastly outnumbering the tribes.
The Jarawa hunt pig and monitor lizard, fish with bows and arrows, and gather seeds, berries and honey. They are nomadic, living in bands of 40-50 people. In 1998, some Jarawa started coming out of their forest to visit nearby towns and settlements for the first time.

Tools and way of life
• 'Towa' has the shape of an arrow with no stick but has a width of 4-6 inches. They keep the Towa in their waist guard, known as 'tohe'.
• Their iron blade tied to a wooden handle is called 'toub', with which they mainly shape bows from wood.
• The Jarawa engrave geometric designs on their bows with an iron knife.
• 'Thom' is the Jarawa word for a pointed iron arrow, which is generally 10-12 cm in length.
• Thosulatetotoha is an iron arrowhead, while its supporting stick or shaft is called 'thene' or 'thenang' by the Jarawa. To sharpen the edge of their weapons and implements they use a stone called 'Ulli'.
• Their digging stick of iron is called 'wohen', while their bucket of taung peing wood is called 'uhuo'. They call their basket 'taj', which they make from cane.
• They collect nylon nets from the seashore and shallow water to make small net bags to store their fish and collected stuff. They also construct their fishing hand net with bark, bamboo and cane and call it 'potochehut'.
• Their indigenous knife made of stone is called 'ulihe'. Their indigenous torch (tuhu-ga) is made with 'dhup' (Canarium euphyllam) and dhani leaves.

A brief history of the Indigenous people
Before the 19th century, the Jarawa homelands were located in the southeast part of the Andaman Islands. In 1789 British settlement largely decimated the Jarawan population through disease, alcoholism and destruction, leaving the western areas open, where the Jarawa people gradually made their new homeland. By 1997 the Jarawa people vigorously maintained their independence and distance from external groups, actively discouraging most raids and attempts at contact.
Impacts of colonisation
In 1998, they came into contact with the outside world and have increasingly been the ones choosing such contact. All contact, especially with tourists, remains extremely dangerous to the Jarawa due to the risk of disease.

The indigenous people today
Fact file
By Olivia Blunden
• Total population: approx. 250-400.
• The Jarawa are a small society of hunter-gatherers who live on the isolated Andaman Islands in the Bay of Bengal, India.
• The tribe had little real contact with the outside world until 1998, when growing numbers of Indian settlers began to creep into their forest home.
• For centuries the Jarawa tribe were notorious for using bows and arrows to kill all intruders into their jungle home.
• Tourists are not supposed to enter the reserve or have any contact with the tribe, as they can bring diseases to which the Jarawa have no immunity.
• The Jarawa, who live in the rainforests, hunt wild pigs, monitor lizards and fish, and gather fruits and berries. Their lives are synchronised with the environment around them.
• As nomadic tribes who depend on hunting, fishing and gathering, their traditional foods consist of wild boar, turtles and their eggs, crabs and other shore animals, fruits like jackfruit, and honey.
• They call themselves 'Ya-eng-nga', which means 'human being'.
• Since 1998, they have been in increasing contact with the outside world. All contact, especially with tourists, remains extremely dangerous to the Jarawa due to the risk of disease.
• Once they were feared warriors; now the Jarawa can be found around the roadside waiting to beg for biscuits and cakes from tourists.
• Children loiter by the road and men sometimes try to trade wild honey they have gathered for packets of biscuits.
• There are about 400 Jarawas living today.
• Neither women nor men wear any clothes; they walk around naked with some type of accessory.
Pancreatic cancer is a highly lethal disease whose aggressive biology is driven by mitochondrial oxidative metabolism. Mitochondria normally form a network of fused organelles, but we find that patient-derived and genetically engineered murine pancreatic cancer cells exhibit highly fragmented mitochondria with robust oxygen consumption rates (OCR). When mitochondrial fusion was activated by genetic or pharmacological inhibition of Drp1, the morphology and metabolism of human and murine pancreatic cancer cells more closely resembled those of normal pancreatic epithelial cells.
Battle of the Bulge
Part of the Western Front of World War II
Commanders and leaders: 12th Army Group, 6th Army Group and 21st Army Group (Allied) against Army Group B (German)
Casualties and losses: approximately 3,000 civilians killed

The Battle of the Bulge, also known as the Ardennes Counteroffensive, was the last major German offensive campaign on the Western Front during World War II, and took place from 16 December 1944 to 25 January 1945. It was launched through the densely forested Ardennes region of Wallonia in eastern Belgium, northeast France, and Luxembourg, towards the end of the war in Europe. The offensive was intended to stop Allied use of the Belgian port of Antwerp and to split the Allied lines, allowing the Germans to encircle and destroy four Allied armies and force the Western Allies to negotiate a peace treaty in the Axis powers' favor. The Germans achieved a total surprise attack on the morning of 16 December 1944, due to a combination of Allied overconfidence, preoccupation with Allied offensive plans, and poor aerial reconnaissance due to bad weather. American forces bore the brunt of the attack and incurred their highest casualties of any operation during the war. The battle also severely depleted Germany's armored forces, and they were largely unable to replace them. German personnel and, later, Luftwaffe aircraft (in the concluding stages of the engagement) also sustained heavy losses. The Germans had attacked a weakly defended section of the Allied line, taking advantage of heavily overcast weather conditions that grounded the Allies' overwhelmingly superior air forces. Fierce resistance on the northern shoulder of the offensive, around Elsenborn Ridge, and in the south, around Bastogne, blocked German access to key roads to the northwest and west that they counted on for success. Columns of armor and infantry that were supposed to advance along parallel routes found themselves on the same roads.
This, and terrain that favored the defenders, threw the German advance behind schedule and allowed the Allies to reinforce the thinly placed troops. The furthest west the offensive reached was the village of Foy-Nôtre-Dame, southeast of Dinant, where it was stopped by the US 2nd Armored Division on 24 December 1944. Improved weather conditions from around 24 December permitted air attacks on German forces and supply lines, which sealed the failure of the offensive. On 26 December the lead element of Patton's US Third Army reached Bastogne from the south, ending the siege. Although the offensive was effectively broken by 27 December, when the trapped units of 2nd Panzer Division made two break-out attempts with only partial success, the battle continued for another month before the front line was effectively restored to its position prior to the attack. In the wake of the defeat, many experienced German units were left severely depleted of men and equipment, as survivors retreated to the defenses of the Siegfried Line. The Germans' initial attack involved 410,000 men; just over 1,400 tanks, tank destroyers, and assault guns; 2,600 artillery pieces; 1,600 anti-tank guns; and over 1,000 combat aircraft, as well as large numbers of other armored fighting vehicles (AFVs). These were reinforced a couple of weeks later, bringing the offensive's total strength to around 450,000 troops, and 1,500 tanks and assault guns. Between 63,222 and 98,000 of these men were killed, missing, wounded in action, or captured. For the Americans, out of a peak of 610,000 troops, 89,000 became casualties, of whom some 19,000 were killed. The "Bulge" was the largest and bloodiest single battle fought by the United States in World War II and the second deadliest campaign in American history.
Allied troops were fatigued by weeks of continuous combat, supply lines were stretched extremely thin, and supplies were dangerously depleted. General Dwight D. Eisenhower (the Supreme Allied Commander on the Western Front) and his staff chose to hold the Ardennes region, which was occupied by the U.S. First Army. The Allies chose to defend the Ardennes with as few troops as possible due to the favorable terrain (a densely wooded highland with deep river valleys and a rather thin road network) and limited Allied operational objectives in the area. They also had intelligence that the Wehrmacht was using the area across the German border as a rest-and-refit area for its troops.

Allied supply issues
The speed of the Allied advance coupled with an initial lack of deep-water ports presented the Allies with enormous supply problems. Over-the-beach supply operations using the Normandy landing areas, and the direct landing of ships on the beaches, were unable to meet operational needs. The only deep-water port the Allies had captured was Cherbourg, on the northern shore of the Cotentin peninsula and west of the original invasion beaches, but the Germans had thoroughly wrecked and mined the harbor before it could be taken. It took many months to rebuild its cargo-handling capability. The Allies captured the port of Antwerp intact in the first days of September, but it was not operational until 28 November. The estuary of the Scheldt river, which controlled access to the port, had to be cleared of both German troops and naval mines. These limitations led to differences between General Eisenhower and Field Marshal Bernard Montgomery, commander of the Anglo-Canadian 21st Army Group, over whether Montgomery or Lieutenant General Omar Bradley, commanding the U.S.
12th Army Group, in the south would get priority access to supplies. German forces remained in control of several major ports on the English Channel coast until the end of the war in May 1945. The Allies' efforts to destroy the French railway system prior to D-Day were successful. This destruction hampered the German response to the invasion, but it proved equally hampering to the Allies, as it took time to repair the rail network's tracks and bridges. A trucking system nicknamed the Red Ball Express brought supplies to front-line troops, but used up five times as much fuel to reach the front line near the Belgian border. By early October, the Allies had suspended major offensives to improve their supply lines and supply availability at the front. Montgomery and Bradley both pressed for priority delivery of supplies to their respective armies so they could continue their individual lines of advance and maintain pressure on the Germans, while Eisenhower preferred a broad-front strategy. He gave some priority to Montgomery's northern forces. This had the short-term goal of opening the urgently needed port of Antwerp and the long-term goal of capturing the Ruhr area, the biggest industrial area of Germany. With the Allies stalled, German Generalfeldmarschall (Field Marshal) Gerd von Rundstedt was able to reorganize the disrupted German armies into a coherent defensive force. Field Marshal Montgomery's Operation Market Garden achieved only some of its objectives, while its territorial gains left the Allied supply situation stretched further than before. In October, the First Canadian Army fought the Battle of the Scheldt, opening the port of Antwerp to shipping. As a result, by the end of October, the supply situation had eased somewhat. Despite a lull along the front after the Scheldt battles, the German situation remained dire. 
While operations continued in the autumn, notably the Lorraine Campaign, the Battle of Aachen and fighting in the Hürtgen Forest, the strategic situation in the west had changed little. The Allies were slowly pushing towards Germany, but no decisive breakthrough was achieved. The Western Allies already had 96 divisions at or near the front, with an estimated ten more divisions en route from the United Kingdom. Additional Allied airborne units remained in England. The Germans could field a total of 55 understrength divisions. Adolf Hitler first officially outlined his surprise counter-offensive to his astonished generals on 16 September 1944. The assault's ambitious goal was to pierce the thinly held lines of the U.S. First Army between Monschau and Wasserbillig with Army Group B (Model) by the end of the first day, get the armor through the Ardennes by the end of the second day, reach the Meuse between Liège and Dinant by the third day, and seize Antwerp and the western bank of the Scheldt estuary by the fourth day. Hitler initially promised his generals a total of 18 infantry and 12 armored or mechanized divisions "for planning purposes." The plan was to pull 13 infantry divisions, two parachute divisions and six panzer-type divisions from the Oberkommando der Wehrmacht combined German military strategic reserve. On the Eastern Front, the Soviets' Operation Bagration during the summer had destroyed much of Germany's Army Group Center (Heeresgruppe Mitte). The extremely swift operation ended only when the advancing Soviet Red Army forces outran their supplies. By November, it was clear that Soviet forces were preparing for a winter offensive. Meanwhile, the Allied air offensive of early 1944 had effectively grounded the Luftwaffe, leaving the German Army with little battlefield intelligence and no way to interdict Allied supplies.
The converse was equally damaging; daytime movement of German forces was rapidly noticed, and interdiction of supplies combined with the bombing of the Romanian oil fields starved Germany of oil and gasoline. This fuel shortage intensified after the Soviets overran those fields in the course of their August 1944 Jassy-Kishinev Offensive. One of the few advantages held by the German forces in November 1944 was that they were no longer defending all of Western Europe. Their front lines in the west had been considerably shortened by the Allied offensive and were much closer to the German heartland. This drastically reduced their supply problems despite Allied control of the air. Additionally, their extensive telephone and telegraph network meant that radios were no longer necessary for communications, which lessened the effectiveness of Allied Ultra intercepts. Nevertheless, some 40–50 messages per day were decrypted by Ultra. These recorded the quadrupling of German fighter forces, and a term used in an intercepted Luftwaffe message—Jägeraufmarsch (literally "Hunter Deployment")—implied preparation for an offensive operation. Ultra also picked up communiqués regarding extensive rail and road movements in the region, as well as orders that movements should be made on time.

Drafting the offensive
Hitler felt that his mobile reserves allowed him to mount one major offensive. Although he realized nothing significant could be accomplished on the Eastern Front, he still believed an offensive against the Western Allies, whom he considered militarily inferior to the Red Army, would have some chance of success. Hitler believed he could split the Allied forces and compel the Americans and British to settle for a separate peace, independent of the Soviet Union. Success in the west would give the Germans time to design and produce more advanced weapons (such as jet aircraft, new U-boat designs and super-heavy tanks) and permit the concentration of forces in the east.
After the war ended, this assessment was generally viewed as unrealistic, given Allied air superiority throughout Europe and their ability to continually disrupt German offensive operations. Given the reduced manpower of their land forces at the time, the Germans believed the best way to seize the initiative would be to attack in the West against the smaller Allied forces rather than against the vast Soviet armies. Even the encirclement and destruction of multiple Soviet armies, as in 1941, would still have left the Soviets with a numerical superiority. Hitler's plan called for a Blitzkrieg attack through the weakly defended Ardennes, mirroring the successful German offensive there during the Battle of France in 1940—aimed at splitting the armies along the U.S.–British lines and capturing Antwerp. The plan banked on unfavorable weather, including heavy fog and low-lying clouds, which would minimize the Allied air advantage. Hitler originally set the offensive for late November, before the anticipated start of the Russian winter offensive. The disputes between Montgomery and Bradley were well known, and Hitler hoped he could exploit this disunity. If the attack were to succeed in capturing Antwerp, four complete armies would be trapped without supplies behind German lines. Several senior German military officers, including Generalfeldmarschall Walter Model and Gerd von Rundstedt, expressed concern as to whether the goals of the offensive could be realized. Model and von Rundstedt both believed aiming for Antwerp was too ambitious, given Germany's scarce resources in late 1944. At the same time, they felt that maintaining a purely defensive posture (as had been the case since Normandy) would only delay defeat, not avert it. They thus developed alternative, less ambitious plans that did not aim to cross the Meuse River (in German and Dutch: Maas); Model's being Unternehmen Herbstnebel (Operation Autumn Mist) and von Rundstedt's Fall Martin ("Plan Martin").
The two field marshals combined their plans to present a joint "small solution" to Hitler. When they offered their alternative plans, Hitler would not listen. Rundstedt later testified that while he recognized the merit of Hitler's operational plan, he saw from the very first that "all, absolutely all conditions for the possible success of such an offensive were lacking." Model, commander of German Army Group B (Heeresgruppe B), and von Rundstedt, overall commander of the German Army Command in the West (OB West), were put in charge of carrying out the operation. In the west, supply problems had begun to significantly impede Allied operations, even though the opening of the port of Antwerp in late November improved the situation somewhat. The positions of the Allied armies stretched from southern France all the way north to the Netherlands. German planning for the counteroffensive rested on the premise that a successful strike against thinly manned stretches of the line would halt Allied advances on the entire Western Front. The Wehrmacht's code name for the offensive was Unternehmen Wacht am Rhein ("Operation Watch on the Rhine"), after the German patriotic hymn Die Wacht am Rhein, a name that deceptively implied the Germans would be adopting a defensive posture along the Western Front. The Germans also referred to it as the "Ardennenoffensive" (Ardennes Offensive) and the Rundstedt-Offensive, both names generally used in modern Germany. The French (and Belgian) name for the operation is Bataille des Ardennes (Battle of the Ardennes). The battle was militarily defined by the Allies as the Ardennes Counteroffensive, which included the German drive and the American effort to contain and later defeat it. The phrase Battle of the Bulge was coined by the contemporary press to describe the way the Allied front line bulged inward on wartime news maps.
While the Ardennes Counteroffensive is the correct term in Allied military language, the official Ardennes-Alsace campaign reached beyond the Ardennes battle region, and the most popular description in English-speaking countries remains simply the Battle of the Bulge.

The German plan
There is a popular impression that the chief trouble in the Ardennes is the lack of good roads. As anyone on the ground will agree, the Ardennes has a fairly good road system. It is not the lack of roads as much as the lack of almost anything else on which to move that matters.
— Theodore Draper, 84th Infantry Division in the Battle of the Ardennes, December 1944 – January 1945, 1945, p. 11 of 58
The OKW decided by mid-September, at Hitler's insistence, that the offensive would be mounted in the Ardennes, as was done in 1940. In 1940 German forces had passed through the Ardennes in three days before engaging the enemy, but the 1944 plan called for battle in the forest itself. The main forces were to advance westward to the Meuse River, then turn northwest for Antwerp and Brussels. The close terrain of the Ardennes would make rapid movement difficult, though open ground beyond the Meuse offered the prospect of a successful dash to the coast. Four armies were selected for the operation. Adolf Hitler personally selected for the counter-offensive on the northern shoulder of the western front the best troops available and officers he trusted. The lead role in the attack was given to the 6th Panzer Army, commanded by SS-Oberstgruppenführer Sepp Dietrich. It included the most experienced formation of the Waffen-SS: the 1st SS Panzer Division Leibstandarte Adolf Hitler. It also contained the 12th SS Panzer Division Hitlerjugend. They were given priority for supply and equipment and assigned the shortest route to the primary objective of the offensive, Antwerp, starting from the northernmost point on the intended battlefront, nearest the important road network hub of Monschau.
The Fifth Panzer Army under General Hasso von Manteuffel was assigned to the middle sector with the objective of capturing Brussels. The Seventh Army, under General Erich Brandenberger, was assigned to the southernmost sector, near the Luxembourgish city of Echternach, with the task of protecting the flank. This Army was made up of only four infantry divisions, with no large-scale armored formations to use as a spearhead unit. As a result, they made little progress throughout the battle. Also participating in a secondary role was the Fifteenth Army, under General Gustav-Adolf von Zangen. Recently brought back up to strength and re-equipped after heavy fighting during Operation Market Garden, it was located on the far north of the Ardennes battlefield and tasked with holding U.S. forces in place, with the possibility of launching its own attack given favorable conditions. For the offensive to be successful, four criteria were deemed critical: the attack had to be a complete surprise; the weather conditions had to be poor to neutralize Allied air superiority and the damage it could inflict on the German offensive and its supply lines; the progress had to be rapid—the Meuse River, halfway to Antwerp, had to be reached by day 4; and Allied fuel supplies would have to be captured intact along the way because the combined Wehrmacht forces were short on fuel. The General Staff estimated they only had enough fuel to cover one-third to one-half of the ground to Antwerp in heavy combat conditions. The plan originally called for just under 45 divisions, including a dozen panzer and Panzergrenadier divisions forming the armored spearhead and various infantry units to form a defensive line as the battle unfolded. By this time the German Army suffered from an acute manpower shortage, and the force had been reduced to around 30 divisions. Although it retained most of its armor, there were not enough infantry units because of the defensive needs in the East. 
These 30 newly rebuilt divisions used some of the last reserves of the German Army. Among them were Volksgrenadier ("People's Grenadier") units formed from a mix of battle-hardened veterans and recruits formerly regarded as too young, too old or too frail to fight. Training time, equipment and supplies were inadequate during the preparations. German fuel supplies were precarious—those materials and supplies that could not be directly transported by rail had to be horse-drawn to conserve fuel, and the mechanized and panzer divisions would depend heavily on captured fuel. As a result, the start of the offensive was delayed from 27 November to 16 December. Before the offensive the Allies were virtually blind to German troop movement. During the liberation of France, the extensive network of the French resistance had provided valuable intelligence about German dispositions. Once they reached the German border, this source dried up. In France, orders had been relayed within the German army using radio messages enciphered by the Enigma machine, and these could be picked up and decrypted by Allied code-breakers headquartered at Bletchley Park, to give the intelligence known as Ultra. In Germany such orders were typically transmitted using telephone and teleprinter, and a special radio silence order was imposed on all matters concerning the upcoming offensive. The major crackdown in the Wehrmacht after the 20 July plot to assassinate Hitler resulted in much tighter security and fewer leaks. The foggy autumn weather also prevented Allied reconnaissance aircraft from correctly assessing the ground situation. German units assembling in the area were even issued charcoal instead of wood for cooking fires to cut down on smoke and reduce chances of Allied observers deducing a troop buildup was underway. 
For these reasons Allied High Command considered the Ardennes a quiet sector, relying on assessments from their intelligence services that the Germans were unable to launch any major offensive operations this late in the war. What little intelligence they had led the Allies to believe precisely what the Germans wanted them to believe: that preparations were being carried out only for defensive, not offensive, operations. The Allies relied too much on Ultra, and not enough on human reconnaissance. In fact, because of the Germans' efforts, the Allies were led to believe that a new defensive army was being formed around Düsseldorf in the northern Rhineland, possibly to defend against a British attack. This was done by increasing the number of flak (Flugabwehrkanonen, i.e., anti-aircraft cannons) in the area and by the artificial multiplication of radio transmissions there. The Allies at this point thought the information was of no importance. All of this meant that the attack, when it came, completely surprised the Allied forces. Remarkably, the U.S. Third Army intelligence chief, Colonel Oscar Koch, the U.S. First Army intelligence chief and the SHAEF intelligence officer Brigadier General Kenneth Strong all correctly predicted the German offensive capability and intention to strike the U.S. VIII Corps area. These predictions were largely dismissed by the U.S. 12th Army Group. Strong had informed Bedell Smith in December of his suspicions. Bedell Smith sent Strong to warn Lieutenant General Omar Bradley, the commander of the 12th Army Group, of the danger. Bradley's response was succinct: "Let them come." Historian Patrick K. O'Donnell writes that on 8 December 1944 U.S. Rangers at great cost took Hill 400 during the Battle of the Hürtgen Forest. The next day GIs who relieved the Rangers reported a considerable movement of German troops inside the Ardennes in the enemy's rear, but no one in the chain of command connected the dots.
Because the Ardennes was considered a quiet sector, considerations of economy of force led it to be used as a training ground for new units and a rest area for units that had seen hard fighting. The U.S. units deployed in the Ardennes thus were a mixture of inexperienced troops (such as the raw U.S. 99th and 106th "Golden Lions" Divisions), and battle-hardened troops sent to that sector to recuperate (the 28th Infantry Division). Two major special operations were planned for the offensive. By October it was decided that Otto Skorzeny, the German SS-commando who had rescued the former Italian dictator Benito Mussolini, was to lead a task force of English-speaking German soldiers in "Operation Greif". These soldiers were to be dressed in American and British uniforms and wear dog tags taken from corpses and prisoners of war. Their job was to go behind American lines and change signposts, misdirect traffic, generally cause disruption and seize bridges across the Meuse River. By late November another ambitious special operation was added: Col. Friedrich August von der Heydte was to lead a Fallschirmjäger-Kampfgruppe (paratrooper combat group) in Operation Stösser, a night-time paratroop drop behind the Allied lines aimed at capturing a vital road junction near Malmedy. German intelligence had set 20 December as the expected date for the start of the upcoming Soviet offensive, aimed at crushing what was left of German resistance on the Eastern Front and thereby opening the way to Berlin. It was hoped that Soviet leader Stalin would delay the start of the operation once the German assault in the Ardennes had begun and wait for the outcome before continuing. After the 20 July attempt on Hitler's life, and the close advance of the Red Army which would seize the site on 27 January 1945, Hitler and his staff had been forced to abandon the Wolfsschanze headquarters in East Prussia, in which they had coordinated much of the fighting on the Eastern Front. 
After a brief visit to Berlin, Hitler traveled on his Führersonderzug ("the Führer's special train") to Giessen on 11 December, taking up residence in the Adlerhorst (eyrie) command complex, co-located with OB West's base at Kransberg Castle. Believing in omens and in the successes of his early-war campaigns planned at Kransberg, Hitler had chosen the site from which he had overseen the successful 1940 campaign against France and the Low Countries. Von Rundstedt set up his operational headquarters near Limburg, close enough for the generals and Panzer corps commanders who were to lead the attack to visit Adlerhorst on 11 December, traveling there in an SS-operated bus convoy. With the castle acting as overflow accommodation, the main party was settled into the Adlerhorst's Haus 2 command bunker, including Gen. Alfred Jodl, Gen. Wilhelm Keitel, Gen. Blumentritt, von Manteuffel and SS Gen. Joseph ("Sepp") Dietrich. In a personal conversation on 13 December, Friedrich von der Heydte, who had been put in charge of Operation Stösser, told Walter Model that he gave the operation less than a 10% chance of succeeding. Model told him it was necessary to make the attempt: "It must be done because this offensive is the last chance to conclude the war favorably." Initial German assault Situation on the Western Front as of 15 December 1944 On 16 December 1944 at 05:30, the Germans began the assault with a massive, 90-minute artillery barrage using 1,600 artillery pieces across a 130-kilometer (80 mi) front against the Allied troops facing the 6th Panzer Army. The Americans' initial impression was that this was the anticipated, localized counterattack resulting from the Allies' recent attack in the Wahlerscheid sector to the north, where the 2nd Division had knocked a sizable dent in the Siegfried Line. Heavy snowstorms engulfed parts of the Ardennes area. 
While having the effect of keeping Allied aircraft grounded, the weather also proved troublesome for the Germans because poor road conditions hampered their advance. Poor traffic control led to massive traffic jams and fuel shortages in forward units. In the center, von Manteuffel's Fifth Panzer Army attacked towards Bastogne and St. Vith, both road junctions of great strategic importance. In the south, Brandenberger's Seventh Army pushed towards Luxembourg in its efforts to secure the flank from Allied attacks. Units involved in initial assault Forces were deployed north to south as follows: northern sector, Monschau to Krewinkel; central sector, Roth to Gemünd; southern sector, Hochscheid to Mompach. Attack on the northern shoulder While the Siege of Bastogne is often credited as the central point where the German offensive was stopped, the battle for Elsenborn Ridge was actually the decisive component of the Battle of the Bulge: it stopped the advance of the best-equipped armored units of the German army and forced them onto unfavorable alternative routes that considerably slowed their advance. Best German divisions assigned The attack on Monschau, Höfen, Krinkelt-Rocherath, and then Elsenborn Ridge was led by units personally selected by Adolf Hitler. The 6th Panzer Army was given priority for supply and equipment and was assigned the shortest route to the ultimate objective of the offensive, Antwerp. The 6th Panzer Army included the elite of the Waffen-SS: four Panzer divisions and five infantry divisions in three corps. SS-Obersturmbannführer Joachim Peiper led Kampfgruppe Peiper, consisting of 4,800 men and 600 vehicles, which was charged with leading the main effort. Its newest and most powerful tank, the Tiger II heavy tank, consumed 3.8 liters (1 gal) of fuel to travel 800 m (0.5 mi), and the Germans had less than half the fuel they needed to reach Antwerp. 
German forces held up Sepp Dietrich led the Sixth Panzer Army on the northernmost attack route. The attacks by the Sixth Panzer Army's infantry units in the north fared badly because of unexpectedly fierce resistance by the U.S. 2nd and 99th Infantry Divisions. Kampfgruppe Peiper, at the head of Sepp Dietrich's Sixth Panzer Army, had been designated to take the Losheim-Losheimergraben road, a key route through the Losheim Gap, but it was closed by two collapsed overpasses that German engineers failed to repair during the first day. Peiper's forces were rerouted through Lanzerath. To preserve the quantity of armor available, the infantry of the 9th Fallschirmjäger Regiment, 3rd Fallschirmjäger Division, had been ordered to clear the village first. A single 18-man Intelligence and Reconnaissance Platoon from the 99th Infantry Division, along with four Forward Air Controllers, held up the battalion of about 500 German paratroopers until sunset, about 16:00, causing 92 casualties among the Germans. This created a bottleneck in the German advance. Kampfgruppe Peiper did not begin its advance until nearly 16:00, more than 16 hours behind schedule, and did not reach Buchholz Station until the early morning of 17 December. The Germans intended to control the twin villages of Rocherath-Krinkelt, which would clear a path to the high ground of Elsenborn Ridge. Occupation of this dominating terrain would allow control of the roads to the south and west and ensure supply to Kampfgruppe Peiper's armored task force. German troops advancing past abandoned American equipment Scene of the Malmedy massacre At 12:30 on 17 December, Kampfgruppe Peiper was near the hamlet of Baugnez, on the height halfway between the town of Malmedy and Ligneuville, when it encountered elements of the 285th Field Artillery Observation Battalion, U.S. 7th Armored Division. After a brief battle the lightly armed Americans surrendered. 
They were disarmed and, with some other Americans captured earlier (approximately 150 men), sent to stand in a field near the crossroads under light guard. About fifteen minutes after Peiper's advance guard passed through, the main body under the command of SS-Sturmbannführer Werner Pötschke arrived. The SS troopers suddenly opened fire on the prisoners. As soon as the firing began, the prisoners panicked. Most were shot where they stood, though some managed to flee. Accounts of the killing vary, but at least 84 of the POWs were murdered. A few survived, and news of the killings of prisoners of war spread through Allied lines. Following the end of the war, soldiers and officers of Kampfgruppe Peiper, including Joachim Peiper and SS general Sepp Dietrich, were tried for the incident at the Malmedy massacre trial. Kampfgruppe Peiper deflected southeast Driving to the south-east of Elsenborn, Kampfgruppe Peiper entered Honsfeld, where it encountered one of the 99th Division's rest centers, clogged with confused American troops. It quickly captured portions of the 3rd Battalion of the 394th Infantry Regiment, destroyed a number of American armored units and vehicles, and took several dozen prisoners, who were subsequently murdered. Peiper also captured 50,000 US gallons (190,000 l; 42,000 imp gal) of fuel for his vehicles. Peiper advanced north-west towards Büllingen, keeping to the plan to move west, unaware that had he turned north, he would have had an opportunity to flank and trap the entire 2nd and 99th Divisions. Instead, intent on driving west, Peiper turned south to detour around Hünningen, choosing a route designated Rollbahn D, as he had been given latitude to choose the best route west. To the north, the 277th Volksgrenadier Division attempted to break through the defending line of the U.S. 99th and the 2nd Infantry Divisions. 
The 12th SS Panzer Division, reinforced by additional infantry (Panzergrenadier and Volksgrenadier) divisions, took the key road junction at Losheimergraben just north of Lanzerath and attacked the twin villages of Rocherath and Krinkelt. Another, smaller massacre was committed in Wereth, Belgium, approximately 6.5 miles (10.5 km) northeast of Saint-Vith, on 17 December 1944. Eleven black American soldiers were tortured after surrendering and then shot by men of the 1st SS Panzer Division belonging to Schnellgruppe Knittel. The perpetrators were never punished for this crime, and recent research indicates that men from Third Company of the Reconnaissance Battalion were responsible. Germans advance west American soldiers of the 3rd Battalion, 119th Infantry Regiment are taken prisoner by members of Kampfgruppe Peiper in Stoumont, Belgium on 19 December 1944. An American soldier escorts a German crewman from his wrecked Panther tank during the Battle of Elsenborn Ridge. By the evening the spearhead had pushed north to engage the U.S. 99th Infantry Division, and Kampfgruppe Peiper arrived in front of Stavelot. Peiper's forces were already behind schedule because of the stiff American resistance and because, when the Americans fell back, their engineers blew up bridges and emptied fuel dumps. Peiper's unit was delayed and his vehicles denied critically needed fuel. They took 36 hours to advance from the Eifel region to Stavelot, while the same advance had required nine hours in 1940. Kampfgruppe Peiper attacked Stavelot on 18 December but was unable to capture the town before the Americans evacuated a large fuel depot. Three tanks attempted to take the bridge, but the lead vehicle was disabled by a mine. Following this, 60 grenadiers advanced but were stopped by concentrated American defensive fire. After a fierce tank battle the next day, the Germans finally entered the town when U.S. engineers failed to blow the bridge. 
Capitalizing on his success and not wanting to lose more time, Peiper rushed an advance group toward the vital bridge at Trois-Ponts, leaving the bulk of his strength in Stavelot. When they reached it at 11:30 on 18 December, retreating U.S. engineers blew it up. Peiper detoured north towards the villages of La Gleize and Cheneux. At Cheneux, the advance guard was attacked by American fighter-bombers, which destroyed two tanks and five halftracks, blocking the narrow road. The group began moving again at dusk, about 16:00, and was able to return to its original route at around 18:00. Of the two bridges remaining between Kampfgruppe Peiper and the Meuse, the bridge over the Lienne was blown by the Americans as the Germans approached. Peiper turned north and halted his forces in the woods between La Gleize and Stoumont. He learned that Stoumont was strongly held and that the Americans were bringing up strong reinforcements from Spa. To Peiper's south, the advance of Kampfgruppe Hansen had stalled. SS-Oberführer Mohnke ordered Schnellgruppe Knittel, which had been designated to follow Hansen, to instead move forward to support Peiper. SS-Sturmbannführer Knittel crossed the bridge at Stavelot around 19:00 against American forces trying to retake the town. Knittel pressed forward towards La Gleize, and shortly afterward the Americans recaptured Stavelot. Peiper and Knittel both faced the prospect of being cut off. German advance halted American M36 tank destroyers of the 703rd Tank Destroyer Battalion, armed with the 90 mm M3 gun and attached to the 82nd Airborne Division, move forward during heavy fog to stem the German spearhead near Werbomont, Belgium, 20 December 1944. Froidcourt castle near Stoumont in 2011 At dawn on 19 December, Peiper surprised the American defenders of Stoumont by sending infantry from the 2nd SS Panzergrenadier Regiment in an attack and a company of Fallschirmjäger to infiltrate their lines. He followed this with a Panzer attack, gaining the eastern edge of the town. 
An American tank battalion arrived but, after a two-hour tank battle, Peiper finally captured Stoumont at 10:30. Knittel joined up with Peiper and reported the Americans had recaptured Stavelot to their east. Peiper ordered Knittel to retake Stavelot. Assessing his own situation, he determined that his Kampfgruppe did not have sufficient fuel to cross the bridge west of Stoumont and continue his advance. He maintained his lines west of Stoumont for a while, until the evening of 19 December when he withdrew them to the village edge. On the same evening the U.S. 82nd Airborne Division under Maj. Gen. James Gavin arrived and deployed at La Gleize and along Peiper's planned route of advance. German efforts to reinforce Peiper were unsuccessful. Kampfgruppe Hansen was still struggling against bad road conditions and stiff American resistance on the southern route. Schnellgruppe Knittel was forced to disengage from the heights around Stavelot. Kampfgruppe Sandig, which had been ordered to take Stavelot, launched another attack without success. Sixth Panzer Army commander Sepp Dietrich ordered Hermann Prieß, commanding officer of the I SS Panzer Corps, to increase its efforts to back Peiper's battle group, but Prieß was unable to break through. Small units of the U.S. 2nd Battalion, 119th Infantry Regiment, 30th Infantry Division, attacked the dispersed units of Kampfgruppe Peiper on the morning of 21 December. They failed and were forced to withdraw, and a number were captured, including battalion commander Maj. Hal McCown. Peiper learned that his reinforcements had been directed to gather in La Gleize to his east, and he withdrew, leaving wounded Americans and Germans in the Froidcourt Castle. As he withdrew from Cheneux, American paratroopers from the 82nd Airborne Division engaged the Germans in fierce house-to-house fighting. 
The Americans shelled Kampfgruppe Peiper on 22 December, and although the Germans had run out of food and had virtually no fuel, they continued to fight. A Luftwaffe resupply mission went badly when SS-Brigadeführer Wilhelm Mohnke insisted the grid coordinates supplied by Peiper were wrong; the supplies were parachuted into American hands in Stoumont. In La Gleize, Peiper set up defenses while waiting for German relief. When the relief force was unable to penetrate the Allied lines, he decided to break out and return to the German lines on 23 December. The men of the Kampfgruppe were forced to abandon their vehicles and heavy equipment, although most of the 800 remaining troops were able to escape. The U.S. 99th Infantry Division, outnumbered five to one, inflicted casualties in the ratio of 18 to one. The division lost about 20% of its effective strength, including 465 killed and 2,524 evacuated due to wounds, injuries, fatigue, or trench foot. German losses were much higher. In the northern sector opposite the 99th, this included more than 4,000 deaths and the destruction of 60 tanks and big guns. Historian John S.D. Eisenhower wrote, "... the action of the 2nd and 99th Divisions on the northern shoulder could be considered the most decisive of the Ardennes campaign." The stiff American defense prevented the Germans from reaching the vast array of supplies near the Belgian cities of Liège and Spa and the road network west of the Elsenborn Ridge leading to the Meuse River. After more than 10 days of intense battle, the Germans pushed the Americans out of the villages but were unable to dislodge them from the ridge, where elements of the V Corps of the First U.S. Army prevented the German forces from reaching the road network to their west. Operation Stösser was a paratroop drop into the American rear in the High Fens (French: Hautes Fagnes; German: Hohes Venn; Dutch: Hoge Venen) area. The objective was the "Baraque Michel" crossroads. 
It was led by Oberst Friedrich August Freiherr von der Heydte, considered by Germans to be a hero of the Battle of Crete. It was the German paratroopers' only nighttime drop during World War II. Von der Heydte was given only eight days to prepare prior to the assault. He was not allowed to use his own regiment because its movement might alert the Allies to the impending counterattack. Instead, he was provided with a Kampfgruppe of 800 men. The II Parachute Corps was tasked with contributing 100 men from each of its regiments. In loyalty to their commander, 150 men from von der Heydte's own unit, the 6th Parachute Regiment, went against orders and joined him. They had little time to establish any unit cohesion or train together. The parachute drop was a complete failure. Von der Heydte ended up with a total of around 300 troops. Too small and too weak to counter the Allies, they abandoned plans to take the crossroads and instead converted the mission to reconnaissance. With only enough ammunition for a single fight, they withdrew towards Germany and attacked the rear of the American lines. Only about 100 of his weary men finally reached the German rear. Attack in the center Hasso von Manteuffel led the Fifth Panzer Army on the middle attack route. The Germans fared better in the center (the 32 km (20 mi) Schnee Eifel sector) as the Fifth Panzer Army attacked positions held by the U.S. 28th and 106th Infantry Divisions. The Germans lacked the overwhelming strength that had been deployed in the north, but still possessed a marked numerical and material superiority over the very thinly spread 28th and 106th divisions. They succeeded in surrounding two largely intact regiments (422nd and 423rd) of the 106th Division in a pincer movement and forced their surrender, a tribute to the way Manteuffel's new tactics had been applied. One of those wounded and captured was Lieutenant Donald Prell of the Anti-Tank Company of the 422nd Infantry, 106th Division. The official U.S. 
Army history states: "At least seven thousand [men] were lost here and the figure probably is closer to eight or nine thousand. The amount lost in arms and equipment, of course, was very substantial. The Schnee Eifel battle, therefore, represents the most serious reverse suffered by American arms during the operations of 1944–45 in the European theater." Battle for St. Vith In the center, the town of St. Vith, a vital road junction, presented the main challenge for both von Manteuffel's and Dietrich's forces. The defenders, led by the 7th Armored Division, included the remaining regiment of the 106th U.S. Infantry Division, with elements of the 9th Armored Division and 28th U.S. Infantry Division. These units, which operated under the command of Generals Robert W. Hasbrouck (7th Armored) and Alan W. Jones (106th Infantry), successfully resisted the German attacks, significantly slowing the German advance. On Montgomery's orders, St. Vith was evacuated on 21 December; U.S. troops fell back to entrenched positions in the area, presenting an imposing obstacle to a successful German advance. By 23 December, as the Germans shattered their flanks, the defenders' position became untenable and U.S. troops were ordered to retreat west of the Salm River. Since the German plan called for the capture of St. Vith by 18:00 on 17 December, the prolonged action in and around it dealt a major setback to their timetable. Meuse River bridges British Sherman "Firefly" tank in Namur on the Meuse River, December 1944 To protect the river crossings on the Meuse at Givet, Dinant and Namur, Montgomery ordered those few units available to hold the bridges on 19 December. The result was a hastily assembled force including rear-echelon troops, military police and Army Air Force personnel. The British 29th Armoured Brigade of the British 11th Armoured Division, which had turned in its tanks for re-equipping, was told to take back its tanks and head to the area. 
British XXX Corps was significantly reinforced for this effort. Units of the corps which fought in the Ardennes were the 51st (Highland) and 53rd (Welsh) Infantry Divisions, the British 6th Airborne Division, the 29th and 33rd Armoured Brigades, and the 34th Tank Brigade. Unlike the German forces on the northern and southern shoulders, who were experiencing great difficulties, the German advance in the center gained considerable ground. The Fifth Panzer Army was spearheaded by the 2nd Panzer Division, while the Panzer Lehr Division (Armored Training Division) came up from the south, leaving Bastogne to other units. The Ourthe River was crossed at Ourtheville on 21 December. Lack of fuel held up the advance for one day, but on 23 December the offensive was resumed towards the two small towns of Hargimont and Marche-en-Famenne. Hargimont was captured the same day, but Marche-en-Famenne was strongly defended by the American 84th Division. Gen. von Lüttwitz, commander of the XLVII Panzer Corps, ordered the division to turn westwards towards Dinant and the Meuse, leaving only a blocking force at Marche-en-Famenne. Although advancing only in a narrow corridor, the 2nd Panzer Division was still making rapid headway, leading to jubilation in Berlin. Headquarters now released the 9th Panzer Division to the Fifth Panzer Army; it was deployed at Marche. On 22/23 December German forces reached the woods of Foy-Nôtre-Dame, only a few kilometers short of Dinant. The narrow corridor caused considerable difficulties, as constant flanking attacks threatened the division. On 24 December, German forces made their furthest penetration west. The Panzer Lehr Division took the town of Celles, while a bit farther north, parts of the 2nd Panzer Division were in sight of the Meuse near Dinant at Foy-Nôtre-Dame. A hastily assembled Allied blocking force on the east side of the river prevented the German probing forces from approaching the Dinant bridge. 
By late Christmas Eve the advance in this sector was stopped, as Allied forces threatened the narrow corridor held by the 2nd Panzer Division. Operation Greif and Operation Währung For Operation Greif ("Griffin"), Otto Skorzeny successfully infiltrated a small part of his battalion of English-speaking Germans disguised in American uniforms behind the Allied lines. Although they failed to take the vital bridges over the Meuse, their presence caused confusion out of all proportion to their military activities, and rumors spread quickly. Even General George Patton was alarmed and, on 17 December, described the situation to General Dwight Eisenhower as "Krauts ... speaking perfect English ... raising hell, cutting wires, turning road signs around, spooking whole divisions, and shoving a bulge into our defenses." Checkpoints were set up all over the Allied rear, greatly slowing the movement of soldiers and equipment. American MPs at these checkpoints grilled troops on things that every American was expected to know, like the identity of Mickey Mouse's girlfriend, baseball scores, or the capital of a particular U.S. state—though many could not remember or did not know. General Omar Bradley was briefly detained when he correctly identified Springfield as the capital of Illinois because the American MP who questioned him mistakenly believed the capital was Chicago. The tightened security nonetheless made things very hard for the German infiltrators, and a number of them were captured. Even during interrogation, they continued their goal of spreading disinformation; when asked about their mission, some of them claimed they had been told to go to Paris to either kill or capture General Dwight Eisenhower. Security around the general was greatly increased, and Eisenhower was confined to his headquarters. Because Skorzeny's men were captured in American uniforms, they were executed as spies. 
Executing enemy personnel captured in friendly uniform was the standard practice of every army at the time, justified as a necessary protection against the grave dangers of enemy spying. Skorzeny said that he was told by German legal experts that as long as he did not order his men to fight in combat while wearing American uniforms, such a tactic was a legitimate ruse of war. Skorzeny and his men were fully aware of their likely fate, and most wore their German uniforms underneath their American ones in case of capture. Skorzeny was tried by an American military tribunal in 1947 at the Dachau Trials for allegedly violating the laws of war stemming from his leadership of Operation Greif, but was acquitted. He later moved to Spain and South America. Operation Währung was carried out by a small number of German agents who infiltrated Allied lines in American uniforms. These agents were tasked with using an existing Nazi intelligence network to bribe rail and port workers to disrupt Allied supply operations. The operation was a failure. Attack in the south Erich Brandenberger led the Seventh Army on the southernmost attack route. Belgian civilians killed by German units during the offensive Further south on Manteuffel's front, the main thrust was delivered by all attacking divisions crossing the River Our, then increasing the pressure on the key road centers of St. Vith and Bastogne. The more experienced U.S. 28th Infantry Division put up a much more dogged defense than the inexperienced soldiers of the 106th Infantry Division. The 112th Infantry Regiment (the most northerly of the 28th Division's regiments), holding a continuous front east of the Our, kept German troops from seizing and using the Our River bridges around Ouren for two days, before withdrawing progressively westwards. The 109th and 110th Regiments of the 28th Division fared worse, as they were spread so thinly that their positions were easily bypassed. 
Both offered stubborn resistance in the face of superior forces and threw the German schedule off by several days. The 110th's situation was by far the worst, as it was responsible for an 18-kilometer (11 mi) front while its 2nd Battalion was withheld as the divisional reserve. Panzer columns took the outlying villages and widely separated strong points in bitter fighting, and advanced to points near Bastogne within four days. The struggle for the villages and American strong points, plus transport confusion on the German side, slowed the attack sufficiently to allow the 101st Airborne Division (reinforced by elements from the 9th and 10th Armored Divisions) to reach Bastogne by truck on the morning of 19 December. The fierce defense of Bastogne, in which American paratroopers particularly distinguished themselves, made it impossible for the Germans to take the town with its important road junctions. The panzer columns swung past on either side, cutting off Bastogne on 20 December but failing to secure the vital crossroads. In the extreme south, Brandenberger's three infantry divisions were checked by divisions of the U.S. VIII Corps after an advance of 6.4 km (4 mi); that front was then firmly held. Only the 5th Parachute Division of Brandenberger's command was able to thrust forward 19 km (12 mi) on the inner flank to partially fulfill its assigned role. Eisenhower and his principal commanders realized by 17 December that the fighting in the Ardennes was a major offensive and not a local counterattack, and they ordered vast reinforcements to the area. Within a week 250,000 troops had been sent. General Gavin of the 82nd Airborne Division arrived on the scene first and ordered the 101st to hold Bastogne while the 82nd would take the more difficult task of facing the SS Panzer Divisions; it was also thrown into the battle north of the bulge, near Elsenborn Ridge. Siege of Bastogne U.S. POWs on 22 December 1944 Letter to 101st soldiers, containing Gen. 
McAuliffe's "Nuts!" response to the Germans A German machine gunner marching through the Ardennes in December 1944 Senior Allied commanders met in a bunker in Verdun on 19 December. By this time, the town of Bastogne and its network of 11 hard-topped roads leading through the widely forested mountainous terrain with deep river valleys and boggy mud of the Ardennes region was under severe threat. Bastogne had previously been the site of the VIII Corps headquarters. Two separate westbound German columns that were to have bypassed the town to the south and north, the 2nd Panzer Division and Panzer-Lehr-Division of XLVII Panzer Corps, as well as the Corps' infantry (26th Volksgrenadier Division), coming due west had been engaged and much slowed and frustrated in outlying battles at defensive positions up to sixteen kilometers (10 mi) from the town proper, but these defensive positions were gradually being forced back onto and into the hasty defenses built within the municipality. Moreover, the sole corridor that was open (to the southeast) was threatened and it had been sporadically closed as the front shifted, and there was expectation that it would be completely closed sooner than later, given the strong likelihood that the town would soon be surrounded. Gen. Eisenhower, realizing that the Allies could destroy German forces much more easily when they were out in the open and on the offensive than if they were on the defensive, told his generals, "The present situation is to be regarded as one of opportunity for us and not of disaster. There will be only cheerful faces at this table." Patton, realizing what Eisenhower implied, responded, "Hell, let's have the guts to let the bastards go all the way to Paris. Then, we'll really cut 'em off and chew 'em up." Eisenhower, after saying he was not that optimistic, asked Patton how long it would take to turn his Third Army, located in northeastern France, north to counterattack. 
To the disbelief of the other generals present, Patton replied that he could attack with two divisions within 48 hours. Unknown to the other officers present, before he left Patton had ordered his staff to prepare three contingency plans for a northward turn in at least corps strength. By the time Eisenhower asked him how long it would take, the movement was already underway. On 20 December, Eisenhower removed the First and Ninth U.S. Armies from Gen. Bradley's 12th Army Group and placed them under Montgomery's 21st Army Group. By 21 December the Germans had surrounded Bastogne, which was defended by the 101st Airborne Division, the all-African-American 969th Artillery Battalion, and Combat Command B of the 10th Armored Division. Conditions inside the perimeter were tough—most of the medical supplies and medical personnel had been captured. Food was scarce, and by 22 December artillery ammunition was restricted to 10 rounds per gun per day. The weather cleared the next day and supplies (primarily ammunition) were dropped over four of the next five days. Despite determined German attacks the perimeter held. The German commander, Generalleutnant (Lt. Gen.) Heinrich Freiherr von Lüttwitz, requested Bastogne's surrender. When Brig. Gen. Anthony McAuliffe, acting commander of the 101st, was told of the German demand to surrender, in frustration he responded, "Nuts!" After he turned to other pressing issues, his staff reminded him that they should reply to the German demand. One officer, Lt. Col. Harry Kinnard, noted that McAuliffe's initial reply would be "tough to beat." Thus McAuliffe wrote on the paper, which was typed up and delivered to the Germans, the line that made him famous and boosted his troops' morale: "NUTS!" That reply had to be explained, both to the Germans and to non-American Allies. 
Both the 2nd Panzer and Panzer-Lehr Divisions moved forward from Bastogne after 21 December, leaving only Panzer-Lehr's 901st Regiment to assist the 26th Volksgrenadier Division in attempting to capture the crossroads. The 26th VG received one Panzergrenadier regiment from the 15th Panzergrenadier Division on Christmas Eve for its main assault the next day. Because it lacked sufficient troops, and because those of the 26th VG Division were near exhaustion, the XLVII Panzerkorps concentrated its assault on several individual points on the west side of the perimeter in sequence rather than launching one simultaneous attack on all sides. The assault, despite initial success by its tanks in penetrating the American line, was defeated and all the tanks destroyed. The following day, 26 December, the spearhead of Gen. Patton's 4th Armored Division, supplemented by the 26th (Yankee) Infantry Division, broke through and opened a corridor to Bastogne. On 23 December the weather conditions had started improving, allowing the Allied air forces to attack. They launched devastating bombing raids on German supply points in the rear, and P-47 Thunderbolts began attacking German troops on the roads. The air forces also helped the defenders of Bastogne, dropping much-needed supplies: medicine, food, blankets, and ammunition. A team of volunteer surgeons flew in by military glider and began operating in a tool room. By 24 December the German advance was effectively stalled short of the Meuse. Units of the British XXX Corps were holding the bridges at Dinant, Givet, and Namur, and U.S. units were about to take over. The Germans had outrun their supply lines, and shortages of fuel and ammunition were becoming critical. Up to this point German losses had been light, notably in armor, with the exception of Peiper's losses.
On the evening of 24 December, General Hasso von Manteuffel recommended to Hitler's military adjutant a halt to all offensive operations and a withdrawal back to the Westwall (literally "Western Rampart"). Hitler rejected this. Disagreement and confusion at Allied headquarters prevented a strong response, throwing away the opportunity for decisive action. In the center, on Christmas Eve, the 2nd Armored Division attempted to attack and cut off the spearheads of the 2nd Panzer Division at the Meuse, while units of the 4th Cavalry Group kept the 9th Panzer Division at Marche busy. As a result, parts of the 2nd Panzer Division were cut off. The Panzer-Lehr Division tried to relieve them but was only partially successful, as the perimeter held. For the next two days the perimeter was strengthened. On 26 and 27 December the trapped units of the 2nd Panzer Division made two break-out attempts, again with only partial success, and large quantities of equipment fell into Allied hands. Further Allied pressure out of Marche finally led the German command to conclude that no further offensive action towards the Meuse was possible. In the south, Patton's Third Army was battling to relieve Bastogne. At 16:50 on 26 December, the lead element, Company D, 37th Tank Battalion of the 4th Armored Division, reached Bastogne, ending the siege. On 1 January, in an attempt to keep the offensive going, the Germans launched two new operations. At 09:15, the Luftwaffe launched Unternehmen Bodenplatte (Operation Baseplate), a major campaign against Allied airfields in the Low Countries (today's Benelux countries). Hundreds of planes attacked Allied airfields, destroying or severely damaging some 465 aircraft.
The Luftwaffe lost 277 planes: 62 to Allied fighters and 172 mostly to the unexpectedly high number of Allied flak guns, which had been set up to protect against German V-1 flying bomb attacks and were firing proximity-fused shells, but also to friendly fire from German flak guns that had not been informed of the pending large-scale air operation. The Germans suffered especially heavy losses at the airfield designated Y-29, losing 40 of their own planes while damaging only four American ones. While the Allies recovered from their losses within days, the operation left the Luftwaffe ineffective for the remainder of the war. On the same day, German Army Group G (Heeresgruppe G) and Army Group Upper Rhine (Heeresgruppe Oberrhein) launched a major offensive against the thinly stretched, 110-kilometer (70 mi) line of the Seventh U.S. Army. This offensive, known as Unternehmen Nordwind (Operation North Wind), was the last major German offensive of the war on the Western Front. The weakened Seventh Army had, at Eisenhower's orders, sent troops, equipment, and supplies north to reinforce the American armies in the Ardennes, and the new offensive left it in dire straits. By 15 January Seventh Army's VI Corps was fighting on three sides in Alsace. With casualties mounting, and running short of replacements, tanks, ammunition, and supplies, Seventh Army was forced to withdraw to defensive positions on the south bank of the Moder River on 21 January. The German offensive drew to a close on 25 January. In the bitter, desperate fighting of Operation Nordwind, VI Corps, which had borne the brunt of the fighting, suffered a total of 14,716 casualties; the total for Seventh Army for January was 11,609, including at least 9,000 wounded. The First, Third, and Seventh Armies suffered a total of 17,000 hospitalized from the cold.
Erasing the Bulge—The Allied counterattack, 26 December – 25 January While the German offensive had ground to a halt during January 1945, the Germans still controlled a dangerous salient in the Allied line. Patton's Third Army in the south, centered around Bastogne, would attack north; Montgomery's forces in the north would strike south; and the two forces planned to meet at Houffalize. The temperature that January was extremely low, requiring weapons to be constantly maintained and truck engines to be run every half-hour to keep their oil from congealing. The offensive went forward regardless. Eisenhower wanted Montgomery to launch the counteroffensive on 1 January, with the aim of meeting Patton's advancing Third Army and cutting off most of the attacking Germans, trapping them in a pocket. Montgomery, refusing to risk underprepared infantry in a snowstorm for a strategically unimportant area, did not attack until 3 January, by which time substantial numbers of German troops had already managed to fall back successfully, though at the cost of most of their heavy equipment. At the start of the offensive, the First and Third U.S. Armies were separated by about 40 km (25 mi). American progress in the south was also restricted to about a kilometer (a little over half a mile) per day. On 2 January, the Tiger IIs of German Heavy Tank Battalion 506 supported an attack by the 12th SS Panzer Division Hitlerjugend against U.S. positions near Wardin and knocked out 15 Sherman tanks. The majority of the German force executed a successful fighting withdrawal and escaped the battle area, although the fuel situation had become so dire that most of the German armor had to be abandoned. On 7 January 1945 Hitler agreed to withdraw all forces from the Ardennes, including the SS panzer divisions, thus ending all offensive operations. On 14 January, Hitler granted Gerd von Rundstedt permission to carry out a fairly drastic retreat in the Ardennes region.
Houffalize and the Bastogne front would be abandoned. Considerable fighting went on for another three weeks; St. Vith was recaptured by the Americans on 23 January, and the last German units participating in the offensive did not return to their start line until 25 January. Strategy and leadership The plan and timing for the Ardennes attack sprang from the mind of Adolf Hitler. He believed a critical fault line existed between the British and American military commands, and that a heavy blow on the Western Front would shatter this alliance. Planning for the "Watch on the Rhine" offensive emphasized secrecy and the commitment of overwhelming force. Because orders were sent over landline communications within Germany or carried by motorized runners, and because of draconian threats from Hitler, the timing and mass of the attack went undetected by ULTRA codebreakers and achieved complete surprise. When selecting leadership for the attack, Hitler felt that the implementation of this decisive blow should be entrusted to his own Nazi Party army, the Waffen-SS. Ever since German regular Army officers had attempted to assassinate him, he had increasingly trusted only the SS and its armed branch, the Waffen-SS. After the invasion of Normandy, the SS armored units had suffered significant leadership casualties, including SS-Gruppenführer (Major General) Kurt Meyer, commander of the 12th SS Panzer Division, captured by Belgian partisans on 6 September 1944. The tactical efficiency of these units was somewhat reduced. The strong right flank of the assault was therefore composed mostly of SS divisions under the command of "Sepp" (Joseph) Dietrich, a fanatical political disciple of Hitler and a loyal follower from the early days of the rise of National Socialism in Germany. The leadership composition of the Sixth Panzer Army had a distinctly political nature.
None of the German field commanders entrusted with planning and executing the offensive believed it was possible to capture Antwerp. Even Sepp Dietrich, commanding the strongest arm of the attack, felt that the Ardennes was a poor area for armored warfare and that the inexperienced and badly equipped Volksgrenadier units would clog the roads the tanks needed for their rapid advance. In this Dietrich was proved correct: the horse-drawn artillery and rocket units were a significant obstacle to the tanks. Other than making futile objections to Hitler in private, he generally stayed out of the planning for the offensive. Model and Manteuffel, the technical experts from the Eastern Front, took the view that a limited offensive aimed at surrounding and crushing the American First Army was the best that could be hoped for. These revisions shared the same fate as Dietrich's objections. In the end, the headlong drive on Elsenborn Ridge received no support from German units that had already bypassed the ridge. The decision to stop the attacks on the twin villages and shift the axis of attack southward to the hamlet of Domäne Bütgenbach was also made by Dietrich. This decision played into American hands, as Robertson had already decided to abandon the villages. The staff planning and organization of the attack were well done; most of the units committed to the offensive reached their jump-off points undetected and were well organized and supplied for the attack. Allied high-command controversy One of the fault lines between the British and American high commands was General Dwight D. Eisenhower's commitment to a broad-front advance. This view was opposed by the British Chief of the Imperial General Staff, Field Marshal Alan Brooke, as well as by Field Marshal Montgomery, who promoted a rapid advance on a narrow front with the other Allied armies in reserve.
British Field Marshal Bernard Montgomery differed with the U.S. command over how to respond to the German attack, and his public pronouncements to that effect caused tension in the American high command. Major General Freddie de Guingand, Chief of Staff of Montgomery's 21st Army Group, rose to the occasion and personally smoothed over the disagreements on 30 December. As the Ardennes crisis developed, at 10:30 a.m. on 20 December, Eisenhower telephoned Montgomery and ordered him to assume command of the American First (Hodges) and Ninth (Simpson) Armies, which until then had been under Bradley's overall command. The change was ordered because the northern armies had lost all communications not only with Bradley, who was based in Luxembourg City, and with the U.S. command structure, but also with adjacent units. Describing the situation as he found it on 20 December, Montgomery wrote: The First Army was fighting desperately. Having given orders to Dempsey and Crerar, who arrived for a conference at 11 am, I left at noon for the H.Q. of the First Army, where I had instructed Simpson to meet me. I found the northern flank of the bulge was very disorganized. Ninth Army had two corps and three divisions; First Army had three corps and fifteen divisions. Neither Army Commander had seen Bradley or any senior member of his staff since the battle began, and they had no directive on which to work. The first thing to do was to see the battle on the northern flank as one whole, to ensure the vital areas were held securely, and to create reserves for counter-attack. I embarked on these measures: I put British troops under command of the Ninth Army to fight alongside American soldiers, and made that Army take over some of the First Army Front. I positioned British troops as reserves behind the First and Ninth Armies until such time as American reserves could be created. Slowly but surely the situation was held, and then finally restored.
Similar action was taken on the southern flank of the bulge by Bradley, with the Third Army. Because of the news blackout imposed on the 16th, the change of command to Montgomery did not become known to the outside world until SHAEF made a public announcement, which stated that the change in command was "absolutely nothing to do with failure on the part of the three American generals". This resulted in headlines in British newspapers. The story was also covered in Stars and Stripes, and for the first time the British contribution to the fighting was mentioned. Montgomery asked Churchill if he could give a press conference to explain the situation. Though some of his staff were concerned about the impression it would give, the conference had been cleared by Alan Brooke, the CIGS, who was possibly the only person to whom Montgomery would listen. On the same day as Hitler's withdrawal order of 7 January, Montgomery held his press conference at Zonhoven. Montgomery began by giving credit to the "courage and good fighting quality" of the American troops, characterizing a typical American as a "very brave fighting man who has that tenacity in battle which makes a great soldier". He went on to talk about the necessity of Allied teamwork and praised Eisenhower, stating, "Teamwork wins battles and battle victories win wars. On our team, the captain is General Ike." Montgomery then described the course of the battle for half an hour. Coming to the end of his speech he said he had "employed the whole available power of the British Group of Armies; this power was brought into play very gradually ... Finally it was put into battle with a bang ... you thus have the picture of British troops fighting on both sides of the Americans who have suffered a hard blow." He stated that he (i.e., the German) was "headed off ... seen off ... and ... written off...
The battle has been the most interesting, I think possibly one of the most interesting and tricky battles I have ever handled." Despite his positive remarks about American soldiers, the overall impression Montgomery gave, at least to the American military leadership, was that he had taken the lion's share of credit for the success of the campaign and had been responsible for rescuing the besieged Americans. His comments were interpreted as self-promoting, particularly his claim that when the situation "began to deteriorate," Eisenhower had placed him in command in the north. Patton and Eisenhower both felt this misrepresented the relative parts played by the British and Americans in the Ardennes (for every British soldier there were thirty to forty Americans in the fight) and belittled the roles of Bradley, Patton, and other American commanders. In the context of Patton's and Montgomery's well-known antipathy, Montgomery's failure to mention the contribution of any American general besides Eisenhower was seen as insulting. Indeed, Bradley and his American commanders had already begun their counterattack by the time Montgomery was given command of the First and Ninth U.S. Armies. Focusing exclusively on his own generalship, Montgomery went on to say that he thought the counteroffensive had gone very well, but he did not explain the reason for his delayed attack of 3 January; he later attributed the delay to needing more time for preparation on the northern front. According to Winston Churchill, the attack from the south under Patton was steady but slow and involved heavy losses, and Montgomery was trying to avoid that situation. Many American officers had already grown to dislike Montgomery, whom they saw as an overly cautious commander, arrogant, and all too willing to say uncharitable things about the Americans.
The British Prime Minister Winston Churchill found it necessary in a speech to Parliament to explicitly state that the Battle of the Bulge was purely an American victory. Montgomery subsequently recognized his error and later wrote: "Not only was it probably a mistake to have held this conference at all in the sensitive state of feeling at the time, but what I said was skilfully distorted by the enemy. Chester Wilmot explained that his dispatch to the BBC about it was intercepted by the German wireless, re-written to give it an anti-American bias, and then broadcast by Arnhem Radio, which was then in Goebbels' hands. Monitored at Bradley's HQ, this broadcast was mistaken for a BBC transmission and it was this twisted text that started the uproar." Montgomery later said, "Distorted or not, I think now that I should never have held that press conference. So great were the feelings against me on the part of the American generals that whatever I said was bound to be wrong. I should therefore have said nothing." Eisenhower commented in his own memoirs: "I doubt if Montgomery ever came to realize how resentful some American commanders were. They believed he had belittled them—and they were not slow to voice reciprocal scorn and contempt." Bradley and Patton both threatened to resign unless Montgomery's command was changed. Eisenhower, encouraged by his British deputy Arthur Tedder, had decided to sack Montgomery. Intervention by Montgomery's and Eisenhower's Chiefs of Staff, Maj. Gen. Freddie de Guingand, and Lt. Gen. Walter Bedell Smith, moved Eisenhower to reconsider and allowed Montgomery to apologize. The German commander of the 5th Panzer Army, Hasso von Manteuffel said of Montgomery's leadership: The operations of the American 1st Army had developed into a series of individual holding actions. Montgomery's contribution to restoring the situation was that he turned a series of isolated actions into a coherent battle fought according to a clear and definite plan. 
It was his refusal to engage in premature and piecemeal counter-attacks which enabled the Americans to gather their reserves and frustrate the German attempts to extend their breakthrough. Casualty estimates for the battle vary widely. According to the U.S. Department of Defense, American forces suffered 89,500 casualties, including 19,000 killed, 47,500 wounded, and 23,000 missing. An official report by the United States Department of the Army lists 105,102 casualties, including 19,246 killed, 62,489 wounded, and 26,612 captured or missing, though this figure incorporates losses suffered during the German offensive in Alsace, Operation Nordwind. A preliminary Army report restricted to the First and Third U.S. Armies listed 75,000 casualties (8,400 killed, 46,000 wounded, and 21,000 missing). The Battle of the Bulge was the bloodiest battle for U.S. forces in World War II. British casualties totaled 1,400, with 200 deaths. The German High Command estimated that it lost between 81,834 and 98,024 men in the Bulge between 16 December 1944 and 28 January 1945; the accepted figure is 81,834, of which 12,652 were killed, 38,600 were wounded, and 30,582 were missing. Allied estimates of German casualties range from 81,000 to 103,000; some authors have put them as high as 125,000. German historian Hermann Jung lists 67,675 casualties from 16 December 1944 to late January 1945 for the three German armies that participated in the offensive. The United States Army Center of Military History's official numbers are 75,000 American casualties and 100,000 German casualties. German armored losses to all causes were between 527 and 554, with 324 tanks lost in combat. Of the German write-offs, 16–20 were Tigers, 191–194 Panthers, 141–158 Panzer IVs, and 179–182 were tank destroyers and assault guns. The Germans lost an additional 5,000 soft-skinned and armored vehicles.
US losses alone over the same period were similarly heavy, totaling 733 tanks and tank destroyers. The outcome of the Ardennes offensive demonstrated that the Allied armored forces were capable of taking on the Panzerwaffe on equal terms. Although the Germans managed to begin their offensive with complete surprise and enjoyed some initial successes, they were not able to seize the initiative on the Western Front. While the German command did not reach its goals, the Ardennes operation inflicted heavy losses and set back the Allied invasion of Germany by several weeks. The Allied high command had planned to resume the offensive by early January 1945, after the wet-season rains and severe frosts, but those plans had to be postponed until 29 January 1945 because of the unexpected changes at the front. The Allies pressed their advantage following the battle. By the beginning of February 1945, the lines were roughly where they had been in December 1944. In early February, the Allies launched an attack all along the Western Front: in the north under Montgomery toward Aachen; in the center under Courtney Hodges; and in the south under Patton. The German losses in the battle were especially critical: their last reserves were now gone, the Luftwaffe had been shattered, and the remaining forces throughout the West were being pushed back to defend the Siegfried Line. In response to the early success of the offensive, on 6 January Churchill contacted Stalin to request that the Soviets put pressure on the Germans on the Eastern Front. On 12 January, the Soviets began the massive Vistula–Oder Offensive, originally planned for 20 January. It had been brought forward because meteorological reports warned of a thaw later in the month and the tanks needed hard ground for the offensive; the Red Army's advance was also assisted by the redeployment of two panzer armies (the 5th and 6th) for the Ardennes attack.
During World War II, most black U.S. soldiers still served only in maintenance or service positions, or in segregated units. Because of troop shortages during the Battle of the Bulge, Eisenhower decided to integrate the service for the first time, an important step toward a desegregated United States military. More than 2,000 black soldiers had volunteered to go to the front. A total of 708 black Americans were killed in combat during World War II. The Germans officially referred to the offensive as Unternehmen Wacht am Rhein ("Operation Watch on the Rhine"), while the Allies designated it the Ardennes Counteroffensive. The phrase "Battle of the Bulge" was coined by the contemporary press to describe the bulge in the German front lines on wartime news maps, and it became the most widely used name for the battle. The offensive was planned by the German forces with the utmost secrecy, with minimal radio traffic and movements of troops and equipment under cover of darkness. Intercepted German communications indicating substantial German offensive preparations were not acted upon by the Allies. The battle around Bastogne received a great deal of media attention because in early December 1944 it had been a rest and recreation area for many war correspondents. The rapid advance by the German forces who surrounded the town, the spectacular resupply operations via parachute and glider, and the fast action of General Patton's Third U.S. Army were all featured in newspaper articles and on radio and captured the public's imagination; by contrast, there were no correspondents in the areas of Saint-Vith, Elsenborn, or Monschau-Höfen. Bletchley Park post-mortem At Bletchley Park, F. L. Lucas and Peter Calvocoressi of Hut 3 were tasked by General Nye (as part of the enquiry set up by the Chiefs of Staff) with writing a report on the lessons to be learned from the handling of pre-battle Ultra.
The report concluded that "the costly reverse might have been avoided if Ultra had been more carefully considered". "Ultra intelligence was plentiful and informative", though "not wholly free from ambiguity", "but it was misread and misused". Lucas and Calvocoressi noted that "intelligence staffs had been too apt to assume that Ultra would tell them everything". Among the signs misread were the formation of the new 6th Panzer Army in the build-up area (on the west bank of the Rhine near Cologne); the new 'Star' (signals control-network) noted by the 'Fusion Room' traffic-analysts, linking "all the armoured divisions [assembling in the build-up area], including some transferred from the Russian front"; the daily aerial reconnaissance of the lightly defended target area by new Arado Ar 234 jets "as a matter of greatest urgency"; the marked increase in railway traffic in the build-up area; the movement of 1,000 trucks from the Italian front to the build-up area; disproportionate anxiety about tiny hitches in troop movements, suggesting a tight timetable; the quadrupling of Luftwaffe fighter forces in the West; and decrypts of Japanese diplomatic signals from Berlin to Tokyo mentioning "the coming offensive". For its part, Hut 3 had grown "shy of going beyond its job of amending and explaining German messages. Drawing broad conclusions was for the intelligence staff at SHAEF, who had information from all sources," including aerial reconnaissance. Lucas and Calvocoressi added that "it would be interesting to know how much reconnaissance was flown over the Eifel sector on the US First Army Front". E. J. N. Rose, head Air Adviser in Hut 3, read the paper at the time and described it in 1998 as "an extremely good report" that "showed the failure of intelligence at SHAEF and at the Air Ministry". Lucas and Calvocoressi "expected heads to roll at Eisenhower's HQ, but they did no more than wobble".
Five copies of a report by "C" (Chief of the Secret Intelligence Service), Indications of the German Offensive of December 1944, derived from ULTRA material and submitted to the DMI, were issued on 28 December 1944. Copy No. 2 is held by the UK National Archives as file HW 13/45. It sets out the various indications of an impending offensive that were received, then offers conclusions about the wisdom conferred by hindsight; the dangers of becoming wedded to a fixed view of the enemy's likely intentions; over-reliance on "Source" (i.e. ULTRA); and improvements in German security. "C" also stresses the role played by poor Allied security: "The Germans have this time prevented us from knowing enough about them; but we have not prevented them knowing far too much about us". After the war ended, the U.S. Army issued battle credit in the form of the Ardennes-Alsace campaign citation to units and individuals that took part in operations in northwest Europe. The citation covered troops in the Ardennes sector where the main battle took place, as well as units farther south in the Alsace sector, including those in northern Alsace who filled the vacuum created by the U.S. Third Army's race north, those engaged in the concurrent Operation Nordwind diversion in central and southern Alsace (launched to weaken the Allied response in the Ardennes), and those who provided reinforcements to units fighting in the Ardennes. In popular culture The battle has been depicted in numerous works of art, entertainment, and media, including: Games: Over 70 board wargames have been created about the battle, the earliest in 1965. As of 2014, the battle had been the subject of about 30 video games, mostly strategy games, beginning with Tigers in the Snow (1981). Literature: In Kurt Vonnegut's postmodern novel Slaughterhouse-Five, or The Children's Crusade: A Duty-Dance with Death (1969), the protagonist Billy Pilgrim is captured by the advancing German army during the Battle of the Bulge.
Television: The battle was the subject of the PBS American Experience episode, "The Battle of the Bulge". The battle was prominently featured in two episodes of the miniseries Band of Brothers (2001). Additionally, the Military/American Heroes TV series Greatest Tank Battles featured an episode on the Battle of the Bulge as "The Battle of the Bulge: S.S. Panzers Attack!"
Horatio Lloyd Gates was born on July 26, 1727, in Maldon, a small town in Essex, England. His parents were not particularly wealthy, but he was nonetheless born under a lucky star: with the intercession of his mother and the financial support of a generous duke, Horatio joined the ranks of the British Army in 1745. The pinnacle of his military career came in 1777, when the American forces he commanded emerged victorious from the Battle of Saratoga, arguably the greatest victory of the American army to date. His military fame, however, was short-lived and lasted only three years. In 1780, he lost the Battle of Camden. Horatio Gates Facts 1. From soldier to farmer Horatio Gates's career in the British army was marked by his participation in the French and Indian War. At first, he served under General Edward Braddock and was part of his failed attempt to capture Fort Duquesne in 1755. He then continued his service under General John Stanwix and General Robert Monckton, eventually reaching the rank of major, an interesting fact about Horatio Gates. However, Gates was a deeply practical man. He was quick to realize that he lacked the ties and the money necessary to push him beyond the rank of major in the British Army. Therefore, in 1769, he made the most of his military status by selling his major's commission. Gates then used the money to emigrate to the New World with his wife, Elizabeth, and his son, Robert. He chose Virginia as the place for a new beginning and purchased a small plantation where he settled with his family. 2. Back in service When the American Revolutionary War broke out in 1775, Horatio Gates was quick to feel the call of duty. Or maybe he saw new opportunities and rushed to seize them. For one thing, the newly formed revolutionary army lacked experienced officers, and Gates was quickly promoted to the rank of Brigadier General, and later to Adjutant General.
He was patronized by George Washington himself, whom he secretly despised. Washington highly valued Horatio Gates's organizational skills, but the latter had been longing for some real action on the battlefield. 3. The Battle of Saratoga and Gates's ascent to fame Horatio Gates's ascent to fame began in 1776, when he strategically maneuvered some of his troops southwards to join forces with Washington's army in Pennsylvania. Then, suddenly, a wayward move followed. Instead of staying in charge of his forces during the planned nighttime raid on Trenton, Gates left for Baltimore to attend a sitting of the Continental Congress, on the pretext that he did not agree with Washington's more aggressive tactics. An interesting fact about Horatio Gates is that he tried to undermine Washington's position in the Congress, but the latter achieved sweeping victories at both Trenton and Princeton. As a result, Gates was sent northwards to serve under General Philip John Schuyler. Schuyler eventually fell into disfavor with the high command after the defeat of his army at Fort Ticonderoga, and thus Gates assumed command of the Northern Department in August 1777. So, thanks to this lucky turn of events, Gates happened to be at the helm of the Northern Army when it thrashed the invading forces of British General Burgoyne at Saratoga. 4. The Conway Cabal and Gates's attempt to supersede Washington Horatio Gates had many times reiterated that he, not Washington, should have been installed as commander-in-chief of the Continental Army. Immediately after his victory at Saratoga, he seized the chance to ask the Congress for promotion to higher office. As a result, he became simultaneously President of the Board of War and a field commander, a textbook example of a conflict of interest. Even more embarrassing was the fact that this position put him above his commanding officer, Washington. Eventually, Horatio Gates managed to overcome his thirst for power.
He apologized to Washington, stepped down as President of the Board of War, and kept his military rank.

5. Gates's debacle at Camden

The great Napoleon had his Waterloo, and similarly, Horatio Gates's most tragic defeat came at Camden. There, in 1780, he overplayed his hand. Blinded by his recent military success, he rushed a motley force of militia, mercenaries, and fatigued troops to face General Charles Cornwallis's much better organized and, more importantly, well-rested army. In the days preceding the attack, Gates had pushed his men over 270 miles, and when the two armies finally clashed, his troops were exhausted. Even more shameful was his disorganized retreat in the face of an inevitable defeat.

6. Unshaken by the storms of life

It is said that the Vikings were tough because of the northern winds they constantly had to withstand. Similarly, the resilience of Horatio Gates shows most clearly in two tragic events of his life that he managed to overcome. First, in October 1780, shortly after his hapless defeat at Camden, Gates learned that his only son, Robert, had died in combat. Then, in 1783, Gates also lost his wife, Elizabeth. Yet after his retirement the following year, he worked through his grief and again engaged in public service. Having returned to his estate in Virginia, now known as the Traveler's Rest historical site, Horatio Gates was elected president of the Virginia branch of the Society of the Cincinnati, an organization of former Continental Army officers. Three years after his wife's death, Gates married Mary Valens; the two were already too old to have children, but despite their advanced age, the couple remained socially and politically active until Gates's death in 1806.
Horatio Gates should be remembered among America's notable military officers for his courage balanced with caution. He always maintained that success on the battlefield rested largely on a sound defensive strategy. Ironically, his debacle came precisely when he abandoned this basic principle and rushed his men into a reckless, all-out attack! I hope that this article on Horatio Gates facts was helpful. If you are interested, visit the Historical People Facts Page!
The Welfare of Salmon During Net-Pen Roundups

In some fish farming industries, Atlantic salmon and rainbow trout are transported from fish farms to separate processing facilities via well-boats. Once at the processing facility, the fish are often crowded into net-pens for anywhere from one to six days before being pumped to the slaughter line. They are not fed during this time, or for several days prior to transport. Previous studies have not shown that the transportation process increases fish stress levels, negatively impacts fish welfare, or decreases the fish's flesh quality. This study, conducted at a salmon and trout facility in central Norway, was the first to examine in depth the impacts of crowding and the conditions inside the net-pens. The facility has a total of eight net-pen units, each measuring 24 by 24 meters. Researchers placed data loggers in the second net-pen and measured the dissolved oxygen (DO), temperature, depth, and current velocity of the water three times: prior to the scheduled slaughter, when fish crowding occurs, and during two different crowding operations. The researchers placed GoPro action cameras in the pens to observe fish behavior and ventilation rates throughout the crowding process. They also sampled the blood, muscle pH, and body temperature of several fish from each of the three test groups. The sampled fish were killed with a cranial blow within five to ten seconds of being removed from the water, then placed in ice storage and later measured for onset of rigor mortis. The physiological samples showed that the fish were more stressed than the data from the loggers and GoPro cameras would suggest. In terms of water quality, DO varied between 98% and 109% saturation before crowding began. During crowding, DO measured between 95% and 106% in the bulk volume of the net-pen and between 78% and 106% in the sweep-net itself.
The researchers concluded that the fish therefore had access to oxygen at all times. Salinity, temperature, and current velocity stayed constant or varied within acceptable limits, and pH was similar to that of fresh seawater. The density of fish in the bulk volume of the net-pen was predictably lower in the second crowding operation, since a number of fish had already been removed. Fish behavior did not appear to change during the crowding operation, and few signs of stress, such as burst activity or white-muscle swimming, were observed. The underwater footage revealed that the fish swam calmly among one another in irregular patterns when space was available for them to do so. However, as density in the sweep nets increased, it became harder for the fish to maintain their normal swimming patterns. Many of the fish near the bottom were forced to swim vertically and came into direct contact with the sides of the net. Others swam in normal patterns but with varying direction. Ventilation rates of rainbow trout nearly doubled during crowding. Blood analysis confirmed that both the trout and the salmon were stressed. In unstressed fish, cortisol levels should be near zero, yet results showed that the fish were stressed even prior to crowding. During crowding, the stress levels of fish in both the bulk volume and the sweep-nets rose further. Cortisol levels peaked after about 20 minutes and did not continue to increase after that. They returned to baseline levels about 24 hours after crowding. Blood pH, blood lactate, white-muscle pH, and rigor mortis onset time demonstrated that the fish were significantly stressed from the very beginning of the experiment, and measurements of these same variables were not drastically different after crowding. It's possible that the fish in the bulk volume had still not recovered from a crowding incident the previous day (before the start of the experiment).
It's also possible that the transportation or pumping of fish into the processing plant increased their stress levels. The effect of crowding on the blood chemistry and muscle pH of unstressed fish is therefore difficult to determine. While the researchers admit that their data might not be representative of all Norwegian fish processing facilities, the results clearly demonstrate the importance of considering multiple variables when assessing animal welfare. If only swimming behavior and water conditions were taken into account, the fish would appear to be minimally stressed. Ventilation rates and cortisol levels showed that crowding does indeed have a stressful effect on fish. Body chemistry analysis also showed that the fish were stressed even before the crowding process began. These results should prompt animal advocates and fish farmers to question welfare conditions in several parts of the fish-farming industry.
Dr Serap Beyazyuz Yuva, M.D., now practicing at a nearby medical centre, led this workshop with practical information on the symptoms and prevention of this important issue. The Scouts also plan to hold an interactive exhibition at their school on 7 April 2013, or on another day when they can find a gap in the school schedule. This will make all their schoolmates aware of the complications of high blood pressure, and beyond that, they will ask their schoolmates to raise awareness among their elder family members on the issue (a chain reaction, in Scouting tradition). They are true messengers of peace, taking up issues that benefit their surroundings and make this world better. World Health Day is celebrated on 7 April to mark the anniversary of the founding of WHO in 1948. Each year a theme is selected for World Health Day that highlights a priority area of public health concern in the world. The theme for 2013 is high blood pressure.
There was something about July. It was the month for regime changes in St. Augustine. When the Spanish turned over Florida to the British, the new flag was raised over the Castillo de San Marcos on July 20, 1763. When Great Britain gave Florida back to the Spanish, incoming Governor Zespedes watched the Spanish flag once again ascend the flagpole on the fortress on July 12, 1784. On July 10, 1821, the Spanish flag was lowered for the last time, to be replaced by that of the incoming United States. By 1821 Spain had lost most of its colonies in the Americas to independence movements and no longer needed Florida and St. Augustine to protect the sea routes from Mexico and South America to Europe. Why July? Perhaps it was something cosmic. For a July date was not always in the plans, and each change in government and departure of residents followed a different path. When the British arrived in 1763, departing Spanish residents took about six months to evacuate. Most of them went to Havana, with a few heading to Mexico. In 1784, dual British and Spanish governments and evacuations dragged on for more than a year and a half. Zespedes set sail for St. Augustine from Havana on June 19, but bad weather delayed his arrival. Two governors in our town at the same time made for contention and violence. Evacuating British headed for the Bahamas, the British West Indies, the British Isles and parts of Canada. Some of the British evacuees returned to Florida when their new homes proved unsatisfactory, only to look for another destination. Spanish residents who had evacuated 20 years earlier returned from Cuba intending to reclaim their abandoned lands. When the U.S. became the possessor of Florida in 1821, all but a handful of the evacuees had sailed before the transfer of governments. Yet many residents remained in St. Augustine and the countryside. Not only the flags changed with the regimes, but so did the official language.
Under Spanish rule, English speakers had to pay for the official correspondence to be translated. Documents written in Florida's early American years appear in both Spanish and the official language of English in books at the St. Johns County Courthouse. The records of St. Augustine's Catholic Church in the 1820s truly evidence a town in transition. Minutes of the parish's wardens (trustees) meetings frequently contained a paragraph in English only to be followed by one in Spanish and then back to English. Read more St. Augustine History online
Energy of the Future: Biodiesel

Transcript of Energy of the Future: Biodiesel

What is it? Common definition: a sustainable fuel substitute for diesel fuel made from vegetable oils. Chemical definition: fatty acid alkyl esters.

So what are the advantages of biodiesel? It is cleaner, sustainable, and affordable. The use of biodiesel in a diesel engine substantially reduces unburned hydrocarbons, carbon monoxide, and particulate matter compared to emissions from diesel fuel, all of which are very harmful to the environment and your health. Fossil fuels are very limited and are produced only over millions of years. Because about 62% of our oil is imported, we are dependent upon foreign countries for our energy. What's the point of relying on an energy source that we know is rapidly diminishing? Biodiesel is made from farmed crops and is therefore sustainable and domestic. Because biodiesel is less expensive to make and produce, and can be made and produced within the U.S., it can be sold back to consumers at a much lower price.

And how is it made? Biodiesel is made from 95%-99% vegetable oil combined with methanol over an NaOH (sodium hydroxide) catalyst: a substance that accelerates a chemical reaction without itself being consumed.
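The oil-plus-methanol recipe above can be roughed out stoichiometrically: one triglyceride molecule reacts with three methanol molecules to give three fatty acid methyl esters (biodiesel) and one glycerol. The sketch below is only an illustration; the average triglyceride molar mass of about 872 g/mol is an assumed textbook figure for soybean oil, and real recipes run a large methanol excess to drive the reaction forward.

```python
# Rough transesterification stoichiometry:
# 1 triglyceride + 3 methanol -> 3 fatty-acid methyl esters + 1 glycerol
TRIGLYCERIDE_MM = 872.0  # g/mol, assumed average for soybean oil
METHANOL_MM = 32.04      # g/mol

def methanol_needed(oil_grams: float, excess: float = 1.0) -> float:
    """Grams of methanol for a given mass of oil.

    excess=1.0 is the stoichiometric minimum; excess=2.0 doubles it.
    """
    moles_oil = oil_grams / TRIGLYCERIDE_MM
    return moles_oil * 3 * METHANOL_MM * excess

# Stoichiometric minimum for 100 g of oil: roughly 11 g of methanol
print(round(methanol_needed(100), 1))
# With a 100% excess, as small-batch recipes often use: roughly 22 g
print(round(methanol_needed(100, excess=2.0), 1))
```

The takeaway is that methanol is a minor input by mass (on the order of a tenth of the oil), which is part of why the transcript can describe biodiesel as 95%-99% vegetable oil.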
Fuel Cells

What is a fuel cell? Polymer electrolyte membrane (PEM) fuel cells convert chemical energy in hydrogen and oxygen gases directly into electrical energy using a catalyst.

Advantages: sustainable and cleaner. PEM fuel cells are powered by hydrogen and oxygen, both simple gases that are fairly easy to harness. The only products are water, heat, and electricity, all of which are harmless to the environment.

Research Findings: Biodiesel Processing

Biodiesel was successfully made from both soybean and corn oil. The density of the biodiesel is less than that of the parent oil. A sample of each type of biodiesel completely dissolved in 30 mL of methanol, indicating all glycerol had been removed. The washed biodiesel product from the soybean parent oil contained an additional immiscible layer of a white, waxy-looking substance that was absent in the biodiesel from the corn parent. The gel points of both soy and corn biodiesel are just a few degrees below zero Celsius.

Insights Gained: Biodiesel Processing

The devised recipes are highly effective at converting the triglycerides to fatty acid methyl esters (biodiesel). Biodiesel from neither soy nor corn would be suitable for use in vehicles at sub-zero temperatures; more research is needed to achieve a winter-ready biodiesel. Given the similar fatty acid distribution of corn and soybean oils, the absence of the white waxy material in the corn-oil biodiesel product is most likely due to variation in the specific triglycerides constituting each.

Research Findings: PEM Fuel Cell Current Behavior

How does a fuel cell work? The apparatus comprised a light source, an electrolyzer, a fuel cell, and a current/voltage measurement box. UV light did not cause any current to flow, even after 20 minutes. The rate of rise in current was inversely proportional to the distance from the light source, and seemed to quicken at an angle (45 degrees) for distances greater than 4 inches.

Works Cited

Carlson, Scott.
“Colleges Convert Fryer Oil into Fuel for Vehicles.” The Chronicle of Higher Education. 55.23 13 Feb 2009. General OneFile. Web. 20 July 2010. "Enterprise Holdings to Convert Fleet to Biodiesel." Travel Business Review [TBR] 10 Feb. 2010. General OneFile. Web. 20 July 2010. Freund, Ken. "Biodiesel benefits?" Motorhome Aug. 2008: 80. General OneFile. Web. 20 July 2010. "Biodiesel and Other Renewable Diesel Fuels [electronic resource]." Office of Scientific and Technical Information, U.S. Dept. of Energy, 2006. MelCat. Web. 20 July 2010. Hess, M. Scott. "How Biodiesel Works." 18 June 2003. HowStuffWorks.com. Thomas, Valarie. Biodiesel Lecture 1. University of Michigan. Computer Science Building, Ann Arbor, Michigan. 13 July 2010. Class Lecture. Thomas, Valarie. Biodiesel Lecture 2. University of Michigan. Computer Science Building, Ann Arbor, Michigan. 14 July 2010. Class Lecture. Thomas, Valarie. Biodiesel Lecture 3. University of Michigan. Computer Science Building, Ann Arbor, Michigan. 15 July 2010. Class Lecture. GO GREEN AT BLUE! LEAD Chemical Engineering 2010 Acknowledgements: Special Thanks to... Dr. Valarie Thomas, Ms. Lauralyn Taylor, Ms. Angela Newing, Mr. Ben Mason, Mrs. Lucie Howell, and the LEAD 2010 Engineering Team and Facilitators
Recent analysis by the U.S. Department of Energy (DOE) shows that buildings meeting a 2010 energy efficiency standard will use 18.5 percent less energy than those using the 2007 version of the standard, helping commercial building owners cut operating costs. According to the DOE report, the new version of Standard 90.1, “Energy Standard for Buildings, Except Low-Rise Residential Buildings,” will also help building owners achieve sustainability goals and reduce carbon pollution. States must update their building codes to meet or exceed the energy efficiency of the new standard within two years, with certification statements due on October 18, 2013. Standard 90.1-2010 contains 19 requirements for energy efficiency, including the following: cool roofs in hot climates; skylights and daylighting in certain building types; daylighting controls under skylights; commissioning of daylighting controls; increased use of heat recovery; supply air temperature resetting for non-peak conditions; lower lighting power densities; control of exterior lighting; occupancy sensors for specific applications; and efficiency requirements for data centers. Some changes in the standard were the result of public comments. DOE analyzed energy codes published by the American National Standards Institute/American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) and the Illuminating Engineering Society of North America to determine potential energy-efficiency savings in buildings that adhere to the code. DOE simulated 16 representative building types, ranging from single-family homes to the world's largest buildings, in 15 U.S. climate locations.
Comparing hypothetical buildings constructed to the old standard to the same buildings constructed to the new standard is “a lot like MPG ratings for cars—results will vary with actual use,” says Robert Diemer, director of In Posse, a national environmental consulting firm, headquartered in Philadelphia, focusing on sustainable design in a wide range of market sectors. DOE's assessment is consistent with the usual result of a design team and building owner working together to implement best practices in energy-efficient design following the standard, says Michael Gresty, executive vice president of Sustainability Roundtable Inc., a Cambridge, Mass.-based research and consulting firm focused on the development of sustainable business practices. However, he adds, for Standard 90.1-2010 to improve a structure's energy efficiency, “it is essential that the project be commissioned and be operated as designed.” Time will tell Although the impact of each updated standard usually takes years, notes Matthew Dugan, president of DVL Automation, a Philadelphia-based HVAC system integration firm, the U.S. Green Building Council (USGBC) is accelerating the impact of this standard ahead of building codes by basing the 2012 version of LEED criteria on the ASHRAE 90.1-2010 version. “This represents the next step in our evolution towards impacting global climate change,” Dugan says, “with the reduction of U.S. building stock from its current average of 91 KBTU [Kilo British Thermal Units] per sq. ft. per year to our interim goal of 47 KBTU per sq. ft. per year by 2030.” Each time ASHRAE Standard 90.1 is updated, “the bar is raised and we have to work harder to generate equivalent savings,” says Diemer. Not only must states comply by updating building codes, but equipment manufacturers are forced to improve the performance of their products to meet increasingly stringent efficiency thresholds.
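The interim goal Dugan quotes implies roughly a halving of average building energy intensity. A quick check of the arithmetic, using only the two figures he cites:

```python
# Percentage reduction implied by the quoted 2030 interim goal
current_kbtu = 91  # kBTU per sq. ft. per year, current U.S. average
goal_kbtu = 47     # kBTU per sq. ft. per year, 2030 interim goal

reduction = (current_kbtu - goal_kbtu) / current_kbtu
print(f"{reduction:.0%}")  # prints "48%"
```

That 48 percent cut to average building stock is a far larger step than the 18.5 percent per-building improvement DOE attributes to the 2010 standard alone, which is consistent with Dugan's framing of the goal as a multi-cycle "evolution."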
However, Diemer believes the impact of adopting each new standard falls mostly on building design professionals, who face a learning curve as well as potential pushback from commercial building developers over the need to incur additional costs to meet new requirements, such as mandatory daylight harvesting in certain situations. Dugan notes that state building codes no longer adopt ASHRAE standards by “reference,” and instead simply select portions they choose to “edit” or “copy.” While he expects code officials to align with the new standard, he says he is concerned that “several of its elements reach into new areas of influence—in particular, the scope of the standard has been expanded to include process loads, e.g. data centers, and receptacles.” The 2007 version of Standard 90.1 has actually already been surpassed by ASHRAE 189.1, Gresty points out. According to an estimate by the National Renewable Energy Laboratory (NREL), “Standard 189.1 could lead to site energy savings between 10 percent and 34 percent over Standard 90.1-2007, based on the minimum prescriptive recommendations of the new standard—and possibly even higher, since further energy-saving measures were incorporated after the NREL inquiry,” he says.
Because the standard only results in theoretical improvements rather than real and absolute performance improvements, says Diemer, “the impact on actual building energy use is most likely less than assumed by DOE's analysis of findings.” He would prefer to see the standard require “actual, documented energy performance rather than allow for prescriptive compliance or calculated theoretical savings.” Although Diemer recognizes that such an approach isn't practical or politically feasible, he believes that requiring disclosure of actual energy performance could drive building performance beyond Standard 90.1, because “such measures could be applied to all buildings, not just those requiring building permits or LEED certification.” Energy disclosure requirements are starting to be put into place at state and local levels—including in Austin, Texas; Boulder, Colo.; New York City; Chicago and Washington, D.C.—and are currently under consideration in Philadelphia, says Janet Milkman, LEED Green Associate and executive director of the Delaware Valley Green Building Council. Milkman says her organization is “supporting these requirements as a strong measure to drive the market for energy-efficiency applications.”
HAMILTON — Jolly old St. Nick is happy for a reason, and it may be the same reason his red coat could double as a sail. Scientists out of McMaster University have discovered a happy gene — and it just so happens to be the same one that is a major contributor to obesity. “So you can be obese, and happy,” said David Meyre, an associate professor at McMaster’s department of clinical epidemiology and biostatistics and a senior author of the study released Tuesday in the journal Molecular Psychiatry. “This is the first time we know there is some biological reason, or a pathology, behind something like mental health, especially depression,” added first author Zena Samaan, an assistant professor at McMaster’s department of psychiatry and behavioural neurosciences. The scientists set out to find out which genes leave people more susceptible to depression. What they found was that a gene that predisposes some people to obesity, called FTO, also offers some protection against depression. Meyre, who holds the Canadian research chair in genetic epidemiology, recognized that the “unexpected” result contradicts what has long been thought to be a positive link between obesity and increased incidence of depression. The connection between the FTO gene, obesity and lower risk of depression was found through statistical analysis of data on 27,000 people, comparing their weight, genetics and depression levels, Samaan said. It showed that the risk of depression is reduced by eight per cent when a person carries the FTO mutation, a number both Meyre and Samaan said was considered modest. However, Samaan added, for a disease that affects nearly one in five Canadians, understanding the role genetics plays in depression could lead to a greater understanding of its biological mechanisms and possibly even potential treatments.
Last updated 21 October 1997 |B-41 (38 K)||Bassoon Prime Device (110 K)| The Mk/B-41 was the highest yield nuclear weapon ever deployed by the U.S. It was also the only three-stage thermonuclear weapon ever developed by the U.S., and it achieved the highest yield-to-weight ratio of any U.S. weapon design. |Length||12 ft. 4 in (148 in)| |Diameter (body)||52 in| |Diameter (tail fin)||74 in| |Number Manufactured||About 500| |Manufactured||September 1960 to June 1962| |Retired||November 1963 to July 1976| Three stage radiation implosion weapon Deuterium-tritium boosted primary. Fusion stages presumably use Lithium-6 (95% enrichment) deuteride fusion fuel. The B-41 was deployed in a "dirty" version (the Y1, with a U-238 encased tertiary stage) and a "clean" version (the Y2, with a lead encased tertiary stage). It may be that both used a secondary with a lead fusion tamper. There are actually two reported yields for this bomb, "less than 10 Mt" and 25 Mt. It is possible that the 25 Mt yield applies only to the dirty Y1 version, with the clean Y2 version having the lower yield. According to Dr. Theodore Taylor (physicist and former weapons designer), the practical limit for nuclear weapon yield-to-weight ratio is about 6 Kt/kg. Using the deployed weapon weight (10,670 lb) and a yield of 25 Mt, the Mk-41 achieved 5.2 Kt/kg. If we look at the test devices fired in Hardtack I, however (see below), which lack such weighty and in principle unnecessary things as parachutes, we see weights of 8,752 - 9,723 lb. Taylor's maximum achievable yield-to-weight ratio of 6 Kt/kg corresponds to a device weight of 9,190 lb, well within the weight range of these devices. Strategic bomber - most recently the B-52G (internal bomb bay) "Full Fuzing Options" (FUFO), options probably selected on ground prior to mission. Five fuzing options: Parachutes used: a 4-5 ft diameter pilot chute, and a 16.5 ft diameter main ribbon chute for high-speed stabilization.
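The yield-to-weight figures quoted above can be checked with a few lines of arithmetic. This is just a sanity-check sketch of the stated numbers, using the standard pound-to-kilogram conversion:

```python
# Verify the quoted B-41 yield-to-weight figures.
LB_TO_KG = 0.45359237  # exact avoirdupois conversion

def yield_to_weight(yield_mt: float, weight_lb: float) -> float:
    """Yield-to-weight ratio in kilotons per kilogram."""
    return (yield_mt * 1000) / (weight_lb * LB_TO_KG)

# Deployed configuration: 25 Mt at 10,670 lb
ratio = yield_to_weight(25, 10_670)
print(f"{ratio:.1f} Kt/kg")  # ~5.2 Kt/kg, as stated

# Weight at which a 25 Mt device would reach Taylor's 6 Kt/kg limit
weight_lb = (25 * 1000 / 6) / LB_TO_KG
print(f"{weight_lb:.0f} lb")  # ~9,190 lb, within the Hardtack I device range
```

Both of the text's claims check out: the deployed bomb comes to about 86% of Taylor's practical limit, and the limit-weight for 25 Mt falls inside the 8,752 - 9,723 lb span of the parachute-less test devices.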
The B-41 program originated in 1955 when the Air Force issued a requirement and a feasibility study for a Class "B" (10,000 lb), 62 inch diameter high yield thermonuclear weapon. UCRL proposed adapting an experimental three-stage thermonuclear system they were developing, which was subsequently scheduled for test-firing during Operation Redwing in 1956. Two versions of the proposed UCRL test device, named "Bassoon" and "Bassoon Prime", were test-fired in "clean" and "dirty" configurations during the Zuni and Tewa shots of Redwing. The Bassoon device fired in Redwing Zuni (27 May 1956) was 39 inches in diameter, 135.5 inches long, and weighed 12,158 lbs. The predicted yield for Zuni was 2-3 Mt; it achieved 3.5 Mt. This device used a lead fusion tamper and was quite clean, with 85% of the energy coming from fusion and only 15% from fission. The Bassoon Prime device fired in Redwing Tewa (20 July 1956) was 39 inches in diameter, 135.5 inches long, and weighed 15,735 lb. The predicted yield for Tewa was 6-8 Mt; the actual yield was 5 Mt. In contrast to Zuni, Tewa used a uranium fusion tamper and was quite dirty, with only 13% of the energy coming from fusion and 87% from fission. This device produced a fusion yield of only 650 Kt compared to the 3 Mt of Zuni. Both were experimental "proof of concept" systems only, not test versions of actual designs intended for deployment. Redesign to meet military requirements and additional testing was thus required, which was carried out in Operation Hardtack Phase I in 1958. In November 1956 the feasibility study was completed and the designation TX/XW-41 for a bomb and a missile warhead version was assigned. On January 28, 1957, the DOD formally requested that the AEC develop a new Class "B" weapon using the UCRL design. The military characteristics for the bomb and warhead were approved in mid-February, and development engineering of the designs began.
In June the proposed ordnance characteristics of the TX-41 bomb and XW-41 warhead were accepted by the Special Weapons Development Board; the ICBM warhead application was canceled at the end of July. A test of the boosted TX/XW-41 warhead primary and secondary in a bomb mockup was fired in Plumbbob Smoky at the NTS on 31 August 1957. The device yielded 44 Kt (predicted yield was 48 Kt, range 45-50 Kt); it measured 50" in diameter and 126.2" in length and weighed 9,408 lbs. The test included some thermonuclear yield. Drop testing of the TX-41 ballistic shape was conducted between December 1957 and December 1959 at the AEC's Tonopah (Nevada) and Salton Sea (California) test ranges. Prototypes of the TX-41 bomb, all of them clean variants, were fired during the Sycamore, Poplar and Pine shots of Operation Hardtack Phase I at the PPG between May 31 and July 27, 1958. The Sycamore shot (31 May 1958) used a two-stage clean version of the TX-41. The predicted yield was five megatons, of which just 200 kilotons was to be fission yield. The device fizzled though, with a total actual yield of only 92 Kt, although low level burning was detected in the second stage. The test device was 50 inches in diameter by 112.6 inches long and weighed 9,723 lbs. The Poplar shot (12 July 1958) was a repeat test of the two-stage variant. This device had a diameter of 48.2 inches, a length of 112.1 inches, and a reduced weight of 9,316 lbs. The Poplar device was predicted to yield 5-10 Mt, of which only 450 Kt was to be fission yield. This test was successful, with a yield of 9.3 Mt (the largest of Hardtack I, and the fifth largest U.S. test ever). The Pine shot (26 July 1958) used a three-stage configuration. This device had a diameter of 50 inches, a length of 112.6 inches, and a reduced weight of 8,752 lbs. The predicted total yield was 4-6 Mt, of which only 200 Kt was to be from fission. Actual yield was only 2 Mt. The device is said to have had dual-primaries.
The ordnance characteristics of the TX-41 were revised and accepted by the SWDB in mid-October 1958. Production engineering of the TX-41 started soon afterwards. |1955||Air Force issued a requirement and a feasibility study for a Class "B" weapon (high megaton range, 10,000 lb, 62 inch diameter or less)| |27 May||Bassoon device fired in Redwing Zuni (3.5 Mt), test firing of UCRL "clean" 3-stage concept for class "B" requirement| |20 July||Bassoon Prime device fired in Redwing Tewa (5 Mt), test firing of UCRL "dirty" 3-stage concept for class "B" requirement| |November||Feasibility study completed and designation TX/XW-41 assigned| |28 January||DOD formally requested that the AEC develop the TX-41 weapon using the UCRL design, development engineering begins| |June||Proposed ordnance characteristics of the TX-41 bomb accepted| |31 August||Plumbbob Smoky shot: test of the boosted TX-41 warhead primary and secondary in a bomb. Yield 44 Kt, test included some thermonuclear yield.| |May-July||Prototype tests of the TX-41 weapon fired in Operation Hardtack Phase I at Enewetak: 31 May (GMT) Sycamore - two-stage clean version of the TX-41. Predicted yield 5 Mt total, 200 Kt fission. The device fizzled with total actual yield of 92 Kt, although low level burning was detected in the second stage. 12 July (GMT) Poplar - repeat test of the two-stage variant. Predicted yield 5-10 Mt, 450 Kt fission. Successful, with a yield of 9.3 Mt. 26 July (GMT) Pine - used a three-stage configuration. Predicted total yield 4-6 Mt, 200 Kt fission. Actual yield was 2 Mt. The device is said to have had dual-primaries. |October||Ordnance characteristics revised and accepted by SWDB; production engineering of the TX-41 begins| |September||Early production of the Mk-41 Mod 0 bomb begun| Early production of the Mk-41 Mod 0 bomb began in September 1960; by June 1962, approximately 500 units had been manufactured.
These weapons were retired between November 1963 and July 1976 as the more-versatile Mk-53 replaced them in the stockpile.
Russell Brand and Elton John share more than careers in entertainment and British roots: both once battled bulimia, an eating disorder. Many people hold a stereotype of those with eating disorders: they are female, probably young, with body-image issues. They might also be overweight or obese, as well as prone to bullying. In reality, about one in every three people with the disorder is male, according to the National Eating Disorders Association (NEDA). This infographic also highlights the prevalence of these conditions among the male population: - In the ten years from 1999, hospitalizations among men due to eating disorders rose by over 52%. - Among those with bulimia, about 25% are males. - Over 4.5% of heterosexuals may also have an eating disorder. The Necessity of Treatment for Bulimia among Men Bulimia is an eating disorder characterized by purging, fasting, or vomiting after a meal to lose weight. Both men and women may consume laxatives or go through episodes of binging. Either way, bulimia treatment is necessary, especially among men, because of its co-morbidities. Eating disorders can increase the risk of heart disease. People with the condition tend to experience bradycardia, a slower heart rate, and lower blood pressure. The heart can also atrophy or stiffen, which can prevent it from contracting or pumping blood properly. Poor eating habits can also lead to electrolyte imbalance, and these electrolytes are needed to maintain the electrical function of the cardiac muscles. In general, studies show that men are more likely to develop heart disease, and having an eating disorder will only exacerbate or speed up the appearance of symptoms. NEDA also explains that males with this condition are prone to mental disorders such as depression and anxiety. They can also suffer from substance abuse or subject themselves to excessive exercise due to body dysmorphia.
Men Need a Male-centric Treatment
Despite the increasing incidence of eating disorders such as bulimia among the male population, only a few men seek treatment, and those who do are usually in the moderate to severe stages of their condition. Two factors may contribute to this. One, unlike women, men often still report a sense of control over their condition, a notion that can keep them from seeking early intervention. Second is the stigma that comes with having an eating disorder. According to a 2013 study published in CMAJ, many men feel they have to live by the macho code: men are in control, and if they have issues, they are expected to tough it out. It doesn't help that these conditions are thought of as "female problems." For those who do seek therapy, the setting itself may discourage them: most treatment groups are female-dominated, which can make a man feel awkward and isolated. Men and women can experience bulimia and other eating disorders differently, but the root causes remain the same. Males can be susceptible due to genetics, emotional trauma, and poor self-esteem. They therefore need as much professional support as women do.
By Greg DiNatale, Director of Fitness Education
Sleep. It may be the most underrated component of your fitness and weight loss routine. Of course, it goes much further than that, as researchers are now studying the link between a lack of sleep and some cancers. While some wear a lack of sleep as a badge of courage, the evidence for getting more shut-eye is overwhelming. How much sleep is enough? It varies, but most studies look at the effect of getting less than seven hours. Research published in the Annals of Internal Medicine reported that sleeping less than seven hours can undo the benefits of dieting: subjects who received inadequate sleep lost less than half as much fat as those who got seven hours or more, even though they were on the same diet. "Metabolic grogginess" is what University of Chicago researchers call what happens after four days of poor sleep. Your body's ability to use insulin (the master storage hormone) becomes disrupted, and insulin sensitivity dropped by more than 30%. As a result of the reduced sensitivity, your body pumps out more insulin, leading you to store fat in all the wrong places. This is just the beginning. Get less than six hours of sleep and you start to lose control of the hormones that govern hunger: production of ghrelin, which stimulates appetite, increases, while levels of leptin decrease, leaving you feeling as if your stomach is empty. That is a lousy combination if weight loss is your goal. And the hits keep coming: production of cortisol (a stress hormone associated with fat gain) increases, activating areas of your brain that make you want food. Combined with your elevated ghrelin level, this leaves you feeling hungry all the time, even if you just ate. Think it can't get any worse? It can. Your ability to make decisions is also impaired, which leads to poor food choices and eating more. Let's recap what we know so far.
Because you are getting less than seven hours of sleep each night, you are always hungry, you crave food that is bad for you, you serve yourself larger portions, and you cannot say, "No!" To put the cherry on top (probably not a good analogy here), your night-owl ways make it harder to recover from exercise by slowing the production of growth hormone – a natural source of anti-aging and fat burning that also facilitates recovery. Combine this with the fact that most people respond to stalled weight loss by increasing their activity, and you have yet another reason why they don't see improvements. The only answer (and it is not an easy one for most) is to get between seven and nine hours of sleep each night. If you do have a night of poor sleep, do your best not to follow it with another. The evidence is overwhelming that a lack of sleep will not only undercut your ability to lose weight but is also connected to diabetes, high blood pressure, heart failure, and cognitive degeneration. Your appearance, your health, and your quality of life may all depend on adequate sleep. Interested in starting your fitness journey?
Evidence-based governance of schooling: how actors at school level recontextualize new governance instruments
In order to promote quality assurance and quality improvement in schools, most European countries have introduced instruments supporting the idea of evidence-based policy (such as school inspections, educational standards, and annual comparative student assessment) to enhance the quality of the educational system. According to the idea of evidence-based policy, goals have to be communicated more explicitly, goal achievement has to be measured more rigorously, and feedback data (e.g. from student assessment) have to be communicated back to actors on all levels of the school system to provide information and motivational stimuli for quality improvement. In order to make this rationale work, re-contextualisation processes are necessary (Fend, 2006): actors on all levels of schooling have to understand both the goals and the performance feedback, and have to know how to translate them into practical action on their respective level.
What Is DNS?
Updated: March 28, 2003
Applies To: Windows Server 2003, Windows Server 2003 R2, Windows Server 2003 with SP1, Windows Server 2003 with SP2
Domain Name System (DNS) is one of the industry-standard suite of protocols that comprise TCP/IP, and it is included in Microsoft Windows Server 2003. DNS is implemented using two software components: the DNS server and the DNS client (or resolver). Both components are run as background service applications. Network resources are identified by numeric IP addresses, but these IP addresses are difficult for network users to remember. The DNS database contains records that map user-friendly alphanumeric names for network resources to the IP addresses used by those resources for communication. In this way, DNS acts as a mnemonic device, making network resources easier to remember for network users. The Windows Server 2003 DNS Server and Client services use the DNS protocol that is included in the TCP/IP protocol suite. DNS is part of the application layer of the TCP/IP reference model.
DNS in TCP/IP
For more information, and to view logical diagrams illustrating how DNS fits with other Windows Server 2003 technologies, see "How DNS Works" in this collection. By default, Windows Server 2003 DNS is used for all name resolution in a Windows Server 2003 network. In the most typical scenario, when a Windows Server 2003 network user specifies the name of a network host or an Internet DNS domain name, the DNS Client service running on the user's computer contacts a DNS server to resolve the name to an IP address.
Technologies That Use DNS
DNS and Active Directory
The Windows Server 2003 Active Directory directory service uses DNS as its domain controller location mechanism.
When any of the principal Active Directory operations is performed, such as authentication, updating, or searching, Windows Server 2003 computers use DNS to locate Active Directory domain controllers, and these domain controllers use DNS to locate each other. For example, when a network user with an Active Directory user account logs on to an Active Directory domain, the user's computer uses DNS to locate a domain controller for that domain. For more information about integrating DNS and Active Directory, see "How DNS Works" in this collection.
DNS and WINS
The earlier method of name resolution for a Windows network was Windows Internet Name Service (WINS). DNS differs from WINS in that DNS is a hierarchical namespace, whereas WINS is a flat namespace. Down-level clients and applications that rely on NetBIOS names continue to use WINS for name resolution. Because Windows Server 2003 DNS is WINS-aware, a combination of both DNS and WINS can be used in a mixed environment to achieve maximum efficiency in locating various network services and resources. For more information about using DNS in a mixed environment, see "How DNS Works" in this collection.
DNS and DHCP
For Windows Server 2003 DNS, the DHCP service provides default support to register and update information for legacy DHCP clients in DNS zones. Legacy clients typically include Microsoft TCP/IP client computers that were released prior to Windows 2000. The Windows Server 2003 DNS-DHCP integration enables a DHCP client that is unable to dynamically update DNS resource records directly to have this information updated in DNS forward and reverse lookup zones by the DHCP server. The following resources contain additional information that is relevant to this section:
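The name-to-address lookup described above can be exercised from any platform through the standard resolver API. This Python sketch is illustrative (not Windows-specific): it asks the system's configured resolver to map a host name to IPv4 addresses, the same lookup the DNS Client service performs for a Windows user.

```python
import socket

def resolve(name):
    """Map a host name to its IPv4 addresses via the system resolver."""
    # getaddrinfo consults the platform's resolver: the hosts file and
    # local cache first, then the configured DNS servers if needed.
    infos = socket.getaddrinfo(name, None, family=socket.AF_INET,
                               type=socket.SOCK_STREAM)
    # Each entry is (family, type, proto, canonname, sockaddr);
    # the address string is the first element of sockaddr.
    return sorted({sockaddr[0] for *_rest, sockaddr in infos})

print(resolve("localhost"))  # typically ['127.0.0.1']
```

Passing `None` as the service keeps the query a pure name lookup; a real client would then use the returned address to open a connection.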
June 4 – Today's Food History
National Eggs Benedict Day
Today's Food History, on this day in…
1845 Hatch's sowing machine for wheat, oats, and other grasses was first demonstrated.
1872 Robert Chesebrough of New York patented a method for making Vaseline.
1895 African American inventor Joseph Lee patented a machine for "bread crumbing." It was intended for use by restaurants to crumb large quantities of bread scraps.
1907 The automatic washer and dryer were introduced.
1936 Sylvan Goldman ran a successful chain of grocery stores where customers could carry hand baskets while they shopped. In 1936, when he was a major owner of the Piggly-Wiggly supermarket chain, he invented the shopping cart, taking the idea from a wooden folding chair: he put one basket on the seat, another below, and wheels on the legs. He and a mechanic, Fred Young, put one together with a metal frame and wire baskets. The frames could be folded up and the baskets stacked, which took up less storage room. Customers were reluctant to use the new contraption, so Goldman hired fake shoppers to wheel the carts around pretending to shop, so people could see how useful a cart could be. The carts became a hit, and he formed a new company to manufacture them. It is hard to imagine a supermarket or discount store without shopping carts today.
1970 At the 43rd National Spelling Bee, Libby Childress won by spelling the word 'croissant.'
1974 The Cleveland Indians were playing badly, and fewer and fewer fans came to watch them. To bring out the fans, the team held a 'Ten Cent Beer Night.' Only 22,000 fans turned out in a stadium that could seat 60,000, but they made up for the low numbers by becoming so drunk and unruly, running onto the field and disrupting the game, that the Indians had to forfeit to the Texas Rangers.
1980 Earle McAusland, publisher/editor of Gourmet magazine, died at age 89.
2007 Vincent Sardi Jr. died.
He operated the famous Broadway restaurant Sardi's for 50 years, retiring in 1997.
The metabolic syndrome is not a disease but rather a group of risk factors for heart disease. Elevated blood pressure, abnormal lipid levels, a large waistline, and a high fasting blood glucose level are all risk factors that can make up metabolic syndrome. According to the American Heart Association, metabolic syndrome affects 35% of adults in the United States. It is important to be tested for these risk factors and have them managed, because they put you at higher risk for a number of serious conditions, including type 2 diabetes, heart disease, heart attack, and stroke.
What are the risk factors for metabolic syndrome?
What are the symptoms of metabolic syndrome?
How is metabolic syndrome diagnosed?
What are the treatments for metabolic syndrome?
Are there screening tests for metabolic syndrome?
How can I reduce my risk of metabolic syndrome?
What questions should I ask my doctor about metabolic syndrome?
Where can I get more information about metabolic syndrome?
- Reviewer: Kim A. Carmichael, MD, FACP
- Review Date: 05/2015
- Update Date: 05/20/2015
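Because metabolic syndrome is defined as a cluster of risk factors rather than a single finding, it is commonly diagnosed when at least three of the five factors are present. The sketch below is a toy illustration of that counting rule; the exact thresholds used here are assumptions for the example, not clinical guidance from this fact sheet.

```python
# Toy "3 of 5" counting rule for metabolic syndrome risk factors.
# Threshold values below are illustrative assumptions only.
RISK_CHECKS = {
    "large_waistline":      lambda p: p["waist_cm"] > 102,
    "high_triglycerides":   lambda p: p["triglycerides_mg_dl"] >= 150,
    "low_hdl":              lambda p: p["hdl_mg_dl"] < 40,
    "high_blood_pressure":  lambda p: p["systolic_mm_hg"] >= 130,
    "high_fasting_glucose": lambda p: p["glucose_mg_dl"] >= 100,
}

def count_risk_factors(patient):
    """Count how many of the five risk-factor checks the patient meets."""
    return sum(1 for check in RISK_CHECKS.values() if check(patient))

def meets_syndrome_criteria(patient):
    # Three or more of the five factors is the commonly cited threshold.
    return count_risk_factors(patient) >= 3

# Hypothetical patient: large waistline, high triglycerides, low HDL,
# but normal blood pressure and fasting glucose -> 3 of 5 factors.
patient = {"waist_cm": 110, "triglycerides_mg_dl": 180, "hdl_mg_dl": 35,
           "systolic_mm_hg": 120, "glucose_mg_dl": 95}
print(count_risk_factors(patient), meets_syndrome_criteria(patient))
```

A real screening decision would of course rest on a clinician applying the published criteria, not on a threshold table like this one.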
Experience interdisciplinary Texas. Explore history, science, language arts, ESL, the arts, and many other stories of Texas through the Bullock Museum's school field trip program.
Plan Your Field Trip
What do you want to do and see when you get here? The Bullock Museum's school tour program invites teachers and students from across the state and beyond to explore the history, culture, and science of Texas past and present, and to inspire thought about what lies in the future. With three floors of exhibitions, including two special exhibition galleries, two theaters, volunteer-led conversation carts, and a variety of Student Special Event days, there is something for every grade level and subject area. Museum education staff members are happy to help plan your field trip and can suggest itineraries based on your needs. Download our TEKS-aligned curriculum resources, exhibition and film activity guides, lesson plans, and additional resources. You'll find great suggestions to tie your Museum experiences to classroom activities.
Please note: The portion of the first-floor Texas History Gallery covering American Indians, early Texas colonization, and westward expansion is unavailable. The first-floor Texas History Gallery is currently undergoing renovation to provide a new interpretive approach to the encounters between the French, Spanish, and regional American Indians. During renovations, students can explore the early history of Texas through the story of French exploration in the Museum's new installation featuring the 17th-century French ship La Belle.
Gather a Group
To receive the Museum's school group price, make sure:
- there are at least 10 students of similar age, grades K-12, in your group;
- you are enrolled in an educational institution or are a home school organization;
- you make a reservation at least 2 weeks in advance; and
- there is at least one teacher/chaperone for every 10 students.
Exhibitions: 3 floors
Texas Spirit Theater: Star of Destiny
"Visiting the Bullock Texas State History Museum is always my 1st graders' favorite field trip! The whole museum is interactive and fun for all ages." – Texas teacher
Make a Reservation
If possible, plan a trip to the museum for the fall semester; reservation schedules fill up quickly in the spring. To make a reservation, please have the following information handy:
- school name, contact information, district, and grade level
- lead teacher's name, phone number, and email address
- preferred and alternate dates and arrival times for the visit
- estimated number of students, teachers, and chaperones (1 adult per 10 students)
- any special needs
- lunch plans (consider ordering from the Story of Texas Café, or plan to bring your lunch and eat outside on picnic tables or on the bus)
Submit reservations at least two weeks before your ideal date. A reservation specialist will review the information and send an email confirming exhibition entry and theater times. If first-choice selections are not available, a specialist will contact you about alternative options.
Please note: Confirmation is communicated via email within a week of submitting a reservation. If, after a week, you have not received confirmation, first check your spam or junk folders, then email firstname.lastname@example.org or call the Teacher Hotline at (512) 463-6712 for more information.
Come in Advance
Try out your lesson plans, find the best exhibition stops, and check out film options before your trip. If you're an educator and you'd like a pre-trip Educator Preview Visit, we'll waive your exhibition admission when you arrive. Just show your school ID badge at the ticketing counter and tell the staff member that you are here to plan a field trip. You can also explore current and upcoming exhibitions and events and peruse the Artifact Gallery online. Don't just see films: experience them.
Sit in the huge IMAX® Theatre in front of the biggest screen in Texas and become part of stunning giant-screen documentaries about distant countries, space, oceans, animals, and many other amazing places and events. In the multimedia, 4D special-effects Texas Spirit Theater, watch as Texas history steps out on stage in The Star of Destiny, an epic saga about Texas's past, present, and future. Or experience the story of La Belle as seen through the eyes of a young colonist in Shipwrecked. Science and history TEKS have never been so fun!
Apply for Scholarships
Scholarships provide free admission to films. Scholarships are available on a first-come, first-served basis to groups demonstrating need and meeting reservation criteria. Please complete our online reservation form and indicate your interest in applying; our scholarship coordinator will follow up to complete the application process.
Special Days for Students
American Indian Heritage Day
September 30, 2016
Join us to celebrate the historic, cultural, and social contributions American Indian communities and leaders have made to Texas. Experience dancing and drumming performances, and interactive, hands-on activities for school groups.
Living History Days
First Thursdays of the Month
Meet costumed Museum volunteers interpreting a character from Texas history every first Thursday of the month. Travel through the Museum exhibitions, stop for a chat, and hear stories of what Texas was like in days past. 2016-2017 dates: October 6, November 3, December 1, February 2, March 2, April 6, May 4, June 1
Select Thursdays Each Month
Explore the Bullock Museum, see demonstrations, and participate in hands-on activities at drop-in activity stations with STEM experts from Central Texas Discover Engineering.
2016-2017 dates: October 20, November 17, December 15, January 19, February 16, March 9, April 20, May 18 Support for the Bullock Museum's exhibitions and education programs provided by the Texas State History Museum Foundation.
A wrist fracture is a break in one or more of the bones in the wrist. The wrist is made up of the two bones in the forearm, called the radius and the ulna, plus 8 carpal bones, which lie between the end of the forearm bones and the bases of the fingers. The most commonly fractured carpal bone is called the scaphoid or navicular bone. This fact sheet focuses on fractures of the carpal bones of the wrist; wrist fractures of the radius, often called Colles' fracture, are covered on a separate sheet.
A wrist fracture is caused by trauma to the bones in the wrist. Trauma may be caused by:
- Falling on an outstretched arm
- A direct blow to the wrist
- A severe twist of the wrist
Factors that increase your chance of a wrist fracture include:
A wrist fracture may cause:
- Swelling and tenderness around the wrist
- Bruising around the wrist
- Limited range of wrist or thumb motion
- Visible deformity in the wrist
Your doctor will ask about your symptoms, physical activity, and how the injury occurred. The injured area will be examined. Imaging tests assess the bones, surrounding structures, and soft tissues. This can be done with:
Proper treatment can prevent long-term complications or problems with your wrist. Treatment will depend on how serious the fracture is, but may include:
Extra support may be needed to protect, support, and keep your wrist in line while it heals. Supportive steps may include a splint or cast to immobilize the injury. Some fractures cause pieces of bone to separate; your doctor will need to put these pieces back into their proper place. This may be done:
- Without surgery: you will have anesthesia to decrease pain while the doctor moves the pieces back into place
- With surgery: pins, screws, plates, or wires may be needed to reconnect the pieces and hold them in place
Children's bones are still growing at an area of the bone called the growth plate.
If the fracture affected the growth plate, your child may need to see a specialist. Injuries to the growth plate will need to be monitored to make sure the bone can continue to grow as expected.
The following medications may be advised:
- Over-the-counter medication to reduce inflammation and pain
- Prescription pain medication
Check with your doctor before taking nonsteroidal anti-inflammatory drugs, such as ibuprofen or aspirin. Note: Aspirin is not recommended for children or teens with a current or recent viral infection because of the risk of Reye syndrome. Ask your doctor which medications are safe for your child. You may be referred to physical therapy or rehabilitation to start range-of-motion and strengthening exercises.
To help reduce your chance of a wrist fracture:
- Do not put yourself at risk for trauma to the bone.
- Always wear a seatbelt when driving or riding in a car.
- Do weight-bearing and strengthening exercises regularly to build strong bones.
- Wear proper padding and safety equipment when participating in sports or activities.
To help reduce falling hazards at work and home, take these steps:
- Clean spills and slippery areas right away.
- Remove tripping hazards such as loose cords, rugs, and clutter.
- Use non-slip mats in the bathtub and shower.
- Install grab bars next to the toilet and in the shower or tub.
- Put in handrails on both sides of stairways.
- Walk only in well-lit rooms, stairs, and halls.
- Keep flashlights on hand in case of a power outage.
- Reviewer: Warren A. Bodine, DO, CAQSM
- Review Date: 08/2015
- Update Date: 09/30/2013
I was looking through a solution someone on here posted to a question and I'm not sure I follow... Can someone please explain the highlighted step and how you'd go on from there?
(Original post by Horizontal 8) STEP I question 3
Hence no solutions arise from this. So we use to find the other angles. Cos x is negative for when considering 0 < x < 2pi. Therefore the angles that we want are: And this is equal to
hmm I still don't understand why you wouldn't just use the cos x instead of cos(-x) to work through the remainder of the question???
oh of course! got it, thank you
(Original post by z0tx) I think it is because the interval you are dealing with in the equation is . The absolute value gives two possibilities. Since , you rule it out and use the other (positive) value, which is
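The actual equation from the thread is not reproduced here (the rendered maths was lost), so a hypothetical value can stand in to illustrate the sign argument the posters are making. Suppose the working reduced to cos x = -1/2 on 0 < x < 2pi; cosine is negative in the second and third quadrants, so both solutions come from the reference angle pi/3:

```latex
\cos x = -\tfrac{1}{2}, \qquad 0 < x < 2\pi
\quad\Longrightarrow\quad
x = \pi - \tfrac{\pi}{3} = \tfrac{2\pi}{3}
\quad\text{or}\quad
x = \pi + \tfrac{\pi}{3} = \tfrac{4\pi}{3}.
```

This is also the shape of z0tx's point: when an absolute value yields two candidate values for cos x, the candidate whose sign is inconsistent with the interval is discarded, and the remaining value fixes the two angles.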
While China is now one of Africa's most important trading partners, the reverse is not true. Africa represents less than 5% of China's global trade, most of it in commodities that can be sourced elsewhere. Africa barely registers in terms of the value of its trade compared with China's major trading partners in Europe, the US, and the Middle East. Because China is now so economically important to Africa, many people believe the reverse must also be true. It's not. Not a single African country is on the list of China's top 10 trading partners. While Africa is no doubt important to China for a number of different reasons, trade is not the dominant one. Africa-China scholar Daouda Cissé, an independent researcher based in Montreal, places the continent's trading relationship with China in its proper context. For two decades or so, the relationship between African countries and China has grown, and cooperation has been increasing in various areas, including media, culture, and education. While China is Africa's largest trading partner, Africa is not China's main trading partner. In fact, Africa lags far behind the European Union (EU), the US, and the Association of Southeast Asian Nations (ASEAN) countries, which are China's largest trading partners. China's status as Africa's largest trading partner depends largely on the volume of China's exports to Africa. When one looks at African exports alone, Africa's largest trading partners are the EU and the US. However, Africa-Asia trade (particularly fuelled by China) is growing, and Africa's imports from Europe are declining while its imports from Asia are increasing. This piece contributes to debunking the myth that Africa is China's largest trading partner, often relayed in academia, the media, policy environments, etc., and explores to what extent the Forum on China-Africa Cooperation (FOCAC) could enhance Africa-China trade.
FOCAC serves as a dialogue platform between African and Chinese officials to deepen economic, political and diplomatic cooperation between China and African countries. Areas of cooperation include trade, investments, education, development assistance, tourism, etc. While several meetings and fora are organized to foster and deepen partnerships between African countries and China, economic negotiations represent the major agenda of the FOCAC meetings. Trade, investments and aid are at the heart of the meetings between African and Chinese officials. China’s interest in enhancing its Outward Foreign Direct Investment (OFDI) and foreign trade meets Africa’s willingness to ‘look east’ and to diversify its global partnerships. China concentrates on investing and selling abroad in order to promote its economic development through investments and trade, not least with African countries. Besides growing investments in other sectors (telecommunications, manufacturing, media, insurance, among others), China mainly invests in resource-rich African countries through infrastructure projects (pipelines, refineries, smelters and so on). These investments enable those countries to enhance resource exploration, exploitation and production. But such investments also contribute to securing China’s energy needs. As for trade, China’s production surplus over the years, fuelled by labour and capital intensive industries, requires the exploration of overseas markets. Africa’s lack of manufacturing industries and need for finished products coincides with China’s market-seeking motivation to enable its companies to sell products made in China to African countries. According to China’s General Administration of Customs data on merchandise trade published on 21 August 2015, China’s total merchandise trade volume with Africa from January to July 2015 amounts to US$101.37 billion, comprised of US$61.69 billion in exports to Africa and US$ 39.67 billion in imports from Africa. 
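The asymmetry in those customs figures can be made explicit with a quick arithmetic check, using the article's rounded January–July 2015 numbers:

```python
# Jan-Jul 2015 merchandise trade figures cited from China's General
# Administration of Customs, as rounded in the text (US$ billions).
exports_to_africa = 61.69
imports_from_africa = 39.67

# China sold substantially more to Africa than it bought from it.
surplus = exports_to_africa - imports_from_africa
print(f"China's surplus with Africa: US${surplus:.2f} billion")
```

The roughly US$22 billion surplus in China's favour underlines the point made above: the trade relationship is driven largely by Chinese exports of finished goods rather than by African exports to China.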
While exports contribute to growth, Africa's exports, strongly based on resources, have not enabled sustainable development across the continent.
FOCAC: Changing Africa-China trade patterns?
China's trade and investments with African countries strongly shape Africa-China relations. Yet Africa's investments and trade with China raise questions about Africa's economic interests in China. Africa's trade remains based on the export of resources (and some agricultural products), while African countries import manufactured goods. In the growing Africa-China relationship, few African countries (Nigeria, South Africa, Mauritius, and the Seychelles) have explored the economic potential that China offers in terms of setting up businesses as well as diversifying their exports and export destinations. While the world's largest companies and many small and medium-sized enterprises have set up businesses in China, despite its challenges as a business environment, very few African companies have tapped into China. That said, a number of African entrepreneurs have established businesses in China, but these mostly engage in exporting Chinese manufactured products to Africa; very few diversify their businesses by importing African products into the Chinese market. Business forums are organized on the margins of the FOCAC official meetings in order to foster business ties between Chinese and African companies. But very often, when agreements are signed for joint ventures, they serve the interests of Chinese companies and entrepreneurs, who bring in more capital to set up businesses in Africa. The opposite – where African businesses expand into Chinese markets – is much rarer, despite the massive opportunities offered by the Chinese markets.
AHMEDABAD: Data from the Central Pollution Control Board (CPCB) show that Gujarat, Maharashtra, and Andhra Pradesh contribute around 80 per cent of the hazardous waste, including heavy metals, in the country. This was stated by the ministry of environment and forests (MoEF) in an official release. The ministry further stated that due to rapid industrialization and a consumerist lifestyle, anthropogenic sources of environmental pollution have also increased. Pollution occurs both at the level of industrial production and in the end use of products and run-off. These toxic elements enter the human body mostly through food and water, and to a lesser extent through inhalation of polluted air, use of cosmetics, drugs, and poor-quality herbal formulations. The MoEF said that Vadodara was found to be highly polluted with chromium (Cr), which is used in mining, industrial coolants, chromium salts manufacturing, and leather tanning. Apart from Vadodara, chromium pollutants were found in Ranipet in Tamil Nadu, Kanpur in Uttar Pradesh, and Talcher in Orissa. The MoEF also stated that lead (Pb) pollution was found in Vadodara. Lead is released by industries such as lead-acid batteries, paints, e-waste, smelting operations, coal-based thermal power plants, ceramics, and the bangle industry. The MoEF note also stated that in India, 19 out of 35 states and Union territories, Gujarat among them, have ground water highly contaminated with fluoride. The MoEF further said that in states like Andhra Pradesh, Gujarat, and Rajasthan, 70 to 100 per cent of districts contain high fluoride levels in food and water, and that public awareness of fluoride contamination is needed. Dental products and anti-depressant and anti-cholesterol drugs used for long-term treatment are also important sources of fluoride.
From Publishers Weekly
In ancient times war was universally held to be a natural condition, so Rome's claim to fight only just wars, hypocritical though it may have been, reflected a genuine conflict of values. University of Cambridge historian Finley brings to these reflections on ancient history an insistence that we avoid imposing our own preconceptions on the way things actually were. His short polemic reminds us that primary sources are inexact, archeological evidence is scanty, statistics are suspect, and that the ancients liked to invent; Thucydides' chronicles notwithstanding, we will never be certain what Pericles told the Athenian assembly in 430 B.C. Finley knocks sociologist Max Weber for putting too much emphasis on Greek demagogues' charisma; rabble-rousers' platforms and promises are what swayed the masses, he maintains. He asserts that it is up to each historian to ask the right questions and supply a meaningful conceptual framework, and this underlying credo gives these scholarly essays point and pith. Copyright 1986 Reed Business Information, Inc.
"The most influential ancient historian of our time... Indisputably the most valuable writing on ancient history since 1945." – New York Review of Books
10 Awesome Printable Maps for August 2018 – A map is a symbolic depiction emphasizing relationships between elements of a space, such as objects, regions, or themes. Most maps are static, fixed to paper or another durable medium, while some are interactive or dynamic. Although most commonly used to portray geography, maps can represent any space, real or imagined, without regard to scale or context, as in mind mapping, DNA mapping, or computer network topology mapping. The space being mapped may be two-dimensional, such as the surface of the earth; three-dimensional, such as the interior of the earth; or a more abstract space of any dimension, such as arises in modeling phenomena with many independent variables. Although the earliest known maps are of the sky, geographical maps of territory have a very long tradition and have existed since prehistoric times. The word "map" comes from the medieval Latin Mappa mundi, in which mappa meant napkin or cloth and mundi the world. "Map" thus became the shortened term referring to a two-dimensional representation of the surface of the world. Road maps are probably the most widely used maps today, and form a subset of navigational maps, which also include aeronautical and nautical charts, railroad network maps, and hiking and bicycling maps. In terms of volume, the largest number of drawn map sheets is probably produced by local surveys, carried out by municipalities, utilities, tax assessors, emergency services providers, and other local agencies.
Many national surveying projects have been carried out by the military, such as the British Ordnance Survey: a civilian government agency, internationally renowned for its detailed work. In addition to location information, maps may also be used to portray contour lines indicating constant values of elevation, temperature, rainfall, etc.
During the late 17th Century, a succession of Pacific plundering pirates parked at the shores of the Galapagos Islands, bringing Norway rats and black rats, which were to have a devastating effect on the islands' wildlife for years to come. For centuries these new inhabitants lorded over the islands, eating turtle eggs, sullying the place with disease, and bullying rare birds into obscurity. That was until recently, when scientists decided that enough was enough. Mobilising the Galapagos National Park Service and Charles Darwin Foundation, the scientists launched a "full-scale assault" on the dirty assailants, dumping roughly ten tonnes of poison-soaked biscuits from helicopters in two daring flyovers. The poison, laced into an unnamed brand of biscuit, allegedly entices the rats while repelling other rare and endangered inhabitants, such as sea lions, marine iguanas, turtles, and birds. While the mission has been labelled a "cavalier biscuit fiasco", it has already achieved a measure of success, with monitors spotting dead rats on the rocky terrain of several islands. Felipe Cruz, director of technical assistance at the foundation, says the rats' 'party' could prove catastrophic to this unique ecosystem, should it be allowed to persist. To date, the rodents have endangered roughly 50 bird species, including the rare Galapagos Petrel. The biscuit-bombed islands will continue to be monitored over the next two years, before plans get underway to rid black rats from the island of Pinzon. Here the rats have been sucking the eggs of the Galapagos giant tortoise dry, restricting its breeding for over a century. While the horizon appears significantly brighter for the endangered residents of this Pacific archipelago, zoologist and rodent behavioural specialist, Dr Dan Pinstle, has labelled the entire operation an "anthropocentric disgrace". Dr Pinstle likens 'egg stealing' and 'bird bullying' to shopping for breakfast and courting young ladies on a typical island holiday.
He says all the rodents were doing was capitalising on a delightful destination when they were bombarded with ludicrous amounts of death-soaked biscuits, adding: how would you like it?
Transcript of John Dewey

Born October 20, 1859 in Burlington, Vermont. Attended traditional public school. Studied at the University of Vermont, where he was exposed to evolutionary theory and the theory of natural selection; Dewey's personal theories began to take root. Taught 3 years of elementary and high school. Earned his doctorate from Johns Hopkins University in 1884 and devoted the next decade to positions at the Universities of Michigan and Minnesota.

The Laboratory School. University of Chicago, 1894: as head professor of the department of philosophy, Dewey began striving to apply his pedagogical ideas, based on the Unfoldment Theory of Rousseau, Pestalozzi, and Froebel. He founded and directed the University Laboratory School, opened in 1896 with 16 pupils and 2 teachers, where he demonstrated, tested, and criticized his theories.

Dewey published several influential works on education: My Pedagogic Creed (1897), The School and Society (1900), and The Child and the Curriculum (1902), reflections on the laboratory school experiments. These works all expressed a similar theme: education was in need of reform.

Dewey's Criticism of the Traditional School. "Children should not talk to one another; all communication should be between the teacher and the class" (Tyler, 1975). Teachers were authoritarians. Desks were arranged in rows. Students were passive and well-drilled. Curriculum was predetermined. Emphasis was
placed on rote memory of facts. Students acquire information, not the power to use knowledge. Students understand individual concepts, but can't deal with real-world problems in their own lives.

Experience and Education (1938). Dewey argued this traditional form of education would be ineffective for students entering a society that would soon be dominated by industrial technology. Schools should produce productive members of society.

The New School. "Give the pupils something to do, not something to learn; and if the doing is of such a nature as to demand thinking; learning naturally results." -John Dewey

Dewey believed the new school system should: be based on equality and democracy; focus on real-world problems; develop critical thinking and problem-solving skills; give students small-group experiences to stimulate social learning; and train students to work cooperatively and collaboratively.

Inquiry Learning. What is inquiry? The process of being open to understanding the world; a study into a worthy question, issue, problem, or idea. The work is authentic, real work that people in a community would tackle. It involves serious engagement and investigation and requires the active creation and testing of new knowledge.

The Teacher's Role in Inquiry Learning; The Student's Role in Inquiry Learning.

A successfully functioning democracy requires that its citizens develop habits that enable them to communicate, to learn, to compromise, to respect others, and to tolerate the variety of norms and interests that exist in life.
The Student's Role: take control of learning; identify problems or issues of interest; set learning goals; develop organization and self-management skills; participate in investigations; form and test hypotheses; conduct field work; reflect on problems and thinking processes; share ideas and information; support, challenge, and respond to peers; negotiate and discuss; create presentations.

"The teacher is not in the school to impose certain ideas or to form certain habits in the child, but is there as a member of the community to select the influences which shall affect the child and to assist him in properly responding to these influences." -John Dewey, 1897

The Teacher's Role: facilitate learning; guide and assist students; be a motivator and a supporter; structure the classroom as a democratic society; provide students with small-group learning opportunities; create an environment that fosters open communication; encourage students to solve problems that arise; focus on students' interests and choices.

What Inquiry Learning is Not: a method for teaching any one subject; a linear sequence of tasks; a bunch of isolated projects; meant to be implemented according to a script.

References: Chambliss, J. J. (1996). Philosophy of education: An encyclopedia (pp. 146-153). New York: Garland Press. Dewey, J. (1938). Experience and education. New York: Macmillan. Dewey, J. (1938). Logic: The theory of inquiry. New York: Henry Holt and Company. Dewey, J. (1969). In J. Boydston (Ed.), The collected works of John Dewey: The middle works (Vol. 1, pp. 56-67). Carbondale: Southern Illinois University Press. Tanner, D., & Tanner, L. (1995). Curriculum development: Theory into practice (3rd ed.). Columbus: Prentice Hall. Tracey, D., & Morrow, L. (2012). Lenses on reading: An introduction to theories and models (2nd ed.). New York: The Guilford Press. Tyler, R. W. (1949). Basic principles of curriculum and instruction. Chicago: University of Chicago Press.

Dewey's Lasting Legacy. Final Thoughts. -John Dewey: "Education is a social process. Education is growth.
Education is life itself." Dewey retired from active teaching in 1930 and published the final version of his theory in 1938 ("Logic: The Theory of Inquiry"). He died June 1, 1952, at age 92. Dewey's theory has inspired numerous modifications to traditional classrooms and curricula for decades. Teachers across the country continue to immerse their students in problem-based and social learning experiences today.
Arundinaria gigantea (Walter) Muhlenberg. Phen: Apr-Jul. Hab: Bottomland and riparian forests, lower slopes and bluffs along streams, seeps, stream banks, and extending into less mesic and even dry settings on circumneutral or alkaline soils over limestone or dolomite, or in loess deposits along the Mississippi River. Dist: S. OH south to FL and e. TX. Origin/Endemic status: Endemic Taxonomy Comments: There has been much disagreement over the recognition of one, two, or several taxa of cane in the Southeastern United States. This species reaches heights of 6-7 (-10) m and is supposed to flower only once every 40-50 years. A. macrosperma Michaux is controversial; it has sometimes been considered to be a synonym of A. gigantea or to represent hybridization or introgression between A. gigantea and A. tecta. Synonymy: = F, FlGr, FNA24, HC, Il, K3, K4, Mo1, NcTx, S, Tn, Va, WV, Triplett, Weakley, & Clark (2006), Tucker (1988); = Arundinaria gigantea (Walter) Muhl. ssp. gigantea – ETx1, K1, McClure (1973); = Arundinaria macrosperma Michx. – Ward (2009c); < Arundinaria gigantea (Walter) Muhl. – C, GW1, RAB, WH3; > Arundinaria gigantea (Walter) Muhl. ssp. gigantea – Judziewicz et al (2000); > Arundinaria gigantea (Walter) Muhl. ssp. macrosperma (Michx.) McClure – Judziewicz et al (2000)
This article is about a particular group, viz. a group unique up to isomorphism. This particular group is the smallest (in terms of order) non-T-group. This particular group is a finite group of order: 8

Definition by presentation

The dihedral group D8, sometimes called D4, also called the dihedral group of order eight or the dihedral group acting on four elements, is defined by the following presentation:

⟨a, x | a^4 = x^2 = e, x a x^(-1) = a^(-1)⟩

The dihedral group D4 is defined as the group of all symmetries of the square (the regular 4-gon). This has a cyclic subgroup comprising rotations (the cyclic subgroup generated by a) and has four reflections, each being an involution: reflections about lines joining midpoints of opposite sides, and reflections about diagonals. There are five conjugacy classes of elements of the dihedral group:
- The identity element
- The rotation by π, which is given as a^2 in the presentation
- The two-element conjugacy class comprising the rotations by ±π/2, namely a and a^3
- The two-element conjugacy class comprising the two reflections: x, xa^2
- The two-element conjugacy class comprising the other two reflections: xa, xa^3
Under the equivalence relation given by automorphisms, the last two conjugacy classes merge into one. There are thus four equivalence classes under the action of automorphisms, of sizes 1, 1, 2 and 4. The center of the dihedral group is the two-element subgroup comprising the identity and a^2 (rotation by π). The commutator subgroup of the dihedral group is the same as its center. In particular this shows that the dihedral group is a group of nilpotency class two. The Frattini subgroup coincides with the center and commutator subgroup. This dihedral group is thus an extraspecial group. The center is the unique minimal normal subgroup, and hence is also the socle.
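The facts above (order 8, five conjugacy classes of sizes 1, 1, 2, 2, 2, and a center equal to the commutator subgroup) are easy to verify by brute force. A minimal sketch in plain Python, not part of the original article, representing D4 as permutations of the square's vertices 0..3, with a the 90° rotation and x a reflection as in the presentation:

```python
from itertools import product

def compose(p, q):
    # permutation composition: (p*q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(len(q)))

def inverse(p):
    inv = [0] * len(p)
    for i, image in enumerate(p):
        inv[image] = i
    return tuple(inv)

# Generators acting on the square's vertices 0..3:
# a = rotation by 90 degrees, x = reflection across a diagonal.
e = (0, 1, 2, 3)
a = (1, 2, 3, 0)
x = (0, 3, 2, 1)

# Generate the whole group by closing {e, a, x} under composition.
G = {e, a, x}
changed = True
while changed:
    changed = False
    for g, h in list(product(G, repeat=2)):
        gh = compose(g, h)
        if gh not in G:
            G.add(gh)
            changed = True

# Center: elements that commute with everything.
center = {g for g in G if all(compose(g, h) == compose(h, g) for h in G)}

# Conjugacy classes: orbits of G acting on itself by conjugation.
classes = []
seen = set()
for g in sorted(G):
    if g in seen:
        continue
    cls = {compose(compose(h, g), inverse(h)) for h in G}
    classes.append(cls)
    seen |= cls

# All commutators g h g^-1 h^-1.
commutators = {compose(compose(g, h), compose(inverse(g), inverse(h)))
               for g, h in product(G, repeat=2)}

print(len(G), len(center), sorted(len(c) for c in classes), commutators == center)
# prints: 8 2 [1, 1, 2, 2, 2] True
```

The same brute-force check also confirms the claim that D4 has nilpotency class two: the commutator subgroup it finds is exactly the two-element center.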
A memorial for James Cook - Sarah Read 27 August - 23 September 2018 It's 250 years since James Cook first set sail for the South Pacific. Cook's voyages were unprecedented feats of navigation, exploration and scientific documentation that greatly expanded Western knowledge of the globe. Charles Darwin's take: "Cook gave us a Southern Hemisphere." For the people of the island nations, Cook's voyages presaged loss and colonisation. The enforcement of Western systems altered the way they had lived for thousands of years, to the net detriment of their cultures, languages, values and identities. As the repercussions of globalisation intensify, its negative effects are felt disproportionately in smaller, less technologically advanced countries: * By 2015, the population of Pacific Island nations had risen to the top of the worldwide obesity scale, with 70-75% of deaths attributable in part to non-communicable diseases (www.healthcareglobal.com). ** In 2017, Henderson Island in the South Pacific was identified as the most plastic-polluted place on Earth (earthsky.org). Sincere thanks to Viv Atkinson and Caroline Thomas sarahread.com | firstname.lastname@example.org
Although plant neurobiology is not a new field, new findings in it are always fascinating. In this case, Carolin Seele (Leipzig University) and Stefan Meldau (Max Planck Institute for Chemical Ecology) have discovered that young beech trees (Fagus sylvatica) and maple trees (Acer pseudoplatanus) can protect themselves from roe deer, which love to eat their new shoots and buds. This amazing discovery found that the trees can determine whether a shoot or bud was eaten by roe deer or damaged by other circumstances. The trees are able to detect roe deer saliva, which triggers an increase in their production of salicylic acid. This hormone then signals an increase in the production of specific tannins, which the deer do not like to eat. Not only that, but growth hormone is also increased to compensate for the lost bud or shoot. If the damage is by other means, such as a storm, the trees only produce a growth hormone. Many years ago people found that if they talked to their plants they would grow better; maybe there was something to this. There are so many wonderful things yet to be discovered about plants. What a fun topic!
The Lost Years of Jesus Palaniappa at AOL.COM Sat Feb 21 04:52:38 UTC 1998 You can check out the web site: http://www.ascension-research.org/lyj-rev.html for information about a book entitled "Lost Years of Jesus: On the Discoveries of Notovitch, Abhedananda, Roerich, and Caspari" by Elizabeth Clare Prophet regarding the very subject. "Well-written and provocative. The research was not only thorough and accurate but very, very careful." -- ROBERT S. RAVICZ, Ph.D., Professor of Anthropology, California State University, Northridge "The Autobiography of Jesus of Nazareth and the Missing Years" by Richard G. Patton seems to be another book to look into. Whether Jesus came to India/Tibet or not, a serious scholar D. D. Kosambi, who was no jingoist, says the following about the westward influence of Buddhism in his book "Ancient India". "The religion not only influenced Manichaeanism but must earlier have helped the formation of Christianity. The scholars who wrote the Dead Sea scrolls, though good Jews, show peculiarities that appear to be of Buddhist origin. Their practice of living in a monastery almost on top of a necropolis would be repulsive to Judaism, though quite agreeable to Buddhists. The 'Teacher of Righteousness' mentioned in the documents of this (probably Essene) Palestinian foundation bears the precise title of the Buddha. It is not, therefore, surprising that the Sermon on the Mount should sound more familiar to Buddhists than to the followers of the Old Testament who first heard it preached. Some of the Christ's miracles such as walking on the water were current much earlier in literature about the life of the Buddha. For that matter, the Christian saint's legend that goes under the title 'Barlaam and Josaphat' is a direct adaptation of the Buddha's life-story". Kosambi's book was published in 1965. Does anybody know if the research on Dead Sea scrolls since then supports or opposes his conclusions?
The article considers the political aspect of land reform in the Republic of Zimbabwe. The problem of land reform has been one of the crucial issues in the history of this African country, which celebrated 40 years of independence on April 18, 2020. In recent decades, it has constantly been in the spotlight of political and electoral processes. The land issue was one of the key points of the political program from the very beginning of Robert Mugabe's rule in 1980. The political aspect of land reform began to manifest itself clearly with the growth of the opposition movement in the late 1990s. In 2000–2002 the country implemented the Fast Track Land Reform Program (FTLRP), the essence of which was the compulsory acquisition of land from white owners without compensation. The expropriation of white farmers' lands in the 2000s led to a serious reconfiguration of land ownership, which helped keep the ruling party, the Zimbabwe African National Union – Patriotic Front (ZANU–PF), in power. The government carried out its land reform in the context of a sharp confrontation with the opposition, especially with the Movement for Democratic Change (MDC), led by trade union leader Morgan Tsvangirai. The land issue was on the agenda of all the election campaigns (including the elections in July 2018); this fact denotes its politicization, hence the timeliness of this article. The economic and political crisis in Zimbabwe in the 2000s–2010s was the most noticeable phenomenon in the Southern African region. The analysis of foreign and domestic sources allows us to conclude that the accelerated land reform served as one of its main triggers. The practical steps of the new Zimbabwean president, Mr. Emmerson Mnangagwa, indicate that he is aware of the importance of resolving land reform-related issues for further economic recovery. At the beginning of March 2020, the government adopted new regulations defining the conditions for compensation to farmers.
On April 18, 2020, speaking on the occasion of the 40th anniversary of the independence of Zimbabwe, Mr. E. Mnangagwa stated that the land reform program remains the cornerstone of the country's independence and sovereignty. Keywords: land issue, land reform, ZANU–PF, land redistribution, opposition, crisis, sanctions, droughts, Mr. E. Mnangagwa, compensation
Physicist Stephen Hawking has warned humanity that we probably only have about 1,000 years left on Earth, and the only thing that could save us from certain extinction is setting up colonies elsewhere in the Solar System. "We must … continue to go into space for the future of humanity," Hawking said in a lecture at the University of Cambridge this week. "I don't think we will survive another 1,000 years without escaping beyond our fragile planet."

Artificial intelligence (AI)

The fate of humanity appears to have been weighing heavily on Hawking of late – he's also recently cautioned that artificial intelligence (AI) will be "either the best, or the worst, thing ever to happen to humanity".

Powerful autonomous weapons

Given that humans are prone to making the same mistakes over and over again – even though we're obsessed with our own history and should know better – Hawking suspects that "powerful autonomous weapons" could have serious consequences for humanity.

Colonies on Mars

Hawking has estimated that self-sustaining human colonies on Mars are not going to be a viable option for another 100 years or so, which means we need to be "very careful" in the coming decades. Without even taking into account the potentially devastating effects of climate change, global pandemics brought on by antibiotic resistance, and the nuclear capabilities of warring nations, we could soon be sparring with the kinds of enemies we're not even close to knowing how to deal with. Late last year, Hawking added his name to a coalition of more than 20,000 researchers and experts, including Elon Musk, Steve Wozniak, and Noam Chomsky, calling for a ban on anyone developing autonomous weapons that can fire on targets without human intervention.

Elon Musk: Our robots are perfectly submissive now, but what happens when we remove one too many restrictions? What happens when you make them so perfect, they're just like humans, but better, just like we've always wanted?
Imagine we're dealing with unruly robots that are so much smarter and so much stronger than us, and suddenly, we get the announcement – aliens have picked up on the signals we've been blasting out into the Universe and made contact. Great news, right? Well, think about it for a minute. In the coming decades, Earth and humanity aren't going to look so crash-hot. We'll be struggling to mitigate the effects of climate change, which means we'll be running out of land to grow crops, our coasts will be disappearing, and anything edible in the sea will probably be cooked by the rapidly rising temperatures. If the aliens are aggressive, they'll see a weakened enemy with a habitable planet that's ripe for the taking. And even if they're non-aggressive, we humans certainly are, so we'll probably try to get a share of what they've got, and oops: alien wars. As Hawking says in his new online film, Stephen Hawking's Favourite Places, "I am more convinced than ever that we are not alone," but if the aliens are finding us, "they will be vastly more powerful and may not see us as any more valuable than we see bacteria". Clearly, we need a back-up plan, which is why Hawking's 1,000-year deadline to destruction comes with a caveat – we might be able to survive our mistakes if we have somewhere else in the Solar System to jettison ourselves to.
If this news isn't shocking enough, you'll be disheartened to learn that up to 900 wild bison in Yellowstone National Park will soon be slaughtered, as they are "considered a threat to the livestock industry." A recent article in The Wall Street Journal relays that government agencies will soon take action to kill off between 500 and 900 bison. The reason? They "compete with cattle for grazing space on public lands outside the park." It is stated: "Government agencies aim to kill or remove up to 900 wild bison from Yellowstone National Park this winter as part of a continuing effort to reduce the animals' annual migration into Montana by driving down their population." At present, around 5,000 buffalo roam the Wyoming park. On February 15th, however, that number will be reduced. The annual bison cull is carried out for two reasons: to avoid overwhelming the land, and to protect farmers' beef cattle from a disease called brucellosis, says Stefanie Wilson, a lawyer for the Animal Legal Defense Fund (ALDF). "Each year since 2000, the National Park Service and other agencies … determine a number of buffalo that they'd ideally like to 'remove' from the Yellowstone population," Wilson told The Dodo. Brucellosis is a bacterial infection that can cause fever, joint pain, and fatigue. It is often passed from livestock to humans via unpasteurized milk and other dairy products. However, according to the ALDF's lawyer, there has never been a documented case of a bison passing the disease to a cow or bull. "It is only theoretically possible based on a lab test," she said. Regardless, the annual cull is set to take place on February 15th, 2016, to appease agitated farmers in the neighboring state of Montana. Buffalo Field Campaign is one of the organizations working to stop the barbaric slaughter of hundreds of wild bison.
Apparently, according to facts cited by The Dodo, there are no cattle in West Yellowstone for most of the year; between November and June, cattle are trucked out of the area and into milder temperatures, so there is no risk of transmission. Other wildlife will also be disrupted by the process of herding the wild bison and their calves to holding facilities to be slaughtered. "During a typical hazing operation, the … agencies ride noisy, smelly snowmobiles, fly helicopters, run horses, and ride ATVs throughout sensitive habitat important to numerous wildlife species," the Buffalo Field Campaign wrote. When they flee from the noise, many animals — elk, moose, swans — use up energy they can't afford to spend during a cold winter. Ethically, the treatment of the wild animals is also a concern. The ALDF is presently representing wildlife advocate Stephany Seay and journalist Chris Ketcham in a case against the National Park Service. It is being noted that during the event, the park – public land – will be closed to all visitors. The ALDF and animal rights advocates want to know exactly what will happen to the bison when the capture-and-kill operation takes place — especially since bison are prone to injuring each other in extremely close quarters. "The federal government manages the National Parks, and the wildlife that reside in those refuges, in trust for the American public and future generations." What are your thoughts on the annual bison cull? Are you for it or against it? Comment your thoughts below and please share this news! This article (In February, 900 Bison Will Be Slaughtered For A Very Disturbing Reason…) is free and open source. You have permission to republish this article under a Creative Commons license with attribution to the author and TrueActivist.com
Distrust of the medical system has been a barrier to care for African-Americans since long before the AIDS epidemic. Blacks have the highest mortality rates due to heart disease, diabetes, and some cancers, partially because of their distrust of medical providers and the poor treatment they experience when they get into the system. There is also the lingering legacy of mistreatment by researchers—particularly during the Tuskegee Syphilis Experiment—which left Blacks wary of medical programs and clinical trials.34,35 Black immigrant populations face additional language and cultural barriers to care. Some may fear arrest due to immigration issues, while others are reticent to seek care due to the ongoing association of HIV with the Caribbean and Africa. Indeed, simply being Haitian was labeled one of the original four risk factors for AIDS in the early 1980s—known as the "4H club"—along with hemophilia, heroin addiction, and homosexuality.36,37 The continued stigma and mistrust of health care professionals threatens the well-being of African-American communities nationwide. Indeed, Blacks tend to test for HIV later in their infection than Whites, and many, like Delores, are diagnosed with both HIV and AIDS, or progress to AIDS within a year of HIV diagnosis. Due to the lapse in time from seroconversion to the development of symptoms, African-Americans diagnosed with HIV in their 20s or even 30s may have actually become infected in their teens. During this time, they may have unknowingly transmitted the virus to others. Entering HIV primary care later in their disease progression also could undermine their health outcomes and shorten their lifespan. HRSA Responds to HIV Epidemic Among African-Americans In the early days of the AIDS epidemic, care was virtually nonexistent. Many patients died in hospital emergency wards.
Those who were admitted encountered terrified health care workers and indifferent medical doctors who refused to touch them without wearing HAZMAT “moon suits.” In the midst of this suffering, a loosely connected system of community-based organizations, providers, activists, patients, friends, parents, and partners formed, providing the first real network of health and support services for PLWHA.38 Cultivating a supportive and active community is crucial to improving health outcomes for African-Americans living with HIV/AIDS. HRSA recognized and harnessed the power of this community-based response to HIV/AIDS and funded the first HIV-specific Federal health initiatives, known as the AIDS Service Demonstration Grants, in 1986. Launched in four of the Nation’s most heavily impacted cities—New York, San Francisco, Los Angeles, and Miami—the grants provided much needed financial and political capital to those delivering HIV care on the ground. They also legitimized and heralded the community-based response’s ability to effectively “provide the spectrum of needed services for people with HIV infection and its complications and provide appropriate alternatives to inpatient care.” For the first time, there was a sense of compassion and support for PLWHA among government officials, the media, and the general public.39
The Atlantean language was created for the film Atlantis: The Lost Empire by Marc Okrand, who also created the Klingon and Vulcan languages for the Star Trek films and television shows. Okrand worked with John Emerson, a designer at Disney, to produce an alphabet for the language. The language is spoken and written by the people of Atlantis in the film and is integral to the plot. Okrand based the Atlantean language on a hypothetical reconstruction of the language spoken by the early Indo-Europeans, using sounds common to modern Indo-European languages and some not found in any of them. He wanted it to sound like a real human language, to be easy to speak, and to be unlike English. Sample text: All human beings are born free and equal in dignity and rights. They are endowed with reason and conscience and should act towards one another in a spirit of brotherhood. (Article 1 of the Universal Declaration of Human Rights) Atlantis: The Lost Empire: A young adventurer named Milo Thatch joins an intrepid group of explorers to find the mysterious lost continent of Atlantis. Further details of the Atlantean language, including grammar and an English <> Atlantean dictionary, can be found online.
The gematria decoder is an engaging way to find the numerical value of text in the most well-known gematria ciphers against a supplied alphabet. We can simply convert ordinary words into the most common gematria codes. This is useful for determining the gematria value of any phrase, word, or sentence. It is also possible to convert a word or phrase into its gematria code, so discussion of various topics can proceed without fear of the content being decoded.

What Is the Purpose of the Gematria Decoder? Gematria has been used most frequently in Jewish culture. It can function as a number system, but its most common application is to gain a more spiritual understanding of a religious text; the precise meaning derived from gematria depends on the individual's beliefs. An ancient form of gematria was a fully mature mathematical model that used phrases to encode sums that future readers were expected to discover. The Hebrew word chai (חי), which means "alive", is an example of gematria: in the Mispar Gadol variant, chai has a numerical value of 18, which is why 18 has become a favorite number for many Jewish people.

How they manage to code everything with the Gematria Decoder, and whether numbers are 'good' or 'evil': Long ago, before the English language, gematria was exceptionally important for understanding the hidden meanings in old religious and occult texts, including the Bible, and thus for recognizing God's words and finding the names for him and the angels (and the demons). As Christianity spread and the Catholic Church was established, priests devoted their entire lives to translating documents and comprehending these secrets. And as the English language evolved and the new Gregorian calendar was established (in 1582), everything has been coded using gematria.
Again, priests have devoted their lives to this throughout history. For decoding gematria today, we have powerful software and computers. Those who code media stories have editorial staff and office buildings all over the world, and they take the same approach to their events and operations. You enter 'keywords,' and a software program calculates and returns dates, day spans, and matches with important events such as rituals or astronomical alignments. Of course, you can work in either direction, looking at exact dates and getting back keywords for what type of ritual might be appropriate.
On the question of 'fear mongering' or 'evil' numbers: it is not about whether a number is "good" or "bad." You must determine the message as well as the intentions behind it. The phrase "Work from Home," for example, equals '56.' That does not have to be a bad thing if there are no connections; working from home and spending more time with your family can be beneficial at times. But when that phrase is merged with a scheduled pandemic and phrases such as "Close Schools," "Online E-learning," "Digital Slaves," "Virus Outbreak," and "Anti-Freedom," all of which also have the value of '56,' the phrase "Work from Home" takes on a new meaning: isolation, avoiding other people, and going against our social nature. So, numbers themselves are neutral; they become positive or negative through the words they are associated with. Finally, it comes down to context and message, and most of the time the message is hidden within the gematria. That said, some numbers will appear more 'frequently' in one type of story than in others, simply because several phrases essential to that story add up to that specific number.
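The phrase matching described here is nothing more than summing over a letter-value table. A minimal sketch in Python using the simple English ordinal cipher (A=1 through Z=26) as an illustration; the '56' matches cited above come from other reduction ciphers, which differ only in the table used:

```python
def english_ordinal(phrase):
    """English ordinal gematria: A=1, B=2, ..., Z=26; non-letters are ignored."""
    return sum(ord(ch) - ord('a') + 1 for ch in phrase.lower() if ch.isalpha())

print(english_ordinal("Work from Home"))  # 160 under this particular cipher
```

Swapping in a different value table (a "reduction" cipher, for instance) changes every total, which is why the same phrase can be made to match many different numbers.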
One example is 25 and 52, the matching values of the words 'Enlil' (the Mesopotamian flood deity), 'Flood,' and 'Earth,' which are used on days with 25 and 52 numerology to stage weather disasters through weather modification (climate engineering/weather warfare). Another example is 44, which you should know about. 44 has been used to differentiate and divide people in racist and deceitful ritual practices. It is particularly prevalent when Black people are the focus of a story, much as the number 76 is. 44, for example, was used many times with President Barack Obama: he was number 44, though only the 43rd man to hold the office. Obama's book 'A Promised Land' was released on November 17, 2020, a date with 44 days remaining in the year. And when Obama criticized the GOP on June 7, 2020, it was the first day of his 44th week.
- 44 African-Americans (Septenary)
- 44 black people
- 44 Years of Black History (Septenary)
- 44 deceptions
- Disengagement: 44
- Slave Population: 44
The History of the Gematria Decoder
Gematria was first used by the Jewish people around the 8th century BC, when they assigned numeric values to the Hebrew alphabet. Gematria numbers and values can differ between languages: the Hebrew value for the letter 'A' differs from the English one. Even today, Hebrew gematria is thought to be the world's oldest.
Gematria, Numbers, and Codes
Numerology and gematria are ancient practices, dating back to man's first language. In fact, all languages are coded to varying degrees using gematria, and the majority of modern languages correspond to English. English, the "accepted world language," can be fully decoded with a gematria decoder and is also in sync with the recently adopted Gregorian calendar. Gematria is simply the secret societies' way of communicating, casting spells, and shaping our reality, just as God created the universe with language, by integrating the number, the letter, and the word.
FAQs on the Gematria Decoder
What is the gematria code and how does it work? Gematria is the practice of assigning numeric values to a name, word, or phrase (Hebrew: gematria, plural gematriot). This can be done with a specialized tool such as a gematria decoder.
What does the number 13 symbolize spiritually in various cultures? The number 13 represents a test, suffering, and death. This is why, in many cultures, the number 13 is considered cursed or unlucky.
How do you interpret gematria results? In Hebrew gematria, the first ten letters are assigned numerical values from 1 to 10. The next eight letters increase in steps of ten, from 20 to 90. The final four letters increase in steps of one hundred, from 100 to 400.
What does the number 3 mean in gematria? The Fathers: in gematria, three stands for the three patriarchs, and the Hebrew letter gimel has the value three.
Why is the number three so potent? The number 3 has had a special relevance throughout human history. Pythagoras, who held that the meaning underlying numbers was paramount, considered three the perfect number, representing harmony, wisdom, and knowledge.
- Gematria calculator free
- Gematrinator Calculator Download
- English Gematria Number Meanings
- What is Gematria Chart
- What does Divoc Mean In Hebrew?
Non-Hodgkin lymphoma is a cancer of the lymphatic system. Cancer is a disease in which cells grow in an abnormal way. Normally, new cells develop in a controlled manner to replace old or damaged cells. With lymphoma, white blood cells develop abnormally and grow at an abnormal rate. The lymphatic system is the part of the immune system that helps fight off infections and illnesses, and non-Hodgkin lymphoma can make the body more vulnerable to other illnesses and infections. Normal Anatomy and the Development of Non-Hodgkin Lymphoma All blood cells start as stem cells in the bone marrow. Stem cells then mature into a variety of different blood cell types that have specific functions in the body. Non-Hodgkin lymphoma is an abnormality of a type of white blood cell called lymphocytes. There are different types of lymphocytes, but the main types are:
- B-cell—Makes antibodies that help the body identify foreign substances. The sooner a substance is identified, the sooner the immune system can work on eliminating it.
- T-cell—T-cells have a number of jobs, including destroying invading bacteria and viruses, stimulating an immune response, or slowing an immune response.
- Natural killer (NK)—NK cells defend the body against invading viruses and cancer cells.
The lymphatic system is a network of fluid, vessels, organs, and lymph nodes that carries fluid and immune cells throughout the body. Lymphoid tissues and organs include:
- Lymph fluid—Clear fluid made up of plasma (a blood component that comes from general circulation), lymphocytes, cellular by-products, and proteins.
- Lymph vessels—Fluid from spaces between the cells and other bodily structures is collected by lymph capillaries (microscopic vessels) and moved into larger lymph vessels. Lymph is moved toward the heart by lymphatic and muscular contractions.
The lymph is filtered through lymph nodes and eventually returned to the blood supply by draining into large veins near the collarbone.
- Lymph nodes—Lymphoid tissue that contains lymphocytes and other immune system cells. Lymph nodes are scattered throughout the body in clusters, and lymph vessels pass through them. As lymph passes through, it is filtered for foreign bodies, including cancer cells. Lymph nodes can become swollen or painful when the body is fighting an infection.
- Bone marrow—All blood cells start as stem cells and are formed in bone marrow. Stem cells can mature into a variety of different blood cell types that have specific functions in the body.
- Spleen—Located under the rib cage on the left side of the body. The spleen helps the body fight infection by making lymphocytes and other immune system cells. It filters cellular by-products out of circulation and also removes and destroys old, damaged red blood cells.
- Thymus—Located behind the breastbone in the middle of the chest. The thymus makes T-cell lymphocytes, which stay there until they mature.
- Adenoids and tonsils—Located in the back of the throat. Tonsils make lymphocytes and offer protection against foreign bodies that are inhaled or swallowed.
Lymphatic tissue can also be found throughout the body in the digestive tract, nervous system, and skin. With non-Hodgkin lymphoma, there is an excessive development of abnormal lymphocytes. The cancerous cells are not able to carry out their normal function, and the abnormal lymphocytes can crowd out healthy cells in the lymph nodes, decreasing the number of effective cells and weakening the immune system. Cancerous blood cells also circulate in the blood and lymph systems and can gather in organs such as the spleen, bone marrow, lungs, and liver. Types of Lymphoma Lymphoma is a cancer of a type of blood cell called white blood cells.
Nearly all non-Hodgkin lymphomas develop in a type of white blood cell known as B-cell lymphocytes. Other types of non-Hodgkin lymphoma arise from other white blood cells known as T-cell lymphocytes and NK cells. There are approximately 60 further subtypes of lymphoma, determined by how the cancer cells appear under a microscope, the type of cell the cancer starts in, the presence of specific proteins, and the cells' DNA make-up. These characteristics help determine treatment steps and prognosis. Non-Hodgkin lymphomas are also described by the rate of disease progression:
- Indolent—Slow-growing, often without symptoms. Indolent lymphomas can be managed, but are generally not curable.
- Aggressive—Fast-growing, often with symptoms (sometimes severe). Aggressive lymphomas can be treated and are generally curable.
Lymphomas grow and develop differently, which affects the choice and course of treatment.
- Reviewer: Mohei Abouzied, MD
- Review Date: 03/2015
- Update Date: 03/30/2016
Orthopaedics and Neurosurgery International
6A Napier Road #02-42, Gleneagles Hospital Annex Block, Singapore 258500
Ulnar nerve entrapment occurs when the ulnar nerve in the arm becomes compressed or irritated. The ulnar nerve is one of the three main nerves in your arm. It travels from your neck down into your hand, and can be constricted in several places along the way, such as beneath the collarbone or at the wrist. The most common place for compression of the nerve is behind the inside part of the elbow. Ulnar nerve compression at the elbow is called "cubital tunnel syndrome." Numbness and tingling in the hand and fingers are common symptoms of cubital tunnel syndrome. In most cases, symptoms can be managed with conservative treatments like changes in activities and bracing. If conservative methods do not improve your symptoms, or if the nerve compression is causing muscle weakness or damage in your hand, your doctor may recommend surgery. At the elbow, the ulnar nerve travels through a tunnel of tissue (the cubital tunnel) that runs under a bump of bone at the inside of your elbow. This bony bump is called the medial epicondyle. The spot where the nerve runs under the medial epicondyle is commonly referred to as the "funny bone." At the funny bone the nerve is close to your skin, and bumping it causes a shock-like feeling. Beyond the elbow, the ulnar nerve travels under muscles on the inside of your forearm and into your hand on the side of the palm with the little finger. As the nerve enters the hand, it travels through another tunnel (Guyon's canal). The ulnar nerve gives feeling to the little finger and half of the ring finger. It also controls most of the little muscles in the hand that help with fine movements, and some of the bigger muscles in the forearm that help you make a strong grip. In many cases of cubital tunnel syndrome, the exact cause is not known.
The ulnar nerve is especially vulnerable to compression at the elbow because it must travel through a narrow space with very little soft tissue to protect it. Common Causes of Compression There are several things that can cause pressure on the nerve at the elbow:
- When you bend your elbow, the ulnar nerve must stretch around the bony ridge of the medial epicondyle. Because this stretching can irritate the nerve, keeping your elbow bent for long periods or repeatedly bending your elbow can cause painful symptoms. For example, many people sleep with their elbows bent. This can aggravate symptoms of ulnar nerve compression and cause you to wake up at night with your fingers asleep.
- In some people, the nerve slides out from behind the medial epicondyle when the elbow is bent. Over time, this sliding back and forth may irritate the nerve.
- Leaning on your elbow for long periods of time can put pressure on the nerve.
- Fluid buildup in the elbow can cause swelling that may compress the nerve.
- A direct blow to the inside of the elbow can cause pain, an electric shock sensation, and numbness in the little and ring fingers. This is commonly called "hitting your funny bone."
Some factors put you more at risk for developing cubital tunnel syndrome. These include:
- Prior fractures or dislocations of the elbow
- Bone spurs or arthritis of the elbow
- Swelling of the elbow joint
- Cysts near the elbow joint
- Repetitive or prolonged activities that require the elbow to be bent or flexed
Cubital tunnel syndrome can cause an aching pain on the inside of the elbow. Most of the symptoms, however, occur in your hand.
- Numbness and tingling in the ring finger and little finger are common symptoms of ulnar nerve entrapment. Often, these symptoms come and go. They happen more often when the elbow is bent, such as when driving or holding the phone. Some people wake up at night because their fingers are numb.
- The feeling of "falling asleep" in the ring finger and little finger, especially when your elbow is bent. In some cases, it may be harder to move your fingers in and out, or to manipulate objects. - Weakening of the grip and difficulty with finger coordination (such as typing or playing an instrument) may occur. These symptoms are usually seen in more severe cases of nerve compression. - If the nerve is very compressed or has been compressed for a long time, muscle wasting in the hand can occur. Once this happens, muscle wasting cannot be reversed. For this reason, it is important to see your doctor if symptoms are severe or if they are less severe but have been present for more than 6 weeks. There are many things you can do at home to help relieve symptoms. If your symptoms interfere with normal activities or last more than a few weeks, be sure to schedule an appointment with your doctor. - Avoid activities that require you to keep your arm bent for long periods of time. - If you use a computer frequently, make sure that your chair is not too low. Do not rest your elbow on the armrest. - Avoid leaning on your elbow or putting pressure on the inside of your arm. For example, do not drive with your arm resting on the open window. - Keep your elbow straight at night when you are sleeping. This can be done by wrapping a towel around your straight elbow or wearing an elbow pad backwards. Medical History and Physical Examination Your doctor will discuss your medical history and general health. He or she may also ask about your work, your activities, and what medications you are taking. After discussing your symptoms and medical history, your doctor will examine your arm and hand to determine which nerve is compressed and where it is compressed. Some of the physical examination tests your doctor may do include: - Tap over the nerve at the funny bone. 
If the nerve is irritated, this can cause a shock into the little finger and ring finger — although this can happen when the nerve is normal as well. - Check whether the ulnar nerve slides out of normal position when you bend your elbow. - Move your neck, shoulder, elbow, and wrist to see if different positions cause symptoms. - Check for feeling and strength in your hand and fingers. X-rays. These imaging tests provide detailed pictures of dense structures, like bone. Most causes of compression of the ulnar nerve cannot be seen on an x-ray. However, your doctor may take x-rays of your elbow or wrist to look for bone spurs, arthritis, or other places that the bone may be compressing the nerve. Nerve conduction studies. These tests can determine how well the nerve is working and help identify where it is being compressed. Nerves are like "electrical cables" that travel through your body carrying messages between your brain and muscles. When a nerve is not working well, it takes too long for it to conduct. During a nerve conduction test, the nerve is stimulated in one place and the time it takes for there to be a response is measured. Several places along the nerve will be tested and the area where the response takes too long is likely to be the place where the nerve is compressed. Nerve conduction studies can also determine whether the compression is also causing muscle damage. During the test, small needles are put into some of the muscles that the ulnar nerve controls. Muscle damage is a sign of more severe nerve compression. Unless your nerve compression has caused a lot of muscle wasting, your doctor will most likely first recommend nonsurgical treatment. Non-steroidal anti-inflammatory medicines. If your symptoms have just started, your doctor may recommend an anti-inflammatory medicine, such as ibuprofen, to help reduce swelling around the nerve. 
Although steroids, such as cortisone, are very effective anti-inflammatory medicines, steroid injections are generally not used because there is a risk of damage to the nerve. Bracing or splinting. Your doctor may prescribe a padded brace or splint to wear at night to keep your elbow in a straight position. Nerve gliding exercises. Some doctors think that exercises to help the ulnar nerve slide through the cubital tunnel at the elbow and Guyon's canal at the wrist can improve symptoms. These exercises may also help prevent stiffness in the arm and wrist. Your doctor may recommend surgery to take pressure off the nerve if:
- Nonsurgical methods have not improved your condition
- The ulnar nerve is very compressed
- Nerve compression has caused muscle weakness or damage
There are a few surgical procedures that will relieve pressure on the ulnar nerve at the elbow. Your orthopaedic surgeon will talk with you about the option that would be best for you. These procedures are most often done on an outpatient basis, but some patients do best with an overnight stay at the hospital. Cubital tunnel release. In this operation, the ligament "roof" of the cubital tunnel is cut and divided. This increases the size of the tunnel and decreases pressure on the nerve. After the procedure, the ligament begins to heal and new tissue grows across the division. The new growth heals the ligament and allows more space for the ulnar nerve to slide through. Cubital tunnel release tends to work best when the nerve compression is mild or moderate and the nerve does not slide out from behind the bony ridge of the medial epicondyle when the elbow is bent. Ulnar nerve anterior transposition. In many cases, the nerve is moved from its place behind the medial epicondyle to a new place in front of it. Moving the nerve to the front of the medial epicondyle prevents it from getting caught on the bony ridge and stretching when you bend your elbow.
This procedure is called an anterior transposition of the ulnar nerve. The nerve can be moved to lie under the skin and fat but on top of the muscle (subcutaneous transposition), within the muscle (intermuscular transposition), or under the muscle (submuscular transposition). Medial epicondylectomy. Another option to release the nerve is to remove part of the medial epicondyle. Like ulnar nerve transposition, this technique also prevents the nerve from getting caught on the bony ridge and stretching when your elbow is bent. Depending on the type of surgery you have, you may need to wear a splint for a few weeks after the operation. A submuscular transposition usually requires a longer time (3 to 6 weeks) in a splint. Your surgeon may recommend physical therapy exercises to help you regain strength and motion in your arm. He or she will also talk with you about when it will be safe to return to all your normal activities. The results of surgery are generally good. Each method of surgery has a similar success rate for routine cases of nerve compression. If the nerve is very badly compressed or if there is muscle wasting, the nerve may not be able to return to normal and some symptoms may remain even after the surgery. Nerves recover slowly, and it may take a long time to know how well the nerve will do after surgery.
The American Academy of Orthopaedic Surgeons
9400 West Higgins Road, Rosemont, IL 60018
By Dur e Shahwar Hidayat. In Pakistan, many private schools fail to provide students with playgrounds because of their small buildings. Over the last couple of decades, the private school industry in Pakistan has flourished in almost every city and district. However, most of these schools are built in houses or small buildings. Small schools have an immense impact on students' lives, affecting their educational and physical development in one way or another. Extracurricular activities are vital for students' brain development, yet many private schools disregard them because they lack playgrounds. Sports are important for students' social, emotional, and physical development, and they are essential for boosting creativity and a strong imagination. According to experts, the size of a school drastically affects not only children's activities but also their mental fitness. Unfortunately, many private schools in Pakistan fail to provide their students with playgrounds because of their small buildings, and the extensive growth of schools in houses paints a grim picture for children's mental health. Sports engage students in healthy activity, which is vital for character building, and serve as a recreational outlet that gives students relief from their hectic academic schedules. During break hours, students usually go to their school playgrounds to play different games. If there is no playground inside the school, they go outside to play. Schools that cannot accommodate playgrounds let their students play on roads, in residential streets, or in playgrounds near the school. This situation not only creates problems for residents but also poses an alarming threat for security authorities. Students cannot be caged in a building, as sports and play have a major role in their lives.
The condition of state-sponsored schools is far better in this respect, as they are built on vast grounds. However, for quality education, parents feel forced to send their children to private schools operating in small houses. Despite charging heavy tuition fees, these schools have failed to provide quality sports and extracurricular activities. Many other problems are directly linked to the size of schools: many private schools lack libraries as well. The owners of these schools give priority to their income over the basic requirements of students' educational development. According to a recent survey, students at schools with well-equipped and well-staffed libraries score significantly higher than students at schools with poorer libraries. These small schools stress studies and take students' extracurricular activities for granted. Moreover, schools built in houses are also a nuisance for adjacent residents: they inconvenience the locals, cause traffic jams on the thoroughfares during school hours, and create security risks for residents. This impact must not be neglected. Many parents urge the government to force these schools to construct playgrounds, yet no effective action has been taken on this issue. The government is not solely to blame, however; school managements must also recognize the importance of games and extracurricular activities for students' educational growth. Schools do not just impart information; they build students' personalities as well. Schools educate students about every field of life and thus shape their personas, so it is very important for the government to keep an eye on private schools and take this issue under consideration. School security is also a focus in Pakistan, as many terrorist attacks on schools have drawn attention to the security of students.
When students find no playground at their school, they go outside to play on roads, in residential streets, or in playgrounds nearby, and this situation creates several problems. In a nutshell, this issue is of immense importance, and the government must come up with effective policies to tackle it. The writer is a Peshawar-based student of English literature.
The legendary “Fighting 69th” took part in five major engagements during World War I. It served in the front lines for almost 170 days, suffering hundreds killed and thousands wounded. This highly decorated unit was inspired by its chaplain, the famous Father Francis Duffy (whose statue stands in Times Square), and commanded by the future leader of the OSS (predecessor of the CIA), “Wild Bill” Donovan. One of its casualties was the poet Joyce Kilmer. Due in large part to the classic 1940 movie The Fighting 69th, starring James Cagney and Pat O’Brien (as Duffy), the unit still has strong name recognition. But until now, no one has recounted in detail the full story of this famous Irish outfit in World War I. The exciting Duffy’s War brings to life the men’s blue-collar neighborhoods—Irish mostly and Italian and overwhelmingly Catholic. These boys came from the East Side, the West Side, Hell’s Kitchen, the Gashouse, and Five Points; from Brooklyn, Queens, Long Island City, and Staten Island; and from Father Duffy’s own parish in the Bronx. They streamed out of the tenements and apartment houses, enlisting en masse. Brothers joined up, oftentimes three and four from one family. Published during a resurgent interest in the doughboy experience of World War I, Duffy’s War also tells the fascinating history of New York City and the Irish experience in America. With this book, Stephen L. Harris completes his outstanding trilogy on New York National Guard regiments in World War I.
420 is the next number in a series that matches numbers commonly used by the Babylonians: 60 (60 minutes in an hour) and 12 (12 hours during daylight, 12 star signs in the zodiac). The series is made up of the smallest common multiple of the first n numbers, which is the smallest number that can be divided without a remainder by each of the input numbers. scm(1,2,3,4) = 12, not 4*6 = 24, because 6 = scm(1,2,3) is already divisible by 2, so doubling it to 2*6 = 12 already makes it divisible by 4. scm(1,2,3,4,5,6) = 60, as 60 = scm(1,2,3,4,5) is already divisible by 6. The next term is scm(1,2,3,4,5,6,7) = 420. Babylonian mathematics knew fractions, and there were cuneiform symbols for fractions, so one can imagine that the Babylonians liked to have numbers that could be evenly divided. They would have liked 420. Incidentally, the smallest common multiple is to the gcd what, in the Boolean algebra of logic, the OR is to the AND. However, there is no meaningful negation that would make the smallest common multiple and the gcd part of a Boolean algebra.
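The series can be generated directly. A minimal sketch in Python, where scm (the smallest common multiple) is built from the gcd via lcm(a, b) = a*b / gcd(a, b):

```python
from functools import reduce
from math import gcd

def scm(*numbers):
    """Smallest common multiple: the smallest number divisible by every input."""
    return reduce(lambda a, b: a * b // gcd(a, b), numbers, 1)

print(scm(*range(1, 5)))  # 12
print(scm(*range(1, 7)))  # 60
print(scm(*range(1, 8)))  # 420
```

The OR/AND analogy shows up in the absorption law: scm(a, gcd(a, b)) = a, just as a OR (a AND b) = a in Boolean logic.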
According to the National Alliance on Mental Illness, 1 in 5 U.S. adults experience mental illness each year, and 1 in 6 U.S. youth aged 6-17 experience a mental health disorder each year. The fields of psychiatry and psychology are still relatively young, which leaves room for a lack of knowledge and understanding. Though both fields have come a long way in recent years, there is still work to be done. From misunderstanding and preconceived ideas about what it actually means to be mentally ill come prejudice and stigma. And in an attempt to destigmatize mental health issues, they have become a joke, to the point where their legitimacy is undermined. Jokes about suicide have become seemingly more common. In an attempt to talk openly about the issues that 1 in 5 adults and 1 in 6 youth experience, the subject has instead become a punchline. Liking things to be in order does not necessarily mean someone has OCD; being bummed about a poor homework grade does not necessarily mean someone has depression; being routinely worried about something that warrants worry does not necessarily mean one has an anxiety disorder. Being worried is different from having a panic attack. Being moody does not mean someone has bipolar disorder. And using suicide as a joke desensitizes the subject until it really is just a joke. This fuels the stigma. To have OCD, an anxiety disorder, or depression is to have an actual medical diagnosis. Having a mental illness diagnosis is no different from having a diagnosis of any other chronic illness. Regardless of what part of the human body it affects, an illness is an illness. But when it comes to the brain, there is a misunderstanding of what it actually means to have a mental illness, and ignorance fuels stigma. Though there may be events that trigger feelings of depression or anxiety, they can almost always be traced back to chemicals in the brain.
Just as chemicals in the rest of the body may sometimes malfunction, the chemicals in the brain, their levels, and the way they are transmitted can cause depression and other mental illnesses. Anxiety is not simply being worried. According to Mayo Clinic, symptoms of an anxiety disorder include hypervigilance, irritability, restlessness, lack of concentration, racing thoughts, unwanted thoughts, fatigue, sweating, fear, a feeling of impending doom, insomnia, and shaking, among many others. Experiencing some kind of anxiety is a normal part of life, but an anxiety disorder can involve constant and obsessive worry that interferes with normal everyday tasks. OCD (obsessive-compulsive disorder) is not merely needing to have everything in order. Mayo Clinic describes OCD as involving compulsive behavior, agitation, compulsive hoarding, hypervigilance, impulsivity, meaningless repetition of one's own words, repetitive movements, ritualistic behavior, social isolation, intrusive thoughts, and/or persistent repetition of words or actions. Even though liking things to be orderly and symmetrical does not mean an OCD diagnosis, it can be a feature of the illness. Throwing the term around carelessly diminishes the legitimacy of the approximately 2.2 million adults actually affected by it. Being moody does not equal having bipolar disorder. Symptoms of bipolar disorder, according to Mayo Clinic, include extreme mood swings, most commonly consisting of manic highs and depressive lows. Manic episodes include feelings of high energy, a reduced need for sleep, and a loss of touch with reality. Depressive episodes include feelings of low energy, loss of motivation, and loss of interest in everyday activities. Most episodes can last from a few days to months. Approximately 43.8 million people experience some form of mental illness. According to the National Alliance on Mental Illness, "Sensationalizing mental illness can be harmful, especially for impressionable young teenagers.
Images of self-harm might encourage others to view mental illness as something that is ‘tragically beautiful’.” The National Alliance on Mental Illness also added that, “Studies show that when the news offers sensationalized stories of suicide or reports attempts in detail, suicide rates increase. The release of ‘13 Reasons Why’ caused an uptick in suicide searches, which is concerning because research has shown that such searches correlate with actual suicides.” Mental illnesses are not something to be ashamed of but they are also not something to be proud of. Find a healthy medium. Don’t be afraid to reach out if you feel as if you might need help with your mental health. Acknowledge that you need help, but don’t define yourself by your illness. You are more than what makes you hurt. And certainly don’t joke about it. You wouldn’t joke about any other physical illness. Your school counselors are here to help you, along with a number of trusting and caring teachers, and the newly implemented HOPE squad.
Slide guitar is a particular method or technique for playing the guitar; the term refers to the motion of a slide along the strings. Instead of altering the pitch of the strings in the normal manner (by pressing the string against the frets), an object called a “slide” is placed on the string to vary its vibrating length, and thus its pitch. The slide can then be moved along the string without lifting, creating smooth transitions in pitch and allowing wide, expressive vibrato. Slide guitar is most often played (assuming a right-handed player and guitar):
- with the guitar in the normal position, using a slide worn on one of the fingers of the left hand; or
- with the guitar held horizontally, belly-up, using a metal bar called a “steel” (“slides” generally fit around a finger) held with the hand and wrist above the frets, fingers pointing away from the player’s body; this is known as “lap steel guitar.”

The same technique is used to play the pedal steel guitar and the “Dobro” resonator guitar used in bluegrass music.
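The physics behind the technique can be made concrete: for a string at fixed tension, frequency is inversely proportional to vibrating length, so moving the slide toward the bridge shortens the speaking length and raises the pitch continuously. A minimal sketch in Python (the 25.5-inch scale length and the open high-E frequency are illustrative assumptions, not measurements of any particular guitar):

```python
# Pitch of a slid string: frequency is inversely proportional to the
# vibrating length, all else (tension, string mass) being equal.
# Illustrative values assumed: a 25.5-inch scale, open high E at 329.63 Hz.

SCALE_LENGTH = 25.5   # inches, nut to bridge (assumed)
OPEN_FREQ = 329.63    # Hz, open high-E string (assumed)

def slide_pitch(distance_from_nut: float) -> float:
    """Frequency when the slide rests `distance_from_nut` inches from the nut.

    The vibrating length is the remaining span to the bridge, and
    f is proportional to 1 / length, so halving the length doubles
    the pitch (one octave).
    """
    vibrating_length = SCALE_LENGTH - distance_from_nut
    return OPEN_FREQ * SCALE_LENGTH / vibrating_length

# Slide at the midpoint of the string: one octave above the open string.
print(round(slide_pitch(SCALE_LENGTH / 2), 2))  # 659.26
```

Halving the vibrating length doubles the frequency, which is why a slide resting exactly over the 12th-fret position sounds an octave above the open string, and why any position in between yields the smoothly varying pitches the technique is known for.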
Tree nut allergy is the most common food allergy across the globe, said Vicki McWilliam, a clinical allergy dietitian and researcher at Royal Children’s Hospital, in a separate presentation on the issue during the World Nut and Dried Fruit Congress in Boca Raton. Nut allergies affect about 2% of the global population, though prevalence ranges from under 1% in Mediterranean countries to more than 10% in some European countries, and is about 3% in Australia, she said. While many people grow out of other food allergies, nut allergies tend to last a lifetime and can cause severe respiratory and cardiac reactions, even death. There is no treatment for nut allergies, but some research suggests early exposure in infants as young as three to four months can reduce the development of the allergy, said Patrick Archer, president of the American Peanut Council, during the peanut panel. Peanuts have an advantage in the nut category on the issue of sustainable growth, Archer added. Because the peanut is a nitrogen-fixing plant, it requires little chemical fertilization and relatively little water compared to other nuts. “Sustainability is necessary to do business in the 21st century,” Whitehouse said. “Peanuts have a fantastic sustainability story to tell.”
Bioenergy is energy derived from biofuels, produced directly or indirectly from organic material called biomass. It is the single largest renewable energy source today, providing 10% of the world’s primary energy supply. Traditional unprocessed biomass such as fuelwood, charcoal and animal dung accounts for most of this and represents the main source of energy for a large number of people in developing countries, who use it mainly for cooking and heating. More advanced and efficient conversion technologies now allow the extraction of biofuels from materials such as wood, crops and waste material. Biofuels can be solid, gaseous or liquid, though the term is often used to refer to liquid biofuels for transport. Many American consumers are familiar with biofuel through their use of gasoline blended with corn-based ethanol. Georgia is a leader in the bioenergy revolution; in 2015, Forbes ranked Georgia third in the nation for potential biomass energy as measured by the amount of biomass available in the state. Georgia boasts an abundance of crops that can be converted to energy resources, including traditional feedstocks, such as corn and soybeans, and non-traditional feedstocks, such as switchgrass and pine trees. Processing wood as biomass is considered carbon-neutral since the resultant emissions equal the carbon dioxide absorbed by the trees as they mature. With almost 24 million acres of forestland, much of Georgia’s biomass is derived from low-grade wood waste like woodchips, wood pellets and tree limbs resulting from tree-thinning activities. An abundant fiber supply located in close proximity to coastal transportation has created a fertile environment for companies specializing in the production of wood pellets, which can be used as fuel for power generation, commercial or residential heating, and cooking. 
Georgia Biomass LLC, based in Waycross, opened in 2010 and is now the largest pellet producer in the world, with a design capacity of 750,000 metric tons per year. The majority of Georgia Biomass’s production is intended to power Europe’s energy generating plants, but Georgia’s energy plants are also looking to the resource to help meet their own renewable energy goals. Georgia Power has incorporated into its Green Power Program more than 280 megawatts of power purchase agreements with various Biomass Proxy Qualified Facilities, including a 30-year, 53.5-megawatt capacity contract with Piedmont Green Power, LLC, a woody biomass facility in Barnesville. Landfill gas is a type of biomass energy categorized as “waste energy.” The process of decomposition – when organic material is broken down by microorganisms – generates methane gas. Most landfills simply burn off this gas, but through innovative green energy initiatives, some power companies have partnered with landfills to turn garbage into kilowatts. To produce power, gas wells slowly draw methane from the landfill and pipe it to a facility where it is burned to turn engines or turbines and create electricity. In 2010, Georgia Power partnered with Waste Management and its Superior Landfill in Savannah to add 6.4 megawatts of clean, renewable energy to the grid. Four years later, environmental services provider Advanced Disposal partnered with Energy Systems Group (ESG), an energy services provider, to design, build, own and operate the landfill gas-to-electricity plant at the Pecan Row Landfill. Landfill gas can also be used as renewable fuel. ESG has partnered with DeKalb County and the Clean Cities Atlanta Petroleum Reduction to convert emissions from the Seminole Hills Landfill to compressed natural gas (CNG), which is used to power DeKalb’s sanitation vehicles. Surplus gas will be injected into the natural gas pipeline to be transported to other CNG stations or sold as renewable natural gas. 
ESG operates additional landfill facilities at the Marine Corps Logistics Base in Albany and the Live Oak Gas to Energy Facility in Atlanta. Georgia’s ample biomass resources alone could not attract companies willing to invest in biofuel production; the state’s entrepreneur-friendly policies, reduced taxes on bioscience energy companies, and expedited environmental permits for biofuel plants have also played a part. The state currently has over $2 billion worth of active renewable energy-related projects, which are projected to drive nearly $5 billion into the state’s economy over the next 10 years. Georgia also maintains an avid “brain trust” of university research and development. The University of Georgia, with its extensive laboratories and agricultural experiment stations, is world-renowned for research in fermentation, enzymes and genetic engineering. Its Athens campus boasts a pilot-scale model biorefinery, where feedstocks are tested to produce bio-oil, syngas, char, and a variety of industrial chemicals. The Georgia Institute of Technology is home to the Strategic Energy Institute, which conducts research on biomass gasification and biochemistry, and is a leader in industrial process engineering. The Herty Advanced Materials Development Center, a $150 million non-profit research center, has turned its focus from the pulp and paper industry to embrace biomass commercialization. With copious biomass resources, innovative technology and research on biomass conversion, and state-level incentives to help green companies succeed, Georgia will continue to be an influential leader in the worldwide bioenergy revolution. 
Sources:
- “Forbes: Georgia 3rd state in nation for biomass energy,” Creative Loafing, 11 July 2008
- “Georgia is a Leader in Biomass,” Renewable Energy World, 5 April 2010
- “Georgia Power, Waste Management partner on landfill gas projects,” Electric Light & Power, 30 March 2010
- “Landfill gas-to-energy plant opens in Georgia,” BioEnergy Insight, 6 May 2014
There are several stone arch bridges in Watkins Glen State Park. Designed in the 1930s to give pedestrians access to the park attractions, they were built as a public works project and are historically significant for their association with Depression-era recovery programs. Sentry Bridge is perhaps the best known of them: it occupies a very scenic location right at the front of the park and is even visible to motorists on the nearby highway. It is a striking example of how a man-made bridge and a natural landscape can combine to create a truly beautiful scene.
Cardiovascular diseases are the leading cause of death worldwide. According to the World Health Organisation (WHO), an estimated 17.7 million people died from cardiovascular diseases in 2015, representing a third of all global deaths. Most of us think that if we do not smoke and do not carry extra baggage around the waist, we’ll keep our heart in good health. In a way, we are right: smoking is a major cause of heart disease (estimated to account for about 20% of all cardiovascular deaths), and obesity is linked to several factors that increase the risk of coronary artery disease and stroke. However, there are many other habits that can damage the heart – habits so mundane they are often overlooked. Some are plainly obvious, such as eating too much fat, sugar and salt, not exercising, and neglecting regular health check-ups. It is worth reviewing your everyday habits and learning how you can reduce your risks to prevent heart disease.

Get more sleep
A study showed that people who slept less than 6 hours each night were 79% more likely to develop coronary heart disease than those who slept up to 8 hours. Sleeping reduces blood pressure, and those who do not sleep enough are more likely to have hypertension. Experts also point out that the quality of sleep matters. People who snore loudly are more likely to have sleep apnoea, a disorder in which breathing stops and starts repeatedly during sleep, often without the sleeper knowing it.

Manage your stress
When we are stressed, our body secretes adrenaline and cortisol. These increase the rate and force of cardiac contractions and narrow the arteries – a dangerous combination for heart health. In addition to stress, anger and depression can also negatively affect the cardiovascular system. The antidote? Laughter. Interestingly, laughing relaxes and enlarges the arteries, thus promoting cardiovascular health. There is truth in the old saying ‘laughter is the best medicine’ after all. 
Brush your teeth (please)
Research has shown that there is a link between gum disease and heart problems. There are two main types of gum disease: gingivitis, which causes red, painful, tender gums; and periodontitis, which leads to infected pockets of germy pus. Scientists believe that bacteria collecting in the gums can cause inflammation in other parts of the body. Thus, poor oral hygiene can increase the likelihood of arteriosclerosis (stiffened arteries) and thrombosis (blood clots). So, brush your teeth at least twice a day and use a mouthwash. Your family and friends may even thank you for it.

Take a break from city life
It doesn’t require a stretch of the imagination to know that the pressing, fast-paced living conditions of a big city can overwhelm your poor heart. But stress is not the only factor. In a study published in The Lancet, researchers looked at the long-term effects of air pollution on the heart’s arteries. Poor air quality leads to accelerated plaque build-up in the arteries, leading to heart disease, stroke and high blood pressure. If living in a city is unavoidable, make sure to retreat into the countryside from time to time, even if it’s only for a day or two.

Stretch yourself
Research in Japan involving more than 500 adults has shown that people who are flexible tend to have more flexible arteries and therefore better regulation of their blood pressure. Flexibility is one of the main components of physical fitness, alongside cardiovascular fitness, muscular strength and endurance. So perhaps it is not a bad idea to include yoga or Pilates in your exercise routine. This will have the added benefit of preventing exercise-induced injuries, back pain, and balance problems.

Break a sweat
While many chemical elements are essential for life, some, such as arsenic, cadmium, lead, and mercury, have no known beneficial effect in humans. 
These elements are confirmed or probable carcinogens, and they exhibit wide-ranging toxic effects on many bodily systems, including the cardiovascular system. All people carry some level of toxic metals in their bodies, circulating and accumulating through acute and chronic lifetime exposures. Research shows that sweating, through heat or exercise, may help to eliminate these toxic substances.

Sit less, move more
It can be argued that chairs are detrimental to our health. Indeed, ‘sitting is the new smoking’. According to the WHO, 60 to 85 per cent of people globally lead sedentary lifestyles (i.e. remaining seated for much of the day), making this one of the more serious yet inadequately addressed public health problems of our time. A sedentary lifestyle, along with smoking and poor diet and nutrition, is increasingly becoming the norm, resulting in the rapid rise of cardiovascular diseases, diabetes, obesity and cancer. For every 30 minutes of sitting still, be sure to walk, stretch or jog on the spot for 1 to 2 minutes.

Cut down on processed meat
In a previous article, it was noted that the WHO has classified processed meats as a Class I carcinogen. It turns out that these meats, which include bacon, sausages and pepperoni, also increase the chance of cardiovascular problems. Processed meats contain not only a lot of salt, which elevates blood pressure, but also large amounts of saturated fat, which contributes to chronic inflammatory diseases.
Ez 47:1-2,8-9,12; Ps 46; 1 Cor 3:9-11,16-17; Jn 2:13-22

The Holy Dwelling of the Most High

“Do you not know that you are the temple of God, and that the Spirit of God dwells in you?” (1 Cor 3:16)

Today the readings for the Feast of the Dedication of the Basilica of St. John Lateran in Rome supersede those assigned for the Thirty-second Sunday in Ordinary Time. Built in the fourth century, this church remains a magnificent and lively place of worship today. It is the cathedral church of the bishop of Rome (the pope). The various Scripture readings revolve around the theme of the “temple” and illustrate the different ways in which that motif appears in the Bible: the Jerusalem Temple, the ideal temple, the person of Jesus and individual Christians. The archaeological evidence for temples in the ancient world goes back many thousands of years. A temple was a place where a god was believed to be present in a special way, and where rituals honoring the god (especially sacrifices) were conducted. For a large part of ancient Israel’s history from Solomon onward, the Jerusalem Temple was the people’s central shrine and the only place where sacrifices were to be offered to Yahweh, the God of Israel. Many of the Old Testament psalms celebrate the presence of Yahweh in the Jerusalem Temple. Indeed, the Book of Psalms is sometimes called the hymnbook of the Jerusalem Temple. We get a glimpse of how much the Temple meant to ancient Israel in today’s excerpts from Psalm 46. There the psalmist describes the Temple as “the holy dwelling of the Most High” and as Israel’s “stronghold,” its source of security, safety and hope because of Yahweh’s special presence there. Nevertheless, the Temple built by King Solomon was destroyed in 587 B.C., along with the city of Jerusalem. The prophet Ezekiel was among the exiles in Babylon, and there he reflected on how such a catastrophe could have happened. 
While his book is full of denunciations and warnings, it ends on a note of hope when in Chapters 40 to 48 it provides a detailed verbal picture of the ideal New Jerusalem and its rebuilt Temple. The imagery of water in both Psalm 46 and Ezekiel 47 alludes to its life-giving and life-sustaining power and its healing properties. Even after the Second Temple was built in the late sixth century B.C. and rebuilt in grand style under Herod the Great (37-4 B.C.), many early Jewish writers kept alive and embellished Ezekiel’s hope for a new and better Temple. The Qumran New Jerusalem texts and the Temple Scroll, as well as the New Testament Book of Revelation, are good examples of these hopes. The Jerusalem Temple to which Jesus came, according to John 2, was a large complex of buildings whose Herodian refurbishing had been in progress for 46 years. We ought to envision the Temple not as one huge church building (like St. John Lateran or St. Patrick’s Cathedral in New York) but as a campus with many buildings and installations. By Jesus’ time, the Temple had become the major industry of Jerusalem. It employed construction workers and an administrative staff, and innkeepers and other service-providers profited from the crowds of pilgrims coming regularly into the city. In this historical context the symbolic demonstration against the excessive commercialization of the Jerusalem Temple complex by Jesus, the Galilean prophet of God’s kingdom, is understandable both within Jesus’ program and in the effect it had on the local Jewish and Roman leaders. In John’s account Jesus raises the stakes further by referring to the Temple as “my Father’s house” and proclaiming himself as the locus of God’s presence (“this temple”). As readers of John’s Gospel, we already know that Jesus is the Word of God who has become flesh and made his dwelling among us. 
As followers of Jesus and so members of the body of Christ, we as individuals have become “the temple of God” through the indwelling of the Holy Spirit. Through our faith in Christ and baptism, we have been made into places where God is now present in a special way. Of course, we still need buildings where we may worship God in community and express our shared identity and dignity. Yet we do so convinced that Christ is the reality to which all earthly temples, shrines and churches point, and that through Christ God dwells in us and makes us holy through the Spirit. As God’s people in Christ, we are now dwelling places of the Most High. • How could Jesus, according to John, identify himself as the temple of God? • Do you ever think of yourself as a temple of God? How might such a concept affect your actions? • Why do you go to your local church? What do you hope to find there? Daniel J. Harrington, S.J. is professor of New Testament at Boston College School of Theology and Ministry in Chestnut Hill, Mass. © 2008 America Magazine
Ann Lyon: Duchy Originals, Female Heirs and the British Throne

Ann Lyon is a lecturer in law at the Plymouth Law School and author of Constitutional History of the United Kingdom (Cavendish, 2003).

There has been much discussion in recent years of amending the Act of Settlement 1701 so that the first child will succeed to the throne rather than the first son – though at the time it was passed it was a pragmatic piece of legislation, and quite advanced, in that it provided for female succession at all. Less attention has been given to how the heir to the throne is supported – but if we want to change the succession, we will need to look at a Royal Patent of 1337 as well as an Act of Parliament of 1701. The dukedom of Cornwall and the extensive properties of the Duchy – not to mention the profits from Duchy Originals – provide for the heir apparent and his family and have been entailed on the eldest living son of the monarch since 1337. In the absence of a male child, the Duchy’s revenues go to the monarch directly, as in the period when the present Queen was her father’s heir. Changing Britain’s succession laws would do nothing to change this legal position. If nothing is done, a female heir apparent will find herself with no means of support – and the revenues of the Duchy would go to a son who would no longer be first in line to the throne. Duchy Originals’ turnover was £2.2 million in 2010 and the Duchy of Cornwall comprises 54,090 hectares of land, so its financial impact is far from small. For the moment, of course, changing the succession laws would make no difference: both Prince Charles and Prince William are their parents’ eldest children. If any change from male primogeniture is to be made, it should be done before William and Kate’s first child is born (which, if the precedents of William’s own birth and that of his father are followed, may be no more than a year from now). 
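The difference between male primogeniture and the proposed first-child rule can be sketched as two different orderings of the same set of children. A minimal illustration in Python, with entirely hypothetical siblings (the names and the simple tuple representation are assumptions for the sketch; real succession law also handles representation of deceased heirs, religion, and other conditions this code ignores):

```python
# Two succession orderings over hypothetical children of a monarch.
# Under male-preference primogeniture, sons precede daughters regardless
# of age; under absolute (first-child) primogeniture, only birth order counts.

from typing import List, Tuple

# (name, birth_order, is_male) -- purely illustrative
children: List[Tuple[str, int, bool]] = [
    ("Anne", 1, False),
    ("Brian", 2, True),
    ("Clara", 3, False),
    ("David", 4, True),
]

def male_preference(heirs: List[Tuple[str, int, bool]]) -> List[Tuple[str, int, bool]]:
    # Sons first (eldest first), then daughters (eldest first).
    return sorted(heirs, key=lambda c: (not c[2], c[1]))

def absolute(heirs: List[Tuple[str, int, bool]]) -> List[Tuple[str, int, bool]]:
    # Eldest child first, regardless of sex.
    return sorted(heirs, key=lambda c: c[1])

print([c[0] for c in male_preference(children)])  # ['Brian', 'David', 'Anne', 'Clara']
print([c[0] for c in absolute(children)])         # ['Anne', 'Brian', 'Clara', 'David']
```

Under the first ordering a younger brother jumps ahead of his elder sisters; under the second only birth order matters, which is precisely the change under discussion.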
In Sweden, the rules of succession were changed to make King Carl XVI Gustaf’s daughter Victoria heir apparent after the birth of her brother, Carl Philip: by providing that the change should not affect persons already living, Britain could avoid this experience. The Act of Settlement is a product of the political and religious context of its time, and has remained substantially unaltered simply because there has been no obvious need to amend it. When Parliament declared that James II had deserted the throne and offered the throne to William and Mary, the Bill of Rights vested the succession in them and their descendants. The succession would then fall to any issue William or Mary might have by later marriages, and then to Mary’s younger sister Anne and her issue. William and Mary were childless, but Anne gave birth to an apparently healthy son on 24 July 1689. Mary II died of smallpox in 1694, leaving no children, and William did not remarry. Anne’s son followed Mary to the grave in 1700. James II had given up hopes of his own restoration, but his son James had the active support of Louis XIV of France, whose purposes would be only too well served by having a client king on the thrones of England, Scotland and Ireland. There was thus a very real threat of invasion in support of the younger James, and the civil wars of the 1640s were within living memory. Meanwhile, the death of the Spanish king Carlos II without obvious heirs precipitated a general European war between supporters of the rival candidates. It was a disputed succession of this kind that the Act of Settlement was designed to prevent. The kingdom which had deposed one Catholic monarch had no intention of accepting another, and so Parliament sought a Protestant heir. In order to find one, it was necessary to go back to the descendants of James I – to Sophia, widow of the Elector Ernst August of Hanover. Sophia was now aged 70, but in good health, and had four living adult children, together with grandchildren. 
The Act of Settlement therefore vested the succession in Sophia and her issue, provided they were not 'Papists' and did not marry 'Papists'. Although contemporary attitudes to gender played a major role in succession laws, it is worth remembering that fears could also be rooted in distinctly practical worries about female rule in the past. Mary, Queen of Scots was a romantic figure, but notably unsuccessful as a ruler, her brief but melodramatic career ending in deposition, flight, imprisonment and execution. Her rival, Elizabeth I, failed to provide her kingdom with a definite heir, in no small measure because of the problems of marriage for a female ruler. England was fortunate that the accession of James VI of Scots, Mary's son, was not disputed. Previous practice elsewhere in Europe had allowed female succession only where there were no male heirs. A recent example was Queen Christina of Sweden, who had succeeded her father, Gustavus Adolphus, on his death in 1632 when she was six, refused to consider marriage, and abdicated in 1654 to devote herself to intellectual matters. France, by contrast, forbade the succession of women to the throne and even of men whose claim was through a female ancestor. When Henri III died in 1589, his heir, Henri IV, was related to him in the male line only through Louis IX, who had been dead over 300 years. In Russia, the succession law introduced by Peter the Great allowed each ruler to choose his own successor, and there were four reigning empresses in the eighteenth century. Monarchical discretion proved unrealistic in practice: Peter himself failed to name a successor, and Anna made the wholly impractical choice of her two-month-old great-nephew, who nominally reigned for two years as Ivan VI until he was deposed by Peter's illegitimate daughter, Elizabeth. A new system was introduced by Paul, far more rigid than the Act of Settlement. 
Under the Pauline Law an heir must be:
- a male descended from Paul himself
- a member of the Orthodox Church, born to parents who were both Orthodox at the time of their marriage
- born of an 'equal' marriage, i.e. his mother must be a member of a reigning family

A female would only be a possible heir if there were no surviving males eligible under the Pauline Law. Though not all European succession laws have imposed strict religious requirements, these certainly existed in practice. The Habsburg rulers of Austria were strongly Catholic and selected their consorts from other Catholic dynasties, as did the Bourbons of France. Any non-Catholic bride was expected to convert, as was also the case in Spain. So the Act of Settlement may seem like an anomaly, and it is certainly true that its origins lie in a very different period with very different needs. But there is no need to regard Britain as uniquely anachronistic: its rules, like those of other monarchies, reflect the circumstances of its past. If we want to change them, we had better bear the law of unintended consequences in mind.