By Courtney Knapp
Earlier this year, Robert Putnam, the Harvard sociologist noted for his “Bowling Alone” thesis that community associations create “social capital” which improves their communities, issued a report titled “E Pluribus Unum: Diversity and Community in the Twenty-First Century.” Using data from the 2000 Social Capital Community Benchmark Survey, Putnam concluded that diverse communities and neighborhoods have significantly less social capital than culturally homogeneous ones because people in diverse communities tend to “hunker down” and isolate themselves from their neighbors. Nevertheless, Putnam argues that social diversity and multiculturalism have huge social benefits and should remain goals that we aspire toward in our communities.
One effective strategy for combating this “hunkering down” is the development of vibrant, welcoming public destinations in our communities. Putnam writes, “To strengthen shared identities, we need more opportunities for meaningful interaction across ethnic lines where Americans (new and old) work, learn, recreate, and live. Community centers, athletic fields, and schools were among the most efficacious instruments… a century ago, and we need to reinvest in such places and activities once again, enabling us all to become comfortable with diversity.”
Project for Public Spaces is committed to promoting parks, plazas, markets, civic buildings, business districts and neighborhoods as a way to bring people together. These places are critical for building relationships and creating communities. In public spaces, we escape the insular environments of our work and home and enjoy the opportunity to meet our neighbors, visit old friends, and encounter other people. Even when we feel like keeping to ourselves–simply walking through a park or downtown on our own–the experience of being out in public creates a feeling of togetherness that contributes to the sense of community. Social scientists as well as planners and community organizers argue that public places are essential to a community’s success, and that neighborhoods with vibrant public spaces have greater levels of social capital than those cities and towns with few public destinations.
The same principle holds true for vibrant multicultural communities: public spaces have the potential to bring people of many different cultural backgrounds together. People have the opportunity to experience togetherness–which can help break down the social barriers that far too often divide us.
Many studies have been conducted to understand the ways different ethnic and cultural groups relate to public spaces. Since we live and work in a world that is increasingly diverse and multicultural, a clearer understanding of how various populations (defined by race, ethnicity, income, socioeconomic status, age, or other indicators) use public spaces and what cultural values they attach to them will benefit planners, policymakers, social scientists, community organizers, as well as citizens themselves.
Based on the studies that have been conducted since the mid-1970s, two dominant theses have emerged to make sense of the reasons cultural groups use public spaces.
Rooted primarily in a political-economic analysis, the marginality theory proposes that groups use public spaces (especially parks) differently because of differences in access and the inequitable distribution of resources (such as urban open space, neighborhood amenities, etc.).
The ethnicity theory, by contrast, argues that varying patterns of use in public spaces are the result of differences in cultural values attached to a space or activity, not merely differences in access.
For marginality theorists, equity issues can either facilitate or prohibit use of the public realm. Knowing the degrees to which different groups can access elements of the built environment–as well as knowing some of the structural and cultural barriers at work in any community–is important for understanding how an idea such as “placemaking” can be applied to make a difference in the lives of people.
Subscribers to the ethnicity theory, conversely, assert that more time and energy needs to be focused on designing places that are reflective of the diverse cultural values and preferences of particular communities. From this perspective, it’s erroneous to assume that there is an ideal way of designing and programming a space to attract users. The “best” plan for placemaking is one that results from extensive and continuous community participation and ultimately is flexible enough to accommodate shifting preferences and values over time.
A third less popular–though no less real or important–theory explaining differences in use is the discrimination thesis. According to this view, particular cultural groups may choose not to engage in certain activities or visit certain public places because they do not feel welcome in the space, due to the experience of discrimination or the expectation that discrimination will occur. Surveys have shown that some people of color have experienced overt hostility from both other users and management staff, ranging from racist utterances to violent actions. In this sense, it is not that certain groups inherently do not value certain types of public places; rather, the places in their communities are neglected and therefore may not be, or may not feel, safe, comfortable, or fun to use.
One lesson that all three of these perspectives point to is the idea that the physical aspects of the built environment–although critical to a place’s success–should be viewed alongside, and perhaps even as secondary to, the creation of a welcoming social or cultural space. Latino urban planner James Rojas describes how Mexican and Mexican American communities in East Los Angeles succeed in placemaking and building community even when there is very little attention from planners and public officials to urban design in these neighborhoods. Similarly, in their book Rethinking Urban Parks: Public Space and Cultural Diversity (University of Texas Press, 2005), Setha Low, Dana Taplin and Suzanne Scheld argue that the most successful multicultural public spaces are not necessarily the ones with the snazziest physical design or the most amenities. More important is the creation of a space where people’s identities are affirmed and where people feel they can use the space without feeling conspicuous or looked down upon by people of different cultural groups. In short, a ‘successful’ multicultural environment is one where various groups’ sense of comfort is combined with good physical design to create an atmosphere that can nurture many preferences; a place that fosters social interaction while simultaneously creating distinct “spaces” where individual cultures can be emphasized and celebrated.
While studies of public places conclude that groups have the tendency to self-segregate, the same studies point to certain elements of the built environment where divisions dissolve and people naturally come together. Public markets, playgrounds, boardwalks, streets, and beaches are arguably the most successful types of “multicultural places” because they can foster the kind of organic interaction between people that placemakers, social scientists, and cultural theorists consider so critical to the development of community across social divides.
Public markets are often among the most socially diverse of public places, bringing people of different ages, genders, races, ethnicities, and socioeconomic statuses together for the experience of food, shopping, and conversation. PPS’s report “Public Markets as a Vehicle for Social Integration and Upward Mobility”, funded by the Ford Foundation, examined eight markets around the United States–ranging from weekend farmers’ markets to outdoor flea markets to traditional market halls–and concluded that public markets hold special power in communities insofar as “public markets enhance the potential for social interaction in public spaces–attracting diverse income levels, ages, and ethnicities–and thereby create a sustainable vehicle for upward mobility and individual empowerment for low-income communities.”
PPS’s research discovered that the cultural composition of the vendors often influences the demographics of the customer base: in general, the greater the variety of vendors the greater the diversity of the clientele. For some immigrant communities where language and socioeconomic barriers may compound feelings of isolation, public markets provide a much-needed venue for connecting with a familiar community while simultaneously offering food at affordable prices. The rising trend to accept WIC vouchers and food stamps at farmers markets is an additional step in the right direction, reflecting a commitment not only to diversity but more importantly, to social justice.
In short, the value of public markets as multicultural places should not be underestimated. They bring people with different backgrounds together while promoting sustainable food production and offering the opportunity to launch local small businesses.
“Public markets are valued because they create common ground in the community, where people feel comfortable to mix, mingle, and enjoy the serendipitous pleasure of strolling, socializing, people watching, and shopping in a special environment,” PPS research found.
“The children’s playground is the closest thing to a melting pot a neighborhood can have. People of different races and ages were spotted engaging in friendly conversations… the joys and agonies of raising children provide some common experiences that all parents can relate to and often want to share,” writes Anastasia Loukaitou-Sideris, department chair in urban planning at UCLA. Her 1996 study of uses and activities in Los Angeles parks revealed that one of the most popular activities among all racial groups was watching children on the playground. Presumably parents are attracted to the activity because it allows them to share stories and strategies with each other while their children play together.
In 2003, Loukaitou-Sideris conducted another study of public spaces in Los Angeles, this time focused on intergroup relations among children. Playgrounds were ultimately found to “promote interaction, exchange, and comfort for a wide range of children… [catalyzing] changes and [making] diverse communities more livable and exciting for young people.”
While research projects focused on the preferences that adults attach to public spaces often depict a variety of values, Loukaitou-Sideris’s piece, titled “Children’s Common Ground,” discovered that youth, regardless of cultural background, had quite similar ideas about the importance of public spaces and used them in similar ways. Common values included recreational and sports opportunities, spending time in nature, and meeting new friends.
Playgrounds allow interaction among different cultural groups to begin during childhood and continue on into parenthood. Creating public places that promote the discovery of common ground is essential for building communities that transcend cultural divides and promote a sense of inclusion.
Beaches are an interesting example of public spaces that reflect the diversity of our society because they often function as a multicultural space in which individual cultural groups are spatially divided along the length of the waterfront–often by age, ethnicity, and/or the social composition of the group (for example, families versus groups of teens). Many different groups are represented on public beaches, but they do not necessarily intermix.
In 1977, PPS conducted a user study of Jacob Riis Park in Queens, New York, and discovered that groups self-segregate along the individual bays of the beach–multi-ethnic teen groups at one bay, homosexual men at another, and families at a third. In 2004, the Public Space Research Group at CUNY drew similar conclusions about the park–the territorialization of the beach remained the same more than twenty-five years later. The conclusion suggests that successful multicultural spaces need not necessarily be “melting pots.” As important as places where different people mix together are other places where people can spend time cultivating relationships with others in their own unique cultural group.
Although beach studies often reveal self-segregation, territorialization is not the rule. The social composition of some public beaches shows different patterns than at Jacob Riis Park. A weekend visit to Seaside Park in Bridgeport, CT found people from many different backgrounds utilizing the same space comfortably.
Beaches do represent truly public environments by virtue of the fact that they are accessible regardless of socioeconomic status. Regardless of how people organize themselves in the space, beaches are a place where an inclusive environment persists.
Boardwalks and Promenades
PPS’s Riis Park report referred to the beach’s boardwalk as the “spine” of the space because it created a common meeting ground for everyone who visited the park. Because racial and ethnic segregation often corresponds to the territorialization found on beaches, boardwalks and promenades are critical sites for social integration because they bridge divides between groups of people. Boardwalks usually feature concession stands and cafes, which attract people from all walks of life. A great example of this is Coney Island in Brooklyn, where the presence of food and entertainment along the boardwalk effectively draws a diverse crowd into a common space where vibrancy and spontaneous interaction characterize the social landscape.
To date, there has been little research conducted that focuses specifically on boardwalks and promenades as public spaces. But given the notion of these places as “spines” and connection points, their significance should not be ignored. Offering favorite activities ranging from people watching to simply walking along the waterfront, these pedestrian corridors are an important element of the built environment in terms of fostering multiculturalism and social diversity.
Courtney Knapp is a former Project for Public Spaces intern.
Fats, Oils, and Greases aren't just bad for your arteries and your waistline; they're bad for sewers, too!!
As you may know, sewer overflows and backups can cause health hazards, damage home interiors, and threaten the environment. An increasingly common cause of overflows and backups is sewer pipes blocked by GREASE.
Grease commonly enters residential sewer systems through household drains.
The City of Muscatine needs the help of all its residents to keep our sewer system running properly and to keep your residential service lines clear.
You can help avoid sewer overflows, backups, and repeated maintenance problems by following a few simple guidelines. Read on to learn what we can do to help.
The easiest way to solve the grease problem and help prevent overflows of raw sewage into your homes or the environment is to keep this material out of the sewer system in the first place.
There are several ways to do this.
NEVER pour grease down sink or tub drains, garbage disposals, or into toilets.
Scrape grease and food scraps from trays, plates, pots, pans, utensils, grills and cooking surfaces into a can or the trash for disposal. Put baskets/strainers in sink drains to catch food scraps and other solids, and empty the drain baskets/strainers into the trash for disposal. Share this information about the problem of grease in the sewer system, and how to keep it out, with your friends and neighbors. Please contact the Department of Public Works, Collection and Drainage Division at 563.263.8933 if you have any questions.
April 6, 2000
A weekly feature provided by scientists at the Hawaiian Volcano Observatory.
Kulanaokuaiki campground: a whole lot of shaking going on
On March 31, Hawai`i Volcanoes National Park officially opened Kulanaokuaiki campground, a barrier-free facility along the Hilina Pali Road south of Kilauea's caldera. The new campground replaces the Kipuka Nene picnic area, 2 km (1.2 miles) farther southwest, which is now closed to protect Hawai`i's state bird. Kulanaokuaiki campground occupies a dynamic landscape controlled by Kilauea's eruptive and faulting history.
The name tells a lot about the area. Kulanaokuaiki means "the shaking of a small spine [or sharp ridge]." The campground is just north of the 15-m-high (50-foot-high) pali that bears its name. This pali is an earthquake fault, formed by vertical ground movement during periods of intense shaking. Imagine people on top of the pali during an earthquake. Rocks would crash from the face of the pali to the ground below, and the pali would tremble from one shock after another during a swarm of earthquakes.
Such an earthquake swarm took place 35 years ago. On Christmas Eve and Day in 1965, strong shaking and faulting broke the Hilina Pali Road, where it crosses Kulanaokuaiki Pali near the new campground. The pavement was offset vertically 2.6 m (8.4 feet). The campground side of the fault went down 1.8 m (6 feet), and the other side (the south side) went up 0.8 m (2.4 feet). During the swarm, a truck from HVO carrying portable seismic equipment was parked near the broken road when it was nearly toppled by a large earthquake.
Hundreds of other faults and cracks north and northeast of Kulanaokuaiki Pali broke open and moved during the 1965 swarm. Similar, though smaller, episodes of ground breakage had occurred in 1960 and 1963, and another was to take place in 1975. In fact, this area, part of the Koa`e fault system, is one of the most active areas of faulting in the world. In the past 700 years or so, the Koa`e system has opened nearly 20 m (65 feet) in a north-south direction along a traverse that passes just west of the new campground. Along another traverse 2 km (1.2 miles) northeast of the first, the amount of opening is even greater, more than 30 m (100 feet).
The Koa`e fault system is part of the breakaway zone that, over long periods of time, separates Kilauea's mobile south flank from the rest of the volcano. Swarms of earthquakes and ground ruptures will recur for thousands of years to come, and the name of the new campground will remain pertinent.
Lava flows as well as shaking and cracking have impacted the area. The campground is located in a small kipuka. The older flow in the kipuka is more than 1,300 years old, and the younger flow surrounding the kipuka is about 700 years old. Both flows were erupted from the summit of Kilauea at times when lava flows could escape the caldera. The present caldera will have to fill up more before any lava erupted in it can reach the new campground, and there are several pali in between that would have to be overtopped or run around. For those reasons, the campground seems rather safe from lava inundation for some time to come.
Between the two flows at the campground are several beds of volcanic ash and blocks. These layers were well exposed when the pit for the campground toilet was being dug. That is now off limits, but more layers can be seen by walking westward for 200 m (yards) or so to a place where little grass-covered mesas of ash stand above the older flow. Careful looking will find heavy gray rocks 3 cm (1 inch) or more in diameter lying on the surface of the older flow. These rocks rained from the sky during one or more powerful explosions before about A.D. 1000. Such explosions could happen again, though they are rarer events than lava flows.
Kulanaokuaiki campground is in a dynamic geologic setting. Enjoy!
Eruptive activity of Kilauea Volcano continued unabated during the past week. Lava is erupting from Pu`u `O`o and flowing through a network of tubes toward the coast. Lava is visible at times on Pulama pali, and surface flows are active in the area between the Royal Gardens subdivision private access road and the sea coast. Lava is intermittently entering the ocean between Waha`ula and Kamokuna. The public is reminded that the ocean-entry areas are extremely hazardous, with explosions accompanying sudden collapses of the new land. The active lava flows are hot and have places with very thin crust. The steam clouds are highly acidic and laced with glass particles.
Residents in all districts of the Big Island were shaken by a magnitude-5.0 earthquake at 8:18 p.m. on Saturday, April 1. The large temblor was located 10.0 km (6 miles) southeast of the summit of Kilauea Volcano at a depth of 8.5 km (5.1 miles). The shaking did not affect the eruption at Pu`u `O`o, and, except for falling items, there were no reports of damages or injuries resulting from the earthquake.
The URL of this page is http://hvo.wr.usgs.gov/volcanowatch/archive/2000/00_04_06.html
Updated: 10 Apr 2000
You are given a word and must provide a second word to complete a familiar two-word phrase. The first letter of the word must be the last letter of the word given, and the last letter of the word must be the first letter of the word given. For example, given the clue "photo," the answer would be "op."
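The letter rule described above is simple enough to express as a quick programmatic check. Here is a minimal sketch in Python; the function name and the lowercase normalization are my own assumptions, not part of the puzzle as broadcast:

```python
def completes_phrase(clue: str, answer: str) -> bool:
    """Check the on-air puzzle's letter rule: the answer must begin with
    the clue's last letter and end with the clue's first letter."""
    clue, answer = clue.lower(), answer.lower()
    return answer[0] == clue[-1] and answer[-1] == clue[0]

# "photo" -> "op" forms the familiar phrase "photo op"
print(completes_phrase("photo", "op"))     # True
print(completes_phrase("photo", "shoot"))  # False: "shoot" starts with "s", not "o"
```

Note that the check only verifies the letter constraint; deciding whether the two words actually make a familiar phrase is still up to the solver.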
Last Week's Challenge
From Michael Arkelian, of Sacramento, Calif.: Name a creature in six letters. Move the first three letters to the end and read the result backward to name another creature. Clue: If you break either six-letter word in half, each pair of three letters will themselves spell a word.
Answer: Parrot, raptor (par, rot, rap, tor)
Winner: Bill Dey of Champaign, Ill.
Next Week's Challenge
From the 2011 calendar "Mensa 365 Brain Puzzlers" by Mark Danna and Fraser Simpson: Write out the 26 letters of the alphabet. Take a sequence of seven letters, change one letter in that sequence to a U, and rearrange the result to name something you might find in your refrigerator. Hint: The answer is a two-word phrase.
Submit Your Answer
If you know the answer to next week's challenge, submit it here. Listeners who submit correct answers win a chance to play the on-air puzzle. Important: Include a phone number where we can reach you Thursday at 3 p.m. Eastern.
Kids are naturally very interested in money and financial transactions as they apply to getting things they want. This is a terrific teachable moment, and money-related activities and games are interesting even for younger kids with just basic math skills.
To get kids interested in money and aware of the importance of budgeting and planning, parents can:
- Play board games such as Life, Monopoly and even The Farming Game that are easy for older children or even younger kids with help. For older kids and teens, Daytrader, a board game about the stock market, is a great option.
- Have an allowance that the child or children are responsible for choosing to use or spend. If you aren’t comfortable with paying a child to do activities around the home, make the allowance based on extra activities and not on what you consider the basics.
- Start a bank account. Take your child to the bank and let him or her open an account in their own name. Most banks have an incentive program for kids, and it’s a great way to learn about earning interest and saving.
- Model budgeting. Instead of just going to the store and paying with credit or debit card, bring cash and talk to children about how you make choices to stay within the budget. Having the money in hand is more concrete for children.
- Set goals as a family. This could be for a family vacation or something special for the entire household, such as new computers or televisions. When kids see Mom and Dad saving, they are more likely to see the benefit in this as they get older.
It is important to keep everything age appropriate for the child. The more comfortable children are with budgeting and planning when it comes to finances, the easier it will be for them when they get older.
Coordinates: 51°27′29″N 0°33′07″W / 51.458, -0.552
Wraysbury (archaic spelling Wyrardisbury) is a village in Berkshire, England. It is located in the very east of the county, in the part that was in Buckinghamshire until 1974. It sits on the northern bank of the River Thames, situated some 22 miles (35 km) west of London.
The village name is Anglo-Saxon in origin and means 'Waerheard's town'. In the Domesday Book of 1086 the village is recorded as Wirecesberie. A nearby pub, the Bells of Ouseley in Old Windsor, refers to another archaic spelling of Wraysbury.
The village was a portion of hunting grounds when the Saxons resided at Old Windsor. New Windsor was built in 1110 by King Henry I and he moved in in 1163. The lands around Wraysbury were held by a number of noblemen.
St Andrew's Church in Wraysbury
Magna Carta Island and Ankerwyke
Magna Carta Island, in the parish of Wraysbury, was the location of the sealing of the Magna Carta in 1215.
On the Ankerwycke estate in the village are the ruins of a Benedictine nunnery, founded in the reign of King Henry II. One of the 50 oldest trees in the United Kingdom can be found in Wraysbury. At around 2,000 years old, the Ankerwyke yew dates from the Iron Age, and is so wide that you can fit a Mini Cooper behind its trunk and not see it from the other side. Local legend says that Anne Boleyn once sat under the tree while residing at the Ankerwyke Estate, but this still has to be verified.
St Andrew's Church
The parish church of St Andrew is a Gothic structure, between Norman and Early English, supposed to have been built by King John. The parish registers date from the year 1734.
Changing face of Wraysbury in the 19th Century
The population of Wraysbury remained relatively static during the 19th century, with a slight increase between the 1801 return of 616 and the final census of the century giving a population figure of 660. This compares to the early part of the 21st century with population figures for Wraysbury, standing at 3,641 in the 2001 census.
For centuries, agricultural and mill work had been the principal areas of employment for the villagers and as late as 1831, census returns show that of the 135 families in the village, 62 were employed in agriculture while 68 made their living in the Mills.
This compares to the most recent census, where around 12% of the population work from home and the average distance travelled to work is now 14.24 km.
The Wraysbury Enclosure
The Enclosure of the Parish of Wraysbury was ordered by a private Enclosure Act of 1799 and was signed by the commissioners in 1803. The map of the village was redrawn by Thomas Bainbridge and shows the distribution of the lands in the following the enclosure.
Prior to this, the Common Lands of the village were owned by the Lord of the Manor of Wraysbury, at that time John Simon Harcourt, the Church, and the Trustees of William Gyll esq., although, as common land, they were subject to legal rights of pasture and grazing for copyholders and other tenants. In addition to those with legal rights over the land, the poor of the district would have had ‘real’ or ‘customary rights’, for example to feed their livestock or gather wood for fuel.
The only ones benefiting from enclosure were those who could show legal rights over the common land, such as copyholders and tenants of the manor. The enclosure enshrined their rights, converting ‘rights of common’ and allocating an area of land commensurate to their rights, as close to their farmhouse as was convenient. The poor were overlooked in this process, and were no longer able to forage for fuel or graze their animals.
The smaller landowners of Wraysbury to benefit from Enclosure were people such as Nathanial Wilmot, Nathanial Matthews, Shadrach Trotman and Thomas Buckland, all of whose names had previously appeared on the Wraysbury Court rolls as copyhold owners.
Coming of the Railway
The village saw another major change in 1848 with the arrival of the railway which opened up employment opportunities and afforded the chance to travel easily and quickly to and from the village. In the History of Wraysbury published in 1862, G.W.J. Gyll extolled the benefits to the village:
“Railways have much improved the locality and the condition of the people also, and it is a powerful solvent to diminish provincial rusticity, local and self-importance; class prejudice and all the elements of isolation melt away in its presence. The railway through our parish has been of great use to it; has enhanced the value of property, as is the case wherever such a project has been executed, despite the fears of those who repressed the enterprise.”
William Thomas Buckland was the local surveyor and valuer employed to handle the compensation claims resulting from the purchases of land for the new railway. This business of Buckland & Sons grew into an estate agency, which had an office in Windsor High Street for the following 150 years.
New Road and Suspension Bridge
“Where is Wraysbury? I can scarce find it on the map,” asked an associate of G.W.J. Gyll. Once the railway had put the village on the map, the next step was to improve road access, and more importantly, to alleviate the adverse effects of the annual floods which frequently resulted in the village being cut off from the rest of the county. Lord of the Manor, George Harcourt suggested that a new road should be built on higher ground from Bowry’s Barn to the Colne Bridge, to replace the old road which ran along ditches susceptible to flooding. The 1848 Tithe Map, drawn by surveyor WT Buckland showing the proposed route of the new road, can be seen at the Centre for Buckinghamshire Studies in Aylesbury. Harcourt also suggested a replacement for the old Long Bridge over the River Colne should be built, and a new suspension bridge, designed and paid for by Harcourt, was built by civil engineer Mr Dredge.
Baptist Chapel in Wraysbury
Non-Conformists in Wraysbury
The only place of worship in Wraysbury until 1827 was the Anglican Church of St Andrew. Local farmer, surveyor and auctioneer, William Thomas Buckland, wishing to provide an alternative place of worship for non-conformists, built the Wraysbury Baptist Chapel to his own design. The original Baptist meeting place was opened in 1827 and WT Buckland was the principal minister until his death some 40 years later. Gyll, in his History of Wraysbury, described the establishment of the chapel:
Much praise is to be given to the officiating minister of the Baptists in Wraysbury, Mr. William Thomas Buckland, who exercises his vocation at the chapel here to a well disposed and confiding auditory, while to his wife and family are entrusted the religious education of the Baptist flock.
The new chapel, with its elegant slender tower, was opened on 16 October 1862; the building works had cost around £800. The striking terracotta relief panel, The City of Refuge, on the front elevation of the chapel was created by the renowned Doulton & Co artist George Tinworth and is signed with his monogram.
After the death of WT Buckland, James Doulton, his son-in-law and a cousin of Sir Henry Doulton, took over the preaching duties. Later, James's son-in-law, the Reverend Arthur Gostick Shorrock, took over the duties. Arthur had been a student preacher in Wraysbury in the 1880s, after which he spent 35 years in missionary work in Shaanxi, China.
Wraysbury Today
Due to the various gravel pits, the River Thames, lakes and reservoirs, Wraysbury has plenty of wildlife and wonderful walks.
The village has two railway stations, Wraysbury railway station and Sunnymeads railway station on the line from Windsor to London Waterloo.
In June, Wraysbury holds its annual fete, where stands such as those of the local vintage and classic car clubs show off their members' vehicles. There are also activities for children and a tug of war held by the scouts, beavers and cubs, along with the stands of local charities, the local school (usually giving out ice creams) and, of course, the church. Wraysbury Cricket Club, on the village green, hosted the MCC in 2008.
Famous Residents
- The National Archives documents online website. Crown copyright material is reproduced with the permission of the Controller, Office of Public Sector Information (OPSI).
- The Heritage Trees of Britain and Northern Ireland, Jon Stokes and Donald Roger: The Tree Council [ISBN 1-84119-959-1].
- Parishes: Wyrardisbury or Wraysbury, A History of the County of Buckingham: Volume 3, W. Page (Editor), 1925, pp. 320–325.
- "Ankerwycke Burned Down", The New York Times: Picture Section Rotogravure: Part 1, Page 15, September 19, 1915, Sunday, <http://query.nytimes.com/mem/archive-free/pdf?res=9E00E4DF1431E733A0575AC1A96F9C946496D6CF>
- History of the Parish of Wraysbury, Ankerwycke Priory, and Magna Charta Island; with the History of Horton, and the town of Colnbrook, Bucks., G.W.J. Gyll, 1862, London: H. G. Bohn. Online version at Google Books [OCLC: 5001532].
- National Statistics website. Crown copyright material is reproduced with the permission of the Controller, Office of Public Sector Information (OPSI).
- The History of Buckland & Sons, Edward Barry Bowyer FRICS (1973) © STEAM 2005.
- History of the Auction, Brian Learmount, Iver: Barnard & Learmont, 1985 [ISBN 0951024000].
- The Baptist Magazine, J. Burditt and W. Button: Baptist Missionary Society, 1862, p. 779. Online version at Google Books.
- The Doulton Lambeth Wares, Desmond Eyles and Louise Irvine: Richard Dennis, Shepton Beauchamp, 2002, p. 49.
External Links
When preparing a case study, the key to success is to provide a concise introduction to the topic, and then to give details of what is to follow. The introduction should present the main ideas about the subject and the primary objective of the document. Then follow with the specifics of how you arrived at this point in the paper.

A brief, concise description of the case study "Walt Disney and the 1941 Animators' Strike" can also be useful. Many examples of case studies are given in reports and journals. It is not necessary to include all the details in the introduction; just the most important information that relates to your study will do. In this section, also include a brief background of what your study is about. Finally, insert a short, one- or two-sentence summary of each case at the end of the introduction.

For each case you have chosen to examine, write a brief, simple synopsis. This helps guide the reader through the entire study, and it can also serve as the main focal point of your conclusion. Most cases have four parts: the Problem (the question), the Evidence (the source), the Discussion (the conclusion), and the Recommendation. Write each of these sections in a single paragraph, beginning with a short introduction that explains the nature of your research and your findings.
The Conclusion. In this paragraph, present three of your findings and discuss what their implications are for your readers. Make sure you draw a conclusion based on the scope and type of study, not merely a summary of everything you did.

Review the Introduction. Go back over the introduction to your report. Make sure it is clear and informative, and check that it does not make any false claims about your work. If you can, include a section at the end that offers a detailed review of what you learned from your research.

Review the Discussion. In this part, you will summarize the major conclusions you came to from your research. Write these conclusions in a way that gives the reader the sense that you have completed your argument and there is little left to say.

Review your Case Summary. The Case Summary should contain a summary of your findings, a discussion of their impact, and suggestions for further investigation. Include your references if they are relevant to the case you are reviewing.

Finally, review the final note or reference. These notes are not necessary to complete your case studies, but they exist to summarize what was learned about your chosen topic.
Once you have completed the first draft of your case study, take some time to edit the paper and remove the mistakes you made during the writing process. This final copy should be completely clear, accurate, and concise.

When you have finished your case study, begin preparing the second draft by reviewing the document and making minor improvements. Make sure you read through the whole report several times.

When you feel confident that you have succeeded with the first draft, it is time to turn your second draft into a publication. Begin by editing the introduction and conclusion. You can use book-writing software to help you with this, such as WordPad Pro.

When you have completed the second draft, go over your paper once again, adding your own comments and editing the text as needed. You should be pleased with the finished product.

If you find that your writing skills are not quite up to par yet, do more research, then take a course in writing.
If you intend to write a case study, you need to be careful to choose a style that is easy to understand and makes sense. One thing you need to avoid is making your paper feel like a sales pitch or some other kind of gimmick. You are trying to give your reader a way to learn something, not just skim quickly and leave.

There are many formats you can use for your PDF file. Some of them include a table of contents, bullet points, sub-headings, headings, and footnotes. All of these can help give the reader a sense of your information and make it easier for them to read the entire document. Below are some suggestions on what to include in a case study.

The first thing you need to do before you start your case study is to decide what kind of layout you want for your PDF file. You may wish to use an eBook for your document, or perhaps an online version of your case study. You might want to write it as an HTML document, or even in Word. Once you have a good idea in mind, you need to determine what kind of formatting you want. There are many options, but you need to choose the one that will be best for your case study.

One of the biggest mistakes people make when writing a case study is to include only a narrow slice of the relevant information. It is best to include all of the essential information about the case, covering any and all of its relevant aspects. Even if you choose to focus on a particular section of the case, all of the pertinent details should still be included. This will make the document easier to scan and understand, and it will give your readers a more complete picture of your subject.

There are many formats you can use to write your PDF file. You do not need to write your case study in the same layout in which you wrote your book or report. You can use different fonts, line breaks, italics, bold, and everything else you would find in an academic document. This is how to write a case study that conveys valuable information to your readers and shows them how your subject applies to real-life situations.

You may also find that there are several different styles to choose from when you are ready to publish your case study. You may be able to use word-processing software, or even an eBook. These tools are available for free or very cheaply, so you may want to look into them.

When you are creating your own layout, you should keep the information you are going to include about the case in a single file. This will make it easier to review and print the document at a later time. There are many advantages to doing this, but the main one is that you will be able to review the document at your leisure and quickly find the information you need, without being bogged down by having to download everything again.
There are many ways to find tips and resources on how to write a case study. You can search the web or look for examples online. You can also look in your library; most often you will find that books are the most reliable way to start your search.
According to a new study from the Pew Research Center, 24% of teenagers (ages 13–17) are online “almost constantly.” The intense way in which young people are connected is enabled largely by the use of smartphones. Nearly 75% of teens have access to a smartphone and 30% have a basic phone. These phones and other mobile devices have become a primary driver of teen internet use: 91% of teens go online from mobile devices at least occasionally.
For the current generation of teens, gaming, video chatting, text messaging and social networking are a vital means of self-expression and a central aspect of their social lives. The digital age has changed—both positively and negatively—the way teenage friendships are formed and maintained.
The research also found the following:
- For today’s teens, friendships may start digitally.
- Text messaging is a key component of young people’s daily interactions.
- Video games play an important role in the development and maintenance of boys’ friendships and online gaming builds stronger connections between friends.
- Teen friendships are both strengthened and challenged by social media.
- Phone calls are less common early in a friendship, but are an important way that teens talk with their closest friends.
11 and up
Questions to Start the Conversation
- Does this information reflect your own experiences or the experiences of your friends? Why or why not?
- In what ways is being connected “almost constantly” both positive and negative?
- What do you think life was like for teens before technology?
- Do you say and share things using technology (social media, texting, etc.) that you would not share in person? How so?
- Is there anything about your technology habits you would like to change and if so, what would that be?
Questions to Dig Deeper
- What do you see happening in the future with technology and social media?
- I know parents often worry about their teens being “addicted” to technology or young people losing their social skills because of all the technology. What’s your take on these concerns?
- Do you ever see “cyberhate” online? What do you think can be done about it?
(The "Related to this Resource" section and the Pew Research Center report Teens, Technology and Friendships provide articles and information that address these questions.)
Ideas for Taking Action
Ask: What can we do to help? What actions might make a difference?
- Using the Pew research questions and others of your own, create a survey about teens and technology and distribute it to classmates and friends. Compile and share the results.
- Working with other students and school staff, develop a public awareness campaign about cyberbullying and what to do about it.
- As a family, decide on a specified amount of time to go “off the grid” (i.e. not use technology) in order to gain insights into what is gained and lost by being connected. Share your reflections through a letter or essay on the topic. | <urn:uuid:daf69799-f034-48c6-9321-7facf837aa8c> | CC-MAIN-2018-43 | https://www.adl.org/education/resources/tools-and-strategies/table-talk/teenagers-and-technology | s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583510749.37/warc/CC-MAIN-20181016093012-20181016114512-00351.warc.gz | en | 0.940881 | 633 | 3.28125 | 3 |
Brown: Nature is improving our forests; don’t interfere
You’ve probably heard it said that our current forest-fire risk is the result of suppressing fires, allowing fuel to build up. You probably also remember the fear, when the recent pine-bark-beetle infestation hit, that all the trees would die and we would be at great risk of fire. As it turned out, here in Summit County, far fewer trees died than anticipated. And once the needles dropped from the trees that did die, the fire risk dropped dramatically, probably to less than that from live trees.
So, we still have beautiful forests. Sure, there are a lot of dead trees, but as you travel our wonderful trails, you’ll still revel in the beauty. And, wherever you do notice dead trees, you’ll also notice young trees underneath. Some are lodgepoles, but many are fir or aspen or a mix. Could it be that nature did a little thinning job on our same-age, single-species lodgepole stands? Can we now expect these stands to evolve into mixed-age, mixed-species forests far less susceptible to future insect infestations and even more beautiful?
One problem: expecting high mortality in earlier-hit beetle-kill areas, the Forest Service developed the infamous Ophir and other clear-cut plans and is currently razing our forests. Such clear-cutting will not mimic the benevolent fires of the pre-suppression past, but rather the catastrophic, widespread fires of 140 years ago and the extensive clear-cutting of the 1930s that created our monoculture lodgepole forests. The result: regenerating monoculture, inviting disastrous future insect epidemics and fires and no forests in the meantime — a long meantime.
Take an elected official on a hike to see what we stand to lose and the way nature will take care of itself — if we let it. Come and speak out Thursday noon at the Silverthorne Library.
Rising energy costs, increased awareness of climate change issues, pending global carbon legislation and widely available computing power have meant that more sustainability software services have emerged in recent years than online dating sites. But the next trend for these companies, which spend their days crunching data about energy consumption, water resources and carbon emissions, is to turn to cloud computing and next-generation analytics tools to manage the massive influx of real-time data emerging around sustainability.
Sustainable Data Overload
There are a few trends driving the need for ever-better information and analytics tools surrounding consumption data. First, there is the growth of China and India: over the next 40 years the global population will explode, adding 2 billion more people to the planet, largely in these rapidly developing countries. Managing the resources (energy consumption and water usage) of an increasingly resource-constrained world will be more necessary than ever.
Secondly, there’s the issue tightly aligned with resource restriction: climate change. Legislation mandating that companies reduce their carbon emissions in order to reduce global warming has been enacted in many countries, and there’s pending cap-and-trade legislation in the U.S. Companies will eventually have to know their carbon emissions, and reduce them, likely via one of the many carbon software providers out there.
Then there’s the emergence of smarter systems. Thanks to dirt-cheap chips and network technology, sensors, always-on communication networks and algorithms have the ability to grab and process information about anything at any time. As IBM’s Smarter Planet initiative (and our Green:Net conference series) have highlighted, this information technology has a unique ability to monitor and manage resources for companies and make processes more efficient.
For example, the digitization of the electricity industry, by some estimates, could deliver a massive 100 petabytes over the next 10 years. Utilities are in the process of figuring out what software providers and tools will help them manage this influx in real time and some smart grid early adopters are turning to next-gen analytics tools and cloud computing. The Tennessee Valley Authority (TVA) and the North American Electric Reliability Corp. (NERC), for instance, have turned to open-source software framework Hadoop, which was developed for analyzing large data sets generated by web sites, to aggregate and process data about the health of the power grid.
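To get a rough sense of the scale involved, the article's 100-petabyte-over-10-years estimate can be converted into a sustained average ingest rate. The sketch below is a back-of-the-envelope calculation only; the 100 PB figure is an estimate from the article, not a measurement, and the decimal byte convention is an assumption:

```python
# Back-of-the-envelope: sustained data rate implied by an estimate of
# 100 petabytes of grid data accumulated over 10 years.
PETABYTE = 10**15                      # bytes (decimal convention)
total_bytes = 100 * PETABYTE
seconds = 10 * 365.25 * 24 * 3600      # ten years in seconds

rate_gb_per_s = total_bytes / seconds / 10**9
print(f"average ingest rate: {rate_gb_per_s:.2f} GB/s")  # → about 0.32 GB/s
```

Roughly a third of a gigabyte per second, around the clock, which helps explain why utilities are reaching for big-data frameworks like Hadoop rather than conventional databases.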
Carbon Software 2.0
Players like SAP, Hara, Enviance, and ENXSuite (formerly Carbonetworks) have established themselves as leaders in the sustainability software market, and they’re knee-deep in this trend — let’s call it carbon software 2.0 — of using cloud computing and next-gen “big data” tools for on-demand data storage and analytics. The cloud can uniquely support sustainability software as a service and provide real-time data crunching for the “hundreds of millions of records” that execs like Anirban Chakrabarti, SVP and General Manager of SAP’s Carbon Impact business unit, say their services need to manage.
SAP has been acting aggressively to build out a next-generation version of its cloud-based carbon software tool. Chakrabarti, who sold his startup Clear Standards to SAP last year, told me in an interview that SAP has been working on systematically rebuilding Carbon Impact around a SAP cloud-based platform that runs on Amazon, and hopes to launch “Carbon Impact 5.0” this July or August. Chakrabarti says that leveraging Amazon’s cloud computing service gives SAP access to a ton of cloud-based, on-demand computing power, storage, memory and hosting. In addition, Carbon Impact 5.0 will leverage “in-memory database” management systems, which use main memory instead of disk storage memory, and which are commonly used in services that need quick response times, like 9-11 operators.
“We believe this is where the carbon solution industry is heading,” says Chakrabarti. That’s because there are a massive number of data points that can be collected about resource consumption, and users are looking for complex reports, delivered in real-time, demonstrating how they can reduce that consumption. For example, Chakrabarti says that for some of its manufacturing clients it collects data about machine energy usage in six-minute intervals updates.
Startup Hara, which only officially launched last year and is backed by venture firm Kleiner Perkins, is also working with cloud provider OpSource for its database. Hara also says it plans to work with a yet-to-be-determined “data integration provider and a data analytics solution.” Hara CEO Amit Chatterjee says he thinks the carbon software industry will one day be managing 12 MB per second, the equivalent of 1 million SMS messages.
A Competitive Edge
The result of this shift toward the cloud and next-generation analytics is both a growing barrier to entry for the carbon software market and a competitive edge for the players that can already provide these heavy-duty, speedy services. As SAP’s Chakrabarti explains it, “We think the cloud and these analytics services are our competitive edge,” adding that it has taken no small effort to prepare to launch Carbon Impact 5.0 in the cloud. “It’s something that a smaller startup won’t have the resources for.”
Some kind of differentiator for the carbon software market is actually a relief. There’s been an overload of carbon software startups that have emerged over the past couple of years, and research firm Verdantix put together an extensive list of 22 carbon management software companies. Many of these companies have been backed by venture capital firms (that will likely lose money), and even software vets like Thomas Siebel — the guy who sold Siebel Systems to Oracle for billions of dollars — have been enticed in. His startup, C3, has former Secretary of State Condoleezza Rice and former Senator and Secretary of Energy Spencer Abraham as directors.
The good news is that when the U.S. one day actually puts a price on carbon (depending on how quickly the energy bill can get through Congress), robust, mature software-as-a-service carbon tools will be ready and waiting for the influx of users.
Acute Radiation Syndrome: A Fact Sheet for Clinicians
Download the Brochure (PDF)
This fact sheet is for clinicians. If you are a patient, we strongly advise that you consult with your physician to interpret the information provided as it may apply to you. Information on Acute Radiation Syndrome (ARS) for members of the public can be found at http://emergency.cdc.gov/radiation/ars.htm
Acute Radiation Syndrome (ARS) (sometimes known as radiation toxicity or radiation sickness) is an acute illness caused by irradiation of the entire body (or most of the body) by a high dose of penetrating radiation in a very short period of time (usually a matter of minutes). The major cause of this syndrome is depletion of immature parenchymal stem cells in specific tissues. Examples of people who suffered from ARS are the survivors of the Hiroshima and Nagasaki atomic bombs, the firefighters who first responded after the Chernobyl Nuclear Power Plant event in 1986, and some workers unintentionally exposed to sterilization irradiators.
The required conditions for Acute Radiation Syndrome (ARS) are:
- The radiation dose must be large (i.e., greater than 0.7 Gray (Gy) or 70 rads).
- Mild symptoms may be observed with doses as low as 0.3 Gy or 30 rads.
- The dose usually must be external (i.e., the source of radiation is outside of the patient’s body).
- Radioactive materials deposited inside the body have produced some ARS effects only in extremely rare cases.
- The radiation must be penetrating (i.e., able to reach the internal organs).
- High energy X-rays, gamma rays, and neutrons are penetrating radiations.
- The entire body (or a significant portion of it) must have received the dose.
- Most radiation injuries are local, frequently involving the hands, and these local injuries seldom cause classical signs of ARS.
- The dose must have been delivered in a short time (usually a matter of minutes).
- Fractionated doses are often used in radiation therapy. These are large total doses delivered in small daily amounts over a period of time. Fractionated doses are less effective at inducing ARS than a single dose of the same magnitude.
The three classic ARS Syndromes are:
- Bone marrow syndrome (sometimes referred to as hematopoietic syndrome): the full syndrome will usually occur with a dose between 0.7 and 10 Gy (70–1000 rads), though mild symptoms may occur as low as 0.3 Gy or 30 rads.
- The survival rate of patients with this syndrome decreases with increasing dose. The primary cause of death is the destruction of the bone marrow, resulting in infection and hemorrhage.
- Gastrointestinal (GI) syndrome: the full syndrome will usually occur with a dose greater than approximately 10 Gy (1000 rads) although some symptoms may occur as low as 6 Gy or 600 rads.
- Survival is extremely unlikely with this syndrome. Destructive and irreparable changes in the GI tract and bone marrow usually cause infection, dehydration, and electrolyte imbalance. Death usually occurs within 2 weeks.
- Cardiovascular (CV)/ Central Nervous System (CNS) syndrome: the full syndrome will usually occur with a dose greater than approximately 50 Gy (5000 rads) although some symptoms may occur as low as 20 Gy or 2000 rads.
- Death occurs within 3 days. Death likely is due to collapse of the circulatory system as well as increased pressure in the confining cranial vault as the result of increased fluid content caused by edema, vasculitis, and meningitis.
The four stages of ARS are:
- Prodromal stage (N-V-D stage): The classic symptoms for this stage are nausea and vomiting, as well as anorexia and possibly diarrhea (depending on dose), which occur from minutes to days following exposure. The symptoms may last (episodically) for minutes up to several days.
- Latent stage: In this stage, the patient looks and feels generally healthy for a few hours or even up to a few weeks.
- Manifest illness stage: In this stage the symptoms depend on the specific syndrome (see Table 1) and last from hours up to several months.
- Recovery or death: Most patients who do not recover will die within several months of exposure. The recovery process lasts from several weeks up to two years.
These stages are described in further detail in Table 1
Table 1. Acute Radiation Syndromes

Hematopoietic (Bone Marrow) Syndrome
- Dose*: > 0.7 Gy (> 70 rads); mild symptoms may occur as low as 0.3 Gy (30 rads).
- Prodromal stage: Symptoms are anorexia, nausea and vomiting. Onset occurs 1 hour to 2 days after exposure; the stage lasts for minutes to days.
- Latent stage: Stem cells in bone marrow are dying, although the patient may appear and feel well. The stage lasts 1 to 6 weeks.
- Manifest illness stage: Symptoms are anorexia, fever, and malaise. A drop in all blood cell counts occurs for several weeks. The primary cause of death is infection and hemorrhage. Survival decreases with increasing dose; most deaths occur within a few months after exposure.
- Recovery: In most cases, bone marrow cells will begin to repopulate the marrow, and there should be full recovery for a large percentage of individuals from a few weeks up to two years after exposure. Death may occur in some individuals at 1.2 Gy (120 rads); the LD50/60† is about 2.5 to 5 Gy (250 to 500 rads).

Gastrointestinal (GI) Syndrome
- Dose*: > 10 Gy (> 1000 rads); some symptoms may occur as low as 6 Gy (600 rads).
- Prodromal stage: Symptoms are anorexia, severe nausea, vomiting, cramps, and diarrhea. Onset occurs within a few hours after exposure; the stage lasts about 2 days.
- Latent stage: Stem cells in bone marrow and cells lining the GI tract are dying, although the patient may appear and feel well. The stage lasts less than 1 week.
- Manifest illness stage: Symptoms are malaise, anorexia, severe diarrhea, fever, dehydration, and electrolyte imbalance. Death is due to infection, dehydration, and electrolyte imbalance, and occurs within 2 weeks of exposure.
- Recovery: The LD100‡ is about 10 Gy (1000 rads).

Cardiovascular (CV)/Central Nervous System (CNS) Syndrome
- Dose*: > 50 Gy (> 5000 rads); some symptoms may occur as low as 20 Gy (2000 rads).
- Prodromal stage: Symptoms are extreme nervousness and confusion; severe nausea, vomiting, and watery diarrhea; loss of consciousness; and burning sensations of the skin. Onset occurs within minutes of exposure; the stage lasts for minutes to hours.
- Latent stage: The patient may return to partial functionality. The stage may last for hours but often is less.
- Manifest illness stage: Symptoms are the return of watery diarrhea, convulsions, and coma. Onset occurs 5 to 6 hours after exposure; death occurs within 3 days of exposure.
- Recovery: No recovery is expected.

* The absorbed doses quoted here are “gamma equivalent” values. Neutrons or protons generally produce the same effects as gamma, beta, or X-rays but at lower doses. If the patient has been exposed to neutrons or protons, consult radiation experts on how to interpret the dose.
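The dose thresholds in Table 1 lend themselves to a simple lookup. The sketch below is purely illustrative (the function name and return strings are mine, and it assumes a whole-body, gamma-equivalent dose per the footnote); it is not a clinical tool:

```python
def expected_syndrome(dose_gy: float) -> str:
    """Map a whole-body, gamma-equivalent dose in gray (Gy) to the
    classic ARS syndrome expected at that dose, using the thresholds
    in Table 1. Illustrative only -- not for clinical use."""
    if dose_gy >= 50:
        return "cardiovascular / central nervous system syndrome"
    if dose_gy >= 10:
        return "gastrointestinal syndrome"
    if dose_gy >= 0.7:
        return "hematopoietic (bone marrow) syndrome"
    if dose_gy >= 0.3:
        return "mild prodromal symptoms possible"
    return "full ARS not expected"

print(expected_syndrome(3.0))  # hematopoietic (bone marrow) syndrome
```

Note that the syndromes overlap in practice (a patient with GI syndrome also has hematopoietic damage), so a single-label lookup only captures the dominant syndrome at each dose range.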
Cutaneous Radiation Syndrome (CRS)
The concept of cutaneous radiation syndrome (CRS) was introduced in recent years to describe the complex pathological syndrome that results from acute radiation exposure to the skin.
ARS usually will be accompanied by some skin damage. It is also possible to receive a damaging dose to the skin without symptoms of ARS, especially with acute exposures to beta radiation or X-rays. Sometimes this occurs when radioactive materials contaminate a patient’s skin or clothes.
When the basal cell layer of the skin is damaged by radiation, inflammation, erythema, and dry or moist desquamation can occur. Also, hair follicles may be damaged, causing epilation. Within a few hours after irradiation, a transient and inconsistent erythema (associated with itching) can occur. Then, a latent phase may occur and last from a few days up to several weeks, when intense reddening, blistering, and ulceration of the irradiated site are visible.
In most cases, healing occurs by regenerative means; however, very large skin doses can cause permanent hair loss, damaged sebaceous and sweat glands, atrophy, fibrosis, decreased or increased skin pigmentation, and ulceration or necrosis of the exposed tissue.
Triage: If radiation exposure is suspected:
- Secure ABCs (airway, breathing, circulation) and physiologic monitoring (blood pressure, blood gases, electrolyte and urine output) as appropriate.
- Treat major trauma, burns and respiratory injury if evident.
- In addition to the blood samples required to address the trauma, obtain blood samples for CBC (complete blood count), with attention to lymphocyte count, and HLA (human leukocyte antigen) typing prior to any initial transfusion and at periodic intervals following transfusion.
- Treat contamination as needed.
- If exposure occurred within 8 to 12 hours, repeat CBC, with attention to lymphocyte count, 2 or 3 more times (approximately every 2 to 3 hours) to assess lymphocyte depletion.
The diagnosis of ARS can be difficult to make because ARS causes no unique disease. Also, depending on the dose, the prodromal stage may not occur for hours or days after exposure, or the patient may already be in the latent stage by the time they receive treatment, in which case the patient may appear and feel well when first assessed.
If a patient received more than 0.05 Gy (5 rads) and three or four CBCs are taken within 8 to 12 hours of the exposure, a quick estimate of the dose can be made (see Ricks et al. for details). If these initial blood counts are not taken, the dose can still be estimated by using CBC results over the first few days. It would be best to have radiation dosimetrists conduct the dose assessment, if possible.
If a patient is known to have been or suspected of having been exposed to a large radiation dose, draw blood for CBC analysis with special attention to the lymphocyte count, every 2 to 3 hours during the first 8 hours after exposure (and every 4 to 6 hours for the next 2 days). Observe the patient during this time for symptoms and consult with radiation experts before ruling out ARS.
If no radiation exposure is initially suspected, you may consider ARS in the differential diagnosis if a history exists of nausea and vomiting that is unexplained by other causes. Other indications are bleeding, epilation, or abnormally low white blood cell (WBC) and platelet counts a few days or weeks after unexplained nausea and vomiting. Again, consider CBC and chromosome analysis and consultation with radiation experts to confirm the diagnosis.
Initial Treatment and Diagnostic Evaluation
Treat vomiting, and repeat CBC analysis, with special attention to the lymphocyte count, every 2 to 3 hours for the first 8 to 12 hours following exposure (and every 4 to 6 hours for the following 2 or 3 days). Sequential changes in absolute lymphocyte counts over time are demonstrated below in the Andrews Lymphocyte Nomogram (see Figure 1). Precisely record all clinical symptoms, particularly nausea, vomiting, diarrhea, and itching, reddening or blistering of the skin. Be sure to include time of onset.
Figure 1: Andrews Lymphocyte Nomogram
From Andrews GA, Auxier JA, Lushbaugh CC. The Importance of Dosimetry to the Medical Management of Persons Exposed to High Levels of Radiation. In Personal Dosimetry for Radiation Accidents. Vienna : International Atomic Energy Agency; 1965.
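The sampling protocol above (CBC every 2 to 3 hours for the first 8 to 12 hours, then every 4 to 6 hours for the next 2 days) can be sketched as a simple timetable generator. This is an illustrative scheduling aid only; the exact interval values chosen here are hypothetical picks at the short end of the stated ranges, and actual sampling must be directed by the treating clinician.

```python
from datetime import datetime, timedelta

def cbc_schedule(exposure_time, early_interval_h=2, early_window_h=8,
                 late_interval_h=4, late_window_h=48):
    """Generate suggested CBC draw times after a known exposure.

    Intervals follow the text above: every 2-3 hours for the first
    8-12 hours, then every 4-6 hours for the next 2 days. The defaults
    are the shorter ends of those ranges and are illustrative only.
    """
    draws = []
    t = exposure_time
    end_early = exposure_time + timedelta(hours=early_window_h)
    end_late = end_early + timedelta(hours=late_window_h)
    while t <= end_early:          # frequent draws in the first hours
        draws.append(t)
        t += timedelta(hours=early_interval_h)
    while t <= end_late:           # less frequent draws over the next 2 days
        draws.append(t)
        t += timedelta(hours=late_interval_h)
    return draws

exposure = datetime(2024, 1, 1, 9, 0)
schedule = cbc_schedule(exposure)
print(len(schedule), schedule[0], schedule[-1])
```

Each draw's lymphocyte count would then be plotted against the nomogram in Figure 1.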
Note and record areas of erythema. If possible, take color photographs of suspected radiation skin damage. Consider tissue and blood typing, and initiating viral prophylaxis. Promptly consult with radiation, hematology, and radiotherapy experts about dosimetry, prognosis, and treatment options. Call the Radiation Emergency Assistance Center/Training Site (REAC/TS) at (865) 576-3131 (M-F, 8 am to 4:30 pm EST) or (865) 576-1005 (after hours) to record the incident in the Radiation Accident Registry System.
After consultation, begin the following (as indicated):
- supportive care in a clean environment (if available, the use of a burn unit may be quite effective)
- prevention and treatment of infections
- stimulation of hematopoiesis by use of growth factors
- stem cell transfusions or platelet transfusions (if platelet count is too low)
- psychological support
- careful observation for erythema (document locations), hair loss, skin injury, mucositis, parotitis, weight loss, or fever
- confirmation of initial dose estimate using chromosome aberration cytogenetic bioassay when possible. Although resource intensive, this is the best method of dose assessment following acute exposures.
- consultation with experts in radiation accident management
For More Help
Technical assistance can be obtained from the Radiation Emergency Assistance Center/Training Site (REAC/TS) at (865) 576-3131 (M-F, 8 am to 4:30 pm EST) or (865) 576-1005 (after hours), or on their web site at http://www.orau.gov/reacts/, and the Medical Radiobiology Advisory Team (MRAT) at (301) 295-0316.
Also, more information can be obtained from the CDC Health Alert Network at emergency.cdc.gov or by calling (800) 311-3435.
Berger ME, O'Hare FM Jr, Ricks RC, editors. The Medical Basis for Radiation Accident Preparedness: The Clinical Care of Victims. REAC/TS Conference on the Medical Basis for Radiation Accident Preparedness. New York: Parthenon Publishing; 2002.
Gusev IA, Guskova AK, Mettler FA Jr, editors. Medical Management of Radiation Accidents, 2nd ed. New York: CRC Press, Inc.; 2001.
Jarrett DG. Medical Management of Radiological Casualties Handbook, 1st ed. Bethesda, Maryland: Armed Forces Radiobiology Research Institute (AFRRI); 1999.
LaTorre TE. Primer of Medical Radiobiology, 2nd ed. Chicago: Year Book Medical Publishers, Inc.; 1989.
National Council on Radiation Protection and Measurements (NCRP). Management of Terrorist Events Involving Radioactive Material, NCRP Report No. 138. Bethesda, Maryland: NCRP; 2001.
Prasad KN. Handbook of Radiobiology, 2nd ed. New York: CRC Press, Inc.; 1995.
- The Gray (Gy) is a unit of absorbed dose and reflects an amount of energy deposited into a mass of tissue (1 Gy = 100 rads). In this document, the referenced absorbed dose is that dose inside the patient’s body (i.e., the dose that is normally measured with personal dosimeters).
- The referenced absorbed dose levels in this document are assumed to be from beta, gamma, or x radiation. Neutron or proton radiation produces many of the health effects described herein at lower absorbed dose levels.
- The dose may not be uniform, but a large portion of the body must have received more than 0.7 Gy (70 rads).
- Note: although the dose ranges provided in this document apply to most healthy adult members of the public, a great deal of variability of radiosensitivity among individuals exists, depending upon the age and condition of health of the individual at the time of exposure. Children and infants are especially sensitive.
- Collect vomitus in the first few days for later analysis.
EPA and Army Corps of Engineers Release New Rule to Protect Waterways
The Environmental Protection Agency and U.S. Army Corps of Engineers released a proposed rule this week to clarify which waterways are protected under the Clean Water Act.
Two Supreme Court decisions, one in 2001 and the other in 2006, made determining which waterways are protected under the CWA confusing and complex. The proposed rule doesn't expand waterways protected under the CWA, but only clarifies which ones are protected. Or as EPA head Gina McCarthy stated in an op-ed piece for the Huffington Post, "Our proposed rule will not add to or expand the scope of waters historically protected under the Clean Water Act."
About 60 percent of the stream miles in the U.S. only flow seasonally or after rain, and about 117 million people, one in three Americans, get drinking water from public systems that rely in part on those streams. The proposed rule would clarify that under the CWA most seasonal and rain-dependent streams are protected. It would also clarify that wetlands near rivers and streams are protected. Other waterways whose connections with downstream water are uncertain will be evaluated to determine whether the connection is significant. In addition, the proposed rule preserves the CWA exemptions and exclusions for agriculture.
To explain the proposed rule to the American people, the EPA released several videos. One of them is by McCarthy, who gave an overview on the rule. "The EPA is taking action to keep America's waterways clean and healthy," she said. The rule is "about protecting our natural resources" because "every sector of our economy depends on water." The other video is by EPA's Deputy Chief of Staff Arvin Ganesan who included an overview of the CWA. "The Clean Water Act worked. Our waterways were getting cleaner, but over the last 15 years a few complex court cases have tangled up these essential protections, making it unclear what waters are covered by the Clean Water Act," Ganesan said.
Continue reading at ENN affiliate, Triple Pundit.
River image via Shutterstock.
When you start Word, it gives you a blank document to let you start typing right away. Word makes some assumptions about how the document will look, so you don't need to worry about formatting at all unless you want to change the default settings. Here are the most important ones:
Throughout the remaining chapters about Word, you learn how to change these formatting options. For now, you can just focus on typing.
Typing Paragraphs and Creating Blank Lines
The key to having a happy typing experience is knowing when to press Enter. Follow these two rules for typing paragraphs of text:
Figure 3.1 illustrates these two rules.
Figure 3.1. Do not press Enter within a paragraph. Do press Enter at the end of the paragraph.
To create blank lines between your paragraphs, press Enter twice between each paragraph, once to end the paragraph you just typed and once to create the blank line. If you need several blank lines, just continue pressing Enter. If you press Enter too many times and need to delete a blank line, press the Backspace key. You'll learn much more about deleting in Chapter 4, "Managing Documents and Revising Text."
Figure 3.2 illustrates when to press Enter to create short lines of text and blank lines.
Figure 3.2. Press Enter to end short paragraphs and create blank lines.
As you type, you may see an occasional red or green wavy line under your text. These lines indicate possible spelling or grammatical errors. You'll learn how to use them (and hide them if they bother you) in Chapter 8, "Correcting Documents and Using Columns and Tables."
Word gives you default tab stops every one-half inch across the horizontal ruler. (If you don't see your rulers, choose View, Ruler.) Each time you press the Tab key, the insertion point jumps out to the next tab stop. Any text to the right of the insertion point moves along with it. Figure 3.3 shows the beginning of a memo in which the Tab key was pressed after the labels To:, From:, Date: and Re: to line up the text at the half-inch mark on the horizontal ruler.
Figure 3.3. Press the Tab key to push text out to the next tab stop.
If you press the Tab key too many times, press the Backspace key to delete the extra tabs.
You can also press the Tab key at the beginning of a paragraph to indent the first line by one-half inch. Figure 3.4 shows a document whose paragraphs are indented in this way.
Figure 3.4. Press the Tab key at the beginning of each paragraph to indent the first line.
Seeing Your Paragraph, Tab, and Space Marks
As you're typing your document, you may occasionally want to check whether you accidentally pressed Enter at the end of a line within a paragraph, or pressed Enter too many times between paragraphs. Or, maybe you think you may have pressed the Tab key one time too many, or typed an extra space between two words. You can use Word's Show/Hide feature to solve these mysteries. To turn it on, click the Show/Hide button on the Formatting toolbar (or press Ctrl+Shift+*). This is a toggle button, meaning that you click it once to turn it on, and again when you want to turn it off (see Figure 3.5).
Figure 3.5. The Show/Hide feature lets you see your paragraph, tab, and space marks.
The Show/Hide feature uses the paragraph mark symbol to indicate where you pressed Enter, a right arrow to show where you pressed the Tab key, and a dot to mark where you pressed the Spacebar.
Figure 3.5 shows a document that has an errant paragraph, tab, and space mark. The user accidentally pressed the Tab key a second time on the From: line, typed an extra space between the words designating and Fridays, and pressed Enter at the end of a line within a paragraph.
To delete any of these hidden characters, click immediately to the left of the character and press the Delete key. Figure 3.6 shows the same document after these three problems were fixed.
Figure 3.6. The extra paragraph, tab, and space marks have been deleted.
Typing onto the Next Page
As you're typing, Word calculates how many lines fit on a page. When the page you're on is full, Word automatically inserts a page break and starts another page. Figure 3.7 shows the break between two pages of text, as it appears in Print Layout view. (You'll learn about views in Chapter 5, "Viewing and Printing Your Documents.")
Figure 3.7. Word breaks pages for you.
Common Name: Mallard, Mallard Duck
Scientific Name: Anas platyrhynchos
A large group of ducks is called a raft, and mallards will frequently form large roosting colonies in the winter. This is not only a protection measure against predators, but also because favored food sources and open water are more scarce.
Large rafts of mallards are also common in all seasons when food sources are plentiful, such as in urban parks or popular nesting grounds. In prime feeding or nesting areas, mallards may mix with other duck species and waterfowl, including northern pintails, American wigeons, gadwalls, ring-necked ducks, American coots and Canada geese. So long as food remains plentiful, large mixed flocks are common, but the birds will begin to separate and seek better areas if resources run out.
Photo © Aske Holst
Despite its nutritional virtues, pretty color and pleasant scent, grapefruit has a reputation for tasting sour and bitter. Horticulturalists have worked hard to develop varieties that are sweet and tangy, rather than unpleasantly acidic. Their labors have happily borne fruit -- red, white and pink fruit, to be exact.
Blush Is Best
It's not entirely true that the redder the grapefruit, the sweeter the taste. Some deep red grapefruits can still be unpleasantly bitter, while blush-pink grapefruits feature a pleasingly sweet-tart taste. Nonetheless, both pink and red grapefruits are sweeter than white grapefruit varieties, as a general rule. Choose a grapefruit with at least a little red pigment to it for the sweetest fruit. You'll be boosting the health benefits, too -- red and pink grapefruit get their color in part from lycopene, a powerful antioxidant that may help fight cancer.
The Oro Blanco
Citrus fruits readily hybridize with each other, so new varieties crop up all the time. The Oro Blanco, whose name means "white gold," is a cross between a white grapefruit and another grapefruit relation called the pomelo. The resulting fruit is a bit larger than your standard grapefruit, with a deep golden peel, pale yellow flesh and a thick rind that is easy to peel. Oro Blancos are sweet and lack the balancing acidity of more common grapefruit varieties. Oro Blancos were patented by the University of California at Riverside and are grown exclusively in California. They may be harder to find than standard grapefruits, but when it's pure sweetness that you're after, they're worth the search.
Embedded sensors that track our lives
Improving the comfort and efficiency of daily life is easy with the range of connected objects now available. So-called "wearable technologies" turn clothes and accessories into intelligent objects with simple and discreet integrated sensors. By communicating with a dedicated smartphone application the sensors provide a wealth of useful information. Watches and other activity trackers offer many functions that keep an eye on our physical shape but sensors can do other things as well such as geolocating lost keys at the bottom of bags or tracking our suitcases as we travel.
Connected objects and the home
Automation services embedded into homes facilitate the remote management of many devices such as lighting. We can program the opening and closing of roller shutters as well as set heating hours from our smartphone or tablet. We quickly appreciate the effect these useful features have on energy bills. Connected objects also contribute to home security. Cameras can now differentiate between human and animal presences and you will receive a notification directly to your smartphone if an unexpected presence is detected.
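A home-automation schedule like the one described above boils down to time-window rules evaluated against the clock. The sketch below is a hypothetical illustration (the device names and hours are invented) and not the API of any real product:

```python
from datetime import time

# Hypothetical schedule: shutters open 07:30-20:00,
# heating on 06:00-09:00 and 17:00-22:00.
SHUTTER_OPEN = (time(7, 30), time(20, 0))
HEATING_ON = [(time(6, 0), time(9, 0)), (time(17, 0), time(22, 0))]

def in_window(now, start, end):
    return start <= now <= end

def device_states(now):
    """Return the desired state of each device at clock time `now`."""
    return {
        "shutters": "open" if in_window(now, *SHUTTER_OPEN) else "closed",
        "heating": "on" if any(in_window(now, s, e) for s, e in HEATING_ON) else "off",
    }

print(device_states(time(8, 0)))   # morning: shutters open, heating on
print(device_states(time(23, 0)))  # night: shutters closed, heating off
```

In a real system, the same rules would simply be pushed to the devices from the smartphone app.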
Healthcare and connected objects
Health care is also at the heart of the connected revolution, and everyday objects simplify our relationship with health through tailor-made monitoring. Yesterday, trackers simply watched our weight; today they can evaluate our heart rate, and new toothbrushes can even assess how well we brush our teeth, reducing or eliminating the risk of cavities. Like the connected thermometer, all of these devices provide accurate measurements in real time that are then recorded in dedicated applications. In situations of disability or illness, medical professionals can easily follow our health condition and care from a distance, saving their time and ours.
Bringing entertainment to you
Entertainment delivered over the Internet, without even the need to open a computer, is no longer the stuff of science fiction. Virtual-reality headsets let us join networks and play, while objects around the home become broadcast channels on which you can receive SMS messages, listen to the radio, or even watch episodes of the latest TV series that's got everybody talking.
The Internet of Things
We can see then that the Internet of Things or IoT offers the real advantage of data exchange in the provision of new services. The result? These developments facilitate the management of our shopping and other daily chores allowing us more time for our family life and leisure activities. Washing machines can be operated remotely with an application and the stock of food in our fridges monitored and managed, reordering supplies as necessary. Appliances that track our food stocks can even suggest recipes according to what's in our cupboards. A button placed on the front of any appliance allows it to be cleaned with just a single click while, as for our coffee machine, it knows exactly how we like our coffee! What's next?
For inquiries about our products or our services, please contact us.
Sodium Chloride (NaCl) is widely used in biochemistry and molecular biology applications. It is considered an essential nutrient. It is used in a wide variety of biochemical applications including creating density gradients, regulating biological functions and as a diluent to increase ionic strength in buffers or culture media. NaCl is a component of phosphate buffered saline and SSC buffer. It has also been used for precipitation of DNA from SDS-containing samples and in the removal of small nucleic acid fragments from plasmid DNA preparations.
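As an illustration of the buffer arithmetic implied above, the sketch below computes the mass of NaCl needed for a solution of given molarity and the ionic strength that a fully dissociated 1:1 salt contributes (I = 1/2 Σ cᵢzᵢ², which for NaCl equals its molar concentration). The 137 mM figure used in the example is the NaCl concentration of a common PBS formulation:

```python
NACL_MOLAR_MASS = 58.44  # g/mol

def grams_nacl(conc_molar, volume_l):
    """Grams of NaCl needed for `volume_l` litres at `conc_molar` mol/L."""
    return conc_molar * volume_l * NACL_MOLAR_MASS

def ionic_strength_nacl(conc_molar):
    """Ionic strength I = 1/2 * sum(c_i * z_i**2).

    For a fully dissociated 1:1 salt (Na+ and Cl-, each with |z| = 1)
    this reduces to the molar concentration itself.
    """
    return 0.5 * (conc_molar * 1**2 + conc_molar * 1**2)

# NaCl in a standard PBS recipe is about 137 mM:
print(round(grams_nacl(0.137, 1.0), 2))  # grams per litre (~8.01)
print(ionic_strength_nacl(0.137))        # contribution to ionic strength, in M
```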
May be harmful if swallowed.
Call a POISON CENTER/doctor if you feel unwell.
Available Mbytes stands for free unallocated RAM and displays the amount of physical memory, in MB, available to processes running on the computer.
This counter only displays the last value and is not an average.
If the value is less than 20-25 percent of installed RAM, it is an indication of insufficient memory.
Less than 100 MB is an indication that the system is very starved for memory and paging out.
Fluctuations of 100 MB or more can indicate that someone is logged in remotely to the server.
Pages/sec is the number of pages read from the disk or written to the disk to resolve memory references to pages that were not in memory at the time of the reference.
1. This is the sum of two counters: Pages Input/sec and Pages Output/sec.
2. The threshold is normally 20 pages/sec, although you should investigate activity on the server before concluding that paging is the problem.
3. Spikes in pages/sec are normal and can be caused by backups, large files or data being written to disk, and activity just after a reboot.
4. SQL Server should be configured to manage memory dynamically (the "Dynamically configure SQL Server memory" option), and the "Maximum Memory" setting should be set as high as possible while leaving room for the OS. Ideally, SQL Server should also be the only application on the server.
5. High Available MBytes and low paging file % usage together with high pages/sec may not indicate a problem; they may merely indicate that the system is reading a memory-mapped file sequentially.
6. Also investigate Page Faults/sec, the cumulative sum of hard and soft page faults since the system rebooted. This counter can be hard to interpret because it is cumulative and may grow very large, but if multiple programs share the computer with SQL Server, you may be able to see which program is causing the paging by looking at each program's page faults per second.
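The Available MBytes rules of thumb above can be encoded as a small check. This is a sketch of the thresholds described in this section (the cutoffs are rules of thumb, not hard limits); on a live Windows box you would feed it values read from the Memory performance object:

```python
def memory_status(available_mb, installed_mb):
    """Classify Available MBytes using the rules of thumb above.

    Thresholds (from the text): below ~100 MB the system is starved
    and likely paging out; below 20-25% of installed RAM, memory is
    insufficient. 20% is used here as the cutoff for that range.
    """
    if available_mb < 100:
        return "starved: very low memory, likely paging out"
    if available_mb < 0.20 * installed_mb:
        return "insufficient: less than ~20% of installed RAM free"
    return "ok"

# e.g. a server with 32 GB of installed RAM:
print(memory_status(80, 32 * 1024))     # starved
print(memory_status(4096, 32 * 1024))   # insufficient (4 GB is 12.5% of 32 GB)
print(memory_status(12288, 32 * 1024))  # ok (12 GB is 37.5% of 32 GB)
```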
Unsafe driving and certain road conditions place you at risk of collision. According to the National Highway Traffic Safety Administration (NHTSA), car accidents happen every minute of every day. Collisions can be fatal: in the U.S., they claim an average of 37,000 lives each year.
While it doesn’t always lead to fatalities, collisions are never without consequence. Some have to deal with minor injuries and medical bills. Others get away unscathed but are left with a vehicle that needs repairs.
3 Driving Tips for Preventing Car Collisions
Thankfully, there are ways for you to avoid car collisions. Just follow the tips below and you’d be able to save money while keeping everyone safe from harm.
Tip #1: Always be a defensive driver
This should be a no-brainer, considering you have to take a course on defensive driving before qualifying for a license. In essence, being a defensive driver means taking road safety into your own hands.
Here’s are some techniques for defensive driving:
- Stay alert at all times. Do not drive if you’re sleepy, under the influence, or if you are taking medications that impair judgment and reaction time.
- Be on the lookout for potential dangers. Use your car mirrors to anticipate what goes in your surroundings from every angle and not just the road ahead.
- Drive at a safe speed. Staying within the recommended speed limits gives you ample time to react in case of accidents and unexpected events.
- Maintain a safe distance from other vehicles. This is particularly true when driving behind large trucks and vehicles since they can limit your view of the road and the opposite lane.
- Never let down your guard. Even if the traffic light is green, slow down and observe all directions before crossing the road.
Tip #2: Focus on driving
Statistics show that it takes as little as 3 seconds of distracted driving for a collision to occur. Distracted drivers claimed the lives of 3,450 people in the United States in 2016 and injured as many as 391,000 people a year prior.
There are several reasons why drivers get distracted. The most common being phone use (i.e. texting or calling while driving) and letting the mind wander off. Other causes of distracted driving include having to attend to children and pets. But even seemingly harmless acts like grooming your hair and listening to conversation or music are enough to distract you.
Do yourself a favor and keep your eyes on the road.
Tip #3: Be extra careful at night
Nearly half of fatal car collisions happen in the evening. More often than not, they are caused by limited lighting conditions, driving under the influence, and sleepy drivers. Given these numbers, you have to be more careful when driving at night. This is the perfect time to employ tactics in defensive driving.
Bonus Tip: Keep your vehicle in good driving condition
Regular car maintenance is another way to avoid collisions, and it provides the added benefit of reducing the possibility of injury if a collision does occur. Loose brakes, sudden stalls, and a broken windshield are three safety hazards associated with an ill-maintained vehicle. By keeping your vehicle functioning properly, an auto body shop can minimize your risk of collision.
Smart contracts serve the same purpose regular ones do. They set and regulate the conditions of an agreement in digital form. They possess several important qualities that differ from what a regular contract can guarantee.
Smart contracts are autonomous, which means they aren't subject to third-party interference, and their structure allows data to be processed securely. They are fast and cost-effective, and, when properly written and audited, they can be error-free as well. But arguably the most important quality is that a smart contract eliminates the need to trust each participant.
A smart contract is a program stored on the blockchain that runs when its preset conditions are met. In this article, we review the top 10 smart contract use cases.
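The core idea, code that enforces its own terms once preset conditions are met with no trusted intermediary, can be modeled in a few lines. The toy class below is an illustration only; real smart contracts are compiled and deployed on-chain (for example, in Solidity), not run as ordinary programs:

```python
class TwoPartyEscrow:
    """Toy model of a smart contract: funds are released automatically
    once both parties have signalled approval. Illustrative only."""

    def __init__(self, amount):
        self.amount = amount
        self.approvals = set()
        self.released = False

    def approve(self, party):
        self.approvals.add(party)
        self._maybe_release()

    def _maybe_release(self):
        # The "contract" enforces its own terms: no third party decides.
        if {"buyer", "seller"} <= self.approvals:
            self.released = True

escrow = TwoPartyEscrow(amount=100)
escrow.approve("buyer")
print(escrow.released)   # False: the condition is not yet met
escrow.approve("seller")
print(escrow.released)   # True: both approvals given, funds release
```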
There is a reason smart contracts are used for exchanging cryptocurrencies and trading. In fact, dApps, or decentralized applications, are used to conduct all types of financial operations similar to those of the regular offline financial system, from trading currencies to lending and borrowing money. Smart contracts provide the necessary services, and DeFi projects simplify financial relationships, allowing anyone to enter the world of trading and earning interest.
Paying for Services
Aside from regular transactions, dApps allow users to pay for services the same way they can purchase items or exchange funds. Depending on the specifics of the contract, various conditions can be set, while the exchange still happens automatically. Binance Smart Chain provides multiple examples of smart contracts being used for ordering services.
NFTs in Gaming
Lately, NFTs have become a crucial and one of the most popular aspects of the decentralized market. They represent digital ownership of certain items, meaning unique attributes and objects for a character or player. Powered by smart contracts, they are considered to be both convenient and easier to comprehend for users new to blockchain. Aside from being a foundation for non-fungible tokens, smart contracts provide a secure way to buy and sell unique items with a guarantee of a successful transaction. Items can be not just purchased or sold but moved to other games.
Smart contracts are contracts, after all. And aside from simply repeating the functionality of binding agreements, they offer potential to the legal industry. The main feature, in this case, is the strict structure that demands fulfilling obligations. Some believe that eventually, smart contracts might even eliminate the need for involving actual lawyers.
Tokenization is yet another practice that has successfully made its way into mainstream markets. What are smart contracts used for in real estate? There have already been examples of tokenizing property and developing special platforms like RealT and SolidBlock that combine real estate with blockchain technologies. According to the proponents of this technology, smart contract implementation can reduce fees that are usually applied to any real estate transaction.
Decentralized autonomous organizations, or DAOs, function similarly to corporations with one fundamental difference: they are member-owned and have no centralized leadership. Every aspect of business relations is detailed with the help of smart contracts, which is once again where their trustless structure comes in handy. Since 2017, some jurisdictions have allowed businesses to be incorporated and managed via blockchain.
Smart contracts can be a convenient solution for automating operations and providing transparency for various government services. While this is not a widespread practice just yet, it can potentially improve efficiency in various procedures. Real-world examples of smart contracts in government services may include speedy tax collection, vaccine tracking, and other relevant issues.
Creating Public and Private Networks
Since blockchain can be used for public or private access, it allows companies to create networks where the users, be it clients or partners, would provide data through the ledger, use the network and participate in the processes. While public networks provide universal access, private can limit it while data remains secure. Naturally, they would rely on smart contracts to provide functionality.
Proof of Authenticity
One of the ways to implement smart contracts in commerce is using blockchain for proof of authenticity for items. When famous brands develop clothing items or collectibles, blockchains are sometimes used for tracking them. When the item gets sold or switches ownership, it’s always possible to track it. This can simplify counterfeit detection.
Research and Development
Smart contracts and blockchain technology may find their most important application in research. Storing and accessing data while exchanging information with partners or other parties can be performed with ease using smart contracts. One example is the development and advancement of the Internet of Things: the security and transparency needs of IoT devices can be met by implementing smart contracts in them.
With every new step in tech, smart contracts find more and more applications. These are only a few examples of how this technology can improve many aspects of our everyday lives. Of course, smart contracts have to function properly, which is why it is vital to audit them before putting them to use. 0xGuard is here to address any security needs related to smart contracts.
Extra research for Season 3 – Episode 13 – The Angels
Surely some of the places with the saddest spirits in Canada are the former residential schools, where approximately 150,000 First Nations and Métis children lost their cultures, identities, and even their lives at the hands of government and church employees whose aim was to “kill the Indian in the child.”
The Other Side encountered some of these spirits in the investigation of the former Muskowekwan Indian Residential School in the village of Lestock, Saskatchewan, not far from the Muskowekwan First Nation. When the original school became too small for the number of students, it was replaced by this building, which opened in 1932. Cree, Saulteaux, Métis and other Indigenous children were forcibly taken from their families and made to attend the school, where they were taught by Catholic nuns and priests.
Many of the students endured physical, sexual and emotional abuse, and some of them died at the school. The spirits of some of these children, and perhaps one of the abusers, still haunt the abandoned building, which is known by former students and people who worked in the building to be a hotbed of paranormal activity.
Many of the estimated 6000 students who died in Canada’s residential schools were buried in unmarked graves near the schools rather than being returned to their families. At the site of the Muskowekwan School, which closed in 1981, at least 19 unmarked graves were discovered in 1992.
According to federal records, some of the burials date back to the early 1900s and the original school. As time passes, the Muskowekwan school building has also become a place for healing. Survivors and their descendants have returned to face its dark past and hold ceremonies to move on, much like the sacred fire and charcoal ceremonies that Tom performed to help the little angels and to protect everyone involved in the investigation.
It’s possible that other residential schools in Canada, or the sites where they once stood, are also haunted by the spirits of the young victims of horrific abuse at the hands of government and church employees. Paranormal stories have emerged not just from the Muskowekwan Indian Residential School, but also from former residential schools in Ontario and Manitoba.
Generations of Canada’s Indigenous people are still haunted by the impact of the residential schools, but hopefully the victims who haunt the schools themselves can finally find peace on the other side.
— Sarah MacDonald | <urn:uuid:7884b9bf-9b6e-4d23-828a-9b13f6547681> | CC-MAIN-2020-05 | http://theothersidetv.ca/extra-research-season-3-episode-13-angels/ | s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251796127.92/warc/CC-MAIN-20200129102701-20200129132701-00223.warc.gz | en | 0.985324 | 508 | 2.703125 | 3 |
Key in Elementary School
Helping teachers provide students with a foundation for good decision making and understanding the necessity of investing in themselves. Viewing history through an “economic lens” helps students understand not only what happened but why.
What’s Important to Understand?
- SCARCITY (limited resources) and that people make choices because they can’t have everything they want
- All choices have an OPPORTUNITY COST (what is given up when making a choice)
- RESOURCES are needed to make goods and provide services: natural, capital and particularly human resources
- Because people and regions cannot produce everything they want, they SPECIALIZE in producing some goods and services and TRADE for the rest
- People WORK to earn MONEY to buy goods & services and SAVE to purchase things in the future | <urn:uuid:48242349-dd1b-42a4-afac-3c9b8104559d> | CC-MAIN-2018-13 | http://vcee.org/elementary-school/ | s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257647660.83/warc/CC-MAIN-20180321141313-20180321161313-00405.warc.gz | en | 0.937382 | 174 | 4.1875 | 4 |
Join Laurie Burruss for an in-depth discussion in this video, Objective of this course, part of Creating a CSS Style Guide: Hands-On Training.
As you work through this project, you will learn how to create an HTML web page and an external CSS style sheet, and how to plan and manage a website, its folder structure, and its assets: images, style sheets, and source materials. You will identify the visual interface elements, implement the best practice of separating structure (HTML) from presentation (CSS styles), and learn how to communicate internally with the team using text proposals and code comments.
Generate color palettes for the web page based on the header image or the images inserted into the HTML document. Practice and deliver consistent style and design objectives for your client's site while, at the same time, experimenting with and testing the limits of your concepts. Finally and ultimately, create an effective communication tool for the design team and the client. So let's go out and look at a successful style guide that Monash University in Australia has created. Monash University has a very large website that serves a huge target audience. Many people contribute to this website: the web design and development team, faculty members, and administrators.
So they have designed a style guide that everyone can share, so that everyone knows the rules for the consistent display of images, branding, and information. What we are looking at right now is an example of their style guide for design elements. As I scroll down the page, you can see sections such as layout, composition, typography, color palettes, usage of images, photography, logo, and accessibility: all kinds of issues that might need to be addressed for a consistent website and consistent branding for a university.
They have also taken the time to develop a style sheet page that shows both what an element would look like and what kind of code you need to support that look. This is a very important issue that should be addressed in a web style guide: not only what it looks like, but how you achieve it, what lies beneath. I really like this page. Let me scroll down to the header information. As you can see, this is a clear visual instrument: on the left side we see what the user would see, and on the right side we see what the code or the styling should be.
This is a great way to inform your team. It makes the rules visual, not only for the team but for the client as well, so everyone can use them and be consistent in the way they deploy the headers and the style. They also do this for the images. It looks slightly different, but again, they are consistent in the way they display it: this is what the user will see, and this is what is done to achieve it. It's a great tool, an effective tool, and easy to use. So what's the importance of creating a style guide for you? It will illustrate not only your understanding of Dreamweaver, but it also becomes a printed and online record of the visual interface elements of a website. This allows a real-world team to maintain consistent design elements, update visual interface guidelines, and establish the guide as a reference tool, a source of contact information, and a help reference.
In sum, it becomes your team management tool, and this portfolio piece effectively demonstrates that you can work well with a team, that you understand project management, and that you understand the principles of web design.
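One step mentioned in the course, generating a color palette from the header image, can be prototyped outside Dreamweaver. The rough Python sketch below uses only the standard library; the hard-coded pixel list is a stand-in for data you would normally sample from the actual image with a library such as Pillow, and the bucket size is an arbitrary choice.

```python
# Derive a small web color palette by snapping pixels to a coarse
# RGB grid and keeping the most common colors.
from collections import Counter

def palette(pixels, n=3, bucket=64):
    """Return the n most common colors as CSS hex strings."""
    def snap(color):
        # Round each channel to the center of its bucket (capped at 255).
        return tuple(min(255, (v // bucket) * bucket + bucket // 2)
                     for v in color)
    counts = Counter(snap(p) for p in pixels)
    return ["#%02x%02x%02x" % color for color, _ in counts.most_common(n)]

# Stand-in for sampled header-image pixels: mostly off-white,
# some navy, a little red.
pixels = ([(250, 250, 245)] * 60 +
          [(30, 60, 130)] * 25 +
          [(200, 40, 40)] * 15)
print(palette(pixels))  # ['#e0e0e0', '#2020a0', '#e02020']
```

The resulting hex values can be pasted straight into the style guide's CSS as the site's base colors.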
Download the exercise files from the Exercise Files tab.
- Planning a site from a blank file
- Creating and editing a style guide with just HTML
- Using the Property Inspector for text markup
- Inserting images, tables, and footers for a custom look
- Creating and editing an external CSS style sheet
- Building a custom color palette for a site
- Testing web pages in various browsers
- Styling tips for professional sites | <urn:uuid:2871afd7-2288-4e0a-a858-c9f384cf9cc1> | CC-MAIN-2017-04 | https://www.lynda.com/CSS-tutorials/Objective-course/758/46982-4.html | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282926.64/warc/CC-MAIN-20170116095122-00267-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.928588 | 796 | 3.3125 | 3 |
Head Start, the federally funded early childhood education and social service program, is poorly funded, and quality and access are uneven across the states, according to a sweeping new analysis from the National Institute for Early Education Research.
The picture is particularly bleak in California. The report finds that program quality falls below standard benchmarks, that teacher pay is among the lowest nationwide compared to teachers with similar qualifications in elementary schools, and that only a fraction of needy children are actually served.
Statewide, 103,447 pregnant women and children under five receive Head Start services, yet this represents just 8 percent of the state’s total low-income population in this age range.
Based on the state's population and its large percentage of impoverished families, California receives just over $1 billion to provide these services, an amount far greater than that of any other state. Yet, said report author Steve Barnett of NIEER, the way money is allocated to Head Start providers has not changed in decades.
“The allocations were essentially frozen in time across states at 1981 levels," Barnett said. "So what California gets is based on what California looked like in 1981, not based on what California looks like 35 years later.”
Head Start regulations do not stipulate how much needs to be spent per child, and that amount varies from state to state. California, the report finds, spends less per child than what is spent on average by other states.
In 2014-2015, Early Head Start funding per child in California was $11,222, below the national average of $12,575. For three and four year-olds, funding per child was $8,028, just below the national average of $8,038 when adjusted for cost of living.
“Lower per pupil funding is partly historical and partly because individual programs choose to serve more kids,” Barnett said. “The trade off is they spend less per kid which compromises quality and/or how many hours and days each kid gets.”
California’s programs came in slightly below national quality standards. Programs use a classroom observation tool to evaluate progress. Using those metrics, California’s Head Start centers rated a 2.8 – below the 3.0 level that indicates a quality program.
Barnett said a big part of the problem is the poor pay for teachers. Head Start teachers in California make significantly less per year than public school teachers with the same credentials.
“With a $37,000 salary gap, you have a big problem hiring qualified teachers, retaining qualified teachers,” Barnett said.
The report also finds that the state’s Head Start teachers have fewer qualifications overall. Just 65 percent have a Bachelor's degree or higher, compared to 73 percent nationwide.
Teachers without a B.A. are paid less, Barnett said. It's a “chicken and egg” game, he said, because the pay is so low for having a B.A. that there is little incentive for teachers to invest in their own education and then stay teaching Head Start.
On a positive note, the state does fairly well in the area of teacher ethnicity matching that of students – two thirds of both teachers and students are Latino.
The biggest issue, the report finds, is that Head Start needs to reach more children in need. NIEER estimates it would cost an additional $20 billion to serve half of all low-income three- and four-year-olds nationwide.
In California, for Head Start to enroll half of all eligible 3- and 4-year-olds, it would cost almost $2.2 billion more.
It’s that kind of price tag, for just one state, that worries early childhood expert Michael López.
While Head Start might not be serving the majority of low-income children in California, those children may be getting services from other programs, said López, an early childhood principal at the think tank Abt Associates.
“In California, the other options that weren’t included in the NIEER report would include subsidized child care, Temporary Assistance for Needy Families (TANF) child care, and Title 1 funded pre-k,” López said.
He worries that the scope of what the NIEER report suggests, and the price tag, may lead some lawmakers to conclude that the program is not worth federal funds at all.
“It’s easier to look at the much smaller percent [of children] served and conclude that we shouldn’t continue to invest in such programs,” López said.
Report author Barnett is careful to point out that his report should not be used to "conclude ... that Head Start doesn’t work."
In fact, one of the program's biggest goals is being achieved, he said.
"Supports for social and emotional development are very high, and that’s really important for children’s later success," Barnett said.
Michael López believes the solution is more collaboration among the various early childhood providers that serve low-income children. “There needs to be more attention paid to identifying the current gaps in different communities and targeting expansion and quality improvements in those communities that have the greatest needs,” López said.
Christopher Maricle, executive director of the California Head Start Association, said programs could increase quality if funding levels increased.
“Caring for children ages birth to 5 requires significant training and preparation, which drives up wages,” he said. If funding increases so teachers can be paid higher wages, Maricle said, “Head Start programs can better attract and retain qualified staff.”
The NIEER report recommends a national commission of bipartisan policymakers, researchers, educators, and parents to look for solutions. | <urn:uuid:045d1cbe-5cb8-4ae5-81b7-4673eb9e5fde> | CC-MAIN-2019-35 | https://www.scpr.org/news/2016/12/14/67192/head-start-is-poorly-funded-and-california-s-progr/ | s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027313617.6/warc/CC-MAIN-20190818042813-20190818064813-00342.warc.gz | en | 0.958265 | 1,192 | 2.765625 | 3 |
Automatic Generation of Photo-Realistic Human Avatars
Stephen Fung
March 13, 2007

A new method for the computer generation and animation of 3D human characters has been developed at the Max Planck Institute for Computer Science in Saarbrücken, Germany. Instead of designing a virtual skeleton first and then extrapolating the surface deformations caused by movement, direct 3D laser scans of a real person are used and subsequently animated. The next time you play a computer game, your avatar could be a realistic animation of the “real” you. A 3D animation technique that could take the hard work out of acting has been developed by German researchers. It allows a high-resolution 3D scan of one person to be pasted onto another person’s movements.

Source: New Scientist
This concept of the ‘greatest good for the greatest number’ is known as utilitarianism. Although Jeremy Bentham’s writing brought recognition to modern utilitarianism, the term was first used by Beccaria, whose work later influenced Bentham’s.
Beccaria argued that there can be no confusion or personal discretion in the sentencing process. Originally published anonymously in 1764, Dei Delitti e Delle Pene (translated into English as An Essay on Crimes and Punishments) was the first systematic study of the principles of crime and punishment; a later edition carried an additional commentary attributed to Voltaire. The essay was widely distributed and read, which brought Beccaria widespread acclaim. Unsatisfied and wanting to “challenge the existing structure of eighteenth century Italian society”, Beccaria “based many of his ideas…[on] the philosophy of the greatest good for the greatest number” (Martin et al., 1990). The book had a profound influence on the development of criminal law in Europe and the United States: infused with the spirit of the Enlightenment, its advocacy of crime prevention and the abolition of torture and capital punishment marked a significant advance in criminological thought, which had changed little since the Middle Ages. Beccaria also sorted offenses into categories; for each of these categories, he provides examples of the crimes that fall under it and the punishment that fits each crime, which will be discussed later. After the identification of crimes, there needed to be a way to measure what was and was not considered a crime.
If you closely read the 20-page draft decision on the Clean Development Mechanism prepared at COP16 in Cancun, you will see a tiny reference to the possibility of including “city-wide programs”. Those few words represent an enormous effort, championed mainly by Amman, Jordan, with support from the World Bank, the European Union, UN-HABITAT, C40 Cities, ICLEI, United Cities and Local Governments (UCLG) and others.
There is reason to be excited. Cities are the every-day face of civilization, the rough and tumble, action oriented arm of government: The ones you call when you need to get things done. And in Cancun they got the call.
Making sense of the COP, the ‘Conference of the Parties’ (cities would call it a meeting, ‘fiesta’ if you added beer and a beach) is a full time job. Thousands of people jet across the planet arguing over commas and clauses while climate change waits for true political will. But that political will does not come from countries at a COP. No, first and foremost it needs to be understood, nurtured, and acted-upon in cities. Countries get their marching orders mainly from urban residents, not the other way round.
Unfortunately, it hasn’t been working that way. The Clean Development Mechanism (CDM) under the Kyoto Protocol to the UN Framework Convention on Climate Change (UNFCCC) was intended to be an innovative market-based approach to combine GHG mitigation with sustainable development objectives. Between March 2005 and October 2010, more than 2400 projects were registered with the CDM. But only 203 were in cities (80% of those associated with landfill gas). This makes no sense: activities in urban areas cause more than 75% of the world’s GHG emissions, and yet less than 10% of the CDM projects were in cities. So far the creativity, pragmatism and impact of cities have not been effectively called upon to mitigate GHG emissions. That is now changing.
Recognizing the limitations placed on cities under the CDM rules, many people worked for more than five years to bring about changes in the CDM, represented by just those words in the draft decision mentioned above, to help cities ‘get in the game’. The Cancun summary document was seeded with that potential.
‘City-wide programmes’ will enable cities to aggregate lots of smaller activities across the city and measure them against a standard baseline. These ‘programmes of activities’, added together, should yield impressive results; city-wide building codes, traffic management, ‘smart’ power meters, street lighting, are only some of the possibilities.
Just a couple of weeks before COP16 in Cancun, Mexico City hosted cities, and agencies working with them, from around the world to discuss the details of how cities can better participate in climate change mitigation and adaptation. The ‘Mexico City Pact’ resulted; more than 125 cities have already signed.
Similarly, half a planet away, the City of Tokyo recently established the world’s first city-based Emissions Trading System (ETS). Tokyo’s credibility, capacity, and potential to drive GHG reductions is another welcome addition. This initiative has the potential to grow quickly and be replicated broadly. Cities like Rio de Janeiro and Shanghai are already looking at the details.
City-wide carbon finance, as proposed by Amman, will likely never generate more than a percent or two of a city’s total budget, nor will a local cap-and-trade system as proposed by Tokyo provide all of the sweeping mitigation efforts needed. But the leadership and innovation shown by these cities gives hope to negotiators and policy-makers.
One of the great hopes of Cancun is that cities can now be more fully in the game. Cities will be there during the lead-up to COP17 in Durban, but mostly they will see a continuation and intensification of discussions, debates, and flat-out brawls as they wrestle with day-to-day adaptation in a changing climate, all while welcoming three million new residents every week and growing in an increasingly carbon-constrained world. Cities are happy to have planted the seeds for a more innovative approach to dealing with climate change, but they won’t have the luxury of standing around watching the results. They will be busy making it happen.
The way that people take care of their dirt in Mexico, sweeping it and tossing water on it to "keep the dust down" results in what is pretty much a "cob sidewalk". What they don't do is to add lime to the water that they toss every morning as they are tidying up their patch of dirt. The more calcium that you add to the dirt, the quicker it turns into cob, and the harder it gets. In places with a lot of dirt roads, it's a common practice to scratch in some lime to harden up the soil. Do it often enough and you end up with a surface layer of caliche, which can be concrete like in its properties.
Of course, during the rainy season all bets are off. If you get long and steady rains, the lime can slowly dissolve and leach out and run off someplace lower in elevation. Which means at the start of the dry season, you may have to do the lime trick all over again. It would be kind of the same story for snowy regions; all would be fine while the ground was frozen, but during the spring thaw, if the ground stays wet enough for the calcium to leach out, then all you are going to be left with is soft mud instead of hardened cob.
In drug addiction, the transition from casual drug use to dependence has been linked to a shift away from positive reinforcement and towards negative reinforcement. That is, drugs ultimately are relied on to prevent or relieve negative states that otherwise result from abstinence (e.g., withdrawal) or from adverse environmental circumstances (e.g., stress). Recent work has suggested that this “dark side” shift also is key in the development of food addiction. Initially, palatable food consumption has both positive reinforcing, pleasurable effects and negative reinforcing, “comforting” effects that can acutely normalize organism responses to stress. Repeated, intermittent intake of palatable food may instead amplify brain stress circuitry and downregulate brain reward pathways such that continued intake becomes obligatory to prevent negative emotional states via negative reinforcement. Stress, anxiety and depressed mood have shown high comorbidity with and the potential to trigger bouts of addiction-like eating behavior in humans. Animal models indicate that repeated, intermittent access to palatable foods can lead to emotional and somatic signs of withdrawal when the food is no longer available, tolerance and dampening of brain reward circuitry, compulsive seeking of palatable food despite potentially aversive consequences, and relapse to palatable food-seeking in response to anxiogenic-like stimuli. The neurocircuitry identified to date in the “dark” side of food addiction qualitatively resembles that associated with drug and alcohol dependence. The present review summarizes Bart Hoebel’s groundbreaking conceptual and empirical contributions to understanding the role of the “dark side” in food addiction along with related work of those that have followed him.
Drug addiction is a chronic, relapsing disorder with three distinct phases: a binge intoxication phase driven and characterized by the rewarding properties of the drug, a withdrawal phase accompanied by a negative emotional state as the acute rewarding drug properties wear off, and a preoccupation and anticipation phase that precedes renewed drug intake. Dr. Bartley Hoebel is among the very earliest pioneers who hypothesized that intake of sugar, and perhaps of other palatable foods, also could become governed by these three phases of addiction. His leadership has been instrumental not only in bridging the fields of addiction and feeding behavior through his experimental work, but also in his efforts to increase awareness of and legitimize what once was an unpopular and even controversial hypothesis within the scientific community – that one could become “food addicted.” Now, food addiction symposiums, such as the Food & Addiction Conference on Eating and Dependence hosted by the Rudd Center for Food Policy and Obesity at Yale, the “Food Addiction: Fact or Fiction” session at the 2008 Experimental Biology meeting in San Diego, and the Obesity and Food Addiction Summit of 2009, regularly bring together scientists, physicians, public policy makers, and health advocates from diverse backgrounds. Further, Dr. Hoebel’s groundbreaking work has helped spur the creation of institutes devoted specifically to advancing food addiction research, including the Food Addiction Institute and the Refined Food Addiction Research Foundation.
As drug users progress from casual use to addiction, the factors motivating drug use are hypothesized to shift in importance. While initial use is motivated by the hedonically rewarding properties of the drug, use in addicts is hypothesized to be motivated less by positive reinforcement (e.g., a euphoric high) and more by negative reinforcement: to prevent or relieve a negative emotional state that arises from abstinence (e.g., drug withdrawal) or from adverse experience of the environment (e.g., stress). At the neurobiological level, this shift corresponds to a downregulation of brain reward systems that subserve appetitive responses to the drug and a concurrent amplification of brain stress or “antireward” systems. In this framework, the shift to the “dark side” of food addiction may similarly be conceptualized as a key transition in the addiction process. As individuals progress towards compulsive intake of palatable foods, the acute rewarding value of food items may hold less importance for motivating additional intake than does preventing or ameliorating negative states (e.g., anxiety, depression, irritability, and possibly even somatic withdrawal symptoms) that are experienced when such preferred foods are not available or when environments are adverse.
2. Evidence for the “dark side” from human studies
To determine whether an addiction-like “dark side” motivates intake of palatable food, a useful starting point is to identify the human population(s) whose eating habits most closely resemble addictive behaviors. Although obesity and addiction-like eating behaviors likely overlap, “food addiction” is unlikely to explain all cases of human obesity, and some normal-weight individuals likely engage in addiction-like eating patterns. No consensus diagnostic criteria for “food addiction” currently exist [2, 3]. Recently, however, the Yale Food Addiction Scale (YFAS) has been introduced as an index of addictive-like eating behaviors that mimic the diagnostic criteria for substance dependence in the DSM-IV-TR. The YFAS measures the extent to which (a) individuals overeat specific foods despite repeated attempts to limit their consumption, (b) their eating behaviors interfere with social and professional activities, and (c) withdrawal symptoms emerge when abstaining from specific foods. Preliminary application of these criteria suggests that the compulsive, uncontrollable intake of greater-than-expected amounts of food seen in binge eating disorder maps most neatly onto the current diagnostic criteria for substance dependence. Accordingly, scores on the YFAS predicted binge eating behavior and emotional eating but did not correlate with body mass index (BMI) in women participating in a weight maintenance trial who reported no eating disorder. These results suggest that the “dark side” of food addiction, as operationalized by the YFAS, might be more fruitfully studied in individuals with binge eating than in randomly selected obese individuals.
2.1 Psychiatric comorbidity in binge eating
Consistent with a possible role for a “dark side” in food addiction, binge eaters have greater rates of psychiatric diagnoses involving negative emotional states compared to the general population. For example, adults and adolescents with bulimia nervosa or binge eating disorder show a greater prevalence of major depression, bipolar disorder, anxiety disorders, and alcohol or drug abuse than do individuals without an eating disorder [6–8]. Rates of major depression are also elevated in the obese, but the association of binge eating with increased depression scores remains even in weight-matched comparisons of overweight and obese individuals. Extremely high rates of suicidal ideation in binge eaters attest to the severity of mood disturbance in this population. Over half of teenage bulimics and one-third of those with binge eating disorder report suicidal ideation, and a third of teenage bulimics report attempting suicide. The direction of causality between binge eating and major depression is not firmly established and may be reciprocal [10–12]. Such psychiatric comorbidity is associated with poor long-term treatment outcome and a greater frequency of binge eating. Conversely, many antidepressants, such as SSRIs or tricyclics, can reduce the frequency and severity of binge eating symptoms.
2.2 Negative emotional states increase palatable food intake in vulnerable populations
The prevalence and severity of depression and anxiety in binge eaters suggest the hypothesis that negative emotional states can trigger relapse to bingeing behavior. Indeed, self-reported negative emotional traits of depression, low self-esteem, and neuroticism are associated with binge eating in both men and women. During negative emotional states and situations, normal-weight and underweight individuals report consuming less food than during positive emotional states and situations. In contrast, this undereating in response to negative states is not observed in overweight individuals, who report eating significantly more during negative states than do other groups. Consistent with a role for negative emotional states in driving binge behavior, mood scores in bulimics are lower immediately prior to a binge than on days when no binges occur.
Another construct that implicates stress and negative emotions as triggers of overeating is dietary restraint. Attempts to control body weight (e.g., via dieting, exercise, appetite suppressants, or laxatives) are paradoxically associated with increased weight gain in female adolescents; dietary restriction similarly is associated with long-term weight gain in female adults. A possible explanation for these apparent contradictions is the consistent finding that restrained eaters overeat in response to a variety of stressful situations. For example, anticipation of a social stressor (a public speaking task) increased food intake in restrained eaters while not altering that of unrestrained eaters. Similarly, restrained eaters who reported high subjective stress and negative affect following a series of cognitive tasks showed greater intake after the stressor than did those reporting low levels of subjective stress. Dietary restraint also may have temporally restricted importance in binge eaters because the intent to restrict intake is greater prior to a binge as compared to days on which no binges occur.
Though laboratory mood induction studies may be criticized as not modeling real-world eating practices under natural mood conditions, they also broadly support the “dark side” hypothesis that overeating can be triggered by stressful or negative emotional responses in subsets of individuals. For example, obese binge eaters consumed significantly more chocolate after viewing a sad film in a laboratory setting than following a neutral film. All participants in this study reported mood as one of their triggers to binge eat, with “depression” or “sadness” most often implicated. In non-obese females, those with greater salivary cortisol responses to a battery of social stressors ate more after the stressful experience than did those with lower cortisol responses. Induction of a negative emotional state via autobiographical recall of a sad memory also increased the amount of snack food consumed in a study of non-dieters, and the effect was particularly pronounced in participants who reported greater “emotional eating.” In contrast to the findings reviewed above, and to the behavior of restrained eaters, unrestrained eaters reduced their snack food intake after viewing a sad film [27, 28].
Such negative affect-driven food intake can disrupt body weight maintenance. Weight regain in the 6 months following successful weight loss is associated with eating in response to stressful life events, eating in response to negative mood, and the use of food to regulate mood. Perhaps accordingly, adding cognitive therapy to help manage general mood and coping, and not only eating behavior and diet, can reduce relapse to obesity.
2.3 Influence of palatable food intake on mood and reward function
Eating in response to emotionally negative situations suggests that overeating may be an attempt to self-medicate with “comfort food.” The typical foods consumed during a binge tend to be palatable and energy dense; further, they often are carbohydrate-laden items such as breads, pastas, and sweets. Initially, such carbohydrate-rich foods may have the intended negative reinforcement effect, because they reduce subjective reports of anger and tension and increase calmness within 1–2 hr of consumption. Repeated overconsumption of such palatable foods, however, may produce long-term neuroadaptations in brain reward and stress pathways that ultimately promote depressive or anxious responses when those foods are no longer available or consumed. Consistent with this “dark side” hypothesis, after eating a high-fat diet (41%) for one month, men and women who were switched to a lower-fat (25%), high-carbohydrate diet reported greater anger and hostility during the subsequent month than did subjects who continued eating the high-fat diet. The increased anger may have resulted either from the reduction in dietary fat (or perceived palatability) or from neuroadaptations to increased dietary carbohydrates.
Repeated overconsumption of highly palatable foods may downregulate dopaminergic reward circuitry via mechanisms that mirror those commonly observed in drug addiction: reduced striatal dopamine D2 receptor availability and blunted dopamine release [35, 36]. Indeed, obese individuals show lower striatal availability of the dopamine D2 receptor than do non-obese controls, and this reduction in striatal D2 is correlated directly with BMI [37, 38]. Caudate activation in response to a chocolate milkshake is also reduced in obese relative to lean individuals. This blunted activity is especially pronounced in individuals with the TaqIA A1 polymorphism of the D2 receptor, which is associated with reduced D2 receptor expression. Another polymorphism linked to reduced dopamine function, the 7R allele of the dopamine D4 receptor, has been associated with higher lifetime maximum BMI in bulimics as well as with binge eating behavior in women with seasonal depression. The collective genetic data suggest a predisposition towards weight gain in individuals with low striatal dopaminergic signaling, and it has been hypothesized that such individuals overeat in an attempt to compensate for a perceived reward deficit. Recent data suggest, however, that weight gain (or a correlate of weight gain, perhaps overeating palatable food) downregulates striatal dopamine activity. Women whose BMI increased during a 6-month period showed reduced caudate activation to consumption of a chocolate milkshake compared with women whose BMI remained stable, and the reduction in caudate activation was associated with greater BMI increases. Conversely, gastric bypass increased striatal D2 receptor availability within 6 weeks of bariatric surgery in a small study of severely obese women.
Striatal D2 receptor availability in obese subjects also correlates directly with glucose metabolism in frontal cortical regions that subserve inhibitory control, including dorsolateral prefrontal, orbitofrontal, and anterior cingulate cortices. This relationship suggests the hypothesis that reduced dopaminergic modulation from the striatum may lead to impaired inhibitory control over food intake and thereby increase risk of overeating. Perhaps analogously, a direct correlation between striatal D2 availability and glucose metabolism in dorsolateral and anterior cingulate cortices also has been observed in alcoholics, but not in non-alcoholics or non-obese controls [38, 44].
Consistent with the behavioral differences in the ingestive response to stress reviewed above, eating style also differentiates subpopulations with distinct mesolimbic dopamine system profiles. Non-obese individuals who reported greater “emotional eating” showed reduced baseline D2 receptor availability in the dorsal striatum as compared to non-emotional eaters; those high in dietary restraint had increased D2 binding in the dorsal striatum in response to food stimulation as compared to those low in dietary restraint. Finally, obese binge eaters showed increased D2 receptor binding in the caudate in response to a combination of food stimulation and methylphenidate challenge as compared to obese non-binge eaters [44, 46].
3. Evidence for the “dark side” from animal models of food addiction
The development of animal models was key for validating the concept of food addiction and beginning to characterize its “dark side.” Bart Hoebel’s group has led the way in modeling aspects of food addiction in rodents. While animal models cannot encompass all of the complex social factors that influence eating behavior in humans, they have the advantage of more easily distinguishing between antecedents and consequences of addictive-like eating behavior, establishing tighter dietary control, and allowing for a more detailed examination of the associated molecular mechanisms.
3.1 Induction of withdrawal-like states after cessation of palatable food access
Consistent with the “food addiction” hypothesis pioneered by Hoebel and colleagues, numerous studies in animal models have now observed behavioral and somatic profiles that resemble withdrawal-like states in animals withdrawn from intermittent access to palatable food. For example, Hoebel and colleagues provided evidence that daily bingeing on high sugar solutions (e.g., 25% glucose or 10% sucrose) may lead to endogenous opioid dependence. Rats provided with daily 12-hr access to glucose and chow alternated with 12-hr food deprivation displayed somatic signs associated with opiate withdrawal, including teeth chattering, forepaw tremors, and head shakes, when challenged with the opioid antagonist naloxone. Precipitated withdrawal via naloxone pretreatment also increased anxiety-like behavior in 12-hr daily glucose-cycled animals, as shown by reduced open arm time on the elevated plus-maze, but not in animals receiving ad lib access to chow or glucose. In the absence of naloxone pretreatment, somatic signs of withdrawal also occurred “spontaneously” 24-36 hr after the last glucose access session. In the absence of naloxone challenge, increased anxiety-like behavior on the plus-maze also was seen in sucrose-cycled animals after a 36-hr fast, as compared to ad lib chow-fed controls, providing evidence for a heightened anxiety-like state in cycled animals withdrawn from intermittent access to a sugar solution.
Hoebel and colleagues have hypothesized that reduced reward function and increased anxiety-like behavior during withdrawal may originate in part from alterations in the balance of dopaminergic and acetylcholinergic (ACh) signaling within the striatum. They found that naloxone challenge stimulated significantly greater ACh release in the nucleus accumbens (NAc) of rats with a cyclic history of daily 12-hr glucose and chow access followed by a 12-hr food deprivation than in animals maintained on ad lib chow. This amplification of the ACh response is accompanied by a reduction in extracellular accumbens dopamine following naloxone challenge, similar to what occurs during morphine withdrawal [50, 51]. After a 36-hr fast, glucose/chow-cycled animals have lower dopamine and higher ACh levels extracellularly in the NAc even in the absence of naloxone, again resembling a spontaneous opiate withdrawal-like state during abstinence from the glucose diet. Hoebel and colleagues propose that this shift towards enhanced ACh release concurrent with diminished dopamine release may reflect a broader behavioral shift away from dopamine-mediated approach behaviors and towards harm avoidance.
Using a sugar-rich solid diet, rather than a liquid diet, Cottone et al. similarly found spontaneously increased anxiety-like behavior in rats withdrawn from intermittent access to a high-sucrose, chocolate-flavored diet. Rats provided with alternating 5-day/2-day access to standard laboratory chow and the palatable diet spent less time on the open arms of the elevated plus-maze and more time within the withdrawal chamber in a defensive withdrawal task when tested during the chow phase of their diet cycle [53, 54]. The increase in anxiety-like behavior was accompanied by increased expression of the stress-related neuropeptide corticotropin-releasing factor (CRF) in the central nucleus of the amygdala (CeA), a system that also is activated during withdrawal from alcohol [55–59], opiates [60–63], cocaine, cannabinoids, and nicotine [66, 67]. Pretreatment with the selective CRF1 antagonist R121919 blocked the food withdrawal-associated anxiety at doses that did not alter the behavior of chow-fed controls [68–70]. Analogously, CRF1 antagonists ameliorated aversive- or anxiety-like states during withdrawal from alcohol [59, 71, 72], opiates [73, 74], benzodiazepines, cocaine [76, 77], and nicotine. CRF1 antagonist pretreatment also blunted the degree to which diet-cycled animals overate the sucrose-rich diet upon renewed access, at doses that did not alter intake of chow-fed controls or of animals fed the sucrose-rich diet without a history of diet cycling. Analogously, CRF1 antagonists reduce excessive intake of alcohol [57, 78–82], cocaine, opiates, and nicotine in models of addiction, while having lesser effects on the drug and alcohol self-administration of non-dependent animals.
When diet-cycled animals were studied while receiving access to the preferred, sucrose-rich diet, both plus-maze behavior and CeA CRF levels normalized, supporting the hypothesis that increased activation of the amygdala CRF system and anxiety-like behavior reflected an acute withdrawal state [53, 54]. Finally, diet-cycled rats also showed increased sensitivity of CeA GABAergic neurons to modulation by CRF1 antagonism. R121919 reduced evoked inhibitory postsynaptic potentials in the CeA to a greater degree in diet-cycled rats than in chow-fed controls, mirroring the enhanced modulatory influence of CRF1 antagonists on CeA GABAergic synaptic transmission that is seen during withdrawal from alcohol. Thus, the pattern of palatable food withdrawal-associated increases in CeA CRF expression and anxiety-like behavior, escalation of intake upon renewed access, and reversal of behavior via CRF1 antagonist pretreatment resembles findings in both drug and alcohol addiction [68–70].
In a separate study, Cottone et al. also found that female rats with a history of receiving highly limited (10 min/day) access to the same chocolate-flavored, sucrose-rich diet exhibited not only dramatic escalation of their intake of the palatable diet (consuming over 40% of their daily intake within 10 min), but also an anxiogenic-like reduction in plus-maze open arm time when studied 24 hr after their last access session. Diet-cycled rats that spent the least time on the open arms were also those that binged the most on the palatable diet, a correlation not evident in chow-fed controls. These results support the Hoebel hypothesis that intermittent access to a palatable sucrose-rich diet leads not only to binge-like intake of the diet, but also to a withdrawal-like state of increased anxiety in direct relation to the binge-like eating.
3.2 Sugar vs. fat addiction: Is there a difference?
Hoebel and colleagues also have recently proposed that there may be something different about the ability of simple sugars (vs. fats) to promote “food addiction.” Whereas somatic and anxiety-like signs of withdrawal have been observed following cessation of intermittent access to sugar solutions or solid diets, the case for withdrawal signs following diets consisting predominantly of fat or sweet-fat mixtures is less clear. As with sugar diets, rats develop binge-like eating patterns when receiving intermittent access to pure fats such as vegetable shortening and sweet-fat chow mixtures. Unlike the robust findings of opiate-like withdrawal in glucose-cycled rats, however, naloxone challenge and fasting have failed to produce opiate-like somatic withdrawal signs in rats with intermittent access to vegetable fat or sweet-fat chow.
Still, a lack of somatic opiate withdrawal-like signs does not preclude the possible development of a negative emotional state in animals withdrawn from high-fat food (i.e., “affective withdrawal”). Indeed, some have observed altered behavioral responses to mild stressors after removal of a preferred high-fat diet. Mice maintained continuously on a high-fat diet showed increased activity in the open field test 24 hr after being switched to standard chow, an effect not seen in animals withdrawn from a high-sucrose diet. Moreover, 24-hr withdrawal from the high-fat diet also resulted in increased CRF mRNA levels in the CeA, similar to the findings of Cottone et al. with a sucrose-rich diet. On the other hand, group differences were not observed in other indices of anxiety-like behavior, including marble burying or elevated plus-maze behavior. Additional considerations for interpreting results from this experiment vis-à-vis the previously reviewed studies of sugar “withdrawal” include that the palatable diets were provided continuously rather than intermittently; that the high-fat diet here was more preferred than the high-sucrose diet; and that the high-sucrose diet was an admixture of macronutrients, rather than a predominantly or purely sugar diet.
Withdrawal-like signs of anxiety upon removal of a palatable diet also may be moderated by genetic factors. Cottone et al. observed stable individual differences in the degree to which rats binged on a high-sucrose diet that correlated with their degree of anxiety-like behavior 24 hr post-access. Pickering et al. found that obesity-prone, but not obesity-resistant, rats showed reduced activity in the center of an open field 2 weeks after being switched to a standard chow diet subsequent to 7 weeks of access to a palatable high-fat, high-sugar diet. The obesity-prone animals continued to undereat the chow relative to both chow-only controls and obesity-resistant animals across three weeks of withdrawal.
Rodents withdrawn from preferred diets will also endure negative consequences to obtain renewed access [89, 91]. For example, mice withdrawn from a high-fat diet spent more time in a brightly lit, aversive environment where they could eat a high-fat pellet than did mice not withdrawn from the high-fat diet or chow-fed controls. Rats with a history of extended access to a palatable cafeteria diet also did not reduce responding for the palatable diet despite the presence of a footshock-conditioned cue. The latter behavior resembles the persistence of cocaine-seeking behavior in rodents despite the presence of a cue that predicts footshock. The results suggest the development of compulsive eating patterns, perhaps analogous to compulsive drug intake, that are resistant to potentially aversive outcomes.
3.3 Stress-induced food-seeking and intake
Because palatable food can have negative reinforcing, or “comforting,” effects, heightened anxiety and stress are not merely consequences of being withdrawn from a palatable diet, but also motivating factors that promote relapse to increased intake after a period of abstinence. By extension, increases in the motivation to obtain, consume, and select palatable “comfort” foods under environmental stress can be hypothesized to reflect negative reinforcement processes analogous to those operating during withdrawal from palatable food [49, 54, 93, 94]. The well-established ability of palatable food consumption under certain conditions to attenuate exogenous activation of stress systems, as evidenced in behavioral, autonomic, neuroendocrine, and neurochemical measures [94–111], strongly supports this hypothesis.
Perhaps accordingly, the alpha-2 adrenergic antagonist yohimbine, a pharmacological stressor that produces high-anxiety states in humans and rodents, and that triggers reinstatement of cocaine-, alcohol-, and methamphetamine-seeking behavior in rats [112–114], also triggers reinstatement of responding for palatable food pellets and sucrose solutions [115–117]. Yohimbine induces reinstatement of seeking for a variety of energy-containing food pellets, including non-sucrose carbohydrate, sucrose, and high-fat pellets, but not of energy-devoid, and perhaps also less palatable, cellulose fiber pellets. Multiple neurotransmitter systems have been implicated as downstream modulators of this effect, including the CRF, orexin, and dopaminergic systems. Systemic pretreatment with the CRF1 receptor antagonist antalarmin strongly attenuates yohimbine-induced reinstatement of palatable food seeking, as does pretreatment with the orexin-1 antagonist SB334867. The site(s) of action for these compounds in blocking yohimbine-induced reinstatement remains unknown. Based on the neuroanatomy of stress- or yohimbine-induced reinstatement of drug seeking, however, regions within the extended amygdala or those involved in inhibitory control are plausible candidates. Indeed, microinjection of CRF into the nucleus accumbens can potentiate cue-induced responding for sucrose, and administration of the dopamine D1 antagonist SCH23390 into the dorsomedial prefrontal cortex can attenuate yohimbine-induced reinstatement of food seeking.
Stressful environmental conditions also can promote ongoing intake of palatable foods by rodents. Under chronic variable stress, mice select more of their daily caloric intake from a high-fat diet than from high-protein or high-carbohydrate diet options. CRF2-deficient mice, which show an exaggerated HPA-axis response to stress, increase their intake of high-fat diet following chronic variable stress to a greater degree than do wild-type controls if the high-fat diet is provided for 1 hr daily rather than ad libitum. These mice also show a reduction in CORT release to restraint stress after 2-3 weeks of concurrent exposure to high-fat, carbohydrate, and protein diets during chronic variable stress.
Boggiano and colleagues have identified a synergistic relationship between food restriction and stress in promoting binge-like food intake in rats that may model the previously reviewed interaction of dietary restraint and stress in triggering binge eating in humans. In the model, neither a history of caloric restriction nor footshock stress alone is sufficient to promote binge-like eating relative to unstressed+unrestricted chow-fed rats. Rather, the combination of repeated cycles of dietary restriction+footshock leads to increased intake of palatable food (cookies) following the stressor [122, 123]. The increased intake is not driven by current metabolic need, because the diet schedule allows the restricted groups to re-feed on chow to normal body weight prior to the footshock challenge. If only standard chow is available, no binge-like behavior occurs, but if a small sample of palatable food is provided alongside the standard chow diet, the rats proceed to binge on chow. These data echo findings from human bulimics, who are much more likely to initiate a binge (on any food) if they first consume a craved food. Other groups have observed similar binge-like behavior following a history of cyclic food restriction if the footshock stressor is replaced with a 15-min period of visual and olfactory exposure to palatable food, during which consumption is not permitted. Although the precise neurobiological changes induced by repeated cycles of restriction, stress, and refeeding remain to be elucidated, endogenous opioids may contribute to the stress-triggered binge-like behavior: naloxone challenge decreases, and the mu/kappa agonist butorphanol increases, palatable food intake specifically in the restricted+stressed group.
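The restriction+stress synergy described above is, statistically, a 2×2 interaction: binge-like intake appears only when both factors are present. A toy difference-of-differences check makes this concrete — note that the intake values below are invented for illustration, not data from the Boggiano studies:

```python
# Hypothetical mean cookie intake (g) for a Boggiano-style 2x2 design;
# the numbers are invented purely to illustrate the interaction logic.
intake = {
    ("no_restrict", "no_stress"): 5.0,
    ("no_restrict", "stress"):    5.2,
    ("restrict",    "no_stress"): 5.4,
    ("restrict",    "stress"):    12.0,  # only the combination binges
}

def interaction_effect(d):
    """Difference of differences: zero for purely additive effects,
    positive when restriction and stress act synergistically."""
    stress_effect_restricted = (d[("restrict", "stress")]
                                - d[("restrict", "no_stress")])
    stress_effect_unrestricted = (d[("no_restrict", "stress")]
                                  - d[("no_restrict", "no_stress")])
    return stress_effect_restricted - stress_effect_unrestricted

print(round(interaction_effect(intake), 2))  # 6.4 -> synergy, not additivity
```

A nonzero difference-of-differences is exactly what distinguishes "restriction plus stress causes bingeing" from "restriction and stress each add a little intake."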
3.4 Loss of hedonic value of previously rewarding stimuli
One of the hallmarks of the “dark side” of drug addiction is the development of tolerance, in which larger and larger quantities of drug are required to produce the same hedonic effect; lesser quantities are no longer perceived as rewarding. A similar loss of hedonic response to food rewards may occur in animals with a history of palatable food access. Indeed, Hoebel and colleagues observed dramatic increases in glucose intake over successive days of 12-hr limited access and increasingly rapid glucose consumption during the first hour of access, consistent with the development of tolerance and a shift towards binge-like eating. Enhanced motivation to obtain the glucose diet was also observed following a two-week period of abstinence. Other investigators have since replicated such binge-like escalation, which may indicate tolerance, using a variety of diets and degrees of limited access [85, 87, 129, 130].
Also potentially resembling tolerance, other previously acceptable rewards become less effective at supporting operant responding and engaging mesolimbic reward circuitry. Rats receiving intermittent access to a chocolate-flavored, sucrose-rich diet develop progressively lower break points when asked to respond for a less preferred, but otherwise palatable, corn syrup-sweetened chow on a progressive ratio schedule. Motivational deficits to obtain the less preferred food are reversed by pretreatment with a CRF1 antagonist, perhaps analogous to the ability of a CRF1 antagonist to reverse blunted reward function during nicotine withdrawal.
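Break points on a progressive ratio schedule can be made concrete with a short numerical sketch. The exponential response requirement below follows the widely used Richardson–Roberts formula (round(5·e^(0.2·n) − 5)); the `max_effort` values standing in for an animal's motivation are invented for illustration:

```python
import math

def pr_requirement(trial):
    """Responses required to earn reward number `trial` on an
    exponential progressive-ratio schedule (Richardson & Roberts):
    round(5 * e^(0.2 * trial) - 5), i.e. 1, 2, 4, 6, 9, 12, ..."""
    return round(5 * math.exp(0.2 * trial) - 5)

def break_point(max_effort):
    """Last ratio completed before the required responses exceed the
    animal's (hypothetical) maximum willingness to respond."""
    trial = 1
    last_completed = 0
    while pr_requirement(trial) <= max_effort:
        last_completed = pr_requirement(trial)
        trial += 1
    return last_completed

# A diet-cycled rat that devalues the alternative food tolerates less
# effort, so its break point is lower (effort limits are invented):
print(break_point(max_effort=50))  # control-like animal
print(break_point(max_effort=20))  # diet-cycled-like animal
```

Because the requirement grows exponentially, even a modest drop in willingness to work produces a sharply lower break point — which is why the measure is sensitive to the motivational deficits described above.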
Other evidence of reduced responses to less palatable, alternative rewards comes from microdialysis experiments in which extracellular dopamine levels were measured in rats with a history of cafeteria diet access. Cafeteria-diet feeding results in lower basal levels of dopamine in the nucleus accumbens after 14 weeks of access, and lower stimulation-evoked dopamine release in both the accumbens and dorsal striatum. In chow-fed rats, increases in dopamine efflux were observed in response to a meal of standard laboratory chow, whereas this increase was no longer observed in the cafeteria-diet fed rats. Dopamine efflux in response to an alternative rewarding stimulus, amphetamine, was also markedly diminished in the cafeteria-diet fed rats. The cafeteria diet, however, continued to stimulate dopamine efflux in the accumbens, suggesting that continued consumption of the cafeteria diet is required for these animals to avoid a chronic dopamine release deficit. Intermittency of access to a palatable diet may also impact its ability to sustain striatal dopamine release. In rats with 12-hr intermittent access to sucrose, sucrose continues to stimulate dopamine efflux in the accumbens after three weeks, but this effect is lost in animals with ad libitum sucrose access.
Intracranial lateral hypothalamic self-stimulation thresholds also increase in rats provided with extended, but not limited, access to a palatable cafeteria diet. Elevated self-stimulation thresholds, an index of impaired brain reward function, arise concurrently with the development of diet-induced obesity and persist even after forced abstinence from the cafeteria diet for a period of two weeks. Analogous to the previously reviewed findings in humans, striatal dopamine D2 receptor levels are also markedly reduced after extended access to the cafeteria diet; lentivirus-mediated knockdown of D2 receptor expression accelerated the increase in reward thresholds, implicating a causal role for this diet-induced neuroadaptation in subsequent brain reward system dysfunction. Reductions in striatal D2 binding and D2 receptor mRNA have also been observed in response to daily, binge-like limited access to sucrose, while D3 receptor mRNA and dopamine transporter expression are increased. Dampened mesolimbic dopaminergic transmission may have functional implications for risk of weight gain, because obesity-prone rats have lower basal extracellular dopamine levels in the accumbens than do obesity-resistant rats even prior to weight divergence, and injection of a lipid emulsion fails to increase accumbens dopamine levels in the obesity-prone group. In contrast, food restriction is associated with increases in D2 levels in obese Zucker rats. As a whole, the results suggest that palatable food consumption can lead to lasting impairments in brain reward systems.
Just as the transition from drug use to dependence is accompanied by a downregulation of brain reward circuitry and a concurrent enhancement of “antireward” circuitry, so does the transition to food addiction appear to involve a “dark side.” Studies of human binge eaters, whose behavior most closely aligns with the current conception of food addiction, have implicated stress and anxious and depressive mood states in the development and maintenance of this transition to consuming palatable food for its negative reinforcing effects.
Animal studies, initiated in large part by Bart Hoebel’s group and now gaining in momentum, have begun to clarify the specific roles of diet schedule, composition, and palatability in altering behavioral, neural, and endocrine stress systems as well as in dampening hedonic responses to food and alternative rewards. However, significant challenges remain. Further work is needed to reach consensus on diagnostic criteria for food addiction in humans. Refinement of such criteria will further the development of suitable animal models to better study the most critical aspects of this disorder.
- Drug addiction has a substantial “dark side” involving relief from negative states.
- A similar dark side may be critical in the development of food addiction.
- Stress and negative affect can trigger excess consumption of palatable foods.
- Repeated palatable food consumption alters brain reward and stress circuitry.
Financial support for this work was provided by the Pearson Center for Alcoholism and Addiction Research, the Harold L. Dorris Neurological Research Institute, and grants DK070118, DK076896, and DA026690 from the NIH. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.
Conflict of Interest
EPZ and GFK are inventors on a patent filed for CRF1 antagonists (USPTO Application #2010/0249138).
Please see the drawing tutorial in the video below
You can refer to the simple step-by-step drawing guide below
Sketch the first picture.
Once you have drawn the rough shape of Peppa's head, you can add the features of her face. Remember that, in Peppa's cartoon style, both eyes sit on the same side of the face, and paired features such as the legs and ears are drawn the same way. The eyes and ears should be drawn at this stage.
The mouth and nose are important features, so draw them as in the picture I drew: two round dots for the nostrils and a curved, banana-shaped line for the mouth.
For the body, all that is required is an arc; part of it will be hidden because Peppa's head sits on top of the body. As we know, Peppa wears her beautiful red dress almost all the time. We will add the color at the end.
Now we can draw Peppa's arm as shown in the image below. As usual, there is only one arm, because the other is hidden on the far side (which we don't see). Remember to draw two fingers and one thumb, just like a real trotter.
Drawing the legs is quite easy: they are short, stick-like ovals, flattened at the bottom like feet, because she often wears black shoes. Oh, and don't forget the little curly pig tail on her back (I almost forgot!). Now add some color, because Peppa Pig is nothing without her colors.
Finally, color Peppa Pig:
Light pink for her face
A slightly darker pink for her cheeks
Red for her dress
Black for her shoes
We now have a completed Peppa Pig.
Interactive, scaffolded model
This Activity Requires:
Important! If you cannot launch anything from this database, please follow the step-by-step instructions on the software page.
Please Note: Many models are linked to directly from within the database. When an activity employs our scripting language, Pedagogica, as do some of the "guided" activities, the initial download may take several minutes. Subsequent activities will not take a long time. See this page for further instructions.
Students observe a model of a gas and are challenged to prevent spatial equilibrium while the model is running. Students learn how spatial equilibrium is determined.
Students will be able to:
Nature loves equilibrium. All natural systems tend to move in this direction, be it energy minimization, pressure, or osmosis. Spatial equilibrium explores how molecules move from an area of high concentration to an area of lower concentration. Students set up various starting conditions and watch as the system naturally reaches a spatial equilibrium in which the concentration is, on average over time, the same everywhere.
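The drift toward equal concentrations can be sketched with a toy model. The few lines of Python below are not the Molecular Workbench model itself; they implement the classic Ehrenfest urn, in which one randomly chosen particle hops to the other side of the box at each step. Starting with every particle on the left, the two counts settle near half-and-half, which is the spatial equilibrium the activity demonstrates.

```python
import random

def ehrenfest(n=200, steps=10000, seed=1):
    """Ehrenfest urn: all n particles start on the left side of a box.

    Each step picks one particle uniformly at random and moves it to the
    other side, so the crowded side loses particles more often than the
    empty side does, and the counts drift toward n/2 on each side.
    """
    random.seed(seed)
    left = n  # every particle starts on the left
    for _ in range(steps):
        if random.randrange(n) < left:
            left -= 1  # picked one of the left-side particles; it hops right
        else:
            left += 1  # picked a right-side particle; it hops left
    return left, n - left

left, right = ehrenfest()
print(left, right)  # roughly equal on average, e.g. near 100 and 100
```

Because each hop is chosen uniformly among all particles, a side holding more particles empties faster than it fills, which is exactly the "high concentration flows to low concentration" behavior students observe in the model.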
This activity is linked closely with two other activities:
Brownian Motion: http://molo.concord.org/database/activities/40.html
Diffusion, Osmosis, and Dialysis: http://molo.concord.org/database/activities/223.html
Continual movement of atoms results in motion that appears random and causes the particles in a gas to become evenly distributed.
Additional Related Concepts
You might find useful the classroom support available at: http://www.concord.org/~barbara/workbench_web/unit1/index.html
Imagine that you are at a party where someone dared you to pop one of the helium balloons in a room that has all the windows and doors closed. Describe what would happen to those helium atoms once they have been released from the balloon. Specifically, talk about their eventual position inside the room and how they arrived there.
Does not work on all Macs. | <urn:uuid:fa35721c-64ac-43f9-bc67-60797a51fcb4> | CC-MAIN-2017-13 | http://workbench.concord.org/database/activities/220.html | s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218186780.20/warc/CC-MAIN-20170322212946-00605-ip-10-233-31-227.ec2.internal.warc.gz | en | 0.897073 | 422 | 3.953125 | 4 |
From window boxes and growbags to greenhouses and allotments, scientists need your help to measure the amount of own-grown fruit and vegetables across the UK in the first national estimate of the 21st century.
The MYHarvest survey, which will be the first since the famous Dig for Victory campaign during the Second World War, will assess the important contribution green-fingered fruit and vegetable growers make to national food security and also reveal how much allotment and garden space we need in the future for the growing number of people living in our cities and towns.
Anyone who grows their own produce is invited to take part in the innovative citizen science project led by the University of Sheffield to help determine the yield of typical UK staple fruit and vegetable crops.
We currently have a poor understanding of how much own-grown food is produced so the research is key to providing an evidence base to support the use of land for growing spaces within our towns and cities.
Dr Jill Edmondson, a soil scientist and ecologist from the Department of Animal and Plant Sciences at the University of Sheffield, hopes the project will provide key insights into the ability of urban areas to contribute to UK food security.
“With over 80 per cent of the UK population living in cities or towns that are currently dependent on imported fruit and vegetables, it is important to understand how we can make our cities and towns more sustainable,” said Dr Edmondson.
“In this way, allotment holders and gardeners are helping us determine the extent to which own-grown food contributes to UK food security and sustainability. Urban greenspaces (for example, parks, allotments, gardens and wasteland) provide a multitude of options for areas where food crops can be grown.
“Anyone who grows their own fruit or veg, even if it’s only a small patch for carrots, can join in and simply log information on the MYHarvest website about what they are growing, where, and how much they have produced.”
MYHarvest is part of a wider research project that will study whether there are any barriers to using other urban greenspaces to increase the area of land used for own-grown food production. For example, some soils within greenspaces may contain high concentrations of pollutants, such as heavy metals, that could pose a risk to human health if the land were used for food production.
Scientists will use the data gathered from people’s allotments, gardens and other spaces to produce estimates of the yields own-growers are able to achieve for typical UK fruit and vegetable crops.
The scientists will use this data to produce the first 21st century estimate of the amount of food grown in UK allotments as they are currently the largest areas of land used for own-growing in our cities and towns.
Own-growers are encouraged to capture and share their harvests, from scrumptious strawberries to juicy tomatoes, on Twitter and Instagram using the hashtag #MYHarvest.
Roscoe Blevins, a research scientist in the MYHarvest team, said, “We are working with the National Allotment Society (NAS) and the Royal Horticultural Society (RHS) and hope to engage as many own-growers as possible. We’d love to hear from all gardeners, school groups, city farms, allotment owners and anyone who is growing their own fruit or vegetables – whether they are doing it for the first time or they are an accomplished green-fingered horticulturist.
“Every piece of information we receive from the nation’s own-growers will be important to the outcomes of this project and we hope taking part will also be a fun way for growers to get the most out of their planting as they compare their own findings with that of others around the country.”
The project is a Living with Environmental Change Fellowship funded by the Engineering and Physical Sciences Research Council (EPSRC).
The Society will be encouraging members and all plot-holders to engage with the MYHarvest project and record their allotment haul this season. The information gathered will reinforce our message about the many benefits of allotment growing and help us to continue to grow the allotment movement.
Expenditure Method - Explained
What is the Expenditure Method?
The expenditure method states that spending by households, firms, and the government sums to GDP. These components contribute to the overall value of all finished goods and services over a period of time. In other words, the expenditure method is a means of calculating gross domestic product from consumption, investment, government spending and net-export figures over a period of time. The method does not account for inflation and therefore yields nominal GDP; when inflation is considered, real GDP is estimated.
How is the Expenditure Method Used?
GDP is most often calculated using the expenditure method, by adding up all of the spending on final goods and services. There are four components in the calculation: household consumption, business investment, government spending, and net exports (exports minus imported goods and services). The expenditure method uses these figures to estimate GDP.
Main Components Under Expenditure Method
In the United States, consumer spending is the largest component in the expenditure method. It is divided into purchases of durable goods (cars, computers, and others) and nondurable goods (clothing, food, and others). Another component is government spending, which covers defense and non-defense purchases such as weapons, drugs, and books. The business investment component includes firms' spending on assets such as real estate, equipment, manufacturing facilities, and plants; it is one of the least predictable components in estimating GDP. Net exports capture the effect of foreign trade in goods and services on the economy and are the last component considered in estimating GDP using the expenditure method.
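As a sketch of the arithmetic, the expenditure identity GDP = C + I + G + (X - M), together with a simple nominal-to-real adjustment, can be written in a few lines of Python. The figures below are made-up illustrative numbers, not actual national accounts data.

```python
def nominal_gdp(consumption, investment, government, exports, imports):
    """Expenditure-method GDP: C + I + G + (X - M)."""
    net_exports = exports - imports
    return consumption + investment + government + net_exports

def real_gdp(nominal, deflator):
    """Adjust nominal GDP for inflation with a price deflator (base year = 100)."""
    return nominal / deflator * 100

# Illustrative figures in trillions of a hypothetical currency
gdp = nominal_gdp(consumption=13.0, investment=3.5, government=3.0,
                  exports=2.5, imports=3.0)
print(gdp)                            # 19.0 (net exports contribute -0.5)
print(real_gdp(gdp, deflator=104.0))  # about 18.3 after removing 4% inflation
```

Note how net exports can subtract from GDP when imports exceed exports, which is why the four components must be combined rather than simply added as positive numbers.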
Limitation of GDP Measure
Economist Joseph Stiglitz notes that GDP is supposed to measure a country's standard of living, but it falls short of this objective because it excludes factors that contribute to citizens' happiness, such as work-life balance and interpersonal relationships. As such, GDP should not be taken as a perfect measure or indicator of a society's well-being.
This weekend we checked out the Chrome Web Lab in the London Science Museum.
It’s the first time I’ve seen a major museum host an interactive exhibit on the wonder of the web. As we wove through throngs of kids drooling over display cases as web-powered robots drew in sand, it made me realize what an amazing learning opportunity an exhibit like this is.
The Web Lab features five different stations, each a playful interaction of machines, haptic interfaces, and occasional online users.
But while I enjoyed getting my portrait drawn by robots and playing instruments with virtual friends, the Web Lab fell short of exposing the real “magic” behind all its wonders: the web itself.
Beneath all of the chrome, the only time you could glimpse any code was when a staff member had to reboot a machine.
Which got me thinking: how would you design an exhibit that put the web on display and let you play with code in a fun, accessible way?
An Exhibit for Webmaking
This is certainly not an exhaustive list, but here are some ideas for a webmaker exhibit:
- Hackability. Much of the Web Lab seemed predetermined, or at least quite limited in its variables. A webmaker exhibit would invite the unexpected and encourage playful appropriation. Perhaps it could provide a glossary of HTML tags that you could use throughout the exhibit, and prompts for how the tags can be recombined to create new commands and attributes.
- Interoperable activities. The stations would be interoperable, so something you made in one activity would transfer over to the other and let you keep adding to it. That way, you see how the pieces fit together.
- Real code. You’d definitely get to manipulate real code. Maybe it’d use an interface like Joe’s CodeCards to invite users to shuffle syntax and run neat, short programs.
- Design. The team behind the Chrome Web Lab did a brilliant job with a coherent visual concept, a clear path through the space, and gorgeous fiducial markers on name tags so you could save your work and play with it when you got home. Having a consistent user experience and an attractive design goes a long way, letting visitors focus more on what they’re trying to build rather than how to navigate the space.
- Interest-driven. The Web Lab gave a lot of presets, which is smart in an exhibit where you just want things to work and to be inoffensive. But their stations didn’t allow for interest-driven personalization. So for example, in an image search activity, you could only select from a prepared list of ca. 20 images. While it’d be riskier, it’d also be more interesting to allow custom searches. Or just more activities that let you play with real content from the web.
It was definitely a pleasure to see the Chrome Web Lab, and together with the Exquisite Forest exhibit at the Tate Modern, Google is making a smart move to be present in heavily visited museums in London. I’d argue there’s an opportunity to complement these exhibits with more activities that emphasize making and hacking, while still being durable and appropriate enough for thousands of visitors.
Southeastern China was hit hard in May and early June 2006, when heavy rains and flooding killed dozens of people. The problems began when Typhoon
Chanchu made landfall along the central southeastern coastline on May 18, 2006. The storm, which had earlier passed through the central Philippines, dumped several inches of rain and battered southern coastal regions, leaving 11 people dead. Later in the month and into June, heavy monsoon rains hit the area, leaving many more dead or missing, and forcing numerous evacuations as a result of flooding, say news reports. The provinces of Fujian, Guizhou, and Guangdong were the hardest hit.
This image shows rainfall totals over southeastern China as seen by the Tropical Rainfall Measuring Mission satellite (known as TRMM), from May 16 to June 1, 2006. The highest rainfall totals for the period (shown in red) are on the order of 500 millimeters (20 inches) and occur near the coast in the area around Hong Kong in the province of Guangdong. A widespread area of 200-millimeter (8-inch) rainfall totals (green) covers most of southeastern China, with locally heavier amounts of a foot or more (yellow and orange areas).
TRMM was placed into service in November 1997. From its low-earth orbit, TRMM has been measuring rainfall over the global Tropics using a combination of passive microwave and active radar sensors. The TRMM-based, near-real time Multi-satellite Precipitation Analysis (MPA) at the NASA Goddard Space Flight Center monitors rainfall over the global Tropics. TRMM is a joint mission between NASA and the Japanese space agency, JAXA. | <urn:uuid:78ad9105-2220-4ca1-a020-5010f98d0c6f> | CC-MAIN-2014-15 | http://visibleearth.nasa.gov/view.php?id=16782 | s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00385-ip-10-147-4-33.ec2.internal.warc.gz | en | 0.960903 | 341 | 3.375 | 3 |
The largest, most representative study of suicidal behaviour ever conducted has found that risk factors for suicidal thoughts, plans and attempts appear to be similar across the globe. Risk factors include having a mental disorder, being female, younger, less educated and unmarried. The study, by the World Health Organization, interviewed 84,850 adults in 17 countries. Among those interviewed, 9.2% reported that they had seriously thought about suicide and 2.7% reported making a suicide attempt at some point in their lives. Rates of suicidal thoughts ranged from 3.1% of people in China to 15.9% in New Zealand. The risk of suicidal thoughts increased sharply during adolescence and young adulthood, and impulse control disorders, substance use disorders and anxiety disorders were all associated with a significantly higher risk of suicidal thoughts and attempts.
You can find out more about this research at | <urn:uuid:3b5c6ffb-bd51-4ed5-b3fc-72f75e01b12a> | CC-MAIN-2017-43 | http://mentalhealthupdate.blogspot.com/2008/02/massive-suicide-study-reports.html | s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187825473.61/warc/CC-MAIN-20171022222745-20171023002745-00083.warc.gz | en | 0.968566 | 170 | 2.625 | 3 |
Come learn about IPR in San Diego at our Chapter Meeting 1/20.
January 08 2010 | Know Your H20,
by Belinda Smith
Visit ICarPool for carpools and check out www.sdcommute.com for public transport options.
The City of San Diego is working to develop local solutions for future water supply reliability. They include the City of San Diego’s:
Recycled Water Program –
• Two water reclamation plants
• These plants treat wastewater to a level that is approved for irrigation, manufacturing and other non-drinking, or non-potable purposes. The North City Plant has the capability to treat 30 million gallons a day and the South Bay Plant can treat 15 million gallons a day. Recycled water gives San Diego a dependable, year-round, locally controlled water resource.
Indirect Potable Reuse/Reservoir Augmentation Demonstration Project -
• Evaluate the feasibility of using advanced water treatment on recycled water.
• Provide a locally-controlled drought-proof supply of high quality water to over half the region’s residents
• Increase recycled water use in the region
• Provide a supply of water with a smaller environmental footprint (including lower carbon emissions) than imported or desalinated water
Recycled Water Study -
• Identify opportunities to increase recycling of wastewater for potable and non-potable uses
• Determine implementation costs
• Determine the extent recycling can off-load the Point Loma Wastewater Treatment Plant | <urn:uuid:e05d702f-5998-4d23-bf91-c6097858e81e> | CC-MAIN-2014-42 | http://www.surfrider.org/coastal-blog/entry/come-learn-about-ipr-in-san-diego-at-our-chapter-meeting-1-20 | s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1413507447819.35/warc/CC-MAIN-20141017005727-00150-ip-10-16-133-185.ec2.internal.warc.gz | en | 0.873499 | 316 | 2.828125 | 3 |
Topics covered: Gene Regulation
Instructors: Prof. Eric Lander
Good morning. Good morning.
So, what I would like to do today is pick up on our basic theme of molecular biology. We've talked about DNA replication.
The transcription of DNA into RNA, and the translation of RNA into protein. We discussed last time some of the variations between different types of organisms: viruses, prokaryotes, eukaryotes, with respect to the details of how they do that; in general, that bacteria typically have circular DNA chromosomes, that eukaryotes have linear chromosomes, etc. What I'd like to talk about today is variation, but variation not between organisms but within an organism, from time to time and place to place, namely, how it is that some genes or gene activities are turned on, on some occasions, and turned off on other occasions. This is, obviously, a very important problem to an organism, particularly to somebody like you who's a multi-cellular organism, and has the same DNA instruction set in all of your cells.
It's obviously quite important to make sure that the same basic code is doing different things in different cells.
It's important, also, to a bacterium to make sure that it's doing different things at different times, depending on its environment. So, I'm going to talk about a very particular system today as an illustration of how genes are regulated, but before we do that, let's ask: where are the different places in this picture?
DNA goes to DNA goes to RNA goes to protein, in which you might, in principle, regulate the activity of a gene. Could you regulate the activity of a gene by actually changing the DNA encoded in the genome? So, why not? Because what? It becomes a different gene. Yeah, that's just a definition.
Why couldn't the cell just decide that I want this gene now to change in some way? Oh, I don't know, I'll alter the DNA sequence in some way. And, that'll make the gene work.
Could that happen? Is that allowed? Yeah, it turns out to happen.
It's not the most common thing, and it's not the thing they'll talk about in the textbooks a lot but you can actually do regulation.
So, the levels of regulation are many, and one is actually at the level of DNA rearrangement. As we'll come to later in the course, for example, your immune system creates new, functional genes by rearranging locally some pieces of DNA, some bacteria, particularly infectious organisms control whether genes are turned on or off by actually going in there, and flipping around a piece of DNA in their chromosome.
And, that's how they turn the gene on or off is they actually go in and change the genome. There's some protein that actually flips the orientation of a segment of DNA. Now, these are a little funky, and we're not going to talk a lot about them, but you should know, almost anything that can happen does happen and gets exploited in different ways by organisms.
So, DNA rearrangement certainly happens. It's rare, but it's always cool when it happens.
So, it's fun to look at. And, something like the immune system can't be dismissed as simply an oddity. That's an incredibly important thing. The most common form is at the level of transcriptional regulation, where whether or not a transcript gets made is how it's processed can be different. First off, the initiation of transcription that RNA polymerase should happen to sit down at this gene on this occasion and start transcribing it is a potentially regulatable (sic) step that maybe you're only going to turn on the gene for beta-globin and alpha-globin that together make the two components of hemoglobin, and you're only going to turn them on in red blood cells, or red blood cell precursors, and that could be done at the level of whether or not you make the message in the first place. That's one place it can be done.
Another place is the splicing choices that you make.
With respect to your message, you get this thing with a number of different potential exons, and you can regulate how this gene is used by deciding to splice it this way, and skip over that exon perhaps, or not skip over that exon. That alternative splicing is a powerful way to regulate. And then finally, you can also regulate at the level of mRNA stability.
Stability means the persistence of the message, the degradation of the message. It could be that in certain cells, the message is protected so that it hangs around longer.
And, in other cells, perhaps, it's unprotected and it's degraded very rapidly. If it's degraded very rapidly, it doesn't get a chance to make a protein or maybe it doesn't get to make too many copies of the protein. If it's persistent for a long time, it can make a lot of copies of protein.
All of those things can and do occur. Then, of course, there is the regulation at the level of translation.
Translation: if I give you an mRNA, is it automatically going to be translated? Maybe the cell has a way to sequester the RNA, to wrap it up in some way so that it doesn't get to the ribosome under some conditions, while under other conditions it does get to the ribosome; or some way to block, by means other than sequestering it, whether or not this message gets physically translated. It turns out that there's a tremendous amount of that. It's, again, not the most common, but we're learning, particularly over the last couple of years, that regulation of the translation of an mRNA is important.
There is, although I won't talk about them at length, an exciting new set of genes called microRNAs, teeny little RNAs of 21-22 bases that are able to pair with a messenger RNA and interfere, partially, with its translatability. And so, by the number and the kinds of little microRNAs that are there, organisms can tweak up or down how actively a particular message is being translated.
So, the ability to regulate translation in a number of different ways is important. And then, of course, there's post-translational control. Once a protein is made, there's post-translational regulation that could happen.
It could be that the protein is modified in some way.
The protein, say, is completely inactive unless you put a phosphate group on it, and some enzyme comes along and puts a phosphate group on it. Or, it's inactive until you take off the phosphate group.
All sorts of post-translational modifications can occur to proteins after the amino acid chain is made that can affect whether or not the protein is active. Every one of these is potentially a step by which an organism can regulate whether or not you have a certain biochemical activity present in a certain amount at a certain time. And, every one of these gets used. This is the thing about coming to a system that has been in the process of evolution for three and a half billion years is that even little differences can be fought over as competitive advantages, and can be fixed by an organism. So, if a tiny little thing began to help the organism slightly, it could reach fixation. And, you're coming along to this system, which has had about three and a half billion years of patches to the software code, and it's just got all sorts of layers and regulation piled on top of it. All of these things happen. But, what we think is the most important out of this whole collection is this guy.
The fundamental place at which you're going to regulate whether or not you have the product of a gene is whether you bother to transcribe its RNA. But I do want to say, because, yes? And which exons get used and which don't? Yeah, well, there are tissue-specific factors that are gene-specific that can influence that. And, surprisingly little is known about the details. There are a couple of cases where people know, but as you'd imagine, you actually need a regulatory system in that tissue to be able to decide to skip over that exon.
And, the mechanics of that surprisingly are understood in very few cases. And, you might think that evolution wouldn't like to use that as the most common thing because you really do have to make a specialized thing to do that. So, that's what happens on these. That's one in particular where I think a tremendous amount of more work has to happen.
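To see how those splicing choices multiply a gene's output, here is a toy enumeration in Python. It is an illustrative sketch, not a model of the splicing machinery: given which exons are optional, it lists every message you can form by keeping or skipping each one.

```python
from itertools import product

def isoforms(exons, optional):
    """Enumerate transcripts formed by keeping or skipping each optional exon."""
    results = []
    for keep in product([True, False], repeat=len(optional)):
        skipped = {exon for exon, kept in zip(optional, keep) if not kept}
        results.append([exon for exon in exons if exon not in skipped])
    return results

print(isoforms(["E1", "E2", "E3"], optional=["E2"]))
# [['E1', 'E2', 'E3'], ['E1', 'E3']]
```

With k independently skippable exons the count grows as 2^k, which is one reason alternative splicing is such a powerful regulatory lever.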
mRNA stability, we understand some of it but not all the factors in this business. What I was telling you about translation, with these little micro-RNAs, is stuff that's really only a few years old that people have come to understand. So, there's a lot to be understood about these things. I'm going to tell you about initiation of mRNAs, because it's the area where we know the most, and I think it'll give you a good idea of the general paradigm.
But, any of you who want to go into this will find that there's a tremendous amount more to still be discovered about these things.
So, the amount of protein that a cell might make varies wildly.
In your red blood cells, 80% of the protein is alpha- or beta-globin. It's a huge amount. That's not true in any other cell in your body. So, we're talking about pretty significant ranges of difference as to how much protein is made.
How do things like that happen? Well, I'm going to describe the simplest and classic case of gene regulation in bacteria, and in particular, the famous lac operon of E. coli.
So, this was the first case in which regulation was ever really worked out, and it stands today as a very good paradigm of how regulation works. E. coli, in order to grow, needs a carbon source. In particular, E. coli is fond of sugar.
It would like to have a sugar to grow on. Given a choice, what's E. coli's favorite sugar? It's glucose, right, because we have the whole cycle of glucose: the whole pathway of glucose goes to pyruvate, which we've talked about, and glucose is the preferred sugar to go into that pathway, OK, of glycolysis.
Glycolysis: the breakdown of glucose. But, suppose there's no glucose available. Is E. coli willing to have a different sugar?
Sure, because E. coli's not stupid. If it were to refuse another sugar, it wouldn't be able to grow. So, it has a variety of pathways that will shunt other sugars to glucose, which will then allow you to go through glycolysis, etc. Now, given a choice, it would prefer to use the glucose. But if not, suppose you gave it lactose. Lactose is a disaccharide. It's milk sugar, and I'll just briefly sketch it: lactose is a disaccharide where you've got a glucose and a galactose.
Glucose plus galactose equals lactose. So, if E. coli is given lactose, it is able to break it down into glucose plus galactose.
And it does that by a particular enzyme called beta-galactosidase, which breaks down galactosides. And, it'll give you galactose plus glucose. How much beta-galactosidase does an E. coli cell have around? Sorry? None? But how does it do this?
When it needs it, it'll synthesize it. When it needs it, like, there's no glucose and there's a lot of galactose around, how much of it will there be? A lot. It turns out that in circumstances where E. coli is dependent on galactose as its fuel, something like 10% of total protein can be beta-gal under the circumstances when you have galactose but no glucose. Sorry? Sorry, when you have lactose but no glucose. Thank you. So, when you have lactose but no glucose, E. coli has 10% of its protein weight as beta-galactosidase. Wow. But when you have glucose around or you don't have lactose around, you have very little.
It could be almost none, trace amounts. So, why do this?
Why not, for example, just have some far more reasonable compromise?
Like, let's always just have 1% of beta-galactosidase.
Why do we need the 0-10%? 10%'s actually extremely high.
So what. It's a good insurance policy. So, if I only have galactose, I need more. Well, I mean, 1% will still digest it. I'll still do it. What's the problem? Sorry?
So what, I do it at a slower rate. Life's long. Why not? Ah, it has to compete. So, if the cell to the left had a mutation that got it to produce four times as much, then it would soak up the lactose in the environment, grow faster, etc., and it would have out-competed ours.
So, these little tuning mutations have a huge effect amongst this competing population of bacteria. And so, if E. coli currently thinks that it's really good to have almost none at some times and 10% at other times, you can bet that it's worked that out through the product of pretty rigorous competition: it doesn't want to waste the energy making this when you don't need it, and when you do need it, you really have to compete hard by growing as fast as you can when you have that lactose around. OK. So, how does it actually get the lactose, sorry, keep me honest on lactose versus galactose, into the cell? It turns out that it also has another gene product, another protein, which is a lactose permease. And, any guesses as to what a lactose permease does? It makes the cell permeable to lactose, right, good. So, the lactose can get into the cell, and then beta-gal can break it down into galactose plus glucose. These two things, in fact, both get regulated, beta-gal and this lactose permease. So, how does it work?
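Before turning to the machinery, the input-output logic just described can be summarized as a tiny truth table, sketched here in Python. This is a caricature of the behavior only (high expression when lactose is present and glucose is absent), with made-up level names, not a model of the actual molecular players.

```python
def beta_gal_expression(glucose_present, lactose_present):
    """Caricature of lac operon output for the two sugar inputs."""
    if not lactose_present:
        return "basal"  # no inducer around: only trace amounts are made
    if glucose_present:
        return "low"    # preferred sugar available: expression stays low
    return "high"       # lactose but no glucose: up to ~10% of total protein

for glucose in (True, False):
    for lactose in (True, False):
        print(f"glucose={glucose!s:5} lactose={lactose!s:5} -> "
              f"{beta_gal_expression(glucose, lactose)}")
```

The rest of the lecture introduces the molecular parts (the promoter, lac Z, lac Y, lac I) that implement this behavior in the cell.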
Let's take a look now at the structure of the lack operon.
So, I mentioned briefly last time, what's an operon? Remember we said that in bacteria, you often made a transcript that had multiple proteins that were encoded on it.
A single mRNA could get made, and multiple starts for translation could occur, and you could make multiple proteins.
And, this would be a good thing if you wanted to make a bunch of proteins that were a part of the same biochemical pathway.
Such an object, a regulated piece of DNA that makes a transcript encoding multiple polypeptides is called an operon because they're operated together. So, let's take a look here at the lack operon. I said there was a promoter.
Here is a promoter for the operon, and we'll call it P lack, promoter for the lack operon. Here is the first gene that is encoded. So, the message will start here, actually about here, and start going off. And, the first gene is given the name lack Z.
It happens to encode beta-galactosidase enzyme.
Remember, they did a mutant hunt, and when they did the mutant hunt, they didn't know what each gene was as they isolated mutants.
So, they just gave them names of letters. And so, it's called lack Z. And, everybody in molecular biology knows this is the lack Z gene, although Z has nothing to do with beta-galactosidase. It was just the letter given to it.
But, it's stuck. Next is lack Y.
And, that encodes the permease. And, there is also lack A, which encodes a transacetylase, and as far as I'm concerned you can forget about it. OK, but I just mentioned that it is there, and it actually does make three polypeptides.
We won't worry about it, OK, but it does make a transacetylase, OK? But it won't figure in what we're going to talk about, and actually remarkably little is known about the transacetylase. There's also one other gene I need to talk about, and that's over here, and that's called lack I. And, it too has a promoter, which we can call PI, for the promoter for lack I.
And, this encodes a very interesting protein.
So, we get here one message encoding one polypeptide here.
This mRNA encodes one polypeptide. It is monocystronic. This guy here is a polycystronic message. It has multiple cystrons, which is the dusty old name for these regions that were translated into distinct proteins. And so, that's that mRNA.
So, lack I, this encodes a very interesting protein, which is called the lack repressor. The lack repressor, actually I'll bring this down a moment, is not an enzyme.
It's not a self-surface channel for putting in galactose.
It is a DNA binding protein. It binds to DNA. But, it's not a nonspecific DNA binding protein that binds to any old DNA.
It has a sequence-specific preference.
It's a protein that has a particular confirmation, a particular shape, a particular set of amino acids sticking out, that it combined into the major groove of DNA in a sequence-specific fashion such that it particularly likes to recognize a certain sequence of nucleotides and binds there. Where is the specific sequence of nucleotides where this guy likes to bind? It so happens that it's there.
And this is called the operator sequence or the operator site.
So, this protein likes to go and bind there. Now, I've drawn this, by the way, so that this operator site is actually right overlapping the promoter site.
Who likes to bind at the promoter site? RNA polymerase.
What's going to happen if the lack repressor protein is sitting there?
RNA polymerase can't bind. It's just physically, blocked from binding. So, let's examine some cases here.
Let's suppose that we look at here at our gene. We've got our promoter, P lack. We've got the operator site here. We've got the lack Z gene here, and we've got the lack repressor, lack I, the repressor sitting there.
Polymerase tries to come along to this, and it's blocked.
So, what will happen in terms of the transcription of the lack operon: no mRNA. So, that's great.
So, we've solved one problem right off the bat.
We want to be sure that sometimes there's going to be no mRNA made.
This way, we're not going to waste any metabolic energy, making beta-galactosidase. Are we done? No? Why not.
We've got to sometimes make beta-galactosidase.
So, we've got to get that repressor off there. Well, how is the repressor going to come off there? When do we want the repressor off there: when there's lactose present.
So, somehow we need to build some kind of an elaborate sensory mechanism that is able to tell when lactose is present, and send a signal to the repressor protein saying, hey, lactose is around. The signal gets transmitted all the way to the repressor protein, and the repressor protein comes off.
What kind of an elaborate sensory mechanism might be built?
Use lactose as what? So, this is actually pretty simple.
You're saying just take lactose, and you want lactose to be its own signal? So, if lactose were to just bind to the repressor, the repressor might then know that there was lactose around.
Well, what would it do if lactose bound to it? Sorry? Why would it fall off? Yep. More interested in the lactose.
So, if you're suggestion, this is good. I like the design work going on here. The suggestion is that if lactose binds to this here, binds to our repressor, it's going to fall off because it's more interested in lactose than in the DNA. Now, how is the interest actually conveyed into something material? Because the actual level of cognitive like or dislike for DNA on the part of this polypeptide is unclear, you may be anthropomorphizing slightly with regard to this polypeptide chain. So, mechanistically, what's going to happen? Shape. Yes, shape? Change confirmation, the binding act, the act of binding lactose creates some energy, may change the shape of the protein, and that shape of the protein may, in the process of wiggling around to bind lactose may de-wiggle some other part of it that now no longer binds so well to DNA. That is exactly what happens.
Good job. So, you guys have designed, in fact, what really happens. What happens is what's called an allosteric change. It just means other shape.
So, it just changes its shape, that it changes shape on binding of lactose. And it falls off because it's less suitable for binding this particular DNA sequence when it's bound to lactose there. So, in this case, in the presence of lactose, lack I does not bind.
And, the lack operon is transcribed. Yes? Uh-oh. OK, all right designers, here we've got a problem. You have such a cool system, right? You were going to sense lactose.
Lactose was going to bind to the lack repressor, change its confirmation falloff: uh-oh. But, as you point out, how's it going to get any lactose, because there's not a lactose permease because the lactose permease is made by the same operon. So, what if, in fact, instead of getting one of these DOD mill speck kind of things of some repressor that is absolutely so tight that it never falls off under any circumstances, what if we build a slightly sloppy repressor that occasionally falls off, and occasionally allows transcription of the lack operon? Then, we'll have some trace quantities of permease around. With a little bit of permease around, a little lactose will get in.
And, as long as even a little lactose gets in, it'll now shift the equilibrium so that the repressor is off more, and of course that will make more permease, and shift, and shift, and shift, and shift. So, as long as it's not so perfectly engineered as to have nothing being transcribed, so no mRNA is really very little mRNA. See, this is what's so good, I think, about having MIT students learn this stuff because there are all sorts of wonderful design principles here about how you build systems. And, I think this is just a very good example of how you build a system like this.
Now, all right, so we now have the ability to have lack on and lack off, and that is lack off, mostly off because of your permease problem: very good. Now, let's take a little digression about, how do we know this? This kind of reasoning, I've now told you the answer. But let's actually take a look at understanding the evidence that lets you conclude this.
So, in order to do this, and this is the famous work in molecular biology of Jacobin Manoux in the late '50s for which they won a Nobel Prize, they wanted to collect some mutants.
Remember, this is before the time of DNA sequence or anything like that, and wanted to collect mutants that affected this process.
So, in order to collect mutants that screwed up the regulation, they knew that beta-galactosidase was produced in much higher quantity if lactose was around. The difficulty with that was that wild type E coli, when you had no lactose would produce very little beta-gal, one unit of beta-gal, and in the presence of lactose, would produce a lot, let's call it 1, 00 units of beta-gal. But, the problem with playing around with this is lactose is serving two different roles.
Lactose is both the inducer of the expression of the gene by virtue of binding to the repressor, etc., etc.
But, it's also the substrate for the enzyme because as beta-galactosidase gets made, it breaks down the lactose. So, there's less lactose in binding, and if you wanted to really study the regulatory controls, you have the problem that the thing that's inducing the gene by binding to the repressor is the thing that's getting destroyed by the product of the gene. So, it's going to make the kinetics of studying such a process really messy. It would be very nice if you could make a form of lactose that could induce beta-galactosidase by binding to the repressor, but wasn't itself digested.
Chemically, in fact, you can do that. Chemically, it's possible to make a molecule called IPTG, which is a galactoside analog. And, what it does is this molecule here which I'll just sketch very quickly here, it's a sulfur there, and you can see vaguely similar, this is able to be an inducer.
It'll induce beta-gal, but not a substrate. It won't get digested.
So, it'll stick around as long as you want. It's also very convenient to use a molecule that was developed called ex-gal.
Ex-gal again has a sugar moiety, and then it also has this kind of a funny double ring here, which is a chlorine, and a bromine, and etc. And, this guy here is not an inducer. It's not capable of being induced, of inducing beta-galactosidase expression. But, it is a substrate.
It will be broken down by the enzyme, and rather neatly when it's broken down it turns blue. These two chemicals turned out to be very handy in trying to work out the regulation of the lack operon. So, if I, instead of adding lactose, if I think about adding IPTG, my inducer, when I add IPTG I'm going to get beta-gal produced. When I don't have IPTG, I won't produce beta-gal. But then I don't have a problem of this getting used up. So now, what kind of a mutant might I look for? I might look for a mutant that even in the absence of the inducer, IPTG, still produces a lot of beta-gal. Now, I can also look for mutants that no matter what never produce beta-gal, right? But, what would they likely be? They'd likely be structural mutations affecting the coding sequence of beta-gal, right? Those will happen.
I can collect mutations that cause the E coli never to produce beta-gal. But that's not as interesting as collecting mutations that block the repression that cause beta-gal to be produced all of the time. So, how would I find such a mutant?
I want to find a mutant that's producing a lot of beta-gal even when there's no IPTG. So, let's place some E coli on a plate. Should we put IPTG on a plate? No, so no IPTG.
What do I look for? How do I tell whether or not any of these guys here is producing a lot of beta-gal? Yep?
So, no IPTG, but put on ex-gal, and if anybody's producing a lot of beta-gal, what happens? They turn blue: very easy to go through lots of E coli like that looking for something blue.
And so, lots of mutants were collected that were blue.
And, these chemicals are still used today. They're routinely used in labs, ex-gal and stuff like that, making bugs turn blue because this has turned out to be such a well-studied system that we use it for a lot of things. So, mutants were found that were constituative. So, mutants were found that were constituative mutants. Constituative mutants: meaning expressing all the time, no longer regulated, so, characterizing these constituative mutants.
It turns out that they fell into two different classes of constituative mutants. If we had enough time, and you could read the papers and all, what I would do is give you the descriptions that Jacobin Maneaux had of these funny mutants which they'd isolated and were trying to characterize, and how to puzzle out what was going on.
But, it's complicated and hard, and makes your head hurt if you don't know what the answer is. So, I'm going to first tell you the answer of what's going on, and then sort of see how you would know that this was the case. But, imagine that you didn't know this answer, and had to puzzle this out from the data.
So, suppose we had, so if there were going to be two kinds of mutants: mutant number one are operator constituents.
They have a defective operator sequence. Mutations have occurred at the operator site. Mutant number two have a defective repressor protein, the gene for the repressor protein.
How can I tell the difference?
So, I could have a problem in my operator site.
What would be the problem with the operator site?
Some mutation to the sequence causes the repressor not to bind there anymore, OK? So, a defective operator site doesn't bind repressors. Defective repressor, the operator site is just fine, but I don't have a repressor to bind at it. So how do I tell the difference? One way to tell the difference is to begin crossing the mutants together to wild type, and asking, are they dominant or recessive, or things like that?
Now, here's a little problem. E Coli is not a diploid, so you can't cross together two E colis and make a diploid E coli, right? It's a prokaryote. It only has one genome. But, it turns out that you can make temporary diploids, partial diploids out of E coli because it turns out you can mate bacteria. Bacteria, which have a bacterial chromosome here also engage in sex and in the course of bacterial sex, plasmids can be transferred called, for example, an F factor, is able to be transferred from another bacteria. And, through the wonders of partial merodiploid, you can temporarily get E colis, or you can permanently get E colis, that are partially diploid. So, you can do what I'm about to say. But, in case you were worried about my writing diploid genotypes for E coli, you can actually do this.
You can make partial diploids. So, let's try out a genotype here.
Suppose the repressor is a wild type, the operator is wild type, and the lack Z gene is wild type. And, suppose I have no IPTG, I'm un-induced. I have one unit of beta-gal. When I add my inducer, what happens? I get 1,000 units of beta-gal.
Now, suppose I would have an operator constituative mutation.
Then, the operator site is defective. It doesn't bind the repressor. Beta-gal is going to be expressed all the time, even in the absence. All right, well that was, of course, what we selected for. Now, suppose I made the following diploid.
I plus, O plus, Z plus, over I plus, O constituative, Z plus. So, here's my diploid. What would be the phenotype? So, in other words, one of the chromosomes has an operator problem.
Well, that means that this chromosome here is always going to be constituatively expressing beta-gal.
But, what about this chromosome here? It won't. So, this would be about 1, 01, give or take, because it's got one chromosome doing that and one chromosome doing this, and this one would be about 2, 00. Now, that quantitative difference doesn't matter a lot. What you really saw when you did the molecular biology was that when you had one copy of the operator constituative mutation, you still got a lot of beta-gal here even in the absence of IPTG. So, that operator constituative site looked like it was dominant to this plus site here.
But now, let's try this one here. I plus, O plus, Z plus, over I plus, operator constituative, Z minus. What happens then?
This operator constituative site allows constant transcription of this particular copy. But, can this particular copy make a working, functional beta-gal? No. So, this looks, when you do your genetic crosses, you find that the operator constituative, now, if I reverse these here, suppose I reverse these, I plus, O plus, Z minus, I plus, O constituative, Z plus, same genotypes, right, except that I flipped which chromosome these are on.
Now, what happens? This chromosome here: always making beta-gal and it works. This chromosome here: not making beta-gal.
Even though it's regulated, it's a mutant. So, in other words, from this very experiment, you can tell that the operator site is only affecting the chromosome that it's physically on, that it doesn't make a protein that floats around.
What it does is it's said to work in cys. In cys means on the same chromosome. It physically works on the same chromosome.
Now, let's take a look, by contrast, of the properties of the lack repressor mutants. If I give you a lack repressor mutant, I plus, O plus, Z plus is the wild type.
I constituative, O plus, Z plus: what happens here?
This wild type is one in 1, 00. This guy here: 1,000 and 1, 00, and then here let's look at a diploid: I plus, O plus, Z plus, I constituative, O plus, Z plus. What's the effect? The I constituative doesn't make a functioning repressor. But, I plus makes a functioning repressor. So, will this show regulation?
Yeah, this will be regulated just fine. This works out just fine, and in fact it'll make 2,000, and it'll make two copies there.
But again, the units don't matter too much. And, by contrast, if I give you I plus, O plus, Z minus, and I constituative, O plus, Z plus, what will happen?
Here, I have my mutation on this chromosome. But, it doesn't matter because I've got my mutation on this chromosome in the repressor. I've got a mutation on lack Z here, but as long as I have a functional copy, one functional copy of the lack repressor, it works on both chromosomes.
It will work on both chromosomes, and so in other words this lack repressor, one copy works on both chromosomes. In other words, it makes a product that diffuses around, and can work on either chromosome, and it's said to work in trans, that is, across.
So, the operator is working in cys. It's operating on its own chromosome only. A mutation in the operator only affects the chromosome it lives on, whereas a functional copy of the lack repressor will float around because it's a protein, and that's how Jacobin Maneaux knew the difference.
They proved their model by showing that these two kinds of mutations had very different properties. Operator mutations affected only the physical chromosome on which they occurred, which of course they had to infer from the genetics they did, whereas repressor, a functional copy repressor, could act on any chromosome in the cell.
So, OK, we've got that. Now, last point, what about glucose?
I haven't said a word about glucose. See, this was a big deal to people.
This model, the repressor model, we have this repressor. What about glucose? What's glucose doing in this picture?
So, glucose control: so here's my gene. Here's my promoter, P lack. Here's my operator, beta-gal.
It's encoded by lack Z. You've got all that. When this guy is present, sorry, when lactose is present, the repressor comes off. Polymerase sits down. Wait a second, polymerase isn't supposed to sit down unless there's no glucose.
We need another sensor to tell if there's glucose, or if there's low glucose. So, we're going to need us a sensor that tells that. Any ideas? Yep?
Yeah, if you work that one through, I don't think it quite works. But, you've got the basic idea. You're going to want another something, and it turns out there's another site over here, OK? There's a second site on which a completely different protein binds. And, this protein is the cyclic AMP regulatory protein, and it so happens that in the cell, when there's low amounts of glucose, let me make sure I've got this right, when there's low amounts of glucose, what we have is high amounts of cyclic AMP. Cyclic AMP turns out, whereas lactose is used directly as the signal, cyclic AMP is used as the signal here. When the cell has low amounts of glucose, it has high amounts of cyclic AMP. Now, what do you want your cyclic AMP to do? How are we going to design this?
It's going to bind to a protein, cyclic AMP regulatory protein, it's going to sit down, and now what's it going to do?
Is it going to block RNA polymerase?
What do we want to do? If there's low glucose, high cyclic AMP, we sit down at the site, we want to turn on transcription now, right? So, what it's got to do is not block RNA polymerase, but help RNA polymerase. So, what it actually does is instead of being a repressor, it's an activator. And what it does is it makes it more attractive for RNA polymerase to bind, and it actually does that by, actually it does it slightly by bending the DNA.
But, what it does is it makes it easier for RNA polymerase to bind.
It turns out that the promoter is kind of a crummy promoter.
It's actually just like, remember the repressor wasn't perfect; the promoter's not perfect either. The promoter's kind of crummy.
And, unless RNA polymerase gets a little help from this other regulatory protein, it doesn't work.
We have two controls: a negative regulator responding to an environmental cue, a positive activator responding to an environmental cue, helping polymerase decide whether to transcribe or not, and basically that's how a human egg goes to a complete adult and lives its entire life, minus a few other details. There are some details left out, but that's a sketch of how you turn genes on and off. | <urn:uuid:603a21f0-9a71-48e8-b74d-1e4741689b1b> | CC-MAIN-2016-30 | http://ocw.mit.edu/courses/biology/7-012-introduction-to-biology-fall-2004/video-lectures/lecture-13-gene-regulation/ | s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469258943369.84/warc/CC-MAIN-20160723072903-00073-ip-10-185-27-174.ec2.internal.warc.gz | en | 0.973231 | 8,359 | 3.046875 | 3 |
Birding the Navajo and Hopi Reservations
By Bud Johnson
Notice: This article appeared in Cactus Wrendition, vol. 46,
no. 3, May-June 1998, published by the Maricopa Audubon Society, P.O. Box 15451,
Phoenix, AZ 85060. ©1998, Maricopa Audubon Society and the author. Reproduction restricted to personal or educational use; reproduction for
commercial use is prohibited without the written consent of the Maricopa Audubon
Society and the author.
The great mesas and colored buttes of the Hopi-Navajo region East of the
Grand Canyon are home to several birds not readily seen elsewhere in Arizona.
Black-Billed Magpie, Northern Shrike, vagrant Eastern birds, mountain species,
and sometimes even Chukar can be found on the Colorado Plateau. The movie-set
geology is worth pursuing even if no special birds are found. The Hopi-Navajo
area is often referred to as the Four Corners area, since it is the only place
in the 48 contiguous states where one can have feet and hands in four states at
once.
The area has great expanses of sparse grass used to graze sheep, horses, and
cattle, and the valleys are often filled with sagebrush. Typical birds are Horned
Lark, Sage Thrasher, Pinyon Jay, Mountain Bluebird, and various sparrows,
including Vesper, Sage, and Brewer's.
Atop the great mesas, ponderosa pine mixed with oak, New Mexico locust,
aspen, and Douglas fir can be found. Birds to look or listen for include:
Northern Goshawk, Golden Eagle, Wild Turkey, Band-Tailed Pigeon, Flammulated
Screech Owl, Northern Pygmy Owl, Whip-Poor-Will, White-Throated Swift,
Broad-Tailed Hummingbird, Northern (Red Shafted) Flicker, Acorn Woodpecker,
Hairy Woodpecker, Western Wood Pewee, Violet-Green Swallow, Steller's Jay,
Mountain Chickadee, White-breasted Nuthatch, Pygmy Nuthatch, American Robin,
Western Bluebird, Mountain Bluebird, Virginia's Warbler, Yellow-rumped Warbler,
Grace's Warbler, Red-Faced Warbler, Western Tanager, Hepatic Tanager,
Black-headed Grosbeak, Evening Grosbeak, Pine Siskin, Red Crossbill, and others.
Most of the Navajo-Hopi country, except that occupied by ponderosa pine and
fir-spruce forests, can be divided into area characterized by three types of
- open, weedy grassland that is chiefly grama and galleta grasses;
- stretches of sagebrush, greasewood and saltbrush; and
- the pygmy forest of pinyon pine and juniper trees.
In the grassland, the Poor-will, Horned Lark and Western Meadowlark are
virtually the only breeding birds. Bird life in the
sagebrush-greasewood-salt-bush type is also scarce, typified by such species as:
Mourning Dove, Common Nighthawk, Say's Phoebe, Northern Mockingbird, Bendire's
Thrasher, Sage Thrasher, Loggerhead Shrike, House Finch, Vesper Sparrow,
Black-Throated Sparrow, Sage Sparrow, and Brewer's Sparrow.
In the pygmy forest, the following are the typical breeding birds: Red-tailed
Hawk, American Kestrel, Mourning Dove, Great Horned Owl, Common Nighthawk,
Cassin's Kingbird, Ash-Throated Flycatcher, Gray Flycatcher, Scrub Jay, Pinyon
Jay, Plain Titmouse, Bushtit, Bewick's Wren, Rock Wren, Mountain Bluebird,
Blue-Gray Gnatcatcher, Black-Throated Gray Warbler, Spotted Towhee, and Chipping Sparrow.
During migration, the number of species changes dramatically, as seen in
figure 1. The limited waterways with trees and the few marshes concentrate the
migrating birds into a few areas. These can provide the best places to find
vagrant shorebirds and rare Eastern birds not often seen in Arizona. A loop can
be planned for a long weekend that yields a variety of birds as well as
views of some of the spectacular countryside of movie fame.
Fig. 1. Recorded species vs. month, showing migration peaks
Fig. 2. Map of the Navajo Reservation, showing birding locations.
One bird finding route (See Figure 2) would be to go North from Phoenix to
Payson and on up to I-40 by way of Holbrook. Good places to stop along I-40 are
the trees at the Petrified Forest Visitor Center
area, Navajoa, and the Sanders school grounds. Rarities such as Black-throated Blue Warbler have been found by
MAS members at these migrant traps. One can then continue on North to Ganado and Ganado Lake. The lake can be reached by going
East out of Ganado on Rt. 264 and then North on Navajo Rt. 27 a short ways. A
dirt road to the West leads to willows near the lake. Camping out by the lake
allows one to look for Eastern migrants in the willows and cottonwood trees
with the dawn chorus. The lake had dried up due to a slow leak, but should have
water in it now. A variety of ducks and geese, shorebirds and various swallows
are to be expected at the proper season. Black Terns are regular in migration.
While in the Ganado area, the College at Ganado grounds have had some good
birds such as Mourning Warbler. Park near the Hospital and walk the loop drive
around the campus. Look especially along the ditches and outer areas from the
buildings. Nearby is the Hubbell's Trading Post, which is a National Historic
Site. Recently the Arizona Highways had an article on this very interesting
place. Park by the Trading Post and then bird Northeast, up Ganado Wash.
Interesting birds found in or by the wash have included Prothonotary Warbler and
Rose-Breasted Grosbeak. Many Eastern warblers have been found along this wash,
which is one of the best riparian areas on the reservation.
Brad Jacobs notes in his book on Birding the Navajo and Hopi Reservation
that: "This area is a regular on the fall field trip list of the Maricopa
Audubon Society. Many of the vagrant Eastern warblers recorded for North-east
Arizona are seen in the part of the grove that stretches from the Trading Post
to about a half mile above the bridge on highway 264. September and early
October seem to be the best time to visit for vagrants."
Going West of Ganado on Rt. 264, one comes to Rt. 191, which is taken North
towards Chinle. A family of Long-eared Owls was seen at Moaning Lake bed to the West of Rt. 191. In Chinle,
take the road East to the Canyon de Chelly area. The cottonwoods and Russian
olives at the mouth of Canyon de Chelly have yielded several Eastern vagrants.
The corral area on the road to the Thunderbird Lodge has had an Eastern
Kingbird. There is a large campground to overnight for catching the dawn chorus
along the wash. The Lodge has a cafeteria that features Navajo and Southwest
dishes at reasonable prices.
Continuing on North on Rt. 191 leads to Many Farms Lake.
This is one of the best birding areas on the Reservations any time of the year.
Well over 200 species have been seen on or around the lake. Access to the lake
is by a good dirt road going to the West just a ways North of the town of Many
Farms. The road leads to the dam, where a scope can be used to look out over the
lake. Following the road further West and North allows one to park and walk out
to the lake. Good birds have included documented White-rumped Sandpiper and
Lapland Longspur. Black-Billed Magpies and gulls are often found around the lake.
The next stop on our loop could be Round Rock Lake, near the Round Rock community, reached by continuing North on Rt. 191 until the
intersection with Navajo Rt. 12. This is a good place to look for wintering
birds. One year several swans kept the lake open by disturbing the ice as it
formed. Going further North on Rt. 191 until reaching Rt. 160 leads East to the
town of Teec Nos Pos. Black-Billed Magpies nest near here,
and Black-Capped Chickadees have been found, since they nest in the San
Juan River area just to the East in New Mexico. Blue Jays have also been seen.
Kayenta is to the West on Rt. 160. This is near
Black Mesa and the Peabody Coal Company where Chuck LaRue was the biologist for
a number of years. Chuck is now in Flagstaff and is no longer able to provide
frequent updates on the good birds in the Northeast part of Arizona and the
Reservations. He was also a good guide for finding the elusive Chukar in
the reservation area.
The coal is taken from the mesa South of Kayenta by a long conveyor belt that
can be seen to the West of town. The belt loads storage bins, which are used to
fill long electric trains. The train runs along Rt. 160 for quite some way
before going off to the North to be used to fire the Navajo power plant near
Lake Powell. Along the road in the winter watch for Northern Shrike. Watch for
the road to the North of the train tracks leading to Cow
Springs near the old Cow Springs Trading Post. The lake has had a number of
unusual migrants, including the first Arizona record of Red Phalarope.
Our loop takes us on to Tuba City, home of the Tuba
City Truck Stop made famous by a song. The food and service are good. This area
and any of the towns along our loop should be checked for birds since the towns
typically have the only trees and water for miles around. Taking Rt. 264 to the
East out of Tuba City takes one past Coal Canyon. This interesting place is
famous for its legend of an Indian maiden who haunts the canyon during full moons.
Continuing on Rt. 264 to Keam's Canyon, one is in the
heart of the Hopi Reservation, which is surrounded by the larger Navajo Reservation.
Watch for groves of cottonwoods and Russian olives along the way to look for
Eastern strays. Just before the town, there is an old camp ground in the wash.
Park and walk the wash looking for Cassin's Finches and Townsend's Solitaires.
This is a good place for a picnic. After leaving the wash, continue East into
town past the hospital area. Park by the school playground and look along
the wash for such birds as Brown Thrasher and Philadelphia Vireo that have been
found here. Continuing on the road leads to a small lake where Rose-Breasted
Grosbeak has been seen.
Leaving East from Keam's Canyon, one can take Rt. 77 South back to Holbrook
and return to Phoenix to complete the loop. If time is short, one can do just
the Ganado, Many Farms and Keam's Canyon portion by doubling back after
leaving Many Farms. Motels are available on the Navajo and Hopi reservations in
Chinle, Kayenta, Second Mesa, Cameron and Tuba City. In addition to the
campgrounds already mentioned, one can camp in Monument Valley and at Wheatfields Lake.
Remember that travel within the Navajo and Hopi reservations comes under the
Indian Nation laws and is a privilege, not a right. Many sections have been
closed at the request of residents or to protect archeological sites in the
area. Please stay on marked roads and stay away from private homes, buildings
and livestock. Also, landmarks such as unusual hills, other geological
formations and rock outcroppings may have religious significance and should not
be climbed. If in doubt ask local residents before proceeding. They will
appreciate your courtesy and concern, but they will be reluctant to talk about
such sites. Ask permission before taking pictures of residents, their homes or
livestock. Always avoid approaching livestock herds too closely. You may frighten or
scatter the herds. Local residents are concerned about poachers, so when
strangers come near their herds, they have reasonable cause for concern. Also, dogs
that run with the flocks can be very aggressive. With a little forethought and
planning, one can have a unique experience birding the USA's largest Indian
reservation. With the lack of birders in the area, you too may find a new bird
to add to the growing list of Arizona birds.
References: Jacobs, Brad. 1986. Birding on the Navajo and Hopi Reservations. Jacobs Publishing Co., Sycamore, Missouri.
Nnete Okorie-Egbe was a princess and revolutionary leader from Akwete.
Nnete was the fearless leader of the 1929 women’s riot of Aba, protesting unfair taxation of women.
She was imprisoned by the British Colonial Administration for two years in Port Harcourt.
She was later released to a hero's welcome when the British Colonial Administration backed down and reversed itself, abolishing the taxation of women.
The revolution she led also included women from other old Eastern Nigerian ethnic groups such as the Opobo, Ibibio, Andoni, Ogoni and Bonny. It was a strategically executed anti-colonial revolt against the economic, social and political injustices faced by women, who were also not allowed to hold leadership positions.
Indeed, she and her fellow amazonian women of that era were the ‘pioneers’ of true and sensible feminism in the Nigerian clime.
She died at age 102 in 1968, during the Nigeria/Biafra Civil War.
Every year, companies work hard to improve the quality of their basic metal products. They use electropolishing methods to reduce the rough edge burrs and make the sheet metal gleam. They also turn to another process to improve the overall quality and durability of their products. This procedure is called electroplating.
What Is Electroplating?
Electroplating is a process in which metal is added to the surface layer of another product. The process can be employed to coat plastic (only following a coating with an electrically conductive material) but is more frequently applied to metal. While all commercial metals can undergo electroplating, the ones most commonly used for this procedure are:
* Zinc-based die castings
Similarly, the metal used during the process can be one of several. The most common metals used to deposit a thin coating on the surface of either metal or plastic materials include:
Characteristics of Electroplating
The procedure of electroplating is frequently run as a continuous process. During the procedure, the parts to be electroplated are often hung from conveyors. As they pass through the process, the individual parts are lowered into the various tanks, moving in order through successive plating, washing and fixing tanks. At the end of the line, they are examined for any flaws.
The electroplating process is one that relies on its ability to address the individuality of each piece or type of material submitted. It does one piece or piece type at a time because of the specific characteristics and requirements of each metal submitted for electroplating. Consideration is given to such things as the details of each electroplating solution as well as to the immersion times required and the current densities. As a result, the process of electroplating is adjusted in accordance with the materials including such things as the sizes and shapes of the pieces to be electroplated.
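The immersion times and current densities the process is adjusted for can be quantified: Faraday's law of electrolysis relates the charge passed to the amount of metal deposited. The sketch below is purely illustrative and not from the original article; the function name is invented, and the nickel figures and 95% current efficiency are typical textbook values used only as an example.

```python
# Estimate electroplated deposit thickness with Faraday's law of electrolysis.
# Illustrative only: values below are typical for a nickel bath, not from the article.

FARADAY = 96485.0  # Faraday constant, C/mol


def plating_thickness_um(current_density_a_dm2, minutes, molar_mass_g_mol,
                         valence, density_g_cm3, efficiency=0.95):
    """Deposit thickness in micrometres for a given current density and time."""
    # Charge passed per cm^2 of part surface (1 dm^2 = 100 cm^2)
    charge_per_cm2 = current_density_a_dm2 / 100.0 * minutes * 60.0  # C/cm^2
    # Moles of metal deposited per cm^2, reduced by the bath's current efficiency
    moles_per_cm2 = efficiency * charge_per_cm2 / (valence * FARADAY)
    thickness_cm = moles_per_cm2 * molar_mass_g_mol / density_g_cm3
    return thickness_cm * 1e4  # cm -> micrometres


# Typical nickel bath: ~4 A/dm2 for 30 min; Ni: M = 58.69 g/mol, z = 2, rho = 8.90 g/cm3
t = plating_thickness_um(4.0, 30.0, 58.69, 2, 8.90)
print(f"Estimated nickel thickness: {t:.1f} um")
```

Doubling either the current density or the immersion time doubles the estimated thickness, which is why the process must be tuned piece by piece as the article describes.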
What Is the Purpose of Electroplating?
The purpose of electroplating focuses on two aspects: appearance and durability. Overall, the procedure is intended to:
* Increase the ability of the material to resist corrosion
* Prevent, or at least reduce, wear
* Improve the overall appearance of the metal. This is particularly true when it comes to decorative pieces, including jewelry and trays
* Increase the overall dimensions of the product, making it wider or longer or both
Electroplating is an old form of improving metal work. It is used to improve the physical appearance and increase the corrosion resistant capabilities of a variety of metals. While the process may be considered old, the results of electroplating create a look that is entirely new and capable of outlasting the original item. This becomes very important in military and exterior applications but is just as imperative for the smallest medical equipment. Overall, this confirms that electroplating is an integral part of producing items for today’s modern world that work and work well even under stress.
PEP General Metal Finishing provides its clients with the benefits arising from the technology behind electroplating. While we are proud of our past, we are rooted in the present and looking forward to what the future will bring, so we can pass on our acquired skills and knowledge to our clients. To find out more about who we are and our quality services, take the time to browse our site online at http://www.pepgenmetal.com/.
This September, the Ross Medical Education Center Grand Rapids North, Michigan campus participated in the Juvenile Diabetes Research Foundation (JDRF) walk, to raise awareness for type 1 diabetes. The campus as a whole showed up on Saturday morning, bright and early, and showed their dedication to the event. “We all showed up with our Ross shirts on for our third annual walk,” said Office Assistant Jenny MacLarty. “Showing the community that we are a part of the awareness and willing to do our part.”
What is JDRF, you ask? It is the leading global organization funding type 1 diabetes research. It was started more than 45 years ago by parents who were trying to find a cure to help their children. Type 1 diabetes is a serious autoimmune disease that attacks a person's pancreas and stops the production of insulin. Insulin is a hormone the body produces that is essential to turning the food we eat into energy. This disease strikes children and adults suddenly; it is nothing you can prepare for, it just happens. And currently there is no cure for this disease. Every day, 40 children and 40 adults are diagnosed with type 1 diabetes in the United States alone.
This is a difficult disease for children, as they love to play and run around. Having this disease puts a toll on their daily lives. They constantly have to have their blood-sugar levels checked, rely on injected or infused insulin, and make sure that they are eating a well-balanced diet and engaging in physical activity. For their parents, this is a very serious and stressful disease to manage while making sure their little ones do not miss out on being a child or on the fun things in life. As of today, more than one million people are affected by this disease.
JDRF has committed over $1.9 billion to research since the early 1970s. So where does the money go? It goes toward better treatment plans, prevention, and one day a world without type 1. According to their website, $16 million in funding has gone toward artificial pancreas systems, and another $13 million in research funding led to the discovery of Lucentis. Lucentis is a medication that works as a blood vessel growth inhibitor. It can treat age-related macular degeneration. It can also treat macular edema caused by a blocked blood vessel or diabetes, and diabetic retinopathy. Because of all of the incredible efforts of JDRF, the Grand Rapids North campus was thrilled to help out and be a part of this fundraising event.
Finland : Success Through Equity : The Trajectories in PISA Performance
Ahonen, A. K. (2021). Finland : Success Through Equity : The Trajectories in PISA Performance. In N. Crato (Ed.), Improving a Country’s Education : PISA 2018 Results in 10 Countries (pp. 121-136). Springer. https://doi.org/10.1007/978-3-030-59031-4_6
© The Author(s) 2021
The Finnish education system has gone through an exciting developmental path from a follower into a role model. Over the two-decade history of the PISA studies, Finland's performance has included years of glory as one of the world's top-performing nations, but also a substantial decline. This chapter examines Finland's educational outcomes in the most recent PISA study and the trends across previous cycles. Boys' poorer performance and the increasing effect of students' socio-economic background are clear predictors of the declining trend, but they can explain it only partly. Some of the other possible factors are discussed.
Parent publication ISBN: 978-3-030-59030-7
Is part of publication: Improving a Country's Education : PISA 2018 Results in 10 Countries
by Steven Windmueller, Ph.D.
Great Jewish communities require a set of social elements if they are to sustain and grow Jewish life. For communities to have true standing and historic impact, they will need to exhibit some of these particular features:
An Engaging History and a Sense of Pride: Communities require their own distinctive record; in part having such a defining urban identity lays a framework for a communal tradition essential for great cities and their legacy.
A Substantial Demographic Cohort: Jewish cities of import have historically possessed a significant population base. Without core numbers a community simply cannot sustain and grow its institutions, promote its identity, or create a definitional statement of its contributions and import on the stage of history.
Economic Capacity: To achieve any of the core structural elements of greatness, a community requires a financial infrastructure as well as the capacity to sustain and grow its enterprise. Families of significant wealth would be a defining element within Jewish history in underwriting the religious and communal needs of a great Jewish city; in more contemporary times communal and private foundations are seen as core to the rise and expansion of great Jewish communities.
Diversity and Choice: Jewish communities of prominence are often comprised of diverse Jewish ethnic constituencies that add definition, character, and import to the development of these “empire” cities which tend to attract different cultural streams and social groups. The presence of significant ideological controversy and policy debates would represent specific markers of communal prowess and importance. Diversity in this context can be seen as an indicator of a community’s strength and maturity.
National (International) Connections and Geographical Access: Jewish communities throughout time who made an impact on historical events and garnered recognition were ones that demonstrated an array of connections with other communities. Great Jewish communities were seen as important “players” beyond their own borders. They demonstrated quality leadership, the excellence of their institutions, and the contributions of their citizenry. Most great communities were historically near major trade and sea routes permitting them the means to share ideas and exchange information. Today, “geographical access” may well be described in the context being seen as a media center.
Opportunity and Continuity: Communities of influence and standing provided the resources and access points for its members to fully embrace Jewish ideas and practices. Prominent public lectures, academic convocations, and the presence of high profile personalities contribute to the exposure of local institutions and their leaders.
Creative Spark: Communities have understood the importance of experimentation as a way to market themselves but also to inspire and enrich their citizenry. New forms of institutional expression reflect this form of creative expression.
Intellectual Commitment and Cultural Resources: Over the centuries the centers of Jewish life were constructed around the great intellectual and cultural resources that a community could offer. Aligned with a number of the other core infrastructural elements, great centers of learning and teaching embraced the creation of Jewish educational institutions and programs.
Leadership: Significant communities of influence have been known or identified by their leadership. Families of prominence and wealth have been significant factors in sustaining institutions, promoting Jewish learning, and advancing great Jewish ideas.
On the Edge and in the Center: Communities need to exhibit political and social power and intellectual achievement that define their core interests and values. Yet, great Jewish cities have also been witness to creative pockets of energy and resources operating on the edge, challenging the establishment while adding new social ideas and institutional models. Both elements are important to the health of the communal enterprise.
These ten measures help to define centers of Jewish creativity and greatness. Certainly, communities are not limited to these specific expressions to define their unique and important contributions. Modern Jewish urban centers have thrived combining many of these essential pieces, in contrast to older Jewish communities where the emergence of a single institution or the presence of a prominent Jewish personality or family could lend special credence to a community’s standing.
Yet, when assessing the vitality of a Jewish community, these standards of measure ought to serve as an important framework. What ought we to expect of the communal enterprise, and how should we evaluate the destiny and health of our urban Jewish centers?
Dr. Steven Windmueller is the Rabbi Alfred Gottschalk Emeritus Professor of Jewish Communal Service at the Jack H. Skirball Campus of Hebrew Union College-Jewish Institute of Religion in Los Angeles. You can find more on his writings and research on his website, The Wind Report.
by William Bork
The 1930s was a period of great economic hardship for the American people, a period of upheaval in the social and political structure. Streets were filled with hungry people waiting in breadlines. During the Great Depression, workers also walked the picket lines demanding their rights under laws passed during the New Deal.
The National Industrial Recovery Act (NIRA), passed in 1933, contained a section guaranteeing to workers a right to organize for the purpose of collective bargaining. Several large and sometimes violent strikes occurred in 1934 involving unions struggling for recognition as collective bargaining agent under the NIRA. Toledo, Minneapolis, and San Francisco were scenes of three of the best known strikes.
The level of strike activity was the highest in American history. Between May, 1933 and July, 1937, 10,000 strikes took place involving some 5,600,000 workers. It was a period of bitter conflict between Capital and Labor.
In May 1935, the NIRA was declared unconstitutional by the U.S. Supreme Court. Its labor provisions, however, were replaced on July 5, 1935 by the National Labor Relations Act, popularly referred to as the Wagner Act.
This act set up elaborate machinery for the determination of collective bargaining agencies and for the protection of labor from unfair practices by employers who might attempt to hinder union organization. By its protection of workers who chose to organize, it went much further than any previous law to encourage a policy of collective bargaining. The steelworkers were among the first to begin organizing under this new law.
Read more: Massacre at Republic Steel
African Pygmy Kingfisher
Also known as: Pygmy Kingfisher, Miniature Kingfisher. French: Martin-pêcheur pygmée; German: Natalzwergfischer; Spanish: Martín Pigmeo Africano; Swedish: Pygmékungsfiskare; Dutch: Afrikaanse Dwerg-ijsvogel; Italian: Martino pigmeo africano.
World: Widely distributed in Africa S of 15°N. It tends to be associated with water so is mostly absent from E Somalia and the SW of the continent. It is also missing from some areas to the N of the Congo Basin.
Kenya: Found mostly W of the Rift Valley in the less arid regions. Mostly absent from the dry N and E.
This is a small, brightly coloured, insectivorous species of woodlands and forests. Although it hunts for terrestrial insects it is generally found in close association with water. It prefers to perch in shady areas and, being unobtrusive, it can be difficult to spot.
The bird photographed above is a member of the most common race, Ispidina picta picta, which can be distinguished from the less common I. p. natalensis by its lilac ear coverts. The latter race is present on Pemba Island and along the coast; it also occurs as a non-breeding migrant in the E from April to September.
Yes, this is part of my immigration series, and no, I will not examine forms of democracy. I won't explain the nature of the constitutional republic (which protects the individual) and how it operates via a representative democracy either. I'm not even going to touch on the nuances of the balance of powers that (technically) ensures that no group in the United States has absolute control. If you want an exact overview of United States democracy, you can read through these documents.
I will say that a representative democracy imbues its people with certain powers that make them ultimately responsible for much of what goes on within our country's policies. Sure, majority rule with constitutional protections for individuals is the modus operandi, but the powers of the people are vast. Lincoln rightly called the United States a government of the people, by the people, and for the people.
In a review of Jamieson’s memorial exhibition in 1938, Jan Gordon identified a stylistic distinction between those paintings executed prior to the Great War, and those made afterwards. Those before she categorised as “substance”, reflecting the influence of Manet; those afterwards she categorised as “shadow”, reflecting the increased importance to the artist in that period of capturing the effects of light and atmosphere and the greater influence of Monet and Constable. Sunset, Versailles, seems to bridge the gap between these two categories; whilst the brushstrokes are bold, energetic and full of body, the way in which Jamieson captured the unique light of the ‘golden hour’ is quite exquisite and evocative of Constable’s work.
It was not only Jamieson’s stylistic concerns that changed after the war. To augment a meagre income, the artist and his wife decided to teach, at home and abroad, and gave sketching classes every summer. The destinations of these summer painting trips are largely unrecorded, other than by the evidence of the exhibited paintings themselves, but it seems that the couple revisited with their students a number of the French and Belgian harbours, towns and gardens on which Alexander’s reputation had been built prior to the war.
In his foreword to Jamieson’s memorial exhibition, Sir John Lavery wrote, “He dipped his brush in light and air … Many a time … I have been struck by his wonderful perception and clear judgement, his keen sense of colour and composition, allied with masterly technique, which enabled him to convey his impression in the simplest language.”
This could have been written in reference to Sunset, Versailles itself; from a few paces this work perfectly embodies the forms and atmosphere of a French sunset on a summer's evening, but up close is a masterful cacophony of surprisingly few brushstrokes – an example of the 'simple language' to which Lavery refers.
Use of Skin-Shock at the Judge Rotenberg Educational Center (JRC)
print this page
USING THE GRADUATED ELECTRONIC DECELERATOR TO TARGET ANTECEDENT BEHAVIOR IN MR/AUTISTIC STUDENTS
Patricia M. Rivera, Ph.D., Matthew L. Israel, Ph.D., Candy McGarry, Heather Sutherland
Judge Rotenberg Educational Center
The Judge Rotenberg Educational Center operates day and residential programs for children and adults with behavior problems, including conduct disorders, emotional problems, brain injury or psychosis, autism and developmental disabilities. The basic approach taken in all of JRC's programs is the use of behavioral psychology and its various technological applications, such as behavioral education, programmed instruction, precision teaching, behavior modification, behavior therapy and behavioral counseling. From JRC's inception, its basic philosophy has always included the following principles: a willingness to accept students with the most difficult behavioral problems and a refusal to reject or expel any student because of the difficulty of his or her presenting behaviors; the use of a highly structured, consistent application of behavioral psychology to both the education and treatment of its students; a minimization of the use of psychotropic medication; and the use of the most effective behavioral education and treatment procedures available.
This study examines the use of the Graduated Electronic Decelerator (GED) to target antecedent behaviors in individuals with mental retardation and/or autism. The GED is a contingent skin-shock device used as a consequence to decelerate inappropriate behavior. The GED has been shown to be effective in decelerating behaviors such as aggression, destruction, and health-dangerous behaviors (please see our website www.judgerc.org for more details). Data from two students at the Judge Rotenberg Center (JRC) will be presented. Both students participated in a court-authorized aversive treatment program with the goal of decelerating their inappropriate behaviors. Results will show an initial deceleration of aggressive behaviors treated with the GED and a subsequent continued deceleration after initiating GED treatment of antecedent behaviors. Discussion of these results and the implications for further study will also be presented.
Participants for this study were 1 current and 1 past student from the Judge Rotenberg Center (JRC). The students participated in all aspects of JRC’s residential and educational programming. All students were referred to JRC because they exhibited a high frequency of inappropriate behaviors and were not able to be maintained within a regular school setting or other residential schools. They exhibited maladaptive behaviors such as aggression, destruction and self-abuse that interfered with their educational growth and put themselves and others at high risk for physical harm. Both of the students have also received multiple psychiatric diagnoses. They were involved in special education at an early age and due to their behaviors were placed in various psychiatric hospitals and residential placements. These students had been rejected from numerous facilities due to the severity of their behavior. Alternative treatments for the students’ behavior problems prior to JRC included positive only behavior modification and medication management, all of which proved to be ineffective in treating their maladaptive behaviors.
Positive-only programming was implemented with both of the students upon their admission to JRC. This treatment included positive reinforcement, such as tokens and tangible rewards, for performing appropriate behaviors and refraining from inappropriate ones. Social reprimands as consequences for inappropriate behavior, and ignoring of certain problematic behaviors, were also incorporated into the students' behavior modification programs. DRO contracts of varied lengths (range: 1 minute to 1 month) were also implemented. Due to the intensity and frequency of the students' inappropriate behavior, a court-approved aversive program was implemented, which included the use of water spray, spatula spanks, muscle squeezes, and SIBIS, which is a contingent skin-shock device. These aversive methods were found to be ineffective in treating the students, and their inappropriate behaviors continued to accelerate. Both students were subsequently switched to the Graduated Electronic Decelerator (GED) for their major inappropriate behaviors, including aggression. The GED is an FDA-approved skin-shock device used to decelerate inappropriate behaviors. Frequency data were recorded 24 hours a day, and tally charts were converted to standard behavior charts in order to track behavior changes and adjust the length and time of contracts.
Participant #1: L.L.
· Mental Retardation (Mild)
· Organic Dementia
· Mood Disorder NOS
Past medications include:
Problematic behaviors exhibited prior to JRC placement:
· Aggression towards others (bite, kick, punch etc.)
· Health Dangerous Behaviors (biting self, hitting self, etc)
· Numerous psychiatric hospitalization for disorganized and assaultive behavior
Participant #2: J.C.
32-year-old male
· Mental Retardation (Severe)
Past medications include:
Problematic behaviors exhibited prior to JRC placement:
· Pulled hair until almost bald
· Health Dangerous behaviors (bite self, bang head, etc.)
· Aggression towards others (hit, head butt, bite, etc.)
· Multiple day and residential placements due to aggressive and self-abusive behavior
Results of this study indicate a significant deceleration in both students' aggressive behavior as a result of their antecedent behavior being treated with the GED. For participant #1, L.L., his aggressive behavior was increasing before the implementation of the GED (see Figure 1). These behaviors were multiplying at a rate of 9.28 every 6 months. There was an initial deceleration of his aggressive behaviors following GED treatment, but this progress appeared to level off and he continued to be aggressive at an approximate rate of 20 occurrences per month. It was noted that most of L.L.'s aggressive episodes started with him bolting out of his seat and then attacking staff or other students. In June of 1993, we began to treat the antecedent behavior of "out of seat without permission" with the GED. Using negative reinforcement, a chair was created that would close a switch, activating the GED any time L.L. got out of his seat without permission. The GED applications continued every 2 seconds until he returned to his seat. If he raised his hand to get out of his seat, staff were able to deactivate the seat board switch. Figure 1 shows L.L.'s aggressive behaviors decelerated significantly following the treatment of his antecedent behavior with the GED.

For participant #2, J.C., her aggressive behavior was also accelerating, at a rate of 1.27 every six months, before the implementation of the GED (see Figure 2). There was an initial deceleration in her aggressive behavior once it was treated with the GED, but this behavior appeared to level off at a median of 5 occurrences per month. Again, the antecedent of "out of seat without permission" was identified and began being treated with the GED in May of 2002 (negative reinforcement was not used in this case). Figure 2 shows that J.C.'s aggressive behaviors decelerated at a rate of 2.17 every 6 months following the treatment of her antecedent behavior with the GED.
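The "multiplying at a rate of X every 6 months" figures above come from standard celeration charting, where behavior frequencies are plotted on a logarithmic scale and the trend line is read as a fold-change per unit of time. The sketch below illustrates that computation only; the monthly counts are invented for illustration and are not the study's actual data.

```python
import math


def celeration(monthly_counts, period_months=6.0):
    """Fold-change in frequency per `period_months`, from a least-squares fit
    of log10(count) against month (the standard celeration-chart trend line)."""
    xs = range(len(monthly_counts))
    ys = [math.log10(c) for c in monthly_counts]
    n = len(monthly_counts)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Slope of the log-linear fit, in log10 units per month
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    return 10 ** (slope * period_months)


# Invented example: aggression counts falling from 20 to 5 per month
counts = [20, 16, 13, 10, 8, 6, 5]
c = celeration(counts)
if c < 1:
    print(f"aggression dividing by {1 / c:.2f} every 6 months")
else:
    print(f"aggression multiplying by {c:.2f} every 6 months")
```

A value above 1 means the behavior is accelerating (e.g., the ×9.28 baseline trend), while a value below 1 means it is decelerating under treatment.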
This study supports the effectiveness of treating antecedent behaviors of aggression with the GED to further decelerate the occurrence of aggressive behaviors in students with mental retardation. Quite often the inappropriate behavior being treated is not the first link in the chain of events. Using frequency data, a clinician can clearly distinguish behavior that precipitates an aggressive act. Identifying and treating such antecedent behaviors with aversive interventions such as the GED can be considered an effective treatment option for further decelerating aggressive behavior. Future studies could examine the effectiveness of starting to treat the antecedent behavior at the same time as the aggressive behavior instead of waiting for the aggressive behavior to level off. Also, one could look at other behavioral topographies such as health dangerous behaviors and identify and treat their antecedent behaviors. Finally, when examining the effectiveness of treating antecedent behaviors of aggression with the GED it would also be beneficial to look at the effect this treatment has on decelerating other defined behaviors such as destruction, major disruptive behaviors and non-compliance. | <urn:uuid:f8f51489-0fc4-47a7-86e9-4a4dda113ec9> | CC-MAIN-2016-50 | http://www.effectivetreatment.org/using_graduated.html | s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698542412.97/warc/CC-MAIN-20161202170902-00283-ip-10-31-129-80.ec2.internal.warc.gz | en | 0.958786 | 1,693 | 2.53125 | 3 |
2004 INDEXCountry Ranks
Denmark Introduction - 2004
SOURCE: 2004 CIA WORLD FACTBOOK
Once the seat of Viking raiders and later a major north European power, Denmark has evolved into a modern, prosperous nation that is participating in the general political and economic integration of Europe. It joined NATO in 1949 and the EEC (now the EU) in 1973. However, the country has opted out of certain elements of the European Union's Maastricht Treaty, including the European Economic and Monetary Union (EMU) and issues concerning certain justice and home affairs.
NOTE: The information regarding Denmark on this page is re-published from the 2004 World Fact Book of the United States Central Intelligence Agency. No claims are made regarding the accuracy of Denmark Introduction 2004 information contained here. All suggestions for corrections of any errors about Denmark Introduction 2004 should be addressed to the CIA. | <urn:uuid:ab32f5c9-0e1c-4174-9660-9b6d52390ec3> | CC-MAIN-2019-26 | https://immigration-usa.com/wfb2004/denmark/denmark_introduction.html | s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627998607.18/warc/CC-MAIN-20190618043259-20190618065259-00511.warc.gz | en | 0.851254 | 178 | 2.546875 | 3 |
Research History (3 of 3)
By 1981, National Jewish was recognized internationally as the leading medical institution for the study and treatment of chronic respiratory, allergic and immune system disorders.
Two highly significant research discoveries were made at National Jewish in the 1980s.
The isolation and identification of one of two genes that code for the human T cell receptor, a key immune system component.
The discovery of superantigens, bacterial or viral toxins which massively stimulate disease-fighting T cells, often with harmful effects, such as attacking the body’s own cells.
These two discoveries revolutionized thinking on how the immune system recognizes foreign invaders and mounts it defense. This discovery also led researchers to explore the link between T cells and HIV.
Research in the 1990s
Research efforts doubled in the 1990s, and in 1995 Science Watch rated National Jewish as the top private institution in immunology research in the world. By the mid 1990s, the institution research parameters also expanded to include lung cancer studies as well as a cooperative project with Colorado’s Fitzsimons Army Medical Center to help evaluate Persian Gulf War veterans with chronic medical symptoms.
During this decade, research of pediatric asthma also continued to be paramount as specialists designed effective drug regimens to keep children from all socioeconomic levels with severe, life threatening asthma alive and out of the ER.
In 1999, National Jewish established two important research programs:
An unprecedented partnership with the University of Colorado Denver, forming the Integrated Department of Immunology.
The Harry and Jeanette Weinberg Clinical Research Unit opened and completed more than 70 studies in its first year, on subjects ranging from asthma to COPD, drug effectiveness and depression.
A New Millennium for Discovery
During the new millennium, chronic obstructive pulmonary disease (COPD) became a major focus in research at National Jewish. A number of other notable research efforts also got underway, including research to perfect an experimental lung surgery; attempt to develop stronger, safer vaccines; investigations to look for better ways to treat food allergies; and advancing the field’s understanding of cystic fibrosis.
Decade of Innovation
In 2007, the board approved a new strategic plan which called attention to personalized healthcare as an emerging trend, enabled by technology, knowledge of genetics and biology, economics and consumerism. Personalized medicine presented opportunities to enable healthcare practices to be increasingly patient-specific by taking into account individual differences in health states, disease processes and outcomes from interventions. This new goal of proactive, individualized medicine required a more complete research continuum to further integrate clinical care with translational and basic science research at the point of the patient.
As part of this new emphasis on personalized medicine, new programs were introduced including the Institute for Advanced Biomedical Imaging, the Advanced Diagnostics Laboratories, the Center for Genetics and Therapeutics, and the Integrated Bioinformation and Specimen Center.
Page 1, 2, 3
Read about National Jewish Health's clinical history.
Read about National Jewish Health's academic history. | <urn:uuid:b217fb6c-3cd8-40e7-aa62-b264f01741e9> | CC-MAIN-2015-35 | http://www.nationaljewish.org/about/whynjh/history/research/research-history3/ | s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440644064362.36/warc/CC-MAIN-20150827025424-00005-ip-10-171-96-226.ec2.internal.warc.gz | en | 0.934083 | 610 | 2.78125 | 3 |
Start a 10-Day Free Trial to Unlock the Full Review
Why Lesson Planet?
Find quality lesson planning resources, fast!
Share & remix collections to collaborate.
Organize your curriculum with collections. Easy!
Have time to be more creative & energetic with your students!
Students examine how engineers can use photosynthesis as a model of a complex. In this energy source lesson students describe the relationship between photosynthesis and respiration and how they sustain life on this planet.
16 Views 17 Downloads
What Members Say
- Victoria C., Student teacher | <urn:uuid:e8f5836a-f24b-4044-a45f-5203d4349009> | CC-MAIN-2017-04 | https://www.lessonplanet.com/teachers/lifes-primary-energy-source | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282932.75/warc/CC-MAIN-20170116095122-00502-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.880487 | 114 | 3.453125 | 3 |
Place-Making Traffic Solutions!
ThrU-Turn Intersections - Pedestrian-Friendly, Place-Making Solutions to Congestion!
The "Bowtie" is part of the "ThrU-Turn" family of innovative concepts. In this sketch of a major arterial, traffic making a U-turn around the ellipse can merge with oncoming traffic, eliminating the need to install a signal that would stop oncoming traffic as necessary with a Loon. The median can be landscaped, or potentially used as a major transit station.
Traditionally, It's Either Form OR Function
All over the nation, planners and residents alike hope to reinvent languishing suburban retail corridors into sustainable, livable, transit and pedestrian-oriented "Places" within the midst of their stale, Anywhere USA sprawl. But the exciting Visions created by well-meaning stakeholders, planners, and architects often bump up against a hard reality: The streets in question often carry huge traffic loads and it is extremely difficult to acquire the space for desired "Complete Streets."
Another sad truth is that even the best Transit Oriented Development will still generate a lot of vehicle traffic. If the roadways that serve proposed TOD are too congested, builders and city councils may be too reluctant to create TOD because it adds traffic to already intolerable congestion!
And engineers charged with managing the street simply will not accept form over function. In truth, to be truly livable and to sustain desired TOD densities, these intersections need to flow well. "ThrU-Turns" describe a family of designs such as Median U-turns, Bowties, Loons. These are hot new designs with excellent Place-Making qualities as well as exceptional ability to move a lot of traffic at slower maximum speeds, but higher average speeds, without congestion through sensitive mixed-use, multi-modal areas.
Place-Making Form AND Engineering Function!
Have you ever tried to turn left from a parking lot onto a busy arterial, and found it so impossible to get a gap in both directions that you instead went right, then made a U-turn? Thru-Turn designs such as Median U-turns, Bowties, Loons, Superstreet Intersections, Michigan Lefts, and even Roundabouts successfully formalize this action with amazing results.
ThrU-Turn accomplished using a Bowtie concept. Retail-oriented roundabouts on a low-speed, pedestrian-oriented cross-street.
How do they work?
In the diagrams, some lefts are completed as "right-U-through." Others are "through-U-right." Either way, the result is that the former left turn pocket is no longer needed, so fill it with a transitway, or whatever Place-Making architects can dream up! And since there are no left-turn arrows, the intersection can handle more traffic with significantly less delay, and it is also much easier for pedestrians to cross.
Meet the family
Decades ago, transportation engineers recognized that if you forced traffic to go "through-U-right", it would eliminate the need for left-turn arrows, resulting in far less congestion. Michigan took it to heart. It takes a lot of space for large vehicles to make a U-turn, and Michigan engineers required that hundreds of miles of roadway include extra-wide medians to accommodate these U-turns.
But what can you do if past planners and engineers didn't bless you with wide medians? A "Loon" is a great variation that simply carves out a small piece of a convenient parking lot to create enough space for trucks to make the turn. The required carve-out resembles the head of a loon, as shown here.
The "Loon" design for carving out enough space for a Thru-Turn
Bowties are another creative innovation that add wonders to the form and can also improve the function beyond that of a normal Median-U. The Bowtie uses two roundabouts or ellipses, the centers of which can be used for stately shade trees, pedestrian refuge, or even transit stations. Where Loons and Median U's require a signal to stop oncoming traffic while vehicles U-turn, roundabouts and ellipses do not necessarily need to stop oncoming traffic with a mid-block signal.
The mayor loves the monument he can fit into the Bowtie's ellipses, but just can't get support from the DOT nor residents for "Through-U-Rights." No problem! Just build the ellipses or roundabouts anyway, but still allow people to turn left at the main intersection as usual. The ellipses serve a useful traffic calming function. They define an "entry zone" into a more pedestrian-oriented place, and you have also gained a "Get out of jail free" card where if traffic ever gets bad enough, just put up signs showing the new way to go left, and voila!
Utah's First Thru-Turn, Summer of 2011
Median U-Turns are commond in Michigan, and exist in several other states. Bowties and Loons operate similarly but are rare by comparison. Utah will soon build its first ThrU-Turn at 123rd South and Minuteman Drive near I-15 in Draper. The images below are from a nice video presentation at: http://www.udot.utah.gov/thruturn/
In Septemeber 2009, Metro Analytics developed the initial concept for this ThrU-Turn intersection in Draper, Utah. The City was excited about the posibilities, and together with UDOT sponsored a study to evaluate several options and ultimately determined to build a variation of that initial concept. For that study, Avenue Consultants conducted traffic engineering and simulation, and Metro Analytics provided future volume estimates. The project is scheduled to begin construction in 2011.
Sign showing drivers the new way to make a left.
Vehicles on the "Thru-U-Right" path, instead of using a left-turn arrow, which causes significant delay. See this video and learn more at: http://www.udot.utah.gov/thruturn/
The 123rd South ThrU-Turn in Draper, Utah, is expected to reduce average 2030 delay from 2-minutes to just 26-seconds!
ThrU-Turns are "Alternative Intersections"
The Thru-Turn family, along with several cousins such as Town Center Intersections, Continuous Flow Intersections, Quadrant Intersections, and others are among a series of concepts collectively known as Alternative Intersections, Alternative Intersections, or even Unconventional Intersections. The key trait that links all Alternative Intersections is that they successfully eliminate the "left-turn arrow" phase, which otherwise reduces intersection efficiency considerably.
Where can I learn more?
Thru-Turns and other Alternative Intersections can all be found at www.alternativeintersections.org, where you can search for every Alternative Intersection that exists or has been planned anywhere in the world. If you see we are missing some, register and add them!
FHWA's latest findings on ThrU-Turns and other concepts, can be found at: | <urn:uuid:7f7d0f93-6cf8-47ee-a1ba-b66f5d727503> | CC-MAIN-2014-23 | http://thruturnintersections.org/ | s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510273350.41/warc/CC-MAIN-20140728011753-00250-ip-10-146-231-18.ec2.internal.warc.gz | en | 0.93549 | 1,494 | 2.515625 | 3 |
Did you participate in a Christmas Bird Count this year? If so, you should be proud to be a part of the longest running citizen science project in the world. Data from this project has been used in over 200 scientific studies and featured prominently in 57 studies on biodiversity according to Elinore Theobald of the University of Washington and lead author of the study “Global Change and local solutions: Tapping the unrealized potential of citizen science for biodiversity research.” One of the more recent studies examined the distribution of the exotic Monk Parakeet in the United States. The Monk Parakeet is a native of South America and popular pet species that has established feral populations in many areas of the U.S. and Europe. In this study, Amélie Davis and co-authors use data from the Christmas Bird Count as well as Project Feeder Watch, the Great Backyard Bird Count, and the eBird Program to demonstrate that natural factors such as climate and forest cover determine Monk Parakeet distribution in the South, but in the northern U.S., parakeet distribution was correlated with human factors such as housing density and distance to nearest large city. You can read the full text of the open access article here: Substitutable habitats? The biophysical and anthropogenic drivers of an exotic bird’s distribution.
The world is changing rapidly, in large part due to the growth of the human population. Other species are going extinct or invading new continents and habitats at such speed that it is difficult for scientists to keep pace. But could that same human population be harnessed to better understand these changes? In a new, open source paper in the journal Biological Conservation, E.J. Theobald and colleagues ask whether citizen science projects addressing biodiversity provide data that is currently used, or has potential for use, in mainstream biodiversity research.
The paper, “Global change and local solutions: Tapping the unrealized potential of citizen science for biodiversity research,” (which you can read by clicking on the link) addresses three main questions:
- What kind of biodiversity data does citizen science currently provide and what is it worth?
- How much of this data gets published in peer-reviewed journals and what makes it more likely to get published?
- What is the potential for citizen science to contribute to biodiversity research?
By scouring the Internet and interviewing citizen science project managers and biodiversity scientists, the authors compiled a database of citizen science projects in the field of biodiversity. They found that these projects run the taxonomic gamut from birds to bacteria and range in scale from continent-wide to just 10 km. When they tallied up the volunteer-hours of the 1.36-2.28 million people that participate annually in citizen science, they found that work is worth $667 million – $2.5 billion annually. That is equivalent to roughly 11-42% of the U.S. National Science Foundation budget!
That’s great, but is the data that all these hard working volunteers collect being used effectively? Does it make it into the peer-reviewed literature and why should that matter? Citizen science fulfills many other purposes besides collecting data- education, experiential learning and basic monitoring. But biodiversity research is data thirsty and requires data on the time, and geographic, scales for which citizen science is uniquely suited. When this data is analyzed for research published in the peer-reviewed scientific literature, the ideas and insights it generates are vetted by other experts and become available to the scientific community. The authors found that data from only 12% of the citizen science projects in their database were used in peer-reviewed studies. Data from projects that covered large spatial scales or long time frames (i.e. decades) were more likely to be used in peer-reviewed research, as was data collected by citizen scientists trained in species identification. Projects that made their data easily available to scientists, for instance on their website, were also more likely to be included in published research. The authors acknowledge that 12% is a conservative estimate of the data that gets used. Some data from citizen science projects may be used in peer-reviewed studies without being explicitly acknowledged (shame on the scientists.) Other data may be used in non-peer reviewed reports, which are useful, but not held to the same standard as peer-reviewed studies, and are not as accessible to scientists.
There is increasing interest and participation in citizen science and much of this interest aligns with the types of research biodiversity scientists want to do. Though citizen scientists typically collect data locally, there are lots of them and they may be spread out over large scales. This allows citizen scientists to collect data at spatial and time scales it may not be feasible for professional scientists to accomplish alone. But there is a disconnect between the citizen science and the mainstream science worlds. Biodiversity scientists need to become more aware of the data resources citizen science provides. Citizen science projects should to be designed with the needs of biodiversity research in mind, i.e. large spatial scales and long time frames are advantageous. Organizations such as the Citizen Science Association (http://citizenscienceassociation.org/) may be critical to integrating these two worlds. If you are interested in citizen science, go join this organization now- inaugural membership is free!
This study’s assessment of citizen science in biodiversity research makes it clear that the value and potential of citizen science is huge but underappreciated- primarily due to a lack of communication. We need to change that. Get outside, get data and get communicating! | <urn:uuid:2e92aa5d-7439-493d-b53d-e098acc917a8> | CC-MAIN-2018-47 | https://picahudsonia.com/2014/12/ | s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039743046.43/warc/CC-MAIN-20181116132301-20181116154301-00402.warc.gz | en | 0.948944 | 1,127 | 3.21875 | 3 |
- 1.2 MB
A zoning by-law is a regulatory Council approved document which establishes the permitted uses and development standards for specific areas or 'zones.'
A zoning by-law is a precise and inflexible document, which must be in conformity with the Township's Official Plan. The Official Plan provides general land use policy document of the municipality.
Zoning By-laws provide municipalities with a way to among other things:
A zone usually contains a list of uses permitted in that zone and associated standards and provisions related to:
The zoning by-law also contains general provisions that relate to all areas. General provisions contain standards for common land use matters such as legal non-conforming uses, accessory buildings, and setbacks from natural or man-made hazards for such matters as parking, landscape buffers, fencing and swimming pools. As a regulatory document, the zoning by-law also contains definitions for many of the terms used in the document.
If you want to use or develop your property in a way that is not allowed by the zoning by-law, you may have to apply for a zoning change - also known as a zoning by-law amendment. Council can consider a change only if the new use is allowed by the Official Plan. Contact Tracey Atkinson. | <urn:uuid:b2d7790a-92ff-49c6-a33e-335159746ee8> | CC-MAIN-2021-10 | https://mulmur.ca/build/zoning-by-law | s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178361776.13/warc/CC-MAIN-20210228205741-20210228235741-00534.warc.gz | en | 0.933234 | 259 | 2.6875 | 3 |
HALIFAX – Have you ever wondered just how much Halifax and its surrounding areas have changed in the last three decades? Thanks to Google’s massive archive of Earth imagery, you can take a virtual trip back in time to watch the landscape transform.
The time-lapse feature of the Google Earth Engine lets users view Landsat satellite images of locales all across the planet — one for each year between 1984 and 2012.
Users clicking through the images can see the Halifax Regional Municipality expanding outward as its suburbs grow substantially through the 1990s and early 2000s.
Here are a few of the interesting changes that can be seen taking place on the map:
- Business parks in Bayers Lake and Dartmouth Crossing appearing suddenly on previously forested terrain
- More trees across the city give way to new neighbourhoods and roads in Clayton Park-West, Rockingham and along the Bedford Highway
- The damage caused by Hurricane Juan in 2003, when many trees in areas such as Point Pleasant Park and McNabs Island were destroyed
- Damage from a forest fire in the Spryfield area between 2009 and 2010
Google also recently rolled out a feature that allowed users of its Maps product to view street-level imagery from the past, although its catalog of those images only dates back less than a decade.
View the annual time-lapse satellite images below:
© Shaw Media, 2014 | <urn:uuid:69fdd812-70bd-484d-a3c5-25ca6e58699b> | CC-MAIN-2014-41 | http://globalnews.ca/news/1517931/watch-halifax-transform-with-time-lapse-satellite-images/ | s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1410657128304.55/warc/CC-MAIN-20140914011208-00333-ip-10-196-40-205.us-west-1.compute.internal.warc.gz | en | 0.946363 | 282 | 2.625 | 3 |
Nicknames for U.S. Soldiers
- “Jonny Rebel”
A Confederate soldier during the Civil War.
- “Billy Yank”
A Union soldier during the Civil War.
- “Doughboy” A World War I Soldier.
- “Dogface” A World War II and Korean War Soldier.
A Vietnam War soldier.
- “Leatherneck, Jarhead”
A US Marine.
A commissioned officer who has advanced from an enlisted rank.
A soldier that can not keep up with his/her unit.
A group of airborne soldiers who deploy from the same aircraft.
Fact Monster/Information Please® Database, © 2007 Pearson Education, Inc. All rights reserved. | <urn:uuid:74443b34-5eb6-40ee-b3f2-4c11a73141b5> | CC-MAIN-2015-14 | http://www.factmonster.com/ipka/A0769995.html | s3://commoncrawl/crawl-data/CC-MAIN-2015-14/segments/1427131296951.54/warc/CC-MAIN-20150323172136-00191-ip-10-168-14-71.ec2.internal.warc.gz | en | 0.838429 | 158 | 2.625 | 3 |
3.7 STL decomposition
STL is a versatile and robust method for decomposing time series. STL is an acronym for “Seasonal and Trend decomposition using Loess,” while Loess is a method for estimating nonlinear relationships. The STL method was developed by R. B. Cleveland et al. (1990).
STL has several advantages over the classical, SEATS and X11 decomposition methods:
- Unlike SEATS and X11, STL will handle any type of seasonality, not only monthly and quarterly data.
- The seasonal component is allowed to change over time, and the rate of change can be controlled by the user.
- The smoothness of the trend-cycle can also be controlled by the user.
- It can be robust to outliers (i.e., the user can specify a robust decomposition), so that occasional unusual observations will not affect the estimates of the trend-cycle and seasonal components. They will, however, affect the remainder component.
On the other hand, STL has some disadvantages. In particular, it does not handle trading day or calendar variation automatically, and it only provides facilities for additive decompositions.
It is possible to obtain a multiplicative decomposition by first taking logs of the data, then back-transforming the components. Decompositions between additive and multiplicative can be obtained using a Box-Cox transformation of the data with \(0<\lambda<1\). A value of \(\lambda=0\) corresponds to the multiplicative decomposition while \(\lambda=1\) is equivalent to an additive decomposition.
The best way to begin learning how to use STL is to see some examples and experiment with the settings. Figure 3.7 showed an example of an STL decomposition applied to the total US retail employment series. Figure 3.18 shows an alternative STL decomposition where the trend-cycle is more flexible, the seasonal pattern is fixed, and the robust option has been used.
%>% us_retail_employment model( STL(Employed ~ trend(window = 7) + season(window = "periodic"), robust = TRUE)) %>% components() %>% autoplot()
The two main parameters to be chosen when using STL are the trend-cycle window
trend(window = ?) and the seasonal window
season(window = ?). These control how rapidly the trend-cycle and seasonal components can change. Smaller values allow for more rapid changes. Both trend and seasonal windows should be odd numbers; trend window is the number of consecutive observations to be used when estimating the trend-cycle; season window is the number of consecutive years to be used in estimating each value in the seasonal component. Setting the seasonal window to be infinite is equivalent to forcing the seasonal component to be periodic
season(window='periodic')(i.e., identical across years). This was the case in Figure 3.18.
By default, the
STL() function provides a convenient automated STL decomposition using a seasonal window of
season(window=13), and the trend window chosen automatically from the seasonal period. The default setting for monthly data is
trend(window=21). This usually gives a good balance between overfitting the seasonality and allowing it to slowly change over time. But, as with any automated procedure, the default settings will need adjusting for some time series. In this case the default trend window setting produces a trend-cycle component that is too rigid. As a result, signal from the 2008 global financial crisis has leaked into the remainder component, as can be seen in the bottom panel of Figure 3.7. Selecting a shorter trend window as in Figure 3.18 improves this. | <urn:uuid:5dbc7a52-33dc-49c3-8461-8cc0ecd5a52a> | CC-MAIN-2021-10 | https://otexts.com/fpp3/stl.html | s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178378872.82/warc/CC-MAIN-20210307200746-20210307230746-00059.warc.gz | en | 0.889176 | 766 | 2.875 | 3 |
Use of Blockchain in Space Applications
In 2017, Bitcoin's value rose roughly 1,500%, drawing a flood of investment from across the world economy. Independent of traditional investment channels, blockchain technology has enabled secure exchanges of cryptocurrency through a distributed ledger and smart contracts.
The success of Bitcoin has attracted the attention of the aerospace industry. More specifically, the blockchain platform that enables cryptocurrency transactions has appealed to space agencies, such as the National Aeronautics and Space Administration (NASA) and the European Space Agency (ESA), as well as to the private sector.
The aerospace industry requires enormous amounts of capital and research, which restricts the entry of small companies: traditional venture capital channels are reluctant to invest in an industry that is heavily government-regulated, -financed, and -managed. In the U.S., only a select few companies, such as SpaceX and Blue Origin, have successfully entered this closed niche industry without government assistance.
It is with this backdrop that NASA and ESA are attempting to leverage blockchain's decentralized principles, such as the distributed ledger and smart contracts, to advance multi-sensor satellites and the aerospace industry's supply chain ecosystem.
In April 2018, NASA granted a proposal to combine blockchain and Artificial Intelligence (A.I.) technology to address technology gaps in the operation of constellations and swarms of multi-sensor satellite flight architectures. The combination of these two technologies will enable autonomous operation of multiple satellites to execute various mission objectives, both for earth-based observations and deep space missions. Traditionally, operators conduct a mission through the carefully timed and coordinated management of satellites and ground base stations to complete a task. A space-based A.I. blockchain network would instead enable operators to send a single command to the network, and a constellation of satellites coupled with ground base stations would autonomously and efficiently execute the mission.
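One way to picture this operational concept is a shared task ledger: the operator posts a single mission, and satellite agents claim subtasks from the ledger rather than waiting for individually timed commands. The sketch below is purely illustrative — the mission names, subtask decomposition, and claiming scheme are invented for this example and are not NASA's actual architecture.

```python
def post_mission(ledger, mission, subtasks):
    # The operator issues one command; in practice the decomposition into
    # subtasks would be performed by onboard A.I. planners.
    for task in subtasks:
        ledger.append({"mission": mission, "task": task, "claimed_by": None})

def claim_next(ledger, satellite_id):
    # Each satellite claims the first unclaimed subtask on the shared ledger.
    for entry in ledger:
        if entry["claimed_by"] is None:
            entry["claimed_by"] = satellite_id
            return entry
    return None  # nothing left to do

ledger = []
post_mission(ledger, "image-hurricane",
             ["pass-1-visible", "pass-2-infrared", "downlink"])
claim_next(ledger, "sat-A")
claim_next(ledger, "sat-B")
claim_next(ledger, "sat-A")
assert all(entry["claimed_by"] for entry in ledger)
```

The point of the sketch is the coordination pattern: no single ground controller assigns each subtask, yet every subtask is eventually claimed by exactly one satellite.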
ESA, on the other hand, has a different approach, in which blockchain technology will transform the aerospace supply chain industry. ESA's goal is for blockchain's secure, decentralized distributed ledger and smart contracts to provide a simpler and more efficient medium through which existing and prospective aerospace companies can offer their innovative ideas and products. ESA envisions that this use of secured smart contracts will create a robust supply chain ecosystem in which suppliers and manufacturers interact to achieve their respective purposes. ESA is seeking to leverage blockchain in a way that increases visibility, clarifies liability, reduces inconsistency, improves payment-processing accuracy, and eliminates compliance problems among current suppliers and manufacturers.
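A supplier–manufacturer smart contract of the kind described here can be pictured as conditional logic executed against a shared ledger: escrowed payment is released only once agreed delivery conditions are recorded. The following is a minimal, hypothetical sketch — the party names, conditions, and settlement rule are invented for illustration, not drawn from any ESA system.

```python
from dataclasses import dataclass, field

@dataclass
class PurchaseContract:
    """Toy smart contract: the buyer's escrowed payment is released to the
    supplier only after every agreed condition has a recorded event."""
    supplier: str
    buyer: str
    price: float
    conditions: set = field(default_factory=lambda: {"delivered", "inspected"})
    events: set = field(default_factory=set)
    settled: bool = False

    def record_event(self, event):
        self.events.add(event)
        return self._try_settle()

    def _try_settle(self):
        # Settlement fires automatically once every condition is satisfied;
        # no third party decides when payment is due.
        if not self.settled and self.conditions <= self.events:
            self.settled = True
        return self.settled

contract = PurchaseContract(supplier="star-trackers-inc",
                            buyer="sat-builder-llc", price=250_000.0)
contract.record_event("delivered")   # not yet settled: inspection pending
contract.record_event("inspected")   # all conditions met -> payment releases
assert contract.settled
```

The design choice worth noticing is that the settlement rule is part of the contract itself, which is what lets transacting parties rely on the ledger rather than on manual invoicing and compliance checks.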
With NASA and ESA's transformative visions of blockchain technology in both space and the aerospace supply chain, the private sector is well positioned to adopt and further improve on those ideas. For example, Blockstream and Nexus Earth are two companies planning to launch space-based blockchain networks to deliver the world's first "open source satellite network" for cryptocurrency, data collection, computing, storage, and other functions. With the adoption of blockchain technology in the space arena, new companies will be better positioned to introduce ideas that they would not otherwise have been able to pursue through traditional channels.
While Blockchain technology introduces new innovations for the aerospace industry, the legal system will need to respond properly to those changes. For example, NASA has numerous missions that are joint ventures with other countries; thus, an autonomous constellation comprising both U.S. and foreign satellites will pose some interesting challenges with respect to data ownership. The decentralized nature of blockchain will raise questions about who really owns the rights to the data on its ledgers. The legal system and government agencies will need to react quickly to establish the rules of the game, both within individual countries and across global systems.
With globalization and the interaction of foreign governments and private companies, the application of Blockchain technology in the aerospace supply chain could also raise complex legal issues around the regulatory regimes and agreements that must be carefully considered in ledger transactions. In a decentralized environment spanning multiple jurisdictions, it may be difficult to identify the appropriate governing law to apply.
For private companies, the use of smart contracts is established outside traditional legal institutions. The legal system and government agencies will likely have to adapt or enforce regulations so that blockchain smart contracts reflect the current legal definition of a contract. Additionally, data privacy will be another issue that providers will need to grapple with outside the Bitcoin environment. Once data is stored on a blockchain platform, it cannot be altered. This will be particularly challenging for data containing personal information. Providers and customers will need to design protection measures for privacy needs, or risk facing a surge of litigation in the absence of such designs.
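The immutability point above can be illustrated with a toy hash-chained ledger. This is a minimal sketch, not the data structure of any production blockchain: each block commits to the hash of its predecessor, so a retroactive edit breaks every link that follows and is immediately detectable.

```python
import hashlib
import json

def block_hash(index, data, prev_hash):
    """Deterministic hash over a block's contents and its predecessor's hash."""
    payload = json.dumps({"index": index, "data": data, "prev": prev_hash},
                         sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def build_chain(records):
    """Build a toy chain in which each block commits to the one before it."""
    chain, prev = [], "0" * 64
    for i, data in enumerate(records):
        h = block_hash(i, data, prev)
        chain.append({"index": i, "data": data, "prev": prev, "hash": h})
        prev = h
    return chain

def verify(chain):
    """Recompute every hash; any edited block invalidates the chain."""
    prev = "0" * 64
    for blk in chain:
        if blk["prev"] != prev or block_hash(blk["index"], blk["data"], prev) != blk["hash"]:
            return False
        prev = blk["hash"]
    return True

chain = build_chain(["telemetry A", "telemetry B", "telemetry C"])
assert verify(chain)           # untampered chain validates
chain[1]["data"] = "edited"    # a retroactive edit...
assert not verify(chain)       # ...is immediately detectable
```

Real networks add consensus, digital signatures, and Merkle trees on top of this linkage, but the tamper-evidence property that makes stored data effectively unalterable is the same.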
New uses of Blockchain technology could further propel both earth- and space-based networks. As humans find better ways to utilize blockchain technology, however, those advances do not come without technological and legal challenges. The legal system will need to adapt quickly to the ever-changing technological landscape and how it affects human interactions.
GLTR Staff Member, Georgetown Law, J.D. expected 2021; Georgia Institute of Technology, M.S. 2006; Georgia Institute of Technology, B.S. 2002; Lincoln University, B.S., 2002. © Séké Godo, 2018. | <urn:uuid:6cb226ea-89a2-408b-bd3f-5eb611fe7227> | CC-MAIN-2023-50 | https://georgetownlawtechreview.org/use-of-blockchain-in-space-applications/GLTR-11-2018/ | s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100184.3/warc/CC-MAIN-20231130094531-20231130124531-00519.warc.gz | en | 0.917248 | 1,091 | 3.125 | 3 |
The Video Game Explosion: A History from PONG to PlayStation and Beyond
The Video Game Explosion: A History from PONG to PlayStation and Beyond traces the growth of a global phenomenon that has become an integral part of popular culture today. All aspects of video games and gaming culture are covered inside this engaging reference, including the leading video game innovators, the technological advances that made the games of the late 1970s and those of today possible, the corporations that won and lost billions of dollars pursuing this lucrative market, arcade culture, as well as the demise of free-standing video consoles and the rise of home-based and hand-held gaming devices.
In the United States alone, the video game industry raked in an astonishing $12.5 billion last year, and shows no signs of slowing. Once dismissed as a fleeting fad of the young and frivolous, this booming industry has not only proven its staying power, but promises to continue driving the future of new media and emerging technologies. Today video games have become a limitless and multifaceted medium through which Fortune 500 corporations and Hollywood visionaries alike are reaching broader global audiences and influencing cultural trends at a rate unmatched by any other media.
What people are saying
It is a great book. I recommend looking into it, and if you like it like I do, then I would suggest buying it.
Part II The Early Days Before 1985
Part III The Industry Rebounds 1985–1994
Part IV Advancing to the Next Level 1995–Present
Part V A Closer Look at Video Games
Glossary of Video Game Terminology | <urn:uuid:66a74df6-2aa8-4e21-9048-673edfce3987> | CC-MAIN-2016-44 | https://books.google.co.uk/books?id=XiM0ntMybNwC&hl=en | s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988721405.66/warc/CC-MAIN-20161020183841-00177-ip-10-171-6-4.ec2.internal.warc.gz | en | 0.911883 | 324 | 2.546875 | 3 |
- (chemistry) Any of a group of three isomeric aromatic hydrocarbons (dimethylbenzene), found in coal and wood tar.
Etymology: From Ancient Greek ξύλον (xylon, "wood") + -ene.
- 2006, Thomas Pynchon, Against the Day, Vintage 2007, p. 262:
- "... proceeding, desperately, from such opiated catarrh preparations as Collis Brown's Mixture on to cocainized brain tonics, cigarettes soaked in absinthe, in unventilated rooms, and so on ...."
How much money would you pay not to have to take a pill every day? Or how many weeks or months of life would you give up to avoid that pill?
Researchers at the University of California San Francisco (UCSF) and University of North Carolina recently asked 1,000 U.S. adults questions like these, in order to better understand the utility of pill-taking for cardiovascular prevention. On average, people were willing to pay $1,445 to avoid a daily pill, but 2.8% of the respondents were willing to pay $25,000.
The average response to how many weeks or months of life a person would give up was 12.3 weeks, but 8.2% of the respondents would be willing to die 2 years sooner if it meant they didn't have to take a pill, according to results published in the March Circulation: Cardiovascular Quality and Outcomes.
The results are particularly useful for cost-effectiveness research but should also be considered when physicians are prescribing and talking to patients about medication, said lead study author Robert Hutchins, MD, MPH, a resident at UCSF.
Q: What led you to research this topic?
A: I was reading a cost-effectiveness analysis, and the authors had attributed a utility of 1.0 to taking pills. I didn't know what that meant, so I looked into it and found that utility is a number that is used in these sorts of analyses to quantify the quality of life in a given health state. Utility ranges from zero to 1.0, with 1.0 essentially saying that there's no effect on quality of life.
The fact that the authors had attributed a utility value of 1.0 was strange to me. People have got to say that taking pills every day for the rest of their life is going to affect their quality of life, even if it's just a little bit. I had taken pills before for short courses and knew that it was a hassle to fill the prescription, remember to take the pills, and actually swallow the pills every day.
I did a systematic review and looked into what data were out there about the utility value of taking a pill. I found that there's not very much at all. There was 1 study from a physician at Stanford, who had done a study just at the [Veterans Administration], predominantly middle-aged and older men, and [it was] a small study. The rest of the utility values that I found for taking pills were based on expert opinion. We found there was a gap in the data.
Q: Your study found a mean utility between 0.990 and 0.994 for pill-taking. How did those results compare to your expectations?
A: What I expected was actually pretty close to what we got. It's close to the expert opinion. We assumed that there would be some effect on quality of life, but the utility value would be pretty close to 1.0. This isn't like being in the hospital, getting dialysis every day, or being paralyzed for the rest of your life, all of which we would expect to have much larger effects on quality of life.
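For readers unfamiliar with how a time-tradeoff response becomes a utility value, the arithmetic can be sketched in a few lines. The 40-year remaining-life horizon below is an illustrative assumption, not a figure reported by the authors; under it, the mean response of 12.3 weeks maps to a utility near the top of the reported 0.990–0.994 band, while the respondents willing to give up 2 years land around 0.95.

```python
def time_tradeoff_utility(weeks_given_up, remaining_life_weeks):
    """Utility of a health state from a time-tradeoff response:
    the fraction of remaining life the respondent keeps."""
    return 1.0 - weeks_given_up / remaining_life_weeks

# Assumed horizon: roughly 40 years of remaining life (an illustration,
# not a parameter from the study).
remaining = 40 * 52  # 2,080 weeks

u_mean = time_tradeoff_utility(12.3, remaining)       # mean response: 12.3 weeks
u_extreme = time_tradeoff_utility(2 * 52, remaining)  # outliers: 2 years

print(round(u_mean, 3))     # ~0.994, inside the reported 0.990-0.994 band
print(round(u_extreme, 3))  # ~0.95, a much larger quality-of-life hit
```

Cost-effectiveness models multiply utilities like these by years lived to produce quality-adjusted life years, which is why even a decrement of a few thousandths matters at the population scale.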
What we were actually surprised about was the way the respondents' answers were distributed, a kind of bimodal distribution. We found a lot of people said, “This doesn't really affect me much at all, maybe just a little bit.” Then there were also a pretty good chunk of people on the other end of the spectrum that responded in a way that suggested they hated taking pills and this was really affecting their quality of life a great deal.
Q: Did you find any demographic factors that could predict responses?
A: We looked at everything that we could think of that might interfere with our results: age, gender, race, income, level of education, numeracy, and literacy. We asked what their self-reported health was, the number of times people saw a doctor per year, their insurance status, and the number of pills they took a day.
There are some things that are statistically significantly different between the categories. Older people, for example, say that taking pills affects their quality of life a little bit more than younger people. In general, men have a little bit larger quality-of-life effect. The number of pills people take at baseline also affects their response. But because these numbers we're looking at are so small, we're talking about 0.993, 0.995, we don't really think that it's going to be all that helpful for individual physicians to use the demographics that we used in their prescribing practices.
Q: What lessons should physicians take from the study?
A: The general idea was that people don't like taking medication. It's going to be impossible to predict who doesn't like taking pills just based on the demographics, but I think understanding that there are differences among people is important. We all know that compliance isn't 100% with medication regimens for a variety of reasons.
It does reinforce the idea that physicians should be having these discussions with patients about how this affects their quality of life, especially for patients that are not as compliant. Ask them if it's something that they're willing to do before we prescribe medications.
This is a pretty significant effect on people's quality of life. It's not huge, but when we think about this over a period of years or decades, then we really have to think about what we're doing to patients [by prescribing lifelong medications], and make sure that they're on board with us and have the same ideas in terms of what we want for their medical care.
Q: What are other implications of the findings?
A: It's probably more pertinent on a population level, specifically when looking at policy decisions. Policymakers look at these cost-effectiveness analyses to decide whether or not to implement or pay for a certain intervention. We hope that our data can be used to make these cost-effectiveness analyses more accurate, so that policy decisions will be more appropriately decided.
Q: Is there any additional research you hope or plan to do in this area?
A: One thing that we wished we had done, and we probably would have had we had more resources, was actually talk to people in depth ... especially these people who were at the far end of the spectrum and said that this really affected their quality of life. We wished we had been able to interview those people and find out why exactly, for them, this was such a big deal. We've talked about repeating the study, but picking out those people who were the outliers and finding out a little bit more from them. | <urn:uuid:9822046f-7991-4e06-82b0-8193d75f3091> | CC-MAIN-2018-51 | https://www.acpinternist.org/archives/2015/05/pills.htm | s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376823382.1/warc/CC-MAIN-20181210170024-20181210191524-00538.warc.gz | en | 0.986808 | 1,362 | 2.546875 | 3 |
This book is the first to explore the dramatic amplification of global warming underway in cities and the range of actions that individuals and governments can undertake to slow the pace of warming. A core thesis of the book is that the principal strategy currently advocated to mitigate climate change--the reduction of greenhouse gases--will not prove sufficient to measurably slow the rapid pace of warming in urban environments. Brian Stone explains the science of climate change in terms accessible to the nonscientist and with compelling anecdotes drawn from history and current events. The book is an ideal introduction to climate change and cities for students, policy makers, and anyone who wishes to gain insight into an issue critical to the future of our cities and the people who live in them.
- The first book to explore the dramatic amplification of global warming underway in cities
- Outlines the range of actions that can be taken to slow the pace of warming
Reviews & endorsements
‘Cities have begun to feel the sting of a changing climate already. This powerful volume reminds us what we can still do – globally and locally – by adapting to that which we can't prevent, and even more crucially, preventing that to which we can't adapt.' - Bill McKibben, Schumann Distinguished Scholar at Middlebury College and author of ‘The End of Nature'
‘In this groundbreaking study Stone provides the first systematic analysis of what a changing climate will mean for cities. Stone argues convincingly that we must be as concerned about urban warming as global warming. … a clarion call for cities to begin to shape their climate destinies.' - Timothy Beatley, Teresa Heinz Professor of Sustainable Communities, University of Virginia
‘ … highly significant and unique because it fully bridges the study of cities, climate, and urban heat.’ - William D. Solecki, Professor, Department of Geography, City University of New York (CUNY)
'A great introduction to how climate change will hit cities and what can be done about it . . . essential reading for urban planners, city officials, and the general public.' - David W. Orr, Oberlin College, author of Down to the Wire: Confronting Climate Collapse
"highly readable, informative book … excellent volume does not simply reiterate what has been said many times before. Stone's clear writing enables anyone to understand and appreciate the importance of these local regional climate factors. Highly Recommended." - B Ransom, CHOICE, November 2012
"...begins with one of the most persuasive and surprising chapters that I have read...Overall, Stone's excellent book provides an important service in bringing urban heat island forward as a core and resolvable urban challenge...this is not just a book for climate enthusiasts. Rather, it will be a helpful book for anyone interested in improving human health and safety through better urban form" - Elisabeth Harmin, Journal of the American Planning Association, January 2013
- Date Published: April 2012
- format: Paperback
- isbn: 9781107602588
- length: 206 pages
- dimensions: 228 x 153 x 15 mm
- weight: 0.38kg
- contains: 30 b/w illus. 5 colour illus.
- availability: In stock
Table of Contents
Prologue: la canicule
1. Keeling's curve
2. The climate barrier
3. Islands of heat
4. The green factor
5. Leveraging canopy for carbon.
One question throughout the ages has been regarding the atonement of Jesus Christ. Did Christ’s death on the cross simply make a way for mankind to merit the graces of God ourselves? Was Christ’s death on the cross solely a demonstration of His love, and therefore setting an example for us to follow? Or was Christ’s death on the cross the full payment for our sin? Before we can answer this question, let’s first discuss the definition of “substitutionary atonement.”
Substitutionary atonement simply means that someone or something takes the punishment for someone else. In other words, a person or an animal takes the punishment in the place of the one deserving the punishment. Is there any biblical basis for this idea? Consider this:
“And Abraham lifted up his eyes, and looked, and behold behind him a ram caught in a thicket by his horns: and Abraham went and took the ram, and offered him up for a burnt offering in the stead of his son.” – Genesis 22:13
Abraham was about to sacrifice his son Isaac on the altar, but a ram was substituted in his place. This is one of many examples of how the Old Testament foreshadowed events to come (e.g. the sacrifice of Jesus Christ).
What can we find in the New Testament to suggest that Jesus Christ took our place when He suffered and died on the cross?
“Surely he hath borne our griefs, and carried our sorrows: yet we did esteem him stricken, smitten of God, and afflicted. But he was wounded for our transgressions, he was bruised for our iniquities: the chastisement of our peace was upon him; and with his stripes we are healed. All we like sheep have gone astray; we have turned every one to his own way; and the LORD hath laid on him the iniquity of us all.” – Isaiah 53:4–6
“Who was delivered for our offences, and was raised again for our justification.” – Romans 4:25
“For he hath made him to be sin for us, who knew no sin; that we might be made the righteousness of God in him.” – 2 Corinthians 5:21
“Who his own self bare our sins in his own body on the tree, that we, being dead to sins, should live unto righteousness: by whose stripes ye were healed.” – 1 Peter 2:24
Jesus Christ not only bore our sins on the cross, but His work on the cross was propitiatory. In other words, His work on the cross appeased the wrath of God toward us.
“Yet it pleased the LORD to bruise him; he hath put him to grief: when thou shalt make his soul an offering for sin, he shall see his seed, he shall prolong his days, and the pleasure of the LORD shall prosper in his hand. He shall see of the travail of his soul, and shall be satisfied: by his knowledge shall my righteous servant justify many; for he shall bear their iniquities.” – Isaiah 53:10–11
“Whom God hath set forth to be a propitiation through faith in his blood, to declare his righteousness for the remission of sins that are past, through the forbearance of God” – Romans 3:25
“And he is the propitiation for our sins: and not for ours only, but also for the sins of the whole world.” – 1 John 2:2
“Herein is love, not that we loved God, but that he loved us, and sent his Son to be the propitiation for our sins.” – 1 John 4:10
The Scripture clearly teaches that Jesus Christ’s sacrifice on the cross was two-fold: He took our punishment in our place; and by doing so, He appeased God’s wrath. The Scripture does not support any other views of the atonement, such as the idea that Jesus’ death was solely a demonstration of love or that He was simply making a way for us to merit our own salvation. Are we able to be saved by our own merits or by the keeping of God’s law? No, we cannot. What did the Apostle Paul say?
“I do not frustrate the grace of God: for if righteousness come by the law, then Christ is dead in vain.” – Galatians 2:21 | <urn:uuid:60ab8bc8-57ab-4f73-bab1-aa7fba06b657> | CC-MAIN-2019-18 | https://christistherock.com/2017/05/31/the-substitutionary-atonement/ | s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578582736.31/warc/CC-MAIN-20190422215211-20190423001211-00449.warc.gz | en | 0.977014 | 943 | 2.5625 | 3 |
In New Zealand Company settlements land was initially sold as a package: a small block of urban land and larger block of suburban or rural land. The idea was that landowners would farm their country land, but also have residences and business interests in town. This 1842 map shows the distribution of country sections in the Nelson district. The colour coding of the blocks highlights the speculative nature of the enterprise. Red was land sold in England – often to absentee owners as an investment; purple was land sold in the colony; and blue was land set aside as company reserves – some for future sale. A tenth of the rural land was meant to be set aside as native reserves (the Native Tenths Reserves) for the betterment of Māori, but no rural Tenths were ever allocated.
Using this item
Alexander Turnbull Library
Reference: MapColl 834.1gbbd/ Acc.3044
Permission of the Alexander Turnbull Library, National Library of New Zealand, Te Puna Mātauranga o Aotearoa, must be obtained before any re-use of this image. | <urn:uuid:8b5cb61e-4a01-41b5-81f2-578910f2999f> | CC-MAIN-2022-33 | https://teara.govt.nz/mi/zoomify/25718/nelson-country-blocks | s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571597.73/warc/CC-MAIN-20220812075544-20220812105544-00058.warc.gz | en | 0.967034 | 237 | 3.546875 | 4 |
On The Ottoman History Podcast, listen to Alexander Bevilacqua discuss when and how European scholars first began to seriously study Islam and the Arabic language:
In the seventeenth and eighteenth centuries, a pioneering community of Christian scholars laid the groundwork for the modern Western understanding of Islamic civilization. These men produced the first accurate translation of the Qur’an into a European language, mapped the branches of the Islamic arts and sciences, and wrote Muslim history using Arabic sources. The Republic of Arabic Letters reconstructs this process, revealing the influence of Catholic and Protestant intellectuals on the secular Enlightenment understanding of Islam and its written traditions.
Drawing on Arabic, English, French, German, Italian, and Latin sources, Alexander Bevilacqua’s rich intellectual history retraces the routes—both mental and physical—that Christian scholars traveled to acquire, study, and comprehend Arabic manuscripts. The knowledge they generated was deeply indebted to native Muslim traditions, especially Ottoman ones. Eventually the translations, compilations, and histories they produced reached such luminaries as Voltaire and Edward Gibbon, who not only assimilated the factual content of these works but wove their interpretations into the fabric of Enlightenment thought.
The Republic of Arabic Letters shows that the Western effort to learn about Islam and its religious and intellectual traditions issued not from a secular agenda but from the scholarly commitments of a select group of Christians. These authors cast aside inherited views and bequeathed a new understanding of Islam to the modern West. | <urn:uuid:9747c2b0-1996-4d7e-b19c-4192721ed27a> | CC-MAIN-2020-10 | https://www.hup.harvard.edu/catalog.php?isbn=9780674975927 | s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875147116.85/warc/CC-MAIN-20200228073640-20200228103640-00428.warc.gz | en | 0.911196 | 300 | 3.203125 | 3 |
It is helpful to know the definition of each option prior to making a selection.
A.) Transpiration is the evaporation of water through the leaves of plants.
B.) Groundwater is exactly that- water that flows underground! It is housed in soil pore spaces or fractures between rocks.
C.) Runoff (also known as overland flow) is the flow of water that occurs when excess stormwater, meltwater from mountaintops, or water from other sources flows over the surface of the earth.
D.) Evaporation occurs as water gains enough kinetic energy upon being heated and changes from a liquid to a vapor. Evaporation eventually condenses onto dust particles in the atmosphere to form clouds. When the condensed water droplets become heavy enough and fall to the Earth's surface, they are called precipitation.
Therefore, (A) and (D) are not likely answers since these processes are removing water from the surface of Earth. If it is not raining, runoff (C) is most likely not significant. Thus, the best answer would be (B), groundwater.
As part of our monitoring of the surrounding environment, we conducted an analysis of plutonium contained in soil collected on March 21st and 22nd at 5 spots at Fukushima Daiichi Nuclear Power Station. As a result, plutonium 238, 239 and 240 were detected.
As a result of the plutonium analysis of the soil sampled at the 3 periodic sampling spots on April 14th, plutonium 238, 239 and 240 were detected, as shown in Attachment 1. In addition, as a result of gamma ray nuclide analysis of the same samples, radioactive materials were detected, as shown in Attachment 2.
Furthermore, as a result of the americium and curium analysis of the soil from 2 samples among the 3 periodic sampling spots in which plutonium was detected on March 28th, americium 241 and curium 242, 243, and 244 were detected.
We have reported the results of these analyses to the Nuclear and Industrial Safety Agency and the government of Fukushima Prefecture.
We will continue to conduct similar analyses.
Attachment 1: Fukushima Daiichi Nuclear Power Station: Plutonium analysis result in the soil (PDF 9.3KB)
Attachment 2: Result of gamma ray nuclide analysis of soil (PDF 33.7KB)
Attachment 3: Fukushima Daiichi Nuclear Power Station: Am and Cm analysis result in the soil (PDF 54.7KB)
The Elgin and Belvidere Electric Company (operational from 1907-1930) was a 36-mile interurban line that connected Belvidere, Illinois and Elgin, Illinois. It was the central link in the interurban network connecting Freeport, Rockford, Elgin and Chicago which included the Rockford and Interurban Railway to the west and the Chicago, Aurora and Elgin Railway to the east. In 1927, the line was extended to Rockford over a line of the Rockford and Interurban.
The Elgin and Belvidere Electric Company was incorporated March 11, 1905. Bion J. Arnold acquired the railroad after it went into financial difficulties during construction in 1906. His company, The Arnold Company, designed and built the power generating stations and the overhead structure for the railway, and had largely been paid in railway securities. Construction of the line was completed in 1906, however it did not enter service until February 2, 1907.
Arnold used the railroad as a proving ground for pioneering designs; the first automatic substation was on the line at Union and the railroad was one of a handful to use gasoline generators to generate electric power. Its rolling stock consisted of standard wooden interurban cars which typically ran in short one- to three-car trains on hourly intervals. Arnold himself was heavily involved in the line's construction and management, and at one point operated the cars himself during a strike.
On May 1, 1927, the Elgin and Belvidere Electric was sold to Milton Ellis and his associates, owners of the Rockford and Interurban and the local Rockford trolley lines. A new company, the Elgin, Belvidere and Rockford Railway, was formed and the Rockford to Belvidere line of the Rockford and Interurban Railway was transferred to it. Bion Arnold remained as manager and president of the new company.
ELGIN AND BELVIDERE INTERURBAN CLOSING
The railroad was never particularly profitable, with a rate of return of about 2% in its best years. On March 10, 1930, the railroad ceased operations due to competition from the parallel Chicago and North Western Railway and from the automobile after the paving of nearby US 20. The Depression was also a major factor that drove the E&B (and many other interurban routes) out of business.
For a time the railroad sat moribund, with the cars stored at the shops in Marengo, until Arnold scrapped the line himself in the mid-to-late 1930s.
After decades on the market, and decades of being mainstream medicine’s “go-to,” we know exactly how NSAIDs work.
Ibuprofen, a well-known NSAID, is STILL causing the same old laundry list of common side effects:
- Stomach ulcers
- Liver problems
- Leg swelling
And worst of all?
A new report has just revealed taking these common anti-inflammatory drugs could WORSEN the symptoms of the deadly coronavirus, aka COVID-19.
Ibuprofen isn’t worth the risk
Go to the store right now… just about any store… and you’ll see specific items are flying off the shelves.
Among the most popular items sought out by desperate shoppers are:
- Hand sanitizer;
- Toilet paper;
- Clorox wipes;
- Disinfectant spray;
- Cough syrup;
- And ibuprofen!
But according to the latest report… you shouldn’t be so quick to pick up that last bottle...
See, it’s been long known that anti-inflammatories can have a depressive effect on our immune systems.
And when we take ibuprofen during a cold, we don’t really have to worry about the slight – but important – reduction in our immune response because it’s very unlikely that you could die from a cold.
But the exact OPPOSITE could go for the coronavirus…
We need our immune system in top working order to fight and WIN the battle against this deadly virus.
Anti-inflammatories block our immune system’s ability to release cells called mast cells, which are the body’s first line of defense against the virus!
Normally, when the mast cells come into contact with the virus, they trigger an immune response – leading to inflammatory chemicals being released and tackling the foreign invader.
But if you take an NSAID like ibuprofen, your body is unable to release these inflammatory chemicals… which, as you probably can guess, leads to a higher risk of complications.
You don’t even need NSAIDs
The dark truth is that you might not even need NSAIDs.
For instance, if you are taking ibuprofen to overcome a fever caused by the common cold, in most cases it’s more effective to let the fever run its course.
In fact, previous studies have found that suppressing a fever can actually INCREASE the severity of the common cold or pneumonia.
A fever is your body turning up the heat to incinerate bacteria and viruses. It’s an entirely NORMAL immune response.
Folks, there are better and SAFER alternatives out there that WON’T put you at risk…
The government of Shanghai, China, is reportedly recommending vitamin C as an all-natural alternative for reducing the symptoms of coronavirus.
Remember, vitamin C is essential for a strong immune defense.
It’s easy enough to bump up the amount you’re getting by paying attention to your diet. Eating a wide variety of fruits and veggies like bell peppers, dark and leafy greens, kiwi, broccoli, strawberries, tomatoes, peas, papayas, citrus fruits, and apples will give you a boost.
And if you want to ensure you’re getting enough, purchase a high-quality supplement online or at your local pharmacy. I recommend taking 500mg twice a day. | <urn:uuid:046b5799-1ecb-4a33-ac27-9a209a08bcd7> | CC-MAIN-2022-49 | https://www.realadvantagenutrients.com/health-blog/2020/03/24/coronavirus/ | s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710978.15/warc/CC-MAIN-20221204172438-20221204202438-00448.warc.gz | en | 0.913915 | 743 | 2.59375 | 3 |
THE DRAMATIC occupation by Greenpeace campaigners of an oil rig in the freezing seas off the coast of Greenland this week marks the first skirmish in what may prove to be the defining environmental battle of this decade. On the one side, a multinational oil industry desperate for new drilling fields to meet the world’s insatiable demand for fuel. On the other, a global environmental movement anxious to find a new front on which to fight its stalled campaign against climate change. With Greenpeace’s action having halted drilling from the rig, at least temporarily, and Greenland’s prime minister weighing in to condemn its “illegal act,” battle lines have been drawn.
Located in Baffin Bay, an arm of the Arctic Sea between the west coast of Greenland and Baffin Island in the very north of Canada, the rig in question is operated by Cairn Energy, a small UK-based oil and gas exploration company. Last week Cairn announced it had found gas in thin sand – the possible precursor to oil – in the area. With oil majors such as Exxon and Chevron already buying licences to drill off Greenland, successful discovery of oil is almost certain to spark a new “black gold rush” in the region.
The oil industry’s desire to find new reserves in the Arctic is not hard to understand. Increasingly locked out of developing countries whose governments now prefer to control their own oil sectors, and plagued by political instability in oil-rich countries from Nigeria to Iraq, global oil companies view the prospect of finding oil in the Arctic – governed by the stable democracies of the United States, Canada and Scandinavia – with enthusiasm. They are already investing heavily in the high-carbon tar sands of Canada; the US Geological Survey estimates that the Arctic’s technically recoverable offshore reserves could amount to around ninety billion barrels, or up to 10 per cent of currently estimated “proven” reserves.
But the environmental case against exploration is even more powerful. The Arctic’s fragile ecology is already under pressure from the warming seas and fracturing ice masses caused by climate change. The region is rich in birdlife, with millions of birds passing through on their annual migrations, and is also home to many species of whale, including blue whales, minke and humpback, alongside seals, narwhals and walruses. With low temperatures, lack of sunlight and thick ice inhibiting the breakdown and dispersal of spilled oil, the environmental impact of an oil leak here could dwarf the BP disaster in the Gulf of Mexico. Ecologists warn that the contamination caused could be carried far inland by coastal species such as polar bears and foxes, which prey on marine animals. It is now more than twenty years since the Exxon Valdez ran aground in Prince William Sound, Alaska; despite the huge clean-up operation, local populations of marine mammals have yet to recover and some are nearing extinction.
And the Arctic Sea is a spectacularly inhospitable place for drilling. Following the Deepwater Horizon disaster in the Gulf of Mexico, President Obama imposed a moratorium on drilling at a depth of 152 metres or more. But Cairn Energy has drilled to a depth of more than 300 metres from its rig off Greenland. Cairn’s ships are having to tow icebergs out of the way to avoid collisions with its rig. But it can’t do anything to divert the largest bergs, which means that the rig itself will have to be moved at short notice. Last month an ice island four times the size of Manhattan broke off the Petermann glacier north of Disko Island and will eventually make its way south through the Nares Strait into Baffin Bay.
Drilling in this area is only possible for the few months between July and early October each year; for the rest of the year the sea-ice becomes too thick to allow vessels to operate. This means that should a leak occur it may become impossible to drill a relief well until at least the following year, allowing oil to flood into the Arctic waters for months. And it would be impossible to mobilise the kind of clean-up resources that BP has applied to the Deepwater Horizon spill in the heavily industrialised area of the Gulf of Mexico. BP has used more than 3000 vessels in that operation; it is understood that Cairn Energy so far has an estimated fourteen in Baffin Bay. Industry experts warn that there are, in any case, no methods yet developed to recover spilled oil trapped underneath ice; the oil skimmers used in the Gulf of Mexico would simply be unable to reach it.
According to the US Minerals Management Service, the chance of a major spill over the lifetime of a block of exploration leases in Alaska is as high as one in five. And what will be the reward for such an environmental risk? Even if all the estimated Arctic reserves can be exploited, they would provide less than three years of global oil consumption at present rates.
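That final figure is easy to sanity-check against the USGS estimate quoted earlier (a back-of-the-envelope sketch; the roughly 87 million barrels per day of global consumption is an assumed value for demand around 2010, not a number given in the essay):

```python
# Rough check of the "less than three years of global oil consumption" claim.
arctic_reserves = 90e9       # barrels; USGS estimate of recoverable Arctic offshore reserves
daily_consumption = 87e6     # barrels/day; assumed world demand circa 2010 (not from the essay)

years_of_supply = arctic_reserves / (daily_consumption * 365)
print(f"Arctic offshore reserves would cover about {years_of_supply:.1f} years of demand")
# prints about 2.8 years
```

Even allowing generous assumptions about how much of the estimate could actually be produced, the result stays under three years, consistent with the essay’s claim.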
For here lies the central problem. The oil industry is right to point to the continuously rising global demand for oil; but the answer cannot be an ever-expanding supply produced in ever more hazardous ways. And the reason is climate change.
Figures from the International Energy Agency make this clear. If the growth of energy demand continues on current trends, by 2030 oil consumption will have expanded by a quarter from current levels, inevitably requiring new discoveries. But these same trends, and their resulting greenhouse gas emissions, will lead to global warming of a catastrophic five or six degrees, well beyond the capacity of human society to adapt. By contrast, if the global temperature rise is to be held to a just-tolerable two degrees, global oil consumption would have to be only just above current levels by 2030, and already falling. Such levels of consumption can be met from within existing reserves. But more importantly, as the agency shows, they will require a significant development of alternatives to oil.
Such alternatives are beginning to become viable, notably in the development of electric vehicles and second and third generation biofuels. Electric and hybrid vehicles in particular are now under commercial production by all the big car manufacturers, and could become widespread over the next two decades. But the incentive for their development is the prospect of increasingly scarce and expensive oil, and their development will only be retarded by the continued focus on developing new supply.
So Greenpeace’s demand for the banning of Arctic drilling is justified on both climate change and ecological grounds. But this does not mean it will be easy to achieve. Governments will always seek to avoid limiting production of an exploitable resource. When they wish to act at all, it is much easier for them to grasp the other end of the stick, encouraging alternative sources and greater efficiency in use. That is why, in the field of electricity, emphasis has gone into developing renewables and nuclear energy and insulating homes and buildings. This depresses demand for coal and gas without requiring the awkward step of making it illegal to develop them.
Yet prohibiting resource use has been done – indeed, many of the environmental movement’s greatest victories have taken this form. The bans on whaling, prohibitions on cutting down ancient forests, the creation of national parks, the protection (just) of Antarctica – all provide precedents. They have required protracted public campaigns to pressure governments and the companies involved, but in the end have succeeded, even if in some cases only partially. In the field of electricity, too, the demand to ban unabated coal-fired power stations is beginning to achieve success in a number of countries.
So a global campaign to prohibit the exploitation of Arctic oil looks set to become the new focus for environmental concern. The deep anxiety and backlash against the oil industry caused by the BP disaster in the Gulf of Mexico provides a huge opportunity for the environmental movement. Yet in putting pressure on the US, Canadian, Danish, Norwegian and Russian governments, it will meet fierce resistance from an oil industry with deep pockets and even deeper contacts in the upper ranks of governments and legislatures. For a movement that’s struggled to mobilise public opinion on the scale required to combat climate change, this will be a huge challenge. But in every generation one issue comes to symbolise the wider battle over humankind’s exploitation of the natural world. The confrontation now taking place in the cold winds of Baffin Bay may mark the next frontier. • | <urn:uuid:38ccbe52-8aa9-4560-9bfd-954f75495e6a> | CC-MAIN-2017-13 | http://insidestory.org.au/arctic-oil-the-battle-begins/ | s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218191444.45/warc/CC-MAIN-20170322212951-00013-ip-10-233-31-227.ec2.internal.warc.gz | en | 0.951383 | 1,705 | 3.046875 | 3 |
I love Arts teachers. All of them.
Each Arts teacher has a unique fingerprint of creativity that they bring to their students, their teaching and the world. And they do it all with less. Less time, less money, and less resources than everyone else in the school (typically). They are true rock stars for our kids.
And so I understand why many art teachers are protective of their supplies. Please keep in mind, too, that this isn’t limited to the visual art teachers. Music teachers, dance, drama and media arts specialists are all very careful about the resources, materials, and instruments being used outside of their classrooms. This only makes sense: we wouldn’t ask doctors to share their instruments with dentists after a surgery. It’s not fair, therefore, to just assume that the art teacher should provide everyone in the school with glue bottles, construction paper, or “just a little” paint. Nor is it okay to assume that the music teacher’s classroom instruments are fair game for a music integration lesson in a social studies classroom.
This is not to say that Arts teachers don’t love to share: they DO. They want to be a vital part of the creative heartbeat of a school. But they need those materials and resources (in the limited amounts provided to them) to be used to teach students the intricacies and value of their own craft. Students need to be able to use the materials and instruments to practice their art as a way to deepen their schema of learning the arts themselves. When we take these things away from our wonderful Arts teachers, we are then taking away a chance for our students to have a purely artistic experience which could then be applied critically in an integrated lesson.
As a result, our Arts teachers have had to get tough. They have had to put their foot down and say that classroom teachers cannot “borrow” paint or cannot just take a few instruments for an upcoming lesson. To classroom teachers, this feels like the Arts teacher isn’t being cooperative, but that is just not it. Arts teachers are simply advocating for the resources that they have fought so hard for in the first place.
So what can you do? How can you provide an Arts Integration lesson when going to the Arts teachers is out of the question?
1. Have a dedicated Arts Integration Cart or Closet. This has worked wonders for me in any building I have worked with. Set aside a certain amount of money – $1,000/$2,000 is a good range to start – and then order supplies just for that cart. You can order art supplies, musical instruments, an iPod dock, and even a few iPads for digital design. Then, as you and your colleagues plan for integrated lessons, you can check out the cart to use with your students. Be sure to keep an inventory of what you used, as well as a copy of the lesson plan, so that you can document where the materials are going. This will help in the ordering process next year. Want some great ideas for what to include in your cart? Check out our Arts Integration Cart Pinterest board to get started.
2. Try crowdsourcing or PTAs for the materials you need. Create a list of supplies that you would like to have dedicated for Arts Integration in your classroom. Don’t forget to include storage and organization for these supplies! Once you have your list, price it out via online ordering sites (Blick Art Supplies, Sax Art Supplies, and West Music are great places to start) and come up with a total. Present your budget and a brief statement of why you need these materials to your PTA or use them to create a pitch on a crowdsourcing site like donorschoose.org to let others help fund your initiative.
3. Don’t forget about recycling! Have a dedicated AI Supplies box outside of your classroom where teachers or parents can drop off odds and ends that they don’t need anymore. You’d be amazed at how many pieces of construction paper, scissors, markers, paints, and old instruments will wind up with you from that one box alone.
4. Preparation is key. Often, gathering arts supplies or resources from the Arts teachers doesn’t have to be a chore – you just need to use what you learned in kindergarten. Ask nicely and give them plenty of notice. It’s hard to feel generous when a teacher comes into your arts classroom saying they need the materials for this afternoon’s lesson. If you prepare your lesson in collaboration with others, be sure to connect with the Arts teacher you are linking to when you create your lesson. They may be able to budget and set aside something that you need. Be prepared, too, for the possibility that they can’t offer the items you need; in that case, try tips 1-3, and thank them for their willingness to help. Next time, you might just be able to work something out with them.
Have you been successful in gathering materials for your arts classes? What stumbling blocks have you run across in getting these resources? | <urn:uuid:77185d08-79c0-4500-af8c-5cabee6bfc76> | CC-MAIN-2019-47 | https://educationcloset.com/2013/10/15/dont-take-art-supplies-gather-materials-arts-integration-lessons/ | s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496672313.95/warc/CC-MAIN-20191123005913-20191123034913-00113.warc.gz | en | 0.964022 | 1,051 | 2.8125 | 3 |
Engelbart was involved in the development of the ARPANET—the precursor to the modern internet—and showed off hypertext long before most people had interacted with a computer, let alone touched a networked computer. On December 9, 1968, Douglas Engelbart's "Mother of All Demos" from Menlo Park, California, showcased what was considered incredibly futuristic technology for the time, including his mouse. You can watch the demo on YouTube.
From the Computer History Museum:
While at SRI, Engelbart's most important work began with his 1959 founding of the Augmentation Research Center, where he developed some of the key technologies used in computing today. Engelbart brought the various strands of his research together for his "mother of all demos" in San Francisco on December 8, 1968, an event that presaged many of the technologies and computer-usage paradigms we would use decades later. His system, called NLS, showed actual instances of, or precursors to, hypertext, shared screen collaboration, multiple windows, on-screen video teleconferencing, and the mouse as an input device. This demo embodied Engelbart's lifelong commitment to solving humanity's urgent problems by using computers as tools to improve communication and collaboration between people.
Engelbart's legacy can be shared thanks to the incredible tools he helped create. Our hats are off to you, Mr. Engelbart. | <urn:uuid:c6cc67ab-ea92-462f-89b2-083e6e318a65> | CC-MAIN-2014-10 | http://paleofuture.gizmodo.com/douglas-engelbart-developer-of-the-early-computer-mous-659855829 | s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394010855566/warc/CC-MAIN-20140305091415-00022-ip-10-183-142-35.ec2.internal.warc.gz | en | 0.954779 | 285 | 3.546875 | 4 |
|Bumble bee robbing nectar|
Some flower visitors don't hesitate to rob nectar without paying their dues. They either can't enter the flower the "legitimate" way or don't bother to; entering that way would ensure picking up pollen and delivering it to the next flower. Instead they take a shortcut, slashing the throat of the flower and going right to the source. Carpenter bees can be among the most notorious robbers because of their strong, sharp mouthparts, which enable them to perforate the walls of a flower; but other large insects can be just as bad.
|Abelia flowers with slashed throats|
Tubular or trumpet shaped flowers are the most frequent victims of this larceny because their nectar is hard to reach. Here are some abelias that have been robbed. You can see the scar at the base of the flower.
Sometimes smaller bees or other insects take advantage of the shortcut and visit the wound, like this beetle, which also happens to parasitize the nests of bumble bees.
|Sap-feeding beetle Epuraea aestiva|
In this case something more tragic happened. A small bug ventured deep inside the flower and became tangled, its legs sticking out of the hole, and unable to go in or out or turn around. I thought that I may be able to help it and by the way find out the identity of the victim; but my clumsy old fingers couldn't perform this delicate task. I never found out who this innocent bystander was. Any guesses?
|An insect trapped inside the flower|
Beginners Guide to Pollinators and Other Flower Visitors
© Beatriz Moisset. 2012 | <urn:uuid:613906f1-c381-4746-887a-6a8bf45450b1> | CC-MAIN-2017-30 | http://pollinators.blogspot.com/2012_02_01_archive.html | s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549423809.62/warc/CC-MAIN-20170721202430-20170721222430-00038.warc.gz | en | 0.936243 | 343 | 2.59375 | 3 |
Harnessing the Power of Positivity: The Importance of Positive Self-Talk for Well-Being
Introduction: The way you speak to yourself matters more than you might realize. Positive self-talk isn’t just a feel-good exercise; it plays a significant role in shaping your mental and emotional well-being. In this guide, we’ll explore why cultivating a habit of positive self-talk is crucial for nurturing a healthier and more fulfilling life.
1. Influence on Mindset: Positive self-talk directly impacts your mindset. By reframing negative thoughts and cultivating optimism, you can shape your perspective to focus on possibilities rather than limitations.
2. Improved Self-Esteem: The words you use to describe yourself influence your self-esteem. Positive self-talk builds a foundation of self-worth, reinforcing the idea that you are deserving of love, respect, and kindness.
3. Reduction of Stress and Anxiety: Negative self-talk can amplify stress and anxiety. Shifting to positive self-talk reduces self-inflicted pressure and alleviates feelings of overwhelm.
4. Increased Resilience: When faced with challenges, positive self-talk helps you view setbacks as temporary and surmountable. This resilience enables you to bounce back stronger and more determined.
5. Enhanced Problem-Solving: Positive self-talk encourages you to approach problems with a solution-focused mindset. Instead of fixating on obstacles, you begin to explore creative ways to overcome them.
6. Emotional Regulation: Positive self-talk supports emotional regulation by helping you manage intense emotions. It guides you toward rational thinking and prevents spiraling into negative thought patterns.
7. Boosted Self-Confidence: As you replace self-doubt with self-assurance, your confidence grows. Positive self-talk reminds you of your strengths and capabilities, empowering you to take on challenges.
8. Nurtured Relationships: Positive self-talk extends to your interactions with others. When you treat yourself with kindness and respect, you model healthy self-esteem and create a more positive atmosphere in relationships.
9. Enhanced Goal Achievement: Believing in your abilities propels you toward your goals. Positive self-talk fuels your motivation, making it more likely for you to persevere and achieve what you set out to do.
10. Mind-Body Connection: Positive self-talk can positively impact your physical health. When your mind is in a positive state, stress hormones decrease, and your body’s healing mechanisms are activated.
11. Cultivation of Gratitude: Positive self-talk encourages gratitude for your accomplishments, experiences, and even challenges. Gratitude reinforces a positive outlook on life.
12. Self-Compassion: Above all, positive self-talk is an expression of self-compassion. Treating yourself with the same kindness and encouragement you would offer a friend creates a nurturing inner environment.
Conclusion: Positive self-talk is a transformative tool that can reshape your outlook on life and enhance your well-being. By intentionally replacing negative self-talk with empowering, kind, and optimistic words, you’re not only nurturing your own mental and emotional health but also creating a foundation for a more fulfilling and resilient life journey. The power to improve your well-being is within your thoughts and words—harness it to create a brighter and more positive path forward.
How Ridgeview Hospital Can Help
When you enroll at Ridgeview Hospital, you are taught techniques that can be applied even after you’ve left. With that being said, emotional sobriety starts when individuals receive dual diagnosis treatment for both their addiction and any coexisting mental health disorder.
If you have a dual diagnosis, co-occurring substance use treatment is one of the most effective treatment programs you can partake in. While participating in this program, you will learn ways to treat the symptoms of your addiction and psychiatric illness through a combination of methods.
Our co-occurring substance use treatment options include:
- Cognitive behavioral therapy
- Group therapy
- Recreational therapy
- Relapse prevention planning
- 12-step programming
In addition, Ridgeview Hospital’s adult psychiatric program is designed to be comprehensive enough to treat various mental health issues but flexible enough to accommodate individual patient needs.
Learn to Cope With Substance Withdrawal Symptoms at Ridgeview Hospital
At Ridgeview Hospital, our Middle Point, Ohio, center is dedicated to helping people get clarity by finding physical and emotional sobriety. In addition, our adult mental health program is designed to help you overcome your addiction and establish a strong foundation for sustained recovery.
From the moment you arrive, our team will work with you to create a plan to help you maintain physical and emotional sobriety. | <urn:uuid:1384ab90-d56b-4801-8096-936adf79e264> | CC-MAIN-2023-40 | https://ridgeviewhospital.net/importance-of-positive-self-talk-for-well-being/ | s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233506632.31/warc/CC-MAIN-20230924091344-20230924121344-00274.warc.gz | en | 0.916259 | 982 | 2.875 | 3 |
Background: Recessively inherited genetic disorders such as sickle cell anemia and β-thalassemia are commonly encountered in heterozygous and homozygous form in India. These hemolytic disorders cause a high degree of reproductive wastage in vulnerable communities. Inbreeding is the mating of two related individuals. Homozygosis is the process antagonistic to heterosis.
Purpose: This study was aimed at finding the reproductive outcome in carrier couples of sickle cell disease and β-thalassemia, in terms of reproductive wastage, in relation to varied marital distance between partners in Madhya Pradesh.
Methods: A total of 107 carrier couples of β-thalassemia major and sickle cell anemia (35 and 72, respectively) with confirmed affected offspring were studied, after detailed reproductive histories were taken, following the standard methodology in a tertiary hospital in Central India from March 2010 to February 2013.
Results: A majority of sickle cell and β-thalassemia carrier couples (77.8% and 65.7%, respectively) had married within a radius of less than 50 km of their native places. It was found that as the marital distance between two carrier partners of the above disorders decreases, the numbers of abortions, still-births, neonatal deaths, infant deaths, and deaths under 10 years of age increase, and vice versa, implicating inbreeding and homozygosis. An overall reproductive wastage of 28.2% and 18.6% was recorded in carrier couples of sickle cell disease and β-thalassemia, respectively.
Conclusions: Relatively small population size combined with small marital distance leads to inbreeding, resulting in homozygosity, which increases the chance of offspring affected by recessive or deleterious traits and contributes to the decreased fitness of a couple or population in Central India.
Adding Up the Many Dangers of Tobacco -- and Finding New Ones
This is SCIENCE IN THE NEWS, in VOA Special English. I'm Shirley Griffith. And I'm Steve Ember. This week, we will present some new warnings about smoking and tobacco products.
For many years, scientists have warned us not to smoke. The World Health Organization says tobacco is the leading preventable cause of death in the world. Five million people die of causes linked to tobacco use every year.
Now, medical research has provided even more warnings. Advisers to America's Centers for Disease Control and Prevention report that pneumococcal pneumonia threatens smokers more than nonsmokers. The advisers say many smokers will need a vaccine to help prevent the disease.
This is the first time medical experts have suggested the vaccine for young and middle-aged adult smokers. The Advisory Committee on Immunization proposed that the vaccine be given to smokers ages nineteen through sixty-four.
Past research showed that cigarette smokers are four times more likely to get pneumococcal diseases than nonsmokers. For years, older adults and children under two have been urged to get the vaccine. So have people with serious health problems like diabetes and heart disease. Others at risk are people with low resistance to infection.
A C.D.C. official says it is not known why smokers are more likely to get pneumococcal infections. One idea is that smoking damages protective tissue in the back of the throat. As a result, bacteria are more likely to connect to the smoker's windpipe and lungs.
The vaccine fights several kinds of Streptococcus pneumoniae. The bacteria can infect a person's brain, causing the disease meningitis. It also can affect the blood. Experts say up to twenty percent of people with pneumococcal blood infections die, even when treated.
The experts say smoking even one cigarette a day can increase the threat of pneumococcal pneumonia by one hundred percent. The more cigarettes a person smokes, the greater the threat of the disease. Health officials say smokers should do more than get the pneumococcal vaccine. They urge people to stop smoking.
Smoking also can affect your hearing. That warning resulted from a study reported earlier this year by the International Society of Audiology Congress in Hong Kong. The study was said to be one of the largest ever carried out about hearing loss. The results were published in Springer's "Journal of the Association for Research in Audiology."
The report says hearing loss is not just a natural result of the aging process. The major cause is noise. But the report says smoking and being over-weight aid the development of hearing loss.
Four thousand eighty-three people took part in the study. They were fifty-three to sixty-seven years old. They answered questions about their medical history and their contact with possible environmental threats. They also took hearing tests.
Researchers considered the possibility of the links between the possible threats and hearing loss. The researchers found a close connection between smoking and hearing loss.
Many smokers use tobacco products while eating or drinking alcohol in public. The American state of Massachusetts banned smoking in almost all restaurants and workplaces four years ago.
Recently, a study found that the state had five hundred seventy-seven fewer heart attack deaths each year since the ban became law. The Massachusetts Department of Public Health and the Harvard School of Public Health organized the study. The findings may strengthen evidence for workplace smoking bans.
The World Health Organization says one billion three hundred million people still smoke worldwide – even after all the warnings. W.H.O. officials say eighty-four percent of all smokers live in developing countries. At the same time, smoking in the United States and Europe has decreased.
People who smoke also harm non-smokers. The American Cancer Society says this kind of secondhand smoke causes lung infections in as many as three hundred thousand young children each year.
Expectant mothers who smoke are more likely to have babies with health problems and low birth weight. Such babies may suffer health problems as they grow.
Older smokers are also at risk. A study in the publication "Neurology" showed that older adults who smoke face an increased risk of Alzheimer's disease. Decreased mental health also was more likely in persons who smoked than in non-smokers. After a time, Alzheimer's patients lose the ability to think, plan and organize.
Most people know that smoking causes lung cancer. But it also has been shown to be a major cause of cancers of the mouth, esophagus, kidney, bladder and pancreas. Cigarettes are not the only danger. Smokeless tobacco and cigars also have been linked to cancer. But these facts are not enough to prevent people from smoking.
The American Cancer Society says there is no safe way to smoke. The group has warned that smoking begins to cause damage immediately. All cigarettes can damage the body. Smoking even a few cigarettes is dangerous.
Nicotine is a substance in tobacco that gives pleasure to smokers. Nicotine is a poison. The American Cancer Society says nicotine can kill a person when taken in large amounts. It does this by stopping the muscles used for breathing.
The body grows to depend on nicotine. When a former smoker smokes a cigarette, the nicotine reaction may start again. This forces the person to keep smoking.
Studies have found that nicotine can be as difficult to resist as alcohol or the drug cocaine. So experts say it is better never to start smoking than it is to smoke with the idea of stopping later.
Experts say menthol cigarettes are no safer than other tobacco products. Menthol cigarettes produce a cool feeling in the smoker's throat.
That means that people may hold the smoke in their lungs longer than smokers of other products. As a result, scientists suspect that menthol cigarettes may be even more dangerous than other cigarettes.
Other smokers believe that cigarettes with low tar levels are safer. Tar is a substance produced when tobacco leaves are burned. It is known to cause cancer.
America's National Cancer Institute has said that people who smoke low-tar cigarettes do not reduce their risk of getting diseases linked to smoking. Scientists found no evidence of public health improvements from changes in cigarette design and production in the past fifty years.
Is there no way to smoke without harming your health?
The American Cancer Society does not think so. The group wants people to stop, or at least reduce, smoking.
For this reason it organizes the Great American Smokeout every year. The event takes place in November. Local volunteers support the efforts of individuals who want to stop smoking.
The American Cancer Society says blood pressure returns to normal twenty minutes after the last cigarette. Carbon monoxide levels in the blood return to normal after eight hours. The chance of heart attack decreases after one day. After one year, the risk of heart disease for a non-smoker is half that of a smoker.
There are products designed to help people reduce their dependence on cigarettes. Several kinds of nicotine replacement products provide small amounts of the chemical. These can help people stop smoking.
Experts also say a drug used to treat depression has helped smokers. The drug is called Zyban. It does not contain nicotine. It works by increasing levels of dopamine in the brain. Dopamine is a chemical that produces pleasure.
Here is some advice from people who have stopped smoking: Stay away from alcohol. Take a walk instead of smoking a cigarette. Avoid people who are smoking. If possible, stay away from situations that trouble you.
It is not easy to stop. And people never can completely control their own health. But as one doctor advises her patients, becoming a non-smoker is one way to gain control of your life.
This SCIENCE IN THE NEWS was written by Jerilyn Watson. Brianna Blake was our producer. I'm Steve Ember. And I'm Shirley Griffith. Read and listen to our programs at voaspecialenglish.com. Join us again next week for more news about science in Special English on the Voice of America. | <urn:uuid:da8e6c7a-b78c-45b0-b235-7c43a6ad5893> | CC-MAIN-2016-30 | http://www.manythings.org/voa/0/12601.html | s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257828322.57/warc/CC-MAIN-20160723071028-00248-ip-10-185-27-174.ec2.internal.warc.gz | en | 0.956777 | 1,653 | 3.25 | 3 |
(March 27 1892
– April 3 1972
) was an American pianist
Born Ferdinand Rudolph von Grofé
, in New York City
, Grofe came by his myriad musical interests naturally. Of French Huguenot
extraction, his family had four generations of classical musicians
. His father, Emil von Grofé, was a baritone
who sang mainly light opera and his mother, Elsa Johanna von Grofé, was a professional cellist
. She was also a versatile music teacher who taught Ferde to play the violin and piano. Elsa's father, Bernardt Bierlich, was a cellist in the Metropolitan Opera
Orchestra in New York and Elsa's brother, Julius Bierlich, was first violinist
of the Los Angeles
Ferde's father died in 1899 and Elsa took Ferde abroad to study piano, viola and composition in Leipzig, Germany. Given such a musical background, it is perhaps understandable that Ferde became proficient over a remarkable range of instruments including piano (his favored instrument), violin, viola (he became a violist in the LA Symphony), baritone horn, alto horn and cornet.
This command of musical instruments and composition gave Ferde the foundation to later become first an arranger of other composers' music and then an orchestrator of his own compositions.
Grofé left home at the age of 14 and variously worked as a milkman, truck driver, usher, newsboy, elevator operator, helper in a book bindery, iron factory worker, and as a piano player in a bar for two dollars a night and as an accompanist. He continued studying piano and violin. When he was 15 he was performing with dance bands. He also played the alto horn in brass bands. He was 17 when he wrote his first commissioned work.
With Paul Whiteman
Beginning about 1920, he played the jazz
piano with the Paul Whiteman
orchestra. He served as Whiteman's chief arranger from 1920-1932. He made hundreds of arrangements of popular songs, Broadway show music, and tunes of all types for Whiteman.
Grofé's most memorable arrangement is that of George Gershwin's Rhapsody in Blue, which established Grofé's reputation among jazz musicians. Grofé took what Gershwin had written for two pianos and orchestrated it for Whiteman's jazz orchestra. He transformed Gershwin's musical canvas with the colors and many of the creative touches for which it is so well known. He went on to create two more arrangements of the piece in later years. Grofé's 1942 orchestration for full orchestra of Rhapsody in Blue is the one most frequently heard today.
Due to Grofé's ubiquity in arranging large-scale musical works and a perceived paucity of American achievements in serious music, the German conductor Wilhelm Furtwängler complained that "America has no composers, only arrangers."
During this time, Grofé also recorded piano rolls for the Ampico company in New York. These were embellished with extra notes after the recording took place to attempt to convey the thick lush nature of his orchestra's style, and are marked "Played by Ferde Grofé (assisted)".
After leaving Paul Whiteman
During the 1930s, he was the orchestra leader on several radio programs, including Fred Allen
's show and his own The Ferde Grofé Show
. In 1944 he was a panelist on A Song Is Born
, judging the works of unknown composers.
Grofé was later employed as a conductor and faculty member at the Juilliard School of Music where he taught orchestration.
In addition to being an arranger, Grofé was also a serious composer in his own right. While still with Whiteman, in 1925, he wrote Mississippi Suite
, which Whiteman recorded in shortened format in 1927. He wrote a number of other pieces, including a theme for the New York World's Fair
of 1939 and suites for Niagara Falls
and the Hudson River
Today, Grofé remains most famous for his Grand Canyon Suite (1931) a work regarded highly enough to be recorded for RCA Victor with mastery by Arturo Toscanini and the NBC Symphony (in Carnegie Hall in 1945, with the composer present). The earlier Mississippi Suite is also occasionally performed and recorded. Grofé conducted the Grand Canyon Suite and his piano concerto (with pianist Jesús Maria Sanromá) for Everest Records in 1960.
He also composed original film music, including the scores to Minstrel Man (1944), Time Out of Mind (1947), Rocketship X-M (1950) and The Return of Jesse James (1950).
Ferde Grofé died in Santa Monica, California at the age of 80. He was buried in the Mausoleum of the Golden West at the Inglewood Park Cemetery in Inglewood, California.
Grofé composed a number of original pieces of his own in a symphonic jazz style. Grofé's works include:
His soundtrack to the 1950 science fiction film Rocketship X-M included the use of the theremin. His monumental Grand Canyon Suite is his best known work, a masterpiece in orchestration and evocation of mood and location.
- Grofé's Grand Canyon Suite, performed by the NBC Symphony, conducted by Arturo Toscanini. On LP and on the recently out-of-print CD, it is coupled with works by George Gershwin, and (on the CD) Samuel Barber and John Philip Sousa.
- Grofé's Grand Canyon Suite, performed by the New York Philharmonic (with John Corigliano, Sr.as the violin soloist) conducted by Leonard Bernstein. Coupled with Bernstein conducting Gershwin’s Rhapsody in Blue (with Bernstein at the piano) and An American in Paris (Sony 63086)
- Grofé's Grand Canyon Suite, performed by the Detroit Symphony Orchestra conducted by Antál Dorati. Coupled with Dorati conducting Gershwin's Porgy and Bess: A Symphonic Picture (London/Decca Jubilee 430712)
- Symphonic Jazz: Grofé and Gershwin, performed by the Harmonie Ensemble/New York conducted by Steven Richman (Bridge Records 9212), playing:
- Grofé's Mississippi Suite (the original Whiteman Orchestra version)
- Gershwin's Second Rhapsody for Orchestra with Piano arranged by Grofé, with Lincoln Mayorga on the piano (premiere recording)
- Grofé's Gallodoro's Serenade for Saxophone and Piano with Al Gallodoro on alto saxophone and Mayorga on piano (premiere recording)
- Grofé's Grand Canyon Suite (original Whiteman Orchestra version; first complete recording)
- Liner notes by Don Rayno for Symphonic Jazz: Grofé and Gershwin (Bridge Records 9212) | <urn:uuid:91532a6d-d8c2-4a7a-83af-6a81ad3cf29b> | CC-MAIN-2014-23 | http://www.reference.com/browse/grof%C3%A9 | s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1405997877644.62/warc/CC-MAIN-20140722025757-00062-ip-10-33-131-23.ec2.internal.warc.gz | en | 0.969566 | 1,483 | 2.84375 | 3 |
M dwarf spectral classification L dwarf and T dwarf spectra L dwarf characteristics - photometry
M dwarf spectral classification
L dwarf and T dwarf spectra
L dwarf characteristics - photometry
As discussed in the introduction, one of the main goals in establishing a reliable spectral-type scale is correlating the empirical sequence against the physical parameters which determine the overall energy distribution and detailed spectroscopic appearance of the very low-mass star or brown dwarf. While parameters such as gravity and chemical abundance probably play a significant role, the expectation is that the changing appearance of M, L and T dwarfs is driven primarily by changes in effective temperature.
Early and mid-type M dwarfs have temperatures in the range 4000 to 3000 K. Fiducial points are provided by the eclipsing binaries YY Gem (Gl 278C) and CM Dra (Gl 630.1). Since both luminosity and radius can be derived empirically to accuracies of a few percent or better in those systems, the fundamental relation
In principle, the most accurate method of determining the effective temperature of a stellar object should be through matching the spectral energy distribution against predictions based on theoretical models. In practice, the atmospheres of late-type M dwarfs and L dwarfs are rendered extremely comple by the presence of several important diatomic and triatomic molecules, notably TiO, VO and metal hydrides at optical wavelengths, and H2O in the near-infrared. Substantial progress has been made in this field over the last few years, with the various manifestations of the Phoenix models (see, for example, Hauschildt et al, 1997 & Allard et al, 2001) providing a dramtic improvement over the Mould (1976) models, which were still basically the only game in town in the early 1990s. However, accurate model atmosphere calculations require data for the hundreds of thousands of lines contributed by the more important molecules, and those linelists remain incomplete. This results in systematic deficiencies in the models: for example, one of the three transitions in the 7050 A. TiO bandhead is simply not present in the theoretical spectra predicted by state-of-the-art atmosphere models.
Dealing quantitatively with the effects of dust formation offers another significant complication. In qualitative terms, dust grains start to for form at mid-M spectral types; the grains grwo in size with decreasing temperature, and eventually `rain out' to deeper layers below the effective photosphere. Until rain out is complete, however, the dust grains absorb short wavelength radiation and re-emit that energy at longer wavelengths (backwarming), with concomitant effects on the overall temperature structure. The resultant emergent spectrum, particularly at near-infrared wavelengths, depends strongly on where the grains reside within the atmosphere: dusty models for mid and late--type L dwarfs are 1-2 magnitudes redder in (J-K) than grain-free models. Spectral features are also dust dependent: as already noted, the relatively shallow depth of the near-IR steam bands, as compared with Mould's (1976) models, is a product of backwarming by dust (Tsuji et al, 1996). Clearly, these unavoidable complications should make one leary of placing overmuch reliance in quantitative conclusions derived from matching detailed observations against those same models.
Even with these limitations, the current generation of models provide a reasonable representation of the spectral energy distribution of late-type dwarfs, and a much-improved match to observations of early- and mid-type M dwarfs. We can use the latter fact to extend the reference temperature calibration given by YY Gem and CM Dra. Leggett et al (1996) matched one of the first generation of Phoenix models (Allard & Hauschildt, 1995) against near-infrared spectra of M dwarfs, deriving temperatures between 3700 and 2700K. The coolest dwarf included in this analysis was GJ 1111, spectral type M6.5, with a deuced temperature of 2700K. This provides a reference point for the upper end of the ultracool dwarf temperature scale.
Somewhat paradoxically, modelling T dwarf atmospheres is a much simpler prospect than dealing with the hotter and more luminous late-M and L dwarfs. Many of the more important opacity sources have precipitated out as dust and therefore do not need to be included in the calculations. Moreover, observations indicate that the grains themselves have rained out to relatively deep layers within the atmosphere, Thus, Noll et al (1997) have used the models computed by Marley et al (1997) to analyse the detailed photometric and spectroscopic observations of the T6 dwarf, Gl 229B, deriving a temperature estimate of 950+/-50 K. This sets a reference point at the lower end of the ultracool dwarf temperature scale.
Figure L4.1: Spectroscopic temperature scales: the magenta squares plot the temperature calibration derived by Basri et al (2000); the yellow points plot results from Schweietzer et al (2001, 2002). The cyan points mark the reference points provided by GJ 1111 (M6.5) and Gl 229b (T6). The spectral types are on the K99/Burgasser et al. (2002) scale.
Basri et al (2000) used a later generation of Phoenix models to analyse high-resolution spectra of late-type M and L dwarfs, matching the line profiles of the neutral caesium and rubidium resonance lines. Figure L4.1 plots their results as a function of spectral type. In brief, they place the M/L transition at 2200-2300 K, while L8 dwarfs, such as 2MASS1632+19, are assigned temperatures of 1600-1700K. Martin's spectral types are tied to this temperature scale.
More recently, Schweitzer and collaborators have analysed both high-resolution and low-resolution spectra of late-tyep M and L dwarfs. The high-resolution analyses (outlined in Schweitzer et al, 2001) are similar in nature to the Basri et al study, deriving temperature estimates based on detailed modelling of line profiles. The temperatures derived from those analyses show little variation with spectral type - essentially all of the dwarfs, spanning spectral types M9.5 to l3.5, have Teff = 2000+/-100 K, with no obvious correlation with spectral type. In contrast, matching lower-resolution (Keck LRIS) data against the models produced the temperature/spectral-type correlation plotted in Figure L4.1 - temperatures typically 100K lower than the Basri et al scale. Follow-up analysis of LRIS data for other L dwarfs (Schweitzer et al, 2002) extends this calibration to spectral types L6-L8, which are assigned temperatures between 1700 and 1400K.
Before the availability of even reasonably accurate model atmospheres, most estimates of the M dwarf temperature scale rested on the analysis of broadband photometry. One widely-used technique, originating with Greenstein, Neugebauer & Becklin (1969), was blackbody fitting - fitting a Planck curve to the spectral energy distribution outlined by broadband data, and equalising the area under the two curves. Usually the Planck curve is normalised to the flux at either K or L. This technique has been used by Reid & Gilmore (1984), Berriman & Reid (1987), Berriman et al (1991) and Tinney et al. (1993). In many cases the broadband data are supplemented by either CVF spectrophotometry or near-infrared spectroscopy. The temperature scale derived from these analyses places spectral type M9/M9.5 at 2000-2100K. Figure L4.2 shows the results derived by Tinney et al. for the coolest dwarfs in their sample (including GD 165B). More recently, Stephens et al (2001) have used a hybrid approach, comparing (K-L) colours against theoretical predictions to derive a spectral type relation
Figure L4.2: Photometric temperature scales: blue open circles plot data from Tinney et al. (1993); the magenta squares plot the temperature calibration outlined above; the yellow line plots the calibration deduced by Stephens et al (2001). As in L4.2, the cyan points mark the reference points provided by GJ 1111 (M6.5) and Gl 229b (T6).
As an alternative to these techniques, if the bolometric magnitudes are known and the radius can be estimated, we can use the fundamental equation given above to derive effective temperatures. Both Kithpatrick et al (1999) and Reid et al (1999) point out that since the radii of these cool dwarfs is largely set by degeneracy, most of the difference in luminosity can be ascribed to differences in temperature. Moreover, the bolometric corrections for these dwarfs appear to be well-defined, with relatively little variation in BCJ in particular over the full range between mid-M and T (Leggett et al, 2000; Reid et al, 2001).Thus, it is not only relatively straightforward to transform the observed near-infrared magnitudes to bolometric luminosities, and hence estimate temperatures, but the small magnitude difference at MJ between 2M1523+30 (Gl 584C, L8, one of the lowest luminosity L dwarfs) and Gl 229B, the archetype of class T, suggests strongly that there is a relatively small temperature difference between those two dwarfs. That, in turn, suggests that there is relatively little scope for as-yet undiscovered classes of brown dwarfs between the latest-type L dwarfs in the current observational samples and the `transitional' early-type T dwarfs discovered by the Sloan Digital Sky Survey.
Quantifying this approach, we have two fiducial points
One of the important points to bear in mind in studying ultracool dwarfs is that these spectral types encompass a mix of very low-mass stars and brown dwarfs. Unlike hydrogen-burning stars, where the location on the main sequence changes very little over the star's lifetime, brown dwarfs evolve on relatively short astronomical timescales. They emerge from the T Tauri stage with temperatures of ~3000K, equivalent to mid-type M dwarfs, but cool rapidly through mid- and late-M, before descending through class L to become T dwarfs (and, in principle, whatever comes next). Burrows & Liebert (1993) give approximate analytic relations for this behaviour:
Figure L4.4: An artist's impression of the visual appearance of a late-M, L- and T-dwarfs. The M8 dwarf, plotted in the upper right, is scaled to a temperatures of ~2500K, with the dark areas representing active spots; the L5 dwarf, plotted in the lower left, represents a MSun brown dwarf at age Y Gyrs, T= yyyyK, with the cross-hatched structure outlining large convective cells; the T dwarf is set to match a yy Msun object at age Y Gyrs, T= , and is starting to show the belts and zones which characterise Jovian planets. An image of Jupiter is shown in the upper left to give the size scale: the T dwarf has a simialr radius; the M8 and L5 dwarfs are approximately 10% smaller.
A further feature of low-mass dwarfs is that, to first order, they all have the same radius - more exactly, for masses below ~0.09 MSun, all isolated dwarfs, whether hydrogen-burning or not, have masses within 10% of that of Jupiter. This behaviour stems from the fact that these low mass dwarfs are partially electron degenerate - just as in white dwarfs, the radius is set by degeneracy pressure rather than hydrostatic equilibrium. The larger radius (Jovian rather than terrestrial) reflects the different mean molecular weight, me, since
Figure L4.4: The same four dwarfs at near-infrared wavelengths: the methane bands lead to the "blue" colour of the T dwarf.
Figure L4.5: The theoretical HR diagram for low-mass stars and brown dwarfs. The model calculations are from the Arizona set (Burrows et al, 1997). Dotted lines plot evolutionary tracks for individual masses: 0.015 MSun (yellow), 0.020 MSun (cyan), 0.03 MSun (magenta), 0.040 MSun (green), 0.050 MSun (red), 0.060 MSun (blue), 0.075 MSun (yellow), 0.080 MSun (magenta), 0.090 MSun (green) and 0.10 MSun (red). Note that low mass brown dwarfs are more luminous at a given temperature due to their larger radii. The corresponding isochrones for ages 106 to 1010 years are picked out by the appropriate symbols, connected by solid lines. The two large stars mark the locations of GJ 1111 and Gl 229B.
The resulting implications for the HR diagram are illustrated in Figure L4.5, which plots evolutionary tracks from the Arizona models (Burrows et al, 1997) for a range of masses between 0.015 and 0.10 MSun. With only a limited range of radii, brown dwarfs (and even low-mass stars) of different masses follow very similar tracks in the (Mbol, Teff) plane. Thus, even when one has an accurate parallax, luminosity and temperature, it is generally possible to estimate a mass for a brown dwarf only if there is some independent estimate of the age. (There is a caveat - see below. ) It is for this reason that wide companions of main sequence stars (such as 2M1523+30 = Gl 584C, 2M1112+35 = Gl 417B, Gl 570D and Gl 229B) and confirmed members of open clusters, where independent age estimates are possible, provide a powerful means of testing theoretical models.
Figure L4.6: The evolution of central temperature with time in the Arizona models of low-mass dwarfs. The numbers indicate the masses in solar units. These models predict a hydrogen-burning limit of ~0.075 MSun; note that at that mass, H-burning lasts for only ~10 Gyrs. The lithium burning limit is ~2 x 106K.
The caveat mentioned above regarding mass estimates centres on whether lithium absorption is detectable in the spectrum of an ultracool dwarfs - although even here, one obtains an upper (or lower) limit to the mass, rather a direct estimate of the mass of an individual object. Figure L4.6 shows the physical basis for what has become known as the "lithium brown dwarf test" (Rebolo et al, 1993). As the protstellar core collapses, the central temperature increases. Hydrogen fusion reactions are initiated when that temperature exceeds ~3 x 106K, and if those persist, the dwarf evolves onto the main sequence as a (very) long-lived ultrcool dwarf. However, those reactions can be damped out as energy is depleted (and the central temperature drops) through increasing degeneracy. Thus, a 0.075 MSun dwarf in the Arizona models is able to maintain quasi-stable hydrogen fusion for ~10 Gyrs before dropping into the brown dwarf regime. At lower masses, the central temperature is never able to maintain sustained hydrogen fusion, and the brown dwarf `cools like a rock'.
Low mass stars are powered by hydrogen fusion in the proton-proton chain (which terminates at the formation of He3, rather than He4, in stars with masses below 0.025 MSun). However, another important reaction involves lithium, which combines with a proton to form two He4 atoms (Figure L4.7). That reaction requires a temperature of T > Tcrit ~ 2 x 106K. It is clear from Figure L4.6 that a 0.06 MSun brown dwarf barely achieves this threshold, while lower-mass brown dwarfs fall far short.
Lithium is produced in limited quantities by the PP chain (at least in higher-mass M dwarfs), but it is also a Big Bang nucleosynthetic product. Thus, every star and brown dwarf starts its life with a baseline abundance of lithium. If that dwarf has a mass below ~0.30 solar masses, it is fully convective; that is, convection transports material from the surface to the hydrogen-burning core. Thus, all of the material is exposed to the maximum range of temperature. The net result is that the primordial lithium (and any PP chain products) is destroyed rapidly in dwarfs with M > Mcrit, where Mcrit ~ 0.06 Msun. Note that stars with masses above ~1 MSun also retain primordial lithium, since the outer convective shell is sufficiently thin in those stars that the material is never exposed to temperatures above Tcrit.
Figure L4.9: lithium depletion as a function of time and temperature. The upper panel plots evolutionary tracks from the Arizona models for dwarfs with masses between 0.015 and 0.11 Msun; the lower panel plots similar data for the Lyon models. The boundaries between spectral types M, L and T are shown, and the solid yellow line marks the locus where lithium is predicted to be depleted to <1% of its primordial abundance.
Lithium persists in dwarfs with M > Mcrit for a relatively short, but still significant, period of time. Not surprisingly, the depletion rate depends on the central temperature, and hence the mass. Thus, higher-mass brown dwarfs (and even hydrogen-burning stars, vide T Tauris) can have measureable lithium. However, Figure L4.9 shows that the models predict that any dwarf with detectable lithium and a spectral type later than M6 should be a low-mass brown dwarf: for more massive dwarfs, the cooling time to reach that temperature (~2800K) is sufficiently long that all of the primordial lithium should be depleted.
Figure L4.10: Lithium in late-M and L dwarfs ,br> 2M1439 are spectral type L1; 2M1146 is spectral type L3; Denis 1228 is L5; and 2M0850 is L6. The last mentioned is known to be an unequal-mass binary.
Lithium has a relatively strong resonance doublet at 6708 A (whose visibility is enhanced in L dwarfs by the transparent atmospheres), so both low-mass brown dwarfs and young higher-mass dwarfs can be identified directly. Figure L4.10 plots some examples of ultracool dwarfs with detected lithium; other examples are present amongst the dwarfs plotted in on the spectroscopic classification page.
NStars home page
PMSU main page
INR home page | <urn:uuid:02de126e-1ea8-4ff8-9314-70cae02eeef5> | CC-MAIN-2016-26 | http://www.stsci.edu/%7Einr/ldwarf3.html | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783394937.4/warc/CC-MAIN-20160624154954-00102-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.903735 | 3,937 | 2.734375 | 3 |
An early log schoolhouse was erected in 1822, southeast of the site of Belfast. It was constructed on Montjoy Survey No. 1566, on land belonging to Stacey Storer. The community decided a school was needed and the location on Bee Run was selected. Word was sent ou that the settlers were going to gather on a certain day to roll logs. (6)
The cabin schoolhouse was constructed on round logs, when fisished it was 16 feet and had a clapboard roof. The primitive room was heated by a large fireplace across one end and had a puncheon floor. The door was fastened to the battens by wooden pins and swung from wooden hinges. The backless benches driven for legs. This log schoolhouse, with tar or oiled paper across the windown openings, was an educational center of the southeastern part of the township for many years. It was used for social gatherings as well as political meetings. Meetings were held in it whenever an itinerant minister rode through the neighborhood. the name, "Wildwood," was given to the school, suggested by the surrounding countryside. (6)
In 1842 contracts were let for a new school building to be erected three miles northwest of the site of Belfast. The log structure supervised by George W. "Squire" Siders was a large room heated by a giant "Mogul" store. It had windows with very small glass panes instead of the oiled paper used in the primitive school. Expenses for its erection were met by contributions, mostly of labor and logs. | <urn:uuid:87cc3539-77ce-4a18-8597-d0f428e8b757> | CC-MAIN-2016-40 | http://www.usgennet.org/usa/oh/county/highland/building/belfast.html | s3://commoncrawl/crawl-data/CC-MAIN-2016-40/segments/1474738662159.54/warc/CC-MAIN-20160924173742-00174-ip-10-143-35-109.ec2.internal.warc.gz | en | 0.989888 | 322 | 3.453125 | 3 |
Gender and Respectful Relationships: Real experiences of Victorian students and teachers
The articles shared this week have covered lots of debate surrounding Respectful Relationships education in Victoria. We have explored research on whether (particularly male) students and teachers will be alienated by the curriculum, and what potential the education has to actually address gender-based violence. In all this research, however, what has been the real experiences of school students and teachers who have been through the program?
Shared today is a video from the Victorian Department of Education and Training that explores just this.
In the video, students and teachers from different schools in different areas and from different year levels share their views on the curriculum. While many do talk about gender-based violence, most of the students actually talk in quite simple language. They use phrases like ‘respect’, ‘equality’ and ‘being nice’. Students from an all-boys secondary school talk about how the curriculum has allowed them to help each other out and tease each other less.
Based off these interviews and this week’s article series, will Respectful Relationships really lead to a war between the genders? Or will it teach kids (and even teachers) how to respect one another more, irrespective of things like gender, sexuality, or even ethnicity? As Victorian schools prepare to roll out the new curriculum, only time will tell, but the current evidence-base gives us a reason to be optimistic!
Department of Education and Training, Victoria 2016, Respectful Relationships, Department of Education and Training, Victoria, viewed 8 November 2016, https://www.youtube.com/watch?v=Z3kmDAkd0tQ | <urn:uuid:8923f3d4-bcb1-4864-ad25-dcb6d44f1749> | CC-MAIN-2019-22 | http://www.chalkcircle.org/resources/2016/12/2/gender-and-respectful-relationships-real-experiences-of-victorian-students-and-teachers | s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232257605.76/warc/CC-MAIN-20190524104501-20190524130501-00102.warc.gz | en | 0.95701 | 346 | 2.921875 | 3 |
Published on: 11-Jun-2013
System taps on crowdsourcing for real-time dengue monitoring
When it comes to stopping dengue, social media posts, tweets and a web system may be just what the doctor ordered.
Researchers at Nanyang Technological University (NTU) have developed a social media-based system called Mo-Buzz that can predict where and when dengue might occur.
It combines a web system that taps into historical data on weather and dengue incidents with swift reports from the public on mosquito bites and breeding sites, submitted via smartphones and tablets.
These reports are geo-tagged to the user's location and shown live on Google Maps in the system.
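To make the idea concrete, a crowdsourced report layer of this kind can be sketched in a few lines. The field names and the grid size below are illustrative assumptions — Mo-Buzz's actual data schema is not described in the article — but the pattern is the same: each report carries the user's coordinates, and reports are binned into map cells so a layer like Google Maps can display live counts.

```python
from dataclasses import dataclass
from collections import Counter

# Hypothetical report record; the real Mo-Buzz schema is not public.
@dataclass(frozen=True)
class Report:
    kind: str    # e.g. "bite" or "breeding_site"
    lat: float   # geo-tagged to the user's location
    lon: float

def grid_cell(lat: float, lon: float, cell_deg: float = 0.01) -> tuple:
    """Snap a coordinate onto a coarse grid (~1 km at the equator)
    so nearby reports fall into the same map cell."""
    return (round(lat / cell_deg) * cell_deg,
            round(lon / cell_deg) * cell_deg)

def aggregate(reports) -> dict:
    """Count reports per grid cell, as a live map layer might consume them."""
    return dict(Counter(grid_cell(r.lat, r.lon) for r in reports))

# Two reports a few hundred metres apart, plus one further away.
reports = [
    Report("bite", 1.3521, 103.8198),
    Report("breeding_site", 1.3524, 103.8201),
    Report("bite", 1.3000, 103.8000),
]
layer = aggregate(reports)  # two cells: one with 2 reports, one with 1
```

A real deployment would stream these aggregates to the map rather than recompute them in bulk, but the binning step is the core of turning scattered phone reports into a monitorable surface.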
This real-time information can boost the authorities' efforts to keep a constant eye on the spread of dengue and, more importantly, help them deploy resources more accurately and in a more targeted manner.
The system is developed by NTU at the Centre of Social Media Innovations for Communities (COSMIC), which, as its name suggests, aims to develop social media innovations to bring about self-help in a more integrated society.
Mo-Buzz is a combination of a public health surveillance web application, integrated with a social media-based mobile app. By leveraging crowdsourcing and advanced computing, Mo-Buzz can potentially predict dengue outbreaks weeks in advance, and enable users to help health authorities monitor the spread of dengue in real-time using their mobile devices.
“This new capability represents a significant shift in how the spread of dengue and other infectious diseases can and will be monitored in the future,” said Associate Professor May O. Lwin, the principal investigator of the programme.
“What we’re hoping to do with a dynamic system like Mo-Buzz is to create active channels of communication between citizens and health authorities during the dengue season. The main advantage is that it helps everyone take preventive action well ahead of time, which is what is important for preventing dengue and saving lives.”
Health alerts and advice tailored to locations and users
Unlike conventional public health reporting, the system automatically processes historic weather and dengue incidence data using a computer simulation to generate predictive hotspot maps that forewarn the public and health authorities where and when dengue might occur. As soon as an area on the map is identified as a hotspot, health alerts and education messages can be quickly sent to residents in that area.
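The release does not describe Mo-Buzz's actual algorithms, but the core idea of turning geo-tagged citizen reports into a hotspot map can be sketched as binning reports into grid cells and flagging cells whose report count passes a threshold. This is an illustrative sketch only, with made-up coordinates, cell size and threshold:

```python
from collections import Counter

def hotspot_cells(reports, cell_size=0.01, threshold=3):
    """Bin geo-tagged reports (lat, lon) into grid cells and return
    the cells whose report count meets the threshold.
    cell_size is in degrees (~1.1 km at the equator)."""
    counts = Counter(
        (round(lat / cell_size), round(lon / cell_size))
        for lat, lon in reports
    )
    return {cell: n for cell, n in counts.items() if n >= threshold}

# Example: four reports clustered at one location, one isolated report
reports = [(1.3521, 103.8198)] * 4 + [(1.29, 103.85)]
print(hotspot_cells(reports))  # one cell flagged, with 4 reports
```

A real surveillance system would also weight recent reports more heavily, fold in weather covariates, and smooth counts across neighbouring cells; this sketch shows only the binning step.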
Users can also receive customised health information that they can share with their family and friends using social networking tools, such as Facebook, Twitter and even SMS. This encourages the community to adopt behaviours that will reduce their risk of contracting dengue.
The World Health Organisation (WHO) has invited COSMIC to implement the system in Monaragala, a rural district that is part of the WHO’s global network of age friendly cities and communities.
“Our system will vastly reduce the time lag between collecting and reporting data and preventive action taken by the authorities. The app is quick and easy to use for health workers who are constantly on the move and performing multiple duties; it is simply a click of a button, rather than pages of paper work. They can also provide health education in a visually engaging format. Moving forward, we see this as an essential tool that can also be used in Singapore, Malaysia and other countries in the region,” said Associate Professor Lwin, who specialises in research in public health communication.
Professor Schubert Foo, Director of the NTU COSMIC, comments, “Dengue is a problem in the region and Mo-Buzz provides a platform for the community to fight dengue together with the authorities. For us as researchers, adoption of the system in different communities where dengue problems are severe will also enable us to better understand the necessary conditions for a successful public dengue health campaign and management system.”
Established in 2010, COSMIC is an inter-disciplinary research centre that aims to use the power of social media for addressing critical challenges in Asia such as healthcare, agriculture and community cohesion. COSMIC is funded by the Media Development Authority’s (MDA) Interactive Digital Media R&D Programme Office (IDMPO) in Singapore.
*** END ***
Feisal Abdul Rahman
Senior Assistant Director (Media Relations)
Corporate Communications Office
Nanyang Technological University
Tel: (65) 6790 6687
About Nanyang Technological University
A research-intensive public university, Nanyang Technological University (NTU) has 33,500 undergraduate and postgraduate students in the colleges of Engineering, Business, Science, and Humanities, Arts, & Social Sciences. It has a new medical school, the Lee Kong Chian School of Medicine, set up jointly with Imperial College London.
NTU is also home to world-class autonomous institutes – the National Institute of Education, S Rajaratnam School of International Studies, Earth Observatory of Singapore, and Singapore Centre on Environmental Life Sciences Engineering – and various leading research centres such as the Nanyang Environment & Water Research Institute (NEWRI), Energy Research Institute @ NTU (ERI@N) and the Institute on Asian Consumer Insight (ACI).
A fast-growing university with an international outlook, NTU is putting its global stamp on Five Peaks of Excellence: Sustainable Earth, Future Healthcare, New Media, New Silk Road, and Innovation Asia.
Besides the main Yunnan Garden campus, NTU also has a satellite campus in Singapore’s science and tech hub, one-north, and a third campus in Novena, Singapore’s medical district.
For more information, visit www.ntu.edu.sg
Colic in horses is a term used to describe an equine stomach ache... for horses, this spells big trouble. Horses have a very delicate digestive system. Great care must be taken to avoid potentially deadly disruptions. Unlike humans, horse colic progresses quickly and can be fatal. It is essential that horse owners understand the many causes of colic so they can prevent it from happening to their horse.
Most cases of colic in horses are caused by poor horse management: poor feeding habits, poor exercise and weight management, poor horse worming practices, improper care of a horse's teeth, and even can be caused by improper exposure to water.
Any sudden feed change in a horse's diet will almost certainly cause the horse to colic, especially if the horse is exposed to a feed that is richer than the one it has been accustomed to eating. To avoid feed-induced colic in horses, always introduce new feeds slowly, over the course of 1½ - 2 weeks.
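The gradual change-over described above amounts to a simple linear ramp. As an illustration of the scheduling arithmetic only (a sketch, not veterinary guidance), a 14-day transition might mix old and new feed like this:

```python
def feed_schedule(days=14):
    """Linear ramp: fraction of the new feed per day,
    rising to 100% new feed on the final day."""
    return [round(day / days, 2) for day in range(1, days + 1)]

for day, frac in enumerate(feed_schedule(), start=1):
    print(f"Day {day:2d}: {frac:.0%} new feed, {1 - frac:.0%} old feed")
```

In practice an owner would hold or slow the ramp if the horse showed any digestive upset; the code only shows the even pacing.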
Allowing a horse to drink water immediately after strenuous exercise can cause colic. Horses that are hot and sweaty and finished working should always be cooled down before offering them water. The horse should be allowed to cool down for 30-90 minutes before drinking. Horses can also colic from lack of water. See horse water for more information.
Horses can be thrown into a severe and crippling form of colic from improper exercise practices. A horse that is out of shape and then worked heavily can colic, tie-up, founder or any combination of the three, resulting in permanent chronic lameness or death. This is especially true of horses that are both unconditioned and overweight.
To avoid exercise-induced colic in horses, introduce out-of-shape horses to exercise gradually. Start them out walking. Increase the length of the walks bit by bit, until the horse is ready to begin more robust activity. Always condition a horse for hard workouts before expecting them to perform well under strenuous exercise. This is true for all horses. Even horses that are in exceptional condition can be overworked to the point of failure. Regular exercise and maintaining a good healthy body weight greatly reduces the risk of an exercise-induced colic.
Colic in horses can be caused by lack of dental care. Horses that are unable to chew their food properly can suffer from impaction colic, where pieces of poorly chewed hay block portions of the intestine. Have your horse's teeth floated regularly when needed to avoid this type of colic.
Horses often ingest sand while eating. This can happen when they pull up dirt-filled roots while grazing. It's also common in horses that are fed directly off the bare ground. In "vacuuming" up all the good bits of hay, they end up ingesting a good bit of dirt and sand.
Sand that is allowed to accumulate in the gut can be deadly. The heavy sand settles into a pocket of intestine. As the horse moves around they are at high risk of suffering from a twisted gut.
To prevent sand ingestion feed horses off the ground. Use a sand remover to prevent sand build up.
Parasites like large strongyles, roundworms and tape worms cause damage to the intestines of horses. Parasites in the larvae stage can block the blood supply to the intestine, thereby killing portions of the gut. Horses that are wormed after a heavy worm infestation can get impaction-type colic from a blockage of dead worms, especially in weanlings. Consult your vet before worming a very young horse for the first time, especially if you suspect a heavy infestation. See horse dewormers to learn more about worming horses.
The most common cause of colic in foals is due to intussusception. This is a fancy word to describe one piece of bowel telescoping over another. This requires immediate surgery. Newborn foals can also fail to pass the meconium (first stool in newborns) causing an impaction colic. This can often be solved with an enema and a dose of mineral oil. A vet should always be consulted when dealing with colic in a foal.
Horse colic can be avoided in most instances by good horse care habits. But sometimes accidents happen; a horse gets into an apple orchard, sneaks into the grain barrel or escapes from his pen and pigs out on lush green grass. It is essential that you recognize the signs of colic in horses so you act immediately to avoid death or permanent lameness.
Early Signs of Alzheimer’s Disease
As a family member ages, memory lapses can be a normal part of getting older or an indication of something more concerning. Trying to understand if your loved one’s behavior is part of the natural progression of aging or if it is indicative of something more pressing can be a challenge.
Alzheimer’s is a form of dementia. It is progressive and compromises mental functions. It is not curable but there are some treatments that may help. We are here to assist those with Alzheimer’s and understand the complexity of caring for someone with this disease.
Granny Nannies understands both the difficulty and the importance of recognizing the early signs of Alzheimer’s. We can assist you with expert Alzheimer’s care, and we want you to know the early signs of Alzheimer’s so you can be well informed and prepared should you need our assistance.
Forgetfulness can be common, like momentarily forgetting what day of the week it is or briefly misplacing car keys. A clear distinction between a true memory concern and ordinary forgetfulness is that with normal aging, whatever is forgotten can be recalled later. For example, you may forget a co-worker’s last name but remember it later in the day.
Some cognitive abilities remain unchanged even in normal aging: the ability to perform familiar tasks like simple math, knowledge gained from experience and education, common sense, and reasonable judgment. If someone is having issues with these four key abilities, it might be cause for concern.
A key factor to recognizing when memory issues become more concerning is recognizing when poor memory starts to interfere with your loved one’s daily life. It is important to be able to differentiate your loved one’s memory loss from early signs of Alzheimer’s or the typical aging process.
Alzheimer's Warning Signs
According to the Alzheimer’s Association, the following are the ten main warning signs that indicate a person may have Alzheimer’s:
- Memory loss
- Problem-solving challenges
- Inability to complete tasks
- Confusion with time and places
- Trouble with vision and judging distance
- Difficulty with words and speaking
- Losing or misplacing belongings
- Poor judgment and lack of being money conscious
- Withdrawal from hobbies and activities
- Changing and erratic moods
Some other associated systemic symptoms that are outside of memory loss include loss of appetite, restlessness, hallucinations, paranoia, aggression, repetitiveness, and poor muscle coordination. These are not necessarily symptoms that lead to diagnosis but may happen due to Alzheimer’s.
A doctor may be able to help by diagnosing your family member with Alzheimer’s. Specialists including psychologists, neurologists, psychiatrists, and geriatricians can provide specific assistance. Early detection, treatment, and action can be very beneficial for those affected by Alzheimer’s.
If you notice these behaviors that are consistent with Alzheimer’s in someone you know, we are here to help by providing support and services.
If you have any questions about care for your loved one, please call us today!
We are here to Help! Call (772) 807-8666 for a Free home care consultation or complete our home care request form to be contacted by a home care specialist.
“Saul was ashamed of his past, so he changed his name to Paul because Jesus gave him a new beginning.”
When it comes to the Apostle Paul’s name-change, this is the explanation I’ve heard many times. God did give Paul a new identity in Christ, but that didn’t wipe out or erase who he was before. Instead, we see many ways that Paul’s entire life prepared him for his ministry as an Apostle. Likewise, when someone becomes a Christian today, their life history doesn’t get erased and wiped away. Instead, God uses that to fuel their devotion to Christ and to equip them in ministry towards others.
Here’s why Saul’s name changes to Paul throughout the book of Acts and what we can learn from it today.
Saul/Paul Lived in a Hellenized World
The Apostle Paul was from Tarsus, which is in Southern Turkey. It was an important city with an active harbor which made it prominent on the trading route along the Mediterranean Sea. In the generation (or two?) before Paul would’ve been born, Tarsus became the capital city over that region of the Roman world and many Jews began to receive Roman citizenship. It is likely that Paul was from a well-to-do Jewish family who became Roman citizens, which he later invokes in Acts 22 when he was arrested and treated poorly.
As a Jewish man who was a Roman citizen, he would have both a Jewish name (Saul) and a Roman/Greek name (Paul). He did not have a name change, but simply went by a different name depending on his audience.
“When in Rome…”
Throughout the book of Acts, when he is ministering among the Jewish community he is referred to as Saul. As soon as his ministry shifts to the Gentiles, he is consistently referred to as Paul. Again, this isn’t a change in name, but an adaptation: he simply goes by his Jewish name among the Jews and his Greek name among the Gentiles.
Paul’s motivation in this is captured well in 1 Corinthians 9:19-23 where he writes,
“19 Although I am free from all and not anyone’s slave, I have made myself a slave to everyone, in order to win more people. 20 To the Jews I became like a Jew, to win Jews; to those under the law, like one under the law — though I myself am not under the law — to win those under the law. 21 To those who are without the law, like one without the law — though I am not without God’s law but under the law of Christ — to win those without the law. 22 To the weak I became weak, in order to win the weak. I have become all things to all people, so that I may by every possible means save some. 23 Now I do all this because of the gospel, so that I may share in the blessings.”
1 Corinthians 9:19-23 (CSB)
The gospel transforms our identities. Paul was proud of his Jewish identity. That much is clear in a thousand ways throughout his writings… but he laid it aside for the sake of ministry to the Gentiles.
To be sure, the following questions should be asked with discernment. We don’t need to keep re-creating Christianity and there is something beautiful about Christian tradition. Perhaps we’ve always done it a certain way because it’s truly the best and most biblical way to do things, but maybe not. In our pursuit of reaching the lost with the gospel, we must be wise to become “all things to all people” without changing the gospel.
- Who is God calling you to reach with the gospel?
- How must you adapt your preferences in order that the call of the gospel would be heard and received?
- What aspects of your “Christian culture” are you called to lay aside in order to effectively bring the gospel to those God is calling you towards?
- How will these “adaptations” allow the message of the gospel to continue to be faithfully proclaimed?
- In what ways will these changes make it more difficult for different aspects of the gospel to be received?
Avoiding gingivitis might have a larger incentive than avoiding periodontitis and tooth decay. A new study released by the College of Dentistry at University of Central Lancashire and The Blizard Institute in the United Kingdom has indicated that dental bacteria may lead to brain degeneration and Alzheimer’s.
The two research teams discovered that oral bacteria were present in four out of 10 Alzheimer’s disease brain samples, while none were found in the brains of individuals without Alzheimer’s disease. The project has been underway for three years and has concluded that the human mouth contains more than 700 different types of bacteria.
Lead researcher Dr. Lakshmyya Kesavalu stated that to prevent bacteria from entering the blood and the brain, you should cut down on sugary foods and smoking. There are three known types of Alzheimer’s disease: early-onset, late-onset and familial Alzheimer’s disease, which is determined by genetics.
Bacteria have been shown to affect individuals with late-onset Alzheimer’s disease, said Kesavalu.
The best ways to prevent gingivitis and bacteria in the mouth is to be consistent and diligent with your home care. This means flossing and brushing every day and making regular trips to see your dentist. Additionally, brush up on your family history both of Alzheimer’s and periodontitis. Some patients can keep the utmost care of their teeth but still are unable to avoid bacteria because of genetic history.
Consider purchasing a Rotadent toothbrush if your dentist has already indicated that you are developing gingivitis or periodontitis. The Rotadent is a revolutionary new toothbrush that gets in between the teeth using a 360 degree motion.
There is also a very effective tooth and gum tonic that is a strong alternative to traditional mouthwash like Listerine.
While the research linking Alzheimer’s to gingivitis is still inconclusive, proper brushing and tooth care is always a necessity.
A constant shifting of the earth’s tectonic plates causes a buildup of stress in the crust, which eventually leads to earthquakes. To study these deep stresses, scientists installed the San Andreas Fault Observatory at Depth: a borehole drilled two miles deep into the fault, along with an underground seismic monitoring station. A ton of rock samples were extracted in 2007, the first time researchers gained access to a major fault at the depths where quakes start.
Long-term forecasts remain a distant goal, but close monitoring of faults is yielding clues that could signal an impending earthquake. In 2008, researchers examining data from the San Andreas Fault Observatory at Depth reported detectable changes in the way seismic waves traveled through fault rock in the hours before two quakes. The Southern California Earthquake Center says there is a 46 percent chance that the state will see a magnitude 7.5 or greater quake in the next 30 years.
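As a side note on how such long-horizon odds relate to yearly odds: under a simple constant-rate (Poisson) model, which is an assumption made here for illustration and not necessarily the Southern California Earthquake Center's methodology, a 46 percent chance over 30 years implies an annual probability of roughly 2 percent:

```python
import math

def annual_rate(p_total, years):
    """Constant-rate model: P(at least one event in t years)
    = 1 - exp(-rate * t). Solve for the annual rate."""
    return -math.log(1 - p_total) / years

rate = annual_rate(0.46, 30)
print(f"Implied annual rate: {rate:.3f}")            # ~0.021
print(f"Chance in any one year: {rate * 100:.1f}%")  # ~2.1%
```

Real seismic hazard models are not memoryless (stress accumulates between quakes), so this conversion is only a rough illustration of the arithmetic.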
SLOW AND STEADY
Not all fault movements are violent tremors like the four monster quakes that have shaken San Andreas since 1690. Portions of the fault slide slowly and continuously as the North American Plate shifts northward an inch per year relative to the Pacific Plate. Scientists working for the U.S. Geological Survey examining samples from the San Andreas Fault Observatory at Depth have reported that these slow-moving sections contain talc, a soft mineral, which may account for their slipperiness.
Powerful quakes have lasting impacts on a fault zone, creating new fractures and stresses that leave some areas prone to future damage. A six-year study of seismic activity along the San Andreas Fault found that the 2004 Parkfield earthquake caused breakage and microcracking in shallow rock layers that changed the speed at which seismic waves moved through the area, making it more vulnerable to tremors for more than three years afterward.
L-Citrulline is a non-essential amino acid that the body can biosynthesise internally; however, oral consumption appears to enhance its effects. Whether biosynthesised or consumed orally, L-Citrulline is converted by the body into Arginine, which then plays an important role in many facets of human physiology.
Found to be both beneficial for strength and endurance athletes, L-Citrulline enhances peripheral blood flow to the working muscle, meaning a greater delivery of oxygenated blood can be achieved during exercise.
For the endurance athlete, this means more nutrient delivery and greater toxic waste removal, whereas for the lifting athlete it may also aid in increasing muscle growth due to an increase in fascial stretching and sarcoplasmic volume, alongside improved performance.
The great thing about L-Citrulline is that it is not degraded once it enters the stomach; instead, it passes into the system, where it is utilised far more effectively than Arginine itself.
L-Citrulline has also been linked to improved removal of ammonia during exercise, a toxic byproduct that contributes to exercise-induced fatigue. When combined with malic acid, L-Citrulline Malate also appears to enhance ATP regeneration within the body's energy system known as the Krebs Cycle.
For best results use up to 10g daily or 5-8g in the pre-workout phase.
For best results take 1 - 3 grams 2 to 3 times daily.
Great advances have been made, and are still being made, in technology that considerably affects our daily lives. Technology has brought changes not only to our daily lives but also to major industries and sectors such as business, education and healthcare. Many types of technology are available to use, and new ones are still being developed. If you want to know about some of the modern technology currently in use, you have come to the right place. This article takes a look at some of it.
A look at some of the modern technology: 3D printing technology
This section looks at one widely used piece of modern technology: 3D printing. This technology has various applications and advantages, and many industries are turning to incorporate it.
- So what is 3D printing technology? Also known as additive manufacturing (AM), 3D printing refers to the process used to synthesize a three-dimensional object: almost anything that can be printed by a 3D printer. Successive layers of material are formed under computer control to create the object. Objects of various shapes and sizes are produced from digital model data, which can come from a 3D model or from another electronic data source such as an Additive Manufacturing File (AMF). 3D printing has many benefits, and many industries are adopting it.
- Deep learning is a subfield of machine learning concerned with sets of algorithms inspired by the structure and function of the brain, known as artificial neural networks. It is also called deep structured learning, hierarchical learning and deep machine learning, and it has many benefits.
- Another technology that has been widely used since its inception in various industries is virtual reality. Virtual reality is one of the most popular technologies and has many useful real-life applications. The term describes a three-dimensional environment generated by a computer; almost any type of environment can be created this way. Using the right devices, a person can explore and interact with this computer-generated environment, becoming immersed in the virtual world while remaining in real life. The person can perform a variety of actions and can use and manipulate objects within the virtual environment. Virtual reality has applications in many industries, such as sports, entertainment, military training, medicine, the arts and architecture, and it can pave the way to discoveries in areas that directly or indirectly affect our daily lives.
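To make the "artificial neural network" mentioned in the deep learning point above concrete, here is a minimal, illustrative forward pass of a tiny two-layer network in plain Python. The weights and inputs are arbitrary example values, not a trained model:

```python
import math

def dense(inputs, weights, biases):
    """One fully connected layer: weighted sums plus a bias per neuron."""
    return [sum(w * x for w, x in zip(row, inputs)) + b
            for row, b in zip(weights, biases)]

def relu(values):
    """Common activation: clamp negative values to zero."""
    return [max(0.0, v) for v in values]

def sigmoid(v):
    """Squash a score into the (0, 1) range."""
    return 1 / (1 + math.exp(-v))

# Tiny 2-input -> 3-hidden -> 1-output network with fixed example weights
x = [0.5, 1.0]
hidden = relu(dense(x, weights=[[0.1, 0.4], [-0.3, 0.8], [0.5, 0.2]],
                    biases=[0.0, 0.1, -0.2]))
output = sigmoid(dense(hidden, weights=[[0.7, -0.6, 0.9]], biases=[0.05])[0])
print(round(output, 3))  # ~0.535
```

"Deep" learning simply stacks many such layers and learns the weights from data instead of fixing them by hand.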
Virtual reality is mainly used in situations where doing something in real life would be impractical, dangerous or too expensive. Applications in these cases range from training trainee fighter pilots and trainee surgeons to military training.
One main benefit of virtual reality is that it allows a person to take real-life risks in a safer way, within a virtual environment. The cost of virtual reality is also decreasing, making it more affordable and accessible to more industries. There are many types of virtual reality systems, but they all share certain characteristics, even if they differ in other ways. One such feature is allowing the person to view images in 3D so that they appear life-size to the user.
However, a virtual reality environment must provide the appropriate responses in real time as the person explores the surroundings. Problems arise if there is any delay between the person's action and the system's response, as such delays disrupt the overall virtual environment experience.
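That real-time requirement is often expressed as a per-frame time budget: at a given display refresh rate, the system has only so many milliseconds to react and render before the user perceives lag. The refresh rates below are typical figures assumed for the example, not taken from this article:

```python
def frame_budget_ms(refresh_hz):
    """Milliseconds available to render each frame at a given refresh rate."""
    return 1000.0 / refresh_hz

for hz in (60, 90, 120):
    print(f"{hz:3d} Hz -> {frame_budget_ms(hz):.1f} ms per frame")
```

At 90 Hz, a common VR headset refresh rate, the whole pipeline (tracking, simulation, rendering) has roughly 11 ms per frame, which is why even small delays are noticeable.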
The latest analysis of comet grains collected by NASA's Stardust mission shows that they do not contain the most primitive material in the solar system, according to a study published today in the journal Science.
The study has surprised the scientific community and elevated the importance of other sources of primitive solar system materials collected in our atmosphere.
When the Stardust mission returned to Earth in 2006, it brought home samples from a rendezvous with a comet, Wild 2. This material was the first known to have come from the outer solar system.
Scientists had expected it to be a rich source of pre-solar dust, dating from before the planets formed around 4.5 billion years ago.
'Pre-solar material acts like a time capsule, telling us about the origins of our solar system, and what was here before,' explains Museum mineralogist Anton Kearsley, part of the international team who studied the material.
'We had hoped to find lots of undamaged dust from the birthplace of the solar system on comet Wild 2, but have discovered that there are very few unambiguous pre-solar grains.'
'The Stardust samples were definitely collected from an object you'd describe as a comet - icy, with a gaseous head and tail, with an elliptical orbit and originating from the Kuiper Belt in the outer part of our solar system. But it seems that even if you sample directly from a comet itself, you won't necessarily get the oldest material.'
Our understanding of pre-solar materials comes mainly from tiny grains in meteorites, such as those in the Museum's collection, and also particularly from a type of interplanetary dust particle (IDP) scooped up by aircraft flying in the stratosphere, 15-20km above the Earth's surface.
IDPs contain curious particles of glass with embedded metals and sulfides (GEMS). Most planetary scientists believe these were formed in interstellar space before being swept into the cloud of dust and gas from which our solar system formed.
Stardust collected thousands of particles, each only micrometres across - a thousand times smaller than a pinhead. As comet Wild 2 rushed past the spacecraft at over 20,000km per hour, tiny dust grains became embedded in the collector.
When the samples returned to Earth, some of the trapped particles were thought to look like GEMS. Laboratory analysis by the American members of the team at Lawrence Livermore National Laboratory have now shown these GEMS-like structures were actually created during the impact with the collector, and not billions of years ago.
Working with Professor Mark Burchell of the University of Kent, Kearsley and colleagues fired mineral samples from the Museum's collection into blocks of silica aerogel. This replicated the action of the comet dust being swept up on the Stardust spacecraft.
The minerals were captured as tiny dark grains at the end of distinctive pale tracks, about 5mm in length (shown in the image above). The results showed that mixtures of iron-nickel metal and iron sulphide can be created during the capture of dust, and are not reliable indicators of primitive material from before the birth of the solar system.
Another indicator of primitive interplanetary dust - a distinctive form of the magnesium silicate pyroxene mineral enstatite - is also missing from the Stardust samples.
'It seems the best preserved samples of pre-solar dust are those collected in the stratosphere, and not within the material from comet Wild 2,' concluded Kearsley. 'They are almost certainly from comets, although apparently not like the one sampled by the Stardust mission.'
'The mission has shown that comets are probably very diverse, some containing material forged in the swirling disk of gas and dust which then became the solar system we know today, while others preserve even more primitive interstellar material.'
The distinction between comets and asteroids has been blurred. Comet Wild 2 may have more in common with inner solar system asteroids than outer solar system comets. The relationship between comet Wild 2 and specific classes of meteorites from asteroids will be examined in further studies.
The samples from comet Wild 2 represent the first new extra-terrestrial material brought back to Earth since the missions to the moon in the 1970s. Stardust flew further than any other sample-return mission - 4.63 billion kilometres in looping orbits that went out to between the orbit of Mars and the asteroid belt. The Stardust mission has already been highly successful, sparking a revolution in modelling the early history of the solar nebula. | <urn:uuid:8a07a61e-7991-44e9-9fe4-c5598019927e> | CC-MAIN-2014-15 | http://www.nhm.ac.uk/about-us/news/2008/january/hunt-for-real-stardust-continues18882.html | s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00317-ip-10-147-4-33.ec2.internal.warc.gz | en | 0.947975 | 933 | 4.21875 | 4 |
Geography — 20-10-2022, by Ayesha Butt

The Jinnah Barrage has been built on the ____ river.
(a) Indus  (b) Jhelum  (c) Chenab  (d) none of these

Answer: Indus. The Jinnah Barrage is a barrage on the River Indus near Kalabagh, Pakistan. It was opened in 1946.
The Nobel Prize in literature has been awarded to American singer and songwriter Bob Dylan, who first burst on the scene singing in Greenwich Village coffeehouses in the 1960s. The Swedish Academy said Dylan had “created new poetic expressions within the great American song tradition.”
The prize came to Dylan as he continued what has been known as his “never ending tour” of live performances, which in recent years have included more than a smattering of songs long associated with Frank Sinatra along with his familiar blend of rock, country and blues tunes.
SIGNIFICANCE OF THE PRIZE
Dylan’s award marks a break with tradition as it is the first time the Swedish Academy has chosen someone seen primarily as a musician. Dylan was known first for his groundbreaking songs like “Blowin’ In the Wind” and others that had an impact on the civil rights struggle in the United States. He spearheaded a revival of folk music, then embraced rock ‘n’ roll as his songs became more personal and abstract.
In his long and varied career, Dylan has frequently referred back to the country blues tradition and paid homage to performers like Robert Johnson and Blind Willie McTell. One of his earliest songs was a tribute to singer and songwriter Woody Guthrie, seen as a major influence in his formative years.
Recently Dylan, 75, has had a radio show celebrating American roots music and embraced Frank Sinatra’s style even though he clearly lacks Sinatra’s vocal range. The prize is seen as recognition of the distinctive way he has built on and expanded the range of American music.
WHAT ELSE DOES HE DO?
Dylan has branched out into other forms of art, winning plaudits for an autobiography titled “Chronicles: Volume 1” and directing several films that were not appreciated by critics. He has also exhibited several series of paintings and produced ironworks that have been shown in various galleries. His first book “Tarantula,” an experiment in prose poetry published in the 1960s, has not drawn much attention.
Dylan was one of the original 1960s rebels, mocking authority at every turn, but lately he has been scooping up traditional honors, including a Pulitzer Prize and, in 2012, a Presidential Medal of Freedom. He kept his trademark sunglasses on as President Barack Obama placed the medal around his neck. | <urn:uuid:93277d2c-2e90-4e0d-a280-f7efeb536692> | CC-MAIN-2017-04 | http://news10.com/2016/10/13/bob-dylan-wins-2016-nobel-prize-in-literature/ | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285244.23/warc/CC-MAIN-20170116095125-00543-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.988324 | 493 | 2.90625 | 3 |
China’s teachers: The unsung heroes of the workers’ movement
Teachers make up less than two percent of China’s overall workforce but they account for about four percent of total strikes and protests. Moreover, unlike workers in privately-owned factories, most teachers are employed by the state and their protests often pose a direct challenge to local government officials and administrators.
Hong Kong (AsiaNews) – Images of worker activism in China tend to be dominated by factory workers and, more recently, coal miners and steel workers. However, some of the largest, best organized and most determined worker protests of the last few years have been staged by teachers.
Teachers make up less than two percent of China’s overall workforce but they account for about four percent of the strikes and protests recorded on China Labour Bulletin’s Strike Map. Moreover, unlike workers in privately-owned factories, most teachers are employed by the state and their protests often pose a direct challenge to local government officials and administrators. China Labour Bulletin (CLB) is the first free trade union in China, founded during the riots in Tiananmen Square and now based in Hong Kong.
China Labour Bulletin’s new research report entitled ‘Over-worked and under-paid: The long-running battle of China’s teachers for decent work’ examines the deep-seated problems in China’s school system and the collective efforts of teachers to overcome low pay, lack of social security, unequal pay and working conditions, and wage arrears.
The report focuses on the huge disparities that exist in the pay and working conditions of teachers in elite schools in major cities and those in poorer rural districts who often struggle just to get by.
There is no real trade union in China that can give teachers a voice in school management or in local government policy formulation. Neither is there any institutional mechanism through which teachers can resolve their grievances, and, as a result, they are often left with no option but to take collective protest action when their interests are threatened.
The report recommends that the Chinese government:
Provide additional funding to poor rural school districts and ensure that all state school teachers are guaranteed a living wage.
Ensure that all the pay and benefit standards that teachers are entitled to are transparent and publicly available, and stipulate that performance pay should not be used as a means to reduce teachers’ basic pay.
Create a mechanism by which teachers can engage in collective bargaining to establish acceptable standards for pay, working hours, benefits etc. within individual schools and across district and regional jurisdictions.
Promote the much-needed task of trade union reform. School trade unions should be reorganized and democratic elections introduced so that they can effectively represent the teachers in individual schools and in each school district. | <urn:uuid:d5258e2c-1462-4b47-b05c-bd982665f867> | CC-MAIN-2023-14 | https://www.asianews.it/news-en/China%EF%BF%BD%EF%BF%BD%EF%BF%BDs-teachers:-The-unsung-heroes-of-the-workers%EF%BF%BD%EF%BF%BD%EF%BF%BD-movement-37912.html | s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945289.9/warc/CC-MAIN-20230324211121-20230325001121-00252.warc.gz | en | 0.963906 | 585 | 2.609375 | 3 |
In this segment of our examination of horse artillery, we will examine the organization and equipment of horse artillery batteries.
An artillery battery generally consisted of either four or six guns, and was commanded by a captain. Two guns formed a section, usually commanded by a lieutenant. During movement, each gun was hooked up behind a limber, which carried the ammunition chest, and was drawn by six horses. Each gun also had its caisson, carrying three ammunition chests, which was also drawn by six horses. These two units made up a platoon, which was commanded by a sergeant and two corporals. A battery was also accompanied by a forge, a wagon carrying the tents and supplies, and generally six additional caissons with reserve ammunition.
There were three drivers for each six-horse team, who rode the horses on the left side and held the reins for the horses on the right. A typical gun crew was made up of nine men. Where the artillery was designated as horse artillery, the crewmen each rode a horse, with two additional men acting as horse-holders in action. When there was a shortage of horses, two men could ride on each ammunition chest, but this added to the load for the horses towing the battery.
In addition to the lieutenants commanding each section, another lieutenant usually commanded the line of caissons. There was also an orderly and quartermaster sergeant, five artificers, two buglers, and a guidon-bearer.
First among the battery’s equipment, we must discuss the cannons themselves. Civil War horse artillery primarily used two different types of cannon, the 12-pounder Napoleon cannon and the 3-inch ordnance rifle. We’ll look at each separately.
The Model 1857 12-pound Napoleon cannon was the most popular smoothbore cannon used during the war. It was named after Napoleon III of France and was widely admired because of its safety, reliability, and killing power. It was particularly lethal at close range. The Napoleon reached America in 1857, and was the last cast bronze gun used by the American army. The Union version of the Napoleon can be recognized by the flared front end of the barrel, called the muzzle swell. The 12-pound in its name refers to the weight of the ammunition it fired. The Napoleon could fire solid ball, case, shell, grapeshot and canister ammunition.
The 3-inch ordnance gun was the most widely used rifled artillery piece used during the war. Unlike the cast bronze Napoleon, the 3-inch was made of iron. It was popular because of its reliability and accuracy, and was exceptionally durable. The 3-inch in its name refers to the size of the bore, or opening at the muzzle. It normally fired solid bolt, case or common shells (generally Schenkel or Hotchkiss shells), but could fire canister in an emergency. The 3-inch ordnance rifle had a range of roughly 1,800 yards. Although light by artillery standards, its weight was still significant at roughly 1,700 pounds for the cannon itself and its carriage. It was primarily produced by Phoenix Iron Company of Phoenixville, Pennsylvania.
The carriage of an artillery piece allows the cannon to be aimed, holds it in place while it is fired, and allows it to be moved where it is needed. It basically consisted of a cradle, a trail and two wheels.
The limber for field service was basically a two-wheeled cart, simply an axle, with its wheels, surmounted by a framework for holding an ammunition chest and receiving the tongue. At the back of the axle is the pintle hook, on which the lunette on the trail of the gun carriage can be keyed into place. The result is a four-wheeled cart that pivots on the pintle hook. The ammunition chest on the limber could be used as a seat for three crewmen, but in the horse artillery it was customary to spare the horses, and they would ride the limber and caisson only when necessary.
The caisson was intended to transport ammunition, and carried two ammunition chests like the one on the limber. It had a stock like that on the gun carriage, terminating in a lunette, so that it could be hooked to a limber for transportation. A caisson with its limber thus held three ammunition chests, which with the chest on the limber hauling the gun carriage made a total of four. The caisson with its drivers and crew would be under the direction of a corporal, who would report to the sergeant in charge of the gun to which the caisson was assigned. The line of caissons for the battery would be under the overall supervision of one of its lieutenants.
The battery wagon, also drawn by a limber, was a long bodied cart with a rounded top, which contained tools for the saddlers and carriage makers, spare parts, extra harness, and rough materials for fabricating parts. The limber which drew the battery wagon was a portable blacksmith shop, containing a light forge and blacksmith tools. Each battery had only one wagon and one forge, and they were expected to accompany the battery wherever it went.
Wheels for all three of the standard carriages, as well as caissons, limbers and battery wagons, were 57 inches high, and could be easily interchanged. All caissons carried an extra wheel on the back, and changing a broken wheel was a standard drill for a battery of horse artillery. | <urn:uuid:7c253298-3f17-442c-86f4-6d0892b14b22> | CC-MAIN-2017-26 | https://regularcavalryincivilwar.wordpress.com/2008/09/25/horse-artillery-part-ii/ | s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128323895.99/warc/CC-MAIN-20170629084615-20170629104615-00178.warc.gz | en | 0.980971 | 1,132 | 3.296875 | 3 |
JSON files are awesome because they store collections of data in a human-readable format. However, reading a JSON file can be a pain if it is minified.

Take a minified JSON file as an example. A computer can easily read it, and even a human can still make sense of it, but it is much easier to read when the JSON is properly formatted and indented.
You can use most text editors with some plugins to display it with proper formatting. However, if you are stuck in a terminal or if you want to do it in your shell script, things will be different.
If you have a minified file, let me show you how to pretty print a JSON file in the Linux terminal.
Pretty print JSON with jq command in Linux
jq is a command line JSON processor. You can use it to slice, filter, map and transform structured data. I am not going into detail about using the jq command line tool here. On Debian/Ubuntu-based distributions, you can install it with:
sudo apt install jq
Once you have it installed, use it in the following manner to pretty print a JSON file on the display:
jq . sample.json
You may also be tempted to use cat, but I believe that is one of the useless uses of the cat command.
cat sample.json | jq
Keep in mind that the above command will not impact the original JSON file. No changes will be written to it.
You probably already know how to redirect the command output to a file in Linux. You probably also know that you cannot redirect to the same file and the tee command is not guaranteed to work all the time.
If you want to modify the original JSON file with pretty print formatting, you can redirect the parsed output to a new file and then replace the original JSON file with it.
jq . sample.json > pretty.json
Bonus: Minify a JSON file with jq command
Let’s take a reverse stance and minify a well formatted JSON file. To minify a JSON file, you can use the compact option -c.
jq -c < pretty.json
You can also use cat and redirection if you want:
cat pretty.json | jq -c
Using Python to pretty print JSON file in Linux
It’s more likely that you have Python installed on your system. If that’s the case, you can use it to pretty print the JSON file in the terminal:
python3 -m json.tool sample.json
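If you would rather do the same thing inside a script than shell out to json.tool, the standard-library json module gives you both directions. A minimal sketch follows; the sample data here is made up purely for illustration:

```python
import json

# A minified JSON string, as you might read it from a file.
minified = '{"name":"Alice","skills":["Linux","jq"],"admin":false}'
data = json.loads(minified)

# Pretty print: indent=4 matches the default output of `python3 -m json.tool`.
pretty = json.dumps(data, indent=4)
print(pretty)

# Minify again: compact separators drop the spaces after ',' and ':'.
compact = json.dumps(data, separators=(",", ":"))
print(compact)
```

Round-tripping like this changes only the whitespace, not the data, so `json.loads()` on either form gives you back the same structure.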
I know there are other ways to parse a JSON file and print it with proper formatting. You may explore them on your own, but these two are sufficient to do the job, which is to pretty print a JSON file.
Kruja (Kru-yah) town is built around the foot of its sheer Fortress rock, 610 metres high. The castle, the citadel of the national hero Skėnderbeg, is both shrine and monument to the aspiration of the Albanian nation.
It is built at a height of 548 metres on an isolated spur of the limestone mountain-wall of the Kruja range, and has spectacular views of the surrounding region. Kruja is thought to take its name from the Albanian word krua, meaning a spring. The citadel was used by the Illyrian tribes centred on nearby Zgėrdhesh as early as the 6th century BC, and became the main Illyrian castle in the area after Zgėrdhesh was abandoned in the 4th century of our era. The first Albanian feudal state, the Kingdom of Arbėr, was formed here around the year 1190, with Kruja as an important part of its defensive system.
It is mentioned as an important castle in the writing of the Byzantine chronicler Georgius Akropolitis, who in 1245 called it Kroas, and it belonged to Gulam, the lord of Abanon. At the end of the 13th century it was taken by Charles of Anjou, who repaired the walls - after which it passed to the Thopia family.
In 1396 the Ottoman Turks occupied Kruja for the first time, but soon withdrew and did not reappear for another 20 years. In 1430 an uprising started under the leadership of Gjon Kastrioti, but it was crushed by Ottomans, (Gjon Kastrioti is the father of the Albanian national hero Gjergj Kastriot Skėnderbeg).
Critiques
adamasao1 (466) 2006-11-26 21:26
The shot is nicely composed, and although I don't often like shots taken through windows and the like, I think it works in this case.
The note is informative for a history buff, too.
worldcitizen (8646) 2006-11-26 21:26
This is a beautiful castle, and I like your presentation, looking through the arch. I think it would have been nice to get a more complete view of the arch, but maybe you didn't have enough room. It is a nice composition, including some people visiting the castle, and a lovely sky. TFS.
pranab (5354) 2006-11-27 3:05
nice framing. good color and contrast, you have used the light very wisely. well done.
danbachmann (1746) 2010-06-06 6:16
Wonderful framing from the house wall onto the well lit museum plus a good note. | <urn:uuid:2ef3f448-0d18-4368-8638-c83f24095193> | CC-MAIN-2016-30 | http://www.trekearth.com/gallery/Europe/Albania/photo520375.htm | s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257823947.97/warc/CC-MAIN-20160723071023-00133-ip-10-185-27-174.ec2.internal.warc.gz | en | 0.961385 | 589 | 2.765625 | 3 |
A more unusual example is to be found in the Richmond Oval, an ice skating arena built in 2010 for the Vancouver Winter Olympics. The Mayor of Richmond (a satellite city to Vancouver) called for the building to be an expression of local materials, a desire that engineers Paul Fast and Gerry Epp (Fast + Epp) took literally and to giant scale. In designing huge structural beams able to span the 90 metre width of the skating space without intermediate column support, the engineers decided to eschew conventional steel and concrete solutions and to fabricate giant glued laminated (glulam) timber beams instead. These are configured in pairs to form curving V-shaped composite beams that not only impressively span the vast breadth of the arena, but also contain all of the building’s mechanical and electrical services (lighting, heating and ventilation, sprinkler system) within the hollow of the V.
Remarkable though this integrated solution is, it is the smaller, 13 metre ‘wood waves’ that span perpendicular to the main beams that most expressively demonstrate the benefits of the wood in the particular circumstances of this building type. Comprising standard sawmill sections, the short lengths of softwood are formed into irregular curved V-shaped beams and spaced to allow air penetration to the sound absorbent material that lines the inner faces of the V. These intermediate beams provide the acoustic dampening so often omitted from these large, reverberant volumes, but also a perception of warmth that is not normally something one associates with ice skating venues.
The big win is that it makes use of some 6,000 beetle-affected trees, an endemic problem in pine trees in parts of British Columbia and one which is too often resolved by simply burning the infected material. Here Fast+Epp have demonstrated that the wood’s structural capacity (and hence its economic value) remains unaffected by disease, and have used this to advantage in their ingenious solution to an engineering challenge. That this has also benefited the local forestry sector is a message for architects, engineers and foresters in the UK where our larch, oak and Scots pine trees are also variously affected by disease. Edinburgh Napier University’s Centre for Wood Science and Technology has now carried out considerable research and testing to show that it is still possible to make intelligent - indeed innovative - use of these species by, for example, fabricating large structures entirely from relatively short lengths of material.
This image is a CT scan of a mouse's face — but not just any mouse. Scientists at Berkeley have identified thousands of small DNA regions responsible for influencing the development of facial features — and they used this insight to modify the faces of embryonic mice. The question now is, are humans next?
Top image: Harris Morrison, MRC Human Genetics Unit, Institute of Genetics and Molecular Medicine, University of Edinburgh.
Human faces are incredibly distinctive. But why? New research shows that our unique facial features are forged by more than 4,000 small regions of DNA — but it only takes a few genetic tweaks to subtly alter the shape of our faces.
It’s pretty obvious that facial features are hereditary. Just take a look at family resemblances. The shape of our faces is clearly influenced by our genetics, but scientists have only been able to isolate a small fraction of the genes responsible. Based on the complexity of our features, and those of other animals, it’s clear that there’s plenty more going on at the genetic level.
As Axel Visel of the Lawrence Berkeley National Laboratory in California and his colleagues have recently pointed out, there are thousands of specific non-coding regions of genomes that are working to influence the activity of facial genes. These short stretches of DNA act like switches, turning genes on or off.
These regions are called distant-acting enhancers, or transcriptional enhancers. Some scientists stupidly refer to them as “junk DNA” because they were initially thought to lack function, like encoding proteins. But despite the fact that these transcriptional enhancers are physically located hundreds of kilobases away from their target genes, they appear to regulate the spatial patterns, levels, and timing of gene expression in the normal development of facial features.
To determine this, Visel’s team experimented on genetically modified mice to see if they could alter their facial features during embryonic development.
Okay, to be sure — mice aren’t people, but the same genetic processes apply. As Visel told the BBC, “We're trying to find out how these instructions for building the human face are embedded in human DNA. Somewhere in there there must be that blueprint that defines what our face looks like."
By looking at mouse embryos, Visel could see where — as facial features develop — these switches influence the activation of various face-building genes.
To see if they were on the right track, the researchers removed three of these genetic switches from developing mice. Then, by using a technique called optical projection tomography and CT scans, they studied the resulting facial shapes. By comparing the genetically modified mice with the normal ones, they noticed that some mice developed either longer or shorter skulls, while others developed wider or narrower faces. The experiment showed that particular switches can affect the shape of skulls in significant ways.
So does this mean we’ll eventually be able to genetically design human faces at the embryonic stage? Visel himself says it’s unlikely that DNA could be used in the near future to predict someone’s exact appearance, or that parents could predetermine the way their baby looks.
Indeed, these on/off switches merely provide a very blunt brush to affect the development of facial features. It’ll be quite some time before we develop the techniques and insights required to forge a human face in precise ways.
More practically, however, Visel’s research could be used to predict — and even mitigate — certain birth defects, like cleft palates. These types of interventions are quite a ways off, but the new research indicates that it may someday be possible.
Read the entire study at Science: “Fine Tuning of Craniofacial Morphology by Distant-Acting Enhancers.” | <urn:uuid:5a6392e5-781b-4872-a17e-af240718bd79> | CC-MAIN-2021-31 | https://gizmodo.com/scientists-can-now-genetically-modify-facial-features-1453368910 | s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046150134.86/warc/CC-MAIN-20210724063259-20210724093259-00523.warc.gz | en | 0.948978 | 777 | 3.6875 | 4 |
"First Generations: Women in Colonial America," by Carol Berkin.
This book, "First Generations," discusses the lives of women who immigrated to America from other countries, and Native Americans that were here when the immigrations started. It then follows through two centuries of life in America, to show how women's lives changed, improved, and/or degraded during this time. It tells in detail how people lived in the 17th and 18th centuries, and particularly how women lived.
It is a compelling picture of everyday life in Colonial times, and of what women had to endure during their short lives. These are women of different ethnic backgrounds, financial circumstances, and areas. Berkin weaves them together to form a tapestry of what life was like for early American women, and it is a fascinating book.
For the first time, we can catch a glimpse of Colonial America from the women's point-of-view, but not just colonists, Berkin also writes about powerful Native American women, black women, and immigrants from several countries, not just England. It is a more complete picture of early women's lives, and an interesting book to read.
The author's arguments are not as much about the women as about their rights, or lack of them. She makes it clear that life was difficult for all these women, indeed for their entire families. "The short and often brutish life these immigrant women faced was not a uniquely female experience" (Berkin 7). Women's rights changed from century to century, and the author follows these everyday rights, allowing us to understand just what women faced as they aged, had children, and remarried in this society.
In 17th Century America, single women retained their rights, and were able to make contracts, sue, and keep their possessions. This seems normal to us today, but then, it was quite an achievement, because as soon as these women married, they lost everything, even the clothing on their backs became the property of their husbands. When their husbands died, women retained property and rights through the "dower right," unless of course, they married again, and then the property reverted to their new husband, and husbands often stipulated this specifically in their wills. "After her death or remarriage the Land is to return to my son Wm. Marriott" (Berkin 19).
Most women enjoyed good relationships with their loved ones, but there were some men who looked at their wives as their personal property, to use or abuse as they wished. One man, found beating his wife said she was "his servant and his slave" (Berkin 31). However, most families worked hard together, and enjoyed their leisure time together too.
Berkin shows us the difference between societies through a Dutch woman who moved to New Amsterdam when she was a young woman, Margaret Hardenbroeck. "Hardenbroeck moved to New Amsterdam from the Netherlands in 1659. She served as agent for a cousin who was an Amsterdam trader, and quickly became engaged in the colonial fur trade. Even when Hardenbroeck married, under the Dutch legal system she preserved both her legal identity and economic independence; as partners, she and her second husband, Frederick Philipsen, built a transatlantic packet line. But while the English takeover of the colony did not restrain the Philipsen firm's economic growth, it did destroy Hardenbroeck's legal rights" (Johansen).
By the 18th century, women's rights had deteriorated, and many women did not even enjoy their own rights before marriage. Eliza Lucas, from South Carolina, ran her father's plantations from the age of sixteen, after her mother died. She was unusual, because her father not only trusted her with the running of the "family business," he also encouraged her to create new enterprises, which she staunchly defended as "hers." She created a "large plantation of Oaks," and "asserted her right to any profits in timber it generated" (Berkin 131). This was extremely unusual at a time when women had no rights politically, and few rights under marriage.
Another feature of this book is how the author follows history through two centuries, to show how lives changed from early colonialism, to "the rise of gentility." Women began to have more leisure time, and people were no longer "puritans," they were "yankees," and the economy changed from "moral" to "capitalism." (Berkin 139). It was not just that the country was changing from agricultural to urban, and from religious to economic, people were acclimating to the "new world," and becoming natives themselves.
Still, "...the household...remained the primary setting for white women's activities in the eighteenth century" (Berkin 139). This tells us that women still worked in the home but had more time to enjoy quilting bees, sewing sessions, and spent less time in the fields.
Berkin did indeed produce new information in her field, because she used sources other than the traditional "letters, diaries, sermons, newspapers and political tracts" (Clarke 26). However, many critics believe that Berkin did not do additional research; rather, she accumulated research from a variety of sources that had never been put together before. "Academics will note that Berkin doesn't present any new research of her own here. Rather, she gathers the work of 'many talented scholars who have... recovered for us the... experiences of colonial women,' transforming it into a highly readable narrative history" (Clarke 27).
Berkin herself put it this way: "...historians are perhaps the last of the independent artisans. They write about what interests them and employ the methods and theories they know best or what seems most appropriate" (Berkin viii). In these two sentences, we learn about Berkin's interests, her methods, and how she wrote this book. She did produce new information. She also influenced other historians to look at "non-traditional" documents to help learn more about the past, and how people lived, worked, played, and improved their lives.
Berkin's accounts of everyday life and women in general do agree with other who have written on the subject, but her book adds detail and description to the other accounts. Most "everyday life in colonial times" books tend to generalize about how people lived, but Berkin's book breaks America down by area, and shows how different areas, and their weather, surroundings, and even settlers affected how people lived. She differentiates between frontier households, urban households, and rural households, and shows how women made additional income in each of these areas.
She also discusses leisure time, birthing methods, and more intimate details of early lives. She even discusses menstruation rights among the Native Americans, and how women came together for births, weddings, and funerals. Her book delves deeper into real live, and gives more intimate details of how women interacted with each other, with their families, and in society.
The other thing her book does is bring us firsthand experiences of races other than white, so we have a more complete picture of the people who populated the early American colonies. Mary Johnson, a black slave, lived as a free woman with her husband in Maryland after their marriage. Her experiences with her white neighbors were positive, and she and her husband enjoyed a good life, but that would change.
"Also, as free blacks, the Johnsons made court appearances and were free to hire both black and white bound servants. But by 1672, policies and laws passed by a new generation of English colonists would ensure that the rapidly increasing numbers of Africans in colonial America would be unable to look forward to entering the world of free men and women. Tellingly, when Mary's grandson John died in 1706, the Johnson family disappeared from the historical record" (Clarke 26).
The difficulties of interpreting this subject come mostly from a lack of records and, as Berkin herself noted, from the fact that historians tend to work in areas where they feel comfortable, rather than trying new areas. Many old records have been destroyed by misunderstanding, family error or disinterest, and natural disasters such as fire. As we are left with fewer historical documents, the study of history will become even more difficult, and more of a science than an art form.
Researchers are relying more and more on non-traditional research materials, like diaries, court records, wills and even linens and sewing records. This is one reason there are still new texts produced on topics that have been discussed repeatedly. Berkin chose to use existing research, but bring it together in a form that had never been done before, which is another way to interpret history, and keep finding and publishing new information. "As Berkin points out in her discussions of Wetamo, leader of the Wampanoag tribe, or Mary Johnson, an African brought to the Chesapeake, we have evidence of only the most public outlines of most women's lives -- often only birth and death dates. For tens of thousands of women in colonial America, we have even less" (Johansen).
Does the author produce a good argument? Yes, the author produces an excellent argument, with plenty of examples that prove her points and add depth to the book. For example, "a…
The potential financial benefits of innovation in carbon capture and storage (CCS), marine energy, and electricity networks and storage have been outlined in three government reports published today.
The Technology Innovation Needs Assessments found that: innovation in CCS could reduce UK energy system costs by between £10bn and £45bn by 2050, marine energy by between £3bn and £8bn; and electricity networks and storage by between £4bn and £19bn.
The Department for Energy and Climate Change (Decc) aims to use the findings of the reports to target public and private investment to stimulate innovation. The work was carried out by the Low Carbon Innovation Coordination Group, which is made up of Decc, the Department for Business, Innovation and Skills, and low carbon research bodies such as the Energy Technologies Institute (ETI) and the Engineering and Physical Sciences Research Council.
Further details of the report can be found here.