The above transcription of crest is a detailed (narrow) transcription following the conventions of the International Phonetic Association. An example use of crest in speech by a native speaker of American English: “… in phase crest with crests and troughs …” A crest is a symbol that is used to identify a person, family, or organization. The word crest occurs in English on average 2.6 times per one million words; this frequency places it in the study list for the C2 level of language mastery according to CEFR, the Common European Framework of Reference.
Understanding Flat Rate

As you start to explore a career as a transportation technician, you may hear the term “flat rate” thrown around quite a bit, but what does it mean? Essentially, flat rate is how most automotive and collision shops bill for work, and how the technician gets paid for the job. It is similar to a commission pay structure in a sales environment. Under flat rate, each job has a predetermined number of hours associated with it, and that is how the job is billed, regardless of how long it actually takes. For example, a shop might bill two hours to replace a radiator. The technician then gets paid two hours for that job, no matter how much clock time it takes to complete.

Pros:
- High pay ceiling, since a technician can get paid for more than 8 hours in a day.
- Gives technicians control over their pay.
- Proper tools and environment can allow technicians to become even more efficient.
- Training can have a direct impact on pay, because you can learn more efficient methods to complete a job.
- Repetitive tasks can help technicians build more efficiency into their work.

Cons:
- A poor shop environment (disorganized, poorly equipped) can negatively affect one’s pay.
- External factors can slow down one’s efficiency (waiting for parts, paperwork).
- Hard to gain efficiency on some jobs (if a job pays half an hour, it may be hard to get it done in less time).
- No pay during down times (if there’s no work, there’s no compensation).
- Flat rate is less predictable than an hourly or salary wage.

Other Things to Know:
- Flat rate is common in automotive and collision, but is less common in diesel, fleet and equipment repair because it is harder to estimate repair times.
- When you are entering the field, it is probably best to start on an hourly or salary basis, since most entry-level techs will struggle with efficiency and, therefore, not do well on flat rate.
- In a flat-rate environment, mistakes are closely monitored. The shop doesn’t benefit from a technician completing a job quickly if the vehicle is not fixed properly. A technician is often not compensated for a vehicle that comes back, and too many mistakes can affect job security.

Whether or not flat rate is best for you is a personal choice. If you are confident in your skills and prefer a fast-paced environment, there is an opportunity to do well on flat rate. Some flat-rate environments can also foster good teamwork, because pooling skills, tools and hours can benefit all of the technicians involved. On the other hand, if you prefer a methodical work environment, it may be better to be in an hourly or salary position. Work that requires a great deal of craftsmanship, such as vehicle restoration, is often paid hourly, since rushing the work can negatively affect the outcome. Ultimately, technicians need to consider their passion, working style and skills to determine which pay structure is best for them.

Article sponsored by TechForce Foundation.
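The arithmetic behind flat rate is simple but worth seeing concretely. Here is a minimal sketch; the $30-per-hour rate and the clock times are hypothetical figures for illustration, not from the article:

```python
def flat_rate_pay(billed_hours: float, rate: float) -> float:
    """Under flat rate, pay is fixed by the billed (book) hours, not clock time."""
    return billed_hours * rate

RATE = 30.0  # hypothetical flat rate, in dollars per billed hour

# The article's example: a radiator job billed at 2.0 hours
pay = flat_rate_pay(2.0, RATE)  # $60.00, however long the job takes

# Effective hourly earnings depend entirely on how fast the job gets done
fast = pay / 1.5  # finished in 1.5 clock hours
slow = pay / 3.0  # took 3.0 clock hours
```

The same billed job pays $60 in both cases; efficiency alone moves the effective hourly rate from $20 to $40, which is the incentive structure the pros and cons above describe.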
Fracking Linked to Cancer-Causing Chemicals, Yale Study Finds Yet another study has determined that hydraulic fracturing, or fracking, might be a major public health threat. In one of the most exhaustive reviews to date, researchers from the Yale School of Public Health have confirmed that many of the chemicals involved and released by the controversial drilling process can be linked to cancer. Yale researchers have unpacked "the most expansive review of carcinogenicity of hydraulic fracturing-related chemicals in the published literature."Pixabay "Previous studies have examined the carcinogenicity of more selective lists of chemicals," lead author Nicole Deziel, Ph.D., assistant professor explained to the school. "To our knowledge, our analysis represents the most expansive review of carcinogenicity of hydraulic fracturing-related chemicals in the published literature." For the study, published in Science of the Total Environment, the researchers assessed the carcinogenicity of 1,177 water pollutants and 143 air pollutants released by the fracking process and from fracking wastewater. They found that 55 unique chemicals could be classified as known, probable or possible human carcinogens. They also specifically identified 20 compounds that had evidence of leukemia/lymphoma risk. One of the scarier parts from this study is that the researchers could not completely unpack the health hazards of fracking's entire chemical cocktail. More than 80 percent of the chemicals lacked sufficient data on cancer-causing potential, "highlighting an important knowledge gap," the school noted. The unconventional drilling rush in the U.S. has expanded to as many as 30 states, spelling major consequences to the air we breathe and the water we drink. The Wall Street Journal reported in 2013 that more than 15 million Americans lived within a mile of a well. The biggest concern is for people and especially children with fracking operations right in their backyards. 
In fact, Environment America found that more than 650,000 kindergarten through 12th grade children in nine states attend school within one mile of a fracked oil or gas well. “Because children are a particularly vulnerable population, research efforts should first be directed toward investigating whether exposure to hydraulic fracturing is associated with an increased risk,” Deziel said. Per the study, "Childhood leukemia in particular is a public health concern related to [unconventional oil and gas] development, and it may be an early indicator of exposure to environmental carcinogens due to the relatively short disease latency and vulnerability of the exposed population." According to the school, the researchers are now taking air and water samples in a community living near a fracking operation. They are testing for the presence of known and suspected carcinogens and will determine whether these people have been exposed to these compounds, and if so, at what concentrations.
One of the primary tasks of the immune system is to distinguish self from non-self. "Self" includes the normal proteins in the body, while "non-self" is a sign of potential harm, like a parasite, virus, or cancer cell. As the immune system's T cells mature in the thymus (that's why they're called T cells), any cells unlucky enough to respond to "self" are killed off, lest they cause autoimmune diseases. This process is known as negative selection. Like most biological processes, though, negative selection is not foolproof; not every autoreactive T cell ends up dying. To catch them, the body produces Treg (for T regulatory) cells, which provide another safeguard against autoimmunity. Their function is to keep the other T cells in check. However, the mechanism by which certain T cells mature into Treg cells is not yet well-defined. New research has identified how these specialized cells are produced, even though they respond to proteins normally found in the body. All T cells express a receptor (creatively called the TCR, for T cell receptor) that helps them identify cells that are infected or defective. If this receptor ends up recognizing proteins found on healthy cells, the T cell was thought to be induced to commit suicide before it could leave the thymus. But the new work shows that T cells with very high-affinity receptors—those that are extremely sensitive to the proteins they recognize—actually survive. These cells get sent down the pathway that leads to a Treg fate. That fate seems to require a second set of receptors (members of the tumor necrosis factor receptor superfamily). Three different members of this superfamily appear to act in concert; blocking any one of them doesn't prevent the eventual formation of Treg cells, but blocking all of them does. This could be just to make sure that Treg cell development happens properly—a "not putting all of your eggs in one basket" type of thing.
Or it could be that later on, once the Treg cells are fully mature, they will need the activity of all three receptors. The authors who made this new TNFRSF connection speculate that multiple receptor types might be needed to regulate the Treg cells themselves. After all, you don’t want immunosuppression during an inflammatory response to pathogens. Just because they're regulators doesn't mean they can't get carried away, too.
- AR Level: 5
- AR Quiz Number: 162146
- AR Quiz Points: 1
- Author: Stephanie Paris
- Guided Reading Level: T
- Page Count: 64
- Publisher: Teacher Created Materials
- Year Published: 2013
- Grade Level: 5
- Type of Book: Non-Fiction
- Subject: Space (Children's/YA); Educational: English language: readers & reading schemes

Sixty years ago, no one had traveled beyond planet Earth. But two enemies were competing in a race to space. Discover the exciting story that leads from WWII, through the Cold War, and, at last, to the moon!

This book is part of these Book Collections

Our school reading collections supply a comprehensive assortment of titles that are perfect for adding depth to classrooms and school libraries. Ask us about our Custom Book Lists. We can coordinate collections based on interest, reading level, state standards, and more.
David Stephenson introduces Medieval Wales c.1050-1332: Centuries of Ambiguity. Long after it was published in 1911, Sir John Edward Lloyd’s History of Wales from the Earliest Times to the Edwardian Conquest remained the most influential book on the medieval centuries in Wales. The picture painted by Lloyd was in essence simple: a succession of great Welsh rulers, most of them from Gwynedd, working steadily towards the creation of a single Welsh principality, which was finally achieved when Henry III formally acknowledged Llywelyn ap Gruffudd as Prince of Wales in 1267. But with terrible rapidity, Henry’s successor Edward I brought about the downfall and death of Llywelyn in wars of 1277 and 1282. Thereafter, Wales was subjected to English rule. Lloyd’s subsequent biography of Owain Glyn Dŵr made it clear that a long period of oppression at the hands of the conquerors followed the death of Llywelyn. Later historians finessed Lloyd’s picture, but in essentials left it undisturbed. My own book, Political Power in Medieval Gwynedd, put the region and its princes at centre stage in the development of medieval Wales. As I broadened the scope of my research, however, I realised that the focus on great hegemonic princes and on Gwynedd was actually producing a distorted picture of Welsh medieval development. My detailed study in Medieval Powys revealed that the rulers of that land had objectives other than subservience to Gwynedd, and were often prepared to make use of English support to achieve those objectives. The research that followed, much of which underlies my most recent book, Medieval Wales c.1050–1332, revealed a network of Welsh magnates who were hostile to the pretensions of the Gwynedd princes to extend their rule in Wales. Many of them had made marriages with English noblewomen, and almost inevitably, Edward I’s assault on Llywelyn was supported by many of those magnates. 
And in the half-century that followed, members of the Welsh administrative elite remained central to the politics of Wales. Medieval Wales c.1050–1332 introduces readers to individuals and families whose stories run counter to the established narratives. This is nowhere more evident than in the case of Hywel ap Meurig, a man of the Middle March, and his descendants. Partisans of English kings and lords of the March, they wielded great power across much of Wales, and rose ultimately to be Marcher lords in their own right. It is about their story that I plan to write in more detail in the near future. David Stephenson is Honorary Research Fellow in the School of History, Philosophy and Social Sciences, Bangor University. His many contributions to Welsh history include Political Power in Medieval Gwynedd, and Medieval Powys 1132–1293.
By Gar Alperovitz · 12 Mar 2012

On Monday (March 6, 2012), Bloomberg News estimated that Mark Zuckerberg, Facebook's 27-year-old founder, will be worth about $21 billion based on his company's forthcoming initial public offering. Although he won't qualify (yet) for a slot among the planet's richest 20 people in the Bloomberg Billionaires Index, Zuckerberg will still enjoy iconic status as an entrepreneur of mythic proportions. Indeed, the Wall Street Journal credited Facebook with creating "a new way of living," and hailed the company—thought to be worth $100 billion—as a "prototypical American success story, complete with technological brilliance and a fair amount of drama." Bill Keller of the New York Times similarly "marveled" at Mark Zuckerberg's "imagination and industry." But does Zuckerberg deserve all this money? To what degree does his shrewd business idea—rather than the conditions that allowed it to happen—"deserve" credit for creating this enormous bounty? Some obvious questions: Where would he be without the Internet? Or the computer? Or all the many other publicly financed technologies that made Facebook possible? Certainly, he "deserves" something, but how do we gain perspective on the bounty created, on the one hand, by public investment; and on the other, by smart entrepreneurs who run off with the lion's share of the benefits of such investments? Take the Internet itself: The first large-scale computer network, the ARPANET, was launched in the late 1960s by the Department of Defense. Between the mid-1980s and the mid-1990s the National Science Foundation spent $200 million to build and operate a network of regional supercomputing hubs called the NSFNET. Connected to the ARPANET, this network established Internet access for nearly all U.S. universities, making it a civilian network in all but name. The rest was history. Much else of Silicon Valley's enormous wealth also originates from taxpayer investment.
Indeed, the Mark Zuckerbergs of the world might still be working with vacuum tubes and punch cards were it not for critical research and technology programs created or financed by the federal government after World War II which led to semiconductors, solid-state electronic devices, integrated circuits and computers. Even more powerful, but less often realized, is the role of socially created knowledge in general—including the long, long historical development that produced advanced mathematics, modern chemistry, physics, metallurgy and the many specialized fields that over the last century have led to the technologies and information systems that are the precondition of today's computer and Internet realities. Mark Zuckerberg and an equally intelligent individual working 30 years ago might have the same human capital and might work with the same commitment, risk and intelligence. But the individual working 30 years ago simply did not have the fruits of society's general, slowly built "stock of knowledge" to be able to develop and market a social-networking platform. The popular, conventional view of technology is one in which progress is viewed as a sequence of extraordinary contributions by "great men" (occasionally "great women") and their heroic innovations. But historians of technology have carefully delineated the incremental and cumulative realities of how most technologies actually develop. In general, a specific field of knowledge builds up slowly through diverse contributions over time until—at a particular moment when enough has been established—the next so-called "breakthrough" becomes all but inevitable. Often, many intelligent people reach the same point at virtually the same time, for the simple reason that they all are working from the same developing information and research base. The next step commonly becomes obvious (or if not obvious, very likely to be taken within a few months or years). We tend to give credit (and often a vast fortune!)
to the person who gets there first—or rather, who gets the first public attention, since often the "real" first person may not be as good at public relations as the one who jumps to the front of the line and claims credit. Thus we remember Alexander Graham Bell as the inventor of the telephone even though, among others, Elisha Gray and Antonio Meucci got there at the same time or even before him. Both Newton and Leibniz developed versions of the calculus at roughly the same time. If Bill Gates hadn't "invented" the MS-DOS operating system, someone else would have invented a similar system—and, in fact, Gary Kildall did. Few recall that Campus Network, the social Web site of Columbia University student Adam Goldberg, predated Zuckerberg's Facebook, and in many ways was more sophisticated. Other forgotten innovations include the precursor to Siri, the latest iPhone's conversational personal assistant, CALO (Cognitive Agent that Learns and Organizes), which was developed by a California company called SRI International with a five-year, multimillion-dollar Pentagon grant. At a broader level, "nearly 90 percent...of current GDP was contributed by innovation carried out since 1870," in the estimate of leading economist William Baumol. He points out that even "the steam engine, the railroad, and many other inventions of an earlier era still add to today's GDP." Nobel Prize-winning economist Robert Solow has calculated that nearly 90 percent of the growth in productivity in the first half of the 20th century can only be attributed to "technical change in the broadest sense," while the supply of labor and capital—what workers and employers contribute—appeared almost incidental to this massive technological "residual."
It is clear that before anyone is a "talented" entrepreneur or a "menial" laborer, or anything in between, most of the economic gains that get distributed to individuals in a given year or period are derived from technological and other contributions inherited from the society of the past, not created by them in the present. Put another way, the current technological contributions that produce huge rewards for the fortunate few are a mere pebble placed atop a Gibraltar of historically received science and technology that makes the modern additions possible—a mountain of knowledge often paid for by the public. An obvious question arises from these facts: if most of what we have today is attributable to advances we inherit in common, why, specifically, should this gift of our collective history not more generously and broadly benefit all members of society? Today's distributive realities are hard to ignore: Mark Zuckerberg is already within the highest echelon of the wealthiest 400 Americans, who collectively own more wealth than the bottom 60 percent of the country combined. Current elites, William Gates Sr. points out, disproportionately reap the harvest of what is inherently a collective investment; he urges that their estates be taxed accordingly. The late Herbert Simon, winner of the 1978 Nobel Prize in economics, similarly proposed that this sort of "patrimony" be subject to large-order taxation. Particularly appropriate uses might be to support educational and research institutions that generate and pass on knowledge at all levels; to offer tuition relief; to expand opportunities for college education; and to provide much more generous underpinnings for low- and moderate-income citizens.
Women trying to get pregnant have been advised to eat Brazil nuts, seeds and seafood. Scientists say an element found in these foods could increase a woman's chance of conceiving. That element, selenium, is a natural antioxidant that plays a crucial role in the early stages of conception. It is also vital to the development of healthy ovarian follicles, which are responsible for the production of eggs in women. Brazil nuts have the highest amount of selenium, followed by seeds and grains, seafood, wholewheat bread, red meat and mushrooms. Melanie Ceko, from the University of Adelaide, who carried out the research, said: "Selenium is an essential trace element found in protein-rich foods like red meat, seafood and nuts. "It is important for many biological functions, such as immune response, thyroid hormone production, and acts as an antioxidant, helping to detoxify damaging chemicals in the body. "We've known for some time that selenium is important to men's fertility, but until now no one has researched how this element could be involved in healthy reproduction in women." As part of the study, researchers pinpointed exactly where selenium was located in the ovary. Then, they turned their attention to a protein containing selenium, called GPX1. They found that levels of selenium and proteins containing selenium were higher in large, healthy ovarian follicles, where eggs are produced. Ms Ceko said: "We suspect they play a critical role as an antioxidant during the late stages of follicle development, helping to lead to a healthy environment for the egg." She added that in some cases, eggs that yielded a pregnancy had double the levels of GPX1. Researchers hope their findings will help treat women with infertility problems.
Here's the question: A fluid has density 2 and velocity field . Find the rate of flow outward through the sphere .

So far I've found n = (x/2, y/2, z/2), and F dot n gives z^2. I converted to spherical coordinates, where z^2 equals 4cos^2(phi). My integral is set up as:

4 * int[0 to 2pi] int[0 to pi] cos^2(phi) sin(phi) dphi dtheta

The first integral is -1/3 cos^3(phi) evaluated from 0 to pi, which is 1/3 - (-1/3) = 2/3. The second integral gives 2/3 * 2pi, so the entire thing is 4 * 2/3 * 2pi. I thought I was just supposed to multiply that by 2 (the density), but that's not the right answer. Can someone tell me what I did wrong, or what I'm supposed to do with the density? Thanks a lot.
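Not an answer key, but a quick numeric sanity check of the setup above (assuming, as stated in the post, that F dot n reduces to z^2 on a sphere of radius 2): the surface element in spherical coordinates is dS = r^2 sin(phi) dphi dtheta, so a factor of r^2 = 4 belongs in the integrand alongside the 4cos^2(phi).

```python
import math

# Midpoint-rule evaluation of the surface integral of z^2 over the
# sphere of radius r = 2, where z = r*cos(phi) and
# dS = r^2 * sin(phi) dphi dtheta.
r = 2.0
N = 10_000
dphi = math.pi / N

s = 0.0
for i in range(N):
    phi = (i + 0.5) * dphi
    integrand = (r * math.cos(phi)) ** 2   # z^2 on the sphere
    dS_factor = r**2 * math.sin(phi)       # surface element factor (easy to drop)
    s += integrand * dS_factor * dphi

surface_integral = 2.0 * math.pi * s       # integrand has no theta dependence
# surface_integral comes out near 64*pi/3, four times the 16*pi/3 in the post
```

Whether the density then multiplies this result depends on whether the (elided) field is the velocity v, so that the mass flow rate is the surface integral of rho v dot n, or already includes the density; that question, plus the r^2 factor in dS, are the two things to recheck.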
I get really excited when I get new ideas, or try out new tools, like googledocs, voicethread, voki. But I keep wondering how to use them in my classes. I keep coming back to that analogy of the violin student - it's one thing to get notes out of the violin, but it's another thing entirely to make music with it. I'm still getting the notes, and I wish I knew how to make music. For example, voicethread. What is voicethread best used for? I tried using it as a forum for questions/answers on the day's lesson. Didn't take off. Tried it for collaborative problem solving. Better. About to try it with the vector activity. So I'm getting somewhere through trial and error. And lots of persistence. I know that's what I have to do with all this other cool stuff. It's just that sometimes, I wish someone would just tell me what to do. I know, I know, I sound like my students. Miss, why don't you just tell us what to do? Can't you just give us the steps, instead of making us try this and draw that and explain and talk about our FEELINGS? Because, I say, this way is better. This way you get to figure it out yourselves, and you'll feel proud, and you'll understand it better and retain more of it, and you'll become a lifelong learner like me! But hey, that's fine for them! I, HOWEVER, AM TIRED! And my feelings get hurt when I read posts like this: http://vihart.com/doodling/ which contain a brilliant idea cushioned in a giant rant about how awful teachers are these days! In a very dark corner of my mind, I'm telling this person what to do, believe me. I guess that's the price we all have to pay for being teachers during a revolution, huh? We really have to walk the walk, not to mention find out where to walk and what to bring on the trip, and who to bring with us, and ok I'm getting carried away with this metaphor. Fortunately, lots of those tools make it easy for us to share our best attempts at making music. That's the whole point, after all, right?
Collaboration makes us all greater than the sum of the parts. It makes it easy to tell each other what to do!
We believe the ideal place for a child or young person to be raised is within the family of origin or the extended family, where the family can be resourced to a level that ensures the safety and wellbeing of the child. When this is not possible, the Department of Child Safety, Youth and Women has the statutory responsibility of ensuring the safety of the child or young person. This may involve removing the child or young person from the family for a short or longer period of time, and placing the child or young person with other family members or in one of the out-of-home care services that are available. About our residential care service Our residential care service provides placements for children and young people in houses where care is provided by a team of rostered employees. Children and young people are usually referred to residential care because foster care is not appropriate at the time due to a history of foster care placement breakdowns, challenging or aggressive behaviours, sexualised behaviours or social skills deficits or for emergency placement while a more suitable option is found. Residential care is a transitional program, with the child or young person progressing to either foster care, re-unification with their family of origin or independent living. Our primary aim is to support young people to build resilience and increase their capacity to reach their potential by assisting in developing a positive sense of identity, social skills and the ability to participate actively in the community. We utilise the Children and Residential Experiences (CARE): Creating Conditions for Change Program Model developed by Cornell University. As such we apply the following principles to working with children and young people in care: All children have the same basic requirements for growth and development. 
Activities offered to children need to be appropriate to each child’s developmental level and designed to provide them with successful experiences on tasks that they perceive as challenging, whether in the realm of intellectual, motor, emotional or social functioning. Research and theory have shown that activities that are developmentally appropriate help to build children’s self-efficacy and improve their overall self-concept. Children need opportunities for constructive family contact. Contact with family and community is one of the few indicators of successful treatment that has empirical validation. Children benefit when their families work in partnership with the child-caring organisation. Retaining children’s connections to family and community bolsters their resiliency and improves their self-concept. Children need to establish healthy attachments and trusting, personally meaningful relationships with the adults who care for them. These attachments are essential for increased social and emotional competence. Healthy child-adult developmental relationships help children develop social competencies that can be applied to other relationships. A child’s ability to form relationships and positive attachments is an essential personal strength and a manifestation of resiliency associated with healthy development and life success. Competence is the combination of skills, knowledge, and attitudes that each child needs to effectively negotiate developmental tasks and the challenges of everyday life. It is a primary responsibility of caregivers and the organisation to help children become competent in managing their environment as well as to motivate them to cope with challenges and master new skills. Learning problem-solving, critical thinking skills, emotional regulation skills, and developing flexibility and insight are all essential competencies that allow children to achieve personal goals and increase their motivation for new learning. 
All interactions and activities should be purposeful and goal-oriented with the aim of building these competencies and life skills.

A large percentage of children in care have a history of violence, abuse, and neglect resulting in debilitating effects on their growth and development. Adults need to respond sensitively and refrain from reacting coercively when children exhibit challenging behaviour rooted in trauma and pain. Trauma-sensitive responses help children regulate their emotions and maintain positive adult-child relationships.

Children engage in dynamic transactions with their environment as they grow and develop. To optimise growth and development, children must live within a milieu that is engaging and supportive. Caregiving staff must understand that their relationships with the children are part of a larger social ecology; their face-to-face interactions with children, the activities they promote, and the physical environment in which they work all have an impact on the developmental trajectories of children. Competent staff using skill sets informed by the CARE principles can only be effective when they are working in an ecology of care that will allow them to use their skills.
<urn:uuid:86d7c80a-4976-4ee7-bd49-a72aaccc62a9>
CC-MAIN-2018-43
https://www.unitingcareqld.com.au/services-and-support/counselling-and-wellbeing/youth-support/residential-care
s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583513686.3/warc/CC-MAIN-20181021031444-20181021052944-00270.warc.gz
en
0.956881
890
2.796875
3
Advice intended for parents/carers taking their child home after seeing a hospital-based healthcare professional

Periorbital cellulitis is an infection of the eyelid or skin around the eye. It is almost always one-sided and sometimes follows a cut or graze to the skin. Periorbital cellulitis usually responds well to antibiotics.

Treatment with intravenous antibiotics (given into a vein) is usually only needed for more severe cases or those that have not responded to antibiotics given by mouth. Some children who need intravenous antibiotics are admitted to hospital initially whilst others can be looked after at home. These children would come into hospital once a day for someone to look at them and for their antibiotics to be given.

The decision on when to change from intravenous to oral antibiotics (tablets or liquid) will be made by the medical team caring for your child. This will depend on how quickly your child responds to treatment (improvement in fever, pain and sometimes their blood tests) and whether your child has other health conditions. Antibiotics are usually given for a total of 10 days. You can give regular pain relief (paracetamol or ibuprofen) until any discomfort has improved.

Most children recover without any complications. However, periorbital cellulitis can occasionally progress to orbital cellulitis. This is where the infection involves the deeper tissues around the eye and the eyeball itself. This is a serious infection, which can cause lasting problems and needs immediate care. If you are concerned that your child's condition is getting worse, you should contact your discharging ward. Things to look out for include:

Call 999 for an ambulance if you have serious concerns for your child.

It is not always possible to prevent this infection. However, it is important to have your child fully vaccinated, as two of the bacteria known to cause this infection are covered within your child's current vaccination schedule.
Keep any minor injuries surrounding the eye clean and dry. Remember good hand hygiene before and after cleaning around the eye.
<urn:uuid:6de08abb-235a-46f4-8ffa-36ffb1f97ca8>
CC-MAIN-2022-21
https://what0-18.nhs.uk/professionals/hospital-staff/safety-netting-documents-parents/periorbital-cellulitis
s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662658761.95/warc/CC-MAIN-20220527142854-20220527172854-00313.warc.gz
en
0.950315
426
3.15625
3
You've heard about botnets and how they're much harder to stop than the typical worm or virus. Now there's new proof that they're advancing with alarming speed, building a network of more than a million zombie PCs that make cyberspace more dangerous by the day.

"If you thought 2004 was bad, wait until the end of 2005," said Ken Dunham, director of malicious code for Reston, Va.-based security firm iDefense. "It's getting worse out there; more code that's harder to detect and remove."

Add that to the new regulations and training issues enterprises are dealing with today, Dunham said, "and you have to conclude that it's a tough time to be an IT administrator. Their workload is bigger and the bad guys are taking advantage of that."

Exhibit A is a report from the Honeynet Project and Research Alliance. Using a honeynet, researchers said they were able to track more than 100 botnets in four months and that some of the larger zombie networks comprised up to 50,000 hijacked machines. The conclusion: more than a million computers are under the control of attackers, and in most cases users have no idea their machines have been compromised. These machines are being used for a variety of malicious exploits, an increasing number of them financially motivated.

Exhibit B comes from iDefense. Of 27,260 attacks the firm monitored last year, more than 15,000 were designed to covertly steal information or take over computers for criminal purposes, including identity theft and fraud. Among its findings:

- Sophisticated malicious code like bots is the fastest growing type of Internet threat.
- Attackers are using multiple tools that include free chat rooms to gather, store and analyze data.
- Most antivirus and firewall programs simply can't keep up with an average of nearly 75 new threats a day.
While security experts have been ringing the bot alarm bell vigorously in recent months, those interviewed for previous bot stories have said the goal isn't to create a sense of panic. It's to help IT professionals understand the nature of a quickly growing threat so they can defend their networks accordingly.

Most experts have mentioned the need for an in-depth, layered defense, including antivirus, firewalls, intrusion detection and vigorous patching. They have also said that antivirus companies must update their products to meet the threat.

Finnish security firm F-Secure Corp. said it is working to do so with the first version of its BlackLight Rootkit Elimination tool. It's designed to track down rootkits used to create botnets and wipe them out. A free version can be downloaded from the company's Web site. Eventually, the tool will be worked into a wider security suite.
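The iDefense figures above are internally consistent, and the arithmetic is easy to check. The sketch below (Python) reproduces the "nearly 75 new threats a day" figure and shows that the covert/criminal attacks make up roughly 55% of the total; the 365-day monitoring window is an assumption, since the article only says "last year".

```python
attacks_2004 = 27_260   # total attacks iDefense monitored in the year
malicious = 15_000      # attacks designed to steal data or hijack machines

per_day = attacks_2004 / 365          # assumed 365-day window
malicious_share = malicious / attacks_2004

print(f"{per_day:.1f} new threats per day")        # 74.7 -> "nearly 75"
print(f"{malicious_share:.0%} covert or criminal")  # 55%
```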
<urn:uuid:af68b2b9-8785-4a19-bc20-7d83e561e6e8>
CC-MAIN-2017-04
http://searchsecurity.techtarget.com/news/1068871/Botnets-more-menacing-than-ever
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280587.1/warc/CC-MAIN-20170116095120-00281-ip-10-171-10-70.ec2.internal.warc.gz
en
0.962743
610
2.5625
3
Jayhawkers and Bushwhackers

Jayhawker and bushwhacker designate the principal warring parties in the Civil War’s guerrilla conflict, although the names were not unique to Arkansas and actually predated the war by many years. While their application and meaning were never precise—a problem compounded by being woven into postwar folklore—they generally bore negative connotations. Originally, “jayhawker” referred to Union sympathizers, “bushwhacker” to Confederate sympathizers, but the distinction lost much of its meaning in the chaos of war.

“Jayhawker” originated in Kansas, and according to some authorities, it came into use in the late 1840s. The name was inspired primarily by the predatory habits of the hawk, but it implied, too, the noisy, mischievous nature of the jay. The combination became the “jayhawk,” a bird unknown to ornithology. The name was widely accepted in Kansas by the late 1850s, when anti-slavery advocates intent on defending Kansas Territory against pro-slavery “border ruffians” from Missouri adopted it. Kansans liked the tough image it conveyed during those bloody days of pre-Civil War violence, and they continued to use it once the war began.

Missourians applied the name to Kansans, too, but negatively. They thought it fit the destructive raiders who plundered and destroyed their property before and during the war. This usage was so widely known by the time of the war that Arkansans called any Kansas troops who entered the state jayhawkers. That happened most often in northwest Arkansas, although several Kansas regiments also served prominently around Pine Bluff (Jefferson County) and in the Camden Expedition. However, so notorious did the destructive behavior of the Kansans become that Confederate Arkansans also used the name as an epithet for any marauder, robber, or thief. This included Union guerrillas from Missouri who raided communities in northern Arkansas. It even applied to rebel guerrillas.
Confederates reacted to plundering by their own guerrilla chiefs by chastising them as “jayhawking captains” and decrying their “system of ‘jayhawking.’” A Confederate cavalryman, worried about the ill effect that depredations by rebel guerrillas were having upon public morale in northern Arkansas, declared in October 1862, “I have always opposed these little Jaw Hawker Parties, and now think if men who wanted to do any thing, the army is the place to act.” Indeed, “jayhawk” became a verb implying theft. Even Union soldiers spoke of “jayhawking” the property of Southern civilians.

The origins of “bushwhacker” also date to the late 1840s, when Washington Irving, the New York author, referred to “gallant bush-whackers and hunters of raccoons” in a story for Knickerbocker Magazine. Essentially, bushwhackers were woodsmen who knew how to fend for themselves in rugged terrain. The name was affixed to guerrillas who struck from ambush during the Civil War. It often implied a lone killer who prowled the hills, swamps, or forests and struck without warning, but it applied equally to whole gangs. Whatever the numbers involved, their slinking style put bushwhackers on the fringes of outlawry. They were deemed too cowardly to fight in open combat, and they drew no line between combatants and noncombatants. As with jayhawker, the word could also be used as a verb.

Bushwhackers could be either unionists or rebels, but the Union army gave them official status as a type of illegitimate Confederate guerrilla. Little more than a year into the war, the unionists found themselves stymied in many parts of the South, including Arkansas, by the ferocious resistance of guerrilla fighters. While recognizing the right of a belligerent to use uniformed partisans for scouting purposes, the Union army condemned the broad range of brigands, freebooters, marauders, robbers, and war-rebels that had associated themselves with the Confederate cause.
The lowest of all such insurgents was the bushwhacker, whom the Federals dismissed contemptuously as “an armed prowler.” Thus, the name came to embrace any type of skullduggery. For instance, a Union general accused “bushwhackers” of cutting telegraph lines between Fort Smith (Sebastian County) and Fayetteville (Washington County) in 1863. In retaliation, he ordered one bushwhacker hanged from the nearest telegraph pole of every cut wire. Still, Confederates also found the term useful. A Rebel leader at Little Rock (Pulaski County), voicing concern about growing Unionist resistance in February 1863, condemned the activities of “Union bushwhackers.”

Among the best known Confederate bushwhackers in Arkansas were James M. Ingram (or Ingraham), Peter “Old Pete” Mankins Jr., and William Martin “Buck” Brown. William Dark and William J. “Wild Bill” Heffington ranked among the best known Union bushwhackers in the state.

The more brutal and senseless their deeds, the more likely men were to be called jayhawkers or bushwhackers. Bushwhacker received more universal usage, since guerrillas could be found everywhere fighting for the Union or the Confederacy. Jayhawkers would always be linked to Kansas, but so notorious had the violence perpetrated by early Kansas raiders become that the nature of the deed, rather than any geographical place, came to define the name. The slippery meanings of both names serve to underscore the bitterness and confusion of all civil wars.

For additional information:
Bailey, Anne J., and Daniel E. Sutherland, eds. Beyond Battles and Leaders: Arkansas in the Civil War. Fayetteville: University of Arkansas Press, 2000.
Huff, Leo. “Guerrillas, Jayhawkers and Bushwhackers in Northern Arkansas during the Civil War.” Arkansas Historical Quarterly 24 (Summer 1965): 127–148.
Mackey, Robert R. The Uncivil War: Irregular Warfare in the Upper South, 1861–1865. Norman: University of Oklahoma Press, 2004.
Prier, Jay A. “Under the Black Flag: The Real War in Washington County, Arkansas, 1861–1865.” MA thesis, University of Arkansas, 1992.
Stith, Matthew M. Extreme Civil War: Guerrilla Warfare, Environment, and Race on the Trans-Mississippi Frontier. Baton Rouge: Louisiana State University Press, 2016.
Sutherland, Daniel E. American Civil War Guerrillas: Changing the Rules of Warfare. Santa Barbara, CA: Praeger, 2013.
———. “Guerrillas: The Real War in Arkansas.” Arkansas Historical Quarterly 52 (Autumn 1993): 257–286.

Daniel E. Sutherland
University of Arkansas, Fayetteville

Last Updated 6/20/2016
<urn:uuid:88959825-9132-490a-82cd-f65ac24bc7cc>
CC-MAIN-2016-30
http://www.encyclopediaofarkansas.net/encyclopedia/entry-detail.aspx?entryID=2280
s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257828286.80/warc/CC-MAIN-20160723071028-00094-ip-10-185-27-174.ec2.internal.warc.gz
en
0.945141
1,506
3.328125
3
The aim of this study was to analyse the survival rate of cracked teeth after endodontic treatment. The secondary aim was to compare the survival rate of cracked teeth restored with a composite filling/crown and those restored with a full crown.

MATERIALS AND METHODS

The study was conducted retrospectively from three general dental clinics in Stockholm, which are all part of the national dental service organisation. Two hundred patients with teeth receiving endodontic treatment due to symptomatic cracks were included. The patient data ranged from 2001 to 2016. The mean age of the patients was 48 years (range 29-69). Fifty-five per cent had cracks located above the pulpal cavity, 11% within the pulpal cavity and 3% located in the root canal. The cracks were located most commonly on the proximal surfaces.

RESULTS

The survival rate for teeth with cracks was 68% and 54% after 5 and 10 years, respectively. The survival rate was significantly higher (97%) for cracked teeth receiving a full crown after endodontic treatment compared to teeth restored with either a composite filling or composite crown.

CONCLUSIONS

The overall survival rate for cracked teeth was 68% after 5 years, while it was significantly higher for cracked teeth restored with a full crown. The results suggest, within the limitations of this study, that cracked teeth should be restored with a full crown after endodontic treatment.
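To make the reported rates concrete, the sketch below (Python) converts them into expected tooth counts for a cohort the size of this study's. The counts are illustrative only: the abstract does not give the actual numbers at risk at each time point, and a real analysis would use survival methods such as Kaplan-Meier to handle patients with shorter follow-up.

```python
def surviving(cohort: int, survival_rate: float) -> int:
    """Expected number of teeth still in function at a given survival rate
    (a naive proportion, ignoring censoring)."""
    return round(cohort * survival_rate)

teeth = 200                    # cohort size reported in the study
print(surviving(teeth, 0.68))  # 136 -> roughly 136 of 200 at 5 years
print(surviving(teeth, 0.54))  # 108 -> roughly 108 of 200 at 10 years
print(surviving(teeth, 0.97))  # 194 -> if all had received a full crown
```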
<urn:uuid:29f57abe-2dea-4c75-bbc5-3d502b6435ae>
CC-MAIN-2023-50
https://www.practiceupdate.com/content/survival-rate-after-endodontic-treatment-in-general-dentistry-for-cracked-teeth-with-different-coronal-restorations/108799/65/23/1
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100603.33/warc/CC-MAIN-20231206194439-20231206224439-00528.warc.gz
en
0.977189
283
2.5625
3
Our team of professionals and staff believe that informed patients are better equipped to make decisions regarding their health and well-being. For your personal use, we have created an extensive patient library covering an array of educational topics, which can be found on the side of each page. Browse through these diagnoses and treatments to learn more about topics of interest to you. As always, you can contact our office to answer any questions or concerns.

Because of the ultraviolet radiation it emits, the sun is inherently dangerous to human skin. In fact, the American Academy of Dermatology stipulates that there is no safe way to tan. Tanning is the skin's natural response to damage from the sun. Additionally, the Environmental Protection Agency proclaims that everybody, regardless of race or ethnicity, is subject to the potential adverse effects of overexposure to the sun. That's why everyone needs to protect their skin from the sun every day.

How We Burn

When ultraviolet light penetrates the epidermis it stimulates melanin, the substance responsible for skin pigmentation. Up to a point, the melanin absorbs dangerous UV rays before they do serious damage. Melanin increases in response to sun exposure, which is what causes the skin to tan. This is a sign of skin damage, not health. Sunburns develop when the UV exposure is greater than the skin's natural ability to protect against it.

Sunscreens and Sunblocks

The sun emits two types of ultraviolet (UV) rays that are harmful to human skin. UVA rays penetrate deep into the dermis and lead to wrinkles, age spots and skin cancers. UVB rays are responsible for causing sunburn, cataracts and immune system damage. Melanoma is thought to be associated with severe UVB sunburns that occur before the age of 20. Sunscreens absorb ultraviolet light so that it doesn't reach the skin. Look for sunscreens with the active ingredients PABA, benzophenones, cinnamates or salicylates.
Sunblocks literally block the UV rays instead of absorbing them. Key active ingredients for sunblock success are titanium oxide and zinc oxide. There is no sunscreen or sunblock that works 100%.

The U.S. Food and Drug Administration regulates the manufacture and promotion of sunscreens. Sunscreens are given an SPF (Sun Protection Factor) number that indicates how long a person can remain in the sun without burning. It is recommended that people use products with an SPF of 15 or greater.

Sunscreens are not generally recommended for infants six months old or younger. Infants should be kept in the shade as much as possible and should be dressed in protective clothing to prevent any skin exposure and damage.

There is no such thing as "all-day protection" or "waterproof" sunscreen. No matter what the SPF number, sunscreens need to be re-applied every 2 to 3 hours. Products that claim to be "waterproof" can only protect against sunburn up to 80 minutes in the water. Products labeled "water resistant" can only protect against sunburn up to 40 minutes in the water.

Even in the worst weather, 80% of the sun's UV rays can pass through the clouds. Additionally, sand reflects 25% of the sun's UV rays and snow reflects 80% of the sun's UV rays. That's why sunscreen needs to be worn every day and in every type of weather and climate. The sun's intensity is also impacted by altitude (the higher the altitude the greater the sun exposure), time of year (summer months) and location (the closer to the Equator, the greater the sun exposure).

Protecting Yourself From Sun Exposure

- Look for sunscreens that use the term "broad spectrum" because they protect against both UVA and UVB rays.
- Choose a sunscreen with a minimum SPF rating of 15.
- Apply sunscreen 15 to 30 minutes before you head out into the sun to give it time to seep into the skin.
- Apply sunscreens liberally. Use at least one ounce to cover the entire body.
- Use a lip balm with SPF 15 or greater to protect the lips from sun damage.
- Re-apply sunscreen immediately after going into water or sweating.
- Re-apply sunscreen every 2 to 3 hours.
- Use sunscreen every day regardless of the weather.
- Wear sunglasses to protect the eyes from UV rays.
- Wear wide-brimmed hats and protective clothing to limit skin exposure to the sun.
- Stay in the shade whenever possible.
- Avoid using tanning beds.

Treating a Sunburn

If you experience a sunburn, get out of the sun and cover the exposed skin as soon as possible. A sunburn will begin to appear within 4 to 6 hours after getting out of the sun and will fully appear within 12 to 24 hours.

Mild burns cause redness and some peeling after a few days. They can be treated with cold compresses on the damaged area, cool baths, moisturizers to prevent dryness and over-the-counter hydrocortisone creams to relieve any pain or itching. It is also important to drink plenty of fluids when you experience any type of sunburn.

More serious burns lead to blisters, which can be painful. It is important not to rupture blisters as this slows down the natural healing process and may lead to infection. You may want to cover blisters with gauze to keep them clean. Stay out of the sun until your skin has fully healed. In the most severe cases, oral steroids may be prescribed to prevent or eliminate infection along with pain-relieving medication.
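The SPF figures discussed above can be read as a multiplier on the time it takes unprotected skin to burn. The sketch below (Python) illustrates that arithmetic; the 10-minute baseline is a hypothetical figure, and the result is a theoretical upper bound, since sunscreen rubs off and degrades, which is why reapplication every 2 to 3 hours is still recommended.

```python
def protected_minutes(unprotected_burn_minutes: float, spf: float) -> float:
    """Theoretical minutes in the sun before burning with sunscreen applied:
    the SPF multiplies the unprotected burn time (an idealised upper bound,
    not a guarantee of real-world protection)."""
    return unprotected_burn_minutes * spf

# Hypothetical skin that burns in 10 minutes unprotected:
print(protected_minutes(10, 15))   # 150 minutes with SPF 15
print(protected_minutes(10, 30))   # 300 minutes with SPF 30
```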
<urn:uuid:8aedbb6a-17c8-4236-a52f-382c429a443f>
CC-MAIN-2016-18
http://www.fwderm.com/library/3911/SunSafety.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-18/segments/1461860114285.77/warc/CC-MAIN-20160428161514-00211-ip-10-239-7-51.ec2.internal.warc.gz
en
0.9289
1,168
3.25
3
Two women, Julie and Sarah, sit down to watch an infomercial advertising a new kitchen appliance. The infomercial features a celebrity spokesperson, live demonstrations of the product, testimonials from audience members and a limited-time offer for free accessories for those who order the appliance during the show. Julie carefully notes each fact about the appliance, focusing on what it can do and whether she will find it useful. Sarah, however, can’t resist a deal for free stuff and is impressed by seeing the product endorsed by her favorite celebrity. Both women decide to order the appliance. Three weeks later, Julie is enjoying the benefits of her purchase, while the same appliance sits collecting dust in Sarah’s kitchen.

Though the two women received the same message, they were swayed by it in different ways and to different extents, with widely divergent outcomes. The Elaboration Likelihood Model (ELM) was designed to explain such differences in persuasion and how those differences affect attitudes and value judgments. The term “elaboration” refers to the cognitive act of analyzing a persuasive argument.

Developed by psychologists Richard E. Petty and John Cacioppo in the 1980s, ELM describes the basic processes and variables involved in persuasive communication. According to the model, any feature of a persuasive message can impact how a person comes to view various objects, issues and people. ELM outlines these features, identifies which are at play during an act of persuasion, describes how they affect a person’s judgment and lists the consequences.

ELM also measures people’s willingness to engage in elaboration, which depends upon their motivation and ability. People feel most motivated to elaborate upon a message when it holds significant relevance for them. An innate enjoyment of thinking can also drive elaboration. “Ability” refers to the knowledge, time and mental resources needed to fully analyze an argument.
As with similar theories of communication, research is often conducted through questionnaires. The Elaboration Likelihood Model makes the following assumptions:

- Attitudes affect behavior by guiding decisions. Attitudes result primarily from persuasion.
- There are two routes to persuasion: the central route and the peripheral route. The route chosen by an individual depends upon the extent to which the individual is willing to critically analyze, or elaborate upon, the content of the persuasive message.
- When people are motivated and able to elaborate upon a message, they usually choose the central route. Otherwise, they use the peripheral route.
- Changes in attitude can occur with either route, but those occurring with the central route are typically stronger, more stable and longer-lasting than those occurring with the peripheral route.

The Central Route

For central route persuasion to occur, two elements are needed: enough information within the message to facilitate a thorough analysis of its argument and the message recipient’s willingness to engage in elaboration. With the central route, the recipient uses evaluation, recall, critical judgment and inferential thinking to carefully examine ideas, determine their merit and weigh the consequences of acting upon the message. Ultimately, the extent to which the recipient is persuaded depends upon his or her unique cognitive responses to the message. Recipients who regard a message as being relevant, well-formed and convincing will usually respond positively to it, even if the message contradicts their original beliefs about an issue.

Julie, from the opening example, illustrates central route thinking. While watching the infomercial about a special kitchen appliance, she carefully evaluates the product’s performance and reliability to determine whether it would be a wise investment for her. She isn’t impressed by the free offer or the celebrity endorsement.
Because her decision to respond to the persuasive message is backed by a thorough analysis of the facts, she continues to find the product beneficial even after the newness of the purchase has worn off.

The Peripheral Route

Individuals taking the peripheral route to persuasion are swayed not by sophisticated arguments, but by the message’s superficial characteristics, such as whether the speaker is likeable. No analysis of the argument’s merits is involved. Instead, when faced with a persuasive message, the recipient follows a rule of thumb to quickly decide whether to accept or reject it. Recipients may accept a message based on a catchy slogan, an expert or celebrity endorsement, the attractiveness of the message or the quality of the presentation.

Six cues in communication signal an appeal to peripheral route persuasion:

- Reciprocation – The speaker implies he has done the listener a favor by informing her about his idea, so she should accept his message in return.
- Consistency – The speaker says his methods are “the way it’s done.”
- Social Proof – The speaker claims everyone is buying his product.
- Liking – The speaker convinces the listener that his product or idea is likeable.
- Authority – The speaker is an expert or someone with influence.
- Scarcity – Involves a limited-time offer or a limited supply.

Due to time constraints and limited audience interest, most television and radio advertisements appeal to peripheral route persuasion. Such methods are also used to hide weak arguments or to reach audiences that are unwilling or unable to process more complex messages. Usually, peripheral route arguments produce changes in attitudes or behaviors by offering rewards for taking a desired action. However, such changes are typically weak and temporary.

Sarah, from the opening example, demonstrates peripheral route persuasion by basing her purchasing decision on a celebrity endorsement and an offer for free accessories.
Since neither of these factors convinces her that the product is useful, it soon sits unused in her kitchen.

ELM is an umbrella model for analyzing all persuasive methods and their cognitive effects. As a result, it has broad applications in the fields of advertising and psychology.
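The model's routing assumption (motivation and ability jointly determine which route a recipient takes) can be summarised in a few lines of code. This is an illustrative sketch, not part of Petty and Cacioppo's formulation, and `choose_route` is a hypothetical helper name:

```python
def choose_route(motivated: bool, able: bool) -> str:
    """ELM's core assumption: only when a recipient is both motivated and
    able to elaborate does persuasion take the central route; otherwise
    surface cues drive a peripheral-route judgment."""
    return "central" if motivated and able else "peripheral"

# Julie: the appliance is relevant to her and she takes time to evaluate it.
print(choose_route(motivated=True, able=True))    # central
# Sarah: drawn by the celebrity endorsement and the free offer instead.
print(choose_route(motivated=False, able=True))   # peripheral
```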
<urn:uuid:ededd243-6ecc-459a-8a94-17c122e33410>
CC-MAIN-2022-33
https://www.communicationstudies.com/communication-theories/elaboration-likelihood-model
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882573630.12/warc/CC-MAIN-20220819070211-20220819100211-00000.warc.gz
en
0.943822
1,185
3.609375
4
Researchers from the University of California recently demonstrated how thermal cameras could theoretically be exploited to steal PIN numbers from unsuspecting victims. As Sophos security expert Chester Wisniewski notes, thermal imaging provides several advantages. Unlike with traditional cameras, visually masking the PIN pad does not thwart an attack, while the ability to automate PIN harvesting using computer software further simplifies the act.

To demonstrate the potential of the above-mentioned attack vector, researchers recruited 21 volunteers and had them test 27 randomly selected PIN numbers using both plastic and brushed metal PIN pads. As expected, the strength of the participants' button presses and their body temperature affected the results, to a certain degree. However, researchers also determined that metal pads made any attack nearly impossible to implement. In contrast, plastic PIN pads allowed the cameras to determine what numbers were being pressed, along with their order.

"With the plastic PIN pad, the custom software the researchers wrote to automate the analysis had approximately an 80% success rate at detecting all digits from a frame 10 seconds after the person entered their PIN," explained Wisniewski. "The success rate was still over 60% using a frame 45 seconds after the PIN was entered."

Although thermal cameras are currently quite expensive, it is likely that thieves could theoretically adopt the technique sometime in the future. "As far as we know, this attack hasn't been used in the wild," he said. "Nevertheless, the cautious among us could opt to use ATMs with metal PIN pads to reduce the risk of becoming a victim."
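The drop in detection accuracy between the 10-second and 45-second frames is consistent with the residual heat of a keypress decaying roughly exponentially as the pad returns to ambient temperature. The sketch below (Python) illustrates that behaviour with Newton's-law-style cooling; the 30-second time constant is an illustrative guess, not a value from the study:

```python
import math

def residual_heat(t_seconds: float, tau: float = 30.0) -> float:
    """Fraction of a keypress's initial heat signature remaining after
    t seconds, modelled as exponential (Newton's law) cooling toward the
    pad's ambient temperature. tau is an assumed time constant."""
    return math.exp(-t_seconds / tau)

print(f"{residual_heat(10):.0%} of the signature left at 10 s")  # ~72%
print(f"{residual_heat(45):.0%} left at 45 s")                   # ~22%
```

The weaker residual signature at 45 seconds is one plausible reason the researchers' software detected fewer complete PINs from the later frame.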
<urn:uuid:a53753ba-782b-4b6c-b58c-66048e92fdb1>
CC-MAIN-2017-43
http://www.tgdaily.com/security-features/57950-thermal-cameras-could-compromise-pin-numbers
s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187823839.40/warc/CC-MAIN-20171020063725-20171020083725-00126.warc.gz
en
0.956313
313
2.953125
3
Encryption can protect personal data from government intrusion, which means the government wants the key to break it. Gavin Hanson reports:

Like it or not, you are your data. In this day and age, your receipts, social media activity, public records, GPS data, and internet search history are the proof of who you are. And while you may have thought you had secrets, the Federal Government would like the rest of them.

The seemingly innocuous pieces of information we trade away every day create a detailed mosaic of our lives used to target advertising and create personality profiles that are exploited by the FBI, political operatives like Cambridge Analytica, and Russian propagandists. And those are just the legal shenanigans! Instances of malicious hacking that jeopardize social security numbers and other important data are on the rise as well.

Encryption, to oversimplify, is the process of putting your data in a combination-locked safe, and it's becoming more popular. Like all passcodes, these combinations are best stored non-electronically. Automatically encrypted search engines and internet services simplify the process for users. They protect individuals' data from hacking, theft, and even the government, but they also retain a repository for all the combinations they use to lock data up. This is the Trojan horse the NSA means to use to gain access to your private data even when it is encrypted. But that may soon change.
If the executive agencies have their way, the NSA will have a record of every lock combination in use by every company—a skeleton key, if you will, to gain access to your digital home, papers, effects, and aspects of your person without warrant or probable cause—effectively mandating that companies hand over skeleton keys to the locks that they provide to their users, at any time: what they call “exceptional access.”

Forty-two years after unbreakable encryption was first conceived, these tools are more widespread than ever before. One milestone came in 2016, when the world’s largest messaging service—WhatsApp—announced it would offer default end-to-end encryption on all communications. In other words, the messages can be read only by the senders and recipients; even the platform provider can’t access them.

Law enforcement and intelligence agencies are still reckoning with this new reality. For decades, they demanded that tech companies hand over private data on their users, sometimes without obtaining warrants. So companies like Apple changed their policies so individual users were the only ones holding the keys to their data. This new era of consumer privacy led to a standoff in 2016, when the Federal Bureau of Investigation (FBI) demanded access to an encrypted iPhone belonging to Sayed Farook, a deceased terrorist from San Bernardino, California. Farook and his wife, Tashfeen Malik, had killed 14 people at a holiday office party in December 2015.

The FBI wanted Apple to write software that would weaken the iPhone’s built-in security. Apple refused, saying that such flawed software would jeopardize the security of its customers, who number in the hundreds of millions. Once a back door was created, the company claimed, the FBI could use it on similar phones—and it could be leaked to hackers or foreign enemies. “It is in our view the software equivalent of cancer,” Apple CEO Tim Cook told ABC News.
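The "combination locked safe" and end-to-end ideas above can be illustrated with the oldest information-theoretically unbreakable scheme, the one-time pad: XOR the message with a random key of equal length, and anyone without the key sees only noise. This is a teaching sketch, not what WhatsApp or Apple actually deploy (they use modern protocols such as Signal's, built on public-key and symmetric ciphers), but the end-to-end property is the same: only the key holders can read the message.

```python
import secrets

def otp_encrypt(plaintext: bytes) -> tuple[bytes, bytes]:
    """One-time pad: XOR the message with a fresh random key of the same
    length. With a truly random, never-reused key the ciphertext reveals
    nothing about the plaintext."""
    key = secrets.token_bytes(len(plaintext))
    ciphertext = bytes(p ^ k for p, k in zip(plaintext, key))
    return ciphertext, key

def otp_decrypt(ciphertext: bytes, key: bytes) -> bytes:
    """XOR with the same key undoes the encryption."""
    return bytes(c ^ k for c, k in zip(ciphertext, key))

message = b"meet at noon"
ciphertext, key = otp_encrypt(message)
assert otp_decrypt(ciphertext, key) == message  # only the key holder can read it
```

A provider that keeps copies of such keys (the "repository for all the combinations") can always read the data; end-to-end designs avoid this by leaving the keys on the users' devices.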
Read the rest of this entry » Research by a University of Illinois professor has revealed a surprising trend about mass murder in the United States. CHICAGO (CBS) — Nancy Harty reports: Research by a University of Illinois professor has revealed a surprising trend about mass murder in the United States. Contrary to what you might think, mass murders are not on the rise, according to computer science professor Sheldon Jacobson. Jacobson said there were 323 such killings – in which four or more people are killed in one incident – between January 2006 and October 2016. The mass killings appeared to be evenly distributed over that time, meaning their rate remained stable over the past decade, and did not spike during any particular season or year. “The data doesn’t lie. The rate of these events just is not increasing as the perception is given in the media. This is just what it is,” he said. The professor used a decade’s worth of data from USA Today that was cross-checked by the FBI. He said his analysis also found public shooting sprees like the Las Vegas massacre are not the most common type of mass killing. Read the rest of this entry » NASA on Sept. 8 launched the first U.S. mission to collect and return an asteroid sample, in hopes of learning more about how the solar system coalesced and life came to be.
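Jacobson's headline numbers reduce to simple arithmetic: 323 events over the 130 months from January 2006 through October 2016 is roughly 30 per year. A sketch in Python; the per-year counts below are invented solely to illustrate what an "evenly distributed" series looks like under a chi-square check, they are not the study's data:

```python
events = 323
months = (2016 - 2006) * 12 + 10          # Jan 2006 through Oct 2016
per_year = events / (months / 12)
print(round(per_year, 1))                  # 29.8 -- roughly 30 per year

# Hypothetical yearly counts summing to 323, used only for illustration.
yearly = [30, 29, 31, 28, 32, 30, 29, 31, 30, 28, 25]
expected = sum(yearly) / len(yearly)
chi2 = sum((o - expected) ** 2 / expected for o in yearly)
# With 10 degrees of freedom the 5% critical value is about 18.31, so a
# chi2 this small is consistent with a stable, uniform rate.
```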
<urn:uuid:7aa0ddc9-98d5-4114-a817-2f8cf98ae789>
CC-MAIN-2020-10
https://punditfromanotherplanet.com/tag/data/
s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875145989.45/warc/CC-MAIN-20200224224431-20200225014431-00063.warc.gz
en
0.95645
991
2.765625
3
Apomictic dandelions provide a good system to study epigenetic variations that occur in natural populations. These dandelions reproduce with a type of asexual system where the egg cell develops into an embryo in a seed without any fertilization. This means that we can study the epigenetic variation independent from genetic variation, because the seeds are a perfect copy of the mother plant. Widespread apomictic lineages lack genetic variation but are still excellent colonizers of novel habitats. The adaptive potential of these apomicts may partially rely on compensatory mechanisms generating heritable variation, such as increased transposon activity or somatic recombination. An additional mechanism could be epigenetic variation. This assumption is based on our observation that genome-wide heritable DNA-methylation variation is readily generated within apomictic dandelion lineages, especially in response to environmental stresses.
Publication status: Published - 2012
Event: CSH-Asia Meeting: Plant Epigenetics, Stress and Evolution
Duration: 28 Oct 2012 → 2 Nov 2012
Period: 28/10/12 → 2/11/12
Preite, V., van der Putten, W. H., & Verhoeven, K. J. F. (2012). Natural epigenetic variation in apomictic dandelion lineages. Poster session presented at CSH-Asia Meeting: Plant Epigenetics, Stress and Evolution.
<urn:uuid:5b33ceaa-bc31-43d4-853e-78739f2c8c0c>
CC-MAIN-2020-45
https://research.wur.nl/en/publications/natural-epigenetic-variation-in-apomictic-dandelion-lineages-2
s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107891428.74/warc/CC-MAIN-20201026145305-20201026175305-00136.warc.gz
en
0.822473
324
2.8125
3
Utopia is wishful thinking, a striving for the ideal, something to think about. In the desire for order and social change, architecture also plays an important role. Nowadays, in the attempt to fuse functionality and everyday life, many urban concepts focus on combining daily life and work. Some architectural concepts remain pure illusions, obviously unworkable from the beginning. Utopian architecture, as an often impossible concept, is the inspiration for the work cité venteuse. Utopia against the backdrop of architectural projects refers to the basic question of space, to organisational schemes and harmony. However, misjudged and unforeseeable factors can alter and disturb the functional rationality. The sounds for cité venteuse are generated by taking inspiration from the characteristics of the airflow that sometimes channels between high buildings.
<urn:uuid:22bd3dce-f2ca-47c3-98ed-d60e1ef1c994>
CC-MAIN-2019-04
https://www.miriamhamann.com/Cite_venteuse_en.php
s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583681597.51/warc/CC-MAIN-20190119201117-20190119223117-00572.warc.gz
en
0.941805
168
2.6875
3
The Modern Reader's Guide to the Gospels by William Hamilton

William Hamilton is Associate Professor of Theology at Colgate Rochester Divinity School, and a Baptist minister. Before joining the Colgate Rochester faculty, he was Dean of Chapel, Hamilton College. The Modern Reader's Guide to the Gospels was published by the Association Press in 1960. It was copyrighted by National Board of Young Men's Christian Association in 1959. He is the author also of The Christian Man. This material was prepared for Religion Online by Paul Mobley.

Chapter 3: The Ministry Outside Galilee

1. Herod's fears, and the murder of John the Baptist, 6:14-29

Mark uses this section as an interlude to fill up the time during which the disciples are out on their mission. Of course, the death of John the Baptist probably was deeply significant to Jesus, and may have underscored his own forebodings about the future. Herod hears of the mission of Jesus, and asks about him. (He is not technically a king, but tetrarch of Galilee and Perea, ruler of one-quarter of the realm of his father, the late King Herod the Great.) With a murderer's superstition, he fears Jesus as John the Baptist come to life again. After an introduction, Mark recounts what is doubtless a popular legend about John's death. The historian Josephus, writing some sixty years after the event, gives a number of different details. Here John has been imprisoned because of his opposition to Herod's adulterous marriage to his brother's wife Herodias. (We do not know if the brother was alive or dead; or, if alive, divorced from Herodias or not.) Herodias wanted to kill John, but the prophet apparently exercised a sort of fascination for Herod, and he merely imprisoned him. But Herodias seizes a chance at a party to trick Herod (probably in his cups) into decreeing John's death. Salome is the name given to the daughter by Josephus, but there is no name here.
The note of remorse in verse 26 is interesting, but he keeps his promise and orders the execution.

2. The feeding of the 5,000 and its sequels, 6:30-7:37

a. The feeding of the 5,000, 6:30-44

The twelve now return from their mission, and Jesus takes them away to a quiet place for a rest. But the crowds follow along, and Jesus speaks with them until it is time for the evening meal. The disciples ironically ask Jesus if they should go into the village and buy forty dollars' worth of bread for the crowd. He takes the food he and the disciples have brought along for their meal, blesses it, and distributes it to the crowd. They are all filled, and there are twelve (symbolic number?) baskets of food left over. The story, as Mark received it, was clearly a miracle, in spite of the absence of any note of astonishment or wonder in the narrative. But it is more than a creative miracle of God as it stands. It is also a sign, a pointer to a deeper truth (see Mark 6:52). When John writes up this incident in the fourth gospel (Chapter 6) he follows it with a discourse about the bread of life. The kingdom of God is, in other places, likened to a feast: Luke 14:16-24 and Matthew 22:1-14. And there are hints here that remind us of the last supper, so that this can be read as a kind of preview of that (compare 6:41 and 14:22). So we cannot know whether the original event was miraculous or not. There is a note of mystery here, and it is best not to be sure of any conclusion. However, almost anything is better than the explanation one sometimes hears: that this is a lesson in sharing -- Jesus began to share his food, and everyone else decided to do the same!

b. Crossing the lake, 6:45-52

Jesus asks the disciples to leave the site of the feeding and after he has dispersed the crowd he retires into the hills for prayer. A storm blows up, and the disciples in the boats see Jesus apparently walking on the water.
He quiets their fear and enters a boat, but the disciples still do not understand. We have some grounds for attempting to rationalize this story, for there is no particular meaning to the story if read as a miracle. The disciples were in trouble, and what frightened them even more than the storm was the ghostly figure of Jesus himself. The picture of Jesus in the story is somewhat unreal. It may be that the disciples were some time in getting under way against the wind, that Jesus unexpectedly waded out into the shallow surf to meet them, and that he took them by surprise. The word of comfort in verse 50 is the significant part, and Mark adds his favorite idea about the disciples' slowness and immaturity.

c. Landing on the other side, 6:53-56

Notice the growing popularity described here.

d. More controversy with the Pharisees, 7:1-23

This whole section concerns the nature of religious defilement, and verse 15 is the key to the whole. The passage can be conveniently broken up into three sections.

1. On the washing of hands, 7:1-8

The Pharisees, along with some visiting observers from Jerusalem, question Jesus' rejection of the fairly recent Jewish practice of ceremonial washing before meals. As is so often the case, Jesus does not directly respond to the question, but goes straight to the real issue at stake, which he rightly sees to be the authority of scribal tradition. (Mark remembers he is writing for Gentiles unfamiliar with Jewish practice, so he adds verses 3 and 4.) The quotation from scripture in verses 6 and 7 gives Jesus' position.

2. "Corban" 7:9-13

Again he gives an example of how human traditions can take false precedence over the commandment of God. The fifth commandment of Moses is this: Honor your father and mother. But you scribes, he says, fully approve when an unscrupulous son makes a vow to dedicate all his income to the temple, depriving his poor parents of their only means of support. "Corban" means "dedicated to God."
So, a perfectly valid human vow of dedication can be used in an irresponsible way which breaks a far more basic commandment of God.

3. More sayings on defilement, 7:14-23

Verse 15 is the summary here, and it is a very significant passage for personal ethics. This is a decisive blow against all legalism: things or places cannot be unclean, only persons. Persons are not defiled by other things, but by themselves and their own disobedience to God. There is no inherent evil in nature, the world, or material things in the Christian ethic. Sin lies in man, and in his misuse of himself and the good things of God's creation. Compare this passage with Jesus' more detailed analysis of man's relation to material possessions in Matthew 6:19-34. Verses 18-19 are a rather unimaginative interpretation of the first half of verse 15, perhaps reflecting the ethical teaching of the early church. Verses 20-23 are a somewhat better interpretation of the second half of verse 15.

e. Two healings, 7:24-37

1. Meeting a Greek woman, 7:24-30

Again Jesus' search for privacy is interrupted. The harshness of the reply in verse 27 to the woman's request for help is the main difficulty here. Some find here a reflection of the early Christian (that is, Jewish-Christian) prejudice against Gentiles. Some find a genuine tension in Jesus' own mind between the claims of the Jews and Gentiles. Some find in Jesus' words merely a half-playful testing of the woman's faith. Jesus is impressed, in any case, by her clever and bold reply, and the cure is effected. This is a fairly rare instance of a cure done at a distance. But the real issue here is not healing so much as it is the relation of the Jew and the Gentile in the kingdom of God.

2. The deaf man with a speech defect, 7:31-37

The unusual gestures and the use of spittle (a traditional habit of ancient exorcists) can perhaps be explained by the man's deafness: he is unable to hear the usual word of command and healing.
The sighing in verse 34 is a trace of Jesus' profound compassion for the sufferer, and perhaps also of anger at the infirmity itself. Mark doubtless has in mind the passage describing the messianic age in Isaiah 35:5-10. So the evangelist here invites us to look beyond the relief of human suffering to a mighty act of God's chosen Servant, bringing the kingdom into history and dethroning the rule of evil in the world.

3. The feeding of the 4,000 and its sequels, 8:1-26

a. The feeding of the 4,000, 8:1-10

Many scholars believe that this feeding is not a second incident of a miraculous feeding, but a variant account of the same event. Perhaps Mark intended the first feeding to symbolize the salvation of the Jews, and this one that of the Gentiles, since it takes place on Gentile soil. It is difficult to explain the disciples' question in 8:4 if there had been a recent incident similar to this. The parallelism between the contexts of both feeding stories is interesting to note:

6:34-44, feeding the 5,000
6:53-56, crossing the Gennesaret
7:1-23, controversy with Pharisees and scribes on defilement
7:24-30, the Greek woman (throwing bread to the dogs)
7:31-37, healing a deaf stammerer

8:1-9, feeding of 4,000
8:10, crossing the sea to Dalmanutha
8:11-13, controversy with Pharisees about signs
8:14-21, sayings about bread
8:22-26, healing a blind man

There are also a number of differences between the accounts. Here we have seven loaves instead of five, 4,000 instead of 5,000, compassion because of the people's hunger here, compassion because they are like sheep without a shepherd in the earlier narrative.

b. The Pharisees ask about a sign, 8:11-13

Paul said (I Corinthians 1:22) that the Greeks seek after wisdom and the Jews look for signs. Here the Pharisees want some visible proofs of Jesus' claims; a tangible, and possibly supernatural, portent.
Jesus refuses to give this sort of proof, though Mark clearly believes that as the supernatural Son of God he could have done so had he wished. c. The mystery of the loaves, 8:14-21 In reading this section, regard verse 15 as a footnote: a warning to beware of the evil influence of the Pharisees and of Herod. It is probably an independent saying that was dropped in here because of the relationship of the ideas of leaven and bread. The disciples have forgotten to bring along food for their boat trip across the sea. Jesus uses this incident to censure them for their forgetfulness about the meaning of the bread in the miraculous feeding. Here we have an interpretation that approaches the kind of thing the author of the fourth gospel does regularly. Mark shows us here how these feeding stories were understood by the early Christians. The feeding was a sign that the kingdom of God was in their midst and that God was sufficient for their needs. This story reminded the early church readers that not even the disciples understood what was happening in their midst. Perhaps, Mark is saying, some of us today do not yet understand the mystery of the loaves. d. A blind man is healed, 8:22-26 Here is a cure much like that of the deaf stammerer; it is done in private, and spittle is used. It seemed to be a difficult cure to effect, for it required a second laying on of hands. There is real artistry in Mark's placing this story here, following the one before. He has just told us of the disciples' blindness to the meaning of the loaves. Now he tells us here that even the blind can be made to see. The blind man saw; the disciples would come to see clearly; and Mark's readers will come to see as well.
<urn:uuid:a9656500-cfef-4c6d-8600-7ae9561918fa>
CC-MAIN-2014-23
http://www.religion-online.org/showchapter.asp?title=1114&C=1205
s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1405997890773.86/warc/CC-MAIN-20140722025810-00241-ip-10-33-131-23.ec2.internal.warc.gz
en
0.960052
2,630
2.8125
3
by Boris Pavlishev
Moscow (Voice of Russia) Jan 25, 2013

Scientists believe that a large crater, which has been discovered on Mars, might have been a lake several billion years ago. A space vehicle, which NASA sent to explore Mars, has discovered layers of clay and carbonate minerals in the walls of this crater. These substances may form in the ground only after contact with water. This crater, which has received the name of McLaughlin, is one of Mars's largest craters. It is 92 km wide and 2 km deep. The space vehicle discovered no traces of washouts on the crater's walls, which means that, most likely, no water has ever come into the crater from outside. If the crater really was once full of water, this water has most likely penetrated from underground. Mars is smaller than the Earth, and gravity on Mars is about three times weaker than on the Earth. Thus, scientists suppose that if underground waters once existed on Mars, the soil layers that contained water were thicker and more clay-like than they were on the Earth. These conditions are ideal for bacteria to appear, scientists say. It is not ruled out that there is still water under the crater's bottom and that bacteria still live there. "The McLaughlin crater is an ideal place for scientists to examine the structure of Mars's soil," Russian scientist Evgeny Chernyakov says. "The fact that there exists such a deep natural hollow on Mars allows scientists to examine Mars's soil without drilling artificial holes," Mr. Chernyakov says. "This makes delivering the relevant equipment to Mars (which would have been very difficult and costly) unnecessary. Now, all that we need is to send a small device to Mars, which would "look" into this crater and take photographs or samples of the soil. From the ribs of the crater, we can rather easily take samples of the ground that would otherwise have been very hard to extract."
On the Earth, scientists have discovered bacteria in samples of ground extracted from a depth of 5 km. They are quite capable of living so deep in the ground if there is water there. This fact makes it possible to suppose that bacteria may also live at a similar depth in the ground of Mars - again, if there is water there. Some scientists even suppose that it is deep in the ground that life first appeared on the Earth. Millions of years ago, the Earth had a very thin atmosphere, and asteroids often hit the Earth's surface (now, they usually burn out in the atmosphere before reaching the Earth). But bacteria that lived deep underground were quite safe from asteroids' hits. Besides, although there was no oxygen in the Earth's atmosphere at that time, the species of bacteria that lived then needed no oxygen at all. "If we find bacteria - or more complicated living organisms - on Mars, this may help us to learn more about the origin of life on the Earth, because life on both planets probably appeared in similar conditions," biologist Elena Vorobyova from Moscow State University says. "Scientists suppose that Mars appeared simultaneously with the Earth," Ms. Vorobyova continues, "and initially, from the geological point of view and the point of view of atmosphere, the two planets developed in a similar way. "Probably, life on Mars appeared approximately at the same time when it appeared on the Earth, and the forms of life on the two planets were very similar. Then, as a result of a certain catastrophe, the composition of Mars's atmosphere radically changed. Some species of bacteria probably survived, but, most likely, in the new conditions, they developed into different forms of life than on the Earth." "Even if there is no life on Mars, by examining Mars, we may better understand what the conditions on the Earth were like when the Earth was young," Elena Vorobyova concludes.
Scientists suppose that the lake in the McLaughlin crater dried up about 3.7 billion years ago. The minerals which form the walls of the crater were formed at the same time. Their age corresponds with the age of the oldest minerals that have been found on the Earth. Scientists also suppose that the McLaughlin crater is not the only place on Mars where living organisms may be found. Moreover, the majority of scientists believe that it won't be necessary to dig deep into Mars's ground to find them. Experiments in laboratories have shown that bacteria that live on the Earth are quite capable of living several dozen centimeters deep in a soil that is similar to that on Mars. Thus, one day we may learn the sensational news that life really exists on Mars.
Source: Voice of Russia
<urn:uuid:1dac1802-06c1-474d-bf10-5f1125251df6>
CC-MAIN-2016-36
http://www.marsdaily.com/reports/Is_there_life_on_Mars_999.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-36/segments/1471983580563.99/warc/CC-MAIN-20160823201940-00063-ip-10-153-172-175.ec2.internal.warc.gz
en
0.964568
1,112
3.671875
4
by Richard Liang ’18

The term “cancer” comprises various diseases involving abnormal cell growth. They become most dangerous after metastasis, when they spread to multiple organs. Though few treatments are currently available for cancers past this stage, a recent study led by Dr. Juwon Park at Cold Spring Harbor Laboratory has identified neutrophils, a type of leukocyte in the blood, as a potential therapeutic target for preventing metastasis. This experiment involved monitoring neutrophil levels in breast cancer cells. Tumors were induced by transplanting cancerous cells into the breasts of mice. The lab mice were then divided into two groups: mice that had undergone 4T1 murine breast cancer cell transplants and mice that had undergone 4T07 murine transplants. The 4T1 cells were more prone to metastasize than the 4T07 cells. Confocal intravital lung imaging (CILI) was used to investigate the presence of any abnormal structures involved in the relationship between neutrophils and the onset of metastasis, especially any colocalized DNA structures. The results demonstrated neutrophil levels in the 4T1 mice that were almost five times higher than those in the 4T07 mice. CILI results indicated that neutrophils developed neutrophil extracellular traps (NETs), which promoted cancer metastasis. It was suggested that the rise in neutrophil levels was linked to the onset of metastasis. Based on this discovery, researchers are now trying to target neutrophils and the NETs to develop potential methods for preventing cancer metastasis.
- J. Park, et al., Cancer cells induce metastasis-supporting neutrophil extracellular DNA traps. Science Translational Medicine (2016).
- Image retrieved from: https://upload.wikimedia.org/wikipedia/commons/0/09/Neutrophils.jpg
<urn:uuid:3dec390b-4ced-4f0c-b61f-8cfe46d78e03>
CC-MAIN-2022-27
https://sbyireview.com/2016/11/12/effects-of-neutrophil-production-on-tumor-metastasis/
s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104249664.70/warc/CC-MAIN-20220703195118-20220703225118-00651.warc.gz
en
0.943435
393
3.125
3
With that in mind, the research team decided to do a proof-of-concept study on the ISS to see how the mold would fare in blocking space radiation. They set up petri dishes with C. sphaerospermum fungi on one side and a control with no fungi on the other. Underneath, a pair of radiation detectors were connected to Raspberry Pi devices to capture radiation levels, and measure humidity, temperature, flow and other parameters. The fungi survived just fine in the microgravity environment and lowered radiation levels by nearly two percent. That could rise to as much as five percent if the fungi fully surrounded an object, the team calculated. Considering the relatively thin 1.7 mm fungal “lawn” (layer), “this shows the ability of C. sphaerospermum to significantly shield against space radiation,” the team wrote in a preliminary research paper. Extrapolating further, the team figured that a 21-cm (8-inch) thick layer would “largely negate” the annual dose you’d get on Mars compared to Earth, which is shielded by our magnetic field. That would drop to just 9 cm or 3.5 inches when combined with Martian soil, aka regolith. A big benefit of this for interplanetary travel is that you’d need to carry just a small amount of fungus aboard a spaceship. Once on Mars, astronauts would simply add nutrients and grow it into the large amounts necessary to shield any bases. It’ll still be many years before we send astronauts to the red planet, but no less than three exploration missions, including two rovers, will be en route by the end of July. With the launch of China’s Tianwen-1 last week, the next to launch will be NASA’s Perseverance rover, complete with its own helicopter, on July 30th (Thursday) — so stay tuned for more coverage on that.
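The jump from a 1.7 mm lawn to a 21 cm layer is consistent with a simple exponential attenuation model, I(x) = I0 * exp(-mu * x). A sketch under assumed numbers: the roughly 5% figure for full fungal coverage comes from the article, but the model and its calibration are my own illustration, not the team's published method:

```python
import math

# Calibrate an assumed exponential shielding model from the article's
# figures: a 1.7 mm layer fully surrounding an object blocks ~5%.
thin_x = 0.0017            # layer thickness in metres
thin_reduction = 0.05      # fraction of radiation blocked

mu = -math.log(1.0 - thin_reduction) / thin_x   # attenuation coefficient (1/m)

def reduction(thickness_m: float) -> float:
    """Fraction of incoming radiation blocked by a layer this thick."""
    return 1.0 - math.exp(-mu * thickness_m)

print(round(reduction(0.21), 3))   # 0.998: a 21 cm layer blocks ~99.8%
```

Under this toy model the 21 cm figure does "largely negate" the incoming dose; a real estimate also has to account for particle type and secondary radiation, which a single coefficient ignores.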
<urn:uuid:7c87cffa-1ad1-4ec1-9614-0fde858162c7>
CC-MAIN-2020-50
https://914local.com/chernobyl-mold-could-shield-astronauts-from-deep-space-radiation/
s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141195198.31/warc/CC-MAIN-20201128070431-20201128100431-00097.warc.gz
en
0.957711
448
3.421875
3
Kathleen D. Noble

Explores the complex processes involved in implicit cognition and neuroplasticity; the role of thought, emotion, and meditation in shaping the brain; and possibilities for enhancing human psychological development. Prerequisite: BSE 322.

Our minds are intricately and inextricably involved in creating our physical, psychological, interpersonal, and collective experiences. This course will explore the profound implications of interconnectedness and the roles of mindfulness, happiness, wisdom and rest in helping us become more balanced and aware human beings.

Student learning goals
1. Recognize the ways in which interconnectedness or “one mind” permeates all aspects of our individual and collective lives.
2. Appreciate the power of mindfulness and expanded awareness in achieving and enhancing happiness, harmony, and well-being.
3. Understand the importance of wisdom and rest to consciousness and well-being.
4. Demonstrate the ability to distill, discuss, and evaluate the principal ideas presented in textual material.
5. Demonstrate the ability to reflect on, write about, and discuss your own ideas and insights about these issues.

General method of instruction
BST 321 and BST 322. Commitment to attending and participating in every class.

Class assignments and grading
1. Written outline and analysis of readings [except Sabbath] (25%): Double-spaced, 12 point font. Hard copies only. 2 pages per chapter, single spaced: What are the author’s main points? So what? Your so what? Do not use quotes. All writing must be in your own words. Each outline is due at the beginning of the appropriate class and will form the basis of class discussion.
2. Sabbath reflections (25%): Our readings from “Sabbath” are your opportunity to structure Sabbath principles into your life and learning. Consider this work as an extended meditation about the mindful practice of conscious well-being.
a.
For each set of readings: Do at least one practice from the book or one practice of your own creation based on this set of readings.
b. 5 written reflections (2 pages each, double-spaced): Which retreat practice did you do or create? Where did you do it? When? Why? Describe your experiences during these practices. What was your frame of mind (e.g., thoughts? feelings? insights?)? What did you learn?
3. Participation (25%): Students will be evaluated by the professor and themselves based on their preparation to discuss and raise questions based on the readings, use of notes and texts to support their questions and contributions, and respect shown for other participants. Students must be present in class to earn credit for participation.
4. Final Reflection Essay (25%): 5 pages, double-spaced, 12 point font. This essay is your reflection on what you learned during the course. Your task is to think and write about the material we read or watched and discussed throughout the quarter. I want to know what you think about these ideas and issues and how you and your ideas have grown over the course of the course. Students will present and discuss their final essays during the last day of class.

Each writing assignment will get a variation of the following checks. I'll translate them to grades at the end of the quarter. Here's what they mean:
Check ++: You're a rock star and you taught me something. Thank you.
Check +: Excellent; no improvement needed; you could teach this session.
Check(+): Very good; you've almost reached the heights of excellence; just a little tweaking needed to be great.
Check: Good, acceptable, but with a little work this could be awesome.
Check(-): Needs more work, but you're on the right path.
Check -: Not failing because you tried, but needs a lot more work. If you read my comments and take them seriously and/or if you meet with me you will improve immensely.
Check --: You're not serious, are you?
I truly want all students to be in the check (+) range at minimum.
<urn:uuid:7a707642-a3f0-4e9a-83a8-a4264ecc1a4d>
CC-MAIN-2016-30
https://www.washington.edu/students/icd/B/bst/425kdnoble.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257823133.4/warc/CC-MAIN-20160723071023-00121-ip-10-185-27-174.ec2.internal.warc.gz
en
0.923587
838
2.921875
3
Things to Do in Uluru

Uluru (also known as Ayers Rock) is one of the most recognisable landmarks in the world. It is a large rock formation, estimated to be around 550 million years old. Now a World Heritage-listed attraction, Uluru is not just a natural wonder but a deeply spiritual place. The incredible natural monolith sits snugly in the heart of the Red Centre of Australia. Uluru is a local pilgrimage site set amongst a spiritual and cultural area. It is estimated that Aboriginal people have lived in this region for over 20,000 years. Uluru is a very significant part of the indigenous culture and history. They regard it as a living part of the land, a dwelling for past spirits. Although the relationship between Uluru and the Aboriginal people is long, Uluru itself is far older, dating back hundreds of millions of years. Europeans first discovered it in the 1870s, and the area was opened to tourists in the 1930s. Climbing Uluru was once a common occurrence, but due to unsafe climbing and rock damage, Anangu law and culture now ask the public to refrain from climbing. Its sheer grandiosity is best experienced during the changing colours of sunrise or sunset, when the vibrant oranges turn from milky pink to earthy red. The surrounding wonders at Uluru can be just as awe-inspiring. Australia's outback is home to natural wonders, amazing wildlife, and ancient history. These unforgettable attractions are unique to this area and are found nowhere else in Australia. National Parks bursting with Australian flora and fauna, rock formations, deep canyons, historic artefacts, and incredible outback activities. Discover the world's oldest cultural history at places such as the Uluru Kata Tjuta Cultural Centre. Learn more about the indigenous community when you visit the Uluru Kata Tjuta Cultural Centre, full of historic artefacts, video recordings and photographs of past groups. Different sections span various time periods.
Hear the diverse stories about the lives lived by the ancestors of this incredible land. Kata Tjuta, also known as Olgas, is a similar rock structure to the famous Uluru. Instead of Uluru’s one large rock formation, Kata Tjuta is a collection of thirty-six boulders ranging in sizes, clustered together to form one large structure. Its name means ‘many heads,’ referring to the many boulders attached. This region is a scared place for the indigenous community, with many dreamtime stories regarding it as well as a vast cultural history. Just a few hours’ drive from the stunning Uluru lies the Kings Canyon. Featuring large ancient rock walls sheltering a lush foliage of Australian native. The spectacular red rock walls enclose this plant life haven, home to native animals and cascading waterfalls. Trek through the lush foliage and submerge yourself in this lush wonderland amidst the Australian desert. Venture atop Kings Canyon and see the remarkable views, catch it during sunset or sunrise for a real breathtaking sight. Everyone comes to the Northern Territory for the mighty Uluru, but you will fall in love with the breathtaking surrounds. Explore the Uluru Tours 3 Day Uluru & Kings Canyon Tour 1 Day Uluru Tour from Alice Springs Uluru Sunset and Sacred Sites from the Rock 5 Day Darwin to Alice Springs with Uluru Detour Uluru Sunrise and Kata Tjuta from Ayers Rock
This set of Software Design Multiple Choice Questions & Answers (MCQs) focuses on "Software Engineering Design Methods".

1. Which of these truly defines software design?
a. Software design is an activity subjected to constraints
b. Software design specifies the nature and composition of the software product
c. Software design satisfies client needs and desires
d. All of the mentioned
Explanation: Software design is defined by all of these statements.

2. Which among these is false?
a. A process is a collection of related tasks that transforms a set of inputs into a set of outputs
b. A design notation is a symbolic representational system
c. A design heuristic is a rule providing guidance, with a guarantee of achieving some end
d. A software design method is an orderly procedure for producing software design solutions
Explanation: A heuristic is a rule that is followed, but there is no guarantee of achieving the desired outcome.

3. Which of these statements about stepwise refinement is incorrect?
a. Niklaus Wirth described the first software engineering method as stepwise refinement
b. Stepwise refinement dates from 1971
c. It is a bottom-up approach
Explanation: Stepwise refinement is a top-down approach, not a bottom-up one.

4. What is incorrect about structural design?
a. Structural design introduced notations and heuristics
b. Structural design emphasises procedural decomposition
c. Its advantage is data flow representation
d. It follows the structure chart
Explanation: The biggest drawback of structural design is its data flow diagram.

5. What is the solution for structural design?
a. The specification model following the data flow diagram
b. Procedures represented as bubbles
c. A specification model that is a structure chart, showing the procedure calling hierarchy and the flow of data in and out of procedures
d. Emphasising procedural decomposition
Explanation: Option c is the solution to the central problem; the rest are problems.

6. Which of these are followed by the latest versions of structural design?
a. More detailed and flexible processes
b. Regular notations
c. Wide support by CASE (Computer-Aided Software Engineering) tools
Explanation: The notations used are more specialised and sophisticated ones.

7. Which of these is incorrect about structural design?
a. Transition of problem models to solution models
b. Handling of larger and more complex products
c. Designing object-oriented systems
d. A more procedural approach
Explanation: Structural design does not account for larger and more complex products.

8. What are followed by the design task?
a. Choosing specific classes and operations
b. Checking the model's completeness
c. Following design task heuristics
d. a and b
e. a, b and c
Explanation: All of these tasks are followed by the design task.

9. Which of these analyses is not acceptable?
a. Object-oriented design is a far better approach compared to structural design
b. Object-oriented design always dominates structural design
c. Object-oriented design is given more preference than structural design
d. Object-oriented design uses more specific notations
Explanation: Though object-oriented design is considered the far better approach, it never dominates the structural approach.

10. Which of these does not represent object-oriented design?
a. It follows regular procedural decomposition in favour of class and object decomposition
b. Programs are thought of as collections of objects
c. The central model is the class diagram, which shows the classes comprising a program and their relationships to one another
d. Object-oriented methods incorporate structural methods
Explanation: Object-oriented design does not follow regular procedural decomposition.

Sanfoundry Global Education & Learning Series – Software Architecture and Design.
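The distinction the quiz keeps returning to (procedural decomposition in structured design versus class and object decomposition in object-oriented design) can be made concrete with a small sketch. The names and the order-pricing example here are illustrative only, not taken from any particular method:

```python
# Structured design: the program is decomposed into procedures,
# and data is passed explicitly between them. A structure chart
# would document this calling hierarchy.
def read_order(raw):
    item, qty = raw.split(",")
    return {"item": item, "qty": int(qty)}

def price_order(order, unit_price):
    return order["qty"] * unit_price

# Object-oriented design: the same responsibilities are grouped
# into a class, so the data and the operations on it live together.
class Order:
    def __init__(self, raw):
        self.item, qty = raw.split(",")
        self.qty = int(qty)

    def price(self, unit_price):
        return self.qty * unit_price

if __name__ == "__main__":
    procedural = price_order(read_order("widget,3"), 2.5)
    object_oriented = Order("widget,3").price(2.5)
    print(procedural, object_oriented)  # both compute 7.5
```

Both decompositions compute the same result; what differs is where the boundaries are drawn, which is exactly the point of questions 4 and 10 above.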
The first of its kind, Abina and the Important Men is a compelling and powerfully illustrated “graphic history” based on an 1876 court transcript of a West African woman named Abina, who was wrongfully enslaved and took her case to court. The book is a microhistory that does much more than simply depict an event in the past; it uses the power of illustration to convey important themes in world history and to reveal the processes by which history is made. The story of Abina Mansah—a woman “without history” who was wrongfully enslaved, escaped to British- controlled territory, and then took her former master to court—takes place in the complex world of the Gold Coast at the onset of late nineteenth-century colonialism. Slavery becomes a contested ground, as cultural practices collide with an emerging wage economy and British officials turn a blind eye to the presence of underpaid domestic workers in the households of African merchants. The main scenes of the story take place in the courtroom, where Abina strives to convince a series of “important men”—a British judge, two Euro-African attorneys, a wealthy African country “gentleman,” and a jury of local leaders— that her rights matter. “Am I free?” Abina inquires. Throughout both the court case and the flashbacks that dramatically depict her life in servitude, these men strive to “silence” Abina and to impose their own understandings and meanings upon her. The story seems to conclude with the short-term success of the “important men,” as Abina loses her case. But it doesn’t end there: Abina is eventually redeemed. Her testimony is uncovered in the dusty archives and becomes a graphic history read by people around the world. In this way, the reader takes an active part in the story along with the illustrator, the author, and Abina herself. 
Following the graphic history in Part I, Parts II–V provide detailed historical context for the story, a reading guide that reconstructs and deconstructs the methods used to interpret the story, and strategies for using Abina in various classroom settings. The new, second edition of Abina and the Important Men features a new gender-rich section, Part V: Engaging Abina, which explores Abina's life and narrative as a woman. Focusing on such important themes as the relationship between slavery and gender in pre-colonial Akan society, the role of marriage in Abina's experience, colonial paternalism, and the meaning of cloth and beads in her story, this section also includes a debate on whether or not Abina was a slave, with contributions by three award-winning scholars—Antoinette Burton, Sandra Greene, and Kwasi Konadu—each working from a different perspective. The second edition also includes new, additional testimony that was rediscovered in the National Archives of Ghana, which is reflected in the graphic history section.

Reviews and testimonials

Paul Lovejoy, York University
Abina and the Important Men is an excellent introduction to history and society through an innovative mix of primary text, annotated transcription and highlighted in cartoon form that captures the imagination of new students. It is a must for adoption in first year courses.

Jeremy Rich, Middle Tennessee State University
This is a very strong and original work. All three sections (the inclusion of the primary source, the historical context section and the reading guide) allow for a broad range of discussion topics. Students can compare the graphic novel section to the court transcript and discuss how historians develop historical narratives.

Sharlene Sayegh, California State University, Long Beach
Abina and the Important Men addresses an important gap in the teaching of history, one that recognizes that there are a variety of learning styles.

Jonathan T. Reynolds, Northern Kentucky University
Trevor Getz has pushed the envelope of Africanist scholarship. With Abina and the Important Men he offers unique insight into such contentious topics as personhood, gender, slavery, and colonialism. Along the way, he provides teachers and readers with a powerful tool for investigating the process of giving meaning to historical documents and narratives. This is exactly the sort of work that will help African history escape the dark and dusty halls of academia and help make it relevant to a wider audience. This is GENIUS.

Jason Ripper, Everett Community College
Academia has finally woken up to the interests of students and Oxford University Press is a willing partner in this awakening. Bravo! This book takes college-level course material in a fresh and invigorating direction. The story – images included – is engrossing, addresses themes regularly featured in our courses, and provides needed insight into a people who still get too little treatment even in world history courses. Also, the author's added commentary on the source material and the general historical context ensure that when students have the book with them at home, they will still recognize the academic qualities of the volume.

Erin O'Connor, Bridgewater State University
This is an innovative approach to teaching social history and colonialism in Africa. The graphic history contains beautiful and compelling artwork, and the text closely follows historical documentation. Furthermore, the inclusion of the actual document transcription and historical context make it possible to teach this book on many different levels, getting students to think deeply about and probe the process of how history is made (both in the past and by historians). It would work well in courses on either African history or world history.

Tiffany F. Jones, Cal State–San Bernardino
This is a pioneering work in the narration and representation of African history and will appeal to students of all levels. The book engages in the actual historical process and makes it very evident for students the processes historians go through when compiling such a document. The fact that Abina and the Important Men highlights the difference between primary and secondary documents, and talks in detail about representation and translation, makes it particularly valid for all history classes.

Alicia D. Decker, Purdue University
This is an excellent project! It is fresh, engaging, and historically sound. I would definitely use this text in my Modern Africa and African Women's History classes. I really like the way that the author and illustrator have divided the book into sections for different levels of analysis. Beginning students can focus on the graphic novel, while more advanced students can also discuss the production of historical knowledge and the larger historiography.

Paul S. Landau, University of Maryland
This is an important departure for Oxford University Press and an excellent combination of research and pedagogy. It is a fine work and I will use it in my teaching…. Students today do not easily grasp the difference between a primary and secondary source. This text merges that appreciation — for how historians work — into the fabric of the book.

Maxim Matusevich, Seton Hall University
The project's originality is its main strength; it certainly stands out among other texts on slavery. It also makes the experience of enslavement more immediate, more visual, in other words, it brings it to life.
By Maeve Lewis BA (Hons) (Psych), Dip. Psychotherapy, H.Dip.Ed., MIAHIP

The term "eating disorder" covers a wide variety of behaviours ranging on a continuum from the self-starvation of anorexia nervosa through the binge eating and purging of bulimia, to the consistent extreme overeating of obesity. Eating disorders imply a pattern of eating which is outside that which is socially acceptable and which, in extreme situations, can lead to ill-health or even death. Nutritional needs and hunger are replaced by an intense, obsessive relationship with food as the primary motivator of what, and how much, to eat.

Incidence of Abusive Relationships with Food

When I reviewed the clinical histories of the female clients I have worked with who have been sexually abused in childhood, I realised that, almost without exception, abuse of food is an issue. There is a wide literature available on the extreme forms of eating disorders – anorexia, bulimia and obesity – and there is strong evidence to suggest a history of childhood sexual abuse in significant numbers of women who develop these disorders (1). Very little has been written about the array of eating behaviours along the continuum which are not in themselves life-threatening, but which overshadow the lives of those caught up in abusive relationships with food: excessive continuous dieting, comfort eating, overeating, "chocaholism", mild bulimia and so on. Yet, for clients who have been sexually abused, it is in this arena that the vast majority conduct their battles with food, and these are the issues which tend to recur again and again in the therapeutic setting. Therefore, it is my intention in this article to concentrate on the unhealthy, but not necessarily life-threatening, relationships which women who have been sexually abused tend to develop with food and eating, and the challenges this presents for the therapist.

Influence of Other Traumas on Eating Behaviour

It is difficult for two reasons to isolate sexual abuse in itself as a predictor of eating disorders. Firstly, I think it is fair to say that most women in the Western world have an uncomfortable relationship with their bodies and very few are satisfied with their natural body shape (2). Women diet and exercise themselves in a futile effort to reach what is, for most of us, an impossible target weight or size. Women who have been sexually abused are subject to the same societal influences as everybody else. Secondly, sexual abuse always takes place within a context, and the experience of sexual abuse is highly individualised. It is generally accepted that long-term sexual abuse within the family is among the most destructive forms of sexual abuse. The factors which allow intrafamilial sexual abuse to take place, to remain secret often even among family members, or to be tolerated by other adults in the family, suggest an environment where the needs of the child are subsumed to the needs of the sexual abuser. Quite apart from the experience of sexual abuse, children growing up in such an environment have great difficulty in developing a strong, healthy sense of self. In addition, other traumatic experiences in childhood, such as emotional abuse or neglect, unresolved bereavement, abandonment by primary caretakers, perinatal trauma and poverty, can enhance the impact of sexual abuse. Where food and eating is an issue, the influence of other traumas can often be seen in the complex patterns of eating behaviours which emerge, and it becomes impossible to identify sexual abuse specifically as a causative factor.

Most women who have been sexually abused in childhood present with a distorted relationship to food. This tends to mirror the dynamic which operates in a relationship where a child is being sexually abused, and the ways in which this dynamic affects the developing intrapsychic world of the person (3). It is as if the effects of sexual abuse as experienced by the client can be expressed symbolically through the patterns of eating. There are a number of themes which tend to recur among people who have been abused which manifest in the abuse of food.

The Physical Nature of Sexual Abuse

Sexual abuse by its nature is very physical. The abuse is perpetrated through the body, and the resultant trauma becomes embodied. It is very common for people who have been sexually abused to develop patterns of eating which are either an attempt to use physical means to assuage and control the emotional and psychological distress associated with the abuse, or to use food in such a way that the physical discomfort arising from the abuse of food mirrors the emotional and psychological distress. In the first situation, the client may present with a pattern of over- or under-eating where food or hunger is used to repress uncomfortable feelings, in much the same way as alcohol or recreational drugs can be used (4). In the second situation, the client may develop a pattern of eating which causes actual physical pain. This is especially obvious in clients who binge eat to the point where they develop severe abdominal cramps. It is as if the physical pain expresses the inexpressible internal pain. Clients who have developed this coping strategy may also have a tendency to self-mutilate. The choice to over- or under-eat does not seem to be as important as the function it serves.

Development of Distorted Body Image

Children who are sexually abused tend to internalise the abuse, and take upon themselves the responsibility for provoking the abuse. As a result, sexually abused clients will usually describe very low levels of self-esteem. This is often projected onto the client's physical appearance, and they feel ugly and unattractive. It is common for sexually abused clients to have a very distorted body image, usually experiencing themselves as being much bigger than they actually are. Dissatisfaction with external appearance, and the battle to conform to stereotyped notions of beauty through modifying food intake, can become the metaphor for a hopeless internal struggle towards self-acceptance. Setting unrealistic goals of weight reduction sets up a cycle of dieting and bingeing which serves further to undermine self-esteem and to perpetuate self-loathing. The discipline involved in perpetual dieting, and the rituals associated with purging, can also be seen as a form of self-punishment, denying oneself the nurturance and the pleasure that can come from good food.

Children who have been sexually abused have experienced profound helplessness and powerlessness. In adulthood, this can manifest as a victim consciousness, where the person sees themselves as being helpless to effect change in their circumstances, and seems frozen in a position of childlike dependence, unable to move towards a more adult autonomy. The relationship which sexual abuse survivors develop with food can serve to reinforce the sense of powerlessness, where food and eating patterns are perceived to be outside the person's control, or are believed to control the person. It also assists in maintaining a victim consciousness by removing responsibility from the client for her life decisions by postponing living: "When I'm a size 10, then I'll be happy/attractive etc."

Food, Shame and Guilt

In sexual abuse, particularly where the perpetrator is a family member, a relationship develops where normal boundaries are violated and breached. As children learn how to relate through their experience in early relationships, the adult survivor is usually struggling with issues of boundaries and control. This is often reflected in the relationship the survivor develops with food: the severe control of dieting and, in extreme cases, anorexia, or the apparent lack of control in the binge eater or obese person. The secrecy and lies involved in an abusive relationship are also mirrored in the food relationship – eating secretly and covertly, covering up amounts eaten or not eaten, and the shame and guilt which ensue.

Sexuality and Eating Disorders

It is perhaps in the area of sexuality that the link between sexual abuse and eating disorders is most clearly seen. It is almost impossible for a person who has been sexually abused to develop a comfortable relationship with their own sexuality. While most people who have been sexually abused go on to have sexual relationships, it is an aspect of relationship that is fraught with difficulty. In particular, fear of intimacy and revulsion towards bodily functions prevent the person engaging freely and openly in sexual activities. In addition, many people who have been abused are very cut off from their bodies. They may experience themselves as being disembodied, existing from the neck up, and therefore out of touch with bodily sensations. They may also feel ugly and unattractive, as discussed above. Since sexual expression is inextricably linked with the body, it is very common for clients to use food to control or change their natural body shape, as an avoidance of sex. This emerges in two forms. At one extreme, clients will control their body to the point that they have not developed their bodies. This will be noticed particularly in women in their thirties and older who have retained an adolescent shape, where they do not "fit" with their bodies, where the body appears almost stunted. At the other extreme, clients will eat excessively to the point that their bodies are bloated with weight, so that their shape is lost inside an armouring of fat (6). Given the social ideal of sexual attractiveness currently prevailing, and the obsession with thinness, the layers of fat not only effectively protect the woman from involvement in sexual activity, but also reinforce her self-image of ugliness.

Exploring the Client's Relationship with Food

Women who have been sexually abused, and who seek therapeutic intervention, will rarely present at the first session with food issues, unless their disorder has evolved into anorexia or serious bulimia. This is partly because of the shame that is attached to lack of control around food, and partly because, for many clients, an unhealthy relationship with food has become such an intrinsic part of their lives that they are not consciously aware of it being a problem. In general, with issues of food abuse, the approach I take is to avoid becoming entangled in the mechanics of the patterns, but rather to explore the client's relationship with food as a metaphor for their relationship with self and others. Where anorexia, bulimia or severe obesity are involved, I will insist on a medical check-up to evaluate the level of physiological damage that has been sustained. With anorexia, I will contract a minimum weight which the person must sustain, and explain that should they go below that weight, they will no longer be sufficiently well to engage in psychotherapy.

Healing within the Relationship

The person who has been sexually abused has been traumatised in the context of a relationship, and I believe that healing must also take place within a relationship. The food relationship may be the most intense relationship in the client's life at that point, and the type of pattern that the client has evolved with food will generally be a good indicator of the transference issues which are going to surface in the therapeutic relationship. A key issue will be intimacy, and the client's difficulty in being vulnerable and trusting within the relationship with the therapist. Victim consciousness is likely to emerge in relation to the client's life in general, but also as regards the relationship with food. The therapist will find that the client is projecting the same power and responsibility onto the therapist as she has done with food. However, even when the pattern of eating seems to be out of control, and the client perceives herself as helpless, the underlying dynamic will be one of the client trying to control her world through eating or not eating, and the impact this has on her body.

Realising the Danger of Power

The therapist can find herself getting drawn into subtle (or not so subtle) power games. If the current pattern of eating has been allowed to become a major issue in therapy, the therapist may find herself involved in a struggle to enforce healthy eating patterns which the client will sabotage. Decisions around food must be presented as a matter of choice which only the client has the power to exercise. It is helpful to explore fully the ways in which the eating patterns have served the client, the investment she has in maintaining these patterns, and her fears regarding life possibilities if the relationship with food were to be let go. It will usually be found that the client is terrified of moving on to a life where she is free to make life choices but also has to be responsible for them. The familiarity of the constricted world of pain and struggle may seem safer than the possibility of living to the full. Exploring the patterns of food abuse, and the needs which the client is attempting (but failing) to meet through eating, allows those needs to be acknowledged as valid, so that more appropriate ways of meeting them can be developed. As one client who developed anorexia in adolescence remarked: "I wanted my family to notice how unhappy I was without me having to tell them."

Freeing the Intrapsychic Pain

Food abuse, in whatever form, will, at its core, be an attempt to numb the appalling intrapsychic pain of sexual abuse and to block the traumatic experience. One of the tasks of therapy is to facilitate the person to move towards the repressed pain, to allow themselves to experience it and, ultimately, to free themselves from it. This is usually terrifying for the client, and as she moves towards experiencing the emotional pain, the food relationship may intensify in a desperate attempt to ward off what seems to be unbearable and overwhelming. The therapist needs to be very open and non-judgemental, to be able to hold the client psychologically, but also to be able gently to challenge. As the sexually abused person has been traumatised through their body, most of the trauma will be held in the body. Bodywork can be very helpful at this time, in helping the person to embody herself and become aware of her body and its sensations, in facilitating emotional catharsis, and, ultimately, in developing a new relationship with her body.

Focussing on the Body Image

Once a client becomes more in touch with her body, it becomes possible to begin to work with body image, which I see as a projection of internal self-image. Challenging distorted cognitive perceptions, using imagery and art work, and suggesting regular massage can all facilitate this process. I view the focus on body image as a parallel process which must be accompanied by a shift in the underlying perception of self. Encouraging the client to begin to care for her body in a non-punitive, loving way opens the way towards self-acceptance at a deeper level. Helping the client to see the patterns of eating she has developed as a survival mechanism can facilitate shifting the shame she has experienced both at the time of the abuse and as a result of her eating patterns. In other words, the food relationship can come to be seen as a tactic that allowed the client to survive sexual abuse and its aftermath. While at one time it had value, it is now no longer necessary.

Moving towards Self-Acceptance

Working with people who have been sexually abused in childhood is usually a long, slow process. The patterns of eating and the intense relationship with food will wax and wane during this time. Ultimately, my aim with a client is to reach a point where she understands the way in which her relationship with food enabled her to block off the traumatic experience, to support her in facing her pain so that she no longer needs to engage in old patterns of defence, and to be with her as she moves towards a place of self-acceptance. Unlike other substances that a client may abuse, it is not possible to abstain from food. Therefore, it is unrealistic to assume that food and eating will never be an issue again for the client; in times of stress, she may find herself slipping back into old ways of coping with emotional stress. However, since she has acknowledged and experienced the trauma of the sexual abuse and moved from a victim stance, she will be aware that she now has alternative ways at her disposal of responding to life.

Maeve Lewis is Director of New Day Counselling Centre.

References
1. Richard, C., Hall, M. et al (1989) "Sexual Abuse in Patients with Anorexia Nervosa and Bulimia", Psychosomatics, Vol. I, No. 1, 73-79.
2. Wolf, N. (1990) The Beauty Myth, Vintage.
3. Cole, P. & Putnam, F. (1992) "Effect of incest on self and social functioning: A developmental perspective", Journal of Consulting and Clinical Psychology, 60, 174-184.
4. McFarland, B. & Baker-Baumann, T. (1988) Feeding the Hungry Heart, Hazelden.
5. Alexander, P. (1992) "Application of attachment theory to the study of sexual abuse", Journal of Consulting and Clinical Psychology, 60, 185-195.
6. Orbach, S. (1984) Fat is a Feminist Issue, Hamlyn.
EnergyPlus is a whole-building energy simulation program that engineers, architects, and researchers use to model both energy consumption—for heating, cooling, ventilation, lighting, and plug and process loads—and water use in buildings. Some of the notable features and capabilities of EnergyPlus include:
- Integrated, simultaneous solution of thermal zone conditions and HVAC system response that does not assume the HVAC system can meet zone loads, and that can simulate unconditioned and under-conditioned spaces.
- Heat-balance-based solution of radiant and convective effects that produces surface temperatures, thermal comfort, and condensation calculations.
- Sub-hourly, user-definable time steps for the interaction between thermal zones and the environment, with automatically varied time steps for interactions between thermal zones and HVAC systems. These allow EnergyPlus to model systems with fast dynamics while also trading off simulation speed for precision.
- Combined heat and mass transfer model that accounts for air movement between zones.
- Advanced fenestration models, including controllable window blinds, electrochromic glazings, and layer-by-layer heat balances that calculate the solar energy absorbed by window panes.
- Illuminance and glare calculations for reporting visual comfort and driving lighting controls.
- Component-based HVAC that supports both standard and novel system configurations.
- A large number of built-in HVAC and lighting control strategies, plus an extensible runtime scripting system for user-defined control.
- Functional Mockup Interface import and export for co-simulation with other engines.
- Standard summary and detailed output reports, as well as user-definable reports with selectable time resolution from annual to sub-hourly, all with energy source multipliers.
One of the strong motivations behind EnergyPlus is the integration of all aspects of the simulation: loads, systems, and plant. Based on a research version of the BLAST program called IBLAST, system and plant output is allowed to directly affect the building's thermal response, rather than all loads being calculated first and systems and plants simulated afterwards. The simulation is coupled, allowing the designer to more accurately investigate the effect of undersizing fans and equipment and the impact that might have on the thermal comfort of occupants. The goals of EnergyPlus are ambitious, but achievable through the approach described above. EnergyPlus aims to be a program that is reasonably simple to work with from the perspective of both users and developers. The development team put forth significant effort to keep simulation code and algorithms as separate and as modular as possible, to minimize the overall knowledge someone would need in order to add models to the program. This minimizes the resource investment required and extends the impact of current research in the field of building energy analysis and thermal load calculations. Finally, the full coupling of building envelopes, systems, and plants gives a better understanding of how a building responds not only to the environmental factors that influence it but also to the HVAC system as it attempts to meet the building's thermal loads. It is also important to note that testing and validation are central concerns in the development of any new program such as EnergyPlus. While there are significant sections of EnergyPlus that contain brand-new code, the majority of the heat balance code can be traced back to the original parent programs.
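EnergyPlus's coupling of zone conditions and HVAC response can be illustrated with a toy single-zone model (not EnergyPlus code; all parameter values are invented for illustration): the HVAC supply is capacity-limited and feeds back into the zone heat balance at each sub-hourly step, so an undersized system visibly fails to hold the setpoint instead of being assumed to meet the load.

```python
# Toy coupled zone/HVAC time-step loop (illustration only; all values invented).
def simulate(hours, capacity_w, setpoint_c=20.0, outdoor_c=0.0,
             ua_w_per_k=200.0, thermal_mass_j_per_k=5e6, dt_s=600):
    """March a single-zone heat balance forward, letting the capacity-limited
    HVAC response feed back into the zone temperature each sub-hourly step."""
    temp = setpoint_c
    for _ in range(int(hours * 3600 / dt_s)):
        loss = ua_w_per_k * (temp - outdoor_c)  # envelope heat loss (W)
        # Heating needed to offset losses and restore the setpoint this step.
        demand = max(0.0, loss + thermal_mass_j_per_k * (setpoint_c - temp) / dt_s)
        supplied = min(demand, capacity_w)      # HVAC may be undersized
        temp += (supplied - loss) * dt_s / thermal_mass_j_per_k
    return temp

big = simulate(24, capacity_w=10_000)   # adequately sized system
small = simulate(24, capacity_w=2_000)  # undersized system drifts off setpoint
```

Running the same day with a 10 kW versus a 2 kW system shows the coupled effect: the adequately sized system holds the 20 °C setpoint, while the undersized one drifts down toward roughly 10 °C instead of the model pretending the load was met.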
<urn:uuid:cbb1656c-e9ec-46dd-96b2-a502c692b11b>
CC-MAIN-2022-40
https://www.apec-iap.org/warm-sources-of-energy-plus-power-program-that-gives-a-euthanized-way/
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335573.50/warc/CC-MAIN-20221001070422-20221001100422-00225.warc.gz
en
0.920265
722
2.578125
3
Urban and industrial development pose many challenges today in the Mediterranean region, including the management of waste and pollution, more than 90% of which originates on land. Besides the challenge of reducing pollution, essential elements in ongoing efforts for the ecological health of the Mediterranean are the sustainable management of maritime transport, oil exploration, industrial fishing and tourism. We must also support the creation and management of protected marine areas in order to restore the most affected ecosystems, maintain fish stocks, and preserve certain endangered ecosystems. Beyond just observing the situation and sounding the alarm, we are working to promote innovation and solutions for the future of plastics. We want to make concrete progress in the ongoing political processes – on a regional, national and international level. - TO STIMULATE DEBATE: WHAT SOLUTIONS? Reducing pollution at the source: education, recycling, promotion of a circular economy. Integrated watershed management: cleaning of canals and rivers. Green packaging: producer responsibility. Bioplastics: derived from renewable biomass sources, biodegradable, or oxo-fragmentable. What real impact will they have, and which ones are a real solution? Reduction of chemical pollution at the source: international regulations. Research and innovation: plastic and micro-organisms. Which organisms can break down what types of plastic? Prohibition of single-use plastic bags: France could become an example in this area. Europe has already adopted (in May 2014) a text setting goals for member countries to reduce the number of single-use plastic bags. Tara considers this text a step forward, but insufficient. - TWO FORMS OF PLASTIC POLLUTION AT SEA WASTE AND PLASTIC DEBRIS: Bottles, bottle caps, scraps. About 6.5 million tons of waste are dumped into the world's oceans and seas every year (about 206 kilograms per second), and 80% of it is plastic.
MICROPLASTICS: granules, beads, microbeads, textile fibers – complex, invisible pollution difficult to treat. While macro-waste directly impacts fish and seabirds, microplastics have an impact on marine microorganisms and therefore the entire food chain. - THE MEDITERRANEAN IN NUMBERS >450 million people live in coastal areas of the Mediterranean, in 22 countries. >In just 30 years, from 1970 to 2000, the overall population of the Mediterranean countries grew from 285 to 427 million people, with two collateral phenomena – coastal development and urbanization. >The Mediterranean Sea is home to nearly 8% of marine biodiversity, although it represents only 0.8% of the ocean’s surface. We have now identified 925 invasive species in the Mediterranean. 56% of these are here to stay, according to a study by the Blue Plan (UNEP). >The Mediterranean concentrates 30% of global maritime traffic, via the Suez Canal. >There are about 60 offshore oil rigs for exploration and exploitation of hydrocarbons in the Mediterranean. >An estimated 90% of pollution in the Mediterranean comes from land. >The Mediterranean region is the world’s largest tourist region, attracting about 30% of international tourism.
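As a back-of-the-envelope check on the annual dumping figure above (assuming metric tons of 1,000 kg each), the yearly total converts to a per-second rate as follows:

```python
# Convert an annual waste figure to a per-second rate (assumes metric tons).
TONS_PER_YEAR = 6.5e6
SECONDS_PER_YEAR = 365 * 24 * 3600

kg_per_second = TONS_PER_YEAR * 1000 / SECONDS_PER_YEAR   # roughly 206 kg/s
plastic_kg_per_second = 0.8 * kg_per_second               # the ~80% that is plastic
```

Under those assumptions the total works out to about 206 kilograms of waste entering the sea every second, of which roughly 165 kg is plastic.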
<urn:uuid:ed11accc-eec9-43a6-8977-1a5739550cc6>
CC-MAIN-2020-16
https://oceans.taraexpeditions.org/en/m/environment/mankind-the-ocean-pollution/les-enjeux-environnementaux-en-mediterranee/
s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371576284.74/warc/CC-MAIN-20200405084121-20200405114121-00471.warc.gz
en
0.906554
666
3.453125
3
Inherited retinal degeneration is a group of genetic retinal disorders characterized by the death of photoreceptor cells. Over 150 genes are associated with inherited retinal degeneration; the proteins encoded by these genes are required not only for photoreceptor development, maintenance, phototransduction and synaptic transmission but also for retinal pigment epithelium cell integrity and function. The use of animal models of inherited retinal degeneration facilitates understanding of the underlying disease mechanisms and allows assessment of preclinical gene-replacement treatments. Gene therapy has been performed in animal models with different types of retinal degeneration (e.g. Leber congenital amaurosis (LCA), retinitis pigmentosa, and cone-rod dystrophies) and has been shown to significantly improve visual function. Clinical characterization and genetic diagnosis of patients with inherited retinal diseases offer opportunities for the evaluation of gene therapy in clinical trials. - inherited retinal degeneration - gene therapy
<urn:uuid:58486757-fc7a-43f9-b3d5-d47ae23b6e53>
CC-MAIN-2023-06
https://researchonline.gcu.ac.uk/en/publications/reflection-on-the-efficacy-of-gene-therapy-in-the-treatment-of-in
s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500158.5/warc/CC-MAIN-20230205000727-20230205030727-00817.warc.gz
en
0.895064
206
2.984375
3
The word 'holistic' has become one of the most commonly used terms among healthcare centres today. Have you ever thought about the term itself? It is endorsed by almost all healthcare facilities: corporate hospitals, ayurvedic treatment centres, luxurious resorts and spas, and integrated healthcare centres. The word originates from the Greek 'holism', meaning all, entire, total. Healthcare providers use this definition to characterize their approach to treatment as treating the person as a whole, not just the illness. Generally, there are two meanings of holistic care as the term is used in society. First, it means that the interdependent parts of the body are considered as a whole in treatment. Second, it means the use of alternative therapies for treatment. An interesting quote from the American Holistic Health Association is, "… for the part can never be well unless the whole is well". The association goes further to commend the ancient healing traditions of India and China, dating back 5,000 years, where the stress was on a healthy way of life in harmony with nature. This has driven today's upsurge of ayurvedic treatment, herbal treatment, acupuncture and traditional Chinese treatment. Healthcare centres are also cashing in on the present generation's 'way of life', which is packed with stress, pressure and a lifestyle changed by globalisation. These set-ups have mostly been of South-East Asian origin, given the region's rich heritage of medicine and treatment since ancient times. Now the question that arises is, "Is there a need for holistic treatment and for promoting holistic well-being?" The answer is without doubt 'YES', with due consideration to the nature of illnesses that need a comprehensive care plan.
Physical health issues such as cardiac problems, hypertension, diabetes, arthritis, asthma, skin problems, backaches, chronic fatigue syndrome and migraine require all-inclusive care even beyond general medicine, and holistic care can help patients overcome their symptoms and lead a pain-free life. The approach to such treatment beyond medicine could be a combination of alternative medicine along with regular allopathic medication. Physical health, by and large, gets due attention. But have you ever thought about the need for holistic mental health? I believe there is a need for recognition of, and importance to be given to, a person's mental health. Being a part of the Cadabam's Group, I have realised the need for and the importance of holistic care in crafting success stories for those suffering from these illnesses. The holistic mental health model includes concepts based upon educational, psychological, religious and sociological perspectives. The model also includes theoretical knowledge of personality, social, clinical, health and developmental psychology. I trust that today there is a need to recognise holistic care itself and, furthermore, a greater need to understand the call for holistic mental health care as well. This will help people live life more fully than they otherwise would.
<urn:uuid:e21168ad-eb22-4638-9670-61232724f269>
CC-MAIN-2020-40
https://www.cadabams.org/holistic-treatment-and-promoting-holistic-well-being/
s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600401582033.88/warc/CC-MAIN-20200927215009-20200928005009-00696.warc.gz
en
0.951838
1,087
2.734375
3
The healing properties of this exotic fruit are well known to all of us, and they can benefit cancer patients, as pineapple contains the precious enzyme bromelain. Bromelain not only serves as a preventive against malignant diseases but also helps in their treatment, making it an ultimate cancer killer. Scientists studied the effects of bromelain and of 5-fluorouracil (5-FU), which is used as a conventional cancer therapy, in a single animal study. The advantage of bromelain is that it acts harmfully only on malignant cells and does not cause side effects. Scientists have also identified two bromelain molecules in pineapple peel. The first stimulates the immune system to destroy cancerous cells, and the second blocks a particular protein that is responsible for as much as 30 percent of cancers. There is a special effect on breast, lung, colon and ovarian cancer cells and on melanoma cells. Bromelain is also very effective in treating localized inflammation with edema, relieving inflammation, swelling, and scarring. It also removes cellulite, improves tissue drainage, and relieves bruising.
<urn:uuid:ed62e9b0-9bde-4f72-a3ed-3cfdae1076f3>
CC-MAIN-2019-26
https://www.oldnaturalcures.com/general/throw-away-pineapple-peel-ultimate-cancer-killer/
s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627998716.67/warc/CC-MAIN-20190618103358-20190618125358-00159.warc.gz
en
0.955752
234
2.59375
3
Professor David Ron FMedSci FRS David Ron’s research investigates the ways in which the cells of complex organisms cope with the effects of incorrectly folded proteins. Such proteins represent a wasted effort for cells and their build-up leads to proteotoxicity, which damages cells over extended periods of time. David aims to understand the clinical impact of this stress in diseases such as Alzheimer’s and type 2 diabetes. Newly formed proteins fold into their three-dimensional shape in an organelle known as the endoplasmic reticulum, which has a limited capacity for folding proteins. David was able to reveal the molecular mechanism that cells use to match their rate of protein production with their capacity for folding — part of the so-called unfolded protein response. David employs biochemical and genetic methods to study the unfolded protein response through the Caenorhabditis elegans worm and other model animals such as mice. His ongoing research may lead to treatments for neurodegenerative diseases — often the result of misfolded proteins — based on manipulating this response in the cell.
<urn:uuid:a885672b-1ed4-459a-a8d6-0024375cf886>
CC-MAIN-2017-22
https://royalsociety.org/people/david-ron-12195/
s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463605188.47/warc/CC-MAIN-20170522151715-20170522171715-00003.warc.gz
en
0.953262
225
2.625
3
El Salvador's archaeological sites are places where the memory of the ancestors who inhabited these lands for centuries is still present; they bear witness to the structures that indigenous populations once erected throughout the country. The main groups were the Maya, the Nahua and the Lenca. It should also be noted that these witnesses to indigenous culture form part of the so-called Archaeological Route of El Salvador. Joya de Cerén It is located in the department of La Libertad, 30 minutes from San Salvador. The site was declared a World Heritage Site in 1993, being considered a place of great cultural value. It was discovered in 1976. At Joya de Cerén you can see scenes of indigenous life from many centuries before the arrival of the Spaniards. The tour for visitors comprises three excavation areas with ten separate structures that show what life was like in those times. It is believed that the place was abandoned around AD 600 due to a volcanic eruption, which left it buried underground. It is located 32 kilometers from San Salvador. This was a site where various religious ceremonies took place between AD 600 and 900. It, too, was buried by a volcanic eruption, which occurred in 1658. With an area of approximately 35 hectares, it is one of El Salvador's largest pre-Hispanic centers. It currently has a museum, a handicrafts shop, local guides and a cafeteria. It was discovered in 1892, although it was not officially recorded until 1940. It is located in the municipality of Chalchuapa in the department of Santa Ana, 80 kilometers from San Salvador. Tazumal means "the place where souls are consumed", so it is believed to have been a kind of indigenous cemetery. Indeed, within its 24-meter-high structure, tombs were found containing more than 116 vessels, jade jewelry, iron pyrite mirrors, ballgame artifacts and lizard-shaped ceramics.
This sophisticated Mayan settlement existed from roughly AD 100 to 1200 and was associated with Copán and Teotihuacan and with Toltec influence. It includes an estimated 300 hectares of continuous buildings, including a core of large civic-religious structures surrounded by a compact complex of domestic architecture. Its name means "place of women". Cihuatán was occupied only for a short time, between AD 900 and 1100, and was abandoned during the phase named Guazapa. It is El Salvador's largest archaeological site, with an area of three square kilometres. It is located 37 kilometres north of San Salvador, on the trunk line of the northern highway that runs from San Salvador to La Palma. It is located a few meters from the entrance to Chalchuapa, in the department of Santa Ana. It was occupied for 10 centuries, from 500 BC until the arrival of the Spaniards. Four carved stones more than one meter in height can be seen on the site. It currently has an indigo workshop in which visitors can take part by creating their own stamped design.
<urn:uuid:9d5e695a-4a5b-4563-bea2-d37070e1ef20>
CC-MAIN-2023-23
https://www.elsalvadortips.com/archaeological-sites-in-el-salvador
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224647459.8/warc/CC-MAIN-20230531214247-20230601004247-00702.warc.gz
en
0.978539
669
3.578125
4
As fewer boys today are getting circumcised than in the past, a new study adds support for the procedure. The study, published Monday in Cancer, a journal of the American Cancer Society, found that circumcision reduces the risk of prostate cancer. The authors suggest that circumcision lowers the chance of infections and inflammation that may contribute to cancer. Many studies have found that sexually transmitted diseases (STDs), including HIV, are associated with an increased risk of prostate cancer. Other studies have found that circumcision reduces the risk of infections, including STDs. Viruses can slip into the mucosal layers under the foreskin and thrive in the moist environment there, so removing the foreskin appears to lower the risk of infections taking hold. Since circumcision reduces the risk of such infections, the authors theorized that it would also reduce the incidence of prostate cancer. The investigators analyzed data from more than 1,700 men with prostate cancer and a nearly equal number of men without cancer. Approximately 70 percent of the men, who ranged in age from 35 to 74, had been circumcised, the vast majority right after birth. Men who had been circumcised before their first sexual intercourse were 15 percent less likely to develop prostate cancer than those who were not circumcised. “From these results, we estimate that circumcision may prevent about 10 percent of all prostate cancer cases in the general population,” said Janet L. Stanford, a co-author of the study and research professor at Fred Hutchinson Cancer Research Center in Seattle. Men who were circumcised after their first intercourse, however, did not benefit from the reduced risk. In addition, men circumcised prior to their first sexual encounter had an 18 percent reduced risk of developing a more aggressive form of prostate cancer. Infections are estimated to be the cause of 17 percent of cancers worldwide.
Other cancers that have been linked to infections include cervical, stomach and liver cancers. In fact, one study found that circumcised men are less likely to infect their female partners with the human papillomavirus, or HPV, which is linked to cervical cancer. Infections are thought to cause chronic inflammation, which can lead to DNA damage as well as other changes that may help cancers thrive. The study did not find a higher rate of self-reported STDs in men who had prostate cancer, though the authors suggest that the men may not have known they had an STD if they had been carriers without symptoms.
<urn:uuid:b54b0859-9506-4655-b14f-c182b1b8025c>
CC-MAIN-2018-09
http://www.foxnews.com/health/2012/03/12/circumcision-linked-to-lower-prostate-cancer-risk.html?intcmp=related
s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891807825.38/warc/CC-MAIN-20180217204928-20180217224928-00234.warc.gz
en
0.977734
481
3.015625
3
Physician researchers and data scientists are expanding the potential uses of artificial intelligence in health care, but organizations still have a ways to go before they're widely used on patients, experts said at NYU Langone's Health Tech Conference last week. In November 2016 researchers at Google, along with doctors in the U.S. and India, published a study in JAMA on how it used 128,000 images to create a deep learning algorithm that could detect diabetic retinopathy, which can cause blindness. Three to five ophthalmologists, from a group of 54, evaluated each image to train the algorithm. They found the algorithm was then able to diagnose the eye disease at a similar rate as ophthalmologists. Google is now studying how it could deploy the technology in areas where eye specialists are scarce. But Philip Nelson, director of engineering at Google Research, who presented at NYU's event, said technologists and medical professionals must work closely to ensure that tools are able to answer the right questions when they're deployed. "If I'm going to go out to a village in India or Thailand screening people," he said, "they don't want to know just if they have diabetic retinopathy. They want to know if they need to see a doctor." He disputed claims that AI could result in less of a need for doctors. Improved screenings would help identify patients in need of care earlier, he said. "People talk about, 'AI is going to eliminate doctors' work.' Not at all. We're going to drive demand for doctors," Nelson said.
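The labeling process described in the JAMA study (several ophthalmologists grading each image, with the consensus used as ground truth) and the screening metrics typically reported for such a classifier can be sketched as follows. All grader opinions and model outputs below are invented toy values, not data from the study.

```python
from collections import Counter

def consensus(grades):
    """Majority vote over per-image grades (1 = referable retinopathy, 0 = not)."""
    return Counter(grades).most_common(1)[0][0]

def sensitivity_specificity(truth, predicted):
    """Fraction of true positives caught, and of true negatives correctly cleared."""
    tp = sum(1 for t, p in zip(truth, predicted) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(truth, predicted) if t == 0 and p == 0)
    fn = sum(1 for t, p in zip(truth, predicted) if t == 1 and p == 0)
    fp = sum(1 for t, p in zip(truth, predicted) if t == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)

# Three invented grader opinions per image -> consensus reference labels.
panel_grades = [(1, 1, 0), (0, 0, 0), (1, 1, 1), (0, 1, 0)]
truth = [consensus(g) for g in panel_grades]
algorithm = [1, 0, 1, 1]  # invented model output for the same four images
sens, spec = sensitivity_specificity(truth, algorithm)
```

An odd number of graders per image avoids ties in the majority vote; for screening use, sensitivity (not missing patients who need to see a doctor) is usually the metric tuned first, which matches the deployment concern Nelson raises.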
<urn:uuid:4ac6666b-0fcc-4e08-b2e5-25c751aaca03>
CC-MAIN-2021-04
https://www.crainsnewyork.com/health-care/artificial-intelligence-wont-replace-doctors-google-exec-says
s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703537796.45/warc/CC-MAIN-20210123094754-20210123124754-00441.warc.gz
en
0.977783
319
3.078125
3
News Release, Kansas Geological Survey, Sept. 18, 2009 LAWRENCE--Nearly 6,000 oil and gas fields drilled in Kansas since 1860 are shown in a series of new maps now available from the Kansas Geological Survey based at the University of Kansas. A wall-sized map of the state shows the location of all oil and gas fields as well as counties, county seats, state and federal highways, and township and range boundaries. Twelve other more detailed maps, each covering an area up to 90 by 55 miles, include individual field names. "These maps serve as an official record of oil and gas activity in Kansas," said survey geologist Lynn Watney. "They are updated periodically to reflect enlarged drilling activity and new oil and gas fields that have been discovered." In 2008, 102 new fields were discovered and other fields expanded as 1,690 oil wells and 1,620 gas wells were drilled. The fields cover more than 15.7 million acres in Kansas, about 30% of the surface area of the state. That includes the 3-million-acre Hugoton and underlying Panoma Gas Areas in southwest Kansas, one of the world's largest gas-producing regions, which has yielded more than 21 trillion cubic feet of gas since 1922. Beyond this expansive gas-producing area, the maps accentuate such geologic features as the long-buried Central Kansas Uplift, one of the most densely drilled geologic areas in the world along which fields have produced more than 2 billion barrels of oil. The patchwork of large and small oil and gas fields that dominate eastern Kansas, many producing since the early 1900s, are also depicted. "Anyone interested in oil and gas activity, past and current, such as oil and gas operators, service companies, landowners, and county and local government officials will find these maps useful," Watney said. Currently, the oil and gas fields of Kansas contain more than 55,400 producing oil wells and 24,000 producing gas wells. 
In 2008, these wells yielded approximately 39.6 million barrels of oil and 377 billion cubic feet of gas. Annual production totals by field, county, and lease as well as individual well data are available on the Survey's website at www.kgs.ku.edu. A map viewer showing the Kansas oil and gas fields and individual wells with overlays of cultural features, aerial photographs, and topographic maps can be accessed at http://maps.kgs.ku.edu/oilgas. To open the viewer, users must have one of the following browsers: Internet Explorer 6 or higher, Firefox 2 or higher, or Safari. The statewide oil and gas field map is drawn at a scale of 1:500,000 so that one inch on the map equals about 8 miles of actual distance. The 12 area maps are at a scale of 1:250,000 so that one inch on the map equals about 4 miles of actual distance. Copies of the maps are available from the Kansas Geological Survey at 1930 Constant Ave., Lawrence KS 66047-3724, 785-864-3965, or email@example.com and at 4150 Monroe Street, Wichita KS 67209, 316-943-2343. The cost is $20 for the statewide map and $10 for each area map plus shipping and handling. Inquire about shipping and handling charges and, for Kansas residents, sales tax. More information about the Kansas Geological Survey and its other resources is available at www.kgs.ku.edu.
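The "one inch equals about 8 miles" and "about 4 miles" figures follow directly from the stated 1:500,000 and 1:250,000 scales, since a mile is 63,360 inches:

```python
INCHES_PER_MILE = 63_360  # 5,280 feet per mile x 12 inches per foot

def miles_per_inch(scale_denominator):
    """Ground distance represented by one map inch at a 1:N scale."""
    return scale_denominator / INCHES_PER_MILE

statewide = miles_per_inch(500_000)  # ~7.9 miles, quoted as "about 8 miles"
area_map = miles_per_inch(250_000)   # ~3.9 miles, quoted as "about 4 miles"
```

Both quoted distances are the exact values rounded to the nearest mile.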
<urn:uuid:b9dc9533-c5f7-428e-a772-e146ad5a7471>
CC-MAIN-2013-20
http://www.kgs.ku.edu/General/News/2009/field_maps.html
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368702444272/warc/CC-MAIN-20130516110724-00060-ip-10-60-113-184.ec2.internal.warc.gz
en
0.943511
725
2.84375
3
Title: Introduction to Population Biology
Author: Neal D.
This text adopts an evolutionary perspective on population biology. To help undergraduate students better understand the subject, Dick Neal presents step-by-step spreadsheet simulations of many basic equations that explore the outcomes or predictions of the various models. Proven examples demonstrate how the equations can be applied to biological questions, and problem sets and detailed solutions challenge the student's comprehension. Many real-life examples are also included to help the reader relate the quantitative theory to the natural world.
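The step-by-step spreadsheet simulations the description mentions amount to iterating simple recursions row by row. A minimal Python sketch of one standard population model (discrete logistic growth; the parameter values here are invented for illustration) looks like this:

```python
def logistic_growth(n0, r, k, steps):
    """Iterate N(t+1) = N(t) + r*N(t)*(1 - N(t)/K), spreadsheet-style,
    and return the whole trajectory of population sizes."""
    trajectory = [n0]
    for _ in range(steps):
        n = trajectory[-1]
        trajectory.append(n + r * n * (1 - n / k))
    return trajectory

# Small population (10) growing at intrinsic rate 0.5 toward carrying capacity 100.
pop = logistic_growth(n0=10.0, r=0.5, k=100.0, steps=100)
```

With these values the population rises smoothly toward the carrying capacity K, the same sigmoid outcome a spreadsheet version of the model produces; larger r values (above 2 in this discrete form) instead give the oscillations and chaos explored in population-biology texts.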
<urn:uuid:0818b739-556d-435a-bc28-c1906c4f4365>
CC-MAIN-2020-10
http://lib.mexmat.ru/books/186256
s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875145839.51/warc/CC-MAIN-20200223185153-20200223215153-00129.warc.gz
en
0.760754
172
2.75
3
There's a trend of fortifying foods with the omega-3 DHA. One of the reasons nutrition experts recommend eating fish twice a week is that fish is a good source of docosahexaenoic acid (DHA), an omega-3 fat that has heart-healthy benefits. Preliminary studies suggest that DHA may help boost brain power, too. It makes sense: DHA comprises much of the cell membranes in our brains. And food producers are taking the concept and running with it: they're adding DHA to foods like yogurt, soy milk, and eggs, then marketing them with "smart" slogans. But do these products really maximize mental performance? Some research links higher intakes of DHA with reduced risk of Alzheimer's disease and the cognitive decline that precedes it. In a 2003 study in the Archives of Neurology, people age 65 and up who ate at least one DHA-rich fish meal per week had a 60% reduced risk of Alzheimer's. And growing evidence suggests DHA supplementation during pregnancy and early infancy may result in superior cognitive performance of the child. In June 2007, a randomized clinical trial in the American Journal of Clinical Nutrition revealed that 9-month-old babies of mothers who had eaten DHA-fortified cereal bars (about 200 mg of DHA per day) during the last trimester of their pregnancies demonstrated better problem-solving skills than did babies whose mothers had consumed placebo cereal bars. Eating inherently healthful foods that have been fortified with DHA, along with foods like salmon and tuna, is a good way to increase intake of DHA, and research indicates that boosting DHA intake to about 200 mg per day, about three times what the average American consumes, may have some mental benefits. Apart from possibly boosting brain functioning, DHA has other benefits for your body.
Specifically, DHA can: - Reduce inflammation throughout your body - Maintain the fluidity of your cell membranes - Lower the amount of lipids (fats such as cholesterol and triglycerides) circulating in the bloodstream - Decrease platelet aggregation, preventing excess blood clotting - Inhibit thickening of the arteries by decreasing the endothelial cells' production of a platelet-derived growth factor (the lining of the arteries is composed of endothelial cells) - Increase the activity of another chemical derived from endothelial cells (endothelium-derived nitric oxide), which causes arteries to relax and dilate - Reduce the production of messenger chemicals called cytokines, which are involved in the inflammatory response associated with atherosclerosis - Reduce the risk of becoming obese and improve the body's ability to respond to insulin by stimulating the secretion of leptin, a hormone that helps regulate food intake, body weight, and metabolism and is expressed primarily by adipocytes (fat cells) - Help prevent cancer cell growth Other conditions or symptoms that may benefit from introducing more omega-3 into the diet include: - Cardiovascular disease - Type 2 diabetes - Dry, itchy skin - Brittle hair and nails - Inability to concentrate - Joint pain So, how can you incorporate more DHA into your diet? Try some of these foods that are rich in omega-3s: - Flax seeds - Mustard seeds - Brussels sprouts - Cooked soybeans
<urn:uuid:f8418f47-58aa-4c7d-84c1-1bfdc72a3f33>
CC-MAIN-2020-40
https://blog.lucilleroberts.com/featured/diet-tip-do-omega-3s-improve-brain-function
s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400198213.25/warc/CC-MAIN-20200920125718-20200920155718-00498.warc.gz
en
0.939736
709
2.703125
3
Epistemology of Memory We learn a lot. Friends tell us about their lives. Books tell us about the past. We see the world. We reason and we reflect on our mental lives. As a result we come to know and to form justified beliefs about a range of topics. We also seem to keep these beliefs. How? The natural answer is: by memory. It is not too hard to understand that memory allows us to retain information. It is harder to understand exactly how memory allows us to retain knowledge and reasons for our beliefs. Learning is largely a matter of acquiring reasons for changing views. But how do we keep reasons for the views we keep? The epistemology of memory concerns memory’s role in our having knowledge and justification. This branch of epistemology, unlike nearly all other branches, addresses our having knowledge and justification over time. This article reviews the major epistemic roles that philosophers have assigned to memory. Section 1 surveys the nature of memory and the various memory systems. Some philosophers think the relation knowledge bears to at least one memory system is maximally strong: remembering just is a way of knowing. Section 2 covers this strong relation. Section 3 canvases the main problems that data on human memory pose to theories of justification and the central attempts to solve these problems. Section 4 discusses the historical and contemporary responses to two main skeptical challenges about memory. Table of Contents - The Nature of Memory - Memory and Knowledge - Memory and Justification - Memory and Skepticism - References and Further Reading Traditionally, philosophers have likened memory to a storehouse or a recording device. In the Theaetetus, Plato claims that the mind is analogous to a wax tablet. To perceive is to make an impression on the tablet, leaving behind an exact image or representation of what was perceived. Memory keeps the images and forgetting is a matter of losing them. 
In his Confessions, Augustine says perception deposits images of objects into the storehouse of memory and the process of recalling is the process of retrieving these deposits. Locke and Hume tell much the same story, as do many other philosophers up through the 20th century. On this storehouse view, memory stockpiles experiences and beliefs. Stored items may eventually degrade or become hard to access, but otherwise do not change (see Audi (1994: 420-1), Burge (1997: 321) and McGrath (2007: 13)).

This view is commonsensical. It explains how it is that we are able to represent the past accurately in our thoughts and recollective experiences. It also explains why each of us, over time, tends to believe the same thing occurrently more than once. Yesterday, Maria believed that she went to high school in Santa Fe and she believes that today too.

During the 20th century, psychologists generally abandoned the storehouse view (see, for example, Bartlett (1932) and Schacter (1996, 2002)), though still thinking that memory stores information. They believe human memory processing is much more complicated than the mere depositing of items and later withdrawing them. Memory selectively stores information, expands part of it, combines it with background information and adds data from the context in which the subject later retrieves the information. In other words, memory generally alters significantly what enters it. As a result, recollecting is not the retrieving, but rather the generating of representations of the past. Recollecting actually generates new beliefs about the past. Empirically minded philosophers of memory also have generally abandoned the storehouse view in favor of this generative view (see, for example, Debus (2010) and Michaelian (2011a, 2011b)), but epistemologists have been slower to shift models. Since this article covers the epistemological discussion of memory up to the beginning of the 21st century, the storehouse view will generally be implicit.
Setting aside how exactly memory works, it will aid our epistemological discussion to get clearer on what memory is of or for. At least as far back as Henri Bergson (1896/1994) and Bertrand Russell (1921/1995), philosophers have recognized that there are different kinds of memory, or different memory systems, and 20th century psychological research has confirmed the philosophers' distinctions. Talk of 'memory' simpliciter, as if there were a single, uniform faculty, can obscure these distinctions. Distinct memory systems allow us to do different things and consist of different networks of rule-governed psychological processes.

Two memory systems that are important to distinguish are declarative memory and procedural memory. Declarative memory is memory of information and events. Procedural memory is memory for skills and of how to perform actions. Different parts of the brain house, on the one hand, our data about bicycle riding and our riding experiences and, on the other hand, our acquired talent for riding. This helps explain the familiar phenomenon of finding it easy to do something, yet hard to state instructions for doing it (think of swimming, playing a flute, or tying a shoe), or vice versa.

Declarative memory divides into semantic (or propositional) memory and episodic (or experiential) memory. Semantic memory is memory for propositions and episodic memory is memory for events one has experienced. To see this distinction, consider how these types of memory can come apart. You remember that Plato taught Aristotle, but you do not remember Plato teaching Aristotle. How could you remember it? You were neither there nor did you witness it. I can remember that I was born in a hospital, but (mercifully) I cannot remember being born in a hospital. Semantic memory underlies memories with propositional content; semantic memory claims are often of the form "S remembers that p".
Episodic memory underlies memories with a kind of non-propositional content; episodic memory claims are often of the form "S remembers x".

Semantic memory is by far the most discussed memory system in epistemology. This is understandable, since epistemology centers on states that have propositional content. Epistemologists primarily discuss what it is for S to know that p, or what it is for S to have justification for believing that p, or the like. They focus less on non-propositional knowledge and justification. The epistemology of memory, as a result, has chiefly been the epistemology of semantic memory.

But it is worth noting that neglecting to consider other memory systems can render our epistemological theories vulnerable. Some philosophers have objected to certain theories of propositional knowledge on the grounds that they do not accommodate the role that episodic memory plays in our believing (see, for example, Shanton (2011)). And deeper reflection on procedural memory may advance other debates in epistemology, such as debates concerning knowledge-how. Knowledge-how is a practical knowledge, what you have when you know how to swim or how to tie a shoe. There is debate about whether knowledge-how is reducible to knowledge-that. That is, there is debate about whether practical knowledge can be fully understood in terms of knowing various propositions. But procedural memory seems to ground our knowledge-how and it differs importantly from declarative memory (see Michaelian (2011a)). In fact, psychological research suggests that sophisticated procedural memory can be retained even when semantic memory is crippled (one artist entirely lost his knowledge of language due to brain damage, having to relearn his native tongue altogether, and yet he remembered how to paint! See Schacter (1996: 140-2)). Investigating procedural memory may help reveal that knowledge-how is not reducible to knowledge-that.
2. Memory and Knowledge

Most of the interesting features of memory's relationship with knowledge originate in memory's relationship with justification. Knowledge requires justification. As a result, when justification connects in interesting ways with a topic, knowledge shares those connections. This section covers what is perhaps the only unique connection between memory and knowledge.

Semantic memory is responsible for our remembering that something is true. Much philosophizing in the 20th century tried to state necessary and sufficient conditions for propositions of the form S remembers that p. The theory that dominated that discussion is especially important in epistemology: the epistemic theory of memory (see, for example, Anscombe (1981), Ayer (1956), Audi (2002), Locke (1971), Malcolm (1963), Moon (2013), Owens (2000), Pappas (1980) and Williamson (2000)). Roughly put, the epistemic theory states that remembering is a kind of knowing. If S remembers that p, then S knows that p. Many philosophers go even further: if S remembers that p, then S knows that p because S previously knew that p. You remember that Plato taught Aristotle, and this is because in the past you came to know that Plato taught Aristotle, and because that past knowledge has contributed to your present knowledge. (Incidentally, Plato might even agree; he appears to endorse the epistemic theory of memory in the Theaetetus.)

If the epistemic theory of memory is correct, we might not remember as much as we think we do. Remembering requires knowing and the standards for knowing are not low. In particular, it is generally accepted among philosophers that S knows that p just in case p is true, S believes that p, believing that p is justified for S, and it is not accidental that S's justification for p gives S a true belief that p. Knowledge is a kind of justified true belief, a kind where the truth of the belief is tightly connected to its justification.
When the connection is not tight, the belief might be "Gettiered" or true by sheer accident. If you see someone walking down the street dressed as a postal worker, you might justifiedly believe that your mail will be delivered soon. Suppose the person you see is not in fact a postal worker, but is merely testing out a Halloween costume. And suppose that, nonetheless, the mail will indeed be delivered soon; your regular postal worker is just around the corner, delivering mail to your neighbor. Your belief that the mail will be delivered soon is justified, but true only by coincidence. So you do not know that the mail will be delivered soon.

If remembering requires knowing, then remembering requires everything required for knowing. If any requirement is not met, one does not remember, but at best merely seems to remember. In other words, if you seem to remember that the keys are on the dresser, but they in fact are not there, or you have no reason to believe that they are there, or you simply deny that they are there, then you do not remember that they are there.

Why endorse the epistemic theory of memory? A main reason is that it fits our ordinary uses of "remembers" and "knows" (see Moon (2013)). Consider the following conjunctive claim: Sally remembers that she has visited Rhode Island, but she does not know that she has. This conjunction sounds odd, and one plausible explanation of the oddness is that remembering requires knowing. The second conjunct denies something the first conjunct asserts, so the conjunction seems incoherent.

Here is a closely related reason for endorsing the epistemic theory. Remembering requires knowing just in case all of the following are true: remembering requires believing, remembering requires justification and remembering requires non-accidental truth. And we can argue, one at a time, that remembering does indeed have these requirements.
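The logical skeleton of the epistemic theory can be made explicit. The following is a minimal sketch in Lean, using hypothetical placeholder predicates (not a serious epistemic logic): given the standard analysis on which knowing entails belief, justification, and non-accidental truth, the theory's entailment from remembering to knowing immediately yields the three component entailments.

```lean
-- Hypothetical predicates over an abstract type of propositions; this
-- illustrates only the entailment structure discussed in the text.
variable {P : Type}
variable (Remembers Knows Believes Justified NonAccidentallyTrue : P → Prop)

-- If remembering entails knowing (the epistemic theory), and knowing
-- entails belief, justification, and non-accidental truth (the standard
-- analysis), then remembering entails each of the three components.
theorem remembering_entails_components
    (epistemicTheory : ∀ p, Remembers p → Knows p)
    (knowledgeAnalysis : ∀ p, Knows p →
      Believes p ∧ Justified p ∧ NonAccidentallyTrue p) :
    ∀ p, Remembers p → Believes p ∧ Justified p ∧ NonAccidentallyTrue p :=
  fun p hr => knowledgeAnalysis p (epistemicTheory p hr)
```

Note that only this direction is trivial; the converse half of the "just in case" claim above requires the further assumption that belief, justification, and non-accidental truth jointly suffice for knowledge.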
For example, the best explanation of the oddness of certain conjunctive claims is that remembering requires believing. Consider: Peter remembers that he owes Paul a dollar, but he does not believe that he owes Paul a dollar. At least at first glance, it is hard to make sense of this. How could Peter remember that without believing it?

Andrew Moon (2013) proposes another reason for supposing that remembering requires believing. He claims that if S remembers that p, then S can use p as a premise in certain justifying inferences. But, Moon adds, a premise is usable in justifying inference only if believed. If you do not believe that all tigers are mammals and that all mammals are animals, you cannot use these propositions as premises for reasonably inferring that all tigers are animals. So, remembering requires believing. Similarly, Moon claims that remembering requires justifiedly believing. This is because a premise is usable in justifying inference only if justifiedly believed. And inferences based on remembered propositions are justifying. So, remembering requires justified belief.

However, Moon's argument faces worries. Suppose S remembers that p, but also remembers that all experts deny that p. Can S use p as a premise in any justifying inferences? Perhaps not. If S cannot, then not all we remember is usable as a justifying premise and Moon has not shown remembering requires believing. Or, suppose S justifiedly does not believe p. Couldn't S nonetheless have reason to believe that if she uses p (rather than not-p) in her inferences, she will be more likely to arrive at the truth (if, say, p is a scientific theory that is likely 'false but approximately true')? If so, S might be able to use p as a premise in justifying inference, without believing p. Even if remembering that p allows justified inference from p, justified inference from p would not guarantee belief that p. It would not follow that remembering requires believing.
While the epistemic theory may make sense of certain conjunctive claims, it faces many objections. As noted above, if remembering requires knowing, then remembering requires everything required for knowing: belief, justification and non-accidental truth. Arguments against the epistemic theory have tried to show that remembering is possible even when at least one of these three requirements for knowledge is not met.

Martin and Deutscher (1966) give a well-known example in which there (allegedly) is remembering without believing. A painter paints a detailed farmyard scene. He believes he merely imagined the scene. However, it turns out that the painting captures an actual farmyard scene that the painter saw as a child. Unwittingly, the painter simply reproduced that scene. Martin and Deutscher (1966) add that the painter "did his work by no mere accident," suggesting that the painter's childhood experience caused him to bring to mind the scene (even though he believes that he merely imagined the scene). They conclude that this is a case of remembering without belief. Since knowing requires believing, this would be a case of remembering without knowing.

Martin and Deutscher's conclusion may in a sense be right, yet their example may also not pose any problem for the epistemic theory. We can agree that the painter does not believe that the scene occurred. But exactly what is it the painter is remembering? It is plausible that, if he is indeed remembering something, he is remembering the scene or his visual experience of it. It is less plausible that he is remembering that the scene occurred or remembering the scene as having occurred. In other words, Martin and Deutscher may have given a case of remembering without believing, but the remembering is not semantic. It is episodic or some other sort of memory. If that is correct, then the example is no threat to the epistemic theory of memory, since that theory concerns only semantic memory.
Audi (1995) and Bernecker (2010: 75-7) appear to offer cases of remembering without the sort of justification that knowledge requires. Knowledge requires fairly strong justification, and this justification must not be defeated. If Billy knows that there is a cookie on the table, then Billy has strong reason to believe that it is on the table. Even if he has some reason to doubt that there is a cookie on the table (he may have reason to suspect that his sister shaped some clay to look like a cookie), these doubts do not defeat his justification when he knows that there is a cookie on the table.

Audi and Bernecker offer the following kind of case. Suppose you remember that Plato taught Aristotle. However, your friends go on to play a prank on you and give you convincing reasons to think Plato never taught Aristotle: Plato never existed, and Aristotle had no teacher. You retain your belief, but the prank defeats your justification. Your justification is no longer strong enough for you to know that Plato taught Aristotle. Nonetheless, Audi and Bernecker would think, you remember that Plato taught Aristotle. So, they conclude, remembering does not require justification.

But why suppose that, after the prank, you still remember that Plato taught Aristotle? The answer is unclear. Is it because you still have a true belief, which you acquired in the past, even though you lack overall reason for keeping it? Why would that be sufficient for remembering? Unless an explanation is offered, we may not have reason to count the case as a counterexample to the epistemic theory of memory.

Bernecker (2010) describes a case in which there appears to be remembering without non-accidental truth; that is, the remembered proposition is true by mere accident. You justifiably but incorrectly believe that your friend has borrowed a certain book from the library. Later, your friend indeed checks out that very book. As a result, your belief is true, but by coincidence alone.
Bernecker thinks you still count as remembering that your friend has borrowed the book from the library. If this is a case of remembering an accidentally true proposition, it is a case of remembering without knowing. But is the antecedent here true? Some philosophers (for instance, Moon (2013)) see no reason to suppose that it is. If Bernecker can persuade us that it is in fact true, he will have provided a genuine counterexample to the epistemic theory of memory.

We have seen several attempts to show that remembering does not require knowing. Each attempt faces a similar problem: when knowledge is absent, it is unclear whether semantic remembering is present. Support for the claim that semantic remembering is indeed present has typically involved an appeal to intuitions that some critics apparently lack. But there may be a less controversial way of showing that remembering does not entail knowing. If epistemologists discard the storehouse view of memory and adopt the generative view, they may discover clearer kinds of cases where propositions are remembered, yet not known, or at least not known in the past by the subject.

For debates about the epistemic theory of memory, it matters significantly whether remembering entails knowing. And it matters significantly for another debate in epistemology. Timothy Williamson (2000) has influentially argued that the concept of knowledge is fundamental in our thinking. Having the concept of knowledge crucially allows us to understand quite a bit of psychology and epistemology, and we cannot fully explain knowledge in terms of other psychological or epistemological conditions and relations. In support of this, Williamson (2000: 34) claims that "knowing is the most general factive stative attitude." He means roughly that, if the state of having a certain kind of attitude toward p (like hearing that p or seeing that p) guarantees that p is true, then being in that state guarantees that p is known.
Knowing is the most general factive stative attitude, in that there is no way that S could be in the state of having a truth-guaranteeing attitude toward p without also knowing that p. Now, many philosophers think that remembering that p guarantees that p is true, even if remembering that p does not guarantee belief that p, strong overall justification for believing that p, or the non-accidental truth of p. If they are right and remembering does not require knowing, then Williamson's claim is incorrect. Remembering is factive, but is not knowledge, so knowledge is not the most general factive stative attitude. As a result, his argument would weaken; it is less clear that the concept of knowledge is fundamental to our thinking.

A closely related claim of Williamson's may also be challenged if remembering does not require knowing. Williamson says that all and only evidence is knowledge. More precisely, he says that S knows that p just in case p is included in S's total evidence. It is plausible that if S remembers that p, then S's total evidence includes p. If this is right and if remembering does not require knowing, then not all evidence is knowledge. Some of what we remember is evidence, yet not known.

3. Memory and Justification

For most debates in the epistemology of memory it does not matter whether remembering entails knowing. This is because most debates ultimately concern the connections between memory and epistemic justification. So, even if remembering does not entail knowing, there remains much to discuss. One neutral way of proceeding is to think about cases of apparent remembering: cases in which a subject has a memory experience that p, or recollects that p, or recalls p as known or as true, and so on. Even if the subject is not in fact remembering that p, memory may still justify the subject in believing that p. But how? And in exactly what circumstances?

In debates about epistemic justification, philosophers have construed memory mainly as a source of challenges.
A main way to test a theory of justification is to see if it has the right implication in cases where memory plays some special role. Philosophers apply this test most frequently in the debate about internalism and externalism in epistemology. It is controversial what these views even are, but here is a rough characterization. At a minimum, internalism states that mentally alike individuals are completely alike in their justification (see Conee and Feldman (2001)). Environmental differences by themselves make no difference to justification. So if, for example, you are justified in believing that there are boxes in the basement, that justification would remain even if your neighbor stole all the boxes from the basement. In order for your justification to change, your mental life would have to change: you would need to have a visual experience of an empty basement, or to seem to hear your spouse report that the basement is bare, and so forth. You, and someone mentally just like you, are both justified in believing that there are boxes in the basement, even if only one of you has boxes, even if only one of you lives in a world with basements.

Externalism is the denial of internalism. It states that environmental differences can result in differences in justification, even if they do not result in mental differences. What is actually downstairs may matter. Or it may matter what is downstairs in nearby possible worlds. It may matter whether the particular way in which you would form or keep the belief that there are boxes in the basement tends to get at the truth.

Any theory of justification appears to face some challenge from facts about human memory. Externalists have argued that their view can overcome these challenges better than internalism can (see, for example, Bernecker (2008, 2010), Goldman (1999, 2009, 2011), Greco (2005) and Senor (1993, 2010)).

A fine way to test a theory of justification is to check its implications about particular cases.
A complete theory of justification will have implications about every particular case. The implications of a good theory of justification will also match our intuitive judgments about each case. The implications of a bad theory will not. That is, a good theory will typically imply that ordinary people, in ordinary circumstances, are justified in believing what clearly trustworthy people tell them, in believing what their senses tell them about the world, in believing what seems to them to be the best explanation of what they have to go on, and so on. A bad theory will not have all these implications and will imply that in some of these circumstances believing what is commonsensical is unjustified.

The circumstances of concern in this article all involve memory. The next sections cover particular kinds of circumstances that help test the implications of theories of justification. Think of each kind of circumstance as introducing a problem for these theories. If externalists are correct, and their view indeed has an easier time accommodating our intuitions and thereby solving these problems, then internalism is in bad shape. If, however, internalism can solve these problems easily enough, then it is much better off than many externalists suppose. After introducing the problems we will consider the main responses to them. Of course, these are neither the only problems memory poses to theories of justification, nor the only responses. They are just the ones that have received the most attention.

We are forgetful. We forget email passwords, where we put the car keys, anniversaries, acquaintances' names and more. In some cases this gets us into trouble and in other cases it is harmless. Interestingly, when we do not forget and we keep beliefs about these things, we often nonetheless forget our original evidence for our beliefs. I cannot recall how I learned that Fred's name is "Fred". Did a trustworthy friend tell me? Did Fred himself tell me?
And you know that your email password is "iluvphilosophy," but you cannot remember choosing it all those years ago. That password just seems familiar and using it works.

Forgetting is an epistemologically significant phenomenon. Here is one reason for that. In many cases, it seems that when we keep a belief, yet forget our original evidence for it, the belief remains justified. But this appears to conflict with certain theories of justification. In particular, it apparently conflicts with evidentialism, the view that the justified attitude for a subject toward a proposition is the attitude that fits the subject's evidence (see Conee and Feldman (2008, 2011), Feldman and Conee (1985) and McCain (2014)). Understood broadly, your evidence is what you have to go on: your experiences, thoughts, feelings, background information, and so forth. Evidentialism implies that if you are justified in believing that Fred's name is "Fred", then believing that fits what you have to go on. If you lose crucial evidence, however, believing that Fred's name is "Fred" may no longer fit your evidence.

The Problem of Forgotten Evidence is the problem of accommodating our intuitions about justification in cases where key supporting evidence has been forgotten. There seem to be a lot of cases of this sort; we regularly forget our original evidence while retaining the belief. Gilbert Harman (1986) is typically credited with developing this problem, though he never called it the "Problem of Forgotten Evidence".

Which theories face the Problem of Forgotten Evidence? As mentioned above, evidentialism faces it. Traditionally, evidentialism has been understood to be a form of internalism. As a result, philosophers have understood the Problem of Forgotten Evidence to be a problem only for internalist theories of justification (for instance, Bernecker (2008, 2010)). But there are some evidentialist forms of externalism (see Comesaña (2010) and Goldman (2011)).
These theories do not quite understand evidence to be all that you have to go on. Rather, evidence is understood more narrowly: it is just the stuff you have to go on, such that beliefs formed on its basis tend to be true (where contingent environmental factors partly determine what tends to be true). So, some forms of externalism face the problem; evidence, even on the narrower understanding, can be forgotten.

The problem also challenges any theory of justification that states that S's having evidence for p is necessary for S's being justified in believing that p. Some non-evidentialist externalist theories state roughly this necessary condition (see Alston (1988)). And finally, while the Problem of Forgotten Evidence is stated in terms of forgetting evidence, there is a more general problem here: how do we accommodate our intuitions about justification in cases where whatever it is that originally conferred justification (be it evidence or something else) is forgotten? It could be that most theories of justification face this more general problem, which is discussed prior to Harman (1986) by George Pappas (1980).

Any theory that faces, but cannot solve, the Problem of Forgotten Evidence is doubtful. It is important to consider, then, possible solutions to the problem and to consider which theories have solutions available. Before considering these matters, two related problems about memory and justification should be mentioned.

Unfortunately, we forget more than just our original reasons for believing. We also forget our defeaters, that is, our reasons for not believing, or for doubting our reasons for believing. Sometimes we remember our original reasons, yet forget our defeaters. You remember your original reason for believing that there are boxes in the basement: this morning you saw what looked to you like boxes, in what looked to you like the basement.
But suppose your spouse tells you that the children have since taken all of the boxes out of the basement in order to build a fort outside. Or, your spouse tells you that you did not in fact see boxes in the basement; you saw them in the attic. If you forget what your spouse told you, yet you retain your belief that there are boxes in the basement, you have forgotten a defeater for your belief. On some theories of justification, your belief can still count as justified.

Another kind of forgotten defeat is this. Suppose you never had any reason to believe that there are boxes in the basement, but you believed it anyway. Some theories will count this belief as justified once you forget that you never had any reason for it. Some philosophers find this result unacceptable (see Annis (1980), Goldman (1999, 2009), Greco (2005) and Huemer (1999)).

The Problem of Forgotten Defeat is the problem of accommodating our intuitions about justification in cases where key defeating evidence has been forgotten. Far more theories face the Problem of Forgotten Defeat than face the Problem of Forgotten Evidence, and that is one of the reasons why it is worth distinguishing these problems. Often these problems are conflated; in fact, the former problem has never been given a name before.

The reason that many more theories face the Problem of Forgotten Defeat is this. Just about every theory of justification (even theories that deny that some evidence can play a justifying role) grants that some evidence, understood broadly, can play a defeating role. That is, nearly all theories agree that, even if having evidence cannot by itself justify, having evidence can by itself eliminate justification. Your visual experience of the cookie on the table is part of your evidence that there is a cookie on the table. Non-evidentialists will deny that your evidence on its own justifies believing that there is a cookie on the table.
But typically they would grant that your evidence at least partially defeats any justification you had for believing that there is nothing on the table. So, cases of forgotten defeat challenge both evidentialist and non-evidentialist theories, although philosophers (for example, Annis (1980), Goldman (2001, 2009), Greco (2005) and Huemer (1999)) have presented the problem as though only internalist and evidentialist theories face it.

The final problem centers on beliefs that are merely stored. (Some philosophers instead call these beliefs non-occurrent or standing or dispositional.) These are beliefs that are in no way before the subject's mind. The believer is not thinking about, reasoning from, acting from, or having an experience concerning them or their content. Contrast these with occurrent beliefs, which are before the subject's mind. When you are remembering that Plato taught Aristotle, or are telling others about it, your belief that Plato taught Aristotle is occurrent. At most other times (when you are sleeping, driving, playing chess, washing dishes) that belief is merely stored in memory. (This seems true on the storehouse model of memory, at least; on a generative model you may lack the belief at these other times.) A belief can be both occurrent and stored, just as a song can be both playing and stored on your computer. A merely stored song is stored but not playing. Similarly, a merely stored belief is stored, but not occurrent.

It is commonsensical to attribute countless stored beliefs to people who are in normal circumstances. A few moments ago, you had beliefs about chemistry, the first U.S. President, your childhood, panda bears, the Indian Ocean, the Super Bowl and countless other topics. A few moments ago almost all of these beliefs were not just stored, but were merely stored. And it is plausible that many of these beliefs were justified a few moments ago.
The Problem of Stored Beliefs is the problem of explaining how the merely stored beliefs that seem justified are indeed justified (for simplicity, the discussion below for the most part omits the 'merely'). Thomas Senor (1993) and Alvin Goldman (1999) influentially pose this as a special problem for internalism about epistemic justification (though George Pappas (1980) briefly discusses the more general problem even earlier). Goldman (2011) and Matthew McGrath (2007) target internalist evidentialism in particular.

Our occurrent experiences, thoughts and feelings might justify some of our stored beliefs, but not nearly enough. Our active mental lives, at any given time, simply do not bear on most of our stored beliefs. As a result, internalism appears unable to explain how all justified stored beliefs are justified. The same goes for evidentialism, since our evidence is too constrained to fit all our justified stored beliefs.

Andrew Moon (2012) directs a knowledge version of the Problem of Stored Beliefs toward an evidentialist view concerning knowledge. The evidentialist view is that S knows that p at t only if S believes that p on the basis of evidence at t. We have stored beliefs while we sleep, and we know many of these believed propositions. But while we sleep, these beliefs have no evidential basis, so knowledge does not require an evidential basis. Though Moon's argument concerns just knowledge, we can offer a parallel argument that concerns justified belief. If his original argument is sound, then the parallel argument is too, and so justified belief does not require an evidential basis.

Of course, externalist and non-evidentialist theories also face the Problem of Stored Beliefs. But these theories can avail themselves of non-mental, non-evidential resources, so they appear to have an easier time solving the problem. The next section reviews some of these resources.
It is important to distinguish the Problem of Stored Beliefs and the Problem of Forgotten Evidence. The phenomenon of forgetting is essential to the latter problem, but not to the former. We can store in memory our original evidence for a justified, merely stored belief. So there is no relevant forgotten evidence here, but some questions remain: what evidence could justify the belief? Is it the evidence that is stored in memory? How could it justify when it is not accessed? And the phenomenon of having stored beliefs is not essential to the Problem of Forgotten Evidence, but it is obviously essential to the Problem of Stored Beliefs. We can forget the original evidence for a belief that remains occurrent: if I am distracted and exhausted when we meet at a bustling party and you tell me that you are from Santa Fe, I might form the belief that you are from Santa Fe, but immediately forget that you just told me so. I might even be slightly puzzled as to why I find myself believing that you are from Santa Fe. My belief was justified when formed, but what justifies it a moment later, when I have forgotten my evidence? It is clear, then, that the Problem of Stored Beliefs and the Problem of Forgotten Evidence are dissociable. Consequently it is a mistake to assume that they must share a solution. And it is possibly misleading to introduce the two problems simultaneously with a single example, as some philosophers do, without distinguishing them (see Goldman (2011), for instance). Doing so invites conflation of the problems. The three problems discussed above are challenging. Tackling them has, however, helped inspire novel epistemological theses and observations about memory, some of which are general and may solve multiple problems, while others are more piecemeal and particular. 
This section looks first at the more characteristically evidentialist or internalist responses to the Problem of Forgotten Evidence and the Problem of Stored Beliefs and then at the more ecumenical responses. Replies to the Problem of Forgotten Defeat follow. In answer to the Problem of Forgotten Evidence, Earl Conee and Richard Feldman (2001) point out that in ordinary cases, even when all of S’s original evidence for p is lost, S still has a host of evidence that could justify her in believing that p. This evidence could, for example, be rooted in induction, background information about memory, or conscious recollection. You have forgotten why you originally believed that Fred’s name is “Fred”. But you have reason to believe that you tend to form beliefs with good reason, so you have evidence that you originally had good reason for your belief and this supports the belief. And you have reason to believe that your memory is fairly accurate. Since memory is supplying your belief about Fred’s name, you have justifying evidence for it. And if you are consciously recollecting that Fred’s name is “Fred”, then your experience is displaying that proposition as true, just as perceptual experiences display propositions about the external world as true. So, evidentialists of any stripe (internalist or externalist) can claim that there generally is justifying evidence in the central cases that motivate the Problem of Forgotten Evidence. However, we do not usually have all of this evidence for a belief that is merely stored. A merely stored belief, by stipulation, is not being consciously recollected. Hence, the Problem of Stored Beliefs remains. Feldman (1988) and Conee and Feldman (2001) propose that justified stored beliefs can have “stored justifications”; S can recall some justifying evidence for p when S has a justified stored belief that p. S’s evidence for p is stored (compare McCain (2014)).
On this view, justified stored beliefs typically are not justified in the most fundamental sense, the sense in which justified occurrent beliefs typically are. When justified in the most fundamental sense, not all of the justifiers are stored, but rather some justifiers are occurrent: experiences, inferences and so on. If it is plausible that justified stored beliefs have the most fundamental kind of justification, then Conee and Feldman’s proposal will not solve the Problem of Stored Beliefs. On a closely related proposal, the evidence and justifiers are occurrent. Call the proposal dispositionalism: dispositions of the right sort can justify (see Audi (1995), Conee and Feldman (2011), and Ginet (1975)). These dispositions can be memorial. Maria is disposed to recollect that she went to high school in Santa Fe. With the right cue, in ordinary circumstances, she will recollect that fact about her past. On dispositionalism, this disposition justifies her in believing that she went to high school in Santa Fe. The disposition is only occasionally manifest–she only occasionally thinks about where she went to high school–but she nonetheless has the disposition right now; it is not simply stored. As a result, the disposition can epistemically justify in the most fundamental sense right now. In some ways, dispositionalism parallels virtue ethics, which claims among other things that a virtue is a disposition that morally justifies certain actions, even when the disposition is not manifest. Dispositionalism offers a promising solution to the Problem of Stored Beliefs. It also could solve the Problem of Forgotten Evidence: typically, in cases where S has forgotten her original evidence for her justified belief that p, S still has a disposition to recall p as known or as true. If this disposition justifies believing that p for her, then the Problem of Forgotten Evidence may disappear. However, dispositionalism still needs crucial development.
More must be said about exactly which dispositions justify believing exactly which propositions and how; and it would be good to have a principled way of determining which dispositions a given subject has, in order to see whether dispositionalism attributes to the subject justification for believing just the right propositions. Conee and Feldman (2001) offer starting material for a final internalist, evidentialist-friendly solution to the Problem of Stored Beliefs. If we have stored beliefs, then these beliefs can justify other beliefs, including other stored beliefs. We can direct this proposal at the Problem of Forgotten Evidence too: stored beliefs can justify a belief for which all original evidence has been forgotten. A worry for this proposal is that we may not have enough stored beliefs to solve the two problems. We may not, in other words, have enough stored beliefs that could justify all justified stored beliefs and all beliefs that lack their original evidence. Goldman (2009) voices another worry: what ultimately justifies any stored belief? If a belief that p is justified by a stored belief that q, the latter belief should be justified too. It is hard to see how an unjustified belief can by itself justify another. But what justifies the belief that q? Does a stored belief that r justify it? If so, what justifies this stored belief that r? And so on. A moderate form of coherentism could address Goldman’s worry: if S’s belief that p coheres with certain of S’s other beliefs, then S’s belief that p is justified. Coherence among stored beliefs can justify them. And coherence can justify belief in the face of forgotten evidence. Beliefs can have a special, mutually supporting relationship. However, coherentism has its costs; see Coherentism. But perhaps it could, if suitably defended, substantiate Conee and Feldman’s proposal.
Still, if any stored belief is justifiedly based on something other than beliefs, then coherentism, even if correct, does not fully solve the Problem of Stored Beliefs (compare Moon 2012: 316-7). The remaining responses to the problems are also available to externalists and non-evidentialists. A view nearly universally endorsed by discussants of the problems is what we might call preservationism (see Annis (1980), Bernecker (2008), Burge (1997), Goldman (2009, 2011), Naylor (2012), Owens (2000), Pappas (1980) and Senor (2010); some philosophers use ‘preservationism’ to refer to the view called ‘anti-generativism’ below). Roughly put, memory preserves the justification of the beliefs it preserves. More precisely, if S is justified in believing that p at t1, and retains in memory a belief that p until t2, then at t2 S’s belief that p is prima facie justified. (The ‘prima facie’ here allows that the belief may not be justified overall if there are defeaters for it.) Your belief that Plato taught Aristotle was justified when you formed it: a professor or some other clearly credible source told you that Plato taught Aristotle. And you have kept that belief ever since. So, your belief has ever since been justified. Preservationism seems to provide a simple solution to the Problem of Stored Beliefs. Regardless of whether a belief is stored rather than occurrent, it can retain its justification as long as memory preserves it. A stored belief can inherit justification from the past and this appears to solve the problem. And forgetting evidence does not block the inheritance. So, preservationism appears to solve the Problem of Forgotten Evidence. In fact, a main motivation for preservationism is that it seems to solve these problems at no cost. But is preservationism true? Externalists think that it is true only if certain features that are external to the mind obtain. Process reliabilists, for example, think that preservationism is true just in case memory is reliable.
Process reliabilism is roughly the view that justification of a belief depends entirely on the reliability of the process that forms or retains the belief. According to preservationism, beliefs retain justification by being retained in memory. As a result, reliabilists think memory must be reliable in order for preservationism to be true. Since it is contingent whether memory is reliable, on reliabilism it is contingent whether preservationism is true. Reliabilists who appeal to preservationism in order to solve the Problem of Stored Beliefs and the Problem of Forgotten Evidence bear the burden of showing that memory is reliable. And, if the storehouse view of memory is indeed incorrect, then preservationism appears vacuous unless modified. If memory typically alters the information entering it, it is hard to see how memory could preserve many beliefs over time–the beliefs would seem to be destroyed once their exact content is no longer represented in memory. Preservationists who reject the storehouse view must explain one of two things: first, how memory can nonetheless tend to preserve beliefs, even though it tends to modify the content that enters it; or second, how something other than memory preserves beliefs. Pursuing either option may require developing a novel theory of belief. Now for replies to the Problem of Forgotten Defeat: Richard Feldman (2005) and Matthew McGrath (2007) in a sense deny that this problem exists. When a defeater is forgotten, it is no longer relevant to what one is justified in believing. Once you forget that your spouse told you that the children removed all of the boxes from the basement, your spouse’s testimony ceases to defeat; you are overall justified in believing that there are boxes in the basement, as long as you still have some support for believing that. Feldman and McGrath press their point: some attitude toward the proposition that there are boxes in the basement must be justified for you. But which?
Abandoning belief in the proposition seems unjustified, since you no longer have reason to abandon your belief. And suspending judgment in the proposition seems unjustified, since you still have some justifying support for believing it–for example, a vivid recollection of what looked like boxes in what looked like the basement. The only potentially justified attitude remaining for you is belief. It is hard to see a competing option. If that is correct, then forgetting defeaters poses no problem. Nothing that is forgotten can defeat. (Of course, something other than the original defeater can still defeat. If, for example, you recall that you have forgotten a defeater for p, but cannot recall what it was, then arguably you still have a defeater for p: you have reason to believe that you had reason to doubt p. Having reason to believe this is itself reason to doubt p.) Why, then, suppose that there even is a Problem of Forgotten Defeat? Why suppose that forgotten defeaters remain at all relevant to justification? The main reason is this: many philosophers think that memory, unlike perception, testimony, rational intuition and reasoning, is not a generative source of justification. Memory cannot create or strengthen justification. Rather, memory at most preserves justification that has been acquired from some source (such as perception, testimony and so on). Call this thesis about memory anti-generativism. It is a “garbage in, garbage out” view of justification and memory. An unjustified belief that enters memory remains unjustified, unless new reasons for the belief are acquired from some faculty other than memory. Anti-generativism is traditional and popular (see Annis (1980), Goldman (2009, 2011), Owens (2000) and Senor (2007)), and so are variants of the view that concern knowledge or warrant (see Audi (1997), Burge (1997), Dummett (1994) and Plantinga (1993)).
With respect to knowledge, many philosophers think memory and testimony are alike in this way: coming to know that p via testimony requires that the testifier knows that p; testimony does not generate knowledge from non-knowledge. Sometimes anti-generativism is called “preservationism”, but this is infelicitous. Anti-generativism primarily states a limit on memory: memory does not generate justification, or knowledge, or anything similar. The theory does not centrally concern memory’s power to preserve anything (unlike the theory that is called “preservationism” above, which does centrally concern memory’s preservative power). If anti-generativism is plausible, then the theories of justification that are compatible with it may avoid the Problem of Forgotten Defeat and theories that are incompatible with it may on that account face the Problem of Forgotten Defeat. However, generativism, the view that memory can generate justification, is increasingly common (see Audi (2002), Bernecker (2010), Lackey (2005, 2007), Huemer (1999), Michaelian (2011a) and Owens (1996)). Arguments for this view reveal that it comes in several forms. Jennifer Lackey (2005, 2007) and Sven Bernecker (2010) join ranks with Feldman and McGrath in thinking that memory generates justification in cases of forgotten defeat. But notice that the justification generated in these cases is overall, not prima facie. That is, since memory is responsible for the loss of a defeater, memory results in a balance of justification that favors belief. This is not yet to say that memory is creating new reasons for belief. One generativist view, then, is that memory can generate overall justification, even if it cannot generate prima facie justification. Lackey offers other support for generativism: a subject’s memory can store information that the subject never paid attention to in the past. If the subject recalls and attends to the information afterward, the subject can use it to form justified belief.
The basis of this belief would be memory. Lackey builds her support with an example. Suppose that Clifford has his mind on many things while he is driving. Later, his friend Phoebe asks him whether construction on the freeway has begun. Clifford then recalls seeing construction on his recent drive and only then forms a belief that construction on the freeway has begun. His belief is justified and memory is its source. Generativism follows. Still, as Bernecker (2010) observes, if Lackey is correct, she has only supported the generativist view that memory can generate doxastic justification. She has not shown that memory can generate propositional justification. In other words, at best Lackey demonstrates that memory can generate a reasonable belief, not that memory can generate new reasons for believing. Memory merely based a belief on a reason that perception generated. Huemer (1999) and Michaelian (2011a) endorse the stronger thesis that memory can generate new reasons for believing. Huemer thinks that S’s seeming to remember that p can produce reason for S to form a belief that p, regardless of whether S already had reason to believe that p. And Michaelian attacks the storehouse view of memory, arguing that memory can generate new content and new belief in that content. The belief can have justification when formed as long as certain external conditions are in place (and Michaelian thinks they are). Consequently, sometimes, when memory generates justified belief, it generates justification for believing. Since anti-generativism is controversial, the severity of the Problem of Forgotten Defeat is unclear. Interestingly, although it is primarily externalists who find the problem to be severe, the mix of internalist and externalist advocates of generativism is fairly even. So far the surveyed discussion has assumed that memory plays some role in our actually having justification and knowledge, and the discussants have simply debated the margins of that role.
But many early and mid-20th century epistemologists worried about this assumption. Why believe memory has an important, or even any, epistemic role? Since this question may invite skepticism, call it a skeptical question for simplicity. Satisfactorily answering this sort of skeptical question about memory is a fundamental epistemological problem. In fact, according to Richard Fumerton (1985), answering it is the most fundamental epistemological problem. If memory has no epistemic role, then we have no reason to believe just about anything we ever learned, or think we learned, at any time in the past. What is more, memory appears to be involved not just in our retaining what we have learned, but in our very learning. When Chloe tells you “I am changing the oil in my car today,” you use memory even to understand what she is saying–some memory system is responsible for your applying the concepts that make “changing” and “oil” and “car” (and so on) intelligible to you. And you use memory not just to grasp the meaning of words, but also of sentences. Memory is holding fixed in your mind the beginning of Chloe’s statement when she finally says the word “today,” allowing your mind to string concepts together in a way that yields in you a mental representation of what she has testified. Without memory there is no understanding of what is testified. If memory has no epistemic role, then it is hard to see how we could even learn from testimony in the present. Memory seems similarly involved in intuition, reasoning, introspection and perception. Accordingly, it is hard to see how we could learn from those sources if memory plays no epistemic role. Philosophers have sharpened the general skeptical question about memory into more challenging related sub-questions. This section discusses responses to two of these sub-questions. Answering them is not easy, since they introduce foundational problems that do not arise with other kinds of skepticism. 
Yet, oddly, philosophers exploring contemporary skepticism have mostly neglected the issue of memory skepticism. Halfway through the 20th century, C. I. Lewis (1946) thought the issue was so significant that the level of silence on it even then was “a bit of a scandal.” And the times have not changed. Consider the following thesis:

(MR) Memory is reliable.

MR states that memory tends to get things right and that it is generally accurate. It does not state that memory is perfectly accurate. A first skeptical question is: why believe MR? If there is no reason to believe MR, then memory may not provide (either by preserving or by generating) any support for what it represents as true in a given case. If you have no sense as to whether, say, a particular political blog tends to get things right, then there may be no sense in believing anything on the mere basis of the blog. Don Locke (1971) thinks that if we have no reason to believe MR, we have no knowledge via memory at all. If he is correct, it may be critical that we identify support for MR. It is not at all clear that Locke is correct. But even if he is, our troubles are not as severe as they might seem. Suppose we have little to say positively in answer to the first skeptical question. We may still have reason to believe that memory is often correct (correct, say, around 40% of the time), and even that in the kinds of cases we care about it is usually correct. Further, having no reason to believe MR is not the same as having reason to believe MR is false. Having no reason to believe MR may just require us to be neutral about it. Granted, process reliabilists, who must be neutral about MR, may be in trouble. They may have to suspend judgment about whether any given belief that memory preserves is justified, since they must suspend judgment about whether such a belief is preserved by a reliable process. But on other theories of justification, perhaps we remain reasonable in thinking that memory justifies.
Still, it would be somewhat troubling if there were no reasons to believe MR. It would be strange for us to rely so heavily in our reasoning and behavior on something we have no reason to believe is typically accurate. It is worth considering MR’s status for us. Locke considers the following line of support: doubting MR is self-defeating. To raise doubts about MR requires the use of memory. Raising relevant doubts requires citing examples in which memory has erred. But memory alone can supply these examples. If these examples impugn MR, it is because memory supports believing something: the fact that it has erred in certain cases. So, the mere attempt to undermine memory itself vindicates memory in a way. This result is not clearly worth celebrating. We have merely established that memory supports believing that it itself fails here and there. And even if this result is established, it yields no support for MR. Memory may sometimes support belief, but it could nonetheless typically fail to support belief and could be unreliable. But the self-defeat consideration reveals something unique about memory skepticism: using or contemplating arguments for or against it requires the use of the very faculty being scrutinized. We cannot help but use memory in order to explore memory skepticism. Nothing parallel is true about, say, external-world skepticism. It is not the case that thinking about or offering an argument for or against it must occur via our perceiving something external. Thus, memory skepticism is uniquely thorny: addressing it unfailingly involves some kind of circularity. Thomas Senor (2010) claims that there is no non-circular “demonstration” of MR. If this is true, any demonstration of MR may be suspect. Richard Brandt (1955) offers an alternative line of support: MR is the best, and only, explanation of our data. What are our data?
For Brandt it is our present experience and our having a host of cohering beliefs about the past and about science. According to Brandt (1955: 93), we have these beliefs because our brains have over time interacted with the world in a truth-conducive way and “the only acceptable theory is one which asserts that a large proportion of our memory beliefs are veridical. No alternative to such a theory has been proposed; nor can one imagine what one would be like.” If MR is the only explanation of our data, MR is by default the best explanation and it may thereby be credible. Contra Senor and others, Brandt thinks this support for MR is non-circular, since it does not take for granted that any recollections are accurate. But there is reason to think Brandt’s support for MR is indeed circular. To support MR, Brandt makes an explanatory inference based on our data. But why suppose we have the very data he thinks we have–why suppose we have a host of cohering beliefs? That we have them is not wholly manifest to us at one time. We must use memory in order to appreciate it. We think about our various beliefs, how they fit together, how snug that fit is and we make an inference about how our beliefs cohere. This thinking and inferring is not instantaneous. It unfolds over time and (we presume) memory holds fixed and supports the parts that (we also presume) have already unfolded. So there is a kind of circularity: memory is used in establishing the data MR allegedly explains (see BonJour (2010: 169-171) and Plantinga (1993: 61-4)). If this circularity is vicious, Brandt’s argument yields no new reason for believing MR. Another objection to Brandt’s argument is that MR is not the only explanation of our data. Bertrand Russell (1921/1995) provides a famous rival hypothesis: we and the world came to exist only five minutes ago and it merely appears that everything is much older. In each of us is a package of cohering beliefs about the past.
And we find rings in trees, rust on cars and ruins in Rome. All of this is misleading. Everything is new. As unpalatable as this hypothesis is, it is not easy to disprove. At any rate, it is a rival explanation of our data. Oddly, Brandt actually considers a Russellian hypothesis, but dismisses it as a fantasy, wholly lacking “evidential foundation”. But there is no need for evidence for Russell’s hypothesis, beyond this: it fits the data. Since it does, MR has an explanatory rival. We cannot assume MR is the best explanation. We must do the hard work of showing it is better than Russell’s hypothesis. Our target has shifted from defending MR to defending something more basic. Memory could be massively misleading. For any view about the past, why suppose it is even approximately right? That is our second skeptical question. The first skeptical question challenges our view about how memory performs overall. It still allows that memory provides reason to accept some appearances about the past. The second question goes further, probing each appearance. It challenges our view about memory’s performance in each given case. Answering this question well is especially demanding. Senor (2010) claims that most philosophers agree that Russell’s hypothesis has not been refuted. Regardless of whether Senor and these philosophers are correct, note that the demand here is greater than just refuting the particular hypothesis that Russell offered. Russell’s exact hypothesis may be bad: it seems ad hoc and uninformative. The present demand is to show why all hypotheses like Russell’s are inferior. One hypothesis similar to his is that the world and its inhabitants all popped into existence six minutes ago. Another is that the world is as old as it seems, but just its inhabitants popped into existence five minutes ago. 
In order to reasonably hold our commonsensical beliefs about the past, we must have reason to reject each skeptical hypothesis that is incompatible with the truth of our commonsensical beliefs. Moreover, we must have reason to think that what we commonsensically believe explains our data better than the entire disjunction of skeptical hypotheses does. Russell proposes a pragmatic answer to the second skeptical question. Taking memory appearances at face value is extremely practical. We cannot help but do it and it works. Skepticism, therefore, poses no genuine threat. This answer appeals to something like the practical rationality of believing that the past really is how it seems. But this answer tells us nothing about the epistemic rationality of believing anything about the past. Even if Russell is right, we do not have on that account a key ingredient for knowledge of the past: epistemic justification. One family of replies to the second skeptical question uses transcendental arguments to reject Russell’s hypothesis. A transcendental argument is of this form: A obtains; A is impossible in the absence of B; (therefore) B obtains. Norman Malcolm (1963) and Sydney Shoemaker (1967) offer the following transcendental argument: we know how to make past-tense statements; this competence requires that most of these statements are true; (therefore) most of these statements are true. Since these statements express our beliefs about the past, most of these beliefs are true. Not only does MR follow, but it also follows that the past tends to fit our expressed views about it. The general idea behind this argument is that one’s having skill at using a kind of statement is incompatible with one’s systematically misusing it. If Elmer sincerely refers to toasters, clouds and orange things as “rabbits,” then Elmer must not be using that word to talk about rabbits. There must be an alternative way of understanding his “rabbits” expressions, such that they tend to be true.
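The transcendental form described above is, as a matter of propositional logic, classically valid once “A is impossible in the absence of B” is read as “if not-B, then not-A”. A minimal formal sketch in Lean (our own rendering, offered only as an illustration of the schema’s validity, not as anything from the source):

```lean
-- Transcendental argument form: A obtains; A is impossible in the absence
-- of B (read here as: if not-B, then not-A); therefore B obtains.
-- The proof is classical: suppose not-B; then not-A; contradiction with A.
theorem transcendental {A B : Prop} (hA : A) (h : ¬B → ¬A) : B :=
  Classical.byContradiction (fun hB => h hB hA)
```

Since the form itself is valid, the philosophical disputes below all concern the premises: whether A really obtains, and whether A really is impossible without B.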
Now, we are competent at making statements about the past. It follows that most of these statements are true and so are our corresponding beliefs. Don Locke (1971: 135-7) offers a transcendental argument for MR (compare Lewis (1946)), which may also answer the second skeptical question. The fact that we have knowledge at all and that we inquire requires that we have memory knowledge. And we in fact know things and we in fact inquire. (In support of this claim we might note that it seems readily proven: are we inquiring? Yes!) So there is memory knowledge. And, as noted earlier, Locke thinks that if there is memory knowledge, then MR is true. So, he concludes that MR is true. If Locke is right, it follows that Russell’s hypothesis is false. The world did not come into existence five minutes ago. Many hypotheses like Russell’s will also be false. This may answer the second skeptical question. Our reason to suppose that a given belief about the past is true is that it belongs to a class of beliefs that tend to be correct. Some philosophers doubt that transcendental arguments can rationally support any anti-skeptical conclusions. But even if some can, the transcendental arguments covered here are questionable. Malcolm and Shoemaker take it as a datum that we know how to make past-tense statements. But why accept the datum? In answer, we can at best cite the kinds of statements we can recall ourselves competently making. And why suppose that the past resembles those recollections? If we popped into existence five minutes ago, those recollections are misleading. If the transcendental argument simply assumes that the recollections are accurate, then the argument fails to generate support for believing that they are accurate. Similarly, in reply to Locke: why suppose we inquire? You might blush with embarrassment and note that to ask that question is to inquire. But why suppose a question has been asked? Observing inquiry may rely on memory.
Perhaps we cannot even think at all or observe a case of inquiry all in one moment. Perhaps thought and observation are always extended in time and we may need to use memory in order to observe the temporal extension of anything. As noted, a transcendental argument is of the form: A obtains; A is impossible in the absence of B; (therefore) B obtains. The replies to the transcendental arguments here question the first premise. To support anything as data, we may need to use memory. If that is correct, it may then seem viciously circular to use this data in order to support either memory or beliefs about the past. However, one might think that this reveals that memory skepticism is indeed self-defeating. Merely raising a skeptical challenge to MR or to views about the past uses some data about memory or the past. This data may include the fact that observing inquiry requires memory, or that Russell’s hypothesis is compatible with one’s having a given recollection. But if we need to use memory in order to support any data, then raising a skeptical challenge about memory uses memory. So, anyone who offers such a challenge undermines her own position. If memory truly supported nothing, skepticism could have no support. This line of reasoning notes a conflict between an activity (supporting memory skepticism) and a theory (memory skepticism). Unfortunately, even if there is a conflict, the theory may still be correct (compare Bernecker (2008: 130-1) and Fumerton (1995: 52)). Why believe memory skepticism is false? Even if supporting memory skepticism is self-defeating, it may still be true. And we may still be justified in believing memory skepticism, but simply unable to demonstrate its support. Finally, Sven Bernecker (2008: 131-3) attempts to “disarm” Russell’s hypothesis and skepticism about the past by taking a relevant alternatives approach (see Contextualism in Epistemology).
Bernecker thinks memory can provide us with knowledge about the past, even if we do not know that there is a past and even if we do not know that Russell’s hypothesis is false. Here is why: a table can be flat, even if it appears bumpy under a microscope. The table is not relevantly bumpy, so it counts as flat. Bernecker thinks knowledge is similar to flatness. S’s knowing that p does not require that S is able to know every alternative to p to be false. S might know that p and yet be unable to rule out some situation in which not-p is true. All S must be able to rule out are the relevant alternatives to p–the relevant situations in which not-p is true. For example, in order for you to know that Plato taught Aristotle, you must be able to rule out the relevant alternatives to that fact. One relevant alternative is that Socrates alone taught Aristotle. And you can rule this out: you have reason to believe that Socrates swigged his poisoned hemlock years before Aristotle’s birth. Although Russell’s hypothesis is an alternative to what you believe about the past, and one you may be unable to rule out, Bernecker thinks it is ordinarily an irrelevant alternative. So memory can provide knowledge of the past even when you cannot rule out Russell’s hypothesis. Bernecker’s reply faces difficult objections. It is not obvious that knowledge is sufficiently like flatness. Supposing it is, it is unclear that Russell’s hypothesis is ordinarily irrelevant. And, supposing it is, why agree that we can rule out the alternatives that are relevant? What enables you to rule out that Socrates alone taught Aristotle–evidence from memory? The strength of this evidence should be in question if Russell’s hypothesis is not yet ruled out. But, supposing we can rule out the relevant alternatives, Bernecker’s reply may leave us unsatisfied.
At best it secures for us bits of knowledge about the past, yet it does not secure knowledge that the past exists or knowledge that Russell’s hypothesis is false. The latter two results seem simply to concede victory to an unpalatable skepticism. And they pair oddly with the former result–how could we simultaneously have knowledge about the past from memory and yet lack knowledge from memory that the past exists? Whatever ultimately explains the one suggests the other is false. It is clear that satisfactorily answering the skeptical questions is not easy. There have been other attempts to answer them, but none more promising or developed than those mentioned here (for additional discussion, see Locke (1971) and Bernecker (2008)). Since memory skepticism threatens most of our knowledge and justification, failing to rule it out would be uncomfortable. Still, for two reasons it would be premature to despair. First, even if we cannot show that memory skepticism is false, it is unclear what is thereby threatened or what we are thereby required to believe, if anything. This is because even if memory skepticism is true, it is unclear what we can conclude (compare BonJour (2010: 170-1)). If any justifying inference or data about the past must be supported by memory, and memory cannot provide that support, then what are we justified in inferring from the truth of memory skepticism? It is hard to say. Second, we should not confuse our failing to disprove memory skepticism with our having no reason to believe anything about the past or having reason to deny MR. It could very well be that memory is reliable and justifying, but that we simply have a hard time showing it.
References and Further Reading
- Alston, William P. “An Internalist Externalism.” Synthese 74.3 (1988): 265–283.
- Offers an externalist theory of justification that respects key epistemic roles of mental phenomena.
- Annis, David B. “Memory and Justification.” Philosophy and Phenomenological Research 40.3 (1980): 324–333.
- An article weighing in on several main issues concerning memory and justification, including preservationism, anti-generativism and the Problem of Forgotten Defeat.
- Anscombe, G. E. M. Collected Philosophical Papers, Vol. 2: Metaphysics and the Philosophy of Mind. University of Minnesota Press, 1981.
- The chapter “Memory, ‘Experience’, and Causation” discusses the relationship between remembering and knowledge.
- Audi, Robert. “Dispositional Beliefs and Dispositions to Believe.” Noûs 28.4 (1994): 416–434.
- Distinguishes beliefs that are stored (dispositional beliefs) from inclinations to form beliefs. Likens memory to a computer.
- Audi, Robert. “Memorial Justification.” Philosophical Topics 23.1 (1995): 31–45.
- Discusses from an internalist perspective a number of topics concerning memory.
- Audi, Robert. “The Place of Testimony in the Fabric of Knowledge and Justification.” American Philosophical Quarterly 34.4 (1997): 405–422.
- Discusses a version of preservationism about knowledge and memory’s similarity to testimony in epistemology.
- Audi, Robert. “The Sources of Knowledge.” The Oxford Handbook of Epistemology. Ed. Paul K. Moser. Oxford University Press, 2002. 71–94.
- Defends generativism and an epistemic theory of memory.
- Augustine. Confessions. Ed. H. Chadwick. Oxford University Press, 1991.
- In Book X, describes memory in terms of a storehouse.
- Ayer, A. J. The Problem of Knowledge. Vol. 8. Harmondsworth, 1956.
- Chapter 4 endorses the epistemic theory of memory and other connections between memory and knowledge.
- Bartlett, Frederic. Remembering: A Study in Experimental and Social Psychology. Cambridge University Press, 1932.
- Commonly thought to be the first work in psychology to present memory as generative.
- Bergson, Henri. Matter and Memory. Trans. N.M. Paul and W.S. Palmer. Zone Books, 1896/1994.
- Early distinction of memory systems by a philosopher.
- Bernecker, Sven. Memory: A Philosophical Study. Oxford University Press, 2010.
- One of the only recent philosophical monographs on memory, this book develops themes from Bernecker’s earlier work, defends generativism and attacks the epistemic theory of memory.
- Bernecker, Sven. The Metaphysics of Memory. Springer, 2008.
- Thorough philosophical discussion of many metaphysical and some epistemological issues bearing on memory, including skepticism about memory and problems for internalism.
- BonJour, Laurence. Epistemology: Classic Problems and Contemporary Responses. Rowman & Littlefield Publishers, Inc., 2010.
- Written for a general philosophical audience, chapter 8 introduces many problems in the epistemology of memory.
- Brandt, Richard B. “The Epistemological Status of Memory Beliefs.” Philosophical Review 64.1 (1955): 78–95.
- Provides an inference to the best explanation reply to memory skepticism.
- Burge, Tyler. “Interlocution, Perception, and Memory.” Philosophical Studies 86.1 (1997): 21–47.
- Endorses preservationism and anti-generativism, while alleging parallels between memory and testimony.
- Comesaña, Juan. “Evidentialist Reliabilism.” Noûs 44.4 (2010): 571–600.
- States an evidentialist version of process reliabilism.
- Conee, Earl, and Richard Feldman. “Evidence.” Epistemology: New Essays. Ed. Quentin Smith. Oxford University Press, 2008.
- The best-known defenders of evidentialism develop and clarify several aspects of their theory.
- Conee, Earl, and Richard Feldman. “Internalism Defended.” American Philosophical Quarterly 38.1 (2001): 1–18.
- Defends internalism from the Problem of Forgotten Evidence, the Problem of Stored Beliefs and other objections.
- Conee, Earl, and Richard Feldman. “Replies.” Evidentialism and Its Discontents. Ed. Trent Dougherty. Oxford University Press, 2011.
- Proposes a dispositionalist solution to the Problem of Stored Beliefs.
- Debus, Dorothea. “Accounting for Epistemic Relevance: A New Problem for the Causal Theory of Memory.” American Philosophical Quarterly 47.1 (2010): 17–29.
- Considers generative aspects of memory, while criticizing Martin and Deutscher’s rival to the epistemic theory of memory.
- Dummett, Michael. “Testimony and Memory.” Knowing From Words. Ed. A. Chakrabarti and B. K. Matilal. Kluwer, 1994. 251–272.
- Likens memory to testimony and endorses anti-generativism.
- Feldman, Richard. “Having Evidence.” Philosophical Analysis. Ed. D. F. Austin. Kluwer Academic Publishers, 1988. 83–104.
- Proposes that justified stored beliefs usually only have stored justifications.
- Feldman, Richard. “Justification Is Internal.” Contemporary Debates in Epistemology. Ed. Matthias Steup and Ernest Sosa. Blackwell, 2005. 270–84.
- Defends internalism from the Problem of Forgotten Defeat.
- Feldman, Richard, and Earl Conee. “Evidentialism.” Philosophical Studies 48.1 (1985): 15–34.
- The most influential paper to state and advocate evidentialism.
- Fumerton, Richard A. Metaepistemology and Skepticism. Rowman & Littlefield, 1995.
- Brings out the difficulty of satisfactorily rejecting memory skepticism.
- Fumerton, Richard A. Metaphysical and Epistemological Problems of Perception. Lincoln: University of Nebraska Press, 1985.
- Highlights the importance of the epistemology of memory to epistemology in general.
- Ginet, Carl. Knowledge, Perception, and Memory. Vol. 26. D. Reidel Pub. Co., 1975.
- Perhaps the first contemporary statement of dispositionalism.
- Goldman, Alvin I. “Internalism, Externalism, and the Architecture of Justification.” Journal of Philosophy 106.6 (2009): 309–338.
- Argues for externalism and against internalism in light of the epistemology of memory.
- Goldman, Alvin I. “Internalism Exposed.” Journal of Philosophy 96.6 (1999): 271–293.
- An influential criticism of internalism that has drawn attention to the Problem of Forgotten Evidence and the Problem of Stored Beliefs.
- Goldman, Alvin I. “Toward a Synthesis of Reliabilism and Evidentialism?
Or: Evidentialism’s Troubles, Reliabilism’s Rescue Package.” Evidentialism and Its Discontents. Ed. Trent Dougherty. Oxford University Press, 2011.
- Continues to press several objections to internalism rooted in the epistemology of memory and sketches a version of reliabilism that incorporates evidentialist insights.
- Greco, John. “Justification Is Not Internal.” Contemporary Debates in Epistemology. Ed. Matthias Steup and Ernest Sosa. Blackwell, 2005. 257–269.
- Attacks internalism in light of the Problem of Forgotten Defeat, among other problems.
- Harman, Gilbert. Change in View. MIT Press, 1986.
- Chapter 4 responds to the Problem of Forgotten Evidence and has popularized it.
- Huemer, Michael. “The Problem of Memory Knowledge.” Pacific Philosophical Quarterly 80.4 (1999): 346–357.
- Endorses the Problem of Forgotten Defeat, yet also a form of generativism.
- Lackey, Jennifer. “Memory as a Generative Epistemic Source.” Philosophy and Phenomenological Research 70.3 (2005): 636–658.
- Argues for generativism and against anti-generativism.
- Lackey, Jennifer. “Why Memory Really Is a Generative Epistemic Source: A Reply to Senor.” Philosophy and Phenomenological Research 74.1 (2007): 209–219.
- Defends her earlier arguments for generativism and against anti-generativism from Thomas Senor’s objections.
- Lewis, Clarence I. An Analysis of Knowledge and Valuation. Open Court, 1946.
- Early and influential discussion of memory skepticism.
- Locke, Don. Memory. Vol. 13. Macmillan, 1971.
- One of the few book-length philosophical discussions of memory. Nearly all replies to memory skepticism on offer are scrutinized and a transcendental argument against memory skepticism is advanced.
- Malcolm, Norman. Knowledge and Certainty. Englewood Cliffs, N.J.: Prentice-Hall, 1963.
- Defends the epistemic theory of memory and a transcendental argument against memory skepticism.
- Martin, Charles B., and Max Deutscher. “Remembering.” Philosophical Review 75.2 (1966): 161–96.
- One of the first criticisms of the epistemic theory of memory. Presents an influential rival theory.
- McCain, Kevin. Evidentialism and Epistemic Justification. Routledge, 2014.
- Develops and defends what may be the most complete and detailed statement of an evidentialist, internalist theory of justification. Advocates a “stored justifications” type reply to some problems in the epistemology of memory.
- McGrath, Matthew. “Memory and Epistemic Conservatism.” Synthese 157.1 (2007): 1–24.
- Uses the epistemology of memory, in order to criticize evidentialism and to defend a rival internalist theory of justification.
- Michaelian, Kourken. “Generative Memory.” Philosophical Psychology 24.3 (2011a): 323–342.
- Assembles wide-ranging cognitive psychological research in an effort to challenge the storehouse model of memory and to advance a generative model. Sketches how reliabilism might accommodate a generative model.
- Michaelian, Kourken. “Is Memory a Natural Kind?” Memory Studies 4.2 (2011b): 170–189.
- Empirically informed philosophical discussion of the various memory systems. Denies that memory is a natural kind.
- Moon, Andrew. “Knowing Without Evidence.” Mind 121.482 (2012): 309–331.
- Presents to evidentialism a knowledge version of the Problem of Stored Beliefs centered on the basis of stored beliefs.
- Moon, Andrew. “Remembering Entails Knowing.” Synthese 190.14 (2013): 2717–2729.
- Argues that remembering entails knowing and criticizes Bernecker’s attempts to show otherwise.
- Naylor, Andrew. “Belief from the Past.” European Journal of Philosophy 20.4 (2012): 598–620.
- Adopts preservationism, while arguing for a theory about what it is to believe one did something from having done it.
- Owens, David J. “A Lockean Theory of Memory Experience.” Philosophy and Phenomenological Research 56.2 (1996): 319–32.
- One of the first arguments for a kind of generativism.
- Owens, David J. Reason without Freedom: The Problem of Epistemic Normativity. Routledge, 2000.
- Discusses preservationism, a kind of anti-generativism and the epistemic theory of memory.
- Pappas, George S. “Lost Justification.” Midwest Studies in Philosophy 5.1 (1980): 127–134.
- An early and underappreciated statement of many problems in the epistemology of memory, including the Problem of Forgotten Evidence and the Problem of Stored Beliefs.
- Plantinga, Alvin. Warrant and Proper Function. Oxford University Press, 1993.
- Endorses anti-generativism about warrant and criticizes inference to the best explanation replies to memory skepticism.
- Plato. Theaetetus. Trans. H.N. Fowler, Loeb Classical Library. London: William Heinemann, 1921.
- Among Western philosophy’s earliest work in the epistemology of memory, endorsing a storehouse model and epistemic theory of memory.
- Russell, Bertrand. The Analysis of Mind. London: Routledge, 1921/1995.
- One of the first discussions of memory skepticism, famously hypothesizing that we came to exist only five minutes ago.
- Schacter, Daniel L. Searching for Memory: The Brain, the Mind, and the Past. New York: Basic Books, 1996.
- Summarizes a considerable amount of psychological research on memory for a popular audience, with many citations for further reading. Explains how a generative model of memory, rather than a storehouse model, better fits the research.
- Schacter, Daniel L. The Seven Sins of Memory: How the Mind Forgets and Remembers. Boston: Mariner Books, 2002.
- Presents for a general audience a wealth of findings on the psychology of memory, exploring whether the general limits of human memory constitute defects. Provides additional references for further reading and supports the generative model of memory.
- Senor, Thomas D. “Internalistic Foundationalism and the Justification of Memory Belief.” Synthese 94.3 (1993): 453–476.
- Presents the Problem of Stored Beliefs as a special problem for internalism.
- Senor, Thomas D. “Memory.” A Companion to Epistemology. Ed.
Jonathan Dancy, Ernest Sosa, and Matthias Steup. Wiley-Blackwell, 2010.
- Concisely surveys many issues in the epistemology of memory.
- Senor, Thomas D. “Preserving Preservationism: A Reply to Lackey.” Philosophy and Phenomenological Research 74.1 (2007): 199–208.
- Defends anti-generativism from Lackey’s criticisms.
- Shanton, Karen. “Memory, Knowledge and Epistemic Competence.” Review of Philosophy and Psychology 2.1 (2011): 89–104.
- Argues that a condition, which Ernest Sosa and others think is necessary for knowledge, rules out knowledge from episodic memory.
- Shoemaker, Sydney. “Memory.” The Encyclopedia of Philosophy, Volume 5. Ed. P. Edwards. Macmillan, 1967. 265–274.
- A summary of the philosophy of memory up to the mid-20th century. Offers a transcendental argument against memory skepticism.
- Williamson, Timothy. Knowledge and Its Limits. Oxford University Press, 2000.
- Endorses the epistemic theory of memory and the view that all and only evidence is knowledge.
Biodiversity action plan

A biodiversity action plan (BAP) is an internationally recognized program addressing threatened species and habitats and is designed to protect and restore biological systems. The original impetus for these plans derives from the 1992 Convention on Biological Diversity (CBD). As of 2009, 191 countries have ratified the CBD, but only a fraction of these have developed substantive BAP documents. The principal elements of a BAP typically include: (a) preparing inventories of biological information for selected species or habitats; (b) assessing the conservation status of species within specified ecosystems; (c) creating targets for conservation and restoration; and (d) establishing budgets, timelines and institutional partnerships for implementing the BAP.

Species plans

A fundamental element of a BAP is thorough documentation of individual species, with emphasis upon the population distribution and conservation status. This task, while fundamental, is highly daunting, since only an estimated ten percent of the world’s species are believed to have been characterized as of 2006, most of these unknowns being fungi, invertebrate animals, micro-organisms and plants. For many bird, mammal and reptile species, information is often available in published literature; however, for fungi, invertebrate animals, micro-organisms and many plants, such information may require considerable local data collection. It is also useful to compile time trends of population estimates in order to understand the dynamics of population variability and vulnerability.
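The four principal elements of a BAP listed earlier, (a) inventories, (b) conservation-status assessment, (c) targets, and (d) budgets, timelines and partnerships, can be pictured as a simple record type. The Python sketch below is purely illustrative; the class and field names are invented for this example and do not come from any BAP standard:

```python
# Illustrative sketch only: the four principal BAP elements as a record
# type. All names here are invented, not part of any official schema.
from dataclasses import dataclass, field


@dataclass
class BiodiversityActionPlan:
    inventories: dict          # (a) species/habitat -> biological information
    conservation_status: dict  # (b) species -> assessed status
    targets: list              # (c) conservation and restoration targets
    budget_gbp: float = 0.0    # (d) budget
    timeline_years: int = 0    # (d) timeline
    partners: list = field(default_factory=list)  # (d) institutional partnerships


plan = BiodiversityActionPlan(
    inventories={"hedgehog": ["UK range survey"]},
    conservation_status={"hedgehog": "declining"},
    targets=["restore hedgerow habitat"],
)
```

Even this toy structure makes the point of the section concrete: the status assessment in (b) presupposes the inventory work in (a), which is exactly where the ten-percent characterization gap bites.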
In some parts of the world complete species inventories are not realistic; for example, in the Madagascar dry deciduous forests, many species are completely undocumented and much of the region has never even been systematically explored by scientists. A species plan component of a country’s BAP should ideally entail a thorough description of the range, habitat, behaviour, breeding and interaction with other species. Once a determination has been made of conservation status (e.g. rare, endangered, threatened, vulnerable), a plan can then be created to conserve and restore the species population to target levels. Examples of programmatic protection elements are: habitat restoration; protection of habitat from urban development; establishment of property ownership; limitations on grazing or other agricultural encroachment into habitat; reduction of slash-and-burn agricultural practices; outlawing killing or collecting the species; restrictions on pesticide use; and control of other environmental pollution. The plan should also articulate which public and private agencies should implement the protection strategy and indicate budgets available to execute this strategy.

Habitat plans

Where a number of threatened species depend upon a specific habitat, it may be appropriate to prepare a habitat protection element of the Biodiversity Action Plan. Examples of such special habitats are: raised acidic bogs of Scotland; Waterberg Biosphere bushveld in South Africa; California’s coastal wetlands; and Sweden’s Stora Alvaret on the island of Öland. In this case too, careful inventories of the species present, as well as the geographic extent and quality of the habitat, must be documented. Then, as with species plans, a program can be created to protect, enhance and/or restore habitat using strategies similar to those discussed above under the species plans.

Specific countries

Some examples of individual countries which have produced substantive Biodiversity Action Plans follow.
In every example the plans concentrate on plants and vertebrate animals, with very little attention to neglected groups such as fungi, invertebrate animals and micro-organisms, even though these are also part of biodiversity. Preparation of a country BAP may cost up to 100 million pounds sterling, with annual maintenance costs roughly ten percent of the initial cost. If plans took into account neglected groups, the cost would be higher. Costs for countries with a small geographical area or simplified ecosystems are, of course, much lower; the St. Lucia BAP, for example, has been costed at several million pounds sterling.

Australia

Australia has developed a detailed and rigorous Biodiversity Action Plan. This document estimates that the total number of indigenous species may be 560,000, many of which are endemic. A key element of the BAP is protection of the Great Barrier Reef, which is actually in a much higher state of health than most of the world’s reefs, Australia having one of the highest percentages of treated wastewater. There are, however, serious ongoing concerns, particularly regarding the negative impact of land-use practices on water quality. Climate change impacts are also feared to be significant. Considerable analysis has been conducted on the sustainable yield of firewood production, a major cause of deforestation in most tropical countries. Biological inventory work, assessment of harvesting practices, and computer modeling of the dynamics of treefall, rot and harvest have been carried out to adduce data on safe harvesting rates. Extensive research has also been conducted on the relation of brush clearance to biodiversity decline and impact on water tables; for example, these effects have been analyzed in the Toolibin Lake wetlands region.

New Zealand

New Zealand has ratified the Convention on Biological Diversity, and as part of The New Zealand Biodiversity Strategy, Biodiversity Action Plans are implemented on ten separate themes.
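The sustainable-yield modeling described above for Australian firewood harvesting is not publicly specified in detail, but the general idea can be illustrated with a toy model: a stand of trees growing logistically while a constant annual amount is harvested. All parameters below (carrying capacity, growth rate, starting biomass) are invented for the sketch and bear no relation to the actual studies:

```python
# Toy sketch, not the actual Australian models: logistic stand growth
# with a constant annual harvest. A harvest rate is "safe" only if the
# stand settles at a positive equilibrium instead of collapsing.

def simulate_stand(harvest, years=200, K=1000.0, r=0.05, b0=500.0):
    """Return the biomass left after `years` of growth minus harvest
    (0.0 signals that the stand collapsed within the horizon)."""
    b = b0
    for _ in range(years):
        b += r * b * (1 - b / K) - harvest  # logistic growth minus offtake
        if b <= 0:
            return 0.0
    return b

# For logistic growth the maximum sustainable yield is r*K/4;
# with the toy parameters above that is 0.05 * 1000 / 4 = 12.5 per year.
MSY = 0.05 * 1000.0 / 4
```

With these made-up numbers, harvesting 10 units a year settles near a stable stock of roughly 720 units, while 20 units a year drives the stand to zero well inside the simulated horizon, which is the kind of boundary the real inventory-and-modeling work aims to locate.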
Local government and some companies also have their own Biodiversity Action Plans.

St. Lucia

The St. Lucia BAP recognizes the impacts of large numbers of tourists on the marine and coastal diversity of the Soufrière area of the country. The BAP specifically acknowledges that the carrying capacity for human use and water pollution discharge of sensitive reef areas was exceeded by the year 1990. The plan also addresses conservation of the historic island fishing industry. In 1992, several institutions worked in conjunction with native fishermen to produce a sustainable management plan for fishery resources, embodied in the Soufrière Marine Management Area. The St. Lucia BAP features significant involvement from the University of the West Indies. Specific detailed attention is given to three species of threatened marine turtles, to a variety of vulnerable birds and a number of pelagic fishes and cetaceans. In terms of habitat conservation, the plan focuses attention on the biologically productive mangrove swamps and notes that virtually all mangrove areas had already come under national protection by 1984.

Tanzania

The Tanzania national BAP addresses issues related to sustainable use of Lake Manyara, an extensive freshwater lake whose usage by humans accelerated in the period 1950 to 1990. The designation of the Lake Manyara Biosphere Reserve combines conservation of the lake and surrounding high-value forests with sustainable use of the wetlands area and simple agriculture. This BAP has united principal lake users in establishing management targets. The Biosphere Reserve has induced sustainable management of the wetlands, including monitoring of groundwater and the chemistry of the escarpment water source.

United Kingdom

The United Kingdom Biodiversity Action Plan covers not only terrestrial species associated with lands within the UK, but also marine species and migratory birds, which spend a limited time in the UK or its offshore waters.
The UK plan encompasses "391 Species Action Plans, 45 Habitat Action Plans and 162 Local Biodiversity Action Plans with targeted actions". This plan is noteworthy because of its extensive detail, clarity of endangerment mechanisms, specificity of actions, follow-up monitoring program and its inclusion of migrating cetaceans and pelagic birds. On August 28, 2007, an update to the Biodiversity Action Plan (originally launched in 1997) identified 1,149 species and 65 habitats in the UK that needed conservation and greater protection. The updated list included the hedgehog, house sparrow, grass snake and the garden tiger moth, while otters, bottlenose dolphins and red squirrels remained in need of habitat protection. In May 2011, the European Commission adopted a new strategy to halt the loss of biodiversity and ecosystem services in the EU by 2020, in line with the commitments made at the 10th meeting of the Convention on Biological Diversity (CBD) held in Nagoya, Japan in 2010. In 2012 the UK BAP was succeeded by the 'UK Post-2010 Biodiversity Framework'.

UK BAP website

To support the work of the UK BAP, the UK BAP website was created by JNCC in 2001. The website contained information on the BAP process, hosted all relevant documents, and provided news and relevant updates. In March 2011, as part of the UK government’s review of websites, the UK BAP site was ‘closed’, and the core content was migrated into the JNCC website. Content from the original UK BAP website has been archived by the National Archives as snapshots from various dates (for example, UK BAP: copy March 2011; copy 2012).

United States

Twenty-six years prior to the international biodiversity convention, the United States had launched a national program to protect threatened species in the form of the Endangered Species Preservation Act of 1966. The legislation created broad authority for analyzing and listing species of concern, and mandated that Species Recovery Plans be created.
Thus, while the USA is an unratified signer of the accord, arguably it has the longest track record and most comprehensive program of species protection of any country. There are about 7000 listed species (i.e. endangered or threatened), of which about half have approved Recovery Plans. While this number of species seems high compared to other countries, the value is rather indicative of the total number of species characterized, which is extremely large.

Uzbekistan

Five major divisions of habitat have been identified in Uzbekistan’s BAP: wetlands (including reed habitat and man-made marsh); desert ecosystems (including sandy, stony and clay); steppes; riparian ecosystems; and mountain ecosystems. Over 27,000 species have been inventoried in the country, with a high rate of endemism for fishes and reptiles. Principal threats to biodiversity stem from human activities associated with overpopulation, generally related to agricultural intensification. Major geographic regions encompassed by the BAP include the Aral Sea Programme (threatened by long-term drainage and salination, largely for cotton production), the Nuratau Biosphere Reserve, and the Western Tien Shan Mountains Programme (in conjunction with Kazakhstan and Kyrgyzstan).

Criticism of Biodiversity Action Plans

Some developing countries criticize the emphasis of BAPs, because these plans inherently favour consideration of wildlife protection above food and industrial production, and in some cases may represent an obstacle to population growth. The plans are costly to produce, a fact which makes it difficult for many smaller and poorer countries to comply. In terms of the plans themselves, many countries have adopted pro-forma plans including little research and even less in the way of natural resource management.
Almost universally, this has resulted in plans which emphasize plants and vertebrate animals, and which overlook fungi, invertebrate animals and micro-organisms. With regard to specific world regions, there is a notable lack of substantive participation by most of the Middle Eastern countries and much of Africa, the latter of which may be impeded by economic considerations of plan preparation. Some governments, such as the European Union, have diverted the purpose of a Biodiversity Action Plan, implementing the convention accord through a set of economic development policies that merely reference the protection of certain ecosystems.

Biodiversity planning: a new way of thinking

The definition of biodiversity under the Convention on Biological Diversity now recognises that biodiversity is a combination of ecosystem structure and function, as much as its components, e.g. species, habitats and genetic resources. Article 2 states: in addressing the boundless complexity of biological diversity, it has become conventional to think in hierarchical terms, from the genetic material within individual cells, building up through individual organisms, populations, species and communities of species, to the biosphere overall... At the same time, in seeking to make management intervention as efficient as possible, it is essential to take an holistic view of biodiversity and address the interactions that species have with each other and their non-living environment, i.e. to work from an ecological perspective.

The World Summit on Sustainable Development endorsed the objectives of the Convention on Biological Diversity to “achieve by 2010 a significant reduction of the current rate of biodiversity loss at the global, regional and national level as a contribution to poverty alleviation and to the benefit of life on Earth”. To achieve this outcome, biodiversity management will depend on maintaining structure and function.
Biodiversity is not singularly definable but may be understood via a series of management principles under BAPs, such as:

1. Biodiversity is conserved across all levels and scales – structure, function and composition are conserved at site, regional, state and national scales.
2. Examples of all ecological communities are adequately managed for conservation.
3. Ecological communities are managed to support and enhance viable populations of animals, fungi, micro-organisms and plants, and ecological functions.

Biodiversity and wildlife are not the same thing. The traditional focus on threatened species in BAPs is at odds with the principles of biodiversity management because, by the time species become threatened, the processes that maintain biodiversity are already compromised. Individual species are also regarded as generally poor indicators of biodiversity when it comes to actual planning. A species approach to BAPs only serves to identify and, at best, patch existing problems. Increasingly, biodiversity planners are looking through the lens of ecosystem services. Critics of biodiversity planning often confuse the need to protect species (their intrinsic value) with the need to maintain ecosystem processes, which ultimately sustain human society and need not compromise economic development. Hence a core principle of biodiversity management that traditional BAPs overlook is the need to incorporate cultural, social and economic values in the process. Modern-day BAPs use an analysis of ecosystem services and key ecological process drivers, and use species as one of many indicators of change. They seek to maintain structure and function by addressing habitat connectivity and resilience, and may look at communities of species (threatened or otherwise) as one method of monitoring outcomes. Ultimately, species are the litmus test for biodiversity – viable populations of species can only be expected to exist in relatively intact habitats.
However, the rationale behind BAPs is to "conserve and restore" biodiversity. One of the fastest developing areas of management is biodiversity offsets. The principles are in keeping with ecological impact assessment, which in turn depends on good quality BAPs for evaluation. Contemporary principles of biodiversity management, such as those produced by the Business Biodiversity Offsets Program are now integral to any plans to manage biodiversity, including the development of BAPs. - 2010 Biodiversity Target - 2010 Biodiversity Indicators Partnership - Holocene extinction event - Climate Action Plan - IUCN Red List - Regional Red List - Glowka, Lyle; Françoise Burhenne-Guilmin and Hugh Synge in collaboration with Jeffrey A. McNeely and Lothar Gündling (1994). Guide to the Convention on Biodiversity. IUCN. ISBN 2-8317-0222-4. - IUCN Red-list statistics (2006) - Government of St. Lucia (2001). "National Biodiversity Strategy and Action Plan of St. Lucia". Archived from the original on 2006-11-05. Retrieved 2006-08-30. - Natural Resource Management Ministerial Council (2011). "Australia's Biodiversity Conservation Strategy 2010-2030". Retrieved 2012-12-07. - Commonwealth of Australia, Department of the Environment and Heritage (September 2005). "Great Barrier Reef Water Quality Protection Plan Annual Report 2004-2005". Archived from the original on 2006-08-22. Retrieved 2006-08-30. - Andreas Glanznig, Native Vegetation Clearance, Habitat Loss and Biodiversity Decline: an overview of recent native vegetation clearance in Australia and its implications for biodiversity, Biodiversity Series, Paper No. 6, Biodiversity Unit, June 1995 - The New Zealand Biodiversity Strategy. [Wellington, N.Z.]: Dept. of Conservation; Ministry for the Environment. February 2000. ISBN 978-0-478-21919-7. - St. Lucia National Marine Fisheries Act of 1984, Section 10, (1984) - Joint Nature Conservation Committee, London (2006). "United Kingdom Biodiversity Action Plan". Retrieved 2006-08-31. 
- BBC NEWS, Hedgehogs join 'protection' list - Joint Nature Conservation Committee, London (2012). "UK Biodiversity Action Plan". Retrieved 2012-10-28. - National Archives, London (2011). "UK Biodiversity Action Plan archive copy". Retrieved 2012-10-28. - , JNCC. Accessed via National Archives, London (2012) - Biodiversity Conservation National Strategy and Action Plan of Republic of Uzbekistan, 1997 - International Society for Fungal Conservation (2012). "Micheli Guide to Fungal Conservation". Retrieved 2012-12-07. - Noss, R.F. (1990) Indicators for Monitoring Biodiversity: A Hierarchical Approach. Conservation Biology 4 (4) 355–364. - Lindenmayer, D. B., Manning, A. D., Smith, P. L., Possingham, Hugh P., Fischer, J., Oliver, I., McCarthy, M. A., (2002) The Focal-Species Approach and Landscape Restoration: A Conservation Biology 16(2) 338–345 - "Principles in Biodiversity Offsets". - IUCN Summary Statistics for Globally Threatened Species - Mexico Biodiversity Action Plan - Philippines biodiversity inventory - UK Biodiversity Action Plan (home page) - USA Endangered Species Act of 1973 - The Convention on Biological Diversity (home page)
A Basis of Consciousness (Copyright 2003, James I define "consciousness" as one brain mechanism having to inhibit or enhance another. This makes one mechanism "aware" of the other. Now this may be magnified into any level of complexity of "on off" mechanisms monitored by other "on off" mechanisms. This is a form of "consciousness." However, I suggest the real key to "consciousness" is awareness of "need." When the above scenario is connected with a mechanism which generates "need," the interaction reaches the level of self awareness. Hence, hunger, thirst, etc. and sex produce "self awareness" in their satisfaction or deferral. (The memories of how these needs were met are stored in the association areas which refine the opportunity for satisfaction and increase awareness. I suggest “consciousness” cannot exist without memory.) The third part is "drive." "Need" generates drive. This is the area which machines, at least at this time, may not be able to mimic. A machine that must seek and find energy, for example, is simply using "on off" mechanisms. I suggest the "drive" of animals results from the characteristic of nerves which differentiates them from other tissues. I think our drive comes from the "addiction" mechanism. That is, our nerves evolved the ability to increase receptors in response to the stimulus of entering molecules which trigger the addiction mechanism. Therefore, an accumulation of nerves, the brain, becomes a site which is constantly increasing its "need" for various molecules by constantly increasing receptors for these molecules. This is the basis of our drive mechanism. I suggest "consciousness" consists of these mechanisms: control of one mechanism over another, especially involving mechanisms that are identified as "needs," and the addiction mechanism which constantly renews "needs."
Bone Loss Means Loss of Support

Bone loss in your teeth, or rather, around them, means loss of support. Once lost, bone doesn’t easily grow back. Bone is important to your smile; losing it can complicate your health and appearance. Is there any way dental implant therapy at Chrysalis Dental Centres can help?

How Bone Loss Affects Your Smile

Bone loss is a hallmark of advanced gum disease. When gum disease goes untreated, the bone surrounding the affected tooth breaks down. This means that even once the disease stops advancing, the tooth loses a lot of support. Periodontal, or gum, disease is a common cause of bone and tooth loss. Another cause of bone loss is a denture. Dentures are meant to rest over gums and bone with a snug fit. This helps them to stay in place. But with time the pressure from dentures can cause bone to wear away. This means that dentures need to be adjusted on occasion to fit your mouth. Have you had any teeth shifted or lost due to gum disease? Have you noticed a change in the shape and height of your smile since you’ve worn dentures? Preventing bone loss in your teeth can help you to enjoy a beautiful and natural smile.

Addressing Bone Loss Around Your Teeth

To cope with changes in bone, you have a few options:
- Continue adjusting your denture to fit your smile
- Surgical bone grafting or augmentation
- Implant therapy

As mentioned earlier, adjusting a denture won’t prevent further bone loss. Bone grafting in itself can prepare a site to receive an implant, but requires more surgery and time. What are the benefits of dental implant therapy in addressing bone loss?

How Dental Implants Work

Dental implants don’t just fill in a gap in your smile. An implant serves as the replacement for the unseen root of a tooth, as well. Like a tooth root, the implant puts pressure inside the bone of the jaw during biting. This natural force can stimulate bone growth. This in turn can stabilize an implant and reinforce surrounding bone tissue. 
Dental implants could prevent bone loss because they rest inside bone, not on top of it. This means that they could save your jaw from changing in shape and height.

Your Dental Implant Options

From a single implant to an implant-supported bridge to full-mouth implants, you have many options. If you need to address bone loss in one area of your smile, there is likely a way implants can help. Even if you already have a full denture, your smile is not beyond saving. As few as four dental implants are often placed at strategic angles to make the most of existing bone in your arch. These implants can then support a full mouth dental prosthesis. This implant-supported denture shouldn’t rub against your gums, which can prevent bone loss. This technique can also help you avoid the need for bone grafting. The specialists at Chrysalis Dental Centres are ready to help you start saving the bone in your smile. Give us a call today to learn more! Call 1 888 733-6983
Over 150 people from all over Italy participated in a conference on the role of feeding stations for the conservation of scavenger bird species, which took place on February 19 at the Regional Natural Reserve Monterano, organized as part of the LIFE+ Monti della Tolfa. The conference gathered experts and conservation managers that have worked in several projects across Europe, including at least 5 LIFE projects, where feeding stations were used as a tool for the conservation of endangered species like the red kite, the griffon vulture or the globally Endangered Egyptian Vulture. The conference also highlighted the threats to scavenger birds posed by poisoning, the use of lead in hunting ammunition and the use of veterinary drugs toxic to scavenging birds, such as diclofenac, an anti-inflammatory that is deadly to various species of vultures and is legally available in Italy and Spain. Its widespread use in India has caused a massive vulture population crisis there. The main conclusions of the meeting were as follows:
• Feeding stations are, in certain contexts, important tools for the conservation of scavenging birds, particularly for young individuals, newly reconstituted populations and at stopover sites during migration;
• They alone are not the solution for the conservation of these species and should be used together with other actions (e.g. 
anti-poisoning campaigns);
• They should be designed to maximize the impact on the target species, while minimizing the negative side effects;
• Decisions regarding their establishment should be taken on a case-by-case basis: there is no generic protocol that applies to every situation; rather, a feasibility study should be conducted that analyses the ecology of the target species and the surrounding human context;
• The effects on target species should be evaluated by long-term monitoring, and their management should be adaptive, considering any problems and issues;
• The costs of management and maintenance, meat supply and the permitting procedures should not be underestimated;
• These feeding stations also have the potential to interfere with the natural foraging behavior of some species;
• It may be useful to sign agreements with suppliers of meat in the area, farmers or slaughterhouses, and closely scrutinize their procedures, so that all veterinary legislation is complied with. For the providers, there are opportunities to be labeled as "friends of the vultures";
• Video cameras allow for better identification of the individuals attending the feeding station, including individual recognition if birds are ringed;
• In Italy there are fewer than 10 feeding stations – in Spain, more than 200. Unlike in other countries, there is no law that promotes the use of the carcasses of animals that die in the fields – according to Italian law they need to be removed and incinerated or buried, at a very significant cost to the taxpayer, removing valuable food resources from the food chain;
• In recent years the Italian Ministry of Health has regulated the use of meat from various sources for the supply of feeding stations and has arranged a special database, according to the specific requirements laid down in Regulation 1069/2009 ex (EC) No. 
1774/2002;
• In the period 2008-2012, the EU invested a minimum of €11 million in the conservation of scavenging species, through 67 different projects;
• The anti-inflammatory drug diclofenac should be banned in Europe and in Italy, as was done in India, as it is highly toxic for several species of vultures;
• Feeding stations should not take in carcasses of animals treated with antibiotics and anti-inflammatory drugs, in particular animals treated with medicines containing the active ingredient diclofenac;
• An enhanced dialogue between experts and decision makers in the fields of human health, veterinary medicine and nature conservation should take place on this matter, at both national and local level, so that there is better information and coordination on the establishment of feeding stations and their management;
• Feeding stations have an enormous potential educational value, and they can have positive economic effects for local communities (from saving the costs of disposing of carcasses to local ecotourism projects);
• Not surprisingly, the ecotourism activities associated with the vultures have already been identified as one of the main markets for ecotourism development.
The VCF was also present at the meeting, through Fulvio Genero, a member of the VCF scientific advisory committee. For more information, please check http://www.lifemontidellatolfa.it/index.php?option=com_content&view=article&id=142%3Agrande-successo-del-convegno-sulla-tutela-dei-rapaci-attraverso-luso-dei-carnai-febbraio-2015&catid=50%3Acomunicati-stampa&Itemid=81&lang=it
This blog post series is focused on the overall encompassing impact the adoption of a deaf child can have on a family, especially if that family is only minimally aware, or even if they feel they are somewhat aware, of all that surrounds deafness and those who live in the Deaf World. The word deaf will be used to reference a child with any degree of deafness, including those labeled Hard of Hearing. There are many things to seriously research and prayerfully consider before moving forward with the adoption of a deaf child, even if you are encouraged to do so by others and by your adoption agency. The life-long journey of the adoption of a deaf child is anything but easy, and it will be life-changing for all!

Can you pick out the deaf child(ren) in this picture?

Because deafness cannot be detected with the eye (visually), some families are eager to adopt a child labeled “deaf” or “hearing impaired” more so than one who has obvious visually recognizable “imperfections”. Deafness is often only detected after a child fails to begin talking by the age of two or three and sometimes beyond. Some of the misdiagnoses of deafness include autism, mental retardation and/or the inability or refusal to speak, which is often referred to as “mute” or “non-verbal”. Deaf children, in general, tend to be about 18 months to two years behind their hearing peers, emotionally and socially, given their communication gap and lack of language acquisition from birth. If you do not already know, hearing children who have been institutionalized and then adopted are typically about 18 months or so behind (emotionally and socially) hearing children raised in loving homes from birth. This can put the emotional and social development of the adopted deaf child as much as 4 years behind that of hearing, biologically raised children. This is important to keep in mind at all times! 
In some countries, the labels “dumb” or “deaf and dumb” are still used for those who cannot speak, somehow relating the inability to speak with low intelligence. These terms were coined and accepted here in the US by the hearing population many years ago. Today, the term “dumb” is no longer accepted in this country, but the hearing population has once again promoted a new, politically correct label for the Deaf population: “hearing impaired”. Deaf people who identify themselves with the Deaf culture prefer to be called what they are, “Deaf”, a label that can encompass varying degrees of deafness and their precious Deaf culture, since they do not believe they suffer from any impairment whatsoever. To learn more about how the Deaf population generally defines the above terms, go to the National Association of the Deaf website. Not until hearing testing/screening is performed is it discovered that the child has deafness to some degree. This may be primitive-like testing in other countries, the use of a squeaky toy or clanging pot lids behind the child. Even then, many people do not make the connection that the reason the child does not speak (mute) is because they have not heard people speaking to them nor heard themselves vocalizing as they progressed through the babbling stage of development for spoken language. This is the natural way a child without deafness learns to speak a spoken language. The deaf child’s voice is usually intact, but it is untrained and useless for communication because the hearing aspect needed for acquiring and learning the accurate spoken language is not. The loud noises the deaf child will make with their voices and the sounds relating to other bodily functions, sounds they are oblivious to, will be the topic of another blog post in this series. By the way, there is no known deaf child in the above picture. It is simply a random photo off the Internet. You cannot tell by looking at a child if they are deaf or not. 
If you think God is “calling” you to adopt a deaf child and you have little to no experience with deafness and you do not know anyone who lives in the Deaf World…STOP! Go back to your spouse and the two of you take whatever time is necessary to make sure God has “called” you to this, before moving forward. Ask God to show you in unmistakable ways. You can be assured if you move forward without knowing this is from God, somewhere down the road you will begin questioning, “Why did we ever do this?” If you KNOW for sure God has “called” you to adopt a deaf child, that knowledge will give you the added strength you will need to persevere no matter the cost to you and any other hearing family members for the challenges ahead. The effect the adoption of a deaf child has on hearing siblings (adopted and/or bio), as well as on other extended hearing family members, is often challenging as well, and this, too, will be covered in this series. Adoption agencies sometimes do a great job of preparing families to embrace the culture into which their soon-to-be-adopted son or daughter was born, but rarely do adoption agencies give the same attention to the Deaf Culture. Most are clueless when it comes to Deaf Culture and deafness. The adoption of a deaf child will automatically thrust you and your family into the Deaf Culture whether you like it or not. Bear in mind when you adopt a deaf child from a different ethnic background (a different country) the number of cultures your family will now be exposed to will be more than just one. It is only reasonable to ask that you research their Deaf culture well, with an open mind, before proceeding with a deaf adoption. This is the same experience for the hearing family who gives birth to a child with deafness. 95% of all deaf children are born to and/or raised by hearing parents. In this country, today, less than 10% of hearing parents learn to sign with their biologically born deaf children. 
Deaf people often desire to give birth to deaf children just like them, but less than 5% of Deaf parents will have the coveted opportunity to do so. Here are a few links for increasing your knowledge surrounding the Deaf Culture here in America: In addition to learning all you can about the culture of the Deaf, you and every member of your immediate family MUST be willing to commit to learning American Sign Language. This should not be an option, but should be mandatory by your adoption agency. The vast majority of deaf children available for adoption are considered “older”, above the age of 3, and that, in and of itself, will be challenging for most families. Even if you start the adoption process when the deaf child is under 3, they will often turn 3 before you can bring your son/daughter home. In case no one has told you, there really is nothing magical about adopting a child under the age of three. Adopting a child under the age of three can also be just as challenging for bonding and connecting with their adoptive family as it can for a child over the age of three. With the vast brain development research available now, it is readily understood that the brain of a child who has been traumatized develops much differently from one that has not. What is meant by traumatized? Examples of trauma include institutionalization (orphanage-life) where neglect and lack of nurture abound; abandonment also breeds trauma for the adopted child, even if they cannot remember when it happened. The brain of a child is deeply affected by trauma experienced in utero, as well, which could be related to the mother’s use of substances harmful to her unborn child and/or what she may be experiencing herself during the pregnancy. Mom’s stress levels during pregnancy also impact the physical development of her unborn child, and not just their brain. Dr. 
Karyn Purvis, author of The Connected Child, cites cleft lip and palate abnormalities in Asian babies as being directly related to stress the mother experiences when she discovers she is pregnant. The one-child policy evokes great emotional upheaval for Asian mothers, and that occurs about the time the palate inside the mouth of her unborn baby is developing in utero. The brain of a child cared for by a loving family from birth is vastly different from the brain of a child who is born into uncertainty, lack of nurture and neglect. Families adopting a child that has experienced neglect, lack of nurture, hunger, and possible mind-altering events that took place in utero, or trauma during a stressful birth, must receive training in how best to connect with them, training appropriate for parenting a child from the “hard place.” This kind of training will grant parents the ability to provide an environment geared toward increased brain function, which will, in turn, grant the child the ability to respond in appropriate ways and not with unacceptable behaviors. The Empowered to Connect Conferences (now Hope for the Journey), led by Dr. Karyn Purvis, are the best way to prepare for adoption. They are also a wonderful place for learning more tools to help with parenting adopted children once they are home. Dr. Purvis believes it is never too late to begin using the tools she equips parents with at these conferences, and I agree. Signs for Hope believes it is the right of the deaf child for their adoptive family to be fully aware of as many of the challenges they will face while raising them, yes, because of their deafness, but also because of their coming from the “hard places.” Trying to prepare yourself and your family, as best you can, to bring home a child from the “hard places” and learn ASL adequately at the same time is next to impossible. You will either prepare well for one or be inadequately prepared for both. 
One more thing to prepare yourself for when adopting a deaf child; it is very possible and highly likely your deaf son or daughter has been physically and/or sexually abused. It is sad, but it is a common occurrence here in this country, so you can only imagine how much greater the possibility is for this to happen in other countries’ institutions. A child from “hard places” is a phrase Dr. Purvis uses to describe children who have experienced trauma during institutionalization and/or foster care placements. Do not conform to the pattern of this world, but be transformed by the renewing of your mind. Then you will be able to test and approve what God’s will is–His good, pleasing, and perfect will. Romans 12:2 There are 13+ blog posts in this series, So You Want to Adopt a Deaf Child? You can follow the sequence in the link below.
Tuesday, April 26, 2011

CANTABA LA RANA by Rita Rosa Ruesga

Poetry Tag continues with a book review of a new book of poetry connected to yesterday's book review.

Today’s tagline: More poems with a Spanish connection

Guest Reviewer: Lydia Rogers

Featured Book: Ruesga, Rita Rosa. 2011. Cantaba la rana/The Frog Was Singing. Ill. by Soledad Sebastián. Scholastic. ISBN: 9780545273572

Lydia writes: CANTABA LA RANA, THE FROG WAS SINGING is a tribute to the poetry that most Hispanic children have heard at least once in their lives. Even English speakers will enjoy these much-loved Spanish nursery rhymes from Latin America, as Rita Rosa Ruesga has mainstreamed the poetry by translating each poem. A surprising quality of each poem is the musical tune that accompanies it. The reader can feel the beat embedded in each poem as they read it in Spanish or English. The illustrations by Soledad Sebastián are in step with art typical of Latin America. Each piece of art enhances every poem with vivid tones of red, green, purple and blue. The pictures are significant enough to hang as art in a classroom, library or home. Here’s a sample poem (in Spanish and English):

Señora Santana ~ Mrs. Santana

Señora Santana,
¿por qué llora el niño?
Por una manzana
que se ha perdido.
Yo le daré una,
yo daré dos,
una para el niño
y otro para vos.

Dear Mrs. Santana,
Why is the child crying?
Because of an apple,
I think he just lost it.
I will give him one,
I will give him two,
One is for the boy,
The other one’s for you.

This poem starts off with crying and ends with a smile. Any student will feel the natural connection and fall into a rhythm as they clap, chant, or sing this tune. So, ¡Grab your maracas! Or collaborate with the music teacher; she/he can help you get the tune to this well-loved lullaby from Spain. Your K-2 students will appreciate the poem even more if you act it out with them and all share an apple treat in the end. 
Students in grades 3-5 will identify with the idea of being kind and why that is important in daily life. (They should get some apple too as a reminder to be helpful and kind.)

Tomorrow’s tagline: More poems from the oral tradition

[We’re heading down the homestretch of National Poetry Month—still time to get your copy of PoetryTagTime, an e-book with 30 poems, all connected, by 30 poets, downloadable at Amazon for your Kindle or Kindle app for your computer, iPad or phone for only 99 cents. Grab it now.]

Image credit: PoetryTagTime; Scholastic

Posting (not poem) by Sylvia M. Vardell and students © 2011. All rights reserved.
Can we learn from the past and avoid making the same mistakes in the future? This is what this chapter aims to explore.

A Brief Assessment of the Present

We cannot take the present state of the world for granted. Current economic successes may only herald tomorrow's decline. Easter Island was an example of this. At face value, perpetual growth holds many promises. However, within a context in which resources are limited, increasing economic activity might not bring about the desired outcome—a better life for all. On the contrary, it could aggravate problems and hasten the world's demise, ruining the planet and the future for many generations. Before getting into details, let us take a closer look at the current state of the world.

Background: Limits to Growth

It is not always easy to anticipate what the future will bring. Four decades ago, the Club of Rome (a non-profit global thinktank) issued a report which pointed at the possibility of a world collapse halfway through the 21st century (Limits to Growth, Meadows, D.H., Meadows, D.L., Randers, J., et al., 1972). The analysis, based on a computer model called World3, was the first of its kind in trying to tackle the issue of sustainability by simulating interactively five global variables: population growth, industrial production, food production, pollution, and consumption levels of non-renewable resources. The intent of the Limits to Growth initiative was not to make exact predictions about the timing of a world collapse but rather to create a dynamic model with feedback loops that would simulate real-life interactivity between major global subsystems and show trends and how a change in one variable would impact others. 
Part of the model's importance also rested in its ability to demonstrate that some of the subsystems, for example population, could grow geometrically or exponentially (1, 2, 4, 8) rather than linearly (1, 2, 3, 4) and have a dramatic impact on the speed at which resources are depleted. In that respect, the simulations were fully successful but, by the same token, were also heavily criticized, often discredited as gloom and doom scenarios. Some of the claims made by the report's detractors were later found to be themselves exaggerated when not entirely false. What is certain is that the report's conclusions were shocking: the potential for a world collapse in this century. The question that interests us at the moment is whether the model was accurate in its portrayal of reality and conclusions. Three updates of the report (a second edition in 1974, Beyond the Limits in 1992, and Limits to Growth: The 30-Year Update in 2004) have been produced and improved the model's accuracy. In all cases, the general conclusions in terms of the seriousness of the world's problems and the possibility of a collapse remained essentially the same. In A Comparison of the Limits to Growth With Thirty Years of Reality (2008), Graham Turner, a senior scientist at CSIRO Sustainable Ecosystems, Canberra, Australia, compared three of the original scenarios with actual data from the last 30 years to determine the legitimacy and accuracy of the Limits to Growth simulations. The first scenario, the standard run, essentially involved a current-policy type of situation in which governments continue doing as they have in the past: limited efforts in addressing environmental problems such as pollution, global warming, the conservation of non-renewable resources, and population growth. The second one, the comprehensive technology simulation, assumed a significant amount of human intervention in terms of addressing sustainability problems with the use of technology. For example, recycling levels are increased to 75%, pollution reduced by 25%, food production doubled, etc. 
The third simulation, the stabilized world scenario, involved much more aggressive human intervention in both the technological and socio-political arenas. This meant, for example, birth control policies, an orchestrated economic shift towards services and away from physical goods, the protection of agricultural land with regulations, etc., in addition to renewable energy initiatives and other technology-based solutions. The findings can be summarized as this: observed historical data for 1970–2000 most closely matches the simulated results of the LtG [Limits to Growth] “standard run” scenario for almost all the outputs reported; this scenario results in global collapse before the middle of this century. (Turner, 2008) These results are interesting firstly because real data of the period between 1970 and 2000 was plugged into the heavily criticized model and showed it to be consistent and reasonably close to what actually occurred over the last few decades. Secondly, they were found to compare closely to the standard run simulation—a business-as-usual scenario that assumed no significant shift in government commitment to the environment—which parallels what has happened over the last four decades. Had the original standard run simulation been exaggerated as critics professed, the real data would have shown it to be overly pessimistic and would have compared favorably with the other two more positive scenarios (comprehensive technology and stabilized world). Thirdly, while the real-world data does not provide absolute proof of a collapse halfway through this century, the 30-year period between 1970 and 2000 represents almost 40% of the timespan between 1970 and a presumed global crisis. Such significance cannot be ignored. Another 30 years of waiting could put us at the doorstep of a world collapse. The comprehensive technology scenario does postpone the global meltdown but by only a few years, to the second half of the century. 
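The disproportionate effect of exponential growth described above is easy to demonstrate with a toy calculation. The following sketch is purely illustrative (it is not the World3 model, and the stock and growth figures are arbitrary assumptions chosen only to show the shape of the dynamics): a finite, non-renewable resource consumed at a rate that grows by a fixed fraction each year runs out far sooner than linear intuition suggests.

```python
# Toy illustration (NOT the actual World3 model): exponential growth in
# consumption drawing down a finite, non-renewable resource stock.
# All parameter values below are arbitrary assumptions for illustration.

def years_until_depletion(stock, consumption, growth_rate):
    """Return how many years a resource stock lasts when annual
    consumption grows by a fixed fraction each year."""
    years = 0
    while stock > 0:
        stock -= consumption            # draw down this year's consumption
        consumption *= 1 + growth_rate  # consumption compounds each year
        years += 1
    return years

# With zero growth, a stock of 1000 units at 10 units/year lasts a century.
print(years_until_depletion(1000, 10, 0.0))   # 100
# With just 3% annual growth in consumption, it lasts less than half as long.
print(years_until_depletion(1000, 10, 0.03))  # 47
```

A modest-sounding 3% growth rate cuts the lifetime of the resource by more than half, which is the kind of counter-intuitive result the World3 feedback loops were built to expose.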
The real-world data of the 1970-2000 period offers little hope unless governments take a lot more determined action with respect to pollution, global warming, resource depletion, and other environmental problems.

The third scenario, the stabilized world, does paint a more optimistic view of the future, but again, real-world data of the 1970-2000 period does not show us to be on that road. Nor is it reasonable to assume that society would be prepared anytime soon to take the actions required to bring about such an outcome: aggressive technological changes and determined social policy. Even under this scenario, the original Limits authors did not totally rule out collapse as a possible outcome.

Turner cautiously concluded that the 1970-2000 data only partially confirmed the World3 simulation results. However, he pointed out that many current developments, namely with respect to oil reserves, climate change, and the prospect of food shortages, seem to be trending similarly to the now 40-year-old simulations.

As well, he highlighted the interesting fact that as growth continues (standard run scenario): The attempts of the World3 model to alleviate pressures in one sector of the global system by technological means generally results in increasing pressures in other sectors, often resulting in a vicious cycle or positive feedback. (Turner, 2008, p. 34)

A recent example of this is in 2008, when the production of biofuels using agricultural crops resulted in food shortages and sharp increases in the price of staples, especially in the developing world. The production of biofuels from edible crops might have been part of the solution in a three-billion-people world. It is not part of it when almost seven billion have to be fed. Many potential solutions will decrease in effectiveness or vanish altogether as growth continues.

Turner (2008) also noted that increased efficiency generally had adverse effects as it promoted growth.
As supporting evidence, he pointed out that while carbon intensity decreased over the last century, greenhouse gas emissions continued to increase over the same period of time (p. 35).

Increased efficiency should be a positive element for the future and the environment, but in the absence of appropriate socio-economic policy (for example, to shift consumption away from non-renewable resources and reduce the world's population), it can make problems worse. This is in line with one of the conclusions of the second simulation: technological solutions alone are not enough and will only delay a potential collapse by a few years.

While we have to remain careful about trends and simulation results, the findings based on 30 years of recent data are just too powerful, too close to what had originally been expected for that period of time, and too heavy in consequences to ignore.

A Fourth-Wave Simulation

The model has not been tested with a market-integrated strategy such as the Green Economic Environment, which perhaps offers the only hope at this point in time. As argued in the first book of the Waves of the Future series, the GEE might be the only approach powerful enough to address the issues at hand. That being said, knowing that it would work will not help us if the environmental strategy is never implemented.

Erring on the Side of Caution

In the book Limits to Growth: The 30-Year Update, the authors concluded: "Humanity has squandered the opportunity to correct our current course over the last 30 years" (Meadows, D.H., et al., 2004, p. 5). While a collapse might still be avoidable at this point in time—with a huge amount of resolve and action—we did lose precious years (actually decades) for failing to heed the call made by the Club of Rome in 1972.
It is true that the World3 simulation results were dramatic, but had we taken remedial action then and found out four decades later that the model was overly pessimistic—which now appears not to be the case—we would only have ended up with a world less polluted, less populated, less plagued with climate change problems, and with enough food for everybody. That would also have meant more plentiful resources for the future as well as cheaper prices. We would already be ahead of the game in terms of transitioning to renewable and cleaner energies. If World3 had erred, it would have been on the side of caution, with only positive consequences for us.

Many corporations have a vested interest in preventing progress from being made on environmental issues and resource conservation. Many oppose a green agenda because they are large polluters or enrich themselves by depleting the earth's resources. Scientists and environmentalists were right about global warming. Yet, the industry and its lobby kept denying its existence for decades, just as they denied the toxicity of many chemical compounds we now know are harmful to human health.

The issue of a potential world collapse may be just like a cancer which, if detected and treated early, is curable and, if not, is deadly. The Club of Rome did detect the problem soon enough, but 40 years of inaction might have just squandered the only opportunity we had for a cure. Erring on the wrong side can have very dramatic and horrifying consequences. We might or might not find out for ourselves in the next few decades. Easter Island learned the lesson the hard way.

What is certain is that we cannot count on the corporate world to sound the alarm about pollution and the depletion of resources. After the first and second oil crises in the 1970s and 1980s, the oil industry was heavily criticized for price gouging and increasing profits at the expense of consumers.
You would think that the sector would have tried to adopt more morally and socially responsible policies, but questions were raised again about the same issue in 2008. Guess who was laughing all the way to the bank when oil prices peaked at over $140/barrel that summer? Not the consumer!

Mark Cooper, Director of Research at the Consumer Federation of America, looked at the profits of the five big oil companies (ExxonMobil, Shell, BP, ChevronTexaco, and ConocoPhillips). He (2008) concluded: The unprecedented increase in oil industry profits in 2008 is the culmination of a six-year run up that has seen petroleum industry profits increase by more than 600 percent since 2002.

Cooper reported that profits went from about $30 billion in 2002 to an expected $180 billion by the end of 2008. During the same period, the weekly price of gasoline at the pump (all grades) went from about $1.50/gallon to more than $4.00/gallon (Cooper, 2008, November 2). This was all happening at a time when people were already hurting from high prices.

Some countries were talking about forming a rice cartel when there were fears of mass starvation in 2008 as a result of a tripling of the price of that staple. For corporations, shortages are a positive occurrence. We should not count on them to sound the alarm or err on the side of caution.

In the decades preceding a presumed world collapse, power will shift to corporations as the supply of many resources decreases. Profits will flow into their coffers as people themselves are being squeezed and find it increasingly difficult, if not impossible, to make ends meet. At least, this is what our experience with petroleum is showing us. Is there any reason to believe that the future will be any different?

This section will look at two significant contamination indicators.
In 2005, the Environmental Working Group (EWG, a US nonprofit research organization) and Commonweal (a nonprofit institute) produced a report on the carcinogens and toxic compounds found in the blood of the umbilical cords of 10 human babies. The sample was small but the results were shocking. In total, 287 industrial chemicals and pollutants were identified. According to the report, these included: Eight perfluorochemicals used as stain and oil repellants in fast food packaging, clothes and textiles — including the Teflon chemical PFOA, recently characterized as a likely human carcinogen by the EPA's Science Advisory Board — dozens of widely used brominated flame retardants and their toxic by-products; and numerous pesticides. (Houlihan, J., Kropp, T., Wiles, R., Gray, S., & Campbell, C., 2005, July 14)

Of the total number of compounds, 180 have been found to be carcinogenic in humans and animals. Many—217 to be more specific—are also known to be harmful to the nervous system. Tests on animals have proven some 208 chemicals to produce developmental problems and abnormalities.

Do you remember the corporate world ever warning us about our babies being born with hundreds of potentially harmful compounds in their bodies, producing reports on the harmfulness of their activities, or sounding the alarm bell about the problem?

The research conducted by EWG and Commonweal is highly significant in that it provides a snapshot of the state of the planet at the moment and the depth of the quagmire we are in. Our bodies are part of the very environment in which we live. They are actually made of it and cannot be dissociated from it. As such, it should not surprise us that if we live in a cesspool of toxic materials, the bodies of the babies we give birth to will be composed of a cocktail of harmful chemicals as well.

The World's Ultimate Sewage Lagoon

Another indicator of pollution levels is the state of the world's oceans.
Many of the chemicals that we use every day—for cleaning, washing, and grooming—or spray on the ground in pursuit of higher agricultural yields end up in our rivers and lakes. So do the outflows from domestic sewage systems and those from industrial plants. They are then carried downstream and eventually make their way to the oceans, which become their final resting place. As contaminants keep flowing into them year after year, oceans will over time become the world's ultimate sewage lagoon, if that is not already the case.

As time goes on, pollution levels in the world's oceans will increase, resulting in further damage and the destruction of more and more of their resources. Mercury contamination in tuna fish is only one of many stories pointing to the fact that significant damage has already occurred. Like the people of Easter Island, we are starting to lose commercial species, ones that have fed generation after generation.

How are we doing in terms of renewable resources? Here is an example. Diamond (2005) listed the following fisheries as having collapsed or been lost in the 20th century: "Atlantic halibut, Atlantic bluefin tuna, Atlantic swordfish, North Sea herring, Grand Banks cod, Argentinian hake, and Australian Murray River cod" (p. 480). What is even more shocking than losing the fisheries themselves is that they were supposed to be renewable resources. The sad fact is, at this point in time we cannot even manage renewable resources, and pressures will only increase as the world's population continues to grow. Just like the people of Easter Island, we are destroying renewable resources and important sources of food and income for the present and the future.

In terms of non-renewable resources such as metals, there is virtually no plan to conserve them at the moment. The more minerals we dig out and consume every year, the more wealth is generated and the more jobs are created. Governments are more likely to promote the industry at the moment than engage in conservation.
The problem is especially acute for nonfuel minerals because of their low substitutability, as already expressed. As reserves decrease, shortages will begin to occur, and prices will increase and reach unprecedented levels.

The crucial question at this point in time is, how much is really left? If we still have thousands of years' worth of reserves, it would essentially be a non-issue. On the other hand, if the resource estimates of the World3 computer model are accurate, we are already at a critical stage. Here is a closer look at the issue.

Estimates of the amount of mineral reserves left at the moment are difficult to establish for a number of reasons. For example, they vary depending on price and technological development.

The Global Mineral Resource Assessment Project (http://pubs.usgs.gov/fs/fs053-03/) is perhaps the most, if not the only, comprehensive attempt at trying to assess the total reserves of most nonfuel minerals on the planet. It is a cooperative international effort run by the US Geological Survey (USGS) and aiming to provide countries around the world with better information on the availability and supply of minerals in order to improve government decision-making with respect to resource development and economic planning.

The project's conclusion: "No global shortages of nonfuel mineral resources are expected in the near future" (US Geological Survey, 2003). The real difficulty with respect to this statement lies in interpreting what it really means. It could refer to an absence of shortages for the next six to eight years, which is often as far as governments plan ahead, but no one really knows for sure.

In any case, even a six- to eight-year window does not really tell us how severe the problem is. A given timespan can have a very different meaning depending on what it leads to: a mild economic slowdown or a rapid decline resulting in a world collapse.

An important element to take into account is the size of the world's population. It has almost doubled since the release of the Limits report in 1972.
This means that resources are being exhausted much faster than they were 40 years ago and that finding new supplies able to satisfy the much higher annual demand becomes increasingly difficult.

Problems are also magnified by the shortage of energy. When the price of oil is up, everything that requires energy becomes more expensive, including food and minerals. Metals themselves already see stiff price increases in periods of economic growth and when shortages occur. Higher energy prices only serve to compound the problem.

The statement made by the USGS perhaps has more meaning in its omissions than in what it actually says. While it might be true that there will not be significant shortages of minerals in the near future, the statement fails to point out the potential consequences resulting from the low substitutability of metals were shortages to occur in the medium term.

Issues Relating to Reserve Estimates

The US Geological Survey defines the word reserves as follows: That part of the reserve base which could be economically extracted or produced at the time of determination. The term reserves need not signify that extraction facilities are in place and operative. (US Geological Survey, 2010)

Reserves are therefore only the part of a resource that is economically exploitable. The reserve base is a broader concept sometimes referred to as total reserves and defined as: That part of an identified resource that meets specified minimum physical and chemical criteria related to current mining and production practices, including those for grade, quality, thickness, and depth.... The reserve base includes those resources that are currently economic (reserves), marginally economic (marginal reserves), and some of those that are currently subeconomic (subeconomic resources). (US Geological Survey, 2010)

For the sake of simplicity, in further discussions the reserve base will be defined as reserves and subeconomic resources (of which marginal reserves are a part).
Since reserves are actually only what is economically recoverable, their quantity depends on market prices. For example, tripling the amount currently paid for metals would make profitable some of the resources considered uneconomic at the moment. As such, reserves are extendable on account of market value. However, the reality of even just a doubling in the price of a commodity can be harsh, as the oil experience has shown us: a rise in the cost of living, price increases for other commodities and goods, food shortages, economic turmoil.

Higher market values can extend reserves, but there are limits, as the expense of exploiting low-grade minerals tends to grow exponentially and eventually reach a mineralogical barrier, a point at which the extraction costs in energy and other resources become prohibitively expensive and beyond anything that could be considered economically feasible.

The Cost of Energy and Other Mining Inputs

The costs of energy and other mining inputs (for example, machinery) have the opposite effect of a rise in price. The higher they are, the more reserves shrink, primarily because the latter are defined as that part of the resource that is economically recoverable. While consumers might be willing to pay a higher price for a given resource—and in doing so increase the ability of a company to develop more costly deposits—a rise in the costs of energy and other inputs can cancel out that effect. Depending on the price of energy and other resources like metals (out of which machinery is made), some of the subeconomic part of the resource base might never become economically exploitable.

For example, if a mineral from a given deposit currently costs $100 per ton to extract and the market price for it is $110, the commodity would be profitable, and so would all other deposits whose extraction costs are between $100 and $110. Suppose that the price of oil triples and increases extraction costs of the mineral by $10.
Then, the deposits whose extraction costs are between $100 and $110 would become unprofitable, not only shrinking existing reserves but also pushing subeconomic resources farther away from ever becoming economic.

The Total World Population

As expressed earlier, the size of the world's population is an important factor in terms of assessing how long reserves will last. For example, if 100,000 units of a certain resource were available in 1965, when the total world population was about 3.3 billion and the annual consumption was 1,000 units, the total supply of the resource would have been 100 years' worth of consumption.

However, the same quantity in today's reality would last less than 50 years on account of the world's population nearing 7 billion, assuming the per-person consumption remained the same. The total resource would actually have to double to 200,000 units for it to last 100 years—which would be very difficult to do.

Growth in the total number of people on the planet reduces reserves not in quantity but in the length of time that they would last. The world's population has been growing rapidly and is expected to continue to do so for several decades.

Science and Technology

Science and technology have served to increase reserves in the past. For example, horizontal drilling and other new techniques have enabled the exploitation of oil and gas resources that would have otherwise been out of reach or too expensive to extract. Research and new technologies will continue to develop and help extend reserves. But they are only two of the many parameters of the equation, and there are limiting factors.

Science itself tends to behave like a depletable resource. Discoveries are easy to come by at first; then solutions become more and more complex, expensive, and difficult to achieve. While there is a lot of expansion to expect in new sciences like genetic engineering, breakthroughs in many of the older physical sciences occur less often and are generally more costly and elaborate in nature.
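The reserve-lifetime arithmetic in the population example earlier can be sketched in a few lines. The figures are the same illustrative ones used in the text (100,000 units, 1,000 units per year in 1965, population going from about 3.3 to about 7 billion):

```python
# Illustrative sketch of the text's example: how long a fixed resource lasts
# when per-person consumption stays constant but population grows.

def years_of_supply(total_units, annual_consumption):
    return total_units / annual_consumption

# 1965: 100,000 units available, population ~3.3 billion, 1,000 units/year.
base_pop = 3.3e9
base_consumption = 1_000.0
print(years_of_supply(100_000, base_consumption))  # 100.0 years

# Today: population ~7 billion; same per-person consumption scales demand up.
today_consumption = base_consumption * (7.0e9 / base_pop)
print(years_of_supply(100_000, today_consumption))  # about 47 years -- under 50
```

The same stock that represented a century of supply in 1965 covers less than half a century once the population has roughly doubled, even with no change in individual consumption.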
Will science solve all of our problems, as many environmental deniers profess? It has not done so in the past, despite the exponential scientific growth of the last century. The fact that oil reserves have peaked or are expected to peak soon proves the point. All of the new science and technologies have not been enough to prevent the world's consumption of petroleum from outstripping new discoveries.

The same can be said about the fact that our own children are now born with dozens of harmful chemicals and carcinogens in their tissues, that dozens of species go extinct every year, that world commercial resources like tuna fisheries are being degraded, or that the cutting down of tropical rainforests continues unabated despite decades of activism. Birth control has been around for decades, yet the world's population continues to grow despite many going hungry. News headlines in 2008 were that food stockpiles were at historical lows. Despite the Green Revolution and its boost to agricultural productivity, there are more people going hungry today than ever before!

Many hold the belief that science will find a solution to all of our problems. In practice, it has failed to do the job because it has limitations and does not exist in isolation. Since 1972, science and technology themselves have proven that they were not able to prevent oil from peaking, to stop the destruction of the environment, to halt the growth of the world population, to provide enough food for all, or to slow down the depletion of resources. They are positive factors in terms of extending reserves and helping to postpone a potential world collapse. However, their track record over the last 40 years should dispel any hope that they will be a panacea for the future.

Energy is a poster child for the unlimited-resource argument and the concept of substitutability.
As oil becomes depleted and its price increases, society will convert to other sources of energy, just as has begun to happen since petroleum hit $140 a barrel. As such, we may deplete oil reserves completely but will not run out of energy, because petroleum has alternatives that are both renewable and available in almost unlimited supplies. The question is, does the same model apply to metals? It does not, because substitutability is low and there are essentially no renewable alternatives nor any available in the enormous quantities that will be needed. Here is a closer look.

There is a certain amount of substitutability among metals. Aluminum, steel, and magnesium alloys are all heavily used today and represent possible substitutes for each other in many industrial applications, including electrical wiring and motor vehicle parts. Their reserves are larger than those of other minerals, although by no means should they be considered extensive. They could also be replaced in specific cases by plastics, fiberglass, or carbon fibre. Despite this, they may not survive the test of true substitutes, as shown below.

For a metal to be considered a suitable replacement for another as resources peak and shortages begin to occur, certain conditions have to be met. True substitutes have to be available at reasonable prices. On that account alone, the world will not generally be able to transition to a variety of reasonably priced metallic alternatives. It is not only one mineral resource that is being depleted at the same time; it is all of them. Prices will increase across the board as they peak and shortages are in the offing.

When it comes to mineral resources, a doubling or even tripling in price can be considered a small difference. Between the bottom of a recession and the peak of the next economic growth cycle, the price of many commodities often more than doubles.
Table 1 shows the prices of different minerals in two periods of strong economic growth (1989 and 2008) and at the bottom of the market downturn that followed the Internet and technology stock crash in 2002. With the exception of aluminum and zinc, all metals at least doubled in price in the 2002 to 2008 six-year period. Nearly half (cobalt, silver, tin, and copper) more than tripled. This was still at a point in time when shortages were not in sight, at least for most metals, and speculation was not a concern. Price hikes will occur much more rapidly when either of those enters the picture. Remember that the price of crude oil went from about US $90 a barrel in February 2008 to a high of $147.27 on July 11 of the same year. That is an increase of over 60% in less than six months!

While swings in price are normal for many minerals, they can be very painful for consumers and countries. Between 2004 and mid-2008, petroleum prices more than tripled. In addition to the resulting pain at the pump, the peak in the price of oil pressured the US financial system at its weakest point—the subprime mortgage sector—and crashed the world economy.

Table 1. Mineral Price Variation for Select Years. [Table data not recoverable in this copy; the columns were the 1989, 2002, and 2008 prices, each in constant 1998 US dollars per ton, and the percent change between 2002 and 2008.] Prices are in constant 1998 US dollars/ton. Data for aluminum is from bauxite sources only. Source: US Geological Survey, Historical Statistics for Mineral and Material Commodities in the United States, Version 2010. MT = Metric Tons; TMT = Thousand Metric Tons; MMT = Million Metric Tons; BMT = Billion Metric Tons.
The above underscores two things: the fragility of the world economy with respect to relatively small variations in the price of mineral commodities, and the potentially catastrophic consequences of moderate and larger price increases as would occur at a more advanced stage of resource depletion.

Some advocates argue that we will not run out of metals because once a commodity is exhausted, society would jump to an alternative. Once that supply is gone, it would then move on to another one. In addition to good substitutability and the need for reasonable prices discussed above, a true substitute would have to be available in large quantities or be renewable.

Again, oil is the poster child for this argument. Most alternative energies are available in relatively large quantities and in a variety of forms and sources (biofuels, wind energy, hydro-electricity, solar power, etc.). Many are also renewable, meaning that they are theoretically unlimited. In practice, several types of power or fuels require a lot of energy in their production and will be constrained to various degrees by increases in the price of oil and other resources.

Metals are used massively in today's society. In fact, they are a mainstay of the infrastructure of countries as well as of their manufacturing industry. Massive use would point to the need for massive reserves, which brings us to the next point. If metals are being depleted simultaneously, none of them will be available in the substantial quantities required to act as substitutes when others begin running short. Neither would they last any significant amount of time: when they start replacing other metals, their consumption would double, triple, and more (their own share plus that of the other minerals they are substitutes for).

In short, there are no true substitutes for metals. From the above, it should be clear that there will not be any jumping from one metallic resource to another as they become depleted. Most metals are massively and concurrently used today.
Prices will increase across the board as reserves peak and start running short. No plentiful substitutes will exist at that point in time, nor would they be available at reasonable prices in most if not all cases.

Conservation and Recycling

Conservation programs could reduce our consumption of metals, and recycling would decrease demand for raw materials. Both are positive factors in terms of extending reserves, but after decades of environmentalism, little occurs in that respect. Of course, everybody recycles, but in terms of percentage of material recovery, the results are still poor.

Governments cannot really be counted on to take action aggressively enough to do what needs to be done in that respect. A more likely scenario is that as the price of metals doubles, triples, and quadruples, markets for recyclables will develop on their own. The only problem is, by the time this occurs, it will be too late. Many metals will have peaked already.

Again, a green economic environment as proposed in the first book of this series would create markets for recyclables sooner and could be the only approach powerful enough to address the issue of resource depletion. Conservation is central to and one of the pillars of the GEE.

Economic growth is an important consideration with respect to reserves. The greater the industrial production, the faster the depletion of non-renewable resources. Governments around the world are pushing for increased economic growth as a means to improve the welfare of their people. India and China—representing together almost one third of the world's population—have seen tremendous growth in the last decade. How long will the planet be able to sustain this? While economic expansion is a positive for society, it is a negative factor in terms of mineral reserves. While the world's population has grown at an annual rate of a little above 1% in the first decade of this century, economies have expanded at a rate of about 2.57% during the same period. Not only are there more of us, but we also consume resources at a faster rate.
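The compounding effect described above can be sketched with hypothetical numbers (the 2.57% growth rate is the one cited in the text; the reserve figure is made up for illustration). A static reserves-to-consumption ratio overstates how long reserves last once consumption itself keeps growing:

```python
# Illustrative: with consumption compounding at ~2.57% per year, a
# "100 years of reserves at current consumption" estimate is optimistic.
# The reserve quantity here is hypothetical.

def years_static(reserves, consumption):
    """Naive estimate: assumes consumption never grows."""
    return reserves / consumption

def years_with_growth(reserves, consumption, growth_rate):
    """Deplete reserves while annual consumption compounds each year."""
    years = 0
    while reserves > 0:
        reserves -= consumption
        consumption *= 1 + growth_rate
        years += 1
    return years

print(years_static(100, 1))               # 100 years on paper
print(years_with_growth(100, 1, 0.0257))  # 51 years in this sketch
```

Under these assumptions, a century of supply on paper shrinks to about half a century, which is the sense in which faster economic growth works against mineral reserves.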
Manganese Nodules: The Last Frontier

When oil became increasingly difficult to find on solid ground, the industry moved offshore. Is the same thing going to happen with respect to nonfuel minerals?

There are significant amounts of several types of metals in the oceans. They lie in large seabed deposits of potato-size nuggets called manganese nodules. Discovered in 1803, they are essentially chunks of rocky material that contain significant amounts of manganese, iron, and base metals. They have been found in several locations around the world at various depths.

Manganese nodules are technically renewable, as they grow by bacteria depositing on their surface certain minerals found in sea water. However, their rate of formation is so slow (2 mm or 0.08 inch per 1,000,000 years) that the renewability of the resource is in question. Furthermore, their rate of growth depends on the total surface available for depositing minerals. Any exploitation of the resource would reduce its ability to renew itself.

There are many considerations with respect to mining manganese nodules. The more obvious ones are environmental concerns and the difficulty and cost of exploiting a resource that lies deep below the ocean's surface (two to five kilometers on average).

Manganese nodules represent a significant source of nonfuel minerals. They are a positive factor in terms of extending reserves of certain metals. The question is whether we want to save some of that resource for our children or wipe everything out. The issue is perhaps better captured in The 21st Century Environmental Revolution: Seabed resources are probably the only thing future generations will have left after we are done. The last thing we want to do at this point is to move into this last frontier. The solution to our problem does not consist in wiping out one resource after another. It lies in bringing ourselves under control. (Henderson, 2010, p. 60)

While oil has relatively cheap renewable substitutes, the impact of its depletion on future generations will be moderate. This is not the case for nonfuel minerals. Will society resist the temptation of delving into what might be our last significant source of minerals? What will happen to this last-frontier resource? Will it be first come, first served, or are we going to try to preserve it for future generations so that they too have resources to build their physical infrastructure with?

World3: The Factors

The World3 model did take into account many of the factors discussed above. In fact, population, industrial production, and the consumption of non-renewable resources represent three of the five economic subsystems on which the model is based. Factors influencing mineral reserves, such as substitutability, the cost of resource extraction, the potential for technological development, recycling rates, etc., are also considered. This brings us to the last question: how much of these resources is really left?
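World3 is a system-dynamics model: stocks such as population, capital, and non-renewable resources, linked by feedback loops. A toy sketch in that spirit, drastically simplified and with entirely made-up parameters (it is not the actual model), reproduces the characteristic overshoot pattern: output grows while resources are plentiful, then peaks and declines as depletion chokes reinvestment.

```python
# A drastically simplified, illustrative stock-and-flow loop in the spirit of
# World3. All parameters are invented; this is not the published model.

def simulate(years=200):
    resources = 1000.0   # non-renewable resource stock
    output = 1.0         # industrial output
    history = []
    for _ in range(years):
        fraction_left = resources / 1000.0
        # As resources deplete, more effort goes to extraction, choking
        # reinvestment -- the feedback behind overshoot and decline.
        reinvestment = 0.04 * output * fraction_left
        depreciation = 0.02 * output
        resources = max(resources - output, 0.0)
        output = max(output + reinvestment - depreciation, 0.0)
        history.append(output)
    return history

h = simulate()
peak_year = h.index(max(h))
print(peak_year, round(max(h), 2), round(h[-1], 2))  # rise, peak, decline
```

Even this two-stock caricature shows why the standard run bends over: growth is self-reinforcing only while the resource fraction remaining stays high, and no single parameter tweak removes the turning point, only moves it.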
This past weekend Yael Stav offered tours of her home and vertical garden as part of the "Houses From Within" (Batim Mibifnim) exhibition in Tel Aviv. Some of the garden was built using recycled materials and compost from her children's eco-friendly diapers.

Jerusalem Bird Observatory Brown Roof – Local Flora no Irrigation by Rov-Noy, image via the World Green Infrastructure Network

Stav is currently completing her doctoral thesis on the environmental benefits of vertical greenery, which Dickson Despommier, professor of public health at Columbia University, proposed in 2009 as a perfect fit for the Middle East. (Stephen Colbert asks Dickson Despommier about vertical farming video). Stav's research shows that growing plants on a building's roof and walls can save over 20 percent of the energy used to cool the building.

"The vertical greenery can enable the whole family in the city to grow a variety of plants, including assorted vegetables and herbs," she says to Haaretz newspaper, pointing to tomatoes curling down from one of the holders hanging from the fence. "You can do this in any building and use a variety of materials."

The annual 'Houses From Within' event also included tours of local community gardens and environmental academic centers in Tel Aviv. The city is at the heart of Israel's environmental movement, and every year there are new citizen-led developments to celebrate.

Image via Haaretz
Winner! 2008 Children's Science Picture Book

Where in the Wild? Camouflaged Creatures Concealed... and Revealed
by David Schwartz and Yael Schy, with illustrations by Dwight Kuhn (Tricycle Press, 2007)

An online lesson from Science NetLinks is available for this book.

The title and subtitle of this book, replete with an ellipsis pause, had me expecting an exposé of deception in the natural world. I wasn't disappointed. "Camouflage," from a French word meaning "to disguise," seems to have entered common usage from military vocabulary in the early 20th century, but even then, military applications alluded to the many animals in nature that were able to avoid detection by blending into their surroundings.

The authors ask readers to look carefully at a series of 11 full-page color photographs and find in them the animals, or their eggs in one instance, camouflaged from would-be predators or prey. Ten of the photographs have poems on a facing page offering rhyming hints as to where to look and what to look for. Read the poems out loud for their full descriptive effect, but cleverly, some of the poems also have their printed lines arranged as visual hints: "Motionless" is printed in four "double-jointed" lines representing the four pairs of the motionless spider's legs, and the lines of "Serpentine" undulate across the page. At the bottom of each picture is the notation "lift to find me." Each folio unfolds to vividly reveal the previously camouflaged creature against a faded-out background. Each animal revealed has an accompanying page of life history information, with additional lore on its use of color and behavior in avoiding predation or in assisting in capturing prey. However, suggestions for further reading are lacking.
Although the authors present camouflage "experts," from coyotes and deer fawns to green snakes, tree frogs, and salamanders, the masters of deception are the insects, here represented by ladybug beetles and moths, the latter often camouflaged in developmental stages and in the adult. In spite of the many nature documentaries on public and cable television and the seemingly endless proliferation of nature and animal sites on the Web, these authors and their photographer have put together interactive hard copy that should captivate today's youngsters. The only problem with the book is that it ends abruptly. I turned the last page expecting more! David Schwartz is the author of fifty children's books that make math and science come alive, including How Much is a Million?, G is for Googol, Q is for Quark, and If You Hopped Like a Frog. David is a popular speaker at schools in the U.S. and abroad. Learn more about him at www.davidschwartz.com. Yael Schy is a consultant, trainer, facilitator, and coach, who uses improvisational theater and dance to encourage leadership, teambuilding, and creative thinking in the workplace. She is co-author of a business training book, Teamwork Tools, and she also loves to write poems and songs. David and Yael live in Oakland, California, with their two well-camouflaged cats, Sushi and Sashimi. Dwight Kuhn's beautiful composed nature photographs have been featured in 125 children's books. His work has been recognized by the Children's Book Council, the Scientific American, and the John Burroughs Association. He lives with his wife and two dogs in Dexter, Maine.
Teasing Detailed Home Habits from Aggregate Energy Consumption Data

Machine learning techniques can be applied to sensor data collected from smart homes to reveal activity patterns of the residents, which can then be correlated with measured energy consumption. By associating activities with energy use and costs, intelligent systems can be devised to automatically control home environments so as to improve energy efficiency and cut expenses.

Although households and buildings are responsible for over 40 percent of energy usage in most countries, many people receive little or no detailed feedback about their personal energy usage. Bills traditionally provide a month's total energy and a total price to be paid, leaving homeowners to guess, after taking any changes in fuel costs into account, what might explain a higher or lower than usual bill. The typical utility bill provides no information about the relationship between a person's behavior and corresponding energy usage. Since such information could help individuals modify habits in ways that would be beneficial for both the household and the community, it would be desirable to develop technologies that could extract the information from smart homes and communicate it to residents.

A smart home environment is one that acquires and applies knowledge about its residents and their physical surroundings in order to improve their experience in that setting. Such home environments, equipped with sensors for detecting motion, light level, temperature, and energy and water consumption, are ideal testbeds for investigating techniques of inducing behavior changes to reduce energy footprints. One such technique is energy consumption analysis.
The general idea is to employ data mining techniques in order to analyze electricity consumption data and to identify patterns of interest to utility companies and their customers: sequences of usage patterns that appear frequently at different time scales (daily, weekly, monthly, yearly) and across different homes; trends of electricity consumption (steadily increasing, decreasing, cyclic, seasonal) for individual homes and across the community; and anomalies (sudden peaks or drops in consumption) for individual homes and across the community. For example, using abnormality detection algorithms, customers can be notified that they consumed an exceptionally large amount of energy during some specific period. With the help of other sensors and techniques, more detailed information can also be provided, including when customers performed certain activities, which rooms they occupied, and what appliances they used most frequently during that period. This information can be transmitted to customers in a timely fashion via phone, email or the Internet.

To make informed decisions, residents need to know their current energy consumption in real time and ideally would need to be alerted when appliances are being activated at unfavorable times. Without installing additional sensors, Non-Intrusive Load Monitoring (NILM) techniques can be designed to detect switch events at individual appliances on a single electrical circuit and communicate those events to a home energy management system. Several suitable Internet protocols exist, including the Extensible Messaging and Presence Protocol (XMPP), which enables servers to communicate directly with smart phones, email and webpages. PlotWatt applies cloud-based load monitoring algorithms to the analysis of smart meter data and tells customers how much they are spending to run each appliance, without actually monitoring the individual appliances. The web-based interface is accessible from any web-connected computer, tablet or phone.
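The switch-event detection at the heart of NILM can be sketched as a simple step-change detector on the aggregate power signal: a large jump between consecutive readings suggests an appliance turning on or off. This is a minimal illustration with invented wattage figures and an invented threshold; production NILM systems use much richer appliance signatures.

```python
# Minimal sketch of NILM-style switch-event detection: scan an aggregate
# power reading (watts, one sample per interval) for step changes that
# exceed a threshold. All wattages below are illustrative assumptions.

def detect_switch_events(readings, threshold=50.0):
    """Return (index, delta) pairs where consecutive readings jump by
    more than `threshold` watts, suggesting an appliance switched state."""
    events = []
    for i in range(1, len(readings)):
        delta = readings[i] - readings[i - 1]
        if abs(delta) > threshold:
            events.append((i, delta))
    return events

# Baseline load of ~300 W; a ~1200 W appliance turns on, then off.
signal = [300, 302, 1501, 1499, 1503, 299, 301]
print(detect_switch_events(signal))  # [(2, 1199), (5, -1204)]
```

A home energy management system would map each detected delta to a likely appliance (a ~1200 W step might be a kettle or space heater) and forward the event, e.g. over XMPP, to the resident's phone.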
Other analytic techniques include energy consumption prediction and consumption visualization. With the aid of environmental sensors, machine learning algorithms can identify resident behaviors in smart homes and predict future energy usage given the time of day and day of week as well as sensor readings and identified activities. Visualization techniques give the non-technically-minded smart home resident an informative, user-friendly, and intuitive graphical interface that presents information about energy consumption in the home. The visualization tools should allow residents not only to observe their electricity consumption in real time, but also to visualize consumption patterns, trends and anomalies. These tools can be presented locally, remotely over the web, or via a mobile device. Additionally, there is a place for the user interface to provide remote control features for manually interacting with devices throughout the home.

Some such smart grid technologies are already widely applied in the industry. Within the field of smart grid architectures, smart home-based technologies represent a small but growing part, helping individuals to monitor their personal power usage with the goal of making changes in their lifestyles and achieving energy efficiency. Continued research and engineering effort would provide insights into the relationship between home resident behaviors and energy consumption. Additionally, this work will generate a suite of tools that directly benefit individual people wishing to reduce their energy footprints, and power utilities, by providing a more efficient and adaptive smart grid.
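The usage prediction described above can be approximated by a very simple baseline: average past consumption per (day-of-week, hour-of-day) slot and use that average as the forecast. Real systems would add sensor readings and detected activities as features and use proper ML models; the class name and kWh figures here are invented for illustration.

```python
from collections import defaultdict

# Baseline hourly energy predictor: average historical consumption per
# (day-of-week, hour-of-day) slot. History values (kWh) are invented.

class SlotAveragePredictor:
    def __init__(self):
        self.totals = defaultdict(float)   # summed kWh per slot
        self.counts = defaultdict(int)     # observation count per slot

    def observe(self, day_of_week, hour, kwh):
        key = (day_of_week, hour)
        self.totals[key] += kwh
        self.counts[key] += 1

    def predict(self, day_of_week, hour, default=0.0):
        key = (day_of_week, hour)
        if self.counts[key] == 0:
            return default                  # no history for this slot
        return self.totals[key] / self.counts[key]

model = SlotAveragePredictor()
for kwh in (1.2, 1.4, 1.0):                # three past Mondays, 18:00
    model.observe("Mon", 18, kwh)
print(model.predict("Mon", 18))            # ~1.2 kWh forecast
```

Large deviations between a slot's forecast and the actual reading are also a natural trigger for the anomaly alerts mentioned earlier.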
The Reburial Controversy: a general overview and exploration of a method for resolution of the ethical dilemma
by Eric Pettifor

Good friend, for Jesus sake forbear
To dig the dust enclosed here.
Blest be the man that spares these stones
And curst be he that moves my bones.
(epitaph on his grave marker) (Riverside Shakespeare, 1974)

It is easy on first consideration of the reburial controversy to see a double standard in play, in which European graves are sacred and Indian graves are not. But as was clear to William Shakespeare, Europeans have long had a penchant for moving bones about when it has suited them (consider as well the gravedigger in Hamlet, who tosses up Yorick's skull while digging Ophelia's grave; in a couple of decades it will be Ophelia's skull that makes an airborne reappearance to make way for someone else). The sacredness of European graves is provisional and of limited duration, typically the lifetime of the immediate descendants of the deceased, provided they are concerned with the preservation of the grave. Sometimes even if they are concerned it is of no consequence, if they are in a lower socioeconomic class and do not have the resources to assert their interest. There are many factors that come into play which will not be discussed here (see McGuirre, 1989), but suffice it to say, the grave least likely to be disturbed is the one belonging to the wealthy family, with the huge tombstone and the conspicuousness of prestige (or else the grave of a cultural treasure like William Shakespeare, especially if there is an explicit curse attached!). "The sanctity of the grave is regarded as being primarily a concern of the family and the cemetery, and only secondarily as a community matter." If the grave is ancient, the descendants of the deceased themselves long buried, the original community lost even to memory, then it is fair to dig it.
McGuirre notes that those of European descent "accept without question the routine excavation and curation of Indian graves which they equate with the ancient graves regardless of the age of the Indian burials." (McGuirre, 1989, emphasis added)

In Western thought, primitive is a temporal concept that creates otherness by relegating people to an ancient time, regardless of their true historical context (Fabian, quoted in McGuirre, 1989).

This tendency to regard Indians as primitive, an ancient race, and so on, is the legacy of the 19th-century idea that the Indian was a dying race. To the perception of archaeologists as graverobbers we may add archaeologists as undertakers, putting to rest the Indian people: "It is clear that the Indian with his inability to preserve his own culture or to assimilate ours, is bound to disappear as a race indeed if he has not already found his way into the pious hands of museum archaeologists." (Barbeau, 1923, quoted in Doxtator, 1988)

By asserting their rights over reburial of remains claimed to be those of their ancestors, Indians are effectively making their presence known and in a sense are saying, "We have been, we are, and we will continue to be, and you must respect us as we define ourselves." Yet it would be a mistake to characterize Indian demands for reburial as being entirely a political strategy. Clearly it can be and is used to these ends, but even from this perspective we have to ask what it is that they are fighting for. Only a committed cynic could maintain that it is entirely an exploitation of "white guilt" in order to gain commercial property and profit. At the root is a concern for cultural identity, continuity, and survival. A significant part of Indian culture is in its spirituality. Indians did not suffer the split Western society did at the beginning of the 17th century between science and religion.
Just as prior to this time in the West religion was in a real sense a science, a world view which explained "Life, the Universe, and Everything" (Adams, 1982), so Indian spirituality also serves this function, including knowledge of prehistory: "[Archaeologists] understand the past - but we know the past" (Cecil Antone, in Hubert, 1989).

Spirituality is not divorced from ecology in this belief system; they are part of the same thing, and this is of direct relevance:

He [the dead] has done his work in this world and he is going to another world to go back to the mother earth where we all came from . . . if he is disturbed he is out there, wandering, his spirit is not fully with the mother earth . . . (Cecil Antone, in Hubert, 1989)

. . . [no] digging up the liver of mother earth, the veins, the rivers of mother earth . . . the natural world is what we would like to preserve for our future generations, we would like them to see what we see today, where they can enjoy seeing their brothers, their clan relatives, the eagles, the crows, the buzzards, the rattlesnakes and those animals, those human beings - the sonora fruit cactus - the various cacti and trees who through the burials have grown up into trees and into cactus, and they are with us too in that form, and we want our relations to be with us in whatever form they are. (Cecil Antone, in Hubert, 1989)

Violating the principles of this spiritual ecology can lead to dire consequences:

We want to get rid of the sicknesses, we want to get rid of the unhappy land . . . that is the result of digging up and leaving empty the homes of the ancestors. From the empty homes, that is where the sickness comes . . . the unhappiness; that is how our children are killed, that is how we lose them, because we have disturbed and desecrated those areas where we had our ancestors' homes. (Cecil Antone, in Hubert, 1989)
When the ferry terminal was being built at Twassen, British Columbia, the province wanted a small amount of the band territory for a road. The band agreed, provided the province build an access road to their marina. The province agreed, but noted that the road they wanted would run through a sacred site. The site was excavated and arrangements made with Simon Fraser University for the study of burial remains for a period of one year, after which they would be returned to the band. Six months later the band urgently requested the immediate return of the remains. During that time there had been a run of what would be regarded from a Western perspective as "bad luck": there were several deaths, and a band member had absconded with all of the band money. The Twassen tribe attributed this misfortune to the removal of the bones (Hobler, 1995).

The consequences of violating these principles are not necessarily limited to Indians. "Many traditional people believe that the continuing desecration threatens the spiritual balance and harmony of the entire world..." (Hammil and Cruz, 1989)

Bones should become dust. Mother earth lacks these bodies; if they are not returned there will be earthquakes and mother earth will take all these people. (Arizona Inter Tribal Council, in Hubert, 1989)

Given that the concerns of Indians are very real, even if we do not share their belief system, why not simply accede to their demands? The bones are those of their ancestors, after all. Shouldn't they be the ones to say how they are treated? Because the loss to archaeology of a source of data of such major importance would be devastating. Study of burials can yield data as to disease patterns, diet, changes in population, demographics, culture, environment, and society (Hubert, 1989). Arguments for social stratification are often based upon variance in grave goods.
The principles involved in Indian demands for reburial would prohibit the digging of graves at all, and grave goods would also fall into the category of the sacred (do not touch). Reburial of the source of data used in archaeological interpretations would make it exceedingly difficult for future archaeologists to confirm or disconfirm those interpretations through recourse to the source of the data. Concerning the historic period, one of the blessings archaeology has bestowed is the ability to lend support to, or call into question, historical accounts. Without recourse to the original data source, archaeological interpretations would become in time no better than the documents of the historians themselves, and any claims of archaeology to being a science, hard or soft, would be seriously called into question. Clement W. Meighan worries that these restrictions will lead to a loss in the vitality of American archaeology in general:

An entire field of academic study may be put out of business. . . . archaeology students are now steered away from digs where they might actually find some American Indian remains. American archaeology is an expiring subject of study ~ one in which new students no longer choose to specialize. Instead, they specialize in the archaeology of other countries, where they will be allowed to conduct their research and have some assurance that their collections will be preserved. (1994)

. . . When scholarly classes in United States archaeology and ethnology are no longer taught in academic departments (they are diminishing rapidly), when the existing collections have been selectively destroyed or concealed, and when all new field archaeology in the United States is a political exercise rather than a scientific investigation . . . . [American] leadership in archaeological research . . . . will be lost, and it will be left to other nations to make future advances in archaeological methods, techniques, and scholarly investigations into the ancient past.
(1992)

Concerning the above question as to the remains being ancestral to Indians and therefore subject to Indian claims, there is not universal agreement on this point. It would be difficult to argue that even the most ancient human bones are not ancestral to modern-day Indians in general, but the claims of specific tribes to bones allegedly ancestral become increasingly difficult to prove as one moves farther and farther back into prehistory. Meighan is very clear on this point:

Museum materials 5,000 years old are claimed by people who imagine themselves to be somehow related to the collections in question, but such a belief has no basis in evidence and is mysticism. Indeed, it is not unlikely that Indians who have acquired such collections for reburial are venerating the bones of alien groups and traditional enemies rather than distant relatives. (1992)

In this same article Meighan suggests that "Professional organizations should work to amend the legislation dealing with archaeology to get a time cut-off inserted: Remains older than a certain age should not be subject to reburial," and goes on to state that the reburial of a 10,600-year-old skeleton in Idaho "should never have happened." What Meighan fails to realize is that from a pan-Indian perspective, relationships like friend or enemy do not matter. Further, Indian spiritual beliefs also operate in the present and apply to all people; that is, their spiritual beliefs are not perceived as being true only for Indians. The worldview is exactly that, global, and no one is outside it, past, present, or future. We can argue that these beliefs are incorrect, that we do not see them as applying to ourselves, but we must appreciate their perspective and attempt to understand it if we are to avoid arguing at cross-purposes. From the Indian perspective it would be wrong even for the bones of Meighan, a contemporary enemy, to be displayed in a glass museum case and for his spirit to be trapped there.
To treat such beliefs as mysticism, whether they are or not (and assuming Meighan is using the term in the pejorative), is to invite an adversarial relationship. From a pragmatic perspective this clearly should be avoided, unless we are very certain of being able to force compliance.

From the ethical perspective things do not seem immediately clear. If ethics are relative, then the ethics of archaeology are at odds with the ethics of the Indian community. Will the "correct" ethics be chosen through a democratic expression of public opinion? Perhaps, but if so, given current public opinion in favour of the Indians, this would not bode well for archaeology. Or one can adopt the premise that ethics are not relative and that they can therefore be useful as a tool for the assessment and resolution of dilemmas. This perspective is taken by the Social Science Federation of Canada (SSFC), and is outlined in their publication Ethical Decision Making for Practicing Social Scientists (Cannie Stark-Adamec and Jean Pettifor, 1995):

a) Dilemmas may result from conflict between the interests of different parties.... Can all interests be served, or must priorities be set and choices be made?
b) Dilemmas may arise from conflict between principles...
c) Dilemmas may arise from the sheer complexity of competing parties and pressures.
d) Dilemmas may arise from lack of awareness of the probable consequences of some behaviour, or lack of foresight, or lack of knowledge of ethical principles.

To some extent the reburial conflict involves all of the above sources of dilemma. The SSFC advocates nine steps towards the resolution of such dilemmas, the first five of which are:

1. Identify the ethically-relevant issues, principles, standards, rules and practices.
2. Identify the different parties affected by your decision and their special characteristics and interests.
3. Develop all the alternative courses of action.
4. Analyze the likely short-term, ongoing, and long-term risks, benefits, and consequences of each course of action on the different persons who may be affected by your decision.
5. Consider how any personal values, biases, beliefs, or self-interest may influence your decision ~ either positively or negatively.

In considering the first point, an ethically-relevant question might be: can human remains be treated as "stuff"? Can a person or group "own" human remains? The answer from a Western perspective is clearly yes. Drawers and cabinets are filled with such remains. This is clear not only from an atheistic perspective; even Western spirituality regards the dead body as something which is left behind in favour of some sort of spiritual body. The remains of the dead are to be treated with respect in the short term at least, there is often some ceremony, and survivors may have some attachment to the place where such remains are laid to rest, but all of this is of limited duration and primarily for the benefit of the bereaved. The soul has gone elsewhere.

From the Indian perspective the only owner who has clear title to the remains of the dead is Mother Nature. The remains are not static, but play a dynamic role in a spiritual-ecological process. They have their place, and it is not in the possession of any person or institution. Spirit is attached to the dead matter, and if it is in a drawer in an archaeology lab, then there also is the spirit trapped. Or as Chief Seattle summarized this difference, "To us the ashes of our ancestors are sacred and their final resting place is hallowed ground, while you wander far from the graves of your ancestors and, seemingly, without regret." (1854, in Turner, 1989)

Many archaeologists (including Meighan) feel that they have some sort of obligation to tell the story of the people of prehistory. They feel that the best way to do this is through some form of scientific method which draws cultural inferences from material remains.
Historical records are a good source, but may be suspect, and are stronger if they can be supported by archaeological evidence. Oral "history" is often seen as falling into the category of mythology, with reliability perceived as decreasing the farther back in time the related events occurred. If no relationship is recognized between these people of prehistory and contemporary Indians, then there is no obligation to contemporary Indians: they are unrelated, and contemporary Indian claims can be dismissed as irrelevant.

Indians, on the other hand, place a greater emphasis on oral history and tradition. Literacy is a recent acquisition for Indians, and they are aware that it is a European import which their ancestors did without from the beginning of time when they were created. They don't only understand their prehistory, they "know" it (Cecil Antone, in Hubert, 1989). The people of prehistory are regarded as ancestors and very strongly related. For them a keenly felt issue is that of respect. Hammil and Cruz (1989) asked of the O'Odham nation in Southern Arizona, "What message . . . would you send to a world organization of archaeologists?"

You tell them that we do not treat our bones with such disrespect. Those bones are our ancestors . . . and they are sacred. By disturbing the ancestors' graves and spirits, they have caused many problems and hard times for our people and this makes us very sad. You tell them that the bones of our ancestors must be returned. They are sacred and we do not treat our ancestors with such disrespect.

Yet the return of "the bones of the ancestors" seems to some archaeologists to violate what they consider an ethical obligation not only to the people of prehistory, but to living people today (including Indians), as well as to future generations (Goldstein and Kintigh, 1990). Destruction of data is analogous to the burning of libraries.

The second point in the SSFC document (Stark-Adamec and Pettifor, 1995), "Identify the different parties . . . special characteristics and interests," I believe has been covered adequately so far in the brief overview that this paper represents. The third point is "Develop all the alternative courses of action." These courses fall broadly into three categories which (viewed from the archaeological perspective) are: unconditional surrender, never surrender, and compromise.

I will begin by exploring the "unconditional surrender" approach, since there is a holdover from the first point which is best addressed in relation to it: the idea that archaeologists somehow have an inherent "right" to dig graves. This is a point which Anthony L. Klesert and Shirley Powell (1993) stress again and again:

Archaeologists have no intrinsic right to survey, excavate, or manipulate the material remains of the past, and their failure to understand this constraint is, we believe, the source of the current and continued contention between archaeologists and Native Americans.

It is a perilous delusion to ever believe that archaeologists have a natural "right" or overriding "mandate" to dig up anything at all (Goldstein and Kintigh, 1990; Meighan, 1984; Turner, 1986; White, 1991), much less when that act interferes with or is contrary to the religious and cultural beliefs and interests of those being studied or of their descendants (Adams).

We have no inherent right to dig or study human remains. Furthermore, our obligations, once we might be permitted to conduct such work, go well beyond the human tissue lying in our hands, to the entire living system it represents.

To Meighan's criticism that such constraints constitute an infringement of academic freedom (Meighan, 1986, in Klesert and Powell, 1993), they respond that

Academic freedom involves the freedom to think, to inquire, and to espouse diverse philosophies. It does not and should not include the freedom to act as one pleases. Actions (methods and techniques) are not covered under academic freedom, nor should they be.
Excavating, analyzing, studying details of indigenous cultures, and curating human remains are actions, not thoughts, and are therefore subject to ethical constraints.

However, this raises the question of the extent of religious freedom as well. Does it extend to dictating what archaeologists may or may not do? "Under the first amendment one is free to believe whatever one wishes but cannot compel the actions of others in accord with one's religious beliefs. Reburial is an 'action' which is forced upon archaeologists based on professed Indian religion." A great deal of how one interprets which rights are to be respected and which are being violated depends very much upon the beliefs the interpreter possesses prior to even considering the question. To some extent, then, higher principles are evoked to defend a priori positions. This problem should be recognized up front in order to avoid wasting time and energy self-righteously waving the banners of rights and freedoms, something to which both sides have equal access, with neither side being wrong. From a pragmatic perspective, archaeologists will dig whatever and wherever they want, according to opportunity and constraints. Indian interests threaten to impose constraints, and focus must be upon what their interest constitutes, how far it is justified, and what measures need to be taken to see that it is insured only to that extent.

It is tempting to characterize the "never surrender" perspective with the following quote from a newsletter of the American Committee for Preservation of Archaeological Collections (ACPAC) (1986, quoted in Hubert, 1989):

Archaeologists, your profession is on the line. Now is the time to dig deep and help ACPAC with its expenses for legal fees. Next year or next month will be too late; we have to act immediately to fight this issue. This one will be resolved in court, not by the press.
We will be able to cross-examine Indians on their tribal affinities, religion, and connection to the archaeological remains they seek to destroy. We will be able to challenge anti-science laws based on race and religion. We can make a strong case, but it takes money. Send some!

Clearly, this is from the extreme edge of this perspective. However, archaeologists in this camp (in any camp) have legitimate concerns about loss of data that has been collected, and potentially serious impairment of their ability to collect data and to interpret North American prehistory in the future.

The most promising approach seems to be compromise, though it is fraught with problems. In compromise neither side gets completely what they want, and furthermore, to even begin an honest attempt at compromise, both parties must be prepared to recognize, at least to some reasonable extent, the legitimacy of the other. What is the basis upon which the parties can arrive at this acknowledgment? For archaeologists a critical factor in dealing with any Indian group is the issue of relationship. What they offer the Indians as a basis for their own legitimacy is simply the value of archaeology as a tool with which to understand prehistory.

We concur with the Society for American Archaeology's (SAA) position that the basis of legitimacy in the Native case is "relationship," and the basis of legitimacy for the scientific case is scientific value. To the extent that we can achieve agreement, in the abstract here, we believe that we can and should compromise. On the other hand, to the extent that some Native Americans or some archaeologists wish to have their belief systems dominate, we must strive to prevent it. (Goldstein and Kintigh, 1990)

Goldstein and Kintigh are well advised to use the term "in the abstract".
They do not consider pan-Indian claims any more legitimate than the "claims of a specific tribe where the archaeological and historical evidence clearly indicates that there is no relationship..." However, even if archaeologists were prepared to accept pan-Indian claims, there would still be concern for the legitimacy of the representation. If just any old Indian person were to show up at the Archaeology Department office of a major university and demand that his sack be filled with remains for reburial, he would likely not be successful. Meighan asks "whether a majority of living persons of Indian descent actually favor reburial or the continued preservation, display, and study of Indian remains and artifacts..." Hammil and Cruz ensure that the credentials of their association, American Indians Against Desecration (AIAD), and its objectives are known (1989):

[AIAD] is a project of the International Indian Treaty Council which was formed ... in 1974 with delegates representing some 97 Indian tribes and Nations from across North and South America. We hold non-governmental status in the United Nations.

They estimate that half a million bodies are stored in government financed institutions in the United States and that a half million more are stored outside the United States.

It is AIAD's objective and intent to ensure that all Indian remains and sacred objects buried therewith are returned ... Anything less is unacceptable, and to ensure our objective's success, we are training our children and grandchildren in locating and securing the return of our ancestors and sacred items.

Though it would be comforting for archaeologists trying to justify a hard line to characterize Indians seeking reburial of remains as unaffiliated fanatics or political opportunists, it seems in actuality that there are legitimate groups with claims which should be considered, and that from the archaeological side there is a basis for compromise.
Can Indians regard archaeologists as legitimate enough to try to work at a compromise with?

As a result of this confrontation and conflict of values, anthropology and archaeology are normally rejected by American Indian students as potential professions, and by Native Americans generally as irrelevant (or even pernicious) fields of study. Klesert and Holt (1985, 1990), in an analysis of questionnaires sent to nearly 300 American Indian tribes, support this contention; fewer than half the responding tribes considered archaeology to be of any benefit at all. (Anthony L. Klesert, 1992)

Meighan provides what is, in his opinion, an example of going too far in cooperation: his 1991 account of the West Virginia Department of Transportation's agreement with a "committee of Indians and non-Indians" whereby all human remains, chipping waste, food refuse, pollen samples, and even soil samples had to be given up within the year. The total cost to taxpayers was $1.8 million (U.S.). Further, "Indian activists were paid by the state to monitor the excavation and to censor 'objectionable' photographs or data appearing in the final report." (1994). Also upsetting to Meighan was the pan-Indian nature of the committee.

As if to emphasize their contempt for real ancestral relationships, the activists . . . included Indians from tribes as far away as northwestern Washington, as well as non-Indians. Meanwhile, the views of a local West Virginia tribe that favored preservation of the remains were ignored. (1994, italics mine)

While for many Indians the legitimacy of archaeologists based on the value of science is non-existent or severely circumscribed at best, there are some positive examples. The Navajo are taking an affirmative action approach with their Navajo Nation Archaeology Department (NNAD) which, if successful in its long-term objectives, will eliminate the question of archaeologists' work with Indians by developing Indian archaeologists.
Currently they have professional non-Indian archaeologists in the top positions, but they are aggressively training Navajos, in cooperation with Northern Arizona University, towards a day when the NNAD will be entirely staffed by Navajos. They regard archaeological expertise as very important in land claim cases, the protection of endangered graves, and tourism (the establishment of Tribal Parks emphasizing the prehistory of the Navajo people) (Klesert, 1992). The Navajo example, with their establishment of a complete archaeology department, is the best example of cooperation that I have come across in my review, but there are many others as well. Klesert points to the cooperation between the Kodiak Area Native Association and Bryn Mawr College at Kodiak Island (Pullar, 1990, in Klesert, 1992). Phil Hobler's work on the Northwest Coast has led to Indians developing a renewed interest in their cultural past, with the direct consequence of the reinstitution of potlatch festivals (Clark, 1995). A similar situation occurred for the Colorado River Indians, who were given songs preserved earlier by a museum:

... the elders started remembering. It started coming back to them. They started singing, and somehow, the tribe got involved in trying to recoup those old songs that had been lost. . . . The elders would not have picked up some of those songs if they had not gotten about them [sic] from the project. (Antone, in Hubert, 1989)

The benefits of cooperation do not extend only in one direction either. Hobler has pointed out that oral histories can provide clues and confirmation for archaeological research. He points to the legends of the two floods on the N.W. Coast, and how they relate well to geological data supporting two marine transgressions in prehistory (Hobler, 1995). Larry J. Zimmerman likewise points to collaborative work between archaeologists and the Pawnee Indians to summarize the archaeological record of their tribe for a court case involving repatriation of human remains.
At the same time, Pawnee tribal historian Roger Echo-Hawk gathered previously recorded oral history and other materials pertaining to Pawnee origins and history. Since the case, archaeologist Steve Holen has worked with Echo-Hawk to compare the archaeological record and the oral history . . . . Many Pawnee narratives are reflected in the archaeological record. In a sense, we lost the bones, but we gained something else in return ~ compromise in action.

Another point in favour of greater respect for the oral tradition is that it deals with meaning. Hobler (1995a) has pointed out that a difference between Old World archaeology and New World archaeology is that the former is more humanistic in its perspective, while the latter tends to be more 'scientific' (just the facts, ma'am). If there were a greater sense of meaning, a story unfolding, then it would be of greater interest to people in general, Indians and non-Indians alike.

The fourth point in ethical resolution of dilemmas outlined by the SSFC is to "Analyze the likely short-term, ongoing, and long-term risks, benefits, and consequences of each course of action on different persons who may be affected by your decision." Since this is a general overview of the reburial controversy, I will restrict myself to a summary of the effects on the different groups in question. The hard-line approach on the part of archaeologists may in the short term preserve collections, but the current atmosphere is tense and not conducive to getting work done, since the trust of Indians is required for optimum results, especially when seeking to work on Indian lands, and distrust leads to obstructionism. In the long term this could lead to court cases and interpretations of legislation which will be very much to archaeology's disadvantage. The danger in giving in totally to Indian demands is that items of archaeological value, even of the greatest value, will be lost (as in Meighan's reference to reburial of 10,500-year-old remains in Ohio.
(1992)). Furthermore, giving in completely to pan-Indian demands could lead to complicity of archaeology in the domination of the interests of local tribes by groups claiming to represent pan-Indian interests (Meighan's example of the West Virginia tribe which favoured preservation but was overruled (1994)). In the short term, compromise will require leaning more in the direction of Indians' interests, and the dangers outlined in the paragraph above may be unavoidable. As far as ongoing effects are concerned, there is not much advantage, since Indians will perceive themselves as getting no more than what they've been entitled to all along. In the long term, however, closer cooperation between archaeologists and Indians will lead to a greater familiarity with and respect for archaeology on the part of Indians, coupled with a greater desire to utilize it as a tool. Positive examples of cooperation will increase and attitudes towards archaeology will change for the better. Our understanding of North American prehistory will be richer when archaeological method works with meaning-rich traditional Indian perspectives to tell this ongoing story of which we are all now a part.

The fifth SSFC point is to "consider how any personal values, biases, beliefs, or self-interest may influence your decision ~ either positively or negatively." If we are aware of our Eurocentric perspective, then we can consider it and retain what is of value in it (we Indo-Europeans are not conquering the world by simple application of brute force alone), while at the same time being cautious of those elements which function to the detriment of ourselves and others. If we are not aware of this perspective, we will regard its products and processes as being those of some fundamental 'truth' which they are not.

The final three points of the SSFC are:

7. Act with a commitment to assume responsibility for the consequences of the action.
8. Evaluate the results of the course of action.
9.
Assume responsibility for consequences of your action, including correction of negative consequences, if any, or reengaging in the decision-making process if the ethical issue is not resolved.

Evaluation and responsibility are ongoing. The greater issue of which the reburial controversy is a part is the cooperation of Indians and archaeologists towards a deeper understanding of North American prehistory. The reburial controversy is a litmus test of the extent to which this is possible. The situation cannot be resolved, but it will evolve, and we can act in our own interests by respecting the interests of Indians. The best-case scenario is one in which we harvest and share the fruits of both perspectives ~ not once and for all, but season by season, always aware of the weather.

Finally, as a closing note, I would like to ask the question, very pertinent to reburial, whether we can close our minds totally to spiritual concerns:

I think it is foolish to pretend on the basis of a wholly materialistic science (which can only measure quantities) that there is nothing spiritual and nonmaterial in our universe. It is this attitude, as much as anything, that distinguishes Indians from the rest of American society and most certainly from the scientific endeavor. Whether there is sufficient proof of the Indian beliefs and experiences or not, it is a hazardous thing to assume without good cause that the Indians are lying or simply superstitious. (Vine Deloria, Jr., 1992)

Having opened this essay with a spiritual warning from the Bard, let me close with one from an Indian perspective:

The White Man will never be alone. Let him be just and deal kindly with my people, for the dead are not powerless. (Chief Seattle, 1854, in Turner, 1989)

References

Adams, Douglas (1982). Life, the Universe, and Everything. New York: Random House.
Clark, Mike (1995). In tutorial, Archaeology of North America course, Simon Fraser University.
Deloria, V., Jr. (1992). Indians, Archaeologists, and the Future. American Antiquity 57:595-598.
Goldstein, Lynne and Kintigh, Keith (1990). Ethics and the reburial controversy. American Antiquity 55(3).
Hammil, Jan and Robert Cruz (1989). Statement of American Indians Against Desecration before the World Archaeological Congress. In Layton, R. (ed.), Conflict in the Archaeology of Living Traditions. London: Unwin Hyman.
Hobler, Phil (1995). In conversation.
Hobler, Phil (1995a). In lecture, Archaeology of North America course, Simon Fraser University.
Hubert, Jane (1989). A proper place for the dead: a critical review of the 'reburial' issue. In Layton, R. (ed.), Conflict in the Archaeology of Living Traditions. London: Unwin Hyman.
Klesert, Anthony L. and Powell, Shirley (1993). A perspective on Ethics and the Reburial Controversy. American Antiquity.
Klesert, A.L. (1992). A view from Navajoland on the reconciliation of anthropologists and Native Americans. Human Organization.
McGuire, Randall H. (1989). The sanctity of the grave: White concepts and American Indian burials. In Layton, R. (ed.), Conflict in the Archaeology of Living Traditions. London: Unwin Hyman.
Meighan, C.W. (1992). Some Scholars' Views on Reburial. American Antiquity.
Meighan, C.W. (1994). Burying American Archaeology. Archaeology 47:6.
Meighan, C.W. (1996). Email to author, May 17th.
Shakespeare, William (1974). The Riverside Shakespeare. New York: Houghton Mifflin.
Stark-Adanec, Cannie and Jean Pettifor (1995, in press, February 20th draft). Ethical decision making for practising social scientists: Putting values into practice. Ottawa: Social Science Federation of Canada. This has since been published and is available from the Humanities and Social Sciences Federation of Canada (email@example.com).
Turner, Earnest (1989). The souls of my dead brothers. In Layton, R. (ed.), Conflict in the Archaeology of Living Traditions. London: Unwin Hyman.
Zimmerman, Larry J. (1994). Sharing Control of the Past. Archaeology.
Dr. Bruce Schulte, head of WKU’s Department of Biology, is featured in a BBC report about research on how elephants impact ecosystems in Africa. The study by Schulte and other scientists suggests that areas heavily damaged by elephants are home to more species of amphibians and reptiles than areas where the beasts are excluded. The findings have been published in the African Journal of Ecology. “Elephants, along with a number of other species, are considered to be ecological engineers because their activities modify the habitat in a way that affects many other species,” Schulte said. “They will do everything from digging with their front legs, pulling up grass to knocking down big trees. So they actually change the shape of the landscape.” Contact: Bruce Schulte, (270) 745-4856.
How One School District Helps Students Avoid Reading Failure

Brandon is a busy and capable kindergartner. One morning before Christmas, he kept right up with his classmates at Clear Lake Elementary as he counted the 67 days of school he's attended so far, recited a poem about the five little Santas, and made a construction-paper wreath. When his work was done, he settled on the floor to play with trucks and blocks. Brandon also knows quite a few letter sounds. "That one says /b/ and that one says /a/," he tells a visitor, pointing at the large letter cards hanging over the blackboard. As he speaks, he gets up from the rug. "I have to stand up to do it," he explains. Demonstrating his expertise in the alphabetic principle, it seems, takes Brandon's full concentration. "That one's /r/ and that one says /f/," he continues. "And that one is /p/. And /s/," he announces proudly with a prolonged, snakelike hiss. Brandon is on target to become a reader. But if he'd been in school seven years ago, he might well have been on track for special education instead. That's because he started kindergarten showing clear signs of reading difficulties. An assessment found that he was having trouble with such tasks as identifying letters and recognizing or reproducing the initial sounds in spoken words. Most telling, he was making little or no progress after a few weeks in school. The school's old approach wasn't geared to dealing with reading problems quickly and systematically. A learning disability label and a referral to special education might have been the outcome for this bright boy. But luckily for Brandon, an innovative approach adopted by the Bethel School District in Eugene, Ore., several years ago rapidly intervened with strategies tailored to his needs. The results district-wide have been stunning.
Today, only two percent of kids leave first grade as nonreaders*—phenomenal for any district, and especially so given the low socioeconomic status and high mobility rates of Bethel students. Before the initiative, the numbers were discouraging. In those days, 15 percent of kids left first grade unable to read. Second-grade special education referral rates were soaring—hitting 17 percent at one school in 1996–97. Worried, the district began analyzing its approach to reading. Recalls Carl Cole, special services director, "We were concerned about the high number of kids identified as learning disabled, and when you're talking about kids who are learning disabled, you're almost exclusively talking about kids with reading disabilities." Looking into the matter, the district found that the problem was not with the assessments and identifications of the referred students. They were accurate. But assessments of kids were not tied to what was happening in the classroom instructionally. Sometimes the evaluation team referred kids to special education to make sure students would get instruction of a kind not available in the regular classroom. "When it was discovered that kids were discrepant readers, we didn't use that information to say, ‘What are we doing instructionally that's causing this?'" Cole recalls. What they were doing instructionally was, as in many districts across the country, "a recipe for disaster," says Cole, particularly for a student population in this low-income community where transient hotels and homeless shelters are plentiful. Because the district had a site-based approach—allowing each school to choose its own reading program—there was no consistency from school-to-school, grade-to-grade, room-to-room. Different textbooks were in use across schools, within schools, and even within grade levels at the same school. Also, the district's half-day kindergarten was mainly a social-readiness program, not an instructional program. 
Had Brandon entered a Bethel kindergarten back in the old days, he would not have been tested and monitored regularly on indicators of progress toward reading. His exposure to letters and letter sounds would have been incidental, not direct. If he didn't seem to be catching on—if, for instance, he had nothing to contribute when his teacher asked the class to brainstorm for words that start with a b—his teacher would have concluded that he was just "not ready for reading." District administrators became convinced that most kids identified as learning disabled are actually "instructionally disabled," meaning they hadn't received the instruction appropriate for their needs. So they set out to build a reading program that would be effective for all students. They joined forces with University of Oregon's Institute for Development of Educational Achievement, directed by nationally known reading researchers Drs. Edward Kame'enui and Deborah Simmons. A four-year, $700,000 grant from the U.S. Office of Special Education Programs was committed to the development, implementation, and evaluation of Bethel's reading initiative. "The amount of support we had was phenomenal," says Cole. Besides bringing in the expertise of Kame'enui and Simmons, the grant paid for staff development and a new position—reading coordinator. Today, Bethel's approach to reading is more than an instructional model—it's also a prevention model, designed to head off many learning disabilities at the pass. The model includes:

• Measurable district goals for each grade level;
• Regular and frequent assessment and monitoring;
• Research-based reading curricula that involve direct, explicit, and systematic instruction;
• Protected time for reading instruction;
• Instruction in small groups at each child's skill level;
• Leadership role for principals; and,
• Training for all teachers and educational assistants in using the curricula and assessment measures.
* * *

Research shows that the "wait-and-see" attitude toward reading problems—common at many schools—is a mistake. Instead, Bethel takes an "as-early-as-possible" approach. In the second week of school, a building assessment team (typically, the Title I and special education teachers, plus educational assistants) tests kindergartners for initial-sound fluency and letter-naming fluency using a set of indicators and benchmarks developed at the University of Oregon. The DIBELS (Dynamic Indicators of Basic Early Literacy Skills), each of which takes about three minutes per student to administer, are reliable predictors of later reading performance, according to research findings. Based on these assessments, as well as subsequent teacher observations, students are placed in small groups in one of three categories: "benchmark," which means on track to meet district goals and ultimately state standards; "strategic," meaning progressing but behind; and "intensive," meaning at risk of failing to meet goals. By the beginning of October, the at-risk kindergartners are getting an extra 30 minutes per day of reading instruction. They also get progress monitoring with DIBELS twice a month—twice as often as their classmates. The extra instruction is not a pullout but an add-on. At Clear Lake, the additional time is sandwiched between morning and afternoon kindergarten. A van collects and delivers the afternoon extended-day kids early, and takes the morning group home half an hour later than their classmates. The curriculum for this extended kindergarten (playfully named the "Reading Raccoons") is Early Reading Intervention (ERI), developed by Kame'enui and Simmons and field-tested in the Bethel district before being published recently by Scott Foresman. During the half-hour lesson, the instructors—teachers and educational assistants—move almost seamlessly from one activity to the next, hardly wasting a breath.
Speaking smoothly and sometimes rhythmically, they deal out and sweep up manipulatives such as letter tiles, erasable white boards, alphabet and picture cards, tracing cards, game boards, pencils, and paper. As they do, they model and test children on very specific phonological skills, for instance, the ability to isolate particular initial and final sounds. Teacher Jane Sterett's group of five Reading Raccoons is all attention as she passes out yellow plastic letter tiles—clink, clink—p, t, s, m, and l to each child. In front of each child is a laminated strip printed with a row of three squares. The teacher holds up a picture card. "This is cat," she says, then asks, "What is this?" "Cat," they chorus. Then, following her instructions, the students move their index fingers along the strip, pointing to each square as Sterett slowly says each sound: /k/, /aaa/, /t/. "Where is /t/?" she asks. The students point to the last square. "That's right, /t/ is the last sound in cat. Now find the letter for the sound /t/ and put it in the last square." The plastic tiles clink as each child finds the "t" and places it on the strip. Each daily lesson offers many chances for children to respond individually and as a group. Though ERI is highly scripted, experienced teachers often fit in even more opportunities for responses, while still delivering the program as intended, says district reading coordinator Rhonda Wolter. The 126 ERI lessons take students along a skills continuum—from learning letters and sounds to segmenting and blending phonemes in sequence to reading words and, finally, to reading sentences and storybooks. Each lesson includes writing and spelling activities as well as activities for phonological awareness and alphabetic understanding. Another research-based curriculum, Open Court, is the core reading program at Clear Lake and most of the district's seven elementary schools, where it is used for daily class instruction, K–3. 
Each day, following the whole class instruction, students break into small groups. In those groups, which last about 30 minutes, three educational assistants join the teachers to provide instruction geared to the kids' skill levels. The "strategic" group (progressing but behind) gets ERI. The "benchmark" group (readers who are on track) might read decodable or leveled books. And the "intensive" group gets a "double dose" of reading with different materials, a reinforcement of ERI material they have already encountered in Reading Raccoons. The extra instruction for at-risk readers, as well as the small daily groups for all students, continues through the primary grades. "Part of what has been really successful with our model is that for kids who need interventions like this, we always try to make it in addition to the regular program," says curriculum director Drew Braun. "In the past, it was 'instead of.' For example, when you broke into reading groups, if you were Title I, you went to Title I. Now Title I and other services are a second dose for those kids—not instead of—because kids are not going to get caught up unless we give them extra." Another "extra" is the district's five-week summer school for students who are not meeting benchmarks or who are in danger of losing ground over the break. "They're kids that we're not sure how much support they're going to get over the summer, whether anybody's going to get them to the library, so we give them the opportunity to continue practicing their skills," says Wolter.

* * *

The district's commitment to reading is paying off. For children who have been in the Bethel reading program since kindergarten, second-grade special education referrals are now between four and six percent, even though students are actually entering school with lower prereading skills than before.
And, despite the fact that the proportion of children eligible for free or reduced-price lunch has increased in recent years from 37 to 48 percent, the proportion of third-graders meeting state standards in reading has also increased—from 79 percent in the 1998–99 school year to 92 percent in the 2003–04 school year. Results like these are just what Bethel's educators were hoping for when they began using DIBELS in 1998–99 and ERI in 1999–2000. "I think one kind of kid we catch is a kid who has trouble paying attention," says Wolter. "We have a lot of those kinds of kids. In a big group, they start losing out on what's going on. By doing our small groups, we've been able to capture those kids, keep them in a structured setting, and work with them." Some kids, despite the research-based core classroom curricula, the twice-monthly progress monitoring assessment, and the early and extra intervention, still don't make progress. In that case, says Wolter, "a whole series of checks" happens. "Has the student been absent a lot, does the student have health problems, has their vision been checked, their hearing?" Wolter says. "Maybe it's in the instruction. Maybe the instructor's been shaving off five minutes because the kids have been coming in late. Are they in a group too large? Is the program being used with fidelity?" Going down this checklist usually roots out the problem. Sometimes, it's found in surprising places—literally. Two years ago, a doctor turned up foreign objects—a bead and a twisted piece of aluminum foil—in the ears of a boy whose progress in extended kindergarten had stalled.

Flip of a Switch

The reading initiative has wrought changes on a lot of levels. At the district level, it broke down a dividing line between regular and special education. These days, Cole—the special education expert—might run a general curriculum meeting, while Braun—the generalist—might facilitate a special education meeting. "It's just a continuum," says Braun.
"We've taken a lot of the bags of tricks of special education and put them in the regular classroom because they work really well." Kindergarten teachers were resistant at first to the new instructional methods and assessments when the program began to phase in during the spring of 1999. "It was very, very difficult—I got my phone unlisted," Cole jokes. Clear Lake Principal Betsy Fernandez also recalls some tension. "Some of the teachers in my building were pretty outspoken in the questions they asked—'What about the pressure that's being put on kids academically? What about the whole developmental approach to teaching kindergarten?'" she says. "They were tough questions. Now, kindergarten teachers are some of the most dedicated to the program, and the results they're getting are really good." The change, she says, came when teachers began seeing the hard data after the first half-year. "It was like the flip of a switch," she remarks. Kindergarten teacher Elizabeth Radke notes that "having more intentional instruction and more direct instruction to their levels was helping" the strugglers. The other thing that changed kindergarten teachers' minds, besides the data, was the depth of the district's investment in the program. "We couldn't do it without all the support," says kindergarten teacher Linda Tindal. "It wouldn't work if I had to try to run four reading groups by myself. But our district is very committed to it, and it's wonderful." With students entering first grade more prepared, they catch on more quickly. To accommodate those stronger learners, teachers are making adjustments. Reading coordinator Wolter had to scramble for appropriate reading materials when the first wave of better-prepared students hit first grade. "Most of the kids come in, if they've been here, knowing the majority of their letter names and sounds," says first-grade teacher Vivian Ewing. "They're really ready to take off with the reading.
It's amazing to see, because it used to be that out of a class this size, a third of them knew all the letter names and sounds, a third of them knew about 10, and a third of them hadn't had any experience or they'd had experience but it wasn't consistent enough." Wolter has seen the same effects. "Even Title I classrooms don't have as many kids in them as they used to," she says. "And the kids are doing higher skills"—not the typical Title I work. Another outcome, she says, is that many Title I first-graders are new kids moving in who haven't experienced an academic kindergarten. "So," she says, "we need to start all over with them." A major impact of the program has been in special education classrooms. There are many fewer kids in special education, and those who are there have much more severe, hard-to-remedy reading difficulties. "They're really challenging," says Clear Lake's special education teacher Linda Duke. Still, Duke likes the new continuity between special and regular education. For instance, she uses the same DIBELS system of progress monitoring that the regular teachers use, just more frequently. Sometimes teachers are using the same direct instruction programs with their low readers as she does in the resource room. And when she mainstreams a child, various interventions, such as an oral reading fluency lab, are available in the regular program, allowing the child to keep working on key reading skills. "The teachers are working together and the whole system is so fluid that we can move kids in and out of programs," she says. Meanwhile, to determine how best to help the kids who are not responding adequately to the reading interventions, the district is involved in federally funded research studies with the University of Oregon.

"I Can Do It"

The most dramatic changes in Bethel are in student performance. The statistics tell part of the story.
Compare, for example, the first-grade oral reading fluency scores† of kids who move to the district in the fall of first grade with scores of kids who enter in the fall of kindergarten. At the beginning of first grade, there's a significant difference between the groups, says Braun. By spring, the new kids have not caught up. They are still 10 words behind in oral reading fluency. For kids who enter the district in second grade, the end-of-year difference is 22 words per minute. "Kids who have been here are reading 25 percent faster than kids who came in at the beginning of second grade," Braun reports. Late-entering students, in fact, are Bethel's next challenge, particularly with its high mobility rate. Between the beginning of kindergarten and the beginning of first grade, the district loses about 22 percent of its original kindergarten class and gains about 20 percent in new students in first grade.

Scores and statistics, however, don't tell the whole story. Changes have shown up, too, in student behavior. "Previously, kids were starting to misbehave because they were having difficulty with skills," says Wolter. "By putting them in a small group, by getting them right where their skill level is, we alleviate some of those problems. They start feeling good about themselves, and they don't have to act out." Clearly, Brandon is one child who feels confident in his abilities. He likes to tell about the letters he's learned and show off how fast he can spell his name. "There's a lot of stuff I do in Reading Raccoons," he says. "We do Writer's Warm-Up, and that's hard. But," he reports with pride, "I can still do it."

Catherine Paglin is a freelance journalist who frequently writes for the Northwest Regional Educational Laboratory (NWREL). This article is updated and reprinted with permission from "Double Dose: Bethel School District's intensive reading program adds beefed-up instruction for at-risk readers from Day One" in NWREL's Northwest Education Magazine, Vol.
8, No. 3.

*At the end of first grade, Bethel's nonreaders are defined as those students who can read less than 15 words per minute on the DIBELS measure of Oral Reading Fluency. According to the DIBELS Administration and Scoring Guide, students should attain a score of at least 40 words per minute at the end of first grade; those who obtain a score of 20 or below are considered at risk for reading difficulties, and those who score below 10 are in need of intensive instructional support.

†The DIBELS measure of Oral Reading Fluency--the number of words a student can read correctly in one minute--is a reliable predictor of how well a student will do on later comprehension tests, such as the Oregon State Assessment.

Avoiding the Devastating Downward Spiral: The Evidence That Early Intervention Prevents Reading Failure, by Joseph K. Torgesen
How One School District Helps Students Avoid Reading Failure, by Catherine Paglin
New Guide Helps Clarify Complex Clean Air Laws

Golden, Colo., February 20, 1998

The U.S. Department of Energy (DOE), in cooperation with the U.S. Environmental Protection Agency (EPA), has released A Guide to the Emissions Certification Procedures for Aftermarket Conversions. This new federal guide can help fleet managers, equipment manufacturers and installers of vehicle conversion kits navigate through emissions regulations for vehicles converted to alternative fuels (known as "aftermarket" conversions).

Tests conducted by DOE's National Renewable Energy Laboratory (NREL) show that vehicles converted from gasoline to alternative fuels do not necessarily emit fewer pollutants, and in some cases emissions may even increase. In response to these findings, the EPA issued an addendum to the legislation that once governed such conversions, Mobile Source Enforcement Memorandum 1A, Section 203 (a) of the Clean Air Act.

"We had a general sense that the old guidance under 1A wasn't comprehensive enough for evaluating either aftermarket parts or conversions," said Richard Ackerman, an EPA senior environmental engineer. "Compelling data from NREL supports conclusions that aftermarket conversions done under the old policy didn't provide adequate assurances of emissions compliance" (Propane Vehicle, December 1997, pg. 6). The new addendum clearly outlines three alternatives that build a "reasonable basis" for determining that an aftermarket conversion complies with the emissions regulations.

The Guide is a simple step-by-step reference manual that explains EPA certification requirements under the newest rules. It covers dedicated and dual-fuel conversions and includes agency contact information, certification process flow charts, reference tables and an extensive description of responsibilities and requirements such as information labels, warranties and record keeping.
Other features include answers to frequently asked questions, a glossary of terms and a table of emissions standards, broken down by vehicle weight. DOE, the EPA, the Colorado Department of Health and Environment, fleet managers and industry organizations contributed to and reviewed the Guide, which was produced by NREL. To obtain a copy, call the National Alternative Fuels Hotline at 1-800-423-1DOE or visit DOE's Alternative Fuels Data Center and click on "What's New."
Travelers may be surprised to find Iceland's environment so temperate given its location in such a northern region. The climate supports two primary vegetation zones: the tundra, a zone of treeless plains, and the taiga, a zone of coniferous forests. Prior to human settlement, woodlands and birch forests are estimated to have covered 25-40% of the land area, but by the early 1900s these forests had nearly been exhausted. Today, the number of trees has once again increased due to reforestation initiatives, but unfortunately much of the native forest ecosystem has been lost. One-fourth of the land is continuously covered by vegetation, which consists of forestland, bogs, moors, and grasslands.

Iceland's natural resources are limited in variety, but each resource in itself is considerable: fish, hydropower, geothermal power, and diatomite. Iceland's primary industries include fish processing, aluminum smelting, ferrosilicon production, geothermal power, and a growing travel and tourism sector. Fishing provides 70% of the country's export earnings and employs four percent of the workers. Over-fishing has long been an issue, though the strict catch quotas imposed in the late 1980s and early 1990s have helped somewhat to stabilize the fish population.

Despite its small size, Iceland also faces increasing issues with air pollution. Water pollution from fertilizer runoff is also a concern, as is inadequate wastewater treatment, but awareness is key and Iceland is taking proactive steps to address these environmental stresses.

Less than five percent of the country's workers are involved in agriculture, due to the small amount of arable land and short growing season. Icelanders make up for this deficit, however, by using geothermal energy to heat a large number of greenhouses. Agricultural products include potatoes and green vegetables. On an Iceland tour you are likely to also notice a variety of cattle farms.
Many Icelandic farmers raise sheep and dairy cattle, making the country nearly self-sufficient in dairy products.
This Pike County town is in the southwest corner of the state about 2.5 hours from Little Rock. The area is known for its tremendous geological diversity and is home to Crater of Diamonds State Park, the only diamond site in the nation open to the public. For a small fee, visitors can dig for diamonds and other gemstones at the park and keep what they find.

The first known inhabitants of the area were Native Americans including the Quapaw and Caddo tribes. The first white men seen by the Indians were European explorers—members of Hernando de Soto's party in the mid-1500s. Markers and a statue commemorate a violent clash the de Soto expedition had with the Tula Indians in Caddo Gap, about twenty-five miles north of town.

After Pike County was created in 1833, the area was given the name Zebulon, and it was basically still a wilderness. A few years later, the name was changed to Murfreesborough (and later Murfreesboro). The name supposedly came from settlers who named it after their Tennessee hometown. Agriculture was the major source of income for the town. Murfreesboro escaped the major battles of the Civil War, although it served as winter quarters for the Confederate army.

In 1906, John Wesley Huddleston found the first diamonds on his property, which led to Crater of Diamonds State Park. Many are amazed to learn there is a place in Arkansas where they can go and dig for diamonds, which were first mined in India over 2,700 years ago. The park is located above an eroded volcanic pipe. The crater, which became a state park in 1972, is a 37 1/2 acre open field that is plowed from time to time to bring diamonds and other gemstones to the surface. Noteworthy finds include the "Uncle Sam" (40.23 carats), the largest diamond ever unearthed in the nation; the "Amarillo Starlight" (16.37 carats), the largest diamond ever unearthed by a visitor; and the "Strawn-Wagner Diamond", which was certified a perfect grade by the American Gem Society.
It weighed 3.03 carats in the rough and 1.09 carats cut. A diamond this perfect, weighing over a carat after cutting, is estimated to occur about once in a billion diamonds. It's even rarer coming from a non-commercial diamond mine such as Crater. Some diamonds from the park are also on display at the Smithsonian's Museum of Natural History in Washington, DC.

The leading industry in Murfreesboro was the Anthony Lumber Company. John William Anthony moved his family to Murfreesboro in the late 1920s and started a lumber mill, which became one of the leading sawmills in the South. Anthony retired in the 1940s, leaving the mill to be managed by his sons.

In 1941, Congress authorized $3 million for the construction of Narrows Dam. The dam was finished by 1951, creating Lake Greeson, now a popular recreational destination. The areas around Narrows Dam and Lake Greeson have continued to grow and are now known for their solid hunting and fishing. The upper waters of the Little Missouri River are excellent for canoeists.

The discovery of diamonds is celebrated every June with the Diamond Festival. On a separate date, the park also celebrates John Huddleston Day, honoring the man who discovered the first diamonds in the area.
Revelation, 14 November 1835 - Source Note

Dictated by JS on 14 November 1835, this revelation was directed to the man who recorded it, Warren Parrish. Since joining the church in 1833, Parrish had become a trusted associate of JS and had already served informally as a clerk. Less than a year after his conversion, Parrish and his wife, Elizabeth Patten Parrish, marched with JS and approximately 225 other men, women, and children to Missouri on the Zion's Camp expedition. Sometime in late June or early July, Elizabeth Parrish died from cholera, as did approximately twelve other members of the expedition. Warren Parrish likely remained in Missouri until 12 September, when he and his brother-in-law, David W. Patten, left on a proselytizing mission that took them through Missouri, Kentucky, and Tennessee. Patten and Parrish, later joined by Wilford Woodruff, established several small branches in those states between October 1834 and July 1835, when Parrish returned to Kirtland. Shortly after his return, Parrish was named to the First Quorum of the Seventy.

Upon his return, Parrish fulfilled a number of clerical responsibilities during fall 1835 and winter 1836. In addition to periodically taking minutes, acting as a scribe, keeping a personal journal for JS, and copying material from the journal and other records into JS's 1834–1836 history, Parrish acted as a scribe as JS translated portions of the Egyptian papyri that had arrived in Kirtland sometime in late June. It is to these "ancient records" that the following revelation most likely refers.

Parkin, Max H. "Zion's Camp Cholera Victims Monument Dedication." Missouri Mormon Frontier Foundation Newsletter 15 (Fall 1997): 4–5.
Lyman, Amasa. Journals, 1832–1877. Amasa Lyman Collection, 1832–1877. CHL. MS 829, boxes 1–3.
"History of George Albert Smith," ca. 1857–1858. George Albert Smith, Papers, 1834–1877. CHL. MS 1322, box 1, fd. 1.
Bradley, James L. Zion's Camp 1834: Prelude to the Civil War. Logan, UT: By the author, 1990.
Burgess, Harrison. Autobiography, ca. 1883. Photocopy. CHL.
MS 893. Also available as "Sketch of a Well-Spent Life," in Labors in the Vineyard, Faith-Promoting Series 12 (Salt Lake City: Juvenile Instructor Office, 1884), 65–74.
Patten, David W. Journal, 1832–1834. CHL. MS 603.
Latter Day Saints' Messenger and Advocate. Kirtland, OH. Oct. 1834–Sept. 1837.
Woodruff, Wilford. Journals, 1833–1898. Wilford Woodruff, Journals and Papers, 1828–1898. CHL. MS 1352.
Partridge, Edward. Journal, Jan. 1835–July 1836. Edward Partridge, Papers, 1818–1839. CHL. MS 892, box 1, fd. 2.
Jessee, Dean C. "The Writing of Joseph Smith's History." BYU Studies 11 (Summer 1971): 439–473.
Hauglid, Brian M. A Textual History of the Book of Abraham: Manuscripts and Editions. Studies in the Book of Abraham, edited by John Gee and Brian M. Hauglid. Provo, UT: Neal A. Maxwell Institute for Religious Scholarship, Brigham Young University, 2010.
We celebrate Christmas in our home and we've been incorporating math into all of our traditions this season. I've shared several ideas on my social media over the last few weeks and I'd like to collate a few ideas in today's post.

Advent calendars are a popular way to build anticipation up to the big day. The calendars we have this year have the dates 1-24 randomly placed. Use your advent calendar to initiate a math conversation with your child:
- Number recognition – each day your child has to find the number that matches the date.
- Date recognition – discuss how each day gets a new number.
- Counting down – each time a day goes by, we get one day closer to Christmas. This idea reinforces counting backward.
- Arrays – the calendars pictured are arrays. Arrays provide many opportunities for math talk including repeated addition, subtraction, rows, columns, multiplication, and more. Want to learn more about arrays? Visit my recent post Math Talk from Arrays.

Making Hot Chocolate

We've reached cold temps here in Ontario and you know what that means… hot chocolate all around. Wondering how to incorporate some math talk into making that silky goodness?
- Procedural writing – write out the steps to making the hot chocolate.
- Following algorithms – follow your procedure by implementing each step.
- Measuring ingredients – measure your chocolate powder and hot water.
- Counting – count out your marshmallows.

Today is wrapping day and we've been deep in the boxes and wrapping paper for a couple hours now. What math conversations can you spark while wrapping your gifts?
- What is the shape of the box? How many faces? Edges? Vertices?
- Estimate how much wrapping paper is needed to cover the box. Everyone does this a little differently, but the goal is to get all the faces completely covered.
- How can you wrap the box most efficiently? No one wants to waste paper. Let's figure out how to do it using the least amount of wrapping paper.
- How long of a strip of tape will we need for the whole box? How many pieces should we use to keep the paper in place?
- Do you wrap ribbon around the box? What length of ribbon will be needed? What if we wrap it around twice?

If you celebrate this season, please share your math conversations! Keep spreading the math love <3
Pompeii's frozen victims on display

Pompeii and the nearby settlement of Herculaneum were consumed by a mixture of heat, falling pumice stone and ash. Mount Vesuvius, about 9km (5.5 miles) away, had exploded, sending a mass of volcanic debris high into the air, which then landed like a military bombardment on the citizens of the two cities below. Estimates of deaths in both places range between 10,000 and 25,000.

In Pompeii, the effects of the cataclysm were especially vivid, leaving as they did a city almost frozen at the moment of its expiration. So fast and vast was the tonnage of volcanic rock and dust dumped on its residents and livestock that many were killed on the spot. It is these emblematic "figures" of Pompeii that are now the subject of an extraordinary new exhibition. They are the skeletal remains of the victims, preserved under a thin veneer of plaster to give them their life form.

"Until now, these figures have been dispersed around Pompeii itself, or to other museums around the world," says Grete Stefani, the organiser of the exhibition at the nearby Antiquarium de Boscoreale, a five-minute drive from Pompeii. "They've never been seen together."

The process of unearthing the bones and preserving them in plaster has gone on since the 19th Century, when archaeologists really began the work of prizing out Pompeii's buried existence. One of the exhibits shows a figure, probably a man, clasping a step. Another shows a man with his arm over his mouth, most likely trying to hold back the choking dust. A third shows a family, their arms raised, as though trying to fend off the calamity that was engulfing them.

The figures are exactly how the archaeologists found them buried in the layers of ash. Once discovered, the cavity containing the skeleton is filled with a liquid plaster mixture. After 48 hours the plaster hardens and the life-like figure can be lifted out. Not even the animals had the speed to escape.
The exhibition includes a pig and, alongside it, a dog, its four legs contorted together to form one point and its mouth open. You can see a tooth and a collar, and even make out the lines of its fur. "The detail of the figures is remarkable," says Mrs Stefani. "They have been preserved at the very second of their death." On another figure you can make out the creases of a scarf they were wearing as they struggled to breathe. One of the saddest is the figure of a child.

The exhibition reflects the merciless, indiscriminate nature of the volcanic eruption. The authorities decided to mount the displays partly because of the ignorance surrounding the figures. "Many visitors to Pompeii thought they were sculptures, the work of artists," says Mrs Stefani. "But they are the remains of real people."

The work of preservation falls to Pompeii's workshop of experts. Set in a former villa in the city, the team prepare the plaster mixture. Too thin and it would not be strong enough to support the skeletal frame; too thick and it would obliterate the detail of the person or animal being covered. "It is a very delicate operation," says Stefania Giudice, one of the preservers working here. "The bones are very brittle, so when we pour in the plaster we have to be very careful, otherwise we might damage the remains and they would be lost to us forever."

A little more than 100 figures have been preserved in plaster, though not all are on show at the exhibition. That is out of a total of about 1,150 bodies that have been discovered in Pompeii. Some are not suitable to be covered as they have already been damaged, either by the debris of the volcano, or when they were unearthed. As a third of Pompeii has yet to be excavated, more human and animal remains could be found. Where possible these, too, will be treated with the plaster, removed and preserved. To preservers like Ms Giudice, it is more than just a job.
"It can be very moving handling these remains when we apply the plaster," she says. "Even though it happened 2,000 years ago, it could be a boy, a mother or a family. It's human archaeology, not just archaeology." The exhibition lasts until the end of the year. comments powered by Disqus - A New Target for Old Spies: Congress - Antigua and Barbuda Asks Harvard University for Slavery Reparations - Historian: Nixon DID contest the 1960 election - Killer took selfie after stabbing historian over rare ‘Wind in the Willows’ book - VW fires corporate historian who drew attention to wartime ties to Nazis - Historian Jeremy Kuzmarov calls on Obama to pardon Ethel Rosenberg - Garry Wills says there’s one human test we can use to decide who’s the better candidate: Trump or Clinton - Get to Know the Semifinalists for the National Book Award - Steven Runciman — historian, tease and professional enigma — is the subject of a biography - Historian Eric Foner: Trump is Logical Conclusion of What the GOP Has Been Doing for Decades
What are the methods of accessing files?
There are three ways to access a file in a computer system:
- Sequential access – the simplest access method; records are processed in order, one after the other.
- Direct access – also known as the relative access method; records can be read or written in any order.
- Indexed sequential method – an index is built for the file, and the index is searched to locate the desired record directly.

Which security tools do you use?
Cyber security tools and techniques designed to help protect your systems include:
- Access control
- Anti-malware software
- Anomaly detection
- Application security
- Data loss prevention (DLP)
- Email security
- Endpoint security
- Firewalls

What are the types of security testing?
- Vulnerability scanning
- Security scanning
- Penetration testing
- Security audit/review
- Ethical hacking
- Risk assessment
- Posture assessment
- Authentication testing

What is file integrity and security?
File integrity monitoring (FIM) refers to an IT security process and technology that tests and checks operating system (OS), database, and application software files to determine whether or not they have been tampered with or corrupted.
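The FIM idea described above can be sketched with a cryptographic hash: record a baseline digest for each file, then recompute and compare later; any mismatch means the file changed. A minimal illustration (the file name `example.cfg` is made up for the demo, and a real FIM tool would also watch permissions and timestamps):

```python
import hashlib
from pathlib import Path

def file_digest(path):
    """Return the SHA-256 hex digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def check_integrity(baseline):
    """Compare current digests against a saved baseline {path: digest} dict."""
    changed = []
    for path, expected in baseline.items():
        if file_digest(path) != expected:
            changed.append(path)
    return changed

# Build a baseline, simulate tampering, then detect the change.
sample = Path("example.cfg")           # hypothetical file for the demo
sample.write_text("setting = 1\n")
baseline = {str(sample): file_digest(sample)}
sample.write_text("setting = 2\n")     # simulated tampering
print(check_integrity(baseline))       # the altered file is reported
```

In practice the baseline would be stored somewhere an attacker cannot rewrite it, which is why commercial FIM products keep digests off-host.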
What is security?
Security, in information technology (IT), is the defense of digital information and IT assets against internal and external, malicious and accidental threats. This defense includes detection, prevention and response to threats through the use of security policies, software tools and IT services.

What are the two types of data access methods?
Two fundamental types of data access exist:
- sequential access (as in magnetic tape, for example)
- random access (as in indexed media)

What is the primary method of protecting sensitive data?
Encrypting your computer. Whole-disk encryption software ensures that no unauthorized user may access the device, read data, or use the device as a tool to enter AU's network. If a device gets into unauthorized hands, the data is securely protected, even if the hard disk is removed.

What are security methods?
Methods are the procedures and written guides that define how security is implemented. Technologies are the mechanisms used to implement security. Mechanisms — not methods — include firewalls, network intrusion-detection systems, antivirus software, VPN systems, etc.

What tools do hackers use?
Commonly cited ethical hacking tools include:
- Acunetix
- Nmap
- Metasploit
- Wireshark
- Nikto
- John the Ripper
- Kismet
- SQLninja

What are security tools in e-business?
There are tools for cyber defense and secure communication. Included are encryption applications, security testers, secure communication tools, password apps, online security platforms, an open threat exchange, and a cyber security planner for small businesses.

What are the memory access methods?
Storage access methods include:
- BDAM – basic direct access method
- BSAM – basic sequential access method
- QSAM – queued sequential access method
- BPAM – basic partitioned access method
- ISAM – indexed sequential access method
- VSAM – virtual storage access method, introduced with OS/VS

What are the various methods used in protecting software and its data?
Various tools and technologies used to help protect against or monitor intrusion include authentication tools, firewalls, intrusion detection systems, and antivirus and encryption software.

Which software is used in cyber security?
A comparison of top cyber security software (ratings as of November 2020):
- SolarWinds Security Event Manager – 5 stars – best for small to large businesses
- Intruder – 5 stars – best for small to large businesses
- Bitdefender Total Security – 5 stars – best for small to large businesses
- Malwarebytes – 4.5 stars – best for small to large businesses and personal use

What is file integrity?
As one member of the CIA triad, file integrity refers to the processes and implementations aiming to protect data from unauthorized changes such as cyber attacks. A file's integrity tells whether the file has been altered by unauthorized users after being created, while being stored or retrieved.

How do I keep files safe?
Below are five ways to keep your data safe:
- Regularly back up your files. If a virus infects your operating system, it's often necessary to completely wipe your computer and reinstall programs.
- Use an external hard drive.
- Store files in the cloud.
- Control access to your files.
- Encrypt your hard drive.

What is the data protection process?
Data protection is the process of safeguarding important information from corruption, compromise or loss. The importance of data protection increases as the amount of data created and stored continues to grow at unprecedented rates.

What is file security?
File security is a feature of your file system that controls which users can access which files, and places limitations on what users can do to various files on your computer.

What are security tools?
Network security tools can be either software- or hardware-based and help security teams protect their organization's networks, critical infrastructure, and sensitive data from attacks. These include tools such as firewalls, intrusion detection systems and network-based antivirus programs.

What are network security techniques?
Network security is a broad term that covers a multitude of technologies, devices and processes. In its simplest terms, it is a set of rules and configurations designed to protect the integrity, confidentiality and accessibility of computer networks and data using both software and hardware technologies.

What is a file? Explain the types of files.
There are three basic types of files:
- regular – stores data (text, binary, and executable)
- directory – contains information used to access other files
- special – represents a device or pipe rather than ordinary data

What is the relationship between hardware and software?
Essentially, computer software controls computer hardware. These two components are complementary and cannot act independently of one another. In order for a computer to effectively manipulate data and produce useful output, its hardware and software must work together.
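On Unix-like systems, the idea that the file system "controls which users can access which files" is visible directly in a file's permission bits. A small sketch (the file name `secret.txt` is hypothetical; on Windows, `os.chmod` only toggles the read-only flag, so this is Unix-specific):

```python
import os
import stat

# Create a demo file and restrict it to the owner only.
path = "secret.txt"
with open(path, "w") as f:
    f.write("sensitive data\n")

# 0o600: the owner may read and write; group and others get no access.
os.chmod(path, stat.S_IRUSR | stat.S_IWUSR)

mode = os.stat(path).st_mode
print(oct(mode & 0o777))          # on a Unix system: 0o600
print(bool(mode & stat.S_IROTH))  # can "others" read it? False
```

Limiting permissions like this is the simplest form of access control; encryption (discussed above) protects the data even when these OS-level checks are bypassed, such as when a disk is removed from the machine.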
About this book

Deploy deep learning applications into production across multiple platforms. You will work on computer vision applications that use the convolutional neural network (CNN) deep learning model and Python. This book starts by explaining the traditional machine-learning pipeline, where you will analyze an image dataset. Along the way you will cover artificial neural networks (ANNs), building one from scratch in Python, before optimizing it using genetic algorithms.

To automate the process, the book highlights the limitations of traditional hand-crafted features for computer vision and explains why the CNN deep-learning model is the state-of-the-art solution. CNNs are discussed from scratch to demonstrate how they are different from, and more efficient than, fully connected networks. You will implement a CNN in Python to give you a full understanding of the model.

After consolidating the basics, you will use TensorFlow to build a practical image-recognition application and make the pre-trained models accessible over the Internet using Flask. Using Kivy and NumPy, you will create cross-platform data science applications with low overheads. This book will help you apply deep learning and computer vision concepts from scratch, step-by-step from conception to production.

- Understand how ANNs and CNNs work
- Create computer vision applications and CNNs from scratch using Python
- Follow a deep learning project from conception to production using TensorFlow
- Use NumPy with Kivy to build cross-platform data science applications

Keywords: deep learning, computer vision, Python, machine learning, neural network, convolutional neural network, image processing, TensorFlow

- DOI https://doi.org/10.1007/978-1-4842-4167-7
- Copyright Information Ahmed Fawzy Gad 2018
- Publisher Name Apress, Berkeley, CA
- eBook Packages Professional and Applied Computing
- Print ISBN 978-1-4842-4166-0
- Online ISBN 978-1-4842-4167-7
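To give a flavor of the "ANN from scratch" material the book describes: a tiny fully connected network is just matrix multiplications followed by nonlinearities. The sketch below is not taken from the book; the layer sizes and random weights are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    """Logistic activation, squashing values into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-x))

# A 4-input, 3-hidden, 2-output network: one weight matrix and bias per layer.
W1, b1 = rng.normal(size=(4, 3)), np.zeros(3)
W2, b2 = rng.normal(size=(3, 2)), np.zeros(2)

def forward(x):
    """One forward pass through the two fully connected layers."""
    hidden = sigmoid(x @ W1 + b1)
    return sigmoid(hidden @ W2 + b2)

out = forward(rng.normal(size=(5, 4)))  # a batch of 5 samples
print(out.shape)  # (5, 2): one 2-value output per sample
```

Training such a network means adjusting `W1`, `W2`, `b1`, `b2` to reduce a loss; the book's genetic-algorithm chapter does this by evolving the weights rather than by backpropagation.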
This exciting resource presents students and teachers with high-interest activities that will arouse curiosity and extend thinking. - Students will enjoy playing mathematical tricks on their peers, and teachers can begin class with a dazzling demonstration that will intrigue students. - Older students can be encouraged to discuss algebraic ideas and concepts behind the "tricks" while younger students can simply be captivated by the magic of maths. - Detailed explanations and answers provided. - Ideal for Ages 8-14
<urn:uuid:d308e9a0-f581-4a08-a331-2885f273c806>
CC-MAIN-2020-05
https://www.edresources.com.au/motivational-maths
s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250590107.3/warc/CC-MAIN-20200117180950-20200117204950-00302.warc.gz
en
0.909026
126
3
3
A sundress is an informal sleeveless dress of any shape in a lightweight fabric, for summer wear. The dress is intended to be worn without a layering top, and the design must therefore cut a balance between modesty and allowing sun exposure. Wednesday, May 27, 2009 Wednesday, May 13, 2009 Men's caftans often had gores added, causing the caftan to flare at the bottom, while women's garments were more closely fitted. Women were more likely to add sashes or belts. A sultan and his courtiers might layer two or three caftans with varying length sleeves for ceremonial functions. An inner short-sleeved caftan was usually secured with an embroidered sash or jeweled belt, while the outer caftan could have slits at the shoulder through which the wearer's arms were thrust to display the sleeves (sometimes with detachable expansions) of the inner caftan to show off the contrasting fabrics of the garments. Historically, the kaftan is a man's cotton or silk cloak buttoned down the front, with full sleeves, reaching to the ankles. It was the traditional wear in the Eastern Mediterranean. The caftan may also be worn by women in the US, where it is typically called a muumuu. It is, again, usually not belted and may come in a variety of prints. Some are Hawaiian inspired, and others may have prints that are reminiscent of African designs. The caftan in the US may be cotton, cotton/polyester, or cotton gauze. It is usually worn as a house dress, a nightgown or a swimsuit cover-up, and can be extremely comfortable.
<urn:uuid:2bfc5c36-f725-474a-9cc7-a00ffc7e12f2>
CC-MAIN-2020-16
https://www.maya-india.com/2009/05/
s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371813538.73/warc/CC-MAIN-20200408104113-20200408134613-00209.warc.gz
en
0.968087
348
2.96875
3
Here’s a case where quality means quantity: people who lack joy and don’t consider life worth living are likely to experience significantly shortened lifespans. These early deaths can come from suicide, but a generally unhappy outlook also increases the chance of death from heart disease and stroke. So says a study in Japan. As much as studies focus on nutrition and supplements for improving longevity, the old proverb about happiness creating good health appears to be true. Love and joy are where it’s at. For seven years, over 43,000 men and women were studied and asked if they had joy in their lives. Those who said no were less likely to be married, employed, educated and were in worse health and chronic pain. These folks were more likely to die sooner than later. This begs the question: did the lack of joy cause the poor health and bad social life, or was it the other way around? One thing is certain: life is better with joy. Exuding happiness attracts great people and opportunities into your life, thereby making life more wonderful. But experiencing a frequent state of joy shouldn’t feel like a chore; the whole point is to increase pleasant thoughts and emotions. So, how to cultivate more joy in your life? Gratitude is a great place to start.
<urn:uuid:fe7e6cd6-2c90-4ff8-aa3f-f0913e35ffda>
CC-MAIN-2022-40
http://ecosalon.com/take_two_and_call_me_in_the_morning/
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334591.19/warc/CC-MAIN-20220925162915-20220925192915-00377.warc.gz
en
0.97382
278
2.703125
3
There are many different types of essay structure. Students may find some of these hard to comprehend and difficult to implement. Fortunately, essays can be categorized into four main groups. Further distinctions within these categories can be appreciated once an understanding of these main types is grasped. Knowing the different types of essay structures can help writers decide upon the most appropriate type for the topic and help organize the structure of the essay. Expository, Descriptive and Narrative Essays Expository essays explain how to do something. These essays might delineate the necessary steps required to complete a task or how to accomplish a specific activity. For example, essays which describe how to establish a new business or how to sew a dress are expository essays. Descriptive essays use details to paint a visual picture. For example, a descriptive essay might describe what you can expect to find at a beach resort. Narrative essays tell a story and are one of the less common types of essay. Argumentative or Persuasive Essay The argumentative or persuasive essay is one of the most common types of essay that students write. It is also one of the most difficult to write well. The argumentative essay requires a thesis that states a position and paragraphs that defend it. The aim is to sway the reader's opinion towards that position. A good argumentative essay will aim to persuade logically and thoroughly. The opposing side of the argument will be anticipated and refuted in the body of the essay. Compare and Contrast Essays Compare and contrast essays explore similarities and differences. For example, the similar and different features of two different cars, two characters in a novel or two hotels in a vacation resort might be explored. These essays may follow the point-by-point method or the block method. The block method means that all the features of the first item are described, followed by all the similar features of the second item. 
The differences would then be grouped together in the same way. The point-by-point method involves alternating similar and different features throughout the essay. Analysis or Cause and Effect Essays Cause and effect essays explore the root causes of situations. These essays attempt to answer the questions "why?" and "what is the result?" For example, if the topic of the essay was on people who drop out of high school, the essay would discuss all the possible reasons why students might drop out of school. These may include learning disabilities, behavior problems and low socioeconomic status. The causal relationship is further explored by probing into the results. In the example, this might mean exploring how the results of low socioeconomic status affect high school dropout rates. - Comstock Images/Comstock/Getty Images
<urn:uuid:5a10c9b0-b0a1-4cbd-9f8c-d0b97674530d>
CC-MAIN-2018-47
https://classroom.synonym.com/different-types-essay-structure-8432977.html
s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039742567.46/warc/CC-MAIN-20181115054518-20181115080518-00265.warc.gz
en
0.91877
544
3.859375
4
Look further into the capitalize meaning… If a company capitalizes its costs, then the charges do not show up on the income statement all at once. Instead, the expense shows up over time as a depreciation expense. The total amount can also be seen on the balance sheet accounts. This means that a company has the ability to spread the amount of its expenses over time, which smooths the company's net income over the life of the investment or asset, though it artificially inflates net income in the first year. For further explanation, look at the following capitalize example. For example, Beat Box Co. manufactures and assembles stereo systems. Recently, Beat Box has just leased a new piece of equipment for its operations. Instead of the company incurring the cost all at once it has decided to capitalize the cost over time. Thus, if the entire cost of the equipment was $1,000, and it depreciates over ten years, then the amount of expense incurred each year would be $100. Note that if the amount of income was $600 for each of the next ten years and the entire $1,000 had been expensed immediately, the first year would show a loss of $400. However, because the cost was capitalized, the company will instead show a profit of $500 each year for the next ten years.
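The smoothing effect in the Beat Box example is simple arithmetic and can be checked with a short script. This sketch uses the straight-line method and the figures from the example above ($1,000 cost, ten years, $600 annual income):

```python
def straight_line_depreciation(cost, years):
    # Spread the capitalized cost evenly over the asset's useful life.
    return [cost / years] * years

cost, years, annual_income = 1_000, 10, 600

# Expensing everything at once: the whole $1,000 hits year one.
expensed_profit_year1 = annual_income - cost

# Capitalizing: only the annual depreciation charge hits each year.
capitalized_profits = [annual_income - d
                       for d in straight_line_depreciation(cost, years)]

print(expensed_profit_year1)   # -400, the first-year loss if expensed
print(capitalized_profits[0])  # 500.0, the smoothed annual profit
```

Total expense is the same $1,000 either way; only its timing across the income statement differs.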
<urn:uuid:03b8fc5b-a0e7-4e9e-832f-e5aa2d801f5a>
CC-MAIN-2020-50
https://strategiccfo.com/capitalize/
s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141184123.9/warc/CC-MAIN-20201125183823-20201125213823-00163.warc.gz
en
0.953904
345
3.21875
3
In 1968, Alabama rhythm-and-blues artist Clarence Carter hit the Billboard charts with “Slip Away,” followed by another hit in 1970, “Patches.” A scholar of the same name who grew up in Jacksonville never achieved that sort of fame, but left a sizable contribution to American history. Clarence Edwin Carter, who was born in Jacksonville on Feb. 6, 1881, was a university professor, writer and the editor of the Territorial Papers of the United States, a massive effort that collected the official papers of 28 territories that later achieved statehood. The Territorial Papers remain standards in American historical study. After graduating from Illinois College in 1905, Carter earned an M.A. from Wisconsin in 1906 and received a fellowship at the University of Illinois, where he earned a Ph.D. in 1908. He began his teaching career that year at his alma mater before accepting a position as professor of history at Miami University in Oxford, Ohio, in 1910. He remained in Oxford for most of the next 28 years. Carter also held several summer lecturing positions around the nation, including at Ohio State (1912), Columbia (1913), Illinois (1915) and Iowa (1925). He also served a one-year stint as a professor of history at the University of Texas in 1922-23. In addition to his teaching acumen, Carter established himself as a foremost scholar at an early age. In 1908, he produced Great Britain and the Illinois Country 1763-1774, which won recognition from the American Historical Association. He also collaborated with another noted Illinois historian, Clarence W. Alvord, to compile and edit documents of Illinois history from 1763-69. Those works were published in several issues of the annual Collections of the Illinois State Historical Society. Another project was his editorship of The Correspondence of General Thomas Gage, a two-volume work that collected the documents of Gage, the commander of British forces in North America at the start of the American Revolution. 
The first volume was released in 1931, with the second volume following two years later. Carter also produced an array of magazine and journal articles, as well as scholarly reviews. However, Carter’s greatest accomplishment was the Territorial Papers, which comprise the documents relating to each U.S. territory before statehood. Carter was appointed to the editorship in 1931, spending ample time in the nation’s capital compiling and analyzing records. The information in each volume includes lists of officials from each territory and administrative documents such as treaties, policies, appointments, land use and creation of militias. A 1935 review declared “the need for such a publication is obvious, and has long been urged by the American Historical Association and by the historical agencies” of the 30 states covered in the project. A modern bookseller summarized the Territorial Papers as “document(ing) the expansion of the United States from 1787 to 1845.” In all, the Territorial Papers detail 28 volumes on states stretching from Missouri to New England and the Southeast. In 1950, the project was transferred to the National Archives, a year before Carter’s mandatory retirement at age 70. In 1937-38, he served as president of the Mississippi Valley Historical Association. He discussed his approach to editorship by expressing his adherence to “the principle of accuracy.” He stuck with obsolete forms of punctuation and abbreviation in his reproductions, and seldom used editorial brackets, saying “to follow with some literalness the writer’s style is to place it in the era in which the document was produced.” In 1977, one commentator noted that Carter’s editorial style may have reflected that of historian Frederick Jackson Turner, who wrote Carter in 1908 that “the satisfaction of the editor must chiefly lie in the joy of finding and making available material for use. 
There is little personal advantage.” Carter remained active with the Territorial Papers project into his later years and was working on the Wisconsin volume when he died following a brief illness on Sept. 11, 1961. Tom Emery is a freelance writer and historical researcher from Carlinville. He may be reached at 217-710-8392 or [email protected]
<urn:uuid:2411221f-0c96-41c2-896a-98acf12d60af>
CC-MAIN-2016-40
http://myjournalcourier.com/news/ap-wire/97572/city-native-was-prominent-historian
s3://commoncrawl/crawl-data/CC-MAIN-2016-40/segments/1474738660181.83/warc/CC-MAIN-20160924173740-00001-ip-10-143-35-109.ec2.internal.warc.gz
en
0.966818
878
2.59375
3
Has anyone seen the new commercial aimed at calming the bad press regarding high fructose corn syrup? "It's natural and it has the same calories as sugar," the woman reassures her boyfriend in a condescending tone. She's kind of like the pusher that hangs out at the fence of the school yard. "One time won't hurt you - just try it!" So what is HFCS? It's created by causing enzymatic changes in regular corn syrup. This is done by treating corn starch with alpha-amylase, an enzyme which breaks it down to a shorter chemical chain of sugars. A chemical called glucoamylase, which is created by adding a fermented fungus, is then added, and then the substance receives a treatment of chromatography, which separates the remaining components even further. Some HFCS has been shown to contain trace amounts of mercury from the processing. HFCS is becoming more commonly used than sugar because it is cheaper. It also masquerades under the names glucose-sucrose, isoglucose, maize syrup or glucose-fructose syrup. HFCS is being touted as a natural product. As our commercial's pusher says, "Silly, it's made from corn." True, it originally starts out as corn, but recently the CSPI (Center for Science in the Public Interest) has threatened lawsuits against companies referring to it as a natural ingredient. Their position is that the high level of processing the corn undergoes, the genetically modified enzyme that is added to separate the molecules of the corn, and the synthetic fixing agents used in this process, rules out the definition of "natural". Cadbury Schweppes voluntarily changed their labeling when threatened with this suit. Critics of the commercial use of HFCS point out that the low cost makes the high sugar content more easily available, contributing to obesity. Some studies have said that the higher content of fructose, instead of sucrose, is more likely to trigger insulin resistance. 
Animal studies have proven that HFCS suppresses the sensation of fullness, causing overconsumption. Overconsumption, in turn, caused the rats to suffer from fatty liver disease and Type II Diabetes. In reality, the studies that say HFCS is worse than sugar are not conclusive. HFCS is, however, similar to sugar, in that it should be avoided. It carries with it the same, if not necessarily worse, health risks as sugar. http://afuturesuccessstory.blogspot.com/2010/02/feb3-toxin-of-day-sugar.html Sweeten with honey, sucanat, agave nectar, and maple syrup, and use these in moderation!
<urn:uuid:75053953-50d0-4125-8473-25b42cd744b6>
CC-MAIN-2018-09
http://afuturesuccessstory.blogspot.com/2010/03/mar1-toxin-of-day-high-fructose-corn.html
s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891814124.25/warc/CC-MAIN-20180222140814-20180222160814-00395.warc.gz
en
0.953056
560
2.640625
3
XML is currently generating a great deal of interest as the universal language of electronic business. Much effort and expense has been spent explaining the benefits of XML technology, but not much attention has been given to answering practical questions such as "How much data is currently available in XML and where does it come from?" The XML data that is interesting to you is obviously dependent on your particular requirements, but it is possible to identify some general answers and point you to some tools that support the storage of XML. In brief, there's no shortage of XML data available on the Internet, and there are lots of ways to convert legacy data to XML relatively easily. The amount of data and number of support tools has increased very noticeably in the past year, and will surely grow exponentially in the years to come. In fact, most enterprises will probably soon find themselves overwhelmed by XML data that may come from all sorts of non-XML sources and generated by "middleware" components and applications, but have lasting value and will need to be persistently stored. As this scenario unfolds, many organizations will find it necessary to have a scalable, reliable database such as Software AG's Tamino, which uses XML and Internet standards to store, retrieve, and query all this data. Note that the companies and products noted here are intended to be representative of what is possible today, and not by any means an exhaustive list of what is available. XML on the Web or in messages Over the next year or two, more and more data that you will come across in the normal course of your business will be in XML format. - XHTML. This dialect of HTML in well-formed XML syntax is becoming fairly common on the Internet. For example, http://www.infoworld.com/ presents much of its content in XHTML. The sorts of tools that currently produce proprietary binary formatted data -- such as word processors, spreadsheets, data entry forms, etc. 
-- have already begun to be supplemented by equivalent products that produce XML. The biggest vendors, especially Microsoft, have shown a clear commitment to accelerate this trend by saving data in XML format. In the meantime, you can employ products - XMetaL or other word-processor-like applications that can be used by ordinary office workers without XML expertise to produce documents in XML - Tools are available that produce XML data from online forms that ordinary users can easily fill out. See the offerings from icomXpress - eNumerate is developing a spreadsheet-like application that will produce XML data in a format that can be displayed in browsers via XSL and graphed, plotted, etc. by a free browser plug-in. As all the companies that have jumped on the XML bandwagon actually implement XML support in their products, it will be increasingly common to be able to simply export data from existing tools in XML format. - MS Office 2000 exports specialized markup data in XML "islands" inside an HTML data format that is almost well-formed - ERP and other enterprise-level systems are increasingly supporting XML as an output format. See http://www.mysap.com/ for one - Software design tools such as Rational Rose are supporting the XMI XML format for exchange of UML diagrams, rules, etc. Finally, a number of specialized tools are being designed to easily convert data in conventional databases and flat files into XML. - One such tool is a Windows GUI stream editor that works in a similar manner to Unix sed, perl, grep, etc., converting the data to XML format and optionally generating a DTD to describe the result. - Dave Raggett's famous tidy program easily converts messy, non-standard HTML such as that found on the Web to well-formed XHTML. - upCast, from infinity loop, has both client-side and server-side tools which convert the RTF format supported by Microsoft and other word processor vendors into XML, using heuristics to recreate the logical structure from the layout.
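Beyond the dedicated converters listed above, small flat-file-to-XML conversions of the kind the article describes can also be scripted directly. This is a minimal sketch using Python's standard library; it is not one of the products mentioned in the article, and the field names and records are invented for illustration:

```python
import csv
import io
import xml.etree.ElementTree as ET

# Stand-in for a legacy flat file; the columns are invented for illustration.
flat_data = "name,price\nwidget,9.99\ngadget,24.50\n"

root = ET.Element("products")
for row in csv.DictReader(io.StringIO(flat_data)):
    item = ET.SubElement(root, "product")
    for field, value in row.items():
        # One child element per column in the flat file.
        ET.SubElement(item, field).text = value

xml_bytes = ET.tostring(root, encoding="utf-8")
print(xml_bytes.decode("utf-8"))
```

Parsing the result back with `ET.fromstring` confirms the conversion round-trips, which is the practical test of any such converter.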
<urn:uuid:1a0b0b95-a6b1-464c-a9ad-57fbaf4ef328>
CC-MAIN-2020-10
https://www.datamystic.com/press/xmldata.htm
s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875144167.31/warc/CC-MAIN-20200219184416-20200219214416-00516.warc.gz
en
0.892621
885
2.609375
3
Animals are protected from cruelty and mistreatment by the Animal Welfare (Guernsey) Ordinance, 2012. Follow this link to view the Ordinance. This legislation protects animals from acts of violence and neglect and places a duty on owners to care for any animal that they keep on the basis of the five freedoms which are - - FREEDOM FROM HUNGER AND THIRST by ready access to fresh water and a diet to maintain full health and vigour. - FREEDOM FROM DISCOMFORT by providing an appropriate environment including shelter and a comfortable resting area. - FREEDOM FROM PAIN, INJURY OR DISEASE by prevention or rapid diagnosis and treatment. - FREEDOM TO EXPRESS NORMAL BEHAVIOUR by providing sufficient space, proper facilities and company of the animal's own kind. - FREEDOM FROM FEAR AND DISTRESS by ensuring conditions and treatment which avoid mental suffering. If you have reasonable grounds to believe that an animal is being mistreated, contact us. - Agriculture, Countryside and Land Management Services has issued a number of welfare codes on the care of various animals along with guidance on how to comply with the relevant code. Follow this link for further information. Animals in greenhouses - For information on precautions to take in the summer when animals and birds are kept in greenhouses, follow this link. - Follow this link for information on the minimum standards for tethering cattle. - Hunting is permitted on private land and certain public land and the animals that can be hunted are wild Rabbit, Wood pigeon, Common Pheasant, Grey or English Partridge, Red-Legged or French Partridge, Eurasian Woodcock, Common Snipe, Cross-bred Mallard and Greylag Goose. - Follow this link for further guidance. Follow this link for the Animal Welfare (Slaughter, Killing, Euthanasia etc) (Prescribed Animals) Regulations 2014. Follow this link for The Animal Welfare (Requirements for Slaughter, Killing, Euthanasia, etc) Order 2014. 
- These documents provide information on who can hunt, the places where game animals can be hunted, the methods of hunting that can be used and, in certain cases the times of the year when game animals can be hunted (there is a close season for some species). Hunting with dogs and ferrets is permitted and nets and cage traps can be used. All other traps and snares are prohibited. - The use of firearms is regulated and hunting on public land requires a hunting permit. The permit only allows hunting on some public land (which is listed on the Permit), it does not allow the holder to hunt on all public land. The possession and use of firearms is strictly regulated. Information on shotgun certificates and permits can be obtained from Guernsey Police on 725111. The importation of firearms is also regulated and further information can be obtained from the Guernsey Border Agency on 741417. - Agriculture, Countryside and Land Management Services has authorised personnel from the GSPCA and Animal Aid to seize and detain stray animals. - A stray is any domestic animal that has wandered or escaped from its normal place of confinement and is not under the supervision or control of its owner or keeper. Wild and feral animals are not strays. - If an animal is seized as a stray every effort will be made to identify the owner. Owners can collect an animal from the relevant organisation and may have to pay a fee for its detention before it is released. If an animal is not claimed within 21 days of being seized it may be rehomed or disposed of. - A person who finds a stray animal must first make every attempt to return it to its owner. If that is not possible, they can take it to the GSPCA or Animal Aid. - Agriculture, Countryside and Land Management Services has designated the following animals as pests - brown rat, house mouse, carrion crow, magpie and feral pigeon. Download 'Identifying Pest Animals'. 
- These animals can be controlled, but only using particular methods and in the case of feral pigeons, only by an approved pest controller subject to certain conditions. Certain types of Larsen trap have also been approved to control carrion crows and magpies, but these can only be used by an approved pest controller. Download the 'Code of Practice for the Responsible Use of Larsen Traps'. - Follow this link for further guidance and this link for the relevant legislation.
<urn:uuid:8d5a79d8-8031-49b0-b078-09c8bdb561a5>
CC-MAIN-2021-10
https://www.gov.gg/animalwelfare
s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178363211.17/warc/CC-MAIN-20210302003534-20210302033534-00072.warc.gz
en
0.912302
929
3
3
BERKELEY – The economist Suresh Naidu once remarked to me that there were three big problems with Karl Marx’s economics. First, Marx thought that increased investment and capital accumulation diminished labor’s value to employers and thus diminished workers’ bargaining power. Second, he could not fully grasp that rising real material living standards for the working class might well go hand in hand with a rising rate of exploitation – that is, a smaller income share for labor. And, third, Marx was fixated on the labor theory of value. The second and third problems remain huge analytical mistakes. But, while Marx’s belief that capital and labor were substitutes, not complements, was a mistake in his own age, and for more than a century to follow, it may not be a mistake today. Think of it this way. Humans have five core competencies as far as the world of work is concerned: · Moving things with large muscles. · Finely manipulating things with small muscles. · Using our hands, mouths, brains, eyes, and ears to ensure that ongoing processes and procedures happen the way that they are supposed to. · Engaging in social reciprocity and negotiation to keep us all pulling in the same direction. · Thinking up new things – activities that produce outcomes that are necessary, convenient, or luxurious – for us to do. The first two options comprise jobs that we typically think of as “blue collar.” Much of the last three options embody jobs that we typically think of as “white collar.” The coming of the Industrial Revolution – the steam engine to generate power and metalworking to build machinery – greatly reduced the need for human muscles and fingers. But it enormously increased the need for human eye-ear-brain-hand-mouth loops in both blue-collar and white-collar occupations. Over time, the real prices of machines continued to fall. 
But the real prices of the cybernetic control loops needed to keep the machines running properly did not, because every control loop required a human brain, and every human brain required a fifteen-year process of growth, education, and development. But there is no iron law of wages that requires technologies of power and matter manipulation to advance more rapidly than technologies of governance and control. The direction of technological progress today is toward moving very large parts of both the blue-collar and white-collar components of overseeing ongoing processes and procedures from humans to machines. How many of us can be employed in personal services, and how can such jobs be highly paid (in absolute terms)? The optimistic view is that those, like me, who find ourselves fearing the relative wage distribution of the future as a source of mammoth inequality and power imbalance simply suffer from a failure of imagination. Marx did not see how the replacement of textile workers by automatic looms could possibly do anything other than lower workers’ wages. After all, the volume of production could not possibly expand enough to reemploy everyone who lost their job as a handloom weaver as a machine-minder or a carpet-seller, could it? It could, but Marx’s mistake was not a new one. A century earlier, the French physiocrats Quesnay, Turgot, and Condorcet did not see how the share of the French labor force employed in agriculture could possibly fall below 50% without producing social ruin. After all, in a world of solid farmers, useful craftsmen, dissolute aristocrats, and flunkies, demand for manufactured items and flunkies was limited by how much of each aristocrats could use. Thus, a decline in the number of farmers could produce no outcome other than poverty and widespread beggary. 
Neither Marx nor the physiocrats could imagine the great many well-paid things that we could find to do once we no longer needed to employ 60% of the labor force in agriculture and another 20% in hand spinning, handloom weaving, and land transport via horse and cart. And today, the optimistic view is that those with excess wealth will continue to think of lots of things for everyone else to do to make their lives more convenient and luxurious, and that the ingenuity of the rich will outstrip the supply of labor by the poor and turn the poor into the middle class. But, given the rapid development of technologies of governance and control, the pessimistic view deserves attention. In this scenario, pieces of option three remain stubbornly impervious to artificial intelligence and continue to be mind-numbingly boring, while option four – engaging in social reciprocity and negotiation – remains limited. Welcome to the virtual sweatshop economy, in which most of us are chained to desks and screens – so many powerless cogs for Amazon Mechanical Turk, forever. Editor's note: A shorter version of this commentary was previously published by the New York Times.
<urn:uuid:0dc6ddfe-aae1-4253-ab36-10ac1e73fde2>
CC-MAIN-2015-11
http://www.project-syndicate.org/commentary/j--bradford-delong-wonders-whether-capital-now-substitutes-for--rather-than-complements--labor
s3://commoncrawl/crawl-data/CC-MAIN-2015-11/segments/1424936463475.57/warc/CC-MAIN-20150226074103-00169-ip-10-28-5-156.ec2.internal.warc.gz
en
0.952233
998
2.625
3
In last month's "Personal," we saw that the older generation of the Israelites whom God had freed from slavery in Egypt died in the wilderness. The story of their journey through the wilderness shows that they never overcame their slave mentality, the mind-set they brought with them from Egypt. Their thinking—and thus their attitudes and conduct—constantly reverted to the way it had been molded in Egypt. Despite witnessing awesome miracles, enduring terrible plagues that demonstrated God's mercy upon them and His punishment of the Egyptians, living "under the cloud" and having their daily needs supplied directly by God, the Israelites found the wilderness to be nothing more than a huge cemetery in which they wandered for forty years. The warning is clear to those of us "on whom the ends of the ages have come" (I Corinthians 10:11). In this type of our spiritual journey, Canaan, the Promised Land, represents the Kingdom of God. But those older Israelites never made it there! They fell short of the goal because a carnal mind, shaped and hardened by this world into inordinate self-concern, so dominated their choices that they dropped like so many flies. In graphic language the apostle Paul writes, "Now with whom was He angry forty years? Was it not with those who sinned, whose corpses fell in the wilderness?" (Hebrews 3:17). According to a number of commentaries, the last phrase indicates a scattering of dismembered bodies, as if they had been left unburied. These "corpses" were the same people who came out of Egypt with great joy, exulting in their new-found liberty. They yearned for a settled and free life in a land that was their own. But, instead of knowing the joy and plenty of the Promised Land, they chose to sentence themselves to live a life of homeless wandering in a barren land and to die and perhaps be buried in an unmarked grave. 
Chosen to be the beneficiaries of God's great blessings in a rich land, they instead lived poor and hungry in the wilderness, discontented and often at war because of their sins. Their example ought to be a sobering warning. In Hebrews 3:19, Paul puts his finger on the source of their problem, why their heart could not be changed, why they consistently and persistently sinned and rebelled: "So we see that they could not enter in because of unbelief." Paul later turns this thought into an admonition for us: Therefore, since a promise remains of entering His rest, let us fear lest any of you seem to have come short of it. For indeed the gospel was preached to us as well as to them; but the word which they heard did not profit them, not being mixed with faith in those who heard it. (Hebrews 4:1-2) Not only did Israel have the witness of numerous demonstrations of God's presence and power among them to provide a foundation for faith, but they were also given the Word of God by His servants Moses and Aaron. In addition, they had living examples of faith in Moses, Aaron (most of the time), Joshua, Caleb and others. God supplied these men with gifts by His Spirit as a testimony that should have provided more incentive for the Israelites to believe Him. But Hebrews 3:17 says He was angry with them forty years! If ever a people almost drove God to the point of exasperation, it was Israel in the wilderness. We must not allow such a powerful lesson to pass by unheeded. Paul agrees, "For whatever things were written before were written for our learning, that we through the patience and comfort of the Scriptures might have hope" (Romans 15:4). The lesson is clear. Those who believe God reveal their faith by obeying Him. Those who do not believe, disobey. Hebrews 3:12 warns, "Beware, brethren, lest there be in any of you an evil heart of unbelief in departing from the living God." Unbelief is evidence of an evil heart, and an evil heart departs from God. 
Like Hebrews 3:16-4:2, this verse equates unbelief with disobedience.

Living By Faith

How important is faith? "For yet a little while, and He who is coming will come and will not tarry. Now the just shall live by faith; but if anyone draws back, My soul has no pleasure in him" (Hebrews 10:37-38). "The just shall live by faith" is both a statement of fact about the basis of a Christian's life and a command. It is so important that it appears once in the Old Testament and three times in the New (Habakkuk 2:4; Romans 1:17; Galatians 3:11). In each case, the context is somewhat different, but its importance to a Christian's salvation is not lost. The concept is not difficult to understand. Paul further clarifies it in II Corinthians 5:7: "For we walk by faith, and not by sight." A simple definition of faith in Webster's New World Dictionary is "complete trust, confidence, or reliance." At the end of the definitions, "belief" is listed as a synonym. Belief means "faith, esp. religious faith; trust or confidence." The dictionary definitions show that the two words are virtually synonymous. However, in the Bible and in practical application a very wide difference separates merely believing and living by faith. The practical application of faith is more than simply acknowledging the reality of God. Living by faith involves qualities that are better expressed by the word "trust." This kind of faith produces loyalty or faithfulness expressed in the Christian's life by works of obedience. Do you think for a moment that the Israelites in the wilderness disbelieved that God existed? Some few may have argued that the miracles they had experienced from the arrival of Moses in Egypt until they died in the wilderness were nothing more than natural phenomena. There are always some doubters and scoffers of that sort (II Peter 3:3-7). But the vast majority of Israelites could not deny to themselves God's mighty acts on their behalf.
They had heard the voice of God at Mount Sinai, had seen a wind from God part the Red Sea and had escaped death on Passover while the Egyptian firstborn had died. But when God required a higher level of obedience to follow His cloud across the wilderness and depend on Him to supply their every need, the record shows they did not trust Him. Their loyalty dissolved, and they rebelled! They did not have it within them to live, or walk, by faith. "Walk" is frequently used in the Bible to indicate movement through life. When used figuratively, the context shows the manner or condition of the "walk." For example, "walk honestly" (Romans 13:13, KJV); "walk worthy of the calling" (Ephesians 4:1); and "no longer walk as the rest of the Gentiles walk, in the futility of their mind" (verse 17) are examples of a manner of living life. "Walk by faith" (II Corinthians 5:7); "walk in the flesh" (II Corinthians 10:3); and "walk in newness of life" (Romans 6:4) are examples of living in a certain state or condition. The Israelites of the Exodus definitely lived according to the flesh, fulfilling the desires of their bodies and minds. They conducted their lives as if God did not exist, as though they would never have to answer to Him or anybody else. They lived seemingly without regard for what He said and with little or no concern about consequences to themselves or their posterity. They simply moved in the direction their carnal impulses drove them. Somewhere along the way, they lost the vision of entering the promised homeland. They forgot about settling on their own property and living free under the government and laws of God. Yes, that older generation literally walked in following the cloud as it moved toward the Promised Land, but their manner of life under the cloud corresponded to living in darkness. So, they never made it to Canaan.

The Right Kind of Faith

We can tell whether we have the right kind of faith.
Hebrews 11:1 provides a definition: "Now faith is the substance of things hoped for, the evidence of things not seen." Hupostasis, the word translated "substance," means "that which underlies the apparent; that which is the basis of something, hence, assurance, guarantee and confidence" (Spiros Zodhiates, The Complete Word Study Dictionary: New Testament, p. 1426). The English "substance" is built from a prefix and a root which together mean "that which stands under." Webster's defines it as "the real or essential part or element of anything; essence, reality, or basic matter." It is very similar in meaning to hupostasis. Paul is saying that, for Christians, faith underlies what is seen externally in the conduct of their lives. Underlying a building is its foundation, and in most buildings, the foundation is rarely seen. If it is seen at all, usually only a small portion is visible, but it is there. If no foundation exists, the building soon becomes crooked and warped. In most cases, it will collapse and be completely unusable. Since Paul says, "We walk by faith, not by sight," we understand that underlying the conduct of a Christian's life is not merely believing that God is, but a constant and abiding trust in Him. Since it is impossible for God to lie, we trust that what God has recorded for us to live by is absolute and must be obeyed, and that it will work in our lives regardless of what may be apparent to the senses. How much of what you do is really motivated by an implicit trust in God's Word? This is how we can tell whether we are living by faith. We must be honest in our evaluation though. We find it very easy to shade the truth through self-deception. We justify disobedience by rationalizing around God's clear commands or examples, saying that our circumstance is special because . . . (fill in the blank). 
If we are honest, we also have to admit that Abel, Enoch, Noah, Abraham, Joseph, Moses, Daniel, Shadrach, Meshach, Abed-Nego, Paul, Christ and a whole host of others could also have rationalized that surely their circumstances were special. But in their cases, faith undergirded how they lived even when the going really got rough. We like to think of ourselves as rising to the occasion when a time of great crisis arises. We all hope to emulate what the heroes of faith did. But as great as they were, Jesus says in John 15:13-14, "Greater love has no one than this, than to lay down one's life for his friends. You are My friends if you do whatever I command you." It is very easy to think of the sacrifice implied in "lay[ing] down one's life" as dying for another in one moment of time. Though that may occasionally occur, the context shows this sacrifice within the framework of friendship. Friendship occurs over months and years, not just in one moment in time. In true friendships, because we are eager to help, we willingly spend ourselves ungrudgingly, without tallying the cost. Friends open their hearts and minds to each other without secrecy, which one would not do for a mere acquaintance. True friends allow the other to see right in and know them as they really are. Friends share what they have learned. Finally, and most importantly for this article, a friend trusts the one who believes in him, and risks that the other will never doubt his loyalty but look upon him with proven confidence. Though the principle given by Christ is applicable to all friendships, He has one specific friendship as His primary focus: ours with Him, or more generally, ours with God. Proverbs 18:24 says, "A man who has friends must himself be friendly, but there is a friend who sticks closer than a brother." That friend is Jesus of Nazareth, but He made it very clear that if we are His friends, we will show it in our obedience to His commands. But before we can obey, we must trust Him. 
Take a moment to evaluate yourself. Are you as open and frank with Him as He is with us through His Word? Often our prayers are stiff and formal, not truly honest. Besides that, sometimes we become bored in His presence and soon have nothing to say to Him. Is it not true that we do not trust Him as fully as we should? That we are often quick to doubt Him? That we easily grow suspicious of Him? That we lose heart or fear that He has forgotten us? That He is not really trying or is unequal to the task of shepherding us into His Kingdom? Though He has never failed us, we are so quick to suspect and blame Him! Israel did all of these things in the wilderness because they did not believe God. Much to our dismay, we do them now, in our time of salvation!

The Faith That Saves

Faith's importance to salvation is accentuated by Ephesians 2:8, where Paul writes, "For by grace you have been saved through faith, and that not of yourselves; it is the gift of God." Faith plays a role in the entire process until we enter the Kingdom of God. It is the sum of what God is doing in our lives: "Jesus answered and said to them, 'This is the work of God, that you believe in Him whom He sent'" (John 6:29). In the wonderfully "meaty" fourth and fifth chapters of Romans, Paul mentions faith a dozen times, almost all concerning justification, being made righteous or having access to grace, and thus, having the hope of the glory of God. The faith that saves has its beginning when God, on His own initiative, calls us (John 6:44) and leads us to repentance (Romans 2:4). He does this by His Spirit guiding us into all truth (John 16:7-14). Stirring up our minds to knowledge, His Spirit enables us to perceive from a perspective we never before seriously considered. This, combined with the confrontation that occurs with the carnal mind when we are forced to choose what to do with this precious truth, gives birth to a living faith, a faith that works, a faith that walks in godliness.
This would never occur if God did not first do His part. We would never find the true God on our own or understand His gospel of the Kingdom of God. We would never be able to choose the real Jesus, our Savior and Elder Brother, from the mass of false christs created in the minds of men. Not knowing what to repent of or toward, we would never repent. As miraculous and powerful as God's liberation of Israel from bondage was, even more so and of greater importance is the breaking of our bondage to Satan, this world and human nature. This is why Ephesians 2:8 says the faith that saves is "the gift of God." Israel's release from Egypt was God's gift too. Regardless of how much they cried out to Him, the Israelites would never have left Egypt without Him. If God had not been merciful and faithful, if He had not been trustworthy, they would never have been freed. What did God lead us to that sparked this saving faith in us? He led us to His Word. We can glean a measure of faith from observing God's creation, but this faith cannot save because it does not reveal His purpose. It gives us no direction or outlet for the soaring thoughts and creative energies of the God-given gift of a mind trained in His image. But we do find God's purpose and His revelation of Himself in His Word. "So then faith comes by hearing, and hearing by the word of God" (Romans 10:17). Of course, this does not mean that all who hear the message will understand and accept it. Without the message, however, there would be nothing to believe in, nothing that one could trust to lead him to salvation. In practical application, this means that one should always most carefully evaluate the message being preached rather than the man or the corporate body he represents. It is essential that we put our trust in the right teachings. Most of the people who claim to be "Christian" are living by false gospels. The Bible shows this principle from beginning to end.
Adam and Eve put their trust in Satan's message rather than God's (Genesis 3:1-6). The children of Israel listened to Korah, Dathan and the two hundred fifty leaders (Numbers 16:1-3), and later they succumbed to the Moabites' appeal to sexual license (Numbers 25:1-3). In each case many died as a witness to us. After Solomon's reign, Israel followed Jeroboam's false message. Christ prophesied that many would proclaim that He (Jesus) is the Christ and yet deceive many.

We Must Choose to Live by Faith

We must learn the valuable lessons regarding faith shown in the wandering of the Israelites in the wilderness because they have direct application to us (Romans 15:4; I Corinthians 10:11). The people knew the history of their ancestors with whom God had worked, yet they chose to forget His graciousness to Abraham, Isaac, Jacob and Joseph. God demonstrated His presence to them, but the Israelites chose to disregard Him. They had the gospel preached to them, and they chose not to believe it. They had among them the godly witness of men of faith, men in whom the Spirit of God dwelled, and the rebellious children of Israel chose not to follow them. God does not ask us to believe His message without evidence. He presents us with an overwhelming body of proofs that He does exist and is working out a great purpose that now includes us. We would not even be in a position to read this had He not personally acted to stir our minds to understand things of His Spirit. He has given us His Spirit that we might know the things of God. When we have faith, we trust God that what He has said and promised are true. Though we may at times feel all alone in the midst of a trial, we can take comfort that so did all those others of the faithful who went before us. The very nature of faith demands that such a feeling of "going out on a limb" occur. If we had what we desire, we would not need faith (see Hebrews 11:13).
Now the weight of responsibility for making choices grounded on trust in God's Word has fallen upon us. It is awesome to think of ourselves as baptized into the history of the same spiritual company of those greats of the past, men and women of faith whose names are emblazoned in our memories. We must not forget either their standing with God because of their faith or Israel's failure in the wilderness because they did not trust Him. Remember the warning and advice God gave to Israel in the days before they entered the Promised Land: For this commandment which I command you today, it is not too mysterious for you, nor is it far off. It is not in heaven, that you should say, "Who will ascend into heaven for us and bring it to us, that we may hear it and do it?" Nor is it beyond the sea, that you should say, "Who will go over the sea for us and bring it to us, that we may hear it and do it?" But the word is very near you, in your mouth and in your heart, that you may do it. See, I have set before you today life and good, death and evil, in that I command you today to love the LORD your God, to walk in His ways, and to keep His commandments, His statutes, and His judgments, that you may live and multiply; and the LORD your God will bless you in the land which you go to possess. But if your heart turns away so that you do not hear, and are drawn away, and worship other gods and serve them, I announce to you today that you shall surely perish; you shall not prolong your days in the land which you cross over the Jordan to go in and possess. 
I call heaven and earth as witnesses today against you, that I have set before you life and death, blessing and cursing; therefore choose life, that both you and your descendants may live; that you may love the LORD your God, that you may obey His voice, and that you may cling to Him, for He is your life and the length of your days; and that you may dwell in the land which the LORD swore to your fathers, to Abraham, Isaac, and Jacob, to give them. (Deuteronomy 30:11-20)

The choice is ours.

© 1995 Church of the Great God, PO Box 471846, Charlotte, NC 28247-1846
<urn:uuid:a768067f-f1f7-428a-a512-5af00b380a00>
CC-MAIN-2019-35
https://www.cgg.org/index.cfm/fuseaction/Library.sr/CT/PERSONAL/k/524/Wandering-Wilderness-Faith.htm
s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027330233.1/warc/CC-MAIN-20190825130849-20190825152849-00094.warc.gz
en
0.970561
4,434
2.671875
3
World Environment Day was observed on Monday to highlight the hazards of plastic pollution, which imposes grave impacts on nature and human health. The theme of this year's World Environment Day was solutions to plastic pollution, under the campaign #BeatPlasticPollution. More than 430 million tonnes of plastic is produced every year worldwide, half of which is designed to be used only once. Civil society, government departments including the Pakistan Environmental Protection Agency, non-governmental organisations and others will hold awareness walks, seminars and symposiums to sensitise the masses on plastic pollution. Meanwhile, Minister for Climate Change Sherry Rehman has said the consequences of plastic pollution are intense and long term, as it irreversibly damages our environment and threatens the very fabric of life on earth. In her message on World Environment Day, the Minister urged a call for action as plastic production is set to triple by 2060 if 'business-as-usual' continues. Naval Chief Admiral Muhammad Amjad Khan Niazi has said Pakistan Navy is taking steps for the protection of the aquatic environment. In his message on World Environment Day, he said these include cleaning of harbors, establishment of reed-bed plants in the areas under Pakistan Navy's precinct and extensive plantation drives. He said those associated with the industries must also adopt such methods that help effectively address the issue of plastic pollution. United Nations Chief Antonio Guterres has stressed the importance of curbing the "catastrophic" consequences of waste plastics. In his message for World Environment Day, he said that every year, over 400 million tons of plastic is produced worldwide - one-third of which is used just once. He said that every day, the equivalent of over 2,000 garbage trucks full of plastic is dumped into our oceans, rivers, and lakes. He noted that micro-plastics are finding their way into the food we eat, the water we drink, and even the air we breathe.
Source: Radio Pakistan
<urn:uuid:f90866ed-fa33-43b3-b454-104e254c4805>
CC-MAIN-2023-40
https://asianetpakistan.com/world-environment-day-observed/
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510412.43/warc/CC-MAIN-20230928130936-20230928160936-00809.warc.gz
en
0.949699
402
3.234375
3
SST GCP HCMC 2013
Cu Chi Tunnel

Background information to the Vietnam War

Origins & cause of the war
Since Vietnam was under French rule, the US decided to help the French because they were allies. Furthermore, the US and the French supported the government, and so they decided to attack Vietnam because Vietnam was a communist society.

Who are the Viet Cong?
It is a Vietnamese word for those belonging to or supporting the National Liberation Front of the nation formerly named South Vietnam.

How were the Cu Chi Tunnels used in the war?
The tunnels were used by Viet Cong soldiers as hiding spots during combat, as well as serving as communication and supply routes, hospitals, food and weapon caches and living quarters for numerous North Vietnamese fighters. The tunnel systems were of great importance to the Viet Cong in their resistance to American forces, and helped to counter the growing American military effort.

Done By Group 5
<urn:uuid:8d003931-943f-4541-89e6-f935684d8a33>
CC-MAIN-2018-13
http://sst-gcphcmc2013.blogspot.com/p/cu-chi-tunnel.html
s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257647530.92/warc/CC-MAIN-20180320185657-20180320205657-00209.warc.gz
en
0.969266
201
3.484375
3
Have you ever worried that you're going to turn into your parents? Well, if you're not careful it's going to happen sooner than you think—and in ways you might not have expected. The study measured various aspects of metabolic health, including the prevalence of being overweight or obese and body mass index (BMI). All three are notably higher in young people today than they had been in the past, with obesity among women in their 20s up twofold from a generation earlier. That means that a 20-something is two times more likely to be obese by her 30s than a woman 10 years older than she is. These scary generational shifts were seen nearly across the board for both men and women. Poor metabolic health isn't just about whether you can eat as much as your friend Sally without gaining weight. It's the first step on the slippery slope to a heap of problems, from increased belly fat to obesity to type-2 diabetes, high blood pressure and high cholesterol. But fear not! You're not destined to go down this road. Though there is a genetic component at play, you can control your metabolic health. Good lifestyle and eating habits will help you stay healthy and keep from aging ahead of your years. Here's what you need to do: We've said it before and we'll say it again: Regular physical activity—whether at the gym or by speed walking to work and actively tending your lawn—is a key to keeping your body humming along. Thirty minutes of aerobic exercise, five days a week, helps you stay at a healthy weight, decreases damaging inflammation, helps regulate blood pressure and cholesterol and supports your system's ability to use insulin efficiently. We'd really like you to take ten thousand steps every day. Start small and build (and by the way, a pedometer is much easier than counting). You know when you're looking at a package of snack cakes that they're not good for you. But do you really stop to think about what they're doing?
Routinely filling up on carbohydrates from white flour (cakes, white bread, rice) and tons of added sugar (why else would those treats taste so sweet?) forces your body to pump out more and more insulin to break down the torrent of sugar rushing through your bloodstream. Eventually, your body gets so used to those levels of insulin that the same amount doesn't work anymore. That's called insulin resistance, and it's just a step away from type-2 diabetes. What's more, packaged foods are full of trans fats, which contribute to high cholesterol. That snack doesn't seem so sweet after all, does it?

Skip the Soda

Did you think that just because diet soda was calorie-free it was guilt-free, too? Sorry to break it to you, but even drinking diet sodas is associated with a higher risk of metabolic syndrome, a nasty combination of many of the risk factors of metabolic aging. The sugar and corn syrup in regular (sugared) soda is a major source of calories and a leading culprit in this country's fight with obesity. Surprisingly, the no-calorie sweetener in diet drinks may be dangerous, too. One study showed that having more than one soft drink a week—whether regular or diet—was associated with a 44 percent increase in the risk of metabolic syndrome among 50-year-olds. One theory is that the high sweetness of diet drinks causes people to crave sweet foods, eating more sugar in the end. Another thought is that some ingredients in artificial sweeteners might lead to insulin resistance or inflammation. Either way, steer clear and go for black coffee, natural teas, water or seltzer instead.
<urn:uuid:a7f66877-fdb7-44bd-b5c6-560ec43f4666>
CC-MAIN-2016-36
http://www.youbeauty.com/beauty/metabolic-health/
s3://commoncrawl/crawl-data/CC-MAIN-2016-36/segments/1471982292734.19/warc/CC-MAIN-20160823195812-00048-ip-10-153-172-175.ec2.internal.warc.gz
en
0.955908
784
2.5625
3
Are Prevailing Wage Laws Discriminatory?

If you work as a contractor on projects with federal funding, prevailing wage laws may be pertinent to your rate of pay. An opinion piece published in the Albuquerque Journal makes the argument that "prevailing wage" laws are discriminatory. Understand what these laws say and how they affect you. (Photo Credit: daily sunny/Flickr)

Little Davis Bacon

The Davis Bacon Act (DBA) was enacted on March 3, 1931. It has been amended twice and temporarily suspended three times, twice to control spending on hurricane recovery, and once by President Roosevelt for the New Deal. In a publication by Powell & Booth, PC Presents, the authors discuss that the Davis Bacon Act is controversial, with some strong supporters and equally strong detractors. The DBA affects public construction projects that exceed $2,000 in federal funding. It is designed to protect workers. By saying that contractors must pay their workers the prevailing wage generally earned in that area, workers are protected from low pay and contractors are unable to lower their bids for the project by paying workers less than the prevailing wage in the area.

Discrimination and Trade Unions

One motivation behind the passing of the DBA was to prevent "fly by night" contractors from coming into an area, winning the project and paying migrant workers low wages. This may be a form of discrimination, but the opinion piece in the Albuquerque Journal made a different discrimination argument. The author points out what she claims is widely known, and is also included in the Powell & Booth report: that the federal government's assessments of prevailing wages in different areas of the country are inflated. Contractors must bid enough to cover paying workers the official prevailing wage, not necessarily the actual prevailing wage. She makes the argument that it is trade unions who are behind keeping these laws in place and behind the inflated wage assessments.
It may be less paranoid than it sounds; in 2009 Senate Bill 33 was passed in New Mexico. Prevailing wages are now "determined by the director of the Labor Relations Division of the Department of Workforce Solutions, at the same wage rates and fringe benefit rates used in collective bargaining agreements as supported by the unions." It seems that smaller contractors in New Mexico are having trouble competing with those who employ unionized labor. The smaller contractors are less able to pay the prevailing wages and, therefore, bid too high to be awarded the contract. And this argument sounds like it is less about bidding on government contracts, and more about disabling trade unions. If the trade unions lose power, then more contractors could potentially bid lower, win the jobs, and pay workers less money. In the end, this may simply be about whether one supports unions and higher pay for honest labor, or instead supports the claims of smaller contractors that the playing field is stacked against them because they can't afford to pay their workers the prevailing wage.

Tell Us What You Think

Do you think prevailing wage laws should be supported, or should we allow for competition and lower costs, including lower wages? We want to hear from you! Leave a comment or join the discussion on Twitter.
<urn:uuid:0c20ee79-c7b8-476c-931d-a182a7b2ccfa>
CC-MAIN-2016-40
http://www.payscale.com/career-news/2014/02/are-prevailing-wage-laws-discriminatory-
s3://commoncrawl/crawl-data/CC-MAIN-2016-40/segments/1474738661900.67/warc/CC-MAIN-20160924173741-00232-ip-10-143-35-109.ec2.internal.warc.gz
en
0.956367
648
2.515625
3
Counts of Flanders

The county of Flanders was created in 864, when the French king Charles the Bald granted it as a fief to his son-in-law Baldwin with the Iron Arm. Flanders was a part of France but distinguished itself from the rest of the country with its Germanic Flemish population and close economic ties to England. Unlike other French fiefs it was never returned to the French king's control; instead it became a part of the duke of Burgundy's possessions in 1384, which would evolve into present-day Belgium.

United with Burgundy in a personal union, 1384
<urn:uuid:d2947d9d-61a2-46a3-b6be-6ed50a13a46a>
CC-MAIN-2019-09
http://www.tacitus.nu/historical-atlas/regents/benelux/flanders.htm
s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247517815.83/warc/CC-MAIN-20190222114817-20190222140817-00459.warc.gz
en
0.976465
133
2.8125
3
Headaches result when muscles and blood vessels outside of your skull put pressure on your nerves, sending a "pain message" to your brain. Everyday habits that prevent headaches include a healthy, regular sleep schedule, stress management and consistent exercise. To get back on track to a happy, healthy life, be proactive: headaches don't have to keep you from the activities you enjoy.

A Good Night's Sleep
— Changes to your sleep schedule can cause cluster headaches, which occur without warning and are characterized by a sudden, sharp pain that can reach maximum intensity within minutes of onset.
— Your nervous system requires sleep to function properly. A regular schedule of seven to nine hours of sleep every night will keep your body's sleep-wake cycle in order.
— To measure the quality of your sleep, gauge your energy level throughout the day. Are you waking up refreshed? If not, headaches may result and your sleeping habits may be to blame.
— According to the Better Sleep Council, 65 percent of Americans lose sleep because of stress, which is itself a trigger for headaches.
— Muscles within the neck, shoulders and scalp tighten and contract as a response to stress. This anxiety-induced reaction often leads to tension headaches.
— Tension headaches, the most common type, cause a mild to moderate pain some people describe as a tight band wrapped around the head.
— High levels of stress and the resulting headaches can eventually take their toll on your general health. As a professional ice skater, I used breathing exercises to calm my nerves and eliminate stress before competitions. Try this: take slow, deep breaths to help clear your mind, slow your heart rate and rid your body of tension.
— Usually a simple nap or an over-the-counter medication, such as aspirin or ibuprofen, can relieve the pain associated with tension headaches. Be careful not to consume too much pain medication, because overuse can lead to rebound headaches.
— To cope with stress, change how you approach situations that negatively affect your mood. We often can't change such situations, so relief may lie in how you react to the problem itself.
— Problems with stress and sleep can also cause a more debilitating type of headache, the migraine, which can last between a few hours and a few days.
— The intense pain of migraines is often accompanied by sensitivity to light, nausea or vomiting, changes in body temperature and confusion. If you begin to experience a migraine, seek a dark, quiet room to rest. A cool compress on the forehead and the back of the neck may also help alleviate pain.
— Migraine sufferers, who often have a family history of migraines, should visit their physician to further explore prevention and treatment options.

When to See Your Doctor
— Don't hesitate to go to the emergency room for headaches that start after a head or neck injury, or that cause difficulties with speech. These may be signs of a more serious condition.
— If headaches occur at least three times a month, or you experience an abrupt, severe headache, pay your doctor a visit. To help your physician diagnose and treat your headaches, keep a journal that notes how long each headache lasts, the severity and description of the pain, the location of the headache and any triggers or effects you notice.
— In addition to recommending lifestyle changes, your doctor may prescribe medicine and, in order to reach a diagnosis, arrange for a blood test and an X-ray, MRI or CT scan.

Another Reason to Get Fit
— To reduce the frequency and intensity of headaches, maintain a regular exercise routine to help relax muscles.
— The U.S. Department of Health and Human Services recommends 30 minutes of physical activity on most, preferably all, days of the week. If you keep a busy schedule, try breaking the 30 minutes into three 10-minute intervals throughout the day: a quick run in the morning, a short walk during your lunch break and 10 more minutes of activity in the late afternoon. Remember that you're more likely to stick to an exercise regimen that focuses on an activity you enjoy.
— While caffeine can help relieve headaches, too much of the stimulant can cause them. If your caffeine intake exceeds 500 mg per day, or approximately four cups of coffee, gradually decrease your consumption.
— Allergies also cause headaches. When air cannot enter the sinuses because of swollen tissue, pressure builds inside the head. To relieve the pain, use a decongestant to unblock nasal passages and eliminate any bacterial infection that may be present.

Time with your family, your focus at work and your general day-to-day activities are too important to be disrupted by headaches. Adjust your lifestyle to prevent headaches before they strike: sleep well, get plenty of exercise and keep stress at a minimum. If that doesn't work, your physician can help you take steps to relieve your pain.

HealthSaver, an emerging health care discount program, offers savings on prescriptions, vision care, complementary and alternative health care treatments, vitamins and supplements by mail, and more than 1,500 fitness clubs nationwide, including select Bally Total Fitness, World Gym and Ladies Workout Express locations. HealthSaver offers discounts of 20 percent on vision care, as well as discounts of 10 to 50 percent on prescriptions at participating pharmacies, 20 percent off complementary and alternative health care treatments, and fitness club benefits. HealthSaver also offers discounts of 10 to 35 percent on dental care services at some 42,000 participating provider locations nationwide, including routine cleanings, X-rays, fillings, orthodontics, and even popular cosmetic dentistry procedures such as teeth whitening. Members can also save from 5 to 50 percent off vitamins and supplements by mail. Discounts are based upon reasonable and customary costs or manufacturer's suggested retail price (MSRP) and are only available from participating providers.

HealthSaver is not an insurance product or service. More information about HealthSaver is available online at www.healthsaver.com or toll free by calling 1-800-7HEALTH (1-800-743-2584). A one-month trial membership in HealthSaver is also available (www.healthsaver.com or 1-800-743-2584).

About Affinion Group, Inc.
As a global leader with nearly 35 years of experience, Affinion Group (www.affinion.com) enhances the value of its partners' customer relationships by developing and marketing valuable loyalty, membership, checking account, insurance and other compelling products and services. Leveraging its expertise in product development and targeted marketing, Affinion helps generate significant incremental revenue for more than 5,300 affinity partners worldwide, including many of the largest and most respected companies in financial services, retail, travel, and Internet commerce.

Available Topic Expert: Peggy Fleming, https://profnet.prnewswire.com/Subscriber/ExpertProfile.aspx?ei=57424
Translation of Multi-Word Verbs in English Cookbook into Indonesian

This paper examines the translation of multi-word verbs in an English cookbook into Indonesian, with emphasis on how those verbs are rendered in the Indonesian version. The classification and meaning of phrasal verbs follow Quirk (1985). The research is descriptive and qualitative; data were collected through observation and documentation. The data source is an English cookbook entitled The Essential Book of Sauces & Dressings from Murdoch Books, published by Periplus, Singapore, and its Indonesian translation entitled Saus dan Dressing yang Esensial by Hadyana P., published by Periplus, Indonesia. The study shows that three types of multi-word verbs are found in the data source: phrasal verbs, prepositional verbs and phrasal-prepositional verbs.

Coghill, J., & Stacy, M. (2003). English Grammar. New York: Univ. Press.
Downing, A., & Locke, P. (2006). English Grammar: A University Course. Second edition. New York: Routledge. https://doi.org/10.4324/9780203087640
Eni, N. P. S., Artawa, K., & Udayana, I. N. (2017). The Analysis of Phrasal Verbs in the Novel "The Hobbit" by J. R. R. Tolkien. Jurnal Humanis, Fakultas Ilmu Budaya Unud, 18, 244-251.
Greenbaum, S., & Nelson, G. (2002). An Introduction to English Grammar. Second edition. London: Pearson Education Limited.
Hadyana. (2006). Saus dan Dressing yang Esensial. Jakarta: Periplus.
Jayantini, S. R. (2016). The Art of Translating: Theory and Analysis. Denpasar: Cakra Press.
Larson, M. L. (1984). Meaning-based Translation: A Guide to Cross-Language Equivalence. New York: University Press of America.
Lowery, B., Brodhust, W., Goggin, W., & Earl, M. (1996). The Essential Book of Sauces and Dressings. Singapore: Periplus.
Quirk, R. (1985). A Comprehensive Grammar of the English Language. USA: Longman Inc.
Santika, I Dewa Ayu Devi M., Putri, I Gusti Vina W., & Suastini, N. W. (2017). Translation of Phrasal Verbs Into Indonesian. Lingual, 9(2), 16-21. https://doi.org/10.31940/jasl.v2i1.804

This work is licensed under a Creative Commons Attribution 4.0 International License. Copyright for this article is retained by the author(s), with first publication rights granted to the journal. This is an open-access article distributed under the terms and conditions of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/4.0/).
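The three-way split reported in the abstract (phrasal, prepositional and phrasal-prepositional verbs, following Quirk 1985) can be illustrated with a toy tagger. This sketch is not part of the study; the particle and preposition word lists, the function name and the example phrases are all assumptions made for demonstration:

```python
# Toy illustration of Quirk's (1985) three-way classification of multi-word
# verbs. The word lists below are small, illustrative subsets, not data from
# the study, and real classification would need syntactic tests, not lookups.

PARTICLES = {"up", "out", "off", "down", "away"}
PREPOSITIONS = {"at", "for", "into", "with", "to"}

def classify(verb_phrase: str) -> str:
    """Return a rough type label for a multi-word verb written as a string."""
    words = verb_phrase.split()[1:]  # drop the lexical verb itself
    has_particle = any(w in PARTICLES for w in words)
    has_preposition = any(w in PREPOSITIONS for w in words)
    if has_particle and has_preposition:
        return "phrasal-prepositional verb"  # verb + particle + preposition
    if has_particle:
        return "phrasal verb"                # verb + adverbial particle
    if has_preposition:
        return "prepositional verb"          # verb + preposition
    return "single-word verb"

print(classify("cut up"))       # phrasal verb
print(classify("look at"))      # prepositional verb
print(classify("put up with"))  # phrasal-prepositional verb
```

A real analysis would also have to distinguish particles from prepositions by syntactic behavior (e.g. particle movement: "cut the onion up" vs. *"look the picture at"), which simple word lists cannot capture.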
Cryptocurrencies have caused a stir among both mainstream financial experts and everyday consumers in recent months. Thanks to last year's bull run, more people than ever know about Bitcoin and the various "altcoins". However, the reaction hasn't been entirely positive: many people remain skeptical of Bitcoin and the blockchain, which has led to a lot of misinformation. Some of these untruths have morphed into widely held but false beliefs about cryptocurrencies.

"They are worthless"
Likely one of the biggest myths in circulation is that cryptocurrencies have no value. This is simply not true. Other goods and services do not acquire their value from some inherent worthiness either; value is determined by what people are willing to pay. To see this in action, compare a designer t-shirt to a department store one. Is the designer shirt really worth that much more in materials? Not likely, but people place more value on designer items thanks to branding. This mentality applies to practically everything in life.

No practical use case – Only for criminals
Some bash cryptocurrencies for having no practical applications. This, too, has been proven false. The average user in a developed country, with the convenience of credit cards and online banking, may not see an immediate need for them in everyday transactions. However, they are still useful for many other purposes. For example, the speed and efficiency of the blockchain enables global remittance, allowing money to cross borders instantaneously instead of waiting days for settlement to occur. This puts cryptocurrencies well outside the box imagined by the average user, who often assumes they are strictly for illegal transactions. In fact, criminals generally prefer cash: most cryptocurrencies are not as anonymous as you might think, and people who commit crimes do not want all of their activity recorded in a public ledger.

Coins and the Blockchain are the same
The most common misconception, though, is that the blockchain and cryptocurrencies are the same thing. Cryptocurrencies simply use the blockchain to function. They are as different as night and day, and the blockchain can be applied to much more than cryptocurrencies. As time passes, this will likely become the most laughable of all these myths: every day, more innovative blockchain-based technologies are being developed to solve real problems in our everyday lives.
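The point behind that last myth is that a blockchain is just a tamper-evident, hash-linked ledger, and it can store any kind of record, not only coin transfers. A minimal sketch (the example records and function names here are invented for illustration, not any real blockchain's API):

```python
import hashlib
import json
import time

def make_block(records, prev_hash):
    """Build a block whose hash covers its records and the previous block's hash.

    The chaining is what makes tampering detectable: changing any earlier
    record changes that block's hash, breaking every later link.
    """
    block = {
        "timestamp": time.time(),
        "records": records,      # arbitrary data, not necessarily coin transfers
        "prev_hash": prev_hash,
    }
    payload = json.dumps(block, sort_keys=True).encode()
    block["hash"] = hashlib.sha256(payload).hexdigest()
    return block

def chain_is_valid(chain):
    """Re-derive every hash and check each block points at its predecessor."""
    for i, block in enumerate(chain):
        body = {k: v for k, v in block.items() if k != "hash"}
        payload = json.dumps(body, sort_keys=True).encode()
        if hashlib.sha256(payload).hexdigest() != block["hash"]:
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

# A genesis block, then a block of non-currency records (e.g. shipping events).
chain = [make_block(["genesis"], "0" * 64)]
chain.append(make_block(["crate 7 left warehouse", "crate 7 cleared customs"],
                        chain[-1]["hash"]))

print(chain_is_valid(chain))        # True
chain[0]["records"] = ["tampered"]  # rewrite history...
print(chain_is_valid(chain))        # ...and the chain no longer verifies: False
```

Real blockchains add consensus rules (proof of work or stake) on top of this structure, but the hash chaining alone is enough to show why the ledger, not the coin, is the reusable idea.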
The European enlightenment, the belief in rational thought, gave long-term impetus to scientific investigation of the world and humanity's place within the scheme of things. However, the spur which led to classificatory systems for plants and animals, to taxonomies of heavenly bodies and the investigation of human origins also led to spurious science when linked to the prevailing needs of those in power. First and foremost of those needs is the maintenance of power, often, it appears, by generating 'rational' evidence as required. All it needs are the correct criteria. Thus in 1879, when equal suffrage for English women, for example, was still half a century away, it was possible for one of the founders of social psychology to state with complete assurance: "In the most intelligent races, as among the Parisians, there are a large number of women whose brains are closer in size to those of gorillas than to the most developed male brains... they represent the most inferior forms of human evolution and that they are closer to children and savages than to an adult, civilised man... Without doubt there exist some distinguished women, very superior to the average man, but they are exceptional as the birth of any monstrosity, as for example, of a gorilla with two heads, consequently, we may neglect them entirely." Thus cranial capacity (physical and measurable) is asserted as the index of intelligence (however that is to be defined). Women, children, savages and gorillas are excluded from the category that counts. The subtlety of 'science' in a context like this is not to be underestimated.
Consider, too, this caption from a work with ethnological (and imperial) aspirations: "This well-to-do farmer of Northern Tirol represents the fine highlander stock, which has been found to possess the largest average brain capacity of all races yet closely studied by men of science." The reliance on animal breeding terminology is in period (and linked to other important issues), but what is most relevant here is the emotive constellation in the description: 'fine highlander stock', 'largest average brain capacity' and 'well-to-do'. All these characteristics attach themselves to the male farmer. The poor need not apply.
The Rapaälven making its way through the Rapadalen valley in Northern Sweden.

After over a week of hiking through pure wilderness, I reached the summit of Skierffe together with three friends. We were blown away by the view and completely in awe of the beautiful shape of the river's course: little lakes in between river channels of different sizes, as well as dense vegetation forming a habitat for so many animals, all controlled by erosion and the force of the water cycle. A few days later we learned how difficult it is for humans to hike through terrain like this, but how fast and quietly a moose can move right across the valley.

The Rapaälven is the biggest river in the Sarek National Park, the Swedish part of Lapland. Four smaller rivers (Smájllajåkkå, Mikkájåkkå, Guohperjåkkå, Áhkáåkkå) form this 75 km-long stream, which drains around 30 different glaciers and the surrounding national park.

Description by Florian Konrad, as it first appeared on imaggeo.egu.eu

Imaggeo is the EGU's online open access geosciences image repository. All geoscientists (and others) can submit their photographs and videos to this repository and, since it is open access, these images can be used for free by scientists for their presentations or publications, by educators and the general public, and some images can even be used freely for commercial purposes. Photographers also retain full rights of use, as Imaggeo images are licensed and distributed by the EGU under a Creative Commons licence. Submit your photos at http://imaggeo.egu.eu/upload/.
Word of the Day: July 6, 2020

legerdemain

What It Means
1 : sleight of hand
2 : a display of skill and adroitness

legerdemain in Context
"An example of Mr. Northam's political legerdemain is his tax proposal, which avoided the minefields of income or sales tax increases. Instead, he suggested hiking the gas tax while scrapping mandatory annual vehicle inspections and halving vehicle registration fees." — The Washington Post, editorial, 20 Dec. 2019

"One must find the resonance between ancient and contemporary, blending incongruous elements in a way that seems not only right but inevitable: telling the story of a founding father with hip-hop lyrics, as in 'Hamilton,' or presenting the myth of Theseus in the milieu of reality television as in 'The Hunger Games.' Kekla Magoon manages a similar feat of legerdemain in 'Shadows of Sherwood,' her compelling reboot of the Robin Hood myth." — Rick Riordan, The New York Times, 23 Aug. 2015

Did You Know?
In Middle French, folks who were clever enough to fool others with fast-fingered illusions were described as leger de main, literally "light of hand." English speakers condensed that phrase into a noun when they borrowed it in the 15th century and began using it as an alternative to the older sleight of hand. (That term for dexterity or skill in using one's hands makes use of sleight, an old word from Middle English that derives from an Old Norse word meaning "sly.") In modern times, a feat of legerdemain can even be accomplished without using your hands, as in, for example, "an impressive bit of financial legerdemain."
Best Tips For Group Discussion

A group discussion can be defined as a formal discussion involving 10 to 12 participants in a group. It is a technique used by organisations to evaluate a candidate's personality traits and ability to work in a team. In this method, the group of candidates is given a topic or a situation and asked to discuss it within the group. A conclusion may or may not be drawn.

Group discussion is a popular selection process apart from regular tests and interviews. Tests and interviews assess a candidate technically. In a professional setting, however, the candidate must also be able to perform when working with other people.

Importance of Group Discussion
- Enhances learning of a subject.
- Increases critical thinking.
- Helps develop problem-solving skills.
- Improves decision-making skills.
- Improves communication skills.
- Builds confidence and a positive attitude.

Skills Needed in a Group Discussion
- Communication skills.
- Knowledge and ideas regarding the given subject.
- Leadership and coordinating capabilities.
- Exchange of thoughts.
- Addressing the group as a whole.
- Thorough preparation.

Guidelines For a Successful Group Discussion

A. Be a team player
The foremost objective of a group discussion is to assess an individual's ability to perform in a team. Being a team player is a strong personality trait and, at times, a difficult one too. An individual's communication skill or perspective alone is seldom what matters most in a GD. What counts is the participation of every member and jointly reaching a mutual conclusion. For any professional, being an active team member is essential to success. A good team member will have the following qualities:
1. Builds a positive rapport with fellow members.
2. Encourages other members to participate.
3. Respects other members' opinions.
4. Does not interrupt while another member is speaking.
5. Participates in the discussion.

B. Reasoning ability
Careful arguments must be made in group discussions. A group discussion has many participants with all kinds of sensibilities, so a speaker must be very careful while presenting his or her views to the group. Sometimes the topics of group discussion can be sensitive: religion-wise, ethnicity-wise, caste-wise, etc. So arguments must always be supported by appropriate facts and figures.

C. Leadership role
"A leader is an authority who influences the group towards achieving the objective." The leader in a discussion plays the role of a facilitator, and often has to act in situations like these:
1. A discussion where participants do not speak much and are unable to build a proper rapport.
2. A discussion where participants get emotionally charged, which results in a chaotic situation.
3. A discussion where participants discuss the topic in an aggressive manner.
In situations like these, the leader steps in and facilitates the discussion. The leader interrupts and gets the group back to the subject of discussion, then coordinates the members and their efforts. The leader inspires and motivates the team members to express their views and collectively reach a conclusion.

D. Qualities of a good leader
A leader should have the following qualities:
1. The leader shows the direction when the group moves away from the topic of discussion.
2. The leader coordinates the efforts of all team members in the discussion.
3. The leader stimulates the discussion and contributes his or her own valuable insights.
4. The leader motivates the team members to express their views and reach a collective, mutual conclusion.
A leader is not a mere coordinator in a discussion; the coordinator's role is secondary. A good leader contributes to the discussion with his or her ideas and opinions, and stimulates and steers the conversation towards achieving a goal.
June 2, 1537 – Pope Paul III banned the enslavement of Indians.
June 2, 1692 – Bridget Bishop became the first person to go to trial in the Salem witch trials in Salem, Massachusetts. She was found guilty and hanged on June 10.
June 2, 1731 – Martha Washington was born Martha Dandridge on the Chestnut Grove Plantation in New Kent County, Virginia.
June 2, 1743 – Italian occultist and explorer Alessandro Cagliostro was born in Albergheria, the old Jewish Quarter of Palermo, Sicily.
June 2, 1774 – The Quartering Act was enacted, allowing a governor in colonial America to house British soldiers in uninhabited houses, outhouses, barns or other buildings if suitable quarters were not provided.
June 2, 1774 – English-Australian explorer William Lawson was born in Middlesex, England. He was an explorer of New South Wales, Australia who co-discovered a passage inland through the Blue Mountains from Sydney.
June 2, 1776 – Major General John Thomas died of smallpox.
June 2, 1777 – The British captured Fort Ticonderoga.
June 2, 1815 – Future Union general Philip Kearny was born in New York City. He was killed at the age of 47 on Sept. 1, 1862 when he accidentally rode behind Confederate lines at Chantilly, Virginia. Confederate General Robert E. Lee, who had witnessed Kearny's daring battlefield exploits in Mexico, returned his body under a flag of truce.
June 2, 1838 – Confederate soldier Bright Waters was born in Burnt Corn, Ala. He enlisted at Bells Landing on July 28, 1861 and served with the Monroe Guards. He was taken prisoner at Gettysburg and was later exchanged. He was wounded near Fredericksburg on May 19, 1864 and was discharged. He is buried in Mt. Pleasant Methodist Cemetery at Skinnerton.
June 2, 1840 – English poet and novelist Thomas Hardy was born in Upper Bockhampton, Dorset. His books include “Far From the Madding Crowd” (1874), “The Return of the Native” (1878), “Tess of the d’Urbervilles” (1891) and “Jude the Obscure” (1895).
June 2, 1847 – Confederate heroine Emma Sansom was born in Social Circle, Ga. Around 1852, she and her family moved to just outside Gadsden, Ala.
June 2, 1862 – During the Civil War, “affairs” occurred at Galloway's Farm, Arkansas and near Rienzi, Mississippi. Skirmishes were also fought at Tranter's Creek, North Carolina and at Woodstock and Strasburg, Virginia.
June 2, 1863 – During the Civil War, skirmishes were fought at Jamestown, Kentucky and at Upperville, Virginia.
June 2, 1863 – The siege at Vicksburg, Miss. entered Day 15.
June 2, 1864 – Pvt. John L. Nixon was killed in the Battle of Cold Harbor, Va. Earlier in the war, he enlisted with Co. D of the 5th Alabama Infantry. Co. D became Co. C after reorganization on April 27, 1862 under Capt. Thomas Mercer Riley.
June 2, 1864 – Union General Ulysses S. Grant prepared for a major assault along the entire Confederate front. He attacked the next day.
June 2, 1864 – During the Civil War, ordered to pursue and destroy General Nathan Bedford Forrest, General John Sturgis left Memphis with a force of 8,100 men. An “affair” also occurred at Covington, Virginia.
June 2, 1865 – In an event that is generally regarded as marking the end of the Civil War, Confederate General Edmund Kirby Smith, commander of Confederate forces west of the Mississippi, signed the surrender terms offered by Union negotiators. With Smith's surrender, the last Confederate army ceased to exist, bringing a formal end to the bloodiest four years in U.S. history. The war that cost 620,000 American lives was over.
June 2, 1883 – The first baseball game under electric lights was played in Fort Wayne, Indiana.
June 2, 1886 – Grover Cleveland became the second U.S. president to get married while in office. He was the first to have a wedding in the White House.
June 2, 1887 – German SS officer Gottlieb Hering was born in Warmbronn, German Empire.
June 2, 1896 – Guglielmo Marconi's radio telegraphy device was patented in Great Britain.
June 2, 1896 – Graduation exercises were scheduled to take place at 10 a.m. at the Southwest Alabama Agricultural School in Evergreen, Ala. Diplomas were to be awarded by the Rev. B.F. Riley of Athens, Ga. The Class of 1896 included Mary Howard Watkins, Elvie Liverman, Sallie Stallworth, Mary Robbins Sampey, Fannie Kemp Dennis, Hallie Watkins, Mary Liverman, Arthur Cunningham and Willie Wilton Watts.
June 2, 1897 – Mark Twain, at age 61, was quoted by the New York Journal as saying "the report of my death was an exaggeration." He was responding to the rumors that he had died.
June 2, 1907 – Harlem Renaissance writer Dorothy West was born in Boston, Mass.
June 2, 1910 – Pygmies were discovered by explorers in Dutch New Guinea.
June 2, 1911 – W.B. Coker, who lived a few miles west of Evergreen, Ala., left at The Evergreen Courant's office the first cotton bloom reported in Conecuh County for the 1911 season.
June 2, 1911 – During the night, several casks of beer and three cases of liquor were stolen from one of the lower rooms of the Covington County Jail in Andalusia, Ala. The booze had been seized by Sheriff Livings and stored there for safekeeping.
June 2, 1911 – Deputy Collector of Internal Revenue W.F. Nabors returned to Mobile on this Friday afternoon after a “strenuous trip” through Monroe County, Ala., where he captured and seized five complete distilling plants over a two-day period.
June 2, 1913 – Comic novelist Barbara Pym was born in Oswestry, Shropshire, England.
June 2, 1915 – The final day of Monroe County High School's four-day fourth-annual commencement exercises continued on this Wednesday with baseball games between MCHS and Finchburg at 9 a.m. and 3 p.m. in Monroeville, Ala. Graduation exercises began at 8 p.m. with the address delivered by Dr. W.M. Murray of Brewton.
June 2, 1915 – Southwest Alabama Agricultural School graduation exercises were scheduled to be held at the Conecuh County Courthouse at 8 p.m. in Evergreen, Ala. Congressman S.H.
Dent was scheduled to deliver the commencement address. Earlier that day, a baseball game between the school and Brewton was scheduled to be played.
June 2, 1915 – The Evergreen Courant reported that the “new Croom building” was now complete. The ground floor was to be occupied by J.H. Dey and the second floor was to be used as a Pythian and Woodmen hall.
June 2, 1918 – Kathryn Tucker Windham, who lived in Thomasville as a child and worked in Camden for the Area Agency on Aging, was born in Selma, Ala. She promoted Alabama's lifeways and folk traditions with her writings, photography, and radio commentaries. She is best known for her series of ghost story collections, beginning with “13 Alabama Ghosts and Jeffrey” in 1969, as well as numerous other publications, photography, and storytelling.
June 2, 1919 – During World War I, Army Sgt. Dewey E. Rayboun of Thomasville, Ala. “died from disease.”
June 2, 1924 – U.S. President Calvin Coolidge signed the Indian Citizenship Act into law, granting citizenship to all Native Americans born within the territorial limits of the United States.
June 2, 1926 – The Evergreen Courant reported that the “new highway” between Evergreen and McKenzie was “rapidly nearing completion.” Grading work had reached the intersection of Main Street.
June 2, 1926 – The Evergreen Courant reported that Edwin C. Page had recently completed his “academic course at the University” and would begin the study of law next fall.
June 2, 1931 – Australian politician Gerald Beresford Ponsonby Peacocke was born. He went on to serve as a member of the New South Wales Legislative Assembly.
June 2, 1935 – George Herman "Babe" Ruth announced that he was retiring from baseball.
June 2, 1935 – Novelist Carol Shields was born in Oak Park, Ill. Her 1993 novel, “Stone Diaries,” won the Pulitzer Prize.
June 2, 1941 – The first cotton bloom of the season arrived at The Courant on this Monday and was sent by E.A. Andrews of Evergreen, Ala., Rt. C.
June 2, 1941 – National Baseball Hall of Fame first baseman Lou Gehrig died at the age of 37 in New York City of the degenerative disease amyotrophic lateral sclerosis. He played his entire career (1923-1939) for the New York Yankees and was inducted into the Hall of Fame in 1939.
June 2, 1943 – Aliceville, Alabama's World War II prisoner-of-war camp received its first contingent of captured German soldiers. By the end of the week, Aliceville housed 3,000 prisoners. Nearly 5,000 POWs eventually would be imprisoned in the facility, the largest of four such camps in Alabama.
June 2, 1948 – German SS officer Karl Brandt, 44, was hanged at Landsberg Prison, Landsberg am Lech.
June 2, 1948 – German SS officer Wolfram Sievers, who was the managing director of the Ahnenerbe from 1935 to 1945, was executed by hanging for crimes against humanity at Landsberg prison in Bavaria.
June 2, 1949 – Chester “Check” Ellis Jr. was to begin working out with the Brewton Millers of the Alabama State Baseball League on this night and was expected to sign with the Class D club a few days later. “Check,” a 22-year-old right-handed pitcher, talked with Miller manager Norman Veazy on Mon., May 20, and was told to report for practice on Thurs., June 2. “Check” had been attending Troy State Teachers College, and for the past two months had pitched for Colquitt in a very fast semi-pro loop in South Georgia. He was a star athlete at Evergreen High School, where he received his diploma, and played with the Evergreen Greenies in 1948 after completing a hitch in the Navy.
June 2, 1949 – Thirty-four men and four ladies were scheduled to tee off on this afternoon at the Evergreen Country Club golf course in the Evergreen Golf Club's Handicap Tournament. The golfers were to start play at 1:30 on this afternoon. Men in the tourney included Truman Hyde, Jack Newman, Horace Deer, C.T. Ivey, Temple Millsap, Dr.
Bill Turk, Waynard Price, Sam Cope, Henry Sessions, Roy Pace, Ray Canterbury, Lawton Kamplain, Bayne Petrey, Frank Johnson, Bob Bozeman, Bonnie King, Sam Granade, Bill Cardwell, C.A. Jones, Dr. Joe Hagood, Alfred Long, Harry Monroe, Byron Warren, Willard Williams, Edwin Page, Knud Nielsen, Zell Murphy, L.K. Wiggins, Hub Robinson, Bob Kendall Jr., Billy Carleton, Vernon Millsap, Sonny Prie and Herman Bolden. Women in the tourney included Helen Kamplain, Velma Cope, Mary Nielsen and Katie Newman.
June 2, 1949 – The Evergreen Courant reported that the Bank of Evergreen was being remodeled. Work was fast approaching completion on a vast remodeling project at the Bank of Evergreen. The building was being done over entirely on the inside. The working space was being shifted over from the west to the east side of the building, incidentally shifting the lobby, which had been decreased in size considerably to make room for an office in front. In addition to the old entrance, which was to be retained as heretofore, another entrance had been made to enter the lobby from the hallway. New and modern fixtures were being installed, including individual tellers' cages.
June 2, 1955 – Former Evergreen Courant editor and publisher Lamar W. Matkin passed away at the age of 79 and is buried at Pine Crest Cemetery in Mobile, Ala.
June 2, 1959 – Ted Williams of the Boston Red Sox got the 2,500th hit of his career.
June 2, 1964 – Frank T. Salter of Evergreen, Ala. won nomination to the office of Judge of Probate of Conecuh County over veteran Judge Lloyd G. Hart in this Tuesday's Democratic Primary Election. Nomination was tantamount to election in Conecuh. Salter rolled past Hart by a complete, but unofficial, count of 1,935 to 1,591. His margin of 344 votes came as a surprise to many political observers, although his victory had been predicted freely in the closing days of the runoff campaign. Salter, brother of State Rep. Wiley Salter, carried 27 of the county's 38 boxes.
The new judge-nominate was 38 years old and a native of Conecuh County. He graduated from the Lyeffion High School, earned his BS degree at Troy State College and his Masters at Auburn University. Salter served overseas in World War II with the U.S. Army and was recalled to active duty and served overseas again during the Korean War. Hart was elected judge of probate in 1946 and was re-elected without opposition in 1952 and 1958.

June 2, 1965 – During the Vietnam War, the first contingent of Australian combat troops arrived by plane in Saigon. They joined the U.S. 173rd Airborne Brigade at Bien Hoa air base.

June 2, 1966 – Surveyor I soft-landed on the moon and began transmitting detailed photos.

June 2, 1967 - Capt. Howard Levy, 30, a dermatologist from Brooklyn, was convicted by a general court-martial in Fort Jackson, South Carolina, of willfully disobeying orders and making disloyal statements about U.S. policy in Vietnam. Levy had refused to provide elementary instruction in skin disease to Green Beret medics on the grounds that the Green Berets would use medicine as “another tool of political persuasion” in Vietnam.

June 2, 1976 – NBA point guard Earl Boykins was born in Cleveland, Ohio. He went on to play for Eastern Michigan, the New Jersey Nets, the Cleveland Cavaliers, the Orlando Magic, the Los Angeles Clippers, the Golden State Warriors, the Denver Nuggets, the Milwaukee Bucks, the Charlotte Bobcats, the Washington Wizards and the Houston Rockets.

June 2, 1983 – Leroy, Ala. native and Oakland A’s first baseman Kelvin Moore appeared in his final Major League Baseball game.

June 2, 1985 - Tommy Sandt was ejected from a Major League Baseball game before the national anthem was played. He had complained to the umpire about a call against his team the night before.

June 2, 1990 - Randy Johnson achieved the first no-hitter in Seattle Mariner history.
June 2, 1990 - The Lower Ohio Valley tornado outbreak spawned 66 confirmed tornadoes across four states, starting on this date.

June 2, 1993 – National Baseball Hall of Fame first baseman Johnny Mize passed away in Demorest, Ga. at the age of 80. During his career, he played for the St. Louis Cardinals, the New York Giants and the New York Yankees. He was inducted into the Hall of Fame in 1981.

June 2, 1995 - Hideo Nomo got his first Major League Baseball victory.

June 2, 1996 - Tim Belcher of the Kansas City Royals won his 100th career game.

June 2, 1997 – In Denver, Timothy McVeigh was convicted on 15 counts of murder and conspiracy for his role in the 1995 bombing of the Alfred P. Murrah Federal Building in Oklahoma City. He was executed four years later.

June 2, 2000 - Fred McGriff of the Tampa Bay Devil Rays became the 31st major league player to hit 400 career home runs.

June 2, 2003 - In Seville, Spain, a chest containing the supposed remains of Christopher Columbus was exhumed for DNA tests to determine whether the bones were really those of the explorer. The tests were aimed at determining if Columbus was currently buried in Spain's Seville Cathedral or in Santo Domingo in the Dominican Republic.
The first trace of a professional linked to radio advertising appears, according to the data in their conclusions, in 1926, at Radio Barcelona, the first radio station in Spain. The only thing known about her is that her name was Rosa and that she was the secretary of the advertising department. Even so, her work went beyond management and administration and, according to Espinosa, included copywriting and account-management duties. Rosa wrote the advertising texts that were later read on the air and decided which campaigns went to air.

When radio was starting out, male voices dominated the airwaves, but that soon began to change. Female voices entered the airwaves, partly for acoustic reasons (they sounded better) but also for reasons of audience. Women were the main listeners of radio, as they were the ones at home during broadcast hours. In addition, those responsible for the first commercial radio stations in Europe and North America realized that women were a very attractive audience in terms of potential income: they were already the target of much of the advertising published in the press. “Loyalty to women as listeners meant, in addition to having a stable audience, opening an easily exploitable commercial line,” the researcher explains in her conclusions.
Expansion and exposure are a human tendency, and technology has adapted to it. In today's world, technology is changing rapidly, and so is business. The internet has given every business new exposure, regardless of the boundaries and barriers of location and physical reach to the customer. Web hosting plays an important part in the online industry. Having an online presence is now a necessity, and websites are moving into the online business phase. So when you start planning your online business, you have to think about web hosting and a domain name to register your business name under. While choosing, you should be aware of which hosting platform is suitable for your website. To understand and choose the best hosting provider for your website, you should know the basics of hosting and the types of web hosting.

Let's start with: what is web hosting? Web hosting is the service that allows businesses, organizations, and individuals to post pages or sell products on the internet. It is the technology and set of services that make a website or webpage visible on the internet. Webpages and websites are hosted, or stored, on special computers called servers. To go live, you first register your domain name, and then your website is published. When someone wants to visit your website, they type the website address or domain name into their browser. Their computer then connects to the server, and your webpage is delivered to them.

Web hosting can be divided in two ways: first by technology and second by performance. Based on technology, the basic categories are Linux web hosting and Windows web hosting. Many different operating systems can make your website publicly available, and they can be set up as a web hosting environment using prebuilt control panels such as the world-class cPanel/WHM, Plesk, etc.
These control panels also work with virtualization technologies such as Xen, KVM, and OpenVZ; we will look at the different hosting technologies and their benefits briefly below. How to choose the right technology is a common question, and it can be decided based on the language your website is developed in. You can choose Linux hosting when your site is built with PHP, HTML, Python, or Perl, and you need Windows hosting especially when you have ASP.NET, or MSSQL as the backend database.

We have seen the technology-based approach; now let's see how hosting affects website performance. Performance is like two sides of a coin, and it is not complete without the points below:

1. How well optimized are your website code and database? Optimizing your website matters both for search engines and from the user's point of view. The code is taken care of by the developer of the website; it should be flawless and follow all the SEO do's and don'ts. Database structure and SQL statements also play a vital role in the optimization and performance of the website. Together, these things determine how fast the website loads and how quickly it functions. Along with this, you should also focus on the overall user experience of your site. To find out whether your site has a good user experience, ask 10-15 people among your contacts to review your website or app.

2. How many resources are allocated for your website/application? When you choose a hosting provider, you should ask about the resources they will allocate to your website or hosting space. These resources include disk, bandwidth, RAM, CPU, etc. All of these are important to run a website smoothly. Let's look at the different hosting types and their performance.
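Point 2 above can also be checked directly once you have shell access. The sketch below uses only Python's standard library to report the CPU and disk a Linux host actually provides, so the numbers can be compared against the plan's advertised allocation. The function name, mount point, and units are our own illustrative choices, not part of any hosting provider's tooling.

```python
import os
import shutil

def report_resources(mount_point="/"):
    """Report CPU count and disk usage so the numbers can be compared
    with the hosting plan's advertised allocation."""
    cpus = os.cpu_count() or 1
    disk = shutil.disk_usage(mount_point)
    gib = 1024 ** 3
    print(f"CPU cores : {cpus}")
    print(f"Disk total: {disk.total / gib:.1f} GiB")
    print(f"Disk free : {disk.free / gib:.1f} GiB")
    return cpus, disk

report_resources()
```

Run it over SSH on the hosting account; bandwidth and RAM limits usually have to be confirmed with the provider, since they are enforced outside the guest.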
• Shared Hosting
• VPS Hosting
• Dedicated Hosting

- Shared Hosting – As its name suggests, this type of hosting shares the server's resources among all the other users hosted on the same machine, so the share available to your site can vary from time to time. This makes it the most sensitive hosting type to consider if you want to protect your website's privacy or uptime. If you need maximum uptime and minimal problems with website loading speed, you should not choose shared hosting as your platform. It is mainly for bloggers or informational websites with little traffic.

- VPS Hosting – A VPS uses virtualization technology: you get guaranteed access to your resources up to a certain share of the main node's capacity, and that capacity can be used as and when you need it. You should know the maximum resources your website or application will consume; if you do, this is a good solution for your website's needs. Virtual private servers can be differentiated by the technology they run on, such as:

  - VMware server
  - Parallels Virtuozzo Containers

  The benefit you get out of virtualization is better utilization of the available resources by managing them according to your needs. This flexibility in turn reduces the expense of running the website or server, and a major advantage of a VPS is that there is no overhead for purchasing or maintaining server hardware. In simple words, consolidating multiple physical servers into virtual servers reduces everything required to run them: server administration, power consumption, space, cooling, and a lot more.

- Dedicated Server Hosting – Going beyond VPS technology, a dedicated server allocates 100% of the available resources to your website or applications, which rules out sharing your resources with other websites and applications.
It also isolates your website from other, potentially infected sites. A dedicated server gives you complete freedom to optimize the server's applications and software, which directly affects your website's speed and performance, and you can implement additional layers of security to protect your data and privacy. Dedicated server hosting also offers the flexibility of customizing the server to a client's unique requirements for CPU, RAM, disk allocation, and software. With shared hosting and VPS hosting, we are restricted to the resource allocation defined by the hosting packages.

The do's and don'ts above should have given you an idea of the types of hosting and how to choose the right one for your needs. Once you are clear about what and how much you need, your next step is to choose a hosting provider that delivers maximum uptime and hassle-free support at all times. Many of us are not aware of technical server-side issues, and for those we need good technical support that not only understands hosting problems but also solves them in a timely manner. On the basis of our experience troubleshooting in the web hosting world, we as a hosting provider would suggest our readers try Hostripples Hosting, which covers all segments, such as shared hosting, VPS hosting, SSD VPS, dedicated servers, and cloud hosting, and the best part is the support we offer to our clients. We hope this article clears up your thoughts on web hosting and getting started with it. If you still have any doubts, please feel free to comment and share your views.
Researchers engineer resistance to ionic liquids in biofuel microbes

Posted: Mar 26, 2014

(Nanowerk News) Researchers with the U.S. Department of Energy (DOE)’s Joint BioEnergy Institute (JBEI), a multi-institutional partnership led by Berkeley Lab, have identified the genetic origins of a microbial resistance to ionic liquids and successfully introduced this resistance into a strain of E. coli bacteria for the production of advanced biofuels. The ionic liquid resistance is based on a pair of genes discovered in a bacterium native to a tropical rainforest in Puerto Rico.

“We identified two genes in Enterobacter lignolyticus, a soil bacterium that is tolerant to imidazolium-based ionic liquids, and transferred them as part of a genetic module into an E. coli biofuel host,” says Michael Thelen, a biochemist with JBEI’s Deconstruction Division. “The genetic module conferred the tolerance needed for the E. coli to grow well in the presence of toxic concentrations of ionic liquids. As a result, production of a terpene-based biofuel was enhanced.”

[Image caption: JBEI researchers identified the genetic origins of a resistance to ionic liquids found in Enterobacter lignolyticus, a soil bacterium discovered in a rainforest in Puerto Rico.]

Thelen, a senior investigator with DOE’s Lawrence Livermore National Laboratory (LLNL), is the corresponding author of a paper describing this work in Nature Communications. The paper is titled “An auto-inducible mechanism for ionic liquid resistance in microbial biofuel production”. Thomas Ruegg, a Ph.D. student from Basel University associated with LLNL, is the lead author. Co-authors are Eun-Mi Kim, Blake Simmons, Jay Keasling, Steven Singer and Taek Soon Lee.

The burning of fossil fuels continues to release nearly 9 billion metric tons of excess carbon into the atmosphere each year to the detriment of global climate trends.
Advanced biofuels synthesized from the cellulosic biomass in non-food plants represent a clean, green, renewable alternative to today’s gasoline, diesel and jet fuels. JBEI researchers have previously engineered strains of E. coli bacteria to digest the cellulosic biomass of switchgrass, a perennial grass that thrives on land not suitable for food crops, and convert its sugars into biofuels and chemicals. However, the ionic liquids used to make the switchgrass digestible for the E. coli were also toxic to them and had to be completely removed through several washings prior to fermentation.

“The extensive washing required for complete ionic liquid removal is not feasible in large-scale, industrial applications,” says Blake Simmons, a chemical engineer who heads JBEI’s Deconstruction Division. “An ideal and more sustainable process is to balance the costs of removing the ionic liquid with the fermentation performance by using biofuel-producing microbes that can tolerate residual levels of ionic liquids.”

Two years ago, JBEI researchers returned from an expedition to the El Yunque National Forest in Puerto Rico with the SCF1 strain of Enterobacter lignolyticus, which had shown a tolerance to high osmotic pressures of the sort generated by exposure to ionic liquids. A model was developed at JBEI in which the SCF1 bacteria are able to resist the toxic effect of an ionic liquid by altering the permeability of their cell membrane and pumping the toxic chemical out of the cell before damage occurs.

In this latest study, the JBEI researchers used a creative approach devised by lead author Ruegg to rapidly pinpoint the genes responsible for ionic liquid resistance in the genomic DNA of SCF1. “This genetic module encodes both a membrane transporter and its transcriptional regulator,” Ruegg says.
“While the pump exports ionic liquids, the substrate-inducible regulator maintains the appropriate level of this pump so that the microbe can grow normally either in the presence or absence of ionic liquid.”

The results of this study show a way to eliminate a bottleneck in JBEI’s biofuels production strategy, which relies on ionic liquid pretreatment of cellulosic biomass. It also shows how the adverse effects of ionic liquids can be turned into an advantage. “The presence of residual ionic liquids may prevent the growth of microbial contaminants, so that fermentation can proceed under more economical, aseptic conditions,” Thelen says. “Our findings should pave the way for further improvements in microbes that will contribute to the sustainable production of biofuels and chemicals.”

Source: By Lynn Yarris, Berkeley Lab
When we offer training to teachers and volunteers, we’re often asked if fidget toys, like stress balls and tangles, really work to help students to focus and manage behavior. Our answer? Yes. And no. Some days, beautifully. Other days? Not so much. Certain kids? You bet. Others? Hardly ever. Helpful, aren’t we?

Special education professionals agree that the effectiveness of fidget toys largely depends on the needs of the child. While fidgets are a popular recommendation at IEP planning meetings and workshops, they are not a cure-all. Sheri Halagan, a National Board Certified Teacher, comments, “I wish they were the answer for every child. They’re not.” Intervention Specialist Amy Belew, also a National Board Certified Teacher, agrees: “They become a distraction for so many kids.”

Teachers and volunteers wonder, then, why fidget toys work like “magic” for some kids, but not others. Child psychologist Dr. Sherri McClurg explains that this is based on the strengths and needs of the child. “Fidgets are great for kids with anxiety or spectrum disorders; kids with ADHD will struggle to use them.” She continues by explaining that fidget toys may fill a sensory need for students on the spectrum, and also can reduce tension and nervousness in kids who struggle with anxiety. Students with ADHD, however, may become focused on the fidget toy to the exclusion of the class discussion or activity.

Unfortunately, many volunteers and pastors don’t have the luxury of knowing students’ diagnoses in order to apply a “diagnostic and prescriptive” approach to intervention. So, then, what can the church folks do to help all students pay attention and participate effectively?

- Be a good student of your students. Observe them carefully and decide what strategies might be appropriate based on what you see and hear.
- Teach them HOW to fidget. “Kids don’t know as much as we think they know about how to act in the classroom,” Halagan shares.
Before handing out fidget toys, she advises, teachers should demonstrate their use and allow kids to practice. Later, if a student’s fidget toy is becoming a distraction (e.g. the stress ball is being thrown at a target across the room…), Halagan recommends cueing the child with, “Show me how to use that in our class.” This kind of prompt allows the child to remember the rules and practice them.

- Set limits. If the misuse of the fidget toy continues, Halagan asserts that it is okay to take the toy away with the assurance, “We’ll try this again next time.”
- Look at yourself. Dr. Rachel Jones, an elementary school principal, remarks, “Instead of scrutinizing THEIR behavior, change YOURS.” If students consistently struggle with inattention, teachers should take a look at the pace and content of lessons, the classroom environment and arrangement, and their own interactions with students. Often, changes in teacher behavior can yield great increases in students’ participation and attention.

So, what’s the bottom line? Fidget toys DO work, but they’re not a panacea…and that is something to celebrate, because every child is a unique and fabulous creation.

PS My favorite source for fidget toys: www.therapyshoppe.com
Why age reduces our stem cells' ability to repair muscle

September 7, 2014

Ottawa, Canada — As we age, stem cells throughout our bodies gradually lose their capacity to repair damage, even from normal wear and tear. Researchers from the Ottawa Hospital Research Institute and University of Ottawa have discovered the reason why this decline occurs in our skeletal muscle. Their findings were published online today in the influential journal Nature Medicine.

A team led by Dr. Michael Rudnicki, senior scientist at the Ottawa Hospital Research Institute and professor of medicine at the University of Ottawa, found that as muscle stem cells age, their reduced function is a result of a progressive increase in the activation of a specific signalling pathway. Such pathways transmit information to a cell from the surrounding tissue. The particular culprit identified by Dr. Rudnicki and his team is called the JAK/STAT signalling pathway.

"What's really exciting to our team is that when we used specific drugs to inhibit the JAK/STAT pathway, the muscle stem cells in old animals behaved the same as those found in young animals," said Dr. Michael Rudnicki, a world leader in muscle stem cell research. "These inhibitors increased the older animals' ability to repair injured muscle and to build new tissue."

What's happening is that our skeletal muscle stem cells are not being instructed to maintain their population. As we get older, the activity of the JAK/STAT pathway shoots up and this changes how muscle stem cells divide. To maintain a population of these stem cells, which are called satellite cells, some have to stay as stem cells when they divide. With increased activity of the JAK/STAT pathway, fewer divide to produce two satellite cells (symmetric division) and more commit to cells that eventually become muscle fibre. This reduces the population of these regenerating satellite cells, which results in a reduced capacity to repair and rebuild muscle tissue.
While this discovery is still at early stages, Dr. Rudnicki's team is exploring the therapeutic possibilities of drugs to treat muscle-wasting diseases such as muscular dystrophy. The drugs used in this study are commonly used for chemotherapy, so Dr. Rudnicki is now looking for less toxic molecules that would have the same effect.

The full article titled "Inhibition of JAK/STAT signaling stimulates adult satellite cell function" was published online September 7, 2014, by Nature Medicine. The studies conducted for this paper were supported by the U.S. National Institutes for Health, the Canadian Institutes of Health Research, the Stem Cell Network and the Ontario Ministry of Economic Development and Innovation.

About the Ottawa Hospital Research Institute

The Ottawa Hospital Research Institute is the research arm of The Ottawa Hospital and is an affiliated institute of the University of Ottawa, closely associated with its faculties of Medicine and Health Sciences. The Ottawa Hospital Research Institute includes more than 1,700 scientists, clinical investigators, graduate students, postdoctoral fellows and staff conducting research to improve the understanding, prevention, diagnosis and treatment of human disease. Research at Ottawa Hospital Research Institute is supported by The Ottawa Hospital Foundation.

About the University of Ottawa

The University of Ottawa is committed to research excellence and encourages an interdisciplinary approach to knowledge creation, which attracts the best academic talent from across Canada and around the world.

For further information, please contact:

Communications and Public Relations, Ottawa Hospital Research Institute
(o) 613-737-8899 x73687 or (c) 613-323-5680

Media Relations Officer, University of Ottawa
(o) 613-562-5800 x2529 or (c) 613-762-2908
The electricity grid delivers different amounts of power at different times, depending on the demand. Generally daytime has higher demand than nighttime because people are awake and using their electric devices. Generally midsummer is a high demand time in warm, wealthy countries because people will use a lot of electricity on air conditioning units at this time. In cold climates, mid-winter is high demand for many reasons, including lights, warmth, and car block heaters.

Summer or Winter Peak

The ‘peak’ energy usage refers to the most energy used in a certain interval of time. For example, if we look at an entire year in Saskatchewan, Canada, we will notice that the most electricity demanded at once is in the coldest part of the year. We refer to this as a winter peak. In other places, even within the same country, we can see instead a summer peak. In Ontario, they actually demand more electricity in midsummer than midwinter. This is primarily due to the electric demands of air conditioning systems, which have become increasingly popular in the developed world in the last several decades. It is important to keep in mind that quite often the greatest energy demand during a year will be on the hottest days or the coldest days. Each presents a different technological challenge to those who design and operate a power grid.

In the most general sense, we are talking about moving power from one place to another. The electric grid accomplishes this by having power lines between generation stations and demand locations such as homes and businesses. Some general rules apply to this sort of technology. The more power you have to move, the more expensive it will be to build the infrastructure to do it. The further you have to move the power, the more energy losses you are going to have in doing so. These rules apply in general, but the specifics of a problem will dictate what sort of solution is applied.
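The two rules of thumb about moving power can be made concrete with a back-of-the-envelope sketch of resistive line losses. This assumes a simple single-conductor model with P = V·I; the resistance, distance, and voltage figures below are invented for illustration, not taken from any real transmission line.

```python
def line_loss_fraction(power_w, voltage_v, ohm_per_km, length_km):
    """Fraction of transmitted power lost to resistive (I^2 * R) heating,
    for a simple single-line model where P = V * I."""
    current_a = power_w / voltage_v
    resistance_ohm = ohm_per_km * length_km
    return current_a ** 2 * resistance_ohm / power_w

# Moving 1000 MW across 1000 km on a line of 0.03 ohm/km (illustrative):
for kv in (230, 500, 765):
    frac = line_loss_fraction(1e9, kv * 1e3, 0.03, 1000)
    print(f"{kv:>3} kV line: {frac:.1%} of the power is lost")
```

Losses fall with the square of the transmission voltage, which is one reason long-distance links are built at very high voltages.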
For instance, sometimes long-distance, high-power transmission can be the best answer if an excellent power source just happens to be far away from demand. In such a situation, the costs of the transmission infrastructure are factored into the project from day one to determine the real feasibility of the scheme. A good example of development of this type is the James Bay Project in Quebec, Canada. This project built an astonishingly large hydroelectric facility several hundred kilometers from the major cities of the region such as Montréal. The project was well-conceived however, and has proven to be a good investment despite these challenges.

Capacity factor refers to the amount of power that a power plant produced compared to the amount that it could nominally have produced if it were running at maximum the entire time. For example, let's say that a nuclear plant that could provide 1 GW of power normally was shut down for one month out of ten. So for nine months it produces power at 1 GW, and for one month it produces nothing as it is being refueled or goes through a maintenance cycle. For this ten month period it would have a capacity factor of 90%.

Another example, this time with wind power. The Centennial Wind Farm in Saskatchewan in its first year of operation had a 42.4% capacity factor1 . The farm is 150 MW, so we could say that on average it was producing about 42.4% of that, or 63.6 MW. This may not sound very good, but this is actually a very impressive capacity factor for wind power compared to most other places. Many nations with tremendous wind investment have average capacity factors in the range of 20-30%.

Forms of Power Resources

Dispatchable energy sources are those sources that can be turned on and off in a relatively short amount of time. This could refer to time intervals of a few seconds up to a couple of hours. Within the category of dispatchable power there are a lot of different technologies.
On the fast end we have forms like hydroelectricity which can be fired up in minutes. On the slower end we have things like most biomass or coal plants, which can take hours to change their energy output significantly. Natural gas turbines are a very common dispatchable source, and they can generally be ramped up in minutes. We learned that SaskPower uses some gas turbines that stay spinning at relatively high speed but not producing power2 . In this way they burn very minimal fuel but are ready for almost instant deployment in energy production. For a more in-depth look at dispatchable power, see our article: How can renewables deliver dispatchable power on demand? In contrast, non-dispatchable refers to everything else. This includes all current nuclear power plants, most coal power plants, and run-of-river hydroelectric plants. It also includes intermittent energy sources such as wind, solar photovoltaics, and wave energy. These power sources cannot be relied upon to meet demand in a short amount of time, so they are non-dispatchable. The intermittent sources such as wind, solar photovoltaics, run-of-river hydroelectric, and waves, are those sources for which we do not control their power output directly. These are sources that cannot be relied upon to meet power demands. They should instead be regarded as an energy resource. The more the wind blows through our wind turbines, the less natural gas we have to burn, or the less water we have to run through out of our hydroelectric reservoirs. These intermittent sources can be used to reduce the amount of fuel we use for meeting demand with dispatchable sources. Baseload in common usage refers to power stations that are always on and are generally the biggest generation units. If this term is used it generally refers to coal and nuclear power. Sometimes reservoir-based hydroelectric power sources are also included if they have large enough reservoirs or have proven reliability. 
These are generally the power sources used to meet most of the demand in an electrical system. These sources are always on unless they are down for maintenance, repair or refueling in the case of nuclear. Cogeneration (also known as combined heat and power) refers to the use of waste heat from a thermal power plant to do other useful things. Coolant being expelled from an electric generation system may be still quite hot. Different uses of the waste heat are possible depending on the temperature of the available coolant, and the location of nearby industry or residences. In the case of industry nearby, the heat may be used for processes that require high temperature. The heat may also be used for heating buildings for industry or residential use. Cogeneration is essentially taking advantage of a natural synergy between thermal electric power plants and other uses for heat. - Centennial Wind Power Facility Rides the Wind to a Great First Year. SaskPower, June 14th, 2007. Retrieved Sep 9th, 2010. [↩] - Gary Wilkinson, SaskPower: Powering a Sustainable Energy Future, Saskatchewan’s Energy Future Public Consultation, Saskatchewan Legislature, available in the Standing Committee on Crown and Central Agencies Archives [↩]
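To close the glossary, two of the quantitative ideas above — capacity factor, and intermittent sources reducing the fuel burned by dispatchable ones — can be sketched numerically. The Centennial figures (150 MW nameplate, 42.4% capacity factor) come from the text; every other number below is invented purely for illustration.

```python
def capacity_factor(energy_mwh, nameplate_mw, hours):
    """Fraction of the energy a plant could have produced at full output."""
    return energy_mwh / (nameplate_mw * hours)

# Nuclear example from the text: a 1 GW plant down one month in ten
# (using a round ~730 hours per month).
nuclear_cf = capacity_factor(1000 * 9 * 730, 1000, 10 * 730)

# Centennial Wind Farm: 150 MW nameplate at a 42.4% capacity factor
# corresponds to an average output of about 63.6 MW.
centennial_avg_mw = 0.424 * 150

def gas_needed(demand_mw, wind_mw, gas_max_mw):
    """Toy dispatch: take intermittent wind output as it comes; a
    dispatchable gas fleet fills the remaining gap each hour (any
    surplus wind is simply curtailed)."""
    hourly = []
    for demand, wind in zip(demand_mw, wind_mw):
        gap = max(demand - wind, 0)
        hourly.append(min(gap, gas_max_mw))
    return hourly

demand = [400, 450, 500, 480]   # MW over four hours, illustrative
wind   = [120,  60,  10, 200]   # intermittent output varies hour to hour
gas    = gas_needed(demand, wind, gas_max_mw=500)

print(f"Nuclear capacity factor  : {nuclear_cf:.0%}")
print(f"Centennial average output: {centennial_avg_mw:.1f} MW")
print(f"Dispatchable gas schedule: {gas} MW")
```

The more the wind blows, the smaller the gas schedule becomes — the fuel-offset role the glossary describes for intermittent sources.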
Student-designed Study Explores the Unknown Effects of Third-hand Smoke

February 17, 2020, by Brooke Thames

April Hurlock '23's study analyzes the reactions that chemicals in cigarette smoke create with the materials they cling to. Emily Paine, Communications

Most people know the risks of first-hand and second-hand smoke, but April Hurlock '23 is finding that a third type of cigarette smoke exposure can be just as harmful. Third-hand smoke exposure occurs when chemicals stick to surfaces and fabrics after a cigarette has been burned. Hurlock's work — under the guidance of Professor Douglas Collins, chemistry — analyzes the reactions these chemicals create with the materials they cling to.

Hurlock assisted in designing a device that captures cigarette smoke in vials. Emily Paine, Communications

"There's a whole mess of chemistry that goes on when these reactions happen then evaporate off, and those chemicals are extremely hard to get rid of," explains the chemistry major from Gilbertsville, Pa. "These reactions aren't really being studied, which is why we decided to put so much emphasis on it."

The summer before her first semester, Hurlock spent five weeks in the lab designing an experimental third-hand smoke study as part of a federally funded science program that pairs incoming first-year students with faculty-led projects at participating universities, including Bucknell.

To begin, Hurlock identified the different chemicals in cigarettes by capturing their smoke in glass vials using a network of flasks and tubes. Over time, she observed how the chemicals behaved after adhering to the glass. Hurlock then worked with Collins to design a more robust smoking device, which she's now using to study how cigarette smoke reacts in vials coated with antioxidants.
"If we can track how antioxidants change and decay over time in the presence of these chemicals, then we'll be able to see the impact cigarette smoke can have on surfaces," says Hurlock, who will continue the study for 10 weeks this summer through a grant funded by the National Science Foundation.

Hurlock's work is a collaboration with Professor Douglas Collins, chemistry. Emily Paine, Communications

As the project develops, she plans to analyze reactions on upholstery and other materials people interact with on a daily basis. The ability to conduct relevant, real-world experiments is what drew her to Bucknell. While many of the schools she toured emphasized undergraduate research, Hurlock says she likely wouldn't have been so involved in the lab this early in her college career elsewhere.

"Now, I not only get to develop this project throughout my four years, but I also get to dive deep into research that has real-world impact," Hurlock says. "Having an environmental connection and real-life applications really adds purpose to my work here."
Raising children is a big job that makes demands of parents in different ways at different stages of their development. When children are little, the physical care they need can be exhausting, often accompanied by lack of sleep. As they grow and begin to become people in their own right, with their own ideas about what they want, the potential for conflict requires a different kind of care, sometimes stressful and time consuming.

Mothers sometimes have mixed feelings, for various reasons, about asking for the help they need from mates or others, even from their children. One mother, pregnant and with a toddler to care for, expressed the feeling that as a stay-at-home mother, quite apart from any financial consideration, she should be able to handle everything herself. On one hand she feels disapproved of for giving up her professional work; on the other, having made that decision, she now should be a total full-time mother. Another mother, who was recovering from a debilitating illness, felt compelled to do certain things with her child that she was not capable of doing, to compensate for not being available at other times. She, too, was using total availability to her child as her measure of herself as a mother.

In both instances an unrealistic standard for motherhood was at work, involving unrealistic ideas about children's needs as well. Despite the many changes that have been made in women's lives as they pursue careers and work outside the home, old cultural ideas about motherhood are still strong. Also, women feel strongly about their children and are vulnerable to their demands and seeming need for attention. In fact, in both the instances cited the children were actually being disadvantaged: care was driven by their mothers' feelings rather than by the children's own needs. There are times when the care needed does not have to be provided by the mother.

The issue of asking for help also relates to expectations of children.
At times, parents complain that their children are not helpful; in fact, children not helping is often a source of conflict. While in some families children have definite chores, in others parents do not ask for their help. The question is often raised as to whether children should be paid for certain jobs they are asked to do, which suggests that such jobs are outside their responsibility and should be rewarded as such.

When they are little, children love to help, and this is when the pattern of their helping can be established. Observing young children in school, it is interesting to see how they consider it an honor to be given certain jobs. Clean-up time after an activity is a given, and most of the time children participate readily. When a child does resist, teachers take note of that in order to understand and address what that resistance is about. Of course, in a school setting the influence of peers plays a role, and participating as a member of the group is expected.

The difference at home comes when children are criticized for what is perceived as a failure in their behavior. At school they are not expected to do the job alone, while at home they may be scolded for making a mess and told to clean it up. This at times seems daunting to a child. If parents initially offer to participate, they can create a different attitude.

In the busy and pressured life that families live these days, time is often the obstacle to involving children in helping. At first, it takes more time, and even work, to ask for children's help. They may not set the table the way you would like, or clear the dishes quickly. Helping children get dressed takes more time than doing it for them. Parents may resist children's help because they just want to get the job done. As in many aspects of development, the long way around may be the short way home. This means it takes time and patience to help children learn, and when they do, we, as well as they, reap the benefit.
As a parent, you need all the help you can get!
Attention turns to needed changes in media coverage.

Australian eating disorders specialists recently gained a new incentive to address stigma and shame among persons with eating disorders. In a program begun in November 2019, the government plans to invest $110 million to subsidize eating disorders treatment services (J Eat Disord. 2020; 8:11). The 4-year plan is aimed at improving recovery of the 16% of Australians estimated to have a DSM-5 eating disorder, according to Rachel Baffsky of the University of New South Wales, Sydney, Australia. In a recent commentary, Dr. Baffsky proposes media regulations that could, in time, diminish stigma and enhance treatment-seeking.

The role of the media

Dr. Baffsky has singled out the popular media as creating stigma for people with eating disorders by reinforcing stereotypes that people with eating disorders are young and female, discounting the experiences of men and older adults. She noted that current articles in the popular press often focus on the social causes of eating disorders and ignore the biological causes. This reinforces the harmful stereotype that eating disorders are easy to recover from because they are a choice, which again creates stigma. Another study suggests that people with eating disorders feel undervalued by the public and as a result may conceal their eating disorder for fear of being stigmatized (J Ment Health. 2016;25:47).

A number of suggestions have been made to help improve media coverage of eating disorders, according to the author. One is that media could use more precise medical language to describe eating disorders, to reduce blame-based stigma. A number of studies found that nursing students and undergraduate psychology students, for example, showed blame-based stigma when they were presented with social causes for eating disorders.
A mandatory code with 4 sections

The author recommends that Australia develop a mandatory Industry Code of Conduct that specifically helps guide media toward a more "medicalized" approach to reporting stories about individuals with eating disorders. The Code of Conduct would include 4 sections. The first code would mandate a more demographically diverse representation of real individuals with eating disorders (Patient Educ Couns. 2007; 68:43). A second code would stress eating disorders articles that address biological etiologies for eating disorders. A third code would prohibit journalists from using derogatory language to label the symptoms of eating disorders. Finally, a fourth code would mandate realistic reporting of times for recovery for individual eating disorders.

The author does acknowledge some limitations of the Code of Conduct approach. One such limitation is that the Code has been criticized as being paternalistic, which is potentially problematic since persons with AN, for example, often perceive a need for control. Also, media emphasis on a biological etiology for eating disorders might encourage the general public to perceive that a person with an eating disorder is helpless to combat an eating disorder because of the disorder's biological and genetic origins. To counter this, the fourth code was strategically introduced to make certain that journalists accurately report that individuals with eating disorders "can and do recover if they seek help." (The Butterfly Foundation, 2019; https://butterfly.org.au)
<urn:uuid:cc918ba9-eab5-4974-a30a-b5d065bcb558>
CC-MAIN-2021-43
https://eatingdisordersreview.com/addressing-stigma-about-eating-disorders/
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585653.49/warc/CC-MAIN-20211023064718-20211023094718-00009.warc.gz
en
0.961059
632
2.875
3
Press contact: Craig D'Ooge (202) 707-9189

November 22, 2000

Recorded Sound Collections Endangered: "Folklife Collections In Crisis" Conference Scheduled for Dec. 1-2

Hundreds of thousands of historic ethnographic audio recordings are in serious danger, according to a recent survey conducted by the Library of Congress. Of the 300 respondents to the Library of Congress national survey, more than three-fourths reported that 25 to 50 percent of their collections are "seriously deteriorated." Problems associated with audio collections include:

- inadequate storage conditions
- cracked wax cylinders
- decomposing acetate coatings of discs that "exude" a white powder
- "sticky-shed" syndrome on audio tape manufactured in the late '70s and early '80s
- "drop outs" on DAT tapes
- possible delaminating of CDs

"There is virtually no audio repository untouched by these problems," said Peggy Bulger, director of the Library's American Folklife Center.

In response to the challenges faced by ethnographic archives across the country, the American Folklife Center, in collaboration with the American Folklore Society, will host a two-day invitational conference, "Folklife Collections in Crisis," on December 1 and 2, 2000, at the Library of Congress. For the first time, 50 experts (archivists, audio engineers, preservation specialists, scholars, entertainment lawyers, and recording company executives) will discuss sound preservation, access, and intellectual property issues as they relate to ethnographic collections and make recommendations to assure long-term preservation.

Participants will include representatives from the National Association of Recording Arts & Sciences, the Smithsonian Institution, the National Council for the Traditional Arts, the National Society of Audio Engineers, BMI, the Association of Recorded Sound Collections, the International Association of Sound Archives, the Society of American Archivists, and others.
The conference is supported by the Council on Library and Information Resources, the National Endowment for the Arts, the National Endowment for the Humanities, and the "Save America's Treasures" program of the National Park Service.

The American Folklife Center was created by Congress in 1976 and placed at the Library of Congress to "preserve and present American Folklife" through programs of research, documentation, archival preservation, reference service, live performance, exhibition, public programs, and training. The Center includes the Archive of Folk Culture, which was established in 1928 and is now one of the largest collections of ethnographic material from the United States and around the world.

On November 9, President Clinton signed the National Recording Preservation Act of 2000, establishing the National Recording Registry of the Library of Congress (P.L. 106-474). The new law was created to support the preservation of historic sound recordings, many of which are at risk from deterioration. It directs the Librarian of Congress to name sound recordings of aesthetic, historical, or cultural value to the Registry, to establish an advisory National Recording Preservation Board, and to create and implement a national plan to assure the long-term preservation and accessibility of our audio heritage.

# # #
Study finds pigeons and other animals can place everyday things in categories like humans

Pinecone or pine nut? Friend or foe? Distinguishing between the two requires that we pay special attention to the telltale characteristics of each. And as it turns out, we humans aren't the only ones up to the task.

According to researchers at the University of Iowa, pigeons share our ability to place everyday things in categories. And, like people, they can home in on visual information that is new or important and dismiss what is not.

"The basic concept at play is selective attention. That is, in a complex world, with its booming, buzzing confusion, we don't attend to all properties of our environment. We attend to those that are novel or relevant," says Edward Wasserman, UI psychology professor and secondary author on the paper, published in the Journal of Experimental Psychology: Animal Learning and Cognition.

Selective attention has traditionally been viewed as unique to humans. But as UI research scientist and lead author of the study Leyre Castro explains, scientists now know that discerning one category from another is vital to survival. "All animals in the wild need to distinguish what might be food from what might be poison, and, of course, be able to single out predators from harmless creatures," she says.

More than that, other creatures seem to follow the same thought process humans do when it comes to making these distinctions. Castro and Wasserman's study reveals that learning about an object's relevant characteristics and using those characteristics to categorize it go hand in hand.

When observing pigeons, "We thought they would learn what was relevant (step one) and then learn the appropriate response (step two)," Wasserman explains. But instead, the researchers found that learning and categorization seemed to occur simultaneously in the brain.
To test how, and indeed whether, animals like pigeons use selective attention, Wasserman and Castro presented the birds with a touchscreen containing two sets of four computer-generated images—such as stars, spirals, and bubbles. The pigeons had to determine what distinguished one set from the other. For example, did one set contain a star while the other contained bubbles?

By monitoring what images the pigeons pecked on the touchscreen, Wasserman and Castro were able to determine what the birds were looking at. Were they pecking at the relevant, distinguishing characteristics of each set—in this case the stars and the bubbles? The answer was yes, suggesting that pigeons—like humans—use selective attention to place objects in appropriate categories. And according to the researchers, the finding can be extended to other animals like lizards and goldfish.

"Because a pigeon's beak is midway between its eyes, we have a pretty good idea that where it is looking is where it is pecking," Wasserman says. This could be true of any bird or fish or reptile. "However, we can't assume our findings would hold true in an animal with appendages—such as arms—because their eyes can look somewhere other than where their hand or paw is touching," he explains.

The study, "Pigeons' Tracking of Relevant Attributes in Categorization Learning," was published in the April 2 print edition of the Journal of Experimental Psychology: Animal Learning and Cognition. Funding was provided by the UI psychology department.
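The peck-tracking analysis described in the study can be illustrated with a short sketch. This is a hypothetical reconstruction, not the authors' actual code: given the screen coordinates of each recorded peck and the bounding boxes of the "relevant" images (the stars and bubbles in the example above), the fraction of pecks landing inside those boxes serves as a simple index of selective attention.

```python
def attention_index(pecks, relevant_regions):
    """Fraction of pecks landing in any relevant screen region.

    pecks: list of (x, y) touchscreen coordinates.
    relevant_regions: list of (x0, y0, x1, y1) bounding boxes.
    All coordinates below are made up, for illustration only.
    """
    def in_region(peck, region):
        x, y = peck
        x0, y0, x1, y1 = region
        return x0 <= x <= x1 and y0 <= y <= y1

    # Count pecks that fall inside at least one relevant region.
    hits = sum(any(in_region(p, r) for r in relevant_regions) for p in pecks)
    return hits / len(pecks) if pecks else 0.0

# Two relevant image regions and five recorded pecks (hypothetical data).
regions = [(0, 0, 100, 100), (200, 0, 300, 100)]
pecks = [(50, 50), (250, 40), (150, 50), (60, 90), (400, 400)]
print(attention_index(pecks, regions))  # 3 of 5 pecks hit -> 0.6
```

An index well above the chance level expected from random pecking would indicate, as in the study, that the birds were attending selectively to the distinguishing features.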
The Bielefeld region of Germany was a rich source of manufacturers. In the early 1900s its factories produced typewriters, sewing machines, bicycles, and motorcycles (see Meister, Goricke, Durkopp). It was in this stronghold of manufacturing that August Rixe decided to start bicycle manufacturing in 1921.

Things progressed well, and Rixe decided to introduce a motorized bicycle in 1935. Although the products were well received in tough times, war soon ended production, and it was 1948 before machines were once again produced. Rixe returned to using Ilo or Fichtel & Sachs engines in a variety of their own frames. Their largest model was called the Senator, which featured a 250cc Ilo engine, but also popular was the more sporting KTS-125 featuring twin carburetors. In the early 1950s, Rixe introduced new models such as the 175 and an updated Senator, which featured swing-arm rear suspension, telescopic front forks, elegant fuel tanks, and attractive paint schemes.

Towards the end of the 1950s, when everybody was facing the general decline of motorcycles relative to cars, Rixe employed a smart strategy which allowed it to survive. It had developed a line of 50cc mopeds which it began to promote heavily. Sales remained strong on the very low end, and Rixe emerged as a survivor in the 1960s. The company decided to concentrate on the small-displacement end of the market and continued to produce very good machines under 100cc into the 1960s and '70s. The company finally ran into financial problems in the early 1980s and was sold to a Chinese company in 1984.