You may not realize it, but GPUs are good for more than videogames and scientific research. In fact, there’s a good chance your daily life is being affected by GPU computing.
Mobile applications rely on GPUs running on servers in the cloud. Stores use GPUs to analyze retail and web data. Web sites use GPUs to more accurately place ads. Engineers rely on them in computer-aided engineering applications. Accelerated computing using GPUs continues to expand.
No longer is it something just for the high-performance computing (HPC) community. The benefits of CUDA are moving mainstream.
So, What Is CUDA?
Even with this broad and expanding interest, as I travel across the United States educating researchers and students about the benefits of GPU acceleration, I routinely get asked the question “what is CUDA?”
Most people mistake CUDA for a language or maybe an API. It is not.
It’s more than that. CUDA is a parallel computing platform and programming model that makes using a GPU for general-purpose computing simple and elegant. The developer still programs in the familiar C, C++, Fortran, or an ever-expanding list of supported languages, and incorporates extensions of these languages in the form of a few basic keywords.
These keywords let the developer express massive amounts of parallelism and direct the compiler to the portion of the application that maps to the GPU.
A simple example of code is shown below. It’s written first in plain “C” and then in “C with CUDA extensions.”
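The code listing itself did not survive in this copy of the post. As a stand-in, here is a sketch in the spirit of NVIDIA's introductory material: a SAXPY routine (y = a*x + y), written first as a plain C loop and then as C with CUDA extensions. The `__global__` keyword and the `<<<blocks, threads>>>` launch syntax are examples of the "few basic keywords" described above. This is an illustrative reconstruction, not the original article's listing, and it requires the CUDA toolkit (nvcc) and a GPU to build and run.

```cuda
#include <stdio.h>
#include <cuda_runtime.h>

/* Plain C: serial SAXPY, y[i] = a*x[i] + y[i], one element at a time. */
void saxpy_serial(int n, float a, const float *x, float *y)
{
    for (int i = 0; i < n; ++i)
        y[i] = a * x[i] + y[i];
}

/* C with CUDA extensions: __global__ marks a kernel that runs on the GPU.
   Each GPU thread computes one element, selected by its block/thread index. */
__global__ void saxpy_parallel(int n, float a, const float *x, float *y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        y[i] = a * x[i] + y[i];
}

int main(void)
{
    const int n = 1 << 20;
    size_t bytes = n * sizeof(float);

    float *x = (float *)malloc(bytes);
    float *y = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    /* Copy inputs to GPU memory. */
    float *d_x, *d_y;
    cudaMalloc(&d_x, bytes);
    cudaMalloc(&d_y, bytes);
    cudaMemcpy(d_x, x, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(d_y, y, bytes, cudaMemcpyHostToDevice);

    /* Launch enough 256-thread blocks to cover all n elements. */
    int blocks = (n + 255) / 256;
    saxpy_parallel<<<blocks, 256>>>(n, 2.0f, d_x, d_y);

    cudaMemcpy(y, d_y, bytes, cudaMemcpyDeviceToHost);
    printf("y[0] = %f\n", y[0]);  /* 2*1 + 2 = 4 */

    cudaFree(d_x); cudaFree(d_y);
    free(x); free(y);
    return 0;
}
```

The serial and parallel versions compute the same result; the CUDA version simply expresses the loop's independent iterations as thousands of threads, which is the parallelism the post describes.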
More CUDA Resources
Learning how to program using the CUDA parallel programming model is easy. We have webinars and self-study exercises at the CUDA Developer Zone website.
Check it out. And be sure to let us know how you’re using CUDA to advance your work.
According to a new report, official unemployment figures in the UK may be an underestimate. The report challenges the government’s figures which suggest that unemployment is at a record low.
The thinktanks Centre for Cities and the Organisation for Economic Co-operation and Development (OECD) have published a report on economic inactivity in the UK. It shows that many people discouraged from looking for work have been excluded from the government’s figures for unemployment.
The Office for National Statistics estimated that 4.6% of the UK's working-age population was unemployed in 2017. However, this new report suggests the actual number of unemployed people may be higher due to ‘hidden unemployment’.
While the official figure takes into account people actively looking for work, it excludes those who stop looking due to a perceived lack of jobs, for example in areas where they don’t think it’s possible to find work. People with disabilities or health conditions may also stop looking for work if they can’t find flexible or accessible opportunities.
When the hidden unemployed are added to official unemployment statistics, the number of working-age jobless people not in education is estimated to jump from 4.6 per cent to 13.2 per cent.
The OECD website says:
Unemployed people are those who report that they are without work, that they are available for work and that they have taken active steps to find work in the last four weeks. When unemployment is high, some people become discouraged and stop looking for work; they are then excluded from the labour force. This implies that the unemployment rate may fall, or stop rising, even though there has been no underlying improvement in the labour market.
Response from the Office for National Statistics
A spokesperson from the Office for National Statistics said its figures are based on internationally agreed definitions. They went on to say:
If the definition were widened, for example by including people not looking for work because of health problems, it would stop being a measure of spare employment capacity
Centre for Cities has said the report challenges the government’s claim that the UK has had “a job creation miracle” resulting in low unemployment.
Featured image via Wikimedia/ J J Ellison
Modern Lifestyle And Ailments
The modern lifestyle, despite its many conveniences, has taken a major toll on human health and well-being because of its sedentary nature. Arthritis, now a common household problem, is a result of obesity, old age, mental stress, etc.
Arthritis Caused By Stress
According to spiritual healer Chandrashekhar Patil of the Rajyog Spiritual Healing Centre, arthritis is a result of stress-induced imbalance in the body’s natural equilibrium. A disturbed mind leads to stress, he says. Due to stress, the food consumed is not properly digested. Indigestion, in turn, leads to excessive gases and increased acidity levels in the body. Many times, we experience pain and a burning sensation in the chest, besides tightening and bloating of the stomach. This is because of the presence of excessive gases in the body. When these gases settle between the joints and other parts of the body, they cause arthritis.
Stress is mainly caused by dormancy or blockages in the crown and brow chakras, whereas indigestion is caused by problems with the solar-plexus chakra. By using Yogic Healing techniques, Mr Patil first activates the crown and brow chakras to normalise their energy flow. Then, he relaxes the patient’s mind, body and soul, using divine energies and by inducing Yog Nidra. The solar-plexus chakra is also activated to improve its functioning. Using Yogic Healing techniques, digestion is improved, acid levels are brought to normal, and excessive gases are removed from the body.
Then the affected part of the patient’s body is activated by giving it divine energies. For example, in the case of a patient suffering from kneecap problems, the deteriorated cartilage is activated and its regeneration is stimulated. Blood circulation to the affected body part is also improved. With divine energies and techniques of Yogic Healing, patients are relieved of stiffness and pain in the body. When such Yogic Healing treatment is coupled with appropriate exercises, patients recuperate at a much faster rate. Using this technique, problems like spondylitis, rheumatoid arthritis, joint pains, frozen shoulder and tennis elbow can be naturally treated.
How to Bend Acrylic
Thanks to its clean look, unmatched durability, and ease of cutting, acrylic has incredible potential as a building material. In fact, acrylic can be bent into almost any shape without sacrificing its "water white" clarity. As a result, acrylic has emerged as a popular material for the creation of clear furniture. It's also used in more practical applications, such as skylights, aquarium tanks, and airplane windows. When it comes to bending acrylic, there are a lot of ways you can accomplish the process!
Acrylic Fabrication is proud to provide a variety of high-quality coffee tables, end tables, and other acrylic furniture. Rated "water white", our acrylic products are perfectly clear. With less visual density, you can make any space look larger. You can also combine our acrylic products with old pieces to give a room an eclectic look. If you're interested in our acrylic furniture, we ship our products within one day! Otherwise, keep reading to learn more about how to bend acrylic.
Methods for Bending Acrylic
One advantage that acrylic brings is that it's fairly easy to bend and mold compared to glass. This makes acrylic ideal for applications that require substantial shaping. Here are some of the methods that can be used to bend and shape acrylic:
- Cold Forming: In some cases, acrylic can be bent without needing to apply heat. This has to be done minimally, because if you overbend acrylic in this way, stress cracking can occur.
- Line Bending: Line bending is a method used to form sharp angles in acrylic. By routing a "v" shaped line where you plan to make a bend, it's possible to make sharper angles in an acrylic sheet.
- Drape Forming: Once an acrylic sheet reaches forming temperature, you can drape it over a mold so that it naturally takes that shape.
- Oven Heating: Acrylic sheets can be brought to forming temperature in ovens and then formed accordingly. Attaining the right temperature is crucial. If the sheet is too hot or too cold, forming it can lead to imperfections.
Childrens Books on Pollution - Empowering Kids to Care for Mother Earth
Children's books on pollution that don't scare kids or frighten them can instead empower children to have a healthy respect for our precious planet.
The National Institute of Environmental Health Sciences, National Institutes of Health (NIH), Department of Health and Human Services (DHHS), offers an extensive series of free coloring pages and PDF e-Books dealing with a wide range of environmental issues for young kids.
I read through Auntie Pollution Coloring Book and even if you don't want your child to color it, it's one of their excellent children's books on pollution.
The Center for Science and Environmental Outreach in the Upper Peninsula, Michigan, recommends the following children's books on pollution for K-12.
I haven't read or researched this list, except for reading The Lorax by Dr. Seuss.
This is one of his empowering and inspiring rhyming verse stories that gets right to the heart of the problem with pollution.
Our greed often causes us to take shortcuts, be short-sighted, and care only for money, not the environment.
The Lorax is the main character in the book, and he tries to stop unlimited, expansive growth occurring at the expense of the trees, for whom he says he speaks, because trees can't speak.
The Lorax is a must-read for any program involved in...
Kids Environmental Education
If you want to see what kids are doing in their schools and communities to help protect our environment, visit Green Squad, "a project of the Natural Resources Defense Council, in collaboration with the Healthy Schools Network."
The link above takes you to the Green Squad Library, where you will find fact sheets for kids.
Kids environmental education is important because it affects many aspects of kids' lives, especially their health and well being.
Kids who learn about pollution may choose foods, for example, that aren't grown with pesticides, because kids see the damaging effects of pesticides on the environment and kids aren't stupid.
On health facts for kids you can discover some of the effects of environmental pollution on the weakening of our immune system, the body's defense against disease and illness.
In spring 2009, I'll be publishing Joey's Cabbage Patch, A Read-Me, Draw-Me Book as an e-Book. It was originally published as a soft cover book for kids ages 6-9 about caterpillars destroying a cabbage patch and how farmers and caterpillars reached a peaceful solution.
It had rave reviews.
Joey's Cabbage Page allows kids to create their own illustrations based on their understanding of the words on each page. Border designs on each page plus a beautiful 4 color cover inspire kids drawings.
For sample pages check out Joey's Cabbage Page.
To be informed when this environmental book for kids is available, fill out the form that follows.
My Promise: I will never share your personal information with anyone. I dislike spam as much as you do. Thank you for your trust.
Return to Home Page.
The information on Childrens Books on Pollution belongs to Childrens Educational Books.
Kindly treat us as you would family and friends. Gratefully, Harvey
Growing plants from seed and witnessing your garden sprout from bare earth and come to life, is quite satisfying. This method also gives you the greatest range of economical plant options; a packet of seeds can produce dozens or even hundreds of plants and, if stored properly, will usually last for one or more additional seasons, depending on the type of plant.
The packet the seeds come in will provide helpful information about planting, such as when to plant and how deeply and densely to sow the seeds. Some plants, such as sunflowers or poppies, are best sown directly in the garden soil outdoors. This is often because they are plants with a tap root, a tapering main root (like a carrot, for example) that grows straight down, and they don’t transplant well. Most seeds, however, can be started indoors a month or two ahead of the time to plant outside. By following these steps, you’ll have healthy seedlings that are ready to plant outdoors at the appropriate time, see Planting Guide.
Jar of water
Potting soil or seed-starting mix
Horticultural sand (only for very small seeds)
Soak Seeds: For best results, soak seeds in room temperature tap water for a few hours or overnight. Seeds with a heavy black skin should be nicked lightly with a nail file and then soaked for several hours. Very fine seeds should not be soaked at all. Drain the seeds before planting.
Prepare Planting Medium: Moisten the soil mix lightly but thoroughly. Make sure containers are clean (wash in a 10 percent bleach solution to kill any pathogens). Fill the containers to the top with soil.
Sow Seeds: Plant one or more seeds in each container at the depth recommended on the seed packet (usually about the same depth as the size of the seed). You can plant two or three seeds in each container or planting cell and if more than one germinates, you clip off all but the strongest seedling. To sow very fine seeds, mix them with a small amount of horticultural sand and then spread the seed-and-sand mixture lightly and evenly over the top of the soil.
Water Gently: Using a hose with a misting attachment or a watering can with fine holes at the spout, water the seed containers thoroughly, making sure that they have good drainage.
Attach Label: Write the plant and variety name on the plant label as well as the date of planting and attach it securely to the container.
Keep Warm and Moist: Until seeds germinate, they need warmth more than they need light. Place the containers on a heat mat or in some other location where they’ll stay warm. Keep them evenly watered (but not soggy). When seedlings emerge, move them to a bright, sunny location with good air circulation and continue regular watering until plants are ready to transplant.
Ahead of Pope Francis’s recent visit to Armenia (June 24-26), there was much speculation as to whether he would again use the word “genocide” in reference to the massacres of 1.5 million Armenians by the Ottoman Turks in 1915. The prepared remarks, released by the Vatican, appeared to omit that politically charged designation—instead opting for words such as “tragedy,” “slaughter,” and “immense suffering.” Nevertheless, once in Armenia, Pope Francis departed from the prepared text and said, “Sadly, that tragedy, that genocide, was the first of the deplorable series of catastrophes of the past century, made possible by twisted racial, ideological or religious aims that darkened the minds of the tormentors even to the point of planning the annihilation of entire peoples.”
Turkey responded by suggesting that Pope Francis and his Papacy possess “all the reflections and traces of [a] crusader mentality.” Last year, after Pope Francis had referred to the massacres as what is “widely considered ‘the first genocide of the 20th century’” at a centennial commemoration in Saint Peter’s Basilica, Turkish President Recep Erdogan swiftly condemned the Pope and recalled Turkey’s ambassador to the Holy See for ten months.
Given this context, Pope Francis’s visit to Armenia came at a critical time. Armenia is a landlocked country of about 3 million people. It is bordered by Turkey, Azerbaijan, Iran, and Georgia. Two of its borders—those with Turkey and Azerbaijan—have been closed to Armenia since the majority-Armenian populated region of Nagorno Karabakh sought independence from Azerbaijan and reunification with Armenia in the early 1990’s. That conflict remains unresolved: in April of this year, Azerbaijan tried, unsuccessfully, to take back Nagorno Karabakh by force.
In many ways, Armenia’s isolation transcends its current physical boundaries. Even though the Armenians were the first people to accept Christianity as their official religion (301 AD), their church has been sacramentally independent of Roman and Eastern Christianity since its rejection of the Council of Chalcedon in 451. Despite the pressures from its geographic and economic isolation, however, Armenia has demonstrated to the rest of the Christian world that it takes its moral responsibilities seriously. Indeed, Armenia has welcomed 20,000 refugees from Syria over the past few years.
The sites chosen during Pope Francis’s visit have both religious and political significance to the Armenian people. Most evident of this was Pope Francis’s trip to Khor Virap with Karekin II (Supreme Patriarch and Catholicos of All Armenians). Khor Virap (meaning “Deep Pit”) is the site of Armenia’s conversion to Christianity: it is where Saint Gregory the Illuminator was imprisoned for thirteen years before miraculously healing King Tiridates of a mysterious illness and then baptizing the King. Khor Virap is also the vantage point of Armenia’s great territorial losses. If one looks toward the west, one can see Mount Ararat—a symbol of Armenia—which is now located within the closed borders of modern-day Turkey. At Khor Virap, Pope Francis and Karekin II released white doves toward Turkey, as a gesture of peace. In fact, throughout his visit, Pope Francis repeatedly called on Armenia and Turkey to reconcile.
Pope Francis’s visit was also ecumenically important. Reflecting on his time praying with Karekin II, Pope Francis said, “We have felt as one [the Church’s] beating heart, and we believe and experience that the Church is one.” Karekin II, addressing the faithful in Gyumri, Armenia, noted that “Gyumri and the church of the Holy Mother of God (Yotverk) became a tangible provider and preacher for ecumenism, years before the modern definition of ecumenism was established.” Indeed, throughout the Soviet period, the parish was a refuge for Armenian Apostolic, Catholic, and Eastern Orthodox Christians alike.
Cementing the ecumenical purpose of the trip, at a final Mass on June 26th, Pope Francis said, “May an ardent desire for unity rise up in our hearts, a unity that must not be the submission of one to the other, or assimilation, but rather the acceptance of all the gifts that God has given to each,” and then asked Karekin II to, “Bless me, bless me and the Catholic Church, and bless this path toward full unity.”
Pope Francis’s respect for and handling of the Armenian Church demonstrates a sophisticated understanding of the Armenian people. After centuries of constant external pressure, the Armenian self-identity has developed in a way that looks largely inward. Indeed, due to Armenia’s vacillating status as a regional power, buffer state, and finally a subjugated and persecuted people, the Armenians have learned to rely on themselves and to distrust others.
The Armenian identity thus conceives of its people’s inherent uniqueness, based on a common ethnicity, language, religion, and historical experience. Armenia’s adoption of Christianity coupled with its independence from Byzantium—its powerful neighbor—helps to explain this paradoxical self-identity. As historian Nina Garsoian writes, “The conversion of Armenia to Christianity was probably the most crucial step in its history. It turned Armenia sharply away from its Iranian past and stamped it for centuries with an intrinsic character as clear to the native population as to those outside its borders, who identified Armenia almost at once as the first state to adopt Christianity.” Moreover, the creation of the Armenian alphabet in the early 5th century helped to further homogenize and differentiate Armenian culture from its Christian neighbors, allowing its churches to conduct their Liturgies in Armenian, rather than in Greek or Syriac. Centuries later, the persecution and massacres of the Armenian people during the Ottoman Empire’s decline, undoubtedly, pushed the Armenian identity further inward. The vast territorial losses that accompanied these massacres left the surviving Armenian population clinging to the highlands of modern-day Armenia and, failed by the great powers, to each other.
Pope Francis appeared fully cognizant of this history and underscored these themes when he addressed the Armenian people, stating, “Your own people’s memory is ancient and precious. Your voices echo those of past sages and saints; your words evoke those who created your alphabet in order to proclaim God’s word; your songs blend the afflictions and the joys of your history. As you ponder these things, you can clearly recognize God’s presence. He has not abandoned you.” Pope Francis also challenged the Armenian faithful to strive for unity, so that—with the assistance of God’s mercy—we might all overcome divisions.
Thus far, Pope Francis has shown the Armenians that he stands with them and is willing, despite political pressure, to challenge Turkey’s denialist narrative. While Armenia has been understandably reluctant to look outside of itself or beyond its diaspora, the country would be well served to continue to strengthen its ties with Pope Francis and the Roman Catholic Church. This visit was a promising start.
Yelena Ambartsumian is a graduate of the Fordham College at Lincoln Center Honors Program (2010) and Fordham Law School (2013).
1) Lots of people like to run in their free time. Running is good exercise. Doctors say, "Getting exercise is important for good health." They suggest thirty minutes of exercise three times a week.

2) Laura Gilbert usually runs three times a week. She sometimes runs alone and sometimes with friends. But thirty minutes of running isn't enough for her. Laura likes to run long distances. She says, "It takes me about forty minutes just to warm up. I start feeling good after two hours." Each year, she runs several marathons. A marathon is 26.2 miles long (or 42.1 kilometers). Some of Laura's races are even longer!

3) Every year, Laura runs in a race called Il Passatore. It's 101 kilometers long. That's more than 62 miles. The race begins near her home in Italy. The runners start at 3:00 P.M., and sometimes they run all night. The race takes Laura about twelve hours.

4) All year, Laura looks forward to Il Passatore. But before the race, she feels nervous. The race is a kind of test for her. Can she do it? During the race, her legs and feet and stomach may hurt. She thinks about a hot bath. She thinks about her nice, soft bed. A part of her mind says, "Stop and go home! Why are you doing this? This is crazy!" But she goes on running. "I talk to other runners," she says, "and we **** each other."

5) After 101 kilometers, Laura is glad to finish the race. Twelve hours are enough!
Constitutionally mandated process for electing the U.S. president and vice president. Each state appoints as many electors as it has senators and representatives in Congress (U.S. senators, representatives, and government officers are ineligible); the District of Columbia has three votes. A winner-take-all rule operates in every state except Maine and Nebraska. Three presidents have been elected by means of an electoral college victory while losing the national popular vote (Rutherford B. Hayes in 1877, Benjamin Harrison in 1888, and George W. Bush in 2000). Though pledged to vote for their state's winners, electors are not constitutionally obliged to do so. A candidate must win 270 of the 538 votes to win the election.
This entry comes from Encyclopædia Britannica Concise. For the full entry on electoral college, visit Britannica.com.
The visitors center was filled with chunks of useful information as well as chunks of stone salvaged from the excavation of the site. One info board talked about the mason's marks.
Remember, these communities were self-sufficient and included men from all walks of life. There was a hierarchy, and the brothers were segregated into two groups: the choir monks, educated men usually from families with money (since getting an education was expensive), who were the thinkers, philosophers, scribes, and church leaders; and the lay monks, who did the work: farming, cooking, stonework, etc. There was no contact between the two groups except for a foreman who passed work orders to the lay brothers. This system existed until very recently.
If a rich man wanted to rise up in the church quickly, the fastest way was to join a monastery. A man could rise through the ranks over time and become a bishop.
Wind energy is increasingly used as a renewable energy source. The turbine presented here is a small one, suitable for use on roofs or in gardens to light small areas such as advertising boards, parking lots, and roads, or for water pumping, heating, and similar applications. The turbine has a vertical axis. Each blade combines a rotating movement around its own axis with a rotation around the main rotor axis. Due to this combination of movements, the flow around the turbine is highly unsteady and must be modeled by unsteady calculation. One of the main problems with such a geometry is simulating the two combined movements. The present work extends a study made in 2009. In the previous study, results such as contours of pressure and velocity fields were presented for elliptic blades at one specific constant rotational speed, and the benefits of combined rotating blades were shown. The present paper highlights the influence of two different blade geometries for different rotational speeds, different blade stagger angles, and different Reynolds numbers corresponding to a wider range of wind speeds.
This section is dedicated to some of the most famous names among pirates, mostly during the 16th and 17th centuries in the Caribbean.
Bonny was a strong, independent woman, who became a respected pirate in a predominantly male society.
One of the most famous female pirates of all time is without a doubt Grace O’Malley, the legendary Irish noblewoman who spent the majority of her life at sea. Here you can find out more information about her interesting life.
He was the last great pirate of the golden age who plundered more than 400 ships. His boldness and abilities made him one of the most successful pirates.
Academy Award-winner Russell Crowe starred as John Nash in Ron Howard’s film, “A Beautiful Mind.” The movie is an exposé of Nash’s life, chronicling his achievements as a mathematician and, more importantly, the horrific and grueling journey he underwent with schizophrenia. His genius lent itself to pattern recognition, which would eventually be identified as the stimulant for the majority of his hallucinations. The nature of schizophrenia is frightening and overpowering, and his story is an important case study for the disorder. His illness takes on a significantly different character when approached from Ken Wilber’s integral psychology model rather than from a Western psychological method. Mathematician John Nash suffered from a form of schizophrenia that can be analyzed through both Western and Eastern methods of psychology, and it is well documented in the semi-fictional, cinematographic biography, A Beautiful Mind.
According to the film, Nash was born in Bluefield, West Virginia. He came from humble beginnings, the son of a school teacher and an electrical engineer who did the best they could to nurture his genius from a young age. When he was a senior in high school, he won the prestigious Westinghouse scholarship to attend the Carnegie Institute of Technology. He graduated in 1948 with a master’s degree after only three years. He moved on to Princeton, where he eventually published the “Nash Equilibrium” and the game theory that would win him the Nobel Prize years later. Initially, he had planned to go into chemical engineering, but his love and proficiency for mathematics soon focused his study. After receiving his PhD, he became one of the youngest professors ever to teach at the Massachusetts Institute of Technology. He spent his years there before his psychotic break continually striving for the elusive breakthrough that would make him the greatest mathematician of his time. His contributions in reality were, in fact, quite impressive. In addition to his game theories and the “Nash Equilibrium,” he had inadvertently (and independently) proved Brouwer’s fixed point theorem as an undergraduate at Carnegie. Later on, he would break one of Riemann’s most perplexing mathematical conundrums. Nash provided breakthrough after breakthrough in mathematics during the course of his career (Fonesca 1).
Nash was an extremely introverted man who hardly ever attended class at Princeton and for the most part considered himself far superior in intellect and character to his peers. He was incredibly driven, to the point of egotism, but simultaneously extremely insecure. Genius is often said to be accompanied by eccentricity, and his quirks developed long before they exploded in mental breakdown. Such quirks, however, often served to isolate him from his community and provoked many unpleasant social interactions, which may have demanded a psychological coping mechanism such as inventing friends. For all his oddities, though, he did manage to secure the hearts of a few people close to him. His marriage with Alicia, one of his students at MIT, turned out to be his salvation. Though his personality was marked by introversion, intelligence and quiet, focused study, he still experienced basic human drives like sexual appetite and, of course, his ongoing craving for recognition and acceptance.
When the break came, it was characterized by consistent and permanent auditory, visual and tactile hallucinations. He had been experiencing the hallucinations for years, and was living in a world where the characters he was imagining were among the most important people in his life. According to Howard’s film (which has been praised for its fidelity to Nash’s biography), Nash maintained three permanent hallucinations: a best friend, the best friend’s niece, and Parcher, a secret government agent who was ‘employing’ Nash for the sort of high-profile cryptography in the ‘important’ job he craved. The ‘job’ required him to identify series of complex codes in the nation’s magazines, and Nash began an insane construction of patterns whose skeletons he left littered and posted in his locked office. No one questioned him, as the orders came from a ‘high clearance,’ but when Parcher began to lead Nash on high-risk Russian-spy chases, Nash developed a heightened paranoia that isolated him even from his wife. Eventually, his fear of being captured by the Russians manifested in the disorganized speech and erratic fits of enraged behavior characteristic of schizophrenia, and only then was he admitted to a psychiatric ward for treatment.
Needless to say, the illness had a huge impact on his life. After treatment, he was put on medication that would greatly impair, if not altogether destroy, his ability to lead a normal life. He stopped taking the medication and had a relapse in which he almost killed his infant son and hit his wife, and only after this was he able to begin to confront his hallucinations. His case is unique in that he was actually able to confront the figments of his imagination and eventually push them into silence, but they haunted his steps his entire life. Schizophrenia robbed him of what could have been the most fruitful years of his genius and took an enormous toll on his wife before it was under control enough for him to return to work and regain a semblance of normality.
In the 1950s, the psychiatrists and neurologists who dealt with Nash’s case were only able to approach his illness from a Western perspective. In this model, they diagnosed him with general schizophrenia and saw the disorder as a biological illness that could be ‘cured.’ They understood the nature of the illness and its symptoms, and they held a fairly thorough comprehension of Nash’s lack of control and emotional response to the situation. Their prognosis was that Nash would suffer from the hallucinations unless he was subjected to the prescribed electroshock therapy for an extended period of time and took certain powerful medications. The medications, however, would contribute to a reduced thinking capacity and dullness that Nash was unaccustomed to, and this triggered his journey of self-healing. There was no room in their diagnosis or outlook for any form of self-healing, and they probably had trouble explaining his ability to manage his illness. When the psychiatrist first approached Nash after his relapse, he told him self-healing would be practically impossible. Western psychology may see Nash’s success purely as a product of his great intellect and mental capacity.
However, if Nash’s case had been approached through Ken Wilber’s modern model of integral psychology, his diagnosis and prognosis might have been very different. Wilber developed a four-quadrant system which categorizes all the psychological disorders and developmental theories that have ever been academically validated. The four quadrants are the interior-individual, the exterior-individual, the interior-collective, and the exterior-collective (each relates the conscious self to the unconscious self, or to the conscious or unconscious society of which the self is a member). Disorders may be located anywhere on the four-quadrant plane, depending on whether they present as externally or internally controlled, and on whether they are identified with an individual or with society at large. The model acts as a bridge between the Western psychology described above and an Eastern one, as it harbors both the external, objective and positivistic ideas of psychology (Western) and the internal, more subjective and constructivist aspects of the human mind (Eastern). Juxtaposed, the two viewpoints provide a much more holistic view of mental illness and of psychology in general.
For example, Nash’s schizophrenia is much more specific and subject to particular distinctions when observed through the lens of Wilber’s paradigm. Application of the four-quadrant system mandates a closer look at the presentation of Nash’s ‘madness.’ Schizophrenia can be most broadly classified by an inability to form or maintain personal, physical or abstract boundaries: a psychological disorder characterized by recurring sensory hallucinations. Also, schizophrenics are often victims of their own limbic system: the dopamine secreted by the nucleus accumbens during a psychotic episode triggers an addictive neural net which, quite literally, fosters a neurological and chemical addiction to the psychosis inside the brain (hence Nash’s hesitation and reluctance to take his medication). Approaching the patient with this epistemic awareness of the potential diagnosis, in addition to asserting the open mind Wilber’s model requires, psychologists may find a few disparities that potentially disqualify Nash from a classic schizophrenia diagnosis. Although Nash experienced stable, consistent hallucinations, he was unlike many other schizophrenics in that he did not indulge in total isolation and withdrawal. He was not grossly disorganized, and he did not exhibit catatonic behavior. Though he did prefer solitude, he was not entirely bereft of social interaction, most notably supported by his experience of love with his wife, Alicia.
Additionally, many schizophrenics are completely indifferent to praise and criticism (Nash was most certainly not; on the contrary, his life was directed in the pursuit of accolades), and often they neither enjoy nor desire sex (again, Nash experienced healthy sexual desires). Given these more specific discrepancies in symptoms, Nash’s illness would occupy a different place on Wilber’s model than that of most schizophrenics, whose presentation is now defined as Schizoid Personality Disorder, a more pervasive part of the patient’s personality than Nash’s illness was. Nash was a victim of what modern psychology identifies as Paranoid Schizophrenia, a delusional illness which is self-regulated and in which the consciousness exerts some degree of control, even if that control is at first almost impossible to procure.
This key difference places Nash’s schizophrenia in Wilber’s interior-individual quadrant, unlike Schizoid Personality Disorder, which would find its home in the exterior-individual quadrant because it is an overpowering force of neurology beyond the patient’s conscious control. Seen from this angle, the contributing factors to Nash’s ‘madness’ are more easily pinpointed. He was perhaps a victim of his genius: psychologists were able to identify Nash’s profound proclivity and skill for pattern recognition as a direct stimulant to his hallucinations. This would be classified as an objective, exterior factor beyond his control: his intellect was a physical part of him not under his conscious ability to regulate. His intellect, then, is one causal factor of his schizophrenia, classified in the exterior-individual quadrant. His obsessive need for acceptance would fall under interior-individual, as this is a subjective motivation he was capable of and eventually managed to regulate and control. Past negative experiences in social interactions, such as being made fun of and isolated at school, or being punished for not attending class and studying on his own, would fall as contributing factors in the exterior-collective quadrant. Finally, his own failures at modeling appropriate social behavior could also be a factor in his schizophrenia, and that would be categorized in Wilber’s interior-collective quadrant for his personal imposition of behavior on society, under his conscious control.
Clearly, approaching Nash’s schizophrenia from Wilber’s comprehensive, systematic model is invaluable to understanding the true nature of his illness. If it had been available when he was in treatment, perhaps the horrors of electroshock therapy and the damages of psychotropic medications could have been avoided. Nash’s triumph over his psychological setback, or at least the one Russell Crowe reenacted on his behalf in “A Beautiful Mind,” was incredible, especially when the treatment he received was so negative and damaging. How many people with less intellect or impetus to engage in self-healing have been lost to emotionless, dull lives as a result of being misdiagnosed by Western psychology? Wilber’s model not only helped to identify Nash’s illness, but it also provided a pragmatic venue for organizing both the contributing factors of his disorder and its symptoms. If Nash or his psychiatrists had been privy to such a thorough, procedural outline of his case, perhaps the self-healing he managed to induce could have been better facilitated.
John Nash by Goncalo L. Fonesca: http://cepa.newschool.edu/het/profiles/nash.htm | <urn:uuid:d9fe93f5-f0d5-416f-be83-e1276e640044> | CC-MAIN-2023-50 | http://www.actforlibraries.org/integral-psychology-in-ron-howards-cinematic-portrayal-of-john-nash/ | s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100942.92/warc/CC-MAIN-20231209170619-20231209200619-00047.warc.gz | en | 0.983267 | 2,500 | 2.859375 | 3 |
Natural Habitats Creating Green PETS in Schools
Stanmore Bay Primary School in Whangaparaoa will be the pilot school for the first sustainable green wall project completed in New Zealand. Affectionately named Green PETS, the green wall is made from recycled polyethylene terephthalate (PET) plastic bottles. The plants grow in a ‘top secret’ Natural Habitats nutrient media mix, with its own irrigation system.
Graham Cleary said, “It’s a simplistic idea, the community working together to collect, recycle and build a sustainable garden in the vertical plane. We have created a habitat for small animals and insects while instilling intrinsic environmental values in our next generation. Our aim is to reduce our carbon footprint using everyday items to create a beautiful edifice”.
The project’s components will be sourced, constructed, planted and installed at the school. The children, teachers, parents and whanau will be actively involved in the process, from planning, collecting and recycling the plastic bottles through to the construction, planting and installation of the wall.
The school children will collect the plastic drinking PET bottles for recycling and participate in the construction of the wall with adult assistance.
The PET green wall project will promote education in:
• Horticultural practices
• The environment
• New Zealand native plant species
• Creating eco systems and food sources for plants, insects and birdlife
...as well as working collaboratively with the community on a community-based project. We also aim for the school to be proud of its achievement in being the first to fully construct this recycled green technology in New Zealand.
The PET Bottle Greenwall Project was developed from an idea Mark Paul, Australia’s own green wall guru and founder of The Greenwall Company, had while working with his Brazilian licensee Bruno Resendez de Silva in schools in the favelas of Rio de Janeiro. The outcomes there were dramatic, with long-lasting impact for the residents of this very hostile urban environment.
Why Natural Habitats is promoting Green PETS:
• Creating a home and food source for birds and bees through specific plant selection
• Young people and their families are encouraged to think about the topical matters of their environment and sustainability
• A proven concept that involves young children nationwide in creating a living legacy
• It’s an investment in educating and inspiring young people about their environmental responsibility
• A common household waste item is productively recycled
• Kids are taught basic skills in horticulture, craftwork, and team play
• Job creation, and work skills for aspiring young horticulturists / landscapers
Green Walls Background:
Since storming into public view in Paris in the early 2000s, green walls have become some of the most talked-about and coolest pieces of architecture, and they are good for the environment.
Green walls (also known as living walls, green facades, bio walls or vertical vegetation) are an innovative way of greening a vertical surface: they are magnificent to look at, have health benefits, and contribute towards a company’s green star rating. They make a bold visual statement, which assists companies in becoming leaders in sustainability.
Natural Habitats (NZ) and Mark Paul’s The Greenwall Company (Australia) have been the leading innovators and installers of these beautiful pieces of functional art in the Southern Hemisphere. Leading corporates in Auckland, Melbourne and Sydney have snapped them up because of the branding association of green walls with the exemplary ideals they aspire to. Local installations include Britomart, Westpac, Goodman and Google, to name but a few.
Natural Habitats is New Zealand’s largest and leading integrated landscape company. They are renowned for the quality of their work and recognized for their award winning landscape design, build and care. In New Zealand, Natural Habitats are at the forefront of the movement towards green technology in architecture. To find out more visit www.naturalhabitats.co.nz. | <urn:uuid:025a98a3-c577-4480-8466-c5b171659424> | CC-MAIN-2014-42 | http://www.scoop.co.nz/stories/ED1311/S00085/natural-habitats-creating-green-pets-in-schools.htm | s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1414637900030.8/warc/CC-MAIN-20141030025820-00052-ip-10-16-133-185.ec2.internal.warc.gz | en | 0.933753 | 830 | 2.90625 | 3 |
By Jesse Wood
On August 21, the “Great American Total Solar Eclipse” will occur along a 70-mile wide path from Salem, Oregon on the West Coast to Charleston, S.C., on the East Coast. While the last total solar eclipse to take place in the continental U.S. happened 47 years ago, the upcoming eclipse will be the first coast-to-coast eclipse since 1918.
Along the “path of totality” for this solar eclipse, daytime darkness will last up to about 2 minutes and 40 seconds as the moon totally covers the sun. The temperature will also drop 10 degrees or more, and stars will come out if the sky is clear.
Franklin is among the larger towns in the eclipse’s path in North Carolina and those couple minutes of darkness will occur at about 2:35 p.m. EDT.
A complete night sky during the day will not occur outside of the “path of totality.” Folks in Asheville will see the moon cover about 99.2 percent of the sun at 2:37 p.m., while the greatest partial eclipse for Boone will occur at 2:36 p.m. with 95 percent coverage.
While the total eclipse is brief, the entire eclipse event could last up to four hours. According to info on Appalachian State’s 2017 solar eclipse page, the first sight of the moon on the solar disk will occur at 1:10 p.m., while the moon completely exits the solar disk at 4:01 p.m.
Obviously, good weather and a clear vantage point are paramount for viewing a solar eclipse. But you must also be situated inside the “path of totality” to experience the rare natural event in its total awesomeness.
Dan Caton, professor and director of observatories at Appalachian State University in the Department of Astronomy and Physics, encourages everyone to try to venture into that 70-mile wide path on the afternoon of Aug. 21.
“The difference is really night and day,” said Caton, between a partial (even at 95 percent) and total eclipse.
Space.com described a breathtaking total eclipse as follows:
“The disk of the moon blocks out the last sliver of light from the sun, and the sun’s outer atmosphere, the corona, becomes visible. The corona is far from an indistinct haze; skywatchers report seeing great jets and ribbons of light, twisting and curling out into the sky.
“It brings people to tears,” Rick Fienberg, a spokesperson for the American Astronomical Society (AAS), told Space.com of the experience. “It makes people’s jaw drop.”
Check out a video of a solar eclipse in 2010 in Argentina:
Make sure to follow safety measures when viewing the eclipse. NASA states that the sun is unsafe to look at directly except during the brief period of a total solar eclipse. When looking at the sun or a partially eclipsed sun, NASA recommends using eclipse glasses or handheld solar viewers that meet the ISO 12312-2 international standard.
Also seek expert advice when attempting to view the eclipse through a camera, telescope, binocular and other optical devices. Click here for NASA’s safety page on solar eclipses.
NASA’s Total Eclipse 2017 page: https://eclipse2017.nasa.gov/
App State’s Total Eclipse 2017 page: https://cas.appstate.edu/solar-eclipse-2017
NationalEclipse.com: 10 Unique Places To View Total Solar Eclipse
State by State Guide for Viewing Solar Eclipse
Local events: https://cas.appstate.edu/solar-eclipse-2017/events
Kids Learning Event Hosted by Watauga Library July 18 and July 20
The Watauga Library is having a Look Up to the Stars program with Kevin Manning on Tuesday, July 18 at 7 p.m. Western Watauga is having the program on Thursday, July 20 at 7 p.m. This is an informational program on the upcoming solar eclipse on Aug. 21, geared for ages 8 and up. Free solar glasses will be given out. This program is funded by a community grant from Blue Ridge Energy. For more information about this program, visit http://www.lookuptothestars.com/
App State Event Day of Solar Eclipse
Monday, August 21, 2017, 1 p.m. – 4 p.m.
Sanford Mall and Grandfather Mountain Ballroom, Plemmons Student Union @ Appalachian State University
Live streaming of the eclipse from a location of totality, telescopes on the Mall and many interdisciplinary activities on this last day of summer and the day before fall semester classes begin.
Eclipse times in Boone:
- Ingress – 1st sight of Moon on solar disk – 1:10 p.m.
- Greatest partial eclipse ~ 95% coverage – 2:36 p.m.
- Egress – Moon completely leaves solar disk – 4:01 p.m.
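The Boone timeline above fixes the overall length of the event; a quick check with Python's `datetime`, using only the times quoted:

```python
from datetime import datetime

FMT = "%I:%M %p"
ingress = datetime.strptime("1:10 PM", FMT)  # first sight of Moon on solar disk
egress = datetime.strptime("4:01 PM", FMT)   # Moon completely leaves solar disk

minutes = (egress - ingress).total_seconds() / 60
# 171 minutes: just under three hours from first to last contact in Boone
```

So while totality near the path lasts under three minutes, the partial phases stretch the event in Boone to nearly three hours.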
With Gov. Jerry Brown upping the ante by calling for California to put at least five million zero-emission vehicles on state roads by 2030, and the state Assembly considering a bill that would ban sales of gas-powered cars, it’s worth testing these ambitious visions of our clean transportation future against market and technological realities.
Our new report does just that. While it was compiled before the governor announced the new goal last month, it finds that California is well on its way to meeting the previous target of 1.5 million ZEVs by 2025.
As with other advanced technologies, instead of electric vehicle sales growing steadily each year, they appear headed to a tipping point, where they grow exponentially until they become common. Smart phones followed this pattern; electric cars could be as common by 2040 as smart phones are today.
The cost of EV batteries – the vehicles’ most expensive component – dropped by 74 percent from 2010 to 2016, even as driving range increased every year. Last year, global electric vehicle sales topped one million for the first time in history. Countries including China, India and the U.K. have announced plans to phase out gas-powered cars entirely.
In addition to the market and technological improvements we’re seeing, analysts believe the advent of autonomous vehicles will further change the automotive industry. Fleets of self-driving electric cars that people hail instead of owning their own vehicles may not be that far off.
But other factors could slow the growth of zero-emission transportation. There’s a gap between the number of electric vehicles on California’s roads and the number of available charging stations. While we lead the nation with more than 16,500 public, nonresidential charging outlets, that’s only 0.05 outlets per ZEV, one of the lowest ratios in the country. To maintain or accelerate growth, the state needs to provide more charging options. Brown’s plan to put an additional 250,000 charging stations on the road will help.
Another consideration is selection. In 2016 in China, 25 new models were introduced and EV sales jumped 70 percent. Chinese drivers can now choose among 75 EVs. In some California cities, dealers offer 25 to 30 models. But many Californians and more than half the U.S population live in areas with seven or fewer EV options. Ensuring additional models make it into showrooms is essential to encouraging higher sales.
In the coming months and years, state policymakers will make decisions that can either smooth the road to a clean transportation future, or throw up roadblocks. Questions abound, including how growing numbers of EVs might reduce gas tax revenues and impact the electric grid.
But one thing is certain: Electric vehicles are poised to change how Californians get around, and as in many fields, as goes California, so might go the nation.
F. Noel Perry is the founder of Next 10, a nonpartisan nonprofit think tank, and can be contacted at email@example.com.
Adam J. Fowler is research manager for Beacon Economics in Los Angeles and can be contacted at Adam@beaconecon.com. | <urn:uuid:3219aa69-9ae1-44aa-812a-8e3af62bfd79> | CC-MAIN-2019-18 | https://www.sacbee.com/opinion/op-ed/soapbox/article199755669.html | s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578602767.67/warc/CC-MAIN-20190423114901-20190423140901-00196.warc.gz | en | 0.954946 | 639 | 2.734375 | 3 |
Multivitamins Do Not Prevent Heart Disease
Vitamins and Heart Disease Risk: Details
In the study, about half of the 15,000 men took a daily multivitamin, Centrum Silver. The other half took placebo. When they started the study, they were 50 or older; their average age was 64.
At the start, 754 men had a history of heart disease or stroke.
In the vitamin group, 876 of 7,317 men had a nonfatal heart attack or stroke, or died from cardiovascular disease. In the placebo group, 856 of 7,324 men did. That made their rates of these major events virtually identical.
There was also no effect of multivitamin use on most individual heart conditions. The rates of any heart attack (fatal or nonfatal), any stroke, and death due to stroke were the same in both groups. There were slightly fewer deaths in the vitamin group than in the placebo group, but the difference was so small it could have been due to chance.
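The "virtually identical" claim is easy to check against the counts given. This is only a raw cumulative-rate comparison; the published analysis would have used survival methods rather than this simple division:

```python
vitamin_events, vitamin_n = 876, 7_317   # vitamin group (quoted above)
placebo_events, placebo_n = 856, 7_324   # placebo group (quoted above)

vitamin_rate = vitamin_events / vitamin_n   # about 12.0% over follow-up
placebo_rate = placebo_events / placebo_n   # about 11.7%
risk_ratio = vitamin_rate / placebo_rate    # about 1.02: no meaningful difference
```

A risk ratio this close to 1.0, in a trial of nearly 15,000 men, is exactly what "no effect" looks like.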
The effect of taking a daily multivitamin on major heart conditions did not differ between men with or without a history of heart disease.
The vitamin generally appeared well-tolerated, Sesso says. Men in the vitamin group were slightly more likely to develop skin rashes.
Sesso notes that the people in the study represented "a well-nourished population who already has adequate or optimum intake levels of nutrients, for whom supplementation may offer no additional benefits." Future research is needed to look at the impact of vitamins on people who don’t eat as well, he says.
The researchers received research funding from the National Institutes of Health. They received vitamins or support from BASF Corporation, Pfizer, and DSM Nutritional Products Inc.
Vitamins and Heart Disease: Perspectives
Doctors say the findings reinforce a message they try to impress upon their patients: Vitamins cannot replace a healthy lifestyle and a good diet.
"Many people with heart disease risk factors or [a history of heart disease] lead [inactive] lifestyles, eat processed or fast foods, continue to smoke, and stop taking lifesaving prescribed medications, but purchase and regularly use vitamins and other dietary supplements in the hope this approach will prevent a future [heart attack] or stroke.
"This distraction from cardiovascular disease prevention is the main hazard of using vitamins and other unproven supplements," Lonn says.
American Heart Association spokesman Elliott Antman, MD, of Harvard Medical School, says, "Thinking of multivitamins as a quick fix can have dangerous consequences. You should not assume that by taking a vitamin, you can forgo the things that work." Antman was not involved with the research.
What works? Eating healthy food, exercising regularly, avoiding tobacco products, and if you have risk factors, taking proven, safe, and effective medications, the doctors say.
Duffy MacKay, ND, vice president of scientific and regulatory affairs at the CRN trade group, says people who use vitamins are the very people who are most likely to have a healthy lifestyle.
"Government and other studies show that supplement users are more likely to be leaner and more physically active than non-supplement users. Our own research shows similar kinds of results, with supplement users being more likely than non-users to try to eat a healthy diet, engage in regular physical activity, and see a doctor regularly. It’s the whole lifestyle package, including consistent, long-term use of vitamins, that helps lead to good health," he says. | <urn:uuid:4da0766e-8504-4122-80e3-1b7b12500784> | CC-MAIN-2013-20 | http://men.webmd.com/news/20121105/multivitamins-heart-disease?page=2 | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368708739983/warc/CC-MAIN-20130516125219-00030-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.960952 | 735 | 2.53125 | 3 |
Can coconut oil seriously kill HIV and other viruses? Search results for the phrase only produce results from sources akin to Natural News.
There is a paper titled "Coconut Oil in Health and Disease: Its and Monolaurin's Potential as Cure for HIV/AIDS" by Conrado S. Dayrit, an Emeritus Professor of Pharmacology at the University of the Philippines.
The paper describes a pilot study of 15 HIV patients who are split into three groups given, respectively, coconut oil, a low dose of monolaurin, and a high dose of monolaurin over a period of 6 months.
In summary, the study is very poorly designed:
- No control group
- Small sample size (15 participants)
- Study period is too short (6 months) and too few measurements are taken (baseline, +3 months, +6 months)
- Effects are not very distinct, especially since the markers used in the study (i.e. Viral load, CD4, and CD8) naturally varies anyway
- Several participants developed AIDS during the study; one who took coconut oil actually died shortly after the study finished
- One of the participants doesn't even appear to actually have HIV, at least during the study period
- The paper is too short; it doesn't describe the controls used in the study or how long the patients had been HIV-positive.
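The sample-size complaint can be put in numbers. A rough sketch using the standard normal approximation for a two-sample comparison (two-sided α = 0.05, 80% power); `min_detectable_effect` is an illustrative helper and the calculation is not from the paper:

```python
import math

def min_detectable_effect(n_per_arm, z_alpha=1.96, z_power=0.84):
    """Smallest true difference (in units of the outcome's standard
    deviation) that a two-sample comparison would reliably detect,
    using the usual normal-approximation power formula."""
    return (z_alpha + z_power) * math.sqrt(2 / n_per_arm)

# Five patients per arm, as in the three-way split of 15 patients:
mde = min_detectable_effect(5)   # about 1.77 standard deviations
```

With noisy markers like viral load and CD4 counts, a real treatment effect would have to shift the group mean by nearly two standard deviations before this design could distinguish it from chance, which is why the "effects are not very distinct" criticism is so damning.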
A review paper published in the Asian Pacific Journal of Tropical Medicine in 2011 shows antibacterial, antiviral and antifungal effects of coconut oil.
Manisha DebMandal et al., Asian Pacific Journal of Tropical Medicine (2011) 241-247
5.9. Antibacterial activity
The most abundant and potent MCFA (medium chain saturated fatty acids) in coconut is lauric acid, which comprises nearly 50% of coconut’s fat content. The MCFAs and their derivatives e.g., MGs found in coconut are effective in destroying a wide assortment of lipid-coated bacteria by disintegrating their lipid membrane. For instance, they can be effective against bacteria that can lead to stomach ulcers, sinusitis, dental cavities, food poisoning, and urinary tract infections. Monoglycerides, especially Monolaurin, has been used to protect intravenously administrable oil-in-water emulsion compositions against growth of Escherechia coli (E. coli), Pseudomonas aeruginosa (P. aeruginosa), Staphylococcus aureus (S. aureus) and Candida albicans (C. albicans). The compositions can be medicaments containing lipophilic drugs, especially Propofol, and/or total intravenous nutritional compositions. [ ... ] Thus, like many other important medicinal plants having antibacterial property[26,27], C. nucifera is also excellent against different pathogenic bacteria causing several life-threatening infection to humans.
5.12. Antiviral effect
Coconut oil is very effective against a variety of lipid-coated viruses such as visna virus, CMV, Epstein-Barr virus, influenza virus, leukemia virus, pneumonovirus, and hepatitis C virus. The MCFA in coconut oil primarily destroy these organisms by disrupting their membranes and interfering with virus assembly and maturation. [...]
5.13. Antifungal effect
The antimicrobial spectrum of monolaurin is broad including fungal species such as Aspergillus sp., Penicillium sp., Cladosporium sp., Fusarium sp., Alternaria sp., C. albicans, Fonsecaea pedrosoi and Cryptococcus neoformans. [...] They can also help combat yeast overgrowth, such as candida and thrush. [...]
Enig, M.G. - Coconut: In support of good health in the 21st Century, 2004:
approximately 6-7% of the fatty acids in coconut fat are capric acid. Capric acid is another medium chain fatty acid, which has a similar beneficial function when it is formed into monocaprin in the human or animal body. Monocaprin has also been shown to have antiviral effects against HIV and is being tested for antiviral effects against herpes simplex and antibacterial effects against chlamydia and other sexually transmitted bacteria. (Reuters, London June 29, 1999) | <urn:uuid:886e12d3-f642-43c5-9b0b-df7d3b53d993> | CC-MAIN-2020-24 | https://skeptics.stackexchange.com/questions/37570/how-antiviral-is-coconut-oil | s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347436466.95/warc/CC-MAIN-20200603210112-20200604000112-00007.warc.gz | en | 0.927633 | 907 | 2.5625 | 3 |
A Digital Tachometer is an electronic instrument which is used for measuring the speed of a motor or any device which has a rotating component in it (e.g. a shaft or disc). It displays the revolutions per minute (rpm) or revolutions per second (rps) depending on how the timer has been programmed in the micro-controller. Our robot can calculate both the revolutions per minute (rpm) and the revolutions per second (rps). It also calculates the distance travelled by the robot until it is switched off. It displays these values using a JHD 162A Liquid Crystal Display.
Video demo of the project:
List of components used:
- We used the ATmega16 micro-controller for our project. It has a clock frequency of 1 MHz.
- Two different breadboards were used in the circuit, one primarily supplied voltage to the ATmega16 micro-controller and the L293D motor driver, while the other supplied to the Liquid Crystal Display.
- A metal chassis was used as the base of the robot, to which the DC geared motors were connected.
- The wheels were connected to the DC geared motors and controlled by ATmega16 micro-controller.
- A metal extension is used from the chassis for a more precise placement of the IR sensor.
- There are two IR sensors that were used, one IR sensor was connected to the extension and placed over the wheel, to measure the revolution of the wheel. It was connected to the micro-controller using a three pin RMC cable at pin PD2. The other IR sensor was connected to the PD7 pin of the micro-controller to switch ON/OFF the robot.
- We made use of the JHD 162A Liquid Crystal Display which is a 16×2 LCD. The Hitachi HD44780 LCD controller was used.
- Custom characters and numbers were displayed on it. The LCD has a parallel interface.
- There are 16 pins on this specific LCD. They were connected to the micro-controller as follows-
Connections from LCD to the ATmega16:
Pin 1 (Vss) –Gnd
Pin 2 (Vcc)- 5 Volt
Pin 3 (Vee)- Potentiometer third pin.
Pin 4 (Rs)- PA0
Pin 5 (R/W)-PA1
Pin 6 (Enable)-PA2
Pin 7-14 (DB0-DB7) -PB0-PB7
Pin 15 (LED+) -5 Volt
Pin 16 (LED-) – Gnd
- We used Atmel Studio 6.2 as the software for our project.
- The platform requires some prerequisite knowledge of C programming.
- A brief study of the ATmega16 datasheet is encouraged before starting the project, especially the timers and counters section.
Header files involved:
#include <avr/io.h> : It contains appropriate input output definition for the ATmega16 controller to be used. The register set and the bit values and their significance are recognized only if this header file is declared.
#include <avr/interrupt.h> : The concept of programmed interrupt can be done only if this header file is introduced. It contains all the relevant registers and functions required to enable and disable interrupts.
#include <util/delay.h> : This header file contains the functions for specifying any time delays in terms of actual values rather than mentioning the number of cycles to wait for.
#include <string.h> : This header file contains commands used for manipulating strings that are to be displayed on the LCD as far as this project is concerned. For example determining the length of string that is to be displayed.
Macro definitions : This is a feature available in C (and hence in Atmel Studio 6.2) for giving an alias name (user defined) to certain keywords, used just to make the program more readable and understandable.
These are global constant declarations that are used throughout the program.
#define F_CPU 1000000UL : This macro word F_CPU is used to specify the internal clock frequency of the ATmega16 micro-controller. Note that F_CPU must be defined before <util/delay.h> is included so that the delay routines are calculated for the correct frequency.
#define ldata PORTB : This macro word ldata is used for PORTB of the ATmega16 (micro-controller) to specify that the port is being used as a data port and is interfaced with the LCD.
Into the main !!!!!
Initially the respective ports were declared as input or output ports based on how they were to be used.
As mentioned earlier, PORTB was declared as an output port so that it can be connected to the LCD.
PORTA was also declared as an output port for the register-select, read/write and enable lines of the LCD, and additionally for the motor driver inputs.
PORTD was declared as an input port for the wheel IR sensor as well as the on/off IR sensor, so that the program can be interrupted accordingly.
Internal pull-ups were enabled on the sensor input pins to avoid fluctuations from floating inputs.
All this was done using the DDRX (Data Direction Register) and PORTX registers.
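The direction-register writes described in this section can be sketched in C as below. The register names match the ATmega16 datasheet, but here they are modeled as plain variables (on the target they come from <avr/io.h>) so the fragment runs anywhere, and the exact masks are our assumption based on the wiring listed earlier.

```c
#include <stdint.h>
#include <assert.h>

/* Host-side stand-ins for the ATmega16 I/O registers. */
uint8_t DDRA, DDRB, DDRD, PORTD;

enum { PD2 = 2, PD7 = 7 };   /* wheel sensor and on/off sensor pins */

/* Direction setup as described above: PORTB drives the LCD data bus,
 * PORTA drives the LCD control lines and the motors, PORTD reads the
 * two IR sensors.  Pull-ups on the sensor pins avoid floating inputs. */
void setup_ports(void)
{
    DDRB  = 0xFF;                                  /* PORTB pins as outputs */
    DDRA  = 0xFF;                                  /* PORTA pins as outputs */
    DDRD  = 0x00;                                  /* PORTD pins as inputs  */
    PORTD = (uint8_t)((1u << PD2) | (1u << PD7));  /* enable pull-ups       */
}
```

On the real chip the same four assignments would appear near the top of main(), before the LCD and motors are first driven.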
Dealing with timers and counters:
TCCR1A : The timer/counter-1 control register A was set to a value of all zeros to operate in normal mode. In this mode the counter is incremented in one direction and no automatic counter clear is performed, which is what this application requires. It counts from the minimum (0000) to the maximum (FFFF) value and then overflows, at which point the interrupt is triggered automatically.
The last two bits (WGM11:WGM10) decide the mode of operation and are set to the combination 00 for normal mode. The other bits are left at their initial values since the PWM and CTC modes of operation are not required for this application.
The timer/counter control register B was set to a value of 03H so that a pre-scale by 64 is achieved specifically. The other bits denoting input capture noise cancellation, reserved bit, waveform generation mode are set to zero since they are not required for this application.
The last three bits CS12, CS11, CS10 were set to 011 specifically to achieve a pre-scale by 64.
The timer/counter-1 register is a 16-bit register which is split into two parts, TCNT1H and TCNT1L. These represent the upper and lower bytes of the value that is fed to the entire register, and they basically hold the starting count. They were by default set to a value of 00H.
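Putting the three register values from this section together, the timer setup might look like the following sketch. The registers are again modeled as host-side variables, and the values 0x00 and 0x03 are taken directly from the text.

```c
#include <stdint.h>
#include <assert.h>

/* Host-side stand-ins for the ATmega16 timer-1 registers. */
uint8_t  TCCR1A, TCCR1B;
uint16_t TCNT1;            /* 16-bit counter = TCNT1H:TCNT1L */

/* Timer-1 setup as described above: normal (free-running) mode,
 * prescale the 1 MHz system clock by 64 (CS12:CS10 = 011), start at 0. */
void setup_timer1(void)
{
    TCCR1A = 0x00;         /* WGM11:WGM10 = 00 -> normal mode     */
    TCCR1B = 0x03;         /* CS12:CS10 = 011 -> clk/64 prescaler */
    TCNT1  = 0x0000;       /* starting count                      */
}
```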
Concept of Interrupts:
Once the interrupt.h has been declared a procedure has to be followed to make meaningful use of the interrupts.
In the programming part of it we made use of the following special interrupt functions: cli() and sei(). These two functions were used for clearing and enabling global interrupts, respectively.
The important registers we used in the case of interrupts are the following:
GICR (Global Interrupt Control Register): This register is used to select which type of interrupt request we are going to use.
SREG (Status Register): This register is used to enable/disable a specific interrupt. It informs the micro-controller whether it has to respond to the interrupt when triggered or not.
It is necessary to set the 7th bit (the global interrupt enable, or I-bit) high in order to enable interrupts in general. One disadvantage is that the 7th bit gets reset whenever an interrupt request has been responded to by the micro-controller, hence the bit has to be restored afterwards. The sei() function solves this problem.
MCUCR (MCU Control Register): This register tells under what condition (for example, a low level or an edge on the external interrupt pin) the micro-controller has to respond to the request.
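A sketch of the interrupt arming described above. Here sei()/cli() are modeled as functions that set and clear bit 7 of a stand-in SREG, the INT0 enable bit position in GICR (bit 6) is taken from the ATmega16 datasheet, and the low-level sense setting (ISC01:ISC00 = 00) is our assumption based on how the ISR is described later in the write-up.

```c
#include <stdint.h>
#include <assert.h>

/* Host-side stand-ins; on target these are SFRs from <avr/io.h> and
 * sei()/cli() come from <avr/interrupt.h>. */
uint8_t GICR, MCUCR, SREG;
enum { INT0_BIT = 6 };     /* INT0 enable bit in GICR (ATmega16) */

void sei_model(void) { SREG |= (uint8_t)(1u << 7); }            /* set I-bit   */
void cli_model(void) { SREG = (uint8_t)(SREG & ~(1u << 7)); }   /* clear I-bit */

/* Arm the external interrupt on PD2: a low level on INT0 requests the
 * ISR (ISC01:ISC00 = 00), then enable interrupts globally. */
void setup_interrupts(void)
{
    cli_model();                       /* configure with interrupts off    */
    GICR  = (uint8_t)(1u << INT0_BIT); /* enable the INT0 request          */
    MCUCR = 0x00;                      /* low level of INT0 triggers it    */
    sei_model();                       /* global enable (I-bit in SREG)    */
}
```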
Basically an infinite loop is introduced in which a check for the on/off sensor, which behaves like a switch, is performed.
Two possibilities then arise. The first case is when the switch is on: within the loop, the pins of the micro-controller are made high and low in such a way that the robot is driven forward.
A check is then performed to detect the completion of a second; this is achieved using the TCNT1 register, as follows: if(TCNT1>=15624)
The value of 15624 was obtained by making use of the divide-by-64 prescaler: the clock frequency of 1000000 Hz divided by the prescale factor of 64 gives 15625 timer ticks per second, and since the count starts from zero, a value of 15624 marks the completion of one second.
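The arithmetic behind that threshold can be captured in a small helper:

```c
#include <stdint.h>
#include <assert.h>

/* Number of timer ticks in one second: system clock / prescale factor.
 * With F_CPU = 1000000UL and a prescaler of 64 this is 15625, so the
 * free-running counter reaches 15624 (counting from 0) once per second. */
uint16_t ticks_per_second(uint32_t f_cpu, uint16_t prescaler)
{
    return (uint16_t)(f_cpu / prescaler);
}
```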
Within this, the rps was obtained by multiplying the count variable by 0.25. The multiplication factor is 0.25 since 4 paper clips were inserted on the wheel in our project.
Any number of paper clips can be introduced on the wheel and depending on it the multiplication must be done. For example if 8 paper clips are used a multiplication by 0.125 must be done.
A simple conversion between rps and rpm was done by multiplying the rps by 60.
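The count-to-speed conversions just described can be written as small pure helpers; taking the clip count as a parameter makes the 0.25 and 0.125 factors fall out naturally:

```c
#include <assert.h>

/* Pulses-per-second -> revolutions per second: with 4 paper clips on
 * the wheel each pulse is a quarter turn, so divide by the clip count
 * (equivalent to multiplying by 0.25 for 4 clips, 0.125 for 8, etc.). */
double rps_from_count(unsigned count, unsigned clips)
{
    return (double)count / (double)clips;
}

/* Simple rps -> rpm conversion. */
double rpm_from_rps(double rps)
{
    return rps * 60.0;
}
```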
Moving on to the case where the IR sensor behaves as an off-state switch: the respective motor pins on the micro-controller were driven to zero so that the motor stops.
Once the motor stops, the distance travelled is calculated from the accumulated revolution count and displayed.
In the case of perfectly divisible values of rps, the trailing zeros were explicitly printed on the LCD screen using a control variable, so that a whole-number value still appears with its decimal places.
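The original distance calculation and the zero-printing snippet did not survive in the write-up, so the following is a hedged reconstruction: distance is taken as total revolutions times an assumed wheel circumference (the 0.20 m figure is purely illustrative), and the integer-only formatter shows one way the trailing zeros could be produced for perfectly divisible values.

```c
#include <stdio.h>
#include <string.h>
#include <assert.h>

/* Distance from the total pulse count: revolutions x wheel circumference.
 * WHEEL_CIRCUMFERENCE_M is an assumed value for illustration only. */
#define WHEEL_CIRCUMFERENCE_M 0.20

double distance_m(unsigned long total_pulses, unsigned clips)
{
    return ((double)total_pulses / (double)clips) * WHEEL_CIRCUMFERENCE_M;
}

/* Integer-only formatter: prints count/clips with two decimals so that
 * perfectly divisible values still show their zeros (e.g. "2.00").
 * Avoids floating-point printf, which is costly on an 8-bit AVR. */
void format_rps(char *buf, size_t n, unsigned count, unsigned clips)
{
    unsigned whole = count / clips;
    unsigned frac  = (count % clips) * 100u / clips;
    snprintf(buf, n, "%u.%02u", whole, frac);
}
```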
After this the count values were reset to zero
Apart from these aspects, an interrupt service routine is executed whenever a low level is present on the external interrupt pin (INT0, i.e. PD2) of the ATmega16. In the ISR a constant check on PD7 was done to verify that the switch is on at all times. Once this was checked, a check was made on whether black was detected by the IR sensor at PD2. If not, the count is incremented continuously.
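The counting inside the ISR can be viewed as edge detection: a clip should be counted once per pass, not once per loop iteration while it is in view. The helper below runs that logic over an array of recorded sensor samples so it can be exercised off-target; the polarity convention (1 = surface seen, 0 = black/clip seen) is our assumption, not something stated in the write-up.

```c
#include <stddef.h>
#include <assert.h>

/* Edge-counting sketch: given successive sensor readings, count each
 * 1 -> 0 transition exactly once.  On the robot the same comparison
 * would run inside the INT0 service routine instead of over an array. */
unsigned count_pulses(const unsigned char *samples, size_t n)
{
    unsigned count = 0;
    for (size_t i = 1; i < n; i++)
        if (samples[i - 1] == 1 && samples[i] == 0)
            count++;                  /* falling edge = one clip passed */
    return count;
}
```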
The final robot: | <urn:uuid:7b87bf6d-a924-482c-9fa6-3e4cd3f68469> | CC-MAIN-2019-26 | http://lemalabs.com/digital-tachometer/ | s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627998376.42/warc/CC-MAIN-20190617043021-20190617065021-00246.warc.gz | en | 0.936092 | 2,013 | 3.4375 | 3 |
Topology: branch of mathematics, formerly known as analysis situs, that studies patterns of geometric figures involving position and relative position without regard to size. Topology is sometimes referred to popularly as "rubber-sheet geometry" because a figure can be changed to that of an equivalent figure by bending, stretching, twisting, and the like, but not by tearing or cutting.
The Columbia Electronic Encyclopedia, 6th ed. Copyright © 2012, Columbia University Press. All rights reserved.
The celebrated Rosetta Stone which supplied Champollion with the key for the decipherment of the ancient monuments of Egypt was found near Fort St Julien, 4 m.
Layard at Nineveh opened up a new world, coinciding as they did with the successful decipherment of the cuneiform system of writing.
James Prinsep was then devoting his rare genius to the decipherment of the early inscriptions of northern India, especially those of Asoka in the 3rd century B.C. He derived the greatest assistance from Turnour's work not only in historical information, but also as regards the forms of words and grammatical inflexions.
It was formerly the custom to assign the invention of algebra to the Greeks, but since the decipherment of the Rhind papyrus by Eisenlohr this view has changed, for in this work there are distinct signs of an algebraic analysis.
Before the decipherment of the cuneiform texts our knowledge of its history, however, was scanty and questionable. | <urn:uuid:62559e81-7a24-425d-a9b9-776fe29f9c79> | CC-MAIN-2018-34 | http://thesaurus.yourdictionary.com/decipherment | s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221216724.69/warc/CC-MAIN-20180820180043-20180820200043-00424.warc.gz | en | 0.970191 | 212 | 3.328125 | 3 |
Template revision history
Revision history for the research lesson proposal template
- June 13, 2018 – Added a link for Open Up Resources as a possible resource for grades 6-8: https://im.openupresources.org/
- April 4, 2018 – Clarifies how goals should be stated. Switched to older, open “.doc” file format, away from the proprietary .docx format.
- Feb 24, 2017 – Introduction clarifies the purpose of the document. Relationship to standards includes a link to the Concept Maps on AchieveTheCore.org. Other small changes to explanatory text.
- Dec 3, 2016 – Includes a suggested order for working through the document (hint: don’t start at the top and work down).
- Nov 16, 2016 – more clarifying text, including specific examples of what should be in the third column (Points of Evaluation), which many teams have had trouble with.
- May 9, 2016 – text clarifying the purpose of the third column in the lesson plan; links to potentially helpful articles about learning goals and monitoring student work.
- November 25, 2015, to include guidelines for the reflection. | <urn:uuid:dbfbdb00-3b45-4548-a167-69f53170d6d1> | CC-MAIN-2019-43 | http://www.lsalliance.org/resources/template-revision-history/ | s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570987833089.90/warc/CC-MAIN-20191023094558-20191023122058-00055.warc.gz | en | 0.901086 | 242 | 2.796875 | 3 |
What most people call a bunion is actually known as "Hallux valgus". Hallux valgus refers to the condition in which the big toe is angled excessively towards the second toe, and a bunion is a symptom of the deformity. In a normal foot, the big toe and the long bone that leads up to it (the first metatarsal) are in a straight line. However, hallux valgus occurs when the long foot bone veers towards your other foot and your big toe drifts towards your second toe. A bunion actually refers to the bony prominence on the side of the big toe. This can also form a large sac of fluid, known as a bursa, which can then become inflamed and sore.
Bunions form when the normal balance of forces that is exerted on the joints and tendons of the foot becomes disrupted. This disruption can lead to instability in the joint and cause the deformity. Bunions are brought about by years of abnormal motion and pressure over the MTP joint. They are, therefore, a symptom of faulty foot development and are usually caused by the way we walk and our inherited foot type or our shoes.
Symptoms
The signs and symptoms of a bunion include a bulging bump on the outside of the base of your big toe; swelling, redness or soreness around your big toe joint; thickening of the skin at the base of your big toe; corns or calluses, which often develop where the first and second toes overlap; persistent or intermittent pain; and restricted movement of your big toe. Although bunions often require no medical treatment, see your doctor or a doctor who specializes in treating foot disorders (podiatrist or orthopedic foot specialist) if you have persistent big toe or foot pain, a visible bump on your big toe joint, decreased movement of your big toe or foot, or difficulty finding shoes that fit properly because of a bunion.
Diagnosis begins with a careful history and physical examination by your doctor. This will usually include a discussion about shoe wear and the importance of shoes in the development and treatment of the condition. X-rays will probably be suggested. This allows your doctor to measure several important angles made by the bones of the feet to help determine the appropriate treatment.
Non Surgical Treatment
Sometimes observation of the bunion is all that's needed. A periodic exam and x-ray can determine if your bunion deformity is advancing. Measures can then be taken to reduce the possibility of permanent damage to your joint. In many cases, however, some type of treatment is needed. Conservative treatments may help reduce the pain of a bunion. These options include changes in shoe-wear. Wearing the right kind of shoes is very important. Choose shoes with a large toe box and avoid narrow high-heeled shoes which may aggravate the condition. Padding. Pads can be placed over the area to reduce shoe pressure. Medication. Nonsteroidal anti-inflammatory drugs may help reduce inflammation and reduce pain. Injection therapy. Injection of steroid medication may be used to treat inflammation that causes pain and swelling, especially if a fluid-filled sac has developed about the joint. Orthotic shoe inserts. By controlling the faulty mechanical forces, the foot may be stabilized so that the bunion becomes asymptomatic.
The decision to have bunion surgery is personal and different for everyone. While there are many reasons to have bunion surgery, the most common reasons include pain, difficulty walking, difficulty fitting shoes, a worsening bunion, pain at the ball of the foot, and failed conservative measures. See Non-surgical Treatment. Some people have surgery simply because they don't like the way the bunion looks. While some doctors may correct your bunion if it doesn't hurt, you should be aware that permanent pain may occur after your surgery.
Here are some tips to help you prevent bunions. Wear shoes that fit well. Use custom orthotic devices. Avoid shoes with small toe boxes and high heels. Exercise daily to keep the muscles of your feet and legs strong and healthy. Follow your doctor's treatment and recovery instructions thoroughly. Unfortunately, if you suffer from bunions due to genetics, there may be nothing you can do to prevent them from occurring. Talk with your doctor about additional prevention steps you can take, especially if you are prone to them.
+ Posted on 1396/5/10 at 09:00 by shaythiel7
Gingival inflammation, or swollen gums, occurs when plaque (a sticky deposit containing bacteria) builds up on the teeth and produces toxins that irritate the gums, because of poor dental hygiene or because the saliva is too acidic. In pregnancy it is exacerbated during a period of progesterone and estrogen imbalance. These hormones cause increased vascularity of connective tissue (increased blood flow to the mouth), making the gums friable and thus contributing to gingivitis.
When do they appear?
If the mother had pre-existing gingival inflammation, it tends to appear in the second month and peaks in the eighth month of gestation. It then decreases in the last month of pregnancy, and postpartum the gums return to a state similar to that of the second month of pregnancy.
Area most affected?
Commonly the papillae (the inter-proximal areas) of the anterior teeth and the molars are the areas that get gingivitis throughout pregnancy. Swollen gums may be sore and can be more susceptible to bleeding, especially when brushed or flossed.
If gingivitis is left untreated in pregnancy it may lead to periodontitis (a gum infection that damages the soft tissue and destroys the bone), usually in mothers who suffer from serious gum disease. This may lead to preterm labor, low birth weight and preeclampsia, with really serious complications possible if the bacteria enter the bloodstream.
You may also develop (though rarely) a focal, highly vascular small lump or nodule called epulis gravidarum, a pyogenic granuloma that occasionally appears in an area where you have gingivitis but typically regresses spontaneously after delivery. If it is really uncomfortable, interferes with chewing or brushing, or starts to bleed too much, it can be removed during pregnancy.
Take the best dental and oral care, including brushing your teeth, flossing, and scraping your tongue twice a day. Try to always buy high-quality products for your health care, especially during pregnancy.
- For plaque removal you may use Crest Gum Detoxify Deep Clean toothpaste and Crest Gum Care mouthwash (Cool Wintergreen), which neutralize plaque bacteria around your gum line and reduce bleeding gums.
- Eat a lot of vitamin C rich food for gum health
- Take calcium for dental support
Note: Baby obtains calcium from the mother’s body stores, not from her teeth.
These are some really good remedies you also can use:
- Arnica reduces swelling and gum pain.
- Gargling with warm salt water or warm lemon water reduces swelling.
- Use lemongrass essential oil as a mouthwash; it has been found more effective than traditional chlorhexidine mouthwash at reducing plaque and gingivitis levels.
- Rub aloe vera on the gums or use an aloe vera mouthwash, as effective as chlorhexidine.
- Massage tea tree essential oil into the gums to reduce swelling, pain and irritation and significantly reduce gingival bleeding.
- Chew sugarless (xylitol) gum to decrease the rate of tooth decay; it increases saliva and equalizes the pH in the mouth, reducing cavity growth.
Lowering a bucket - the rope, the bucket, the air, the earth
Maya stands on a cliff and gently lowers a bucket of water to the ground using a rope. The bucket starts out with an initial downwards velocity, but Maya is tightening their grip on the rope as it slides through their hands, so that it slows as it descends, and when the bucket touches the ground it has zero velocity.
Consider the energy in the situation and how the work is being done if the system consists of the rope, the bucket, the air, and the earth.
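One compact way to organize the energy bookkeeping is the general work-energy statement for a chosen system (elastic potential energy is omitted here since nothing in the setup stretches):

```latex
W_{\text{ext}} = \Delta K + \Delta U_g + \Delta E_{\text{th}}
```

Here W_ext is the net work done on the system by forces originating outside it, ΔK the change in kinetic energy, ΔU_g the change in gravitational potential energy, and ΔE_th the change in thermal energy; the sign of each term can be read off from the description of the bucket's motion above.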
Is the system open or closed?
Open
Closed
None of the above
What types of energy are in the system? Are they increasing or decreasing?
Select all the choices that apply.
Note: You will be awarded full marks only if you select all the correct choices, and none of the incorrect choices. Choosing incorrect choices as well as not choosing correct choices will result in deductions.
Kinetic energy - Decreasing
Kinetic energy - Increasing
Gravitational potential energy - Decreasing
Gravitational potential energy - Increasing
Thermal energy - Decreasing
Thermal energy - Increasing
Elastic potential energy - Decreasing
Elastic potential energy - Increasing
Motion energy - Decreasing
Motion energy - Increasing
Is the work from external forces being done on the system positive or negative?
Both positive and negative (by different external forces)
No external forces
Problem is licensed under the CC-BY-NC-SA 4.0 license. | <urn:uuid:8c6168dc-375c-497e-b6d6-5393f3072b51> | CC-MAIN-2023-06 | https://firas.moosvi.com/oer/physics_bank/content/public/009.Work/Work/Lowering_bucket_2/Lowering_bucket_2.html | s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500017.27/warc/CC-MAIN-20230202101933-20230202131933-00421.warc.gz | en | 0.887746 | 372 | 3.28125 | 3 |
Investigating the often overlooked evolutionary role of photosynthetic plasticity under fluctuating [O2]:[CO2], Yiotis et al. assess mature plants from two angiosperms, two monilophytes and Ginkgo biloba acclimated to a Triassic-Jurassic boundary (TJB) atmosphere and their photosynthetic plasticity using gas exchange and chlorophyll fluorescence methods.
Contrary to monilophytes, Ginkgo photorespires heavily and displays increased heat dissipation and severe photodamage when exposed to a TJB atmosphere. The observed photodamage reflects Ginkgo’s inability to divert photosynthetic electron flow to sinks other than photosynthesis and photorespiration, and provides insights into the underlying mechanism of Ginkgoales’ near extinction and ferns’ proliferation across the TJB.
This is an Open Access paper, so you can read it for free. | <urn:uuid:11b13f17-d1c2-448a-859a-55b5343edd57> | CC-MAIN-2022-40 | https://botany.one/2017/07/photosynthetic-plasticity-survival-across-triassic-jurassic-boundary/ | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335469.40/warc/CC-MAIN-20220930113830-20220930143830-00294.warc.gz | en | 0.860799 | 200 | 2.640625 | 3 |
In seven interactive virtual labs, identify bacteria, examine heart patients, probe the nervous system, assay antibodies, study circadian rhythms, and analyze evolution in action in stickleback fish.
- Lizard Evolution Virtual Lab: Explore the evolution of the anole lizards in the Caribbean.
- Bacterial ID Lab: Use DNA sequencing techniques to identify deadly pathogens.
- Cardiology Lab: Diagnose heritable heart diseases.
- Neurophysiology Lab: Dissect a leech to explore its sensory system.
- Immunology Lab: Use human antibodies to help diagnose disease.
- Transgenic Fly Lab: Create a transgenic fly to study circadian rhythms.
- Stickleback Evolution Lab: Collect and analyze data on stickleback fish and fossil specimens.
Includes detailed procedures, lab notebooks, videos, quizzes, glossaries, and background resources.
- 2002 Pirelli Award; First Prize and Top Pirelli Prize | <urn:uuid:7bdc0f74-d38d-40ee-984e-f3f5b3668c3e> | CC-MAIN-2018-26 | https://www.hhmi.org/order-materials/virtual-labs/virtual-lab-series | s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267864822.44/warc/CC-MAIN-20180622220911-20180623000911-00456.warc.gz | en | 0.668199 | 191 | 2.828125 | 3 |
Purpose of review: Stem cells are an important tool for the study of ex-vivo models of megakaryopoiesis and the production of functional platelets. In this manuscript, we review the optimization of megakaryocyte and platelet differentiation and discuss the mechanistic studies and disease models that have incorporated stem cell technologies.
Recent findings: Mechanisms of cytoskeletal regulation and signal transduction have revealed insights into hierarchical dynamics of hematopoiesis, highlighting the close relationship between hematopoietic stem cells and cells of the megakaryocyte lineage. Platelet disorders have been successfully modeled and genetically corrected, and differentiation strategies have been optimized to the extent that utilizing stem cell-derived platelets for cellular therapy is feasible.
Summary: Studies that utilize stem cells for the efficient derivation of megakaryocytes and platelets have played a role in uncovering novel molecular mechanisms of megakaryopoiesis, modeling and correcting relevant diseases, and differentiating platelets that are functional and scalable for translation into the clinic. Efforts to derive megakaryocytes and platelets from pluripotent stem cells foster the opportunity of a revolutionary cellular therapy for the treatment of multiple platelet-associated diseases. | <urn:uuid:8db8847e-d192-43c7-9666-7b38e9d51214> | CC-MAIN-2015-48 | http://journals.lww.com/co-hematology/Abstract/2014/09000/Stem_cells,_megakaryocytes,_and_platelets.10.aspx | s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398460519.28/warc/CC-MAIN-20151124205420-00200-ip-10-71-132-137.ec2.internal.warc.gz | en | 0.896894 | 255 | 2.53125 | 3 |
A/P is the designation used for an artist proof, or artist's proof. These prints are made prior to the limited edition series. A/P is usually printed in the lower left hand corner and may be followed by a number.
Before modern printing practices, lithographers hand-pulled prints and the printing plate degraded with each subsequent print. The first productions were considered the most desirable, as they were the finest quality. These were usually given to the artist as payment for signing the lithographs, which would then be sold by the publisher.
Current technology ensures all prints are of the same quality; therefore an artist's proof, or a later number in a limited edition series are identical. This applies to prints of artwork and fine art photography. Artist's proofs are not technically necessary today, however the tradition still holds.
Hand-pulled artist's proofs are considered more valuable than the limited edition series, because they are of better quality and are the property of the artist. Artist's proofs usually number 10 per cent or less of an edition and this also increases the value. The art world is polarised as to whether photographic artist proofs or those made with modern printing techniques are as valuable. Collectors should look for proofs signed by the artist.
Future Plus has been designed to ensure we provide a broad and balanced curriculum for all students as required by the Department for Education. Alongside all their GCSE qualifications all students need key skills in order to help them be ready for adult life. The Future Plus programme covers some of the key elements that are not covered in other qualifications and forms part of our PSHE (personal, social, health and education) programme. In the Future Plus lessons, over Years 9 to 11, students will learn about various topics such as personal finance and how to manage their finances, how to use various software packages, how be safe online and how to have a greater understanding of their own health and fitness.
All students will experience elements of financial literacy, digital literacy and healthy living through the study of these subjects over the course of the 3 years. The details of the courses that are being studied in the Future Plus programme are highlighted below.
This course introduces the student to the impact of finance on the economy and encourages them to consider how this can affect businesses and individuals. Over the three years (Year 9 to 11) students develop their knowledge and a valuable range of applied and transferable skills. This could lead to a qualification equivalent to a full GCSE and also provides a foundation for further study in business- and finance-related disciplines.
Being fit and healthy is important for a student's well-being. In the healthy living course students will cover topics such as fitness, nutrition and healthy lifestyles. As some of these topics form part of the Sport/Dance BTEC course, this provides an opportunity for students to gain a qualification in Sport or Dance over the course of the three years (Year 9 to Year 11) if they wish to do so.
Today, the advancement of technology has permeated every aspect of our lives. Employers expect their workforce to have the skills needed to live, work, and thrive in a digital society. So, when preparing pupils for the world of work, digital literacy is essential. In today's workplace all workers need the ability to use various software packages such as Microsoft Word, Excel and PowerPoint, as well as to understand how networks and file systems are organised. These skills will be developed alongside the financial literacy and healthy living parts of the Future Plus programme. All students will acquire key skills and cover elements of digital literacy while completing the healthy living/fitness and financial literacy work, as they use different software packages.
Digital literacy is also a key part of our PSHE programme during tutor times since we teach the students about the awareness of the necessary standards of behaviour expected in online environments, and develop their understanding of the shared social issues created by digital technologies. | <urn:uuid:5c9ebfa0-4873-43bc-94b4-042a9f1cb688> | CC-MAIN-2019-09 | http://www.falmouth.cornwall.sch.uk/259/future-plus | s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247479627.17/warc/CC-MAIN-20190215224408-20190216010408-00020.warc.gz | en | 0.949015 | 555 | 2.78125 | 3 |
TRINE is an expert at Trenchless construction technology. Our Trenching capabilites allow our subsurface construction work to require small trenches or eliminates the need for continuous trenches. TRINE defines Trenchless as a family of methods, materials, and equipment capable of being used for the installation of new or replacement or rehabilitation of existing underground infrastructure with minimal disruption to surface traffic, business, and other activities.
TRINE Trenchless construction includes such construction methods as tunneling, microtunneling (MTM), horizontal directional drilling (HDD) also known as directional boring, pipe ramming (PR), pipe jacking (PJ), moling, horizontal auger boring (HAB) and other methods for the installation of pipelines and cables below the ground with minimal excavation. The difference between trenchless and other subsurface construction techniques depends upon the size of the passage under construction, and must consider soil characteristics and the loads applied to the surface. In cases where the soil is sandy, the water table is at shallow depth, or heavy loads like that of urban traffic are expected, the depth of excavation has to be such that the pressure of the load on the surface does not affect the bore, otherwise there is danger of surface caving in.
- Structure Lining
- Auger Boring and Jacking
- Micro Tunneling
- Hand Mining
- Vertical Boring
- Ductile Iron Pipe Directional Drilling
- PVC Pipe Directional Drilling
- Copper Tubing Directional Drilling
- Fiber Optic Cable Directional Drilling
- HDPE Pipe Directional Drilling
- Electrical Unit Duct Directional Drilling | <urn:uuid:2799d426-65e4-447c-8a12-13e9efbeb936> | CC-MAIN-2017-47 | https://trineconstruction.com/trenchless-technologies | s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934806856.86/warc/CC-MAIN-20171123180631-20171123200631-00761.warc.gz | en | 0.891544 | 341 | 2.5625 | 3 |
In 2003, the Trusted Computing Group (TCG) was formed by Advanced Micro Devices Inc., Hewlett-Packard Co., IBM, Intel Corp. and Microsoft. The group’s original goal was the development of a Trusted Platform Module (TPM), an integrated circuit that conforms to the trusted platform module specification put forward by the TCG. Throughout the years, this chip has found its way to servers, laptops and desktops. Still, some administrators and computer users are unsure of how this technology can benefit or hurt them. Below are a few tips to help administrators and users better assess trusted computing technology.
By submitting your personal information, you agree that TechTarget and its partners may contact you regarding relevant content, products and special offers.
The benefits of trusted computing
According to the Trusted Computing Group's website, the TPM chip was built for security, privacy, interoperability, portability, control, and ease-of-use. Simply put, it aims to max out performance without letting anything fall by the wayside.
Developers designed a chip that would assure the integrity of a platform. Together with the BIOS, the TPM created what is known as the Root of Trust. The TPM contains several Platform Configuration Registers (PCRs) that allow secure storage and reporting of security-relevant data (unauthorized changes to the BIOS, possible root-based modifications, boot-sector changes, etc). This data can be used to detect changes to previous configurations and derive decisions on how to proceed. Microsoft's Bit Locker Drive Encryption functions in this way.
The concerns over trusted computing
Despite the numerous principles regarding the security of trusted computing, the design has raised some concerns over functionality and privacy. In practice, trusted computing uses cryptography to help enforce a selected behavior. The main functionality of trusted computing is to allow someone else to verify that only authorized code runs on a system. Remember, used alone, trusted computing does not protect against attacks that exploit security vulnerabilities introduced by programming bugs.
The problem arises with the core function of the chip. With trusted computing, it is technically possible not just to secure the hardware for its owner, but also to secure it against its owner.
Other similar concerns include the abuse of remote validation of software. In this scenario, the manufacturer—and not the user who owns the computer system—decides what software would be allowed to run. The secondary concern here is that user action in these situations may be recorded in a proprietary database without the user actually knowing. With this happening, user privacy becomes and issue as well as possibly creating a security compliance conflict.
TPM in server technology
Many large sever vendors sell TPM-ready machines. Still, the same cautions as above must be taken when deciding to use a TPM. These same large vendors will go on to warn their customers that the TPM is a customer-configured option. Server makers like HP take their own cautions with the chip, saying that they will not configure the TPM as part of any pre-installation process. HP even makes the point of mentioning that it isn't responsible for TPM locking users out. HP recommends backing up keys and server data before using TPM on a server level.
BitLocker, for example, will lock access if an error is made during a wide variety of procedures. HP lists "updating the system or option firmware, replacing the system board, replacing a hard drive, or modifying OS application TPM settings." In other words, there's a lot of room for error.
Caution is strongly advised when deploying the TPM within a server. Make sure there is a viable use case for this technology, as any mistake can be very costly.
Securing your machine
The topic of trusted computing will continue to draw criticism and support. When used as designed, the chip can certainly provide a higher level of machine security. However, abuses and functionality questions highlight the drawbacks to adopting the technology.
Remember, computer security does not have to be chip-reliant. Security best practices can help guide administrators in the right direction if they feel uncomfortable using the TPM chip. Ensuring a system’s BIOS settings are correct, its firmware and software is up to date and constantly monitoring an environment’s security health will keep systems running longer and safer. Each data center is unique and has different requirements. It will be through careful planning and research that an IT administer will be able to come properly secure their infrastructure.
About the author:
Bill Kleyman, MBA, MISM, is an avid technologist with experience in network infrastructure management. His engineering work includes large virtualization deployments as well as business network design and implementation. Currently, he is the Virtualization Architect at MTM Technologies Inc. He previously worked as Director of Technology at World Wide Fittings Inc. | <urn:uuid:eaa555d1-d48b-4044-b77c-772982bd2617> | CC-MAIN-2017-30 | http://searchitoperations.techtarget.com/tip/Weighing-the-pros-and-cons-of-the-Trusted-Computing-Platform | s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549423764.11/warc/CC-MAIN-20170721082219-20170721102219-00141.warc.gz | en | 0.944121 | 992 | 3.125 | 3 |
This article does not cite any sources. (December 2006) (Learn how and when to remove this template message)
This is not the amount of power that a radio station reports as its power, as in "we're 100,000 watts of rock 'n' roll", which is usually the effective radiated power (ERP). The TPO for VHF-/UHF-transmitters is normally more than the ERP, for LF-/MF-transmitters it has nearly the same value, while for VLF-transmitters it may be less.
The radio antenna's design "focuses" the signal toward the horizon, creating gain and increasing the ERP. There is also some loss (negative gain) from the feedline, which reduces some of the TPO to the antenna by both resistance and by radiating a small part of the signal.
The basic equation relating transmitter to effective power is:
Note that in this formula the Antenna Gain is expressed with reference to a tuned dipole (dBd)
|This article about wireless technology is a stub. You can help popflock.com resource by .| | <urn:uuid:4e00ab8e-127f-4e86-9bd6-c7bf871793ad> | CC-MAIN-2018-30 | http://www.popflock.com/learn?s=Transmitter_power_output | s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676590199.42/warc/CC-MAIN-20180718135047-20180718155047-00578.warc.gz | en | 0.94733 | 240 | 2.78125 | 3 |
A healthy diet and diabetes
Healthy eating for people with diabetes is important because it can help:
- Maintain blood glucose control and reduce the risk of complications
- Reduce the risk of cardiovascular disease and the tissue damage associated with high blood glucose levels
- Support management of body weight
- Maintain quality of life
A healthy diet should include a wide variety of foods, not too many fatty and sugary foods, not too much salt and plenty of fibre-rich foods including fruit and vegetables.
Top tips for healthy eating
- Eat 5 or more portions of fruit and vegetables a day
- Reduce fat, especially saturated (animal) fat
- Reduce salt intake – the most effective way of doing this is to cut out as many processed foods as possible
- Increase intake of omega 3 oils – try eating at least two servings of oily fish per week
- Reduce alcohol intake
The above is an extract from the DRWF Patient Information Leaflet A Healthy Diet and Diabetes v8.0, published August 2018 (reviewed within an 18 month period).
The benefits of exercise
People who exercise have lower blood pressure, lower heart rates and improved circulation. They also have lower cholesterol and less body fat, as well as higher rates of metabolism and consequently better weight control. They sleep better, have more energy, are less stressed/anxious and tend to be happier and more confident.
Why is exercise important for someone with diabetes?
Unlike medication, exercise is low cost and side-effect free. Those with diabetes who don’t exercise are three times more likely to have poor diabetes control and more likely to suffer related complications. Exercising regularly improves sensitivity to a range of metabolic hormones and the body becomes better at transporting glucose. This happens because exercise stimulates the body’s muscles.
Exercise also reduces the level of fat in the body, particularly round about the tummy area. It is thought that it is this mobilisation of the body’s fat stores, by exercising, that might improve blood glucose control. Less glucose in the blood, because it’s now stored in the body’s muscle, means the blood flows better and some of the blood vessel complications, associated with diabetes, may be avoided.
Top tips to get started
- Check with medical personnel that your diabetes is presently stable enough to allow you to begin an exercise routine.
- Start with small bouts of exercise of around 5-10 minutes per day and build up gradually
- Find an exercise partner to provide motivation and accountability
- Choose something you enjoy, as you are more likely to stick at it
- Find out about Healthy Walk schemes or other exercise related events in your area
The above is an extract from the DRWF Patient Information Leaflet Exercise and Diabetes v4.0, published October 2018 (reviewed within an 18 month period).
Diabetes Wellness Events
Sharing experiences with like-minded people is a great way to feel supported in your efforts to attain a healthy balanced lifestyle and manage your diabetes effectively.
Whether you have type 1 or type 2 diabetes, newly diagnosed or ‘old hat’, parent or carer, attending a Diabetes Wellness Event is a great way to meet new friends, share stories of living with diabetes, learn about all aspects of the condition and related health from a host of clinicians and healthcare professionals, in a relaxed and friendly environment.
With 15 years experience of bringing people together through the Diabetes Wellness Event programme, we know that it provides a fabulous support network and something for everyone. | <urn:uuid:f94723d4-a91e-4ce7-a5ce-8231248e11fa> | CC-MAIN-2018-51 | https://www.drwf.org.uk/living-with-diabetes/healthy-living | s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376823442.17/warc/CC-MAIN-20181210191406-20181210212906-00349.warc.gz | en | 0.943569 | 723 | 3.40625 | 3 |
The following LCD screens provide the object target information.
We begin with Confidence. If the camera does not see a predetermined amount of target colored pixels, it assumes target is lost and if it’s above the amount target is acquired, see figure 18.
When target is acquired the program will continue to map the target within its Field of View (FOV).
The next value after Confidence is the horizontal X. The value of X determines whether the target is to the left, right or approximate center, see figure 19.
The next value after the horizontal X is the vertical Y. The value of Y determines whether the target is up, down or approximate center, see figure 20.
The next value after the vertical Y is the number of pixels (size). The number of pixels is used to determine distance from the camera (Z). If the pixels are under a certain value it is assume the target has moved away. If the pixel are above a certain value it is assumed the target is too close. In between these two values the object is in range, see figure 21. | <urn:uuid:81dfe451-b42a-410d-b4f9-1dc7bf0d5c29> | CC-MAIN-2016-26 | http://www.imagesco.com/articles/c-bot/pg8.html | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397428.37/warc/CC-MAIN-20160624154957-00054-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.879715 | 222 | 3.078125 | 3 |
What is shown in the table above is that personal pronouns have person, number, gender and case.The personal pronoun must be of the same number, gender, person, and in the same case as the noun for which it represents. See examples of personal pronouns and do the interactive test. Personal pronouns have: Number: They are singular or plural. Multiple Choice. Gender pronouns signify how someone would like to be referred to with regards to their gender identity. In summary, personal pronouns replace specific nouns. There are two types of personal pronouns: subject and object. If your question above referred to pronoun, it would be singular and would require a singular verb such as does; since your question refers to pronouns, which is plural, it requires a plural verb. A personal pronoun is a short word we use as a simple substitute for the proper name of a person. Grammatical person refers to the perspectives of the personal pronouns used to identify a person in speech and text (e.g. Define personal pronoun: The definition of personal pronoun is a pronoun that refers to a particular person or thing. Personal pronouns may be classified by three categories: person, number, and case. Person. Often, when speaking of a singular human in the third person, these pronouns have a gender implied -- such as “he” to refer to a man/boy or “she” to refer to a woman/girl. First. The pronouns he, she, and them are in the subjective case. Personal pronouns also have gender: he is masculine, she is feminine and it is neuter. The first person is personal (I, we, etc.) Personal pronouns are probably the type of pronoun that you are most familiar with, since they are taught very early on to beginner English learners. Personal Pronouns - Number Personal Pronouns - Gender. Personal pronouns take the place of a noun in a sentence to avoid repetition. Subject pronouns. 
Apart from number and gender, Polish pronouns have “inherited” one … In English, personal pronouns are words like I, you, we, me, he, she, and it. In many ways, Spanish and English personal pronouns are quite similar. When comparatives like than and as are involved, you can usually tell what pronoun to use by finishing the comparison. First-person is the most informal. Personal pronouns are the set of English pronouns that refer to people: I, you, he, his, she, hers, we, us, they, them, etc. 11. 1. tense 2. inflection 3. person 4. modifier Things are going to get a bit more complex now. We have 2 solutions to this problem in English: 1)Use both pronouns in our sentence separated by “or” to make it clear that we are not sure. Definition: The prefix pro means for or in place of.Pronouns stand in for or replace nouns. Pronouns that are the subject of the sentence are called subject pronouns. Personal pronouns are used in place of nouns. For instance, "he" is singular in number, third person, and masculine in gender. We often use them to refer back to people and things that we have already identified. To perfect your usage of personal pronouns, read on! - English Grammar Today - a reference to written and spoken English grammar and usage - Cambridge Dictionary These tables, which we have already seen, show the person and number for the nominative pronouns and for the forms of the verb blepw: If a pronoun and a verb agree in both number and person, then they are said to agree; if they agree, they may betalking about the same person or thing. Person refers to the relationship that an author has with the text that he or she writes, and with the reader of that text. For explanation and examples please refer to ESL Desk Pronouns page, English personal pronouns at Wikipedia, and use Personal Pronouns chart below for assistance. True False. Example: She is faster than I (am fast). English has three persons (first, second, and third). 9. 
Flashcards have arrived at The Free Dictionary. First-person pronouns are used by a speaker or writer to refer to him or herself and are divided by number, possession, and part of the sentence. Personal pronouns are pronouns that refer to specific antecedents. 10. So case is an important determinant of what pronoun should be used. You'll see how this works below in the declension table of personal pronouns. Remember, they are declined according to case, gender and number. This means it is not clear which personal pronoun to use in the third person (masculine or feminine). Pronouns: personal ( I, me, you, him, it, they, etc.) Also note that all the personal pronouns except you have distinct forms indicating number, either singular or plural. Learn English Grammar - Personal Pronouns. Personal pronouns are always specific and are often used to replace a proper noun (someone’s name) or a collective group of people or things. You - You are my best student. Grammatical case of Polish personal pronouns. Hisself and theirselves are third person personal pronouns. The following are personal pronouns: I, me, you, he, him, she, her, it, we, us, they, and them. All nouns and pronouns have number. It can be used to take the place of a person, an animal, a thing, or a place, in order to avoid stating the same noun over and over again in the same text. Personal pronouns also have gender. A person could be a man or a woman or both or neither and share any number of these sets of pronouns as the correct ones to use for them, but which set they go by is not necessarily indicative of their gender, even though for most people there is an association between the pronouns … The first type of pronoun in the English language is the personal pronoun. Personal pronouns have case, gender, number and _____. If you’d like to learn more about different types of Spanish pronouns and their usage, read this first. 
This lesson is about personal pronouns, which replace nouns that refer to people or things. | <urn:uuid:7e75b589-3cb1-4be6-8166-bce3b2d4663a> | CC-MAIN-2021-17 | http://satoevyemekleri.com/9mnh2/23363a-personal-pronouns-have-number-person-and-what | s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038066981.0/warc/CC-MAIN-20210416130611-20210416160611-00031.warc.gz | en | 0.956947 | 1,280 | 4.21875 | 4 |
A new coalition of farmers, scientists, poultry companies and government agencies is working on the Delmarva Peninsula to determine the best ways to both manage nutrient pollution in the Chesapeake Bay and keep farmers in business.
The initiative, called the Delmarva Land and Litter Challenge, released its first report in August. In it, the group states that the farms have not made as much progress in cleanup as they or the regulators would like because technology that could reduce pollution is slow to develop and data on how best to employ it is not as accurate as it needs to be.
The group is focusing on two themes: better technologies to transform manure into energy and assistance in incorporating land-applied manure into the soil so that it’s less likely to run off and is more available to crops.
The group would also like a Center of Excellence on the Delmarva Peninsula to bring all of the research on agriculture under one roof. The center would provide its own data and also give farmers “nutrient management support” to help them understand what was happening on their own farms. The hope is that an integrated approach would provide “mass balance” data, telling researchers and farmers the exact nature of the manure surplus, where it is, how it can efficiently be transported elsewhere and where it should go.
At present, various universities on the Shore, including the University of Maryland Center for Agro-Ecology and the University of Maryland-Eastern Shore, are engaged in research projects on phosphorus, poultry manure emissions and ways to reduce runoff. But, Land and Litter Challenge Facilitator Ernie Shea said the data are not available to farmers in a way that they can take it and make changes quickly. The center, he said, would make the information accessible.
The group would also like guidelines for practices like manure storage and transport to be uniform across the watershed. As it stands now, Maryland, Delaware, Virginia and Pennsylvania have different rules. A truck could traverse the states in one day and be required to follow four different sets of rules for manure transport.
Shea said manure-to-energy is an idea whose time has come, but an approach has not yet been decided. Many different entrepreneurs have arrived on Delmarva promising to build the ultimate facility to transform manure into energy. A couple are in the pilot stages, but none are ready to accept large quantities of manure from multiple farmers. Many questions about the right approach remain: Do farmers want small, on-farm facilities, or do they want to pool the manure and have it processed at a central place, like the Perdue AgriCycle facility does in Delaware? And if they do build a large plant like that one, how will they reduce air emissions?
“We’re early on in the deployment of technologies that could provide for an alternative use,” Shea said. “Don’t bet the farm on any one solution set today. When we finally figure this out, it will be an integrated path.”
Shea, who spent 10 years with the Maryland Department of Agriculture and 20 years running the association of soil conservation districts in Washington, has a small consulting firm in the Baltimore suburbs called Solutions for the Land in which he looks at land-based solutions to keep farmers working and conserving land throughout the country. Shea said he modeled the Land and Litter challenge after similar, successful efforts in the western part of the country. Montana, in particular, has a coalition of ranchers, conservationists, educators and government officials working to preserve the Blackfoot watershed, which encompasses 1.5 million acres west of the Continental Divide.
Shea said the first report was only the beginning and that the group continues to add partners. The Nature Conservancy and the Chesapeake Bay Foundation have signed on, as well as Perdue, Mountaire and the agencies in the Maryland governor’s Bay cabinet.
The report touched on a common refrain among farmers: They are trying their best, but they are operating with a deficit of information. They know what the Chesapeake Bay model says, Shea explained, but they are not sure how much nutrients are being generated, where pollutants are going and how fast they are arriving. Shea is hoping that, with quarterly meetings and a coordinated push, farmers and integrators will have more of that information. Solutions from the Land, he said, will be working to grow the group’s reach.
“Our role is going to be to facilitate,” Shea said. “It’s not going to be easy. These are, in some cases, strange bedfellows agreeing to work together.
The Delmarva Land and Litter Challenge report was funded by the Keith Campbell Foundation for the Environment. | <urn:uuid:b845ff3d-488d-40de-abc5-76ff56868d1c> | CC-MAIN-2019-09 | https://www.bayjournal.com/article/delmarva_coalition_to_focus_on_helping_reduce_runoff_from_farms | s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247487595.4/warc/CC-MAIN-20190218155520-20190218181520-00390.warc.gz | en | 0.965442 | 970 | 2.765625 | 3 |
DOM is based on an implicit data model, which is similar to but not quite the same as the data models used by other XML technologies such as XPath, the XML Infoset, and SAX. Before we delve too deeply into the nitty-gritty details of the DOM API, it's helpful to have a higher level understanding of just what DOM thinks an XML document is.
According to DOM, an XML document is a tree made up of nodes of several types. The tree has a single root node, and all nodes in this tree except for the root have a single parent node. Furthermore, each node has a list of child nodes. In some cases, this list of children may be empty, in which case the node is called a leaf node.
There can also be nodes that are not part of the tree structure. For example, each attribute node belongs to one element node but is not considered to be a child of that element. Furthermore, nodes can be removed from the tree or created but not inserted in the tree. Thus a full DOM document is composed of the following:
A tree of nodes
Various nodes that are somehow associated with other nodes in the tree but are not themselves part of the tree
A random assortment of disconnected nodes
DOM trees are not red-black trees, binary trees, B-trees, or any other sort of special-purpose trees. From a data-structures point of view, they are just plain-vanilla trees. Recursion works very well on DOM data structures, as it does on any tree. You can use all of the techniques you learned for processing trees in Data Structures 201. Breadth-first search, depth-first search, inorder traversal, preorder traversal, postorder traversal, and so on all work with DOM data structures.
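To make the recursion point concrete, here is a minimal sketch of a preorder traversal over a DOM tree, using the standard JAXP (javax.xml.parsers) and org.w3c.dom APIs. The class and method names (PreorderWalk, preorder, walk) are our own, not part of DOM:

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Node;
import org.w3c.dom.NodeList;

public class PreorderWalk {

    // Parse a small XML string into a DOM Document.
    static Document parse(String xml) throws Exception {
        DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
        factory.setNamespaceAware(true);
        return factory.newDocumentBuilder()
                      .parse(new ByteArrayInputStream(
                          xml.getBytes(StandardCharsets.UTF_8)));
    }

    // Plain preorder recursion: visit the node, then recurse into each child.
    static void walk(Node node, StringBuilder out) {
        out.append(node.getNodeName()).append(' ');
        NodeList children = node.getChildNodes();
        for (int i = 0; i < children.getLength(); i++) {
            walk(children.item(i), out);
        }
    }

    // Returns the space-separated node names of the tree in preorder,
    // or an error message if parsing fails.
    public static String preorder(String xml) {
        try {
            StringBuilder out = new StringBuilder();
            walk(parse(xml), out);
            return out.toString().trim();
        } catch (Exception ex) {
            return "error: " + ex.getMessage();
        }
    }

    public static void main(String[] args) {
        System.out.println(preorder("<a><b>x</b><c/></a>"));
        // prints: #document a b #text c
    }
}
```

Breadth-first search or postorder traversal would differ only in the order of the append and the recursive calls.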
In addition to its tree connections, each node has a local name, a namespace URI, and a prefix; although for several kinds of nodes, these may be null. For example, the local name, namespace URI, and prefix of a comment are always null. Each node also has a node name. For an element or attribute, the node name is the prefixed name. For other named things such as notations or entities, the node name is the name of the thing. For nodes without names, such as text nodes, the node name is the value from the following list that matches the node type:

#document for document nodes

#document-fragment for document fragment nodes

#text for text nodes

#comment for comment nodes

#cdata-section for CDATA section nodes
Finally, each node has a string value. For text-like things such as text nodes and comments, this tends to be the text of the node. For attributes, it's the normalized value of the attribute. For everything else, including elements and documents, the value is null.
DOM divides nodes into twelve types, seven of which can potentially be part of a DOM tree:

Document nodes

Element nodes

Attribute nodes

Text nodes

Comment nodes

Processing instruction nodes

Document type nodes

Document fragment nodes

CDATA section nodes

Entity reference nodes

Entity nodes

Notation nodes
Of these twelve, the first seven are by far the most important; and often a tree built by an XML parser will contain only the first seven.
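In the Java binding, these twelve types correspond to the short constants defined on org.w3c.dom.Node, which getNodeType() returns. As a sketch, this small utility (the class and method names are ours) maps each constant to a readable label:

```java
import org.w3c.dom.Node;

public class NodeTypeName {

    // Map a DOM node-type constant (as returned by getNodeType())
    // to a human-readable name. All twelve types are covered.
    public static String name(short type) {
        switch (type) {
            case Node.DOCUMENT_NODE:               return "document";
            case Node.ELEMENT_NODE:                return "element";
            case Node.ATTRIBUTE_NODE:              return "attribute";
            case Node.TEXT_NODE:                   return "text";
            case Node.COMMENT_NODE:                return "comment";
            case Node.PROCESSING_INSTRUCTION_NODE: return "processing instruction";
            case Node.DOCUMENT_TYPE_NODE:          return "document type";
            case Node.DOCUMENT_FRAGMENT_NODE:      return "document fragment";
            case Node.CDATA_SECTION_NODE:          return "CDATA section";
            case Node.ENTITY_REFERENCE_NODE:       return "entity reference";
            case Node.ENTITY_NODE:                 return "entity";
            case Node.NOTATION_NODE:               return "notation";
            default:                               return "unknown";
        }
    }
}
```

A switch like this is a common way to dispatch on node type when walking a tree, since DOM (Level 2) predates Java enums.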
Each DOM tree has a single root document node. This node has children. Because all documents have exactly one root element, a document node always has exactly one element-node child. If the document has a document type declaration, then it also has one document-type-node child. If the document contains any comments or processing instructions before or after the root element, then these are also child nodes of the document node. The order of all children is maintained. Consider the simple XML-RPC document shown in the following figure.
<?xml version="1.0"?>
<?xml-stylesheet type="text/css" href="xml-rpc.css"?>
<!-- It's unusual to have an xml-stylesheet processing instruction
     in an XML-RPC document but it is legal, unlike SOAP where
     processing instructions are forbidden. -->
<!DOCTYPE methodCall SYSTEM "xml-rpc.dtd">
<methodCall>
  <methodName>getQuote</methodName>
  <params>
    <param>
      <value><string>RHAT</string></value>
    </param>
  </params>
</methodCall>
The document node representing the root of this document has four child nodes in this order:
A processing instruction node for the xml-stylesheet processing instruction
A comment node for the comment
A document type node for the document type declaration
An element node for the root methodCall element
The XML declaration, the DOCTYPE declaration, and the white space between these nodes are not included in the tree. The document type node is available as a separate property of the document node. However, it is not a child and is not included in the list of the document's children. The XML declaration (including the version, standalone, and encoding declarations) and the white space are removed by the parser. They are not part of the model.
Each element node has a name, a local name, a namespace URI (which may be null if the element is not in any namespace), and a prefix (which may also be null). An element node also contains children. For example, consider this value element:

<value><string>RHAT</string></value>

When represented in DOM, it becomes a single element node with the name value. This node has a single element-node child for the string element. The string element in turn has a single text-node child containing the text RHAT.
Or consider this para element:
<db:para xmlns:db="http://www.example.com/"
         xmlns="http://namespaces.cafeconleche.org/">
  Or consider this <markup>para</markup> element:
</db:para>
In DOM it's represented as an element node with the name db:para, the local name para, the prefix db, and the namespace URI http://www.example.com/. It has three children:
A text node containing the text Or consider this
An element node with the name markup, the local name markup, the namespace URI http://namespaces.cafeconleche.org/, and a null prefix
Another text node containing the text element:
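The four name-related properties of an element node are exposed in the Java binding as getNodeName(), getLocalName(), getPrefix(), and getNamespaceURI(). A sketch (the class and method names are ours) that reports them for a document's root element; note that setNamespaceAware(true) must be set explicitly, because JAXP factories are not namespace aware by default:

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Element;

public class NamespaceParts {

    // Returns "nodeName | localName | prefix | namespaceURI" for the
    // root element of the given document, or an error message.
    public static String describeRoot(String xml) {
        try {
            DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
            factory.setNamespaceAware(true);  // off by default!
            Element root = factory.newDocumentBuilder()
                .parse(new ByteArrayInputStream(
                    xml.getBytes(StandardCharsets.UTF_8)))
                .getDocumentElement();
            return root.getNodeName() + " | " + root.getLocalName()
                 + " | " + root.getPrefix() + " | " + root.getNamespaceURI();
        } catch (Exception ex) {
            return "error: " + ex.getMessage();
        }
    }

    public static void main(String[] args) {
        System.out.println(describeRoot(
            "<db:para xmlns:db=\"http://www.example.com/\"/>"));
        // prints: db:para | para | db | http://www.example.com/
    }
}
```

For an element in no namespace at all, both the prefix and the namespace URI come back null.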
White space is included in text nodes, even if it's ignorable. For example, consider this methodCall element:
<methodCall>
  <methodName>getQuote</methodName>
  <params>
    <param>
      <value><string>RHAT</string></value>
    </param>
  </params>
</methodCall>
It is represented as an element node with the name methodCall and five child nodes:
A text node containing only white space
An element node with the name methodName
A text node containing only white space
An element node with the name params
A text node containing only white space
Of course, these element nodes also have their own child nodes.
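This is easy to verify: parsing the methodCall element above and listing the root's children shows the white-space-only text nodes interleaved with the element nodes. A sketch, with class and method names of our own choosing:

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.NodeList;

public class WhitespaceChildren {

    // The pretty-printed methodCall element from the text.
    static final String XML =
          "<methodCall>\n"
        + "  <methodName>getQuote</methodName>\n"
        + "  <params>\n"
        + "    <param>\n"
        + "      <value><string>RHAT</string></value>\n"
        + "    </param>\n"
        + "  </params>\n"
        + "</methodCall>";

    // Returns the node names of the root element's children, space
    // separated; the white space between elements appears as #text nodes.
    public static String rootChildNames() {
        try {
            Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new ByteArrayInputStream(
                    XML.getBytes(StandardCharsets.UTF_8)));
            NodeList kids = doc.getDocumentElement().getChildNodes();
            StringBuilder out = new StringBuilder();
            for (int i = 0; i < kids.getLength(); i++) {
                if (i > 0) out.append(' ');
                out.append(kids.item(i).getNodeName());
            }
            return out.toString();
        } catch (Exception ex) {
            return "error: " + ex.getMessage();
        }
    }

    public static void main(String[] args) {
        System.out.println(rootChildNames());
        // prints: #text methodName #text params #text
    }
}
```

Without a DTD there is no way for the parser to know the white space is ignorable, so all five children are reported.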
In addition to containing element and text nodes, an element node may contain comment and processing instruction nodes. Depending on how the parser behaves, an element node might also contain some CDATA section nodes, entity reference nodes, or both. However, many parsers resolve these automatically into their component text and element nodes, and do not report them separately.
Attribute Nodes

An attribute node has a name, a local name, a prefix, a namespace URI, and a string value. The value is normalized as required by the XML 1.0 specification: entity and character references in the value are resolved, and each white space character is converted to a space. If the attribute has any type other than CDATA, then leading and trailing white space is stripped from its value, and all other runs of white space are converted to a single space. An attribute node also has children, all of which are text and entity reference nodes forming the value of the attribute. However, it's unusual to access these directly instead of through the value.
If a validating parser builds an XML document from a file, then default attributes from the DTD are included in the DOM tree. If the parser supports schemas, then default attributes can be read from the schema as well. DOM does not provide the type of the attribute as specified by the DTD or schema, or the list of values available for an enumerated type attribute. This is a major shortcoming.
Attributes are not considered to be children of the element to which they are attached. Instead they are part of a separate set of nodes. For example, consider this Quantity element:
<Quantity amount="17" />
This element has no children, but it does have a single attribute with the name amount and the value 17.
Attributes that declare namespaces do not receive special treatment in DOM. They are reported by DOM parsers in the same way as any other attribute. Furthermore, DOM always provides the fully qualified names and namespace URIs for all element and attribute nodes.
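The separation between children and attributes is easy to demonstrate with the JDK's DocumentBuilder. The class and helper names below are invented for illustration; the amount attribute is reachable through the attribute map but never through the child list:

```java
import java.io.ByteArrayInputStream;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Attr;
import org.w3c.dom.Element;

public class AttributeNodes {

    // Parse a string of XML and return its root element.
    public static Element parseRoot(String xml) throws Exception {
        return DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new ByteArrayInputStream(xml.getBytes("UTF-8")))
                .getDocumentElement();
    }

    public static void main(String[] args) throws Exception {
        Element quantity = parseRoot("<Quantity amount=\"17\"/>");
        System.out.println(quantity.getChildNodes().getLength()); // 0: no children
        System.out.println(quantity.getAttributes().getLength()); // 1

        Attr amount = quantity.getAttributeNode("amount");
        System.out.println(amount.getName() + "=" + amount.getValue()); // amount=17
    }
}
```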
Only document, element, attribute, entity, and entity reference nodes can have children. The remaining node types are much simpler.
Text Nodes

Text nodes contain character data from the document stored as a String. Characters from outside Unicode's Basic Multilingual Plane are represented as surrogate pairs. Characters like & and < that are represented in the document by predefined entity or character references are replaced by the actual characters they represent. If these nodes are written out again to an XML document, these characters need to be re-escaped.
When a parser reads an XML document to form a DOM Document, it puts as much text as possible into each text node before being interrupted by non-predefined entities, comments, tags, CDATA section delimiters, or other markup. Thus no text node immediately follows any other text node, as there is always an intervening nontext node. However, if a DOM is created or modified in memory, then the client program may divide text between immediately adjacent text nodes. As a result, it's not always guaranteed that each text node contains the maximum possible contiguous run of text, just that this is the case immediately after a document is parsed.
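The following sketch (invented names, using the JDK's DocumentBuilder) shows both halves of that guarantee: parsing yields one maximal text node, Text.splitText() creates adjacent text nodes in memory, and Node.normalize() merges them again:

```java
import java.io.ByteArrayInputStream;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Element;
import org.w3c.dom.Text;

public class AdjacentText {

    // Parse a string of XML and return its root element.
    public static Element parseRoot(String xml) throws Exception {
        return DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new ByteArrayInputStream(xml.getBytes("UTF-8")))
                .getDocumentElement();
    }

    public static void main(String[] args) throws Exception {
        Element value = parseRoot("<string>RHAT</string>");
        System.out.println(value.getChildNodes().getLength()); // 1 after parsing

        Text text = (Text) value.getFirstChild();
        text.splitText(2); // now two adjacent text nodes: "RH" and "AT"
        System.out.println(value.getChildNodes().getLength()); // 2

        value.normalize(); // merges adjacent text nodes back together
        System.out.println(value.getChildNodes().getLength()); // 1
        System.out.println(value.getFirstChild().getNodeValue()); // RHAT
    }
}
```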
Comment Nodes

A comment node has a name (which is always #comment), a string value (the text of the comment), and a parent (the node that contains it). That's all. For example, consider this comment:
<!-- Don't forget to fix this! -->
The value of this node is Don't forget to fix this! The white space at either end is included.
Processing Instruction Nodes
A processing instruction node has a name (the target of the processing instruction), a string value (the data of the processing instruction), and a parent (the node that contains it). That's all. For example, consider this processing instruction:
<?xml-stylesheet type="text/css" href="xml-rpc.css"?>
The name of this node is xml-stylesheet. The value is type="text/css" href="xml-rpc.css". The white space between the target and the data is not included, but the white space between the data and the closing ?> is included. Even if the processing instruction uses a pseudo-attribute format as this one does, it is not considered to have attributes or children. Its data is just a string that happens to have some equal signs and quote marks in suggestive positions.
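As a sketch with the JDK's parser (the class and method names are invented), the target and data of that stylesheet instruction can be read back directly; note that getAttributes() returns null, because a processing instruction has no attribute nodes:

```java
import java.io.ByteArrayInputStream;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.ProcessingInstruction;

public class PiDemo {

    // Return the processing instruction that precedes the root element.
    public static ProcessingInstruction parsePi(String xml) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new ByteArrayInputStream(xml.getBytes("UTF-8")));
        // The PI comes before the root element, so it is the document's first child.
        return (ProcessingInstruction) doc.getChildNodes().item(0);
    }

    public static void main(String[] args) throws Exception {
        ProcessingInstruction pi =
            parsePi("<?xml-stylesheet type=\"text/css\" href=\"xml-rpc.css\"?><methodCall/>");
        System.out.println(pi.getTarget()); // xml-stylesheet
        System.out.println(pi.getData());   // type="text/css" href="xml-rpc.css"
        System.out.println(pi.getAttributes()); // null: PIs have no attribute nodes
    }
}
```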
CDATA Section Nodes
A CDATA section node is a special text node that represents the contents of a CDATA section. Its name is #cdata-section. Its value is the text content of the section. For example, consider this CDATA section:
<![CDATA[<?xml-stylesheet type="text/css" href="xml-rpc.css"?>]]>
Its name is #cdata-section and its value is <?xml-stylesheet type="text/css" href="xml-rpc.css"?>.
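Whether such nodes survive parsing depends on the JAXP factory's coalescing switch, which is off by default; the sketch below (invented names) therefore sees a CDATA section node. Setting setCoalescing(true) would merge it into a plain text node instead:

```java
import java.io.ByteArrayInputStream;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Node;

public class CdataDemo {

    // Parse a string of XML and return the root element's first child.
    public static Node firstChild(String xml) throws Exception {
        DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
        factory.setCoalescing(false); // the default; true would fold CDATA into text
        return factory.newDocumentBuilder()
                      .parse(new ByteArrayInputStream(xml.getBytes("UTF-8")))
                      .getDocumentElement()
                      .getFirstChild();
    }

    public static void main(String[] args) throws Exception {
        Node node = firstChild(
            "<example><![CDATA[<?xml-stylesheet type=\"text/css\" href=\"xml-rpc.css\"?>]]></example>");
        System.out.println(node.getNodeType() == Node.CDATA_SECTION_NODE); // true
        System.out.println(node.getNodeName());  // #cdata-section
        System.out.println(node.getNodeValue()); // the literal section contents
    }
}
```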
Entity Reference Nodes
When a parser encounters a general entity reference such as &AElig; or &copyright_notice;, it may or may not replace it with the entity's replacement text. Validating parsers always replace entity references. Nonvalidating parsers may do so at their option.
If a parser does not replace entity references, then the DOM tree will include entity reference nodes. Each entity reference node has a name, and if the parser has read the DTD, then you should be able to look up the public and system IDs for this entity reference using the map of entity nodes available on the document type node. Furthermore, the child list of the entity will contain the replacement text for this entity reference. However, if the parser has not read the DTD and resolved external entity references, then the child list may be empty.
If a parser does replace entity references, then the DOM tree may or may not include entity reference nodes. Some parsers resolve all entity reference nodes completely and leave no trace of them in the parsed tree. Other parsers instead include entity reference nodes in the DOM tree that have a list of children. The child list contains text nodes, element nodes, comment nodes, and so forth, representing the replacement text of the entity.
For example, suppose an XML document contains this element:
<para>&AElig;lfred is a very nice XML parser.</para>
If the parser is not resolving entity references, then the para element node contains two children—an entity reference node with the name AElig and a text node containing the text "lfred is a very nice XML parser." The AElig entity reference node will not have any children.
Now suppose the parser is resolving entity references, and the replacement text for the AElig entity reference is the single ligature character Æ. Now the parser has a choice: It can represent the children of the para element as a single text node containing the full sentence, "Ælfred is a very nice XML parser." Alternately, it can represent the children of the para element as an entity reference node with the name AElig followed by a text node containing the text, "lfred is a very nice XML parser." If it chooses the second option, then the AElig entity reference node contains a single read-only text-node child containing the single ligature character Æ.
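A sketch with the JDK parser (invented names): with the JAXP default, expandEntityReferences(true), the reference is resolved into text, and getTextContent() returns the merged string either way. Passing false asks the parser to report entity reference nodes instead, though support for that depends on the underlying implementation:

```java
import java.io.ByteArrayInputStream;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Element;
import org.w3c.dom.Node;

public class EntityRefDemo {

    // Parse a string of XML, optionally expanding general entity references.
    public static Element parseRoot(String xml, boolean expand) throws Exception {
        DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
        factory.setExpandEntityReferences(expand);
        return factory.newDocumentBuilder()
                      .parse(new ByteArrayInputStream(xml.getBytes("UTF-8")))
                      .getDocumentElement();
    }

    public static void main(String[] args) throws Exception {
        String xml = "<!DOCTYPE para [<!ENTITY AElig \"\u00C6\">]>"
                   + "<para>&AElig;lfred is a very nice XML parser.</para>";

        Element expanded = parseRoot(xml, true);
        System.out.println(expanded.getFirstChild().getNodeType() == Node.TEXT_NODE); // true
        System.out.println(expanded.getTextContent()); // Ælfred is a very nice XML parser.

        Element notExpanded = parseRoot(xml, false);
        // On parsers that honor the setting, the first child is an entity
        // reference node named AElig rather than a text node.
        System.out.println(notExpanded.getFirstChild().getNodeName());
    }
}
```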
DOM never includes entity reference nodes for the five predefined entity references: &amp;, &lt;, &gt;, &apos;, and &quot;. These are simply replaced by their respective characters and included in a text node. Similarly, character references are not specially represented in DOM as any kind of node. The characters they represent are simply added to the relevant text node.
Document Type Nodes
A document type node has a name (the name the document type declaration specifies for the root element), a public ID (which may be null), a system ID (required), an internal DTD subset (which may be null), a parent (the document that contains it), and lists of the notations and general entities declared in the DTD. The value of a document type node is always null. For example, consider this document type declaration:
<!DOCTYPE mml:math PUBLIC "-//W3C//DTD MathML 2.0//EN"
  "http://www.w3.org/TR/MathML2/dtd/mathml2.dtd" [
  <!ENTITY % MATHML.prefixed "INCLUDE">
  <!ENTITY % MATHML.prefix "mml">
]>
The name of the corresponding node is mml:math. The public ID is -//W3C//DTD MathML 2.0//EN. The system ID is http://www.w3.org/TR/MathML2/dtd/mathml2.dtd. The internal DTD subset is the complete text between [ and ].
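Because fetching the MathML DTD would require network access, the sketch below (invented names, JDK DocumentBuilder) uses a document type declaration with only an internal subset; for the MathML example above, the same accessors would return the public and system IDs as well:

```java
import java.io.ByteArrayInputStream;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.DocumentType;

public class DoctypeDemo {

    // Parse a string of XML and return its document type node.
    public static DocumentType parseDoctype(String xml) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new ByteArrayInputStream(xml.getBytes("UTF-8")));
        return doc.getDoctype();
    }

    public static void main(String[] args) throws Exception {
        String xml = "<!DOCTYPE mml:math [<!ENTITY % MATHML.prefix \"mml\">]>"
                   + "<mml:math xmlns:mml=\"http://www.w3.org/1998/Math/MathML\"/>";
        DocumentType doctype = parseDoctype(xml);
        System.out.println(doctype.getName());     // mml:math
        System.out.println(doctype.getPublicId()); // null: none declared here
        System.out.println(doctype.getSystemId()); // null: none declared here
        System.out.println(doctype.getInternalSubset()); // the text between [ and ]
    }
}
```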
There are four kinds of DOM nodes that are part of the document but not the document's tree: attribute nodes, entity nodes, notation nodes, and document fragment nodes. You've already seen that attribute nodes are attached to element nodes but are not children of those nodes. Entity and notation nodes are available as special properties of the document type node. Document fragment nodes are used only when building DOM trees in memory, not when reading them from a parsed file.
Entity Nodes

Entity nodes (not to be confused with entity reference nodes) represent the parsed and unparsed entities declared in the document's DTD. If the parser reads the DTD, then it will attach a map of entity nodes to the document type node. Because this map is indexed by the entity names, you can use it to match entity reference nodes to entity nodes.
Each entity node has a name and a system ID. It can also have a public ID if one was used in the DTD. Furthermore, if the parser reads the entity, then the entity node has a list of children containing the replacement text of the entity. However, these children are read-only and cannot be modified, unlike children of similar type elsewhere in the document. For example, suppose the following entity declaration appeared in the document's DTD:
<!ENTITY AElig "Æ">
If the parser read the DTD, then it would create an entity node with the name AElig. This node would have a null public and system ID (because the entity would be purely internal) and one child, a read-only text node containing the single character Æ.
For another example, suppose this entity declaration appeared in the document's DTD:
<!ENTITY Copyright SYSTEM "copyright.xml">
If the parser read the DTD, then it would create an entity node with the name Copyright, the system ID copyright.xml, and a null public ID. The children of this node would depend on what was found at the relative URL copyright.xml. Suppose that document contained the following content:
<copyright>
  <year>2002</year>
  <person>Elliotte Rusty Harold</person>
</copyright>
Then the child list of the Copyright entity node would contain a single read-only element child with the name copyright. The element would contain its own read-only element and text node children.
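A sketch of the entity-map lookup using the JDK parser (class and method names invented). Whether the replacement-text children are populated depends on the parser, so the child access below is guarded:

```java
import java.io.ByteArrayInputStream;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.DocumentType;
import org.w3c.dom.Entity;

public class EntityMapDemo {

    // Look up an entity declared in the DTD by name.
    public static Entity lookup(String xml, String name) throws Exception {
        DocumentType doctype = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new ByteArrayInputStream(xml.getBytes("UTF-8")))
                .getDoctype();
        return (Entity) doctype.getEntities().getNamedItem(name);
    }

    public static void main(String[] args) throws Exception {
        String xml = "<!DOCTYPE doc [<!ENTITY AElig \"\u00C6\">]><doc>&AElig;</doc>";
        Entity entity = lookup(xml, "AElig");
        System.out.println(entity.getNodeName()); // AElig
        System.out.println(entity.getPublicId()); // null: purely internal entity
        System.out.println(entity.getSystemId()); // null
        // If the parser populates the replacement text, it appears as
        // read-only children of the entity node.
        if (entity.getFirstChild() != null) {
            System.out.println(entity.getFirstChild().getNodeValue());
        }
    }
}
```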
Notation Nodes

Notation nodes represent the notations declared in the document's DTD. If the parser reads the DTD, then it will attach a map of notation nodes to the document type node. This map is indexed by the notation name. You can use it to look up the notation for each entity node that corresponds to an unparsed entity, or the notations associated with particular processing instruction targets.
In addition to its name, each notation node has a public ID or a system ID, whichever was used to declare it in the DTD. Notation nodes do not have any children. For example, suppose this notation declaration for PNG images was included in the DTD:
<!NOTATION PNG SYSTEM "http://www.w3.org/TR/REC-png">
This would produce a notation node with the name PNG and the system ID http://www.w3.org/TR/REC-png. The public ID would be null.
For another example, suppose this notation declaration for TeX documents was included in the DTD:
<!NOTATION TEX PUBLIC "+//ISBN 0-201-13448-9::Knuth//NOTATION The TeXbook//EN">
This would produce a notation node with the name TEX and the public ID +//ISBN 0-201-13448-9::Knuth//NOTATION The TeXbook//EN. The system ID would be null. (XML doesn't allow notations to have both public and system IDs.)
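The notation map works the same way as the entity map. In this sketch (invented names), the PNG notation is declared in an internal DTD subset so that no external DTD is needed:

```java
import java.io.ByteArrayInputStream;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.DocumentType;
import org.w3c.dom.Notation;

public class NotationDemo {

    // Look up a notation declared in the DTD by name.
    public static Notation lookup(String xml, String name) throws Exception {
        DocumentType doctype = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new ByteArrayInputStream(xml.getBytes("UTF-8")))
                .getDoctype();
        return (Notation) doctype.getNotations().getNamedItem(name);
    }

    public static void main(String[] args) throws Exception {
        String xml = "<!DOCTYPE doc ["
                   + "<!NOTATION PNG SYSTEM \"http://www.w3.org/TR/REC-png\">"
                   + "]><doc/>";
        Notation png = lookup(xml, "PNG");
        System.out.println(png.getNodeName());  // PNG
        System.out.println(png.getSystemId()); // http://www.w3.org/TR/REC-png
        System.out.println(png.getPublicId()); // null
    }
}
```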
Document Fragment Nodes
The document fragment node is an alternative root node for a DOM tree. It can contain anything an element can contain (for example, element nodes, text nodes, processing instruction nodes, comment nodes, and so on). Although a parser will never produce such a node, your own programs may create one when extracting part of an XML document in order to move it elsewhere.
In DOM, the nonroot nodes never exist alone. That is, there's never a text node or an element node or a comment node that's not part of a document or a document fragment. They may be temporarily disconnected from the main tree, but they always know which document or fragment they belong to. The document fragment node enables you to work with pieces of a document that are composed of more than one node.
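A sketch of that move semantics (invented names): appending a document fragment to an element transfers the fragment's children into the element in one operation, leaving the fragment empty:

```java
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.DocumentFragment;
import org.w3c.dom.Element;

public class FragmentDemo {

    // Build a params element by assembling its children in a fragment first.
    public static Element buildWithFragment() throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .newDocument();
        Element root = (Element) doc.appendChild(doc.createElement("params"));

        DocumentFragment fragment = doc.createDocumentFragment();
        fragment.appendChild(doc.createElement("param"));
        fragment.appendChild(doc.createElement("param"));

        root.appendChild(fragment); // the fragment's children move into root
        return root;
    }

    public static void main(String[] args) throws Exception {
        Element root = buildWithFragment();
        System.out.println(root.getChildNodes().getLength());   // 2
        System.out.println(root.getFirstChild().getNodeName()); // param
    }
}
```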
What Is and Isn't in the Tree
The table below summarizes the DOM data model with the name, value, parent, and possible children for each kind of node. One thing to keep in mind is the parts of the XML document that are not exposed in this data model:
The XML declaration, including the version, standalone declaration, and encoding declaration. These will be added as properties of the document node in DOM3, but current parsers do not provide them.
Most information from the DTD and/or schema, including element and attribute types and content models.
Any white space outside the root element.
Whether or not each character was provided by a character reference. Parsers may provide information about entity references but are not required to do so.
| Node type | Name | Value | Parent | Possible children |
|---|---|---|---|---|
| Document | #document | Null | Null | Comment, processing instruction, one element, zero or one document type |
| Document type | Root element name specified by the DOCTYPE declaration | Null | Document | None |
| Element | Prefixed name | Null | Element, document, or document fragment | Comment, processing instruction, text, element, entity reference, CDATA section |
| Text | #text | Text of the node | Element, attribute, entity, or entity reference | None |
| Attribute | Prefixed name | Normalized attribute value | Element | Text, entity reference |
| Comment | #comment | Text of the comment | Element, document, or document fragment | None |
| Processing instruction | Target | Data | Element, document, or document fragment | None |
| Entity reference | Name | Null | Element or document fragment | Comment, processing instruction, text, element, entity reference, CDATA section |
| Entity | Entity name | Null | Null | Comment, processing instruction, text, element, entity reference, CDATA section |
| CDATA section | #cdata-section | Text of the section | Element, entity, or entity reference | None |
| Document fragment | #document-fragment | Null | Null | Comment, processing instruction, text, element, entity reference, CDATA section |
A DOM program cannot manipulate any of these constructs. It cannot, for example, read in an XML document and then write it out again in the same encoding as in the original document, because it doesn't know what encoding the original document used. It cannot treat the character reference &#x24;var differently from the literal $var, because it doesn't know which was originally written.
Essential oils have been used for nearly 6,000 years. The ancient Chinese, Indians, Egyptians, Greeks, and Romans used them in many cosmetics and perfumes to enhance mental wellbeing.
There is a deep connection between our sense of smell and our emotions.
When you smell an essential oil, the aroma enters through the cilia (the fine hairs lining the nose) and triggers the olfactory nerve to send signals to the limbic system, the part of the brain that is the storehouse of moods, emotions, memory, and learning.
The ability to smell provides pleasurable sensations, evokes memories of the past and has the ability to create a healing effect both mentally and emotionally when inhaled.
Aromas have a way of elevating your mood and your spirits. By using Pure Euphoria's beautiful range of polishes, you can have healing at your fingertips.
Creating Pure Essential Oils:
Essential oils are concentrated extracts of herbs, bark, leaves and flowers.
Through the distillation process, raw plant material (flowers, leaves, wood, bark, roots, seeds, or peel) is put into an alembic (distillation apparatus) over water. As the water is heated, the steam passes through the plant material, vaporizing its volatile compounds. The vapours flow through a coil, where they condense back to liquid, which is then collected.
An orthotic device (commonly referred to as an orthosis) is an external device applied to the body to limit motion, assist in correcting deformity, reduce axial loading, or improve the function of a body segment affected by a muscular, skeletal, or neurological disorder.
(Courtesy of Ottobock HealthCare)
In some instances it may be necessary to design and fabricate a custom orthosis for a specific biomechanical deficiency. In other circumstances the orthotist may be able to utilize an off-the-shelf orthosis to achieve the desired result. An orthosis may be worn temporarily until a weakness or injury is overcome, or on a long-term basis to treat a chronic muscle, skeletal, or neurological condition
At Leimkuehler Orthotic and Prosthetic Centers, we are committed to provide you with the highest quality of orthotic care in a professional, friendly and caring environment. Our orthotic care services include insole products that are engineered to provide pain relief for aching feet. We are dedicated to improve your quality of life through custom designed, fabricated and fitted orthotic devices. You will receive personal attention from our staff.
Our orthotic services include the following types of orthotic devices:
- Cervical orthotics
- Fracture bracing orthotics
- Pediatrics orthotics
- Lower extremity orthotics
- Upper extremity orthotics
- Spinal orthotics
Please click on one the services to find out more about our orthotic services.
In designing every orthotic device, special attention is paid to a number of features, some of which are listed below:
- Ease of putting on and taking off the device
- Aeration in order to avoid skin issues
There are several indications for recommending the use of orthotic devices, some of which include the following:
- Pain relief
- Spinal immobilization after surgery or after traumatic injury
- Compression fracture management
- Scoliosis management
- Kinesthetic reminder to avoid certain movements
The duration of orthosis use is determined by the individual situation. As a result of successful orthosis use, you may notice decreased pain, increased strength, improved function, improved posture, correction of spinal curve deformity, spinal stability, and healing of ligaments and bones.
Table of contents
About this book
Bleeding during pregnancy is not a rare phenomenon and has been associated with significant maternal and fetal morbidities and even mortality. Although vaginal bleeding occurs mainly during the first trimester, it can appear at any stage of pregnancy and during the post-partum period. This sometimes life-threatening event requires an extensive work-up in order to recognize its cause and establish a rapid and effective therapeutic approach. Bleeding During Pregnancy: A Comprehensive Guide draws on evidence-based data and brings together updated information on all aspects of pregnancy-related bleeding. Chapters were contributed by a multidisciplinary team of international experts, including obstetricians, gynecologists, anesthesiologists, hematologists, oncologists and epidemiologists. Topics covered include: bleeding during early pregnancy (early pregnancy loss, ectopic pregnancy, gestational trophoblastic disease, and cancer of the reproductive tract during pregnancy); bleeding in late pregnancy (preterm delivery, placental abruption, placenta previa, vasa previa and uterine rupture); and post-partum hemorrhage. Intensive care of a patient with excessive bleeding and coagulotherapy during pregnancy or the post-partum period are also discussed. This book is an essential guide for a broad spectrum of clinicians and health care professionals who treat pregnant patients.
Bilingualism and Preaching in Late-Medieval England
Publication Year: 1994
Published by: University of Michigan Press
Chapter 1: Macaronic Literature
Any society or social group in which at least some members are more or less fluent in more than one language tends to produce "texts," both oral and written, that mix languages in one form or another. Thus, when two of the heirs of Charlemagne's empire, after years of civil war, came to an agreement, they confirmed it with oaths spoken in the language...
Chapter 2: Types of Bilingual Sermons
Such sermons as the one quoted in the previous chapter were only part of a larger field of sermons in which two languages--Latin and English, for our purposes--appear side by side. Of the sermons produced in England during the century from 1350 to 1450, a high percentage show some degree of such mixture. The amount of English in them may be...
Chapter 3: The Manuscripts
The forty-three sermons I have classified as fully macaronic are found in thirteen separate manuscripts, which must now be examined in some detail. In this chapter I discuss the company the macaronic sermons keep, the kind of manuscripts that have preserved them, and the occasions, purposes, intended audiences, and authorship or at least affiliation of the...
Chapter 4: Macaronic Sermons
As we turn to fully macaronic sermons in particular and examine features already discussed in chapter 3--such as the makeup of individual manuscripts and the authorship, audience, and occasions of the sermons they preserve--the following pages may at first appear somewhat repetitive, but they will soon yield a more finely-tuned characterization of...
Chapter 5: Macaronic Texture
The sermons studied here are compositions whose basic fabric is in Latin but in which parts appear in English. The latter should not be thought of as foreign elements woven into the basic fabric; the strands of the fabric themselves change here and there from one language to the other. I will now examine this macaronic texture more closely by looking at...
Chapter 6: Bilingualism in Action
Though the rhetorical art of the passage quoted at the end of the preceding chapter does not appear consistently on every page of every macaronic sermon, it is sufficiently widespread to suggest, together with the preachers' use of sophisticated structural devices typical of the scholastic sermon, that these sermons are products of deliberate rhetorical...
Appendix A: Inventories of Manuscripts and Sermons
Appendix B: Sermon S-07, Amore langueo
Appendix C: Sermon O-07, De celo querebant
Appendix D: Sermon W-154, Quem teipsum facis
Appendix E: Statistical Table
Page Count: 376
Sounds a lot like debacle to us.
The De Baca nuclear test was part of Operation Hardtack II, a series of thirty-seven Nevada Test Ground blasts squeezed into seven weeks in order to beat a looming deadline—the beginning of a U.S./U.S.S.R. nuclear moratorium. The test ban failed when the Soviets began testing again three years later, a political crisis precipitating that failure, specifically a showdown concerning the status of East Berlin. The test ban would have failed anyway, though, as all test bans have failed, and all future test bans will fail, because nuclear weapons are seen by weak nations as the ultimate defense against invasion by stronger nations. And of course, they’re right. Since only the year 2000, nuclear-armed nations have invaded non-nuclear nations nine times. Conversely, since the dawn of the nuclear era in 1945, a period comprising nearly seventy military encroachments, no nuclear nation has had its mainland invaded. The De Baca test occurred today in 1958.
These weapons have the power to kill every human on the planet. High five!
Back during the days of aboveground nuclear testing, particularly during the Korean War, the U.S. government wanted to be sure troops could operate under threat of nuclear attack. A field exercise known as Desert Rock IV was conducted at the Nevada Test Site during some of the detonations comprising the nuclear test series codenamed Operation Tumbler-Snapper. Thousands of soldiers conducted maneuvers as the blasts occurred, and were exposed to radiation, though the levels were said to be low. This particular photo is from the 20-kiloton airburst codenamed Dog, and shows two soldiers pretending to touch the bomb’s debris cloud. An aerial photo of the blast appears below. That was today in 1952.
I'm a very special pot, it’s true. Here’s an example of what I can do
Above, a photo of the American nuclear test codenamed Wasp, part of Operation Teapot, detonated at the Nevada Test Site today in 1955
Awfully sorry to burst your balloon.
Above is an image of a downed blimp, or barrage balloon, that was floated above the Nevada Test Site to measure the effects of the pressure wave from a nuclear blast. The test was a nineteen kiloton detonation codenamed Stokes, part of the series Operation Plumbbob, and was set off about five miles away from the blimp. That was today in 1957.
Above, a photograph of the superheated debris cloud of the American nuclear test codenamed Climax, part of the series Upshot-Knothole, detonated at the Nevada Test Site today in 1953.
Tell me again, who made the desert bloom?
Photo of the mushroom cloud generated by the American nuclear test Buster Charlie, a fourteen kiloton shot conducted at the Nevada Test Site, today, 1951.
Light of a clear blue morning.
Photo of the nuclear test codenamed Easy, part of the series Operation Ranger, detonated at Frenchman Flat, Nevada Test Site, February 1, 1951. This was the first nuclear blast shown on television—a news program secretly focused a camera on the desert from the top of a Las Vegas hotel and was able to broadcast a distant flash.
At least it's a dry heat.
Photo of the American nuclear test codenamed Fizeau, part of a series of tests named Plumbbob conducted at the Nevada Test Site. This one was fifty-two years ago today.
Red is the rose by yonder garden grows.
Detonation of the nuclear the bomb codenamed Stokes, part of Operation Plumbbob, which consisted of 29 separate tests at the Nevada Test Site, formerly known as The Nevada Proving Ground, August 7, 1957.
The headlines that mattered yesteryear.
1933—The Gestapo Is Formed
The Geheime Staatspolizei, aka Gestapo, the official secret police force of Nazi Germany, is established. It begins under the administration of SS leader Heinrich Himmler in his position as Chief of German Police, but by 1939 is administered by the Reichssicherheitshauptamt, or Reich Main Security Office, and is a feared entity in every corner of Germany and beyond.
1937—Guernica Is Bombed
In Spain during the Spanish Civil War, the Basque town of Guernica is bombed by the German Luftwaffe, resulting in widespread destruction and casualties. The Basque government reports 1,654 people killed, while later research suggests far fewer deaths, but regardless, Guernica is viewed as an example of terror bombing and other countries learn that Nazi Germany is committed to that tactic. The bombing also becomes inspiration for Pablo Picasso, resulting in a protest painting that is not only his most famous work, but one the most important pieces of art ever produced.
1939—Batman Debuts

In Detective Comics #27, DC Comics publishes its second major superhero, Batman, who becomes one of the most popular comic book characters of all time, and then a popular camp television series starring Adam West, and lastly a multi-million dollar movie franchise starring Michael Keaton, then George Clooney, and finally Christian Bale.
1953—Crick and Watson Publish DNA Results
British scientists James D Watson and Francis Crick publish an article detailing their discovery of the existence and structure of deoxyribonucleic acid, or DNA, in Nature magazine. Their findings answer one of the oldest and most fundamental questions of biology, that of how living things reproduce themselves.
1967—First Space Program Casualty Occurs
Soviet cosmonaut Vladimir Komarov dies in Soyuz 1 when, during re-entry into Earth's atmosphere after more than ten successful orbits, the capsule's main parachute fails to deploy properly, and the backup chute becomes entangled in the first. The capsule's descent is slowed, but it still hits the ground at about 90 mph, at which point it bursts into flames. Komarov is the first human to die during a space mission.
How did Santa become Santa? Hundreds of years ago in Lapland, a little boy named Nikolas lost his family in an accident. To show his gratitude to the villagers who took him in, he decides to make toys for the children of the families as goodbye presents at Christmas, thus creating a legend that would be carried on from generation to generation. | <urn:uuid:baa8cc10-2f96-4e95-b828-7615f4533bf5> | CC-MAIN-2015-14 | http://olathe.bibliocommons.com/item/show/989893036_christmas_story | s3://commoncrawl/crawl-data/CC-MAIN-2015-14/segments/1427131294307.1/warc/CC-MAIN-20150323172134-00075-ip-10-168-14-71.ec2.internal.warc.gz | en | 0.958726 | 135 | 2.984375 | 3 |
Greenhouse azaleas are those beautiful, multicolored joys of spring, those bright spots in the grocery store or garden nursery when everything else is winter gray. Their bright beauty has caused many a gardener (and many non-gardeners) to ask, “Can you grow azalea indoors successfully?” The answer is, “Of course you can!”
Tips for Growing an Azalea Houseplant
You can grow azalea indoors much like any other houseplant, but as with other blooming plants, there are a few tricks you need to know about the care of indoor azalea if you want to keep them blooming year after year.
The first step in growing an azalea houseplant is to choose the right shrub. You are looking for greenhouse azaleas, not hardy azaleas, which are only grown outdoors. Both are rhododendrons, but they belong to different subgenera, one of which is hardy only to USDA plant hardiness zone 10. That's the one you want.
Greenhouse azaleas aren’t always marked as such, but they will almost always be sold indoors and usually come with that decorative foil wrapping around their pots. Look for a plant with only a few buds open and showing color. That way, you’ll be able to enjoy that first full bloom for a longer period of time.
Flower buds should look healthy and be at different stages of development as a sign they are actively growing. An azalea houseplant with yellowed leaves isn’t healthy. Look under the leaves as well. That’s where those pesky whiteflies and mealybugs dwell. They love azaleas.
As houseplants, many growers ship azaleas in clear plastic sleeves. These sleeves are meant to protect the plant in shipping, but they also trap the ethylene gas released by the plant, which can cause leaf drop. Try to find a retailer who removes them or, if you can’t, remove it from your greenhouse azalea as soon as you get it home.
Care of Indoor Azalea
In their natural environment, these plants live in the understory of high trees. They thrive in cool, filtered sun. Azaleas as houseplants do best at cooler temperatures, ideally around 60-65 F. (16-18 C.). Cooler temperatures will also help the blooms last longer. Keep them well lit, but out of direct sun.
Moisture should be your greatest concern in the care of indoor azaleas. Never allow your plant to dry out. While watering from the top may provide sufficient care, indoor azaleas enjoy the occasional dunk, pot and all, in a larger container of water. When the bubbles stop, pull it out, and let it drain. Whatever you do, don’t let these plants dry out. Keep them damp, not soggy, and don’t fertilize until flowering is complete.
At this point, the lives of most azaleas as houseplants are over, because this is where most people throw them away or plant them in the spring garden for their foliage, allowing Mother Nature to do the deed with frost the following fall.
Getting Greenhouse Azaleas to Rebloom
Can you grow azalea indoors and get it to rebloom? Yes. It isn’t easy, but it’s worth a try. Once the blooms have faded, give your plant a little more light and fertilize it with an all-purpose liquid fertilizer every two weeks. When the weather warms, plant it pot and all in your outdoor garden or keep the pot in a semi-shaded area indoors or out. Since they prefer slightly acidic soil, you may want to use a fertilizer manufactured for that purpose.
Shape the plant in midsummer, cutting back any straggly growth and keep it well watered. Bring it back indoors before the first frost of autumn. Now the hard part begins. Between early November and early January, greenhouse azaleas need temperatures ranging between 40 and 50 F. (4-10 C.). A sunny, enclosed, but unheated porch will do the job so long as the temperature doesn’t drop to freezing. This is essential for growing an azalea as a houseplant, because the blooms set during this chilling time.
Give your plant enough water to keep it from wilting, but don’t be too generous and don’t fertilize. All the nutrition it needs has been stored in the leaves and fertilizing now will give you lush growth without flowers. In January, move the plant indoors, but it should still have nighttime temperatures around 60 F. (16 C.). That back bedroom that everyone complains about is ideal for this. In a few weeks, flowering should begin.
Growing an azalea houseplant and getting it to bloom again takes time and careful planning, but the reward of such lovely blooms makes the effort well worth it. | <urn:uuid:374bd02d-d517-4591-b086-51dbce16383e> | CC-MAIN-2015-35 | http://www.gardeningknowhow.com/ornamental/shrubs/azalea/growing-azalea-houseplants.htm | s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440644066266.26/warc/CC-MAIN-20150827025426-00152-ip-10-171-96-226.ec2.internal.warc.gz | en | 0.940574 | 1,046 | 2.53125 | 3
Materials Science and Engineering is a degree based firmly in the fundamental sciences of Mathematics, Physics and Chemistry. As it is not a specific area of focus in high school teaching, year 11 and 12 students often don't have a good understanding of what Materials Science and Engineering is, or the exciting career paths it can make available.
It is an excellent foundation to a wide range of careers in science, engineering and elsewhere, and a field that will have increasing importance as its correlation to sustainability and environmental impact is further explored.
Our HSC Subject Selection guide can assist teachers and students in making the most appropriate HSC choices to develop the strongest knowledge base for readily pursuing a degree in this area.
We encourage you to utilise our online tutorials, developed by teaching staff in the School to learn introductory materials science principles. The tutorials cover areas such as atomic bonding, corrosion, ceramics and materials testing, and are relevant to HSC students studying physics, chemistry and engineering studies.
We hope you find these resources useful in engaging with your students. | <urn:uuid:a4702bef-9596-4090-a59e-d1943c8e4f42> | CC-MAIN-2019-39 | http://www.materials.unsw.edu.au/high-school/teacher-resources-0 | s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514574182.31/warc/CC-MAIN-20190921022342-20190921044342-00161.warc.gz | en | 0.933814 | 210 | 3.09375 | 3 |
Cashing In On the Red Planet
To land humans on the Red Planet, NASA will need new equipment, fresh thinking, and advanced technology. These companies are preparing for mankind’s next giant leap.
Attention, people of Earth: We are going to Mars. This is no sci-fi fantasy; for the past two years, NASA has been gearing up to meet the Bush administration’s goal of landing humans on Mars by around 2030. The agency plans to set up a base on the Moon by 2020 to act as a staging area; that effort alone is projected to cost at least $104 billion. Throw in the round-trip voyage to Mars, and John Edwards, space systems analyst at Forecast International, estimates that the total cost of the program will top $400 billion—making it history’s largest government-backed science project.
Money is already being spent: NASA’s 2006 budget allocates $16.4 billion, much of it for the development of a new spacecraft, called the crew exploration vehicle, that will replace the trouble-plagued space shuttle and carry humans “to the Moon, Mars, and worlds beyond.” The candidates to design and build the CEV are familiar names—Lockheed Martin (Research) and a combined Northrop Grumman (Research)/Boeing (Research) team are the leading contenders right now—but NASA’s multibillion-dollar shopping spree will create unprecedented opportunities for companies that haven’t traditionally been involved in spaceflight. For example, heavy-equipment manufacturer Caterpillar (Research) has worked with the space agency to develop “regolith-handling” construction machinery. (“Regolith” is the geologists’ term for extraterrestrial dirt.) Hundreds more contracts will be signed in the decades ahead as the space agency seeks to address the basic needs of deep-space astronauts—oxygen to breathe, food to eat, fuel to burn, and communications networks to stay in touch. These three companies are already positioning themselves to secure a lucrative spot on the launch pad.
1. SPACE NETWORKING: Cisco Systems
HEADQUARTERS: San Jose
2005 REVENUES: $24.8 billion
Rick Sanford has an unusual resume by Silicon Valley standards. A former Air Force cryptographer and security consultant who helped combat drug smugglers in South America, Sanford is now the director of space and intelligence initiatives for networking giant Cisco (Research). His job is to manage the millions of R&D dollars Cisco is spending to become competitive on the final frontier.
Just as we’ve become accustomed to hyperfast voice, video, and data links, future explorers will require similar connectivity as they travel the solar system. “The goal is to make Washington-to-Mars communication indistinguishable from Washington-to-San Francisco,” Sanford says. To accomplish that he’s assembled a team of 14 researchers—radio specialists, computer programmers, and theoretical physicists—to attack the problem. The challenges are like those encountered on Earth, only hugely amplified. Time delays are counted in minutes, rather than milliseconds. Equipment will be exposed to severe radiation, heat, and cold.
Cisco is already getting ready. The company has tested its networking hardware inside NASA vacuum chambers. It has subjected its routers to 30 Gs of force to replicate the brutal stress of a rocket launch. And in 2003, Cisco successfully operated an off-the-shelf router in Earth’s orbit, after installing the hardware aboard a British satellite. Ultimately, constellations of communications satellites may circle the Moon and Mars to provide the backbone for an interplanetary data network—a market that could generate more than $750 million worth of business for companies like Cisco. “Nontraditional space companies have to invest for a 20- to 30-year time frame,” Sanford says. “They need to make decisions today if they want to provide successful technology when we’re ready to go to Mars.”
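To ground the "minutes, rather than milliseconds" point, here is a rough sketch of the one-way light-travel time between Earth and Mars. The distance figures are approximate astronomical values, not taken from the article:

```python
# One-way signal delay Earth-Mars at the speed of light.
# Distances are approximate: ~54.6 million km at a rare close approach,
# ~401 million km near solar conjunction.

C_KM_PER_S = 299_792.458  # speed of light in vacuum, km/s

def one_way_delay_minutes(distance_km):
    return distance_km / C_KM_PER_S / 60.0

print(f"{one_way_delay_minutes(54.6e6):.1f} min")  # ~3.0 min
print(f"{one_way_delay_minutes(401e6):.1f} min")   # ~22.3 min
```

Round-trip acknowledgements double these figures, which is why protocols built for millisecond terrestrial latencies need rethinking for deep space.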
2. LIFE SCIENCES: Dynamac
HEADQUARTERS: Rockville, MD
2005 REVENUES: $35 million
At first glance, Dynamac’s life-sciences lab looks like a meat locker. Two rows of aluminum-clad doors span the room from end to end; behind each is an “environmental growth chamber.” But rather than a cold breeze, the open doors release a warm blast of humid air and a blaze of bright light. Inside one chamber, six varieties of strawberries grow in plastic buckets linked by a lattice of thin vinyl tubing. There’s no soil; instead, a hydroponic feeding system bathes the plants’ roots in nutrient-rich water. Sharon Edney, a Dynamac plant biologist, explains that the chambers simulate the “less than ideal conditions” inside a stuffy Mars habitation module. “It’s like The Right Stuff,” Edney says. “The winning strawberry gets to go into space.”
Founded in 1970 by a small team of engineers and scientists, Dynamac is an environmental research and natural-resource management company. Its development of space strawberries was conducted under a 10-year contract from NASA to develop “bioregenerative life-support systems”—technologies that will allow astronauts to survive in space without having to haul all their food and water from Earth. As part of that effort, Dynamac also developed bioreactors, a technology that uses microbes to break down urine, feces, and dirty water into usable components such as water, oxygen, nitrates, and ammonia (for fertilizing plants).
There’s also the potential for civilian spinoffs: Dynamac’s bioreactors could be used to recycle waste products and produce food in remote locations where clean water is scarce. Earning a place on the Mars mission would be ideal, but Dynamac’s Mars research means that the company is also poised to pursue opportunities closer to home.
3. PROPULSION: Pioneer Astronautics
HEADQUARTERS: Lakewood, CO
2005 REVENUES: $1 million
Long before NASA decided to send a team to Mars, Robert Zubrin was designing the engines needed to get there. As an engineer for Lockheed Martin, Zubrin pursued experimental propulsion technology for interplanetary missions. In his spare time, he wrote “Mars Direct,” a groundbreaking white paper (and, later, a best-selling book titled The Case for Mars) that outlined a no-frills plan to explore the Red Planet. Over time, Zubrin grew frustrated with his employer’s lumbering pace, so he quit Lockheed in 1996 to found a company of his own, a bleeding-edge R&D firm called Pioneer Astronautics. His goal: to develop new technologies for a trip to Mars.
NASA awarded Zubrin his first research grant, for $70,000, in 1996. Beyond that, explains Pioneer research chemist Tony Muscatello, the company has “survived from contract to contract” while working for the Department of Defense and breathing-air-system specialist Neutronics. Now that the Mars project is gaining momentum, however, Pioneer’s expertise has become more valuable. In October 2005 the company won its largest NASA contract to date: a two-year, $600,000 award to build a device called an integrated Mars in-situ propellant production system. The concept uses a chemical reaction to produce rocket fuel from carbon dioxide in the Martian atmosphere. “What’s different now,” Zubrin says, “is the potential for access to the funding needed to take these ideas beyond the research stage and into flight hardware.” Translation: After years of hard work, his big payoff may finally be at hand.
Copyright © Michael Behar. All Rights Reserved. | <urn:uuid:697cf0b3-db9d-4ec3-aed0-94c981dfcb1a> | CC-MAIN-2017-26 | http://www.michaelbehar.com/articles/business-2-0-cashing-in-on-mars/ | s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128323801.5/warc/CC-MAIN-20170628204133-20170628224133-00046.warc.gz | en | 0.930202 | 1,640 | 2.65625 | 3 |
The majority of existing underground fuel storage tanks are manufactured from steel with steel suction and vent lines. Twin-skin tanks have either a fibreglass or steel inner and a fibreglass outer. The outer skin acts as secondary containment should the inner skin fail.
The life expectancy of a steel tank and lines is twenty to thirty years where that system is installed to the appropriate standards in an ideal environment.
Fuel Doctors have had to replace or abandon numerous steel tanks and lines over the years that did not make it to their first ten years, due to site-specific installation deficiencies and/or a lack of preventative maintenance.
High rise buildings with emergency power installations place the storage tank at the lowest level in the building, up to six floors below ground. Life expectancy of these buildings can average seventy years. Consequently, preventative maintenance of the fuel containment system is a must to maintain service life and minimise environmental impact to the surrounding area.
Ground water and condensation are the main catalysts for tank and line failure. Corroded vent lines will allow ground water to flow freely into the tank, compromising fuel quality and tank shell life. Without integrity testing, these issues only become apparent due to engine damage by contaminated fuel.
90% of the tanks extracted by Fuel Doctors since 1992 have rusted from the inside. Integrity testing can determine ingress or egress and should be undertaken annually as part of a preventative maintenance plan. | <urn:uuid:2ddc597b-ecd6-4bf3-acc4-2aa2a58c5b23> | CC-MAIN-2020-05 | https://fueldoctors.com.au/services/integrity/ | s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250594662.6/warc/CC-MAIN-20200119151736-20200119175736-00133.warc.gz | en | 0.940273 | 293 | 3.359375 | 3 |
6th Grade – World Geography
Sixth grade Social Studies explores the essential question, "What do we do with a difference?" Students begin the year examining the elements of their own culture. The five major world religions are introduced, and students learn about contemporary issues as a way to study the world. Throughout the year, students learn and practice reading, writing, critical thinking, and study skills, including identifying main ideas, making comparisons, writing paragraphs, engaging in small and large group discussions, and making presentations.
7th Grade – Civilizations
Seventh grade Social Studies explores the essential question, "What happens when cultures interact?" Students begin the year focusing on their own culture and then study some of the cultures of Asia, Europe, and Central America. There is a focus on the five major religions: Hinduism, Buddhism, Judaism, Christianity, and Islam, and how people who believe in these religions have interacted with each other. Throughout the year, students learn many skills necessary to help them to research, read and comprehend historical knowledge through a variety of sources. They also develop the skills necessary to present that information in a variety of formats, such as a written essay, oral and visual presentation, classroom debate, and small and large group discussions.
8th Grade – Civics
Eighth grade explores the essential question, "What does it mean to be a responsible citizen?" Over the course of the year, students will come to understand their role within society both as a United States citizen and as a citizen of the human race. The year begins by asking students to think about their own identity and how it influences the decisions they make and the viewpoints they hold. Students will then learn about the purpose of government and the foundations of the United States democratic model. This will include studying the Declaration of Independence, the United States Constitution, and the Bill of Rights. Included in this study will be a brief overview of some key events in United States history and United States geography. Students will then begin to explore how they can participate and engage in their own communities whether this be their school, town, state, or country. The year will finish with exploring case studies in civil rights movements particularly the integration of public schools in the 1950s. Students will prepare for their annual trip to Washington, D.C. by exploring the importance and symbolism of many of the sites they will visit on their trip and how these connect to the themes discussed throughout the school year. The goal of the 8th grade civics curriculum will be to give students the tools and knowledge they need to be responsible citizens of a democracy. A special emphasis will be placed on developing a student’s ability to read and comprehend challenging primary source material and write in a persuasive, argumentative style in order to prepare them for a rigorous high school curriculum. | <urn:uuid:71e8fd0e-4b48-452a-9355-4ed798a0bebb> | CC-MAIN-2018-09 | https://www.bedfordps.org/middle-school/social-studies/pages/curriculum | s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891812405.3/warc/CC-MAIN-20180219052241-20180219072241-00234.warc.gz | en | 0.95097 | 564 | 4.0625 | 4 |
Fiscal “Crisis” In Context: Two Indicators
With all the predictions of doom and gloom coming from the austerity camp, one would think that Canada was already about to hit the famed (but never seen) “debt wall.” Before we get too carried away, however, with the scary debt stuff, consider these two indicators of the fundamental fiscal fragility/stability of Canadian governments.
The first figure shows the net financial debt of the federal and provincial levels of government, measured as a share of GDP. This is total outstanding net financial liabilities. Federal net debt reached almost 70% of GDP in the mid-1990s, fell back steeply to below 30% by 2008, and since has bounced (modestly) to 34% as a result of the fiscal consequences of the meltdown and recession. It’s interesting to note that the federal debt burden has already levelled off (meaning that nominal net debt has been growing no faster than nominal GDP). You don’t need to strictly balance the budget in order to stabilize the debt burden (measured, appropriately, as a share of GDP). At the current level of indebtedness, so long as Ottawa’s annual deficit is less than about $20 billion the debt burden will still decline (since nominal GDP grows fast enough to absorb that new debt, in absolute terms, within a falling debt ratio). Future deficits would be smaller than that even without the Harper government’s cutbacks; moving forward, the federal debt burden will now start to fall significantly, long before the budget is balanced.
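The arithmetic behind that "less than about $20 billion" threshold can be sketched as follows. The GDP level and nominal growth rate used here are rough illustrative assumptions (circa-2012 ballpark figures), not numbers from the post; only the 34% debt ratio is cited in the text:

```python
# Debt-dynamics sketch: the debt/GDP ratio is stable when the annual
# deficit roughly equals nominal GDP growth times the existing debt ratio.
# Assumed inputs (not from the article): GDP ~ $1,750B CAD, growth ~ 3.5%.

def debt_ratio_next(ratio, deficit_share, nominal_growth):
    """One-year update of debt/GDP: new debt is added, then GDP grows."""
    return (ratio + deficit_share) / (1.0 + nominal_growth)

gdp = 1750.0     # nominal GDP, $ billions (assumption)
growth = 0.035   # nominal GDP growth rate (assumption)
ratio = 0.34     # federal net debt / GDP, cited in the text

# Stability condition for this update rule: deficit_share = ratio * growth
threshold = gdp * ratio * growth
print(f"break-even deficit ≈ ${threshold:.0f}B")  # ≈ $21B

# A $15B deficit (below the threshold) still shrinks the ratio:
print(debt_ratio_next(ratio, 15.0 / gdp, growth) < ratio)  # True
```

Under these assumptions the break-even deficit lands near $21 billion, consistent with the post's "about $20 billion" figure: any smaller deficit is absorbed by nominal GDP growth and the debt burden still falls.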
Of course, Canada is unusual among countries in having significant amounts of sub-national debt, mostly with the provinces — whose fiscal situation has been undeniably worse than Ottawa’s. The red section of the figure layers on provincial debt as a share of GDP. The rise and fall of the larger federal debt burden still sets the overall shape of the figure. Provincial debt rose to 27% of GDP by the mid 1990s, falling almost in half by 2007. It has since bounced back to over 20% — and unlike the federal debt, still grew modestly last year as a share of GDP. Nevertheless, even on a combined basis, federal and provincial net debt (now equal to 55% of GDP) is far lower than most other OECD countries, has increased by only 10 points of GDP since the financial crisis started, and is already leveling off. Big cuts in public spending are not necessary in order to stabilize and gradually reduce public debt in Canada.
If anything, I suggest this figure overstates the true “debt” of government because some of those liabilities were issued to pay for real (non-financial) assets which have enduring (and in many cases tradeable) value. If government capital spending were accounted for on an accrual basis (as makes sense, since this is how companies treat long-lived capital assets), then the true increase in the net debt burden since the recession would be even smaller.
An even more interesting indicator is provided in the next figure, which shows total public debt servicing charges (again as a share of GDP) for both the federal and provincial governments. This is from CANSIM Table 380-0022; I think (but am not 100% sure) that it is a gross number: that table does not separately report interest income for governments, but it does report a broader category called investment income … so it is likely that the data in this figure overstates the problem (by counting only interest expense, and not adjusting for interest income on the governments’ own financial assets).
According to this figure, government debt service expense has not only declined dramatically as a share of GDP since the bad old 1990s (when it peaked at close to 10% of GDP); it has continued to decline despite the (modest) rebound in debt resulting from the recession. Debt service costs for all levels of government have fallen below 4% of GDP since the recession. How could debt service costs decline, even while the debt burden (modestly) grew? Because average interest costs have declined. Like home-owners, governments have been able to refinance their debt to take advantage of today's ultra-low rates. (Remember, even fiscally pressed provinces like Ontario can still borrow money today for 10 years at real interest rates not much above zero.) As older bonds come due and are refinanced, governments reduce their interest costs dramatically. Those savings have more than offset the incremental debt service costs associated with additional debt. So the claim that rising debt service costs are squeezing out more useful forms of public expenditure (not that conservatives support those programs, either) is empirically false.
Running up public debt for the sake of running up debt makes no sense. There are costs associated with debt, and limits to how much debt can rise. But there are benefits associated with debt-financed spending, too. That includes the productivity of long-lived public capital assets that can be financed with debt (just like companies or households prudently finance long-lived assets, from factory equipment to homes, with debt). In a demand-constrained macroeconomic context, another benefit of debt-financed spending is the positive spillover effect on overall employment and income that results from that spending (even when it's on current services rather than public capital). Based on the preceding graphs, Canadian governments are far from any meaningful constraint on their ability to borrow. Hence, we should make a rational decision as a country regarding how much new debt is optimal, rather than being dominated by an initial quasi-religious assumption that "all debt is bad."
In short, choosing today to slash spending on useful public programs very much reflects a political choice, not a fiscal necessity. In the long run, we can and should pay for the public services we need by putting Canadians back to work. In the meantime, there is clearly ample capacity for governments to continue to carry the cost of these programs. Since governments can borrow at near-zero real interest rates, even as companies and households are beginning to deleverage, it is counter-productive to impose austerity in the public sector on top of the other painful economic challenges we are facing. | <urn:uuid:65383091-cc06-4577-be99-d150c9ca6ee7> | CC-MAIN-2021-39 | https://www.progressive-economics.ca/2012/07/fiscal-crisis-in-context-two-indicators/ | s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780053918.46/warc/CC-MAIN-20210916234514-20210917024514-00367.warc.gz | en | 0.95951 | 1,316 | 2.921875 | 3 |
A friend of yours who has not taken astronomy sees a meteor shower (she calls it a bunch of shooting stars). The next day she confides in you that she was concerned that the stars in the Big Dipper (her favorite star pattern) might be the next ones to go. How would you put her mind at ease?
In what ways are meteorites different from meteors? What is the probable origin of each?
How are comets related to meteor showers?
What do we mean by primitive material? How can we tell if a meteorite is primitive?
Describe the solar nebula, and outline the sequence of events within the nebula that gave rise to the planetesimals.
Why do the giant planets and their moons have compositions different from those of the terrestrial planets?
How do the planets discovered so far around other stars differ from those in our own solar system? List at least two ways.
Explain the role of impacts in planetary evolution, including both giant impacts and more modest ones.
Why are some planets and moons more geologically active than others?
Summarize the origin and evolution of the atmospheres of Venus, Earth, and Mars.
Why do meteors in a meteor shower appear to come from just one point in the sky? | <urn:uuid:a7f69a3f-ce58-4dd0-9e84-d1669a6bd81f> | CC-MAIN-2019-51 | https://openstax.org/books/astronomy/pages/14-review-questions | s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540506459.47/warc/CC-MAIN-20191208044407-20191208072407-00024.warc.gz | en | 0.938207 | 263 | 3.875 | 4 |
12 January 1729: Edmund Burke, one of the foremost political thinkers of 18th century, was born in Dublin on this day. He was also Anglo-Irish statesman, author, orator, political theorist and philosopher who, after relocating to England, served for many years in the British House of Commons as a member of the Whig party.
Burke was born in Dublin to a prosperous, professional solicitor father (Richard; d. 1761) who was a member of the Church of Ireland. His mother Mary (c. 1702–1770), whose maiden name was Nagle, belonged to the Catholic Church and came from an impoverished but genteel Cork family. Burke was raised in his father's faith and would remain throughout his life a practising Anglican, unlike his sister Juliana who was brought up as and remained a Roman Catholic. His political enemies would later repeatedly accuse him of harbouring secret Catholic sympathies at a time when membership of the Catholic church would have disqualified him from public office. Once an MP, Burke was required to take the oath of allegiance and abjuration, the oath of supremacy, and declare against transubstantiation. Although never denying his Irishness, Burke often described himself as "an Englishman".
He spent the bulk of his life in England and became active in politics, opposing British policy on the revolt of the American colonies and, at the end of his life, British policy towards Ireland. He never totally adopted any single political philosophy, but overall he could be said to represent a conservative liberalism that eschewed extremes. His most famous work, Reflections on the Revolution in France, was a best seller; in it he warned against the dangers of excess in political affairs, especially as events unfolded in France in the wake of the Revolution.
He basically wanted the State to play only a limited role in the personal affairs of men, allowing as much individual freedom of thought and action as was commensurate with the social order.
‘That the State ought to confine itself to what regards the State, or the creatures of the State, namely, the exterior establishment of its religion; its magistracy; its revenue; its military force by sea and land; the corporations that owe their existence to its fiat; in a word, to every thing that is truly and properly public, to the public peace, to the public safety, to the public order, to the public prosperity.’
"All government, indeed every human benefit and enjoyment, every virtue, and every prudent act, is founded on compromise and barter."
'The only thing necessary for the triumph of evil is for good men to do nothing.'
While it is hard to sum up such an active career over many decades, a pithy summary of what he stood for might well be:
His soul revolted against tyranny, whether it appeared in the aspect of a domineering Monarch and a corrupt Court and Parliamentary system, or whether, mouthing the watch-words of a non-existent liberty, it towered up against him in the dictation of a brutal mob and wicked sect.
Burke died in Beaconsfield, Buckinghamshire, on 9 July 1797, five days before the anniversary of the storming of the Bastille which marked the official start of the Revolution he so long predicted and fought against. He was buried in Beaconsfield alongside his son and brother. His wife survived him by nearly fifteen years. | <urn:uuid:e0fa1bc5-acb3-4fb5-b219-7f4b8aaf2acc> | CC-MAIN-2018-26 | http://irelandinhistory.blogspot.com/2014/01/12-january-1729-edmund-burke-one-of.html | s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267867055.95/warc/CC-MAIN-20180624195735-20180624215735-00427.warc.gz | en | 0.983587 | 713 | 4.125 | 4 |
Animal Species:Red Rock Crab
The Red Rock Crab is not only one of the most common crabs in Sydney but also one of the fastest movers. It is often used as bait, particularly to catch grouper.
Standard Common Name
Red Rock Crab
Red Bait Crab
The Red Rock Crab is easily recognised by its large size and red colour.
The Red Rock Crab is found in temperate waters of southern Australia, in New South Wales, Victoria, South Australia, Western Australia and Tasmania. Also found in New Zealand, South Africa and Chile.
Preferring moist environments, the Red Rock Crab is found around the low-tide mark on the rocky shore and underwater to depths of about 8 m. It tends to hide among seaweed and under rock ledges.
Feeding and Diet
The Red Rock Crab feeds on sponges, bryozoans and small seaweeds. | <urn:uuid:ed7956b3-54cd-4fbf-a7f9-42247e4e9cbb> | CC-MAIN-2016-36 | http://australianmuseum.net.au/red-rock-crab | s3://commoncrawl/crawl-data/CC-MAIN-2016-36/segments/1471982296721.55/warc/CC-MAIN-20160823195816-00273-ip-10-153-172-175.ec2.internal.warc.gz | en | 0.825032 | 185 | 2.921875 | 3
We did it! Thanks to an incredible show of support to protect one of the richest habitats on Earth, we will be able to save 400 acres of tropical forest in the ...
Organisation’s aim: To protect the habitat of globally threatened species of birds in the Andes of Ecuador, together with all associated biodiversity. Fundación Jocotoco owns eight reserves that protect a range of habitats for threatened species, including ones discovered on the reserve as new to science, such as the Jocotoco Antpitta.
Fundación Jocotoco was established in 1998 to protect globally threatened bird species of the Ecuadorian Andes. Since then they have successfully created eight reserves protecting birds, as well as other wildlife depending on a wide range of different habitats.
Initially approached by Nigel Simpson, a founding Trustee of Fundación Jocotoco, World Land Trust (WLT) has been funding land purchase and protection with Jocotoco since 2003, as part of our conservation work in Ecuador.
There have also been Carbon Balanced projects at three reserves; Buenaventura, Tapichalaca and Yanacocha.
Tree planting has taken place on the Buenaventura, Tapichalaca, Yanacocha and Jorupe reserves since 2006, using funds from WLT’s Reforestation programme.
WLT also supports Fundación Jocotoco through the Keepers of the Wild programme, which funds rangers on our partners’ reserves.
Other projects and activities
- Organising community programmes, including hosting school visits at the Buenaventura Reserve to educate local children about the conservation of the El Oro Parakeet;
- Running tree nurseries that grow tens of thousands of seedlings for use in reforestation projects;
- Scientific research on topics including the Pale-headed Brush Finch programme, reforestation and management plans for the area.
Awards and achievements
- In 2007, the Buenaventura Reserve received an award for promoting eco-tourism to the Province of El Oro area. | <urn:uuid:e53a1531-b657-4cee-b4ac-01827b471c91> | CC-MAIN-2020-29 | https://www.worldlandtrust.org/who-we-are-2/partners/fundacion-jocotoco/ | s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593657170639.97/warc/CC-MAIN-20200715164155-20200715194155-00342.warc.gz | en | 0.918196 | 431 | 2.828125 | 3 |
With this micro-power flasher you can baffle the intruders trying to break into your home. The unit continuously emits flashing light both during day and night giving the impression that the occupants of the house are present inside. The circuit can run off four 1.5V AA-size cells continuously for a long period.
The human eye and brain react sharply to sudden changes in light intensity, especially flashing light. A flash lasting at least 10 ms is needed for it to be perceived at full brightness. The property of the eye and brain known as 'persistence of vision' retains an image for about 20 ms before it slowly fades, so the eye can detect flashing light as individual flashes only if the gap between flashes is greater than 20 ms. The eye is particularly attracted by a flashing light with a repetition period of 0.5 to 5 seconds.
Here a low-power CMOS IC, the CD4093, is used to generate sharp flashes from an ultra-bright red LED. The CD4093 is a quad two-input NAND Schmitt-trigger IC, and only one gate is used here, as an oscillator. All the unused inputs are kept at logic '1' by tying them to the positive rail, to prevent them floating and suffering damage from static charge.
LED1 is connected such that it receives the current from storage capacitor C2. This helps to reduce the battery drain since LED1 is not directly powered from the output of the IC.
The value of resistor R1 determines the current drain from the battery to charge capacitor C2. With the given value of 330 kilo-ohms, the current drain will be around 18 µA (I = V/R = 6 V ÷ 330,000 Ω ≈ 18 µA). Resistor R2 limits the current through LED1.
Typically, a 1.5V AA cell rated at 1.6 Ah has a nominal shelf life of five years. A low-leakage, good-quality battery can power the flasher for nearly three years.
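The figures quoted above are easy to sanity-check. The sketch below (Python standing in for a calculator) takes the 6 V series supply, the 330 kΩ value of R1 and the 1.6 Ah cell rating straight from the text; treating the R1 charge path as the only drain is an idealisation that ignores the oscillator's own consumption and the cells' self-discharge.

```python
# Sanity check of the drain and battery-life figures quoted above.
# Assumptions: four 1.5 V AA cells in series (6 V supply), and the only
# drain is the charge current through R1 -- the oscillator's own draw
# and the cells' self-discharge are ignored.

supply_v = 6.0       # 4 x 1.5 V AA cells in series
r1_ohms = 330e3      # R1 = 330 kilo-ohms

drain_a = supply_v / r1_ohms              # I = V / R
print(f"drain: {drain_a * 1e6:.1f} uA")   # -> drain: 18.2 uA, matching the text

capacity_ah = 1.6                         # nominal AA capacity from the text
life_hours = capacity_ah / drain_a        # idealised battery life
life_years = life_hours / (24 * 365)
print(f"idealised life: {life_years:.1f} years")
```

On paper this gives roughly ten years of operation; the article's three-year estimate is the realistic one once leakage and shelf life are accounted for.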
The circuit costs little and can be constructed on a small perforated board. Enclose it in a small case with a front panel similar to that of a burglar alarm, so that flashing LED1 gives the appearance of an alarm-'on' indicator. Use a high-brightness, transparent LED for an eye-catching appearance.
The flasher can also be used as a pathfinder in corridors or exit doors when power fails or as a keyhole finder. | <urn:uuid:1d278edd-d844-4ac7-804f-e41b78a00674> | CC-MAIN-2022-21 | https://www.electronicsforu.com/electronics-projects/micro-power-flasher | s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662525507.54/warc/CC-MAIN-20220519042059-20220519072059-00621.warc.gz | en | 0.92683 | 512 | 2.65625 | 3 |
Who was Samuel Taylor Coleridge?
Samuel Taylor Coleridge (1772-1834) was one of the great Romantic poets. He was a writer of visionary imagination, lyric intensity and philosophical profundity.
Coleridge was born in Devon. Educated in London after his father died, he was often homesick, as he would recall in his poem, ‘Frost at Midnight’. He was an undergraduate at Jesus College, Cambridge, but left before completing his degree.
Coleridge became friends with another young poet, Robert Southey, and together they planned to emigrate to America to found an agrarian, communistic society which they named ‘Pantisocracy’. They married a pair of sisters, Sara and Edith Fricker, to advance this plan.
Coleridge gave politically radical public lectures in Bristol in 1795 on the subjects of the French Revolution, the Slave Trade, and Revealed Religion, while Southey lectured on history. However, disagreements between the two led to a period of estrangement: they did not attend one another’s wedding.
Partnership with William Wordsworth
In 1797, Coleridge and Sara moved to a cottage in Nether Stowey, Somerset. While living here, Coleridge began his literary partnership with William Wordsworth, producing a joint volume, Lyrical Ballads, in 1798.
This collection is often identified as the start of English Romantic poetry, and opens with Coleridge's poetic masterpiece, 'The Rime of the Ancyent Marinere'. This ballad about a sailor who shoots an albatross is a supernatural tale of transgression, suffering and partial redemption. Critics continue to debate whether the poem presents an experience of Christian revelation, or a nightmarish vision of a chaotic universe.
Travels in Europe
Coleridge travelled to Germany, where he studied at the University of Göttingen. After returning to England in 1799, he fell in love with Sara Hutchinson, Wordsworth’s future sister-in-law. ‘Asra’, as Coleridge called her, became the muse for several poems while his strained marriage deteriorated further.
Coleridge went abroad again between 1804 and 1806. In Malta, he was the Private Secretary to the Governor, Rear Admiral Sir Alexander Ball, for whom he drafted position papers on military policy.
Later years and reputation
In the 1810s, Coleridge established a reputation as a lecturer on philosophy and literature, particularly on the works of Shakespeare. In 1816, the poet Lord Byron convinced him to publish his unfinished ballad, Christabel, and his visionary poem, ‘Kubla Khan’.
In 1817, Coleridge published the Biographia Literaria, which combined autobiography, literary criticism and German philosophy. While Coleridge’s reputation suffered in the twentieth century when the extent of his addiction to opium and his plagiarisms from German sources were documented, such criticism has overshadowed the originality of the insights he added.
Coleridge continued to produce influential prose works in his final years, including The Statesman's Manual (1816), Aids to Reflection (1825) and On the Constitution of Church and State (1829). His thinking in these works spanned religion, literary theory and constitutional politics, and would have a significant influence on the thinking of the Victorians. The philosopher J. S. Mill placed Coleridge, along with Jeremy Bentham, as one of the two seminal minds of the age in England. Coleridge died on 25th July, 1834.
Golden-hardhack (Dasiphora floribunda (Pursh) Raf.)5
Photo © 2000 James L. Reveal
Before he left the United States in late December of 1804, Rafinesque had approached Thomas Jefferson asking to be appointed naturalist to one of the President's proposed western expeditions. After some delay, Jefferson sort of agreed to send him on the Red River expedition to be led by William Dunbar and George Hunter. Alas for young Rafinesque, he received word that he would be appointed only after he was in Sicily. Considering that the natural history collections of the Dunbar-Hunter and the Freeman-Custis expeditions were essentially nil, science lost out on both counts.6
Now, upon his return, without books, collections or family, he set out to recover at least the first two with the support of friends in New York. While in Sicily, Rafinesque had been publishing widely, both books and, especially, articles in scientific journals, most notably the New York-based journal Medical Repository. His journal Specchio delle scienze, (two volumes) published in 1814, and his 1815 book Analyse de la nature may be mentioned as representative contributions; both were published in Palermo. The latter work was particularly significant for here Rafinesque outlined a system of classification for all living organisms that was remarkably novel. For botany it could have been highly significant had he fully described all of the groups he recognized. Instead, he treated only a few in detail as examples, leaving the vast majority of the new names without descriptions. As a result, later authors published many of his new families of plants without giving him credit. And about that he complained bitterly.
Dr. Samuel Latham Mitchill (1764-1831)
Courtesy of Barnard College, Columbia University
Tradition suggests that Dr. Samuel Latham Mitchill7 took Rafinesque into his home. As a noted student of the natural sciences, he was generous to his fellow naturalists. Alexander Wilson, who would describe the birds gathered on the Lewis and Clark Expedition, was an early recipient of his generosity. While Rafinesque lived in Sicily, he published several papers in Mitchill's journal Medical Repository, so in a sense the two men were at least scientifically acquainted. In addition to providing lodging for the near-destitute Rafinesque, Mitchill sought employment for him. Rafinesque became a member of the newly established Lyceum of Natural History in New York and presented its first scientific lecture. At first he possibly was in the field with Mitchill and certainly with the New York botanist John Torrey, but by 1818 he was traveling alone, often into the Allegheny Mountains, to search for all sorts of curious objects. By Rafinesque's count, he collected more than 250 new species of plants and animals on these early trips.
Rafinesque published numerous critical reviews of floras and manuals published by others, mainly in the American Monthly Magazine. In 1817, he published Florula ludoviciana, only to have his effort severely criticized by others, or, worse yet for Rafinesque, totally ignored. Still, he was traveling widely and he was gradually rebuilding his collection of natural history objects.
5. Rafinesque described the genus Dasiphora (daze-IF-fore-ah) for the shrubby potentillas that are now so popular in the garden. Dasiphora floribunda (floor-ah-BUN-dah, referring to the many flowers on the shrub) of North America is closely related to Dasiphora fruticosa (fruit-EH-coal-ah, referring to the shrubby habit) of Europe and Asia and is more properly considered a subspecies. Pursh published Potentilla floribunda (poe-TEN-till-ah) in 1813. He based the name on plants from eastern North America. He felt at the time that the Lewis and Clark specimen from Montana was more like the Old World Potentilla fruticosa than his new species.
6. Dr. Boewe told me that of 15 Dec 1804 Jefferson wrote to Rafinesque " 'Certainly I should be happy to add your botanical talents to the party [of Hunter and Dunbar], but that it is not in my power to propose any birth [sic] worthy of your acceptance.' Most would consider this a "Chinese rejection slip"—i.e. a negative phrased to spare the feelings of the recipient." The letter did not reach Rafinesque until 1805.
William Dunbar, a noted local scientist, and George Hunter, a Philadelphia chemist, were selected by Jefferson to explore the southern boundary of the Louisiana Purchase much in the same manner Lewis and Clark were asked to explore the northern boundary. After much delay, Dunbar and Hunter led a four-month-long expedition up the Ouachita River in Louisiana, an area already partially settled. Based on their findings, Jefferson obtained $5,000 from Congress to examine the Red River region. The actual expedition did not get started until May of 1806, with Thomas Freeman (1794-1821) in charge and Barton-trained Peter Custis (?-1842) acting as the expedition's naturalist. Because the route lay on the boundary of Spanish Mexico, there was considerable suspicion of the real intentions of the expedition. After a journey of some 615 miles up the Red River, the expedition was met by Spanish troops and forced to return (Flores 2000). Without a centralized national museum (the Smithsonian Institution would not be established until 1846), the collections of natural objects gathered by Custis were scattered and his report destined to be long forgotten. It was not until 1967 that the botanical report was evaluated (Morton, 1967), and not until 1982 that two plant specimens were relocated and the specimens identified. Interestingly, the Custis specimens were among the Lewis and Clark plants at the Academy of Natural Sciences in Philadelphia (Flores 1985). Custis described only three plants as new to science in 1806, but none of his names is in use today. The one plant he thought was new, the Osage orange, Maclura pomifera (Raf.) Schneid. (mac-CLURE-ah pome-IF-er-ah), was named (in 1817) from garden material grown from fruits Lewis and Clark had gathered in 1804. Custis did not propose a name for the Osage orange in either 1806 or 1807. Custis did propose the genus name Bartonia, but this had already been published in 1801.
The plant Custis had is now known as Orobanche ludoviciana (ore-oh-BANK-ee lude-oh-viss-ee-AHN-ah), a name proposed by Nuttall in 1818.
7. Samuel Latham Mitchill (1764-1831) had a brilliant career in medicine, natural history and politics. He served in both houses of Congress and held various posts in New York. Mitchill, like Rafinesque, had an incredible mind and could remember even the minutest detail so that he was sometimes called a "walking library" or a "living encyclopedia." He married a widow of substantial means and thus he was able to devote considerable time to botany and zoology, especially the study of fish. Mitchill assisted Lewis in the identification of some of the fish he and Clark saw in the American West. His 1820 United States pharmacopoeia would serve the nation's doctors well for a generation. | <urn:uuid:9998017a-04a3-4349-8270-19b0c1d965d9> | CC-MAIN-2015-32 | http://www.lewis-clark.org/article/519?ArticleID=519 | s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042988061.16/warc/CC-MAIN-20150728002308-00125-ip-10-236-191-2.ec2.internal.warc.gz | en | 0.972238 | 1,573 | 3.09375 | 3 |
Heart failure represents a major public health problem worldwide. There are approximately 5.1 million patients diagnosed with this disease in the United States, and more than 650,000 new heart failure cases are diagnosed every year. In fact, in the United States the lifetime risk for developing heart failure is more than 20% for individuals more than 40 years old. As the U.S. elderly population is continuously growing, the expected incidence of heart failure will rise. Therefore, the health care cost and burden of heart failure in medical systems will become even more significant.1
The “gold standard” treatment for patients with end-stage heart failure is cardiac transplantation. This treatment provides significant benefit in survival and quality of life for such patients. Unfortunately, the limited supply of donor organs makes this treatment an option for a few carefully selected patients. In fact, a significant proportion of patients die while waiting for heart transplantation. The use of mechanical circulatory support devices (MCSD) has become an alternative for end-stage heart failure patients for treatment in both the short term and long term. In recent years, the field of mechanical circulatory support has greatly expanded, displaying a wide array of options, from transient support of patients undergoing high-risk percutaneous interventions to destination therapy as a long-term alternative to heart transplantation.
Transesophageal echocardiography (TEE) plays an essential role as a diagnostic and management tool in a diversity of situations involving clinical care of patients undergoing mechanical circulatory support. This chapter will discuss the applications and highlight the versatility of TEE in management of patients with contemporary MCSDs.
CURRENT MECHANICAL CIRCULATORY SUPPORT DEVICES
Technological developments during the last decades have resulted in devices with diverse mechanisms of action, modes, and place of implantation, as well as support capabilities. It is possible to provide partial or complete ventricular function support for one or two ventricles. MCSDs can be used as temporary interventions while a long-term therapeutic option becomes possible (e.g., transplantation or long-term circulatory support) or there is myocardial recovery, or as a long-term support therapy. Devices can be classified according to the type of flow provided–pulsatile or continuous flow, or regarding their pump mechanism of action–centrifugal or axial. Other modes of classification include short-term vs. long-term, mode of implantation (percutaneous vs. surgically implanted), location (intracorporeal vs. extracorporeal), and the ventricle supported (left, right, or biventricular).2 The details about mechanisms of action, placement, and selection of MCSDs are beyond the scope of this chapter. We will briefly describe the most commonly used contemporary devices available (Fig. 17-1).
(A) HeartMate® II Left Ventricular Assist System (LVAS; Heart Mate II, Heart Mate 3 and St. Jude Medical are all trademarks of St. Jude Medical, LLC or its related companies. Reproduced with permission of St. Jude Medical, ©2018. All rights reserved). (B) HeartMate® ...
National Drug Research Institute, Curtin University, WA.
Are you unsure how to start a conversation about alcohol use with teenagers? Or have questions about the most effective way to handle this topic? Informed by the latest research on prevention strategies, this webinar will provide practical advice that will be valuable for parents, school staff and others working with teenagers.
The webinar will:
- Provide an overview of effective strategies for preventing teenage alcohol use;
- Discuss effective ways to approach conversations about alcohol among young people;
- Provide suggestions for navigating common challenges;
- Provide the opportunity to ask your own questions.
This webinar was developed by Professor Steve Allsop at the National Drug Research Institute, Curtin University, and informed by review of the research evidence on this topic. | <urn:uuid:3d0ca987-6653-4ca6-8b2d-b27db44f9f70> | CC-MAIN-2017-51 | https://positivechoices.org.au/teachers/webinar-talking-teens-alcohol | s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948517350.12/warc/CC-MAIN-20171212153808-20171212173808-00430.warc.gz | en | 0.895436 | 160 | 2.5625 | 3 |
Rate of whooping cough among children is on the rise
Published 26/08/2014 | 02:30
THE number of children contracting whooping cough is increasing, with the disease commonly affecting young infants who suffer a severe illness.
There is strong evidence to suggest that waning immunity among older children and adults, who were vaccinated against the disease, may be causing the increase.
Whooping cough (pertussis) is highly contagious and the Irish figures are in line with international trends.
Young children who get whooping cough can require a prolonged spell in hospital, according to a new study by doctors at Temple St Children's Hospital in Dublin.
They studied a number of admissions to the hospital.
"Reducing transmission from known infected patients still plays a vital role in controlling the spread of disease," they reported in the Irish Medical Journal.
The number of cases notified in Ireland decreased in the early 2000s. Between 2003 and 2008, 40 to 104 cases were notified annually with infants suffering the highest incidence of illness and death.
"Figures doubled from 2010 to 2011 and doubled again from 2011 to 2012," reported the Temple Street study.
There were 18 laboratory confirmed cases diagnosed at Temple St and 15 of these were infants under six months. The infants with whooping cough ranged from 36 to 96 days old and had a median age of 44 days.
Ten infants had documented exposure to a sick member of their family - most commonly the child's mother or older siblings with similar respiratory symptoms, often with prolonged cough.
"Patients presented to the hospital as a result of increased severity of symptoms," reported the study. They spent from four days to 13 days in hospital. "Calculating for a cost of €800 for a night on a general ward and €1600 for a night in intensive care, the total cost of admissions to the hospital was in excess of €90,000, with an average cost of €6,450 per patient."
The disease is known for uncontrollable, violent coughing which can make it difficult to breathe. A bout of coughing is followed by a need to take in a deep breath which results in a 'whooping' sound. The disease can be fatal, especially in babies under 12 months. Immunisation is the best prevention strategy.
The whooping cough vaccine is currently given in Ireland as part of the 6-in-1 vaccine. Apart from whooping cough, this vaccine protects against diphtheria, tetanus, Haemophilus influenzae type b, hepatitis B and polio. Three doses of the 6-in-1 vaccine are usually given at two, four and six months of age. A fourth dose is recommended at four or five years.
Getting vaccinated while pregnant may help to protect the unborn baby from developing whooping cough in the first few weeks of life as the immunity is passed from mother to baby.
A graphing calculator isn’t a necessity for everyone’s mathematical needs. For many students, they just want the best scientific calculator they can get. Here are two best scientific calculators that we will discuss. They are TI-30X IIS vs TI-30X. We will find out the one which is really best for the students in mathematics or even science.
The TI-30X IIS comes alongside the TI-30X IIB; the final letter of the name tells you the power source: S for solar powered and B for battery powered. They have been around since the late 1990s. The most important part of the TI-30X II's name is the "II," which tells you that this is a two-line calculator. This means that students can enter their work on one line, press enter, and see the answer on a different line, with their work still displayed. If a mistake was made, pressing up will go back to that work to allow editing. See also the comparison between the Casio fx-9750GII and the Texas Instruments TI-84.
The TI-30X II is an advance over the earlier one-line version of the calculator, but it is no longer the best calculator in this series. In addition, special trigonometric values are returned as decimal estimates rather than exact radicals. That behaviour is still the standard on almost all calculators at a similar price; the TI-30X Multiview, however, will give you the exact answer.
The newer TI-30X Multiview has not just two but up to four display lines, each capable of displaying a problem and its answer. Its greatest strength is that everything can be entered exactly as it appears in textbooks. It also has the distinct advantage of returning exact results for trigonometric functions, which makes it the only ACT-legal calculator produced by Texas Instruments with abilities of the graphing calculator line.
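To see what the decimal-versus-exact distinction means in practice, here is a quick illustration in Python (standing in for the calculators; sin 60° is just an example value): a two-line model reports a rounded decimal, while an exact-mode display would show the symbol √3/2 for the same quantity.

```python
import math

# What a two-line TI-30X II style display reports for sin(60 degrees):
decimal_estimate = math.sin(math.radians(60))
print(decimal_estimate)   # a rounded decimal, 0.866025...

# The exact value an exact-mode display would present symbolically is
# sqrt(3)/2; numerically the two agree.
exact_value = math.sqrt(3) / 2
print(math.isclose(decimal_estimate, exact_value))  # True
```

The decimal is perfectly usable for arithmetic; the pedagogical difference is only in what the student sees on screen.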
The TI-30X II can review previous entries to look for patterns; its two-line display shows an entry on the top line and the result on the bottom line. The TI-30X is different: its four-line display allows you to enter more than one calculation on the same screen.
With the TI-30X II, students can easily browse a table of (x, y) values for a given function, automatically or by entering a particular x value. The TI-30X, for its part, can perform advanced scientific functions, recognize π as a symbol in radian mode, and provide a menu that lets you choose the settings appropriate for your calculation needs. It can also display scientific notation with proper superscripted exponents and show output in scientific notation.
The TI-30X II calculator handles adding, reducing, multiplying, and dividing fractions in the traditional numerator/denominator format. The MathPrint feature, by contrast, displays expressions, symbols, and fractions as they appear in textbooks.
TI-30X IIS vs TI-30X
| | TI-30X IIS | TI-30X |
| --- | --- | --- |
| Key Features | More for the money with this high-quality product; offers premium quality at outstanding savings; excellent product | Catalog publishing type: scientific calculators; complex number calculations; compliance standards: RoHS compliant |
There is a clear winner in this competition, and it is the TI-30X Multiview. Some teachers may require the old-school TI-30X II because they do not want students to have a calculator that makes fractions, radicals, or trigonometry so easy. However, if you are given a choice, it is difficult to see why anyone would pass on the Multiview. Given that there is virtually no additional cost, it offers an enormous added advantage in math and science classes.
Horsemeat Poses Serious Risks to Human Health
The Safeguard American Food Exports (SAFE) Act would prohibit the slaughter of horses in the United States for human consumption, as well as the export of live horses for the same purpose. Please visit AWI’s Compassion Index to urge legislators to cosponsor this legislation.
"The permissive allowance of such horsemeat used for human consumption poses a serious public health risk."1
The US Food and Drug Administration currently bans the presence of 379 common equine drugs in animals slaughtered for human consumption. However, there is no procedure in place to ensure that American horses, sold to slaughterhouses and killed for human consumption, are free of these FDA-banned substances. When horses are sold, especially through an auction, there is no required transfer of information regarding the substances they received during their lifetime. Therefore, there is no mechanism in place to ensure horses frequently bought at auction by killer buyers have not been given dangerous substances before they become part of the food chain.
Horses are routinely given substances that are dangerous to humans. Most American horse owners do not imagine that their horses may someday be slaughtered for human consumption, and almost universally give their horses medications, antibiotics, ointments, wormers, and other substances labeled "not for animals intended for human consumption." These substances may remain in the body for long periods of time.
A study published in May 2010 in the journal Food and Chemical Toxicology found that substances routinely given to American horses cause dangerous adverse effects in humans. One commonly used anti-inflammatory drug, phenylbutazone (bute), can be lethal if ingested by people. The most serious effect of bute on humans is bone-marrow toxicity, leading to agranulocytosis (failure to produce white blood cells, causing chronic infections) and aplastic anemia (insufficient production of red and white blood cells and platelets). Similar blood conditions such as leucopenia, hemolytic anemia, pancytopenia, and thrombocytopenia may also occur in people who consume bute. The National Toxicology Program has determined that bute is a carcinogen. For these reasons, the FDA bans this substance for human consumption.
The February 28, 2010 Paulick Report published a study revealing that more than 9 out of 10 racehorses are commonly administered bute before they race. Racehorses are frequently shipped to Mexico and Canada to be slaughtered for human consumption when their performance flags, often within days or weeks of receiving their last dose of bute. Any consumer of this meat, which can be ground together with beef and offered to consumers without proper identification, could be unwittingly ingesting banned substances, with potentially lethal results.
The European Union has a policy prohibiting importation of the meat of any horse who has ever received bute. Nitrofurazone, the most common wound ointment given to American horses, is also prohibited for use on any horse whose meat is shipped to the European community.
The United States needs to shut down the horse slaughter channels that currently put consumers at risk, and ensure that meat from American horses is not jeopardizing the health and lives of consumers.
Those promoting horsemeat consumption claim horsemeat is leaner (and therefore, supposedly, healthier) than beef. What they fail to point out is that, unlike cattle, horses are not raised for meat, and are given hundreds of legal and illegal drugs rendering their meat unsafe for human consumption in the United States and abroad. However, because of confusing and conflicting US and foreign laws, horse meat slips through the regulatory cracks and is consumed overseas by unsuspecting diners. The diagram shows just a few of the banned and dangerous drugs that consistently end up in horse meat and on people's plates.
1Dodman, N., et al. 2010. Association of phenylbutazone usage with horses bought for slaughter: A public health risk. Food Chem Toxicol. 48(5):1270-4. doi: 10.1016/j.fct.2010.02.021
Learn more about horse slaughter. | <urn:uuid:493cb290-4a8c-4254-a8b9-a3ea7f02df3c> | CC-MAIN-2019-39 | https://awionline.org/content/safeguard-american-food-exports-safe-act | s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514575844.94/warc/CC-MAIN-20190923002147-20190923024147-00480.warc.gz | en | 0.948441 | 848 | 2.84375 | 3 |
Details about Glencoe Literature: The Reader's Choice, Course 7, British Literature, Student Edition:
"The two most engaging powers of an author are to make new things familiar, and familiar things new." - Samuel Johnson (1709-1784). Glencoe Literature for 2002 also "makes new things familiar and familiar things new." Designed to meet the needs of today's classroom, Glencoe Literature has been developed with careful attention to instructional planning for teachers, strategic reading support, and universal access that meets the learning needs of all students.
Rent Glencoe Literature: The Reader's Choice, Course 7, British Literature, Student Edition 1st edition today. Every textbook comes with a 21-day "Any Reason" guarantee. Published by Glencoe/McGraw-Hill. | <urn:uuid:5beb00c9-4d6e-477f-8958-9ad7f404bf07> | CC-MAIN-2016-18 | http://www.chegg.com/textbooks/glencoe-literature-the-reader-s-choice-course-7-british-literature-student-edition-1st-edition-9780078251115-0078251117 | s3://commoncrawl/crawl-data/CC-MAIN-2016-18/segments/1461860111396.55/warc/CC-MAIN-20160428161511-00014-ip-10-239-7-51.ec2.internal.warc.gz | en | 0.91133 | 164 | 2.609375 | 3 |
Running head: TRANSFORMATIONAL EFFECTS
The Renaissance saw significant changes that have shaped the modern world beyond Europe, where it started. Human life changed socially, economically, politically and technologically from this time (Fromherz, 2009). For instance, religion evolved and change...
| <urn:uuid:7619b51c-29c1-45d7-89b4-6716123126a1> | CC-MAIN-2020-50 | https://www.studypool.com/discuss/19886852/what-transformational-effect-if-any-did-the-renaissance-have-beyond-europe | s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141191692.20/warc/CC-MAIN-20201127103102-20201127133102-00672.warc.gz | en | 0.771951 | 77 | 3.046875 | 3 |
Why Use Fluoride for Children?
Cavities are the most common dental problem in the United States. Fluoride is one of the most effective and safest ways to prevent cavities for children.
How Does Fluoride Work?
Our mouths contain bacteria that feed on the sugars in the foods and drinks we consume. The bacteria produce acid that can wear away tooth enamel, and over time this erosion leads to cavities.
Fluoride strengthens teeth, making them more resistant to acid. It not only lowers the risk of cavities but can also help reverse the early signs of decay.
How Children Can Benefit From Fluoride
One of the simplest ways for children to benefit from fluoride is drinking fluoridated tap water. The water contains healthy levels of fluoride, which can reduce tooth decay in children by 18 to 40 percent.
Most communities in the U.S. have access to fluoridated tap water. You can also find fluoride from children-friendly fluoride-based toothpaste. Fluoride supplements are also beneficial.
Who Should Get Extra Fluoride?
According to the American Academy of Pediatric Dentistry, children aged between 6 months and 16 years should use fluoride every day. If you don’t have access to fluoridated tap water, consult your pediatric dentist for the appropriate fluoride supplements that are suitable for your children’s ages. These are available as either tablets or drops, which are taken each day orally.
The dentist typically prescribes the amount of fluoride based on the age of the child, as well as the amount of fluoride in the drinking water for your area. Your child may also find sufficient fluoride from brushing and from foods prepared with fluoridated tap water.
It’s also worth noting that too much fluoride can lead to dental fluorosis. Children are more prone to dental fluorosis than adults because their teeth are more sensitive to the substance. The condition doesn’t affect the teeth that have already erupted, though.
Don’t hesitate to consult your dentist if you notice unusual changes in your child’s teeth. You want to deal with any dental issues early before they progress into more serious and costly problems. | <urn:uuid:dc6ff14d-20d9-4f09-85ac-de5611e5ca1a> | CC-MAIN-2022-21 | http://sebastiansmilespediatricdentistry.com/2020/01/01/fluoride-for-children-why-use/ | s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662545875.39/warc/CC-MAIN-20220522160113-20220522190113-00002.warc.gz | en | 0.952952 | 459 | 3.28125 | 3 |
As many as 1 in 8 students struggles with depression. You may wonder what the difference is between being seriously bummed out or sad and being clinically “depressed.”
Everyone has rough times. Big events like divorce, a death, moving, or a breakup can trigger depression. Depression can also occur without an obvious reason. Distinguishing a period of sadness from true depression can be hard and should be left to a professional. Persistence is a key sign – "normal" sadness goes away. Pay attention to whether the person feels better within a few days or a week. When feelings of sadness continue for weeks, it's time to talk to someone.
If you or a friend can’t get past the sad or hopeless feelings, talk to a parent or trusted adult. Tell them how you are feeling and that these feelings have been persistent for a while. Physical symptoms, like recurring headaches or changes in sleeping patterns, can also accompany depression.
There is help and hope so you can feel better! Don’t hide it -- it’s not your fault or anything you’ve done. It’s a medical issue, just like healing a broken bone. Please talk to a trusted adult, and seek help with a professional.
Top Depression Warning Signs
- Persistent sad, anxious or empty mood
- Insomnia or sleeping too much
- Significant change in eating habits
- Lack of interest in doing anything
- Lack of enjoyment in any situation
- Inability to make a decision
- Difficulty concentrating
- Feeling hopeless or worthless
- Recurring headaches or stomach aches with other symptoms
- Waking up early and unable to go back to sleep
Recognizing and understanding depression is vital. Click here for resources on depression and ways to cope and get help. | <urn:uuid:53e4eb1e-f2b7-4071-8a5f-af1906dffad7> | CC-MAIN-2018-13 | http://www.michigan.gov/ok2say/0,5413,7-309-65191_67343---,00.html | s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257648431.63/warc/CC-MAIN-20180323180932-20180323200932-00277.warc.gz | en | 0.921826 | 380 | 3.546875 | 4 |
By Brian A. Collins, published Feb 11, 2018 07:07:51

When you want to send files to another person or to a third party, you often need a server capable of transferring the files to them.
This is often referred to as a “server in a box” or “solution in a bottle.”
If you have a server in a container, that means that you can connect it to a server anywhere in the world, and it can be used anywhere you want, including the cloud.
If you want a server to be a solution in a pod, you need to make it run in containers.
For example, you might want a solution that can serve files that come from different places in the cloud, such as the cloud from your PC or mobile device, and from a local machine in your home.
You can have a solution installed on a server, on a mobile device or even in the box.
In some cases, you can create a virtual server that can be installed on top of the solution in containers, and this can give you the flexibility to run solutions in containers on a variety of platforms.
However, if you want the flexibility that you need, you should first consider using an open source solution.
Open source solutions are available for a variety to provide the flexibility of using a solution on a wide range of platforms, including cloud, physical, and virtual.
However you choose to use your solution, you will find that there are several common features that you will need to be aware of:If you’re running a solution hosted on a physical server, then you will want to consider using a virtual host.
Virtual hosts are used to serve a virtual machine, and they can be accessed by a client machine, as opposed to a virtual IP address.
For example, a virtual computer might be hosted on the same physical server that hosts your solution.
However in this case, you may want to have a virtual desktop or a virtual terminal that can run the solution on.
If so, then virtual hosts will have a few additional features, as described in this section.
If you want your solution to be able to serve files in different locations on different servers, you must configure the IP addresses that you use to run your solution in different environments.
For instance, if your solution is hosted on an Amazon AWS instance, and you want it to be accessible from a different AWS instance in your organization, then the server’s IP address will need be changed.
The same applies for other cloud hosting providers.
For some solutions, this may be easy.
For others, it may require a bit more effort.
The steps that you take to manage these types of changes can vary.
For some solutions that you may need to change are:In a general sense, you want one server that runs the solution for you.
If that server is a physical machine, you could simply make it a VM that runs your solution from that physical machine.
For a virtual environment, you would need to configure that virtual machine to run a VM on the virtual server.
For the latter, you’d also need to ensure that the VM is running on the right hardware.
In this section, we will cover how to set up virtual machines, and virtual machines that run virtual applications. You will learn how to configure the environment that your solution will run on, along with some tips on making sure your virtual machine runs as expected.
The first thing you’ll need to do is make sure that your virtual server is running a stable version of Linux.
A stable version is a version that can handle all the changes that you want.
This means that your server is up to date and stable.
If it’s not, you’re missing out on a lot of benefits from a stable Linux installation.
If the Linux distribution is not stable, then your Linux installation will not run on your solution at all.
When you configure a virtual container, you’ll want to ensure the virtual machine is able to communicate with the solution.
In the example below, I have created a container that will be installed in a physical container.
When the container is running, the virtual disk will be mounted on that physical server.
The virtual machine will have to be configured to communicate over SSH with that physical box.
The first thing we need to set is the username and password that the container needs to have access to the SSH server.
In the example above, the username is myusername, and the password is password.
If we have a user named “myusername” in the container, we can connect to that user from within the container.
We can also connect to the container by using the ssh command.
Now we need the username, password, and environment variables that we set up earlier.
In our case, we’ll set up the environment variables as follows:Note that the environment variable is used for all of the above commands, but it doesn’t need to match any of the environment strings that we’ve set up previously.
We need the SSH environment variables to | <urn:uuid:14857edd-3152-4ecf-be76-99bf737e0814> | CC-MAIN-2021-39 | https://iraqjobz.com/2021/08/02/why-do-we-need-a-codecanyon/ | s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057371.69/warc/CC-MAIN-20210922163121-20210922193121-00411.warc.gz | en | 0.942423 | 1,077 | 2.640625 | 3 |
Sol LeWitt in the Negev
The Negev is considered a stone desert, where the elements have formed figures out of the sandstones and shifted the face of the desert by erosion; that is, by taking something away, other things are made visible.
In 1980 Sol LeWitt created a wall drawing of nine geometric figures on a black wall. He filled the spaces with white crayon to give them texture, sparing areas of black that became the geometric figures. The idea of a space that appears as a form only through its surroundings, and that is not merely visible but carries the memory of an underlying reality, is a concept that can be developed differently today.
I used this idea by filling a space with an image taken in the Negev out of which appears an underlying image of another space photographed in the desert as well; it is in the shape of a geometric figure. Also here the space of the first image allows some space for that underlying image and determines its visibility. | <urn:uuid:72b82e0d-6c4b-4248-bf15-0f1af1a47d6d> | CC-MAIN-2021-04 | https://normadrimmer.com/events/update-20/ | s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703507045.10/warc/CC-MAIN-20210116195918-20210116225918-00666.warc.gz | en | 0.964649 | 203 | 2.96875 | 3 |
The Northern Territory Centre for Disease Control is urging people not to handle bats because they may be carrying a rabies-like disease.
A television program recently featured someone rescuing a bat from a barbed wire fence with no protective clothing.
The centre's immunisation project officer, Chris Nagy, says contact with a bat's saliva can transfer the potentially deadly lyssavirus to humans.
"What we encourage people to do is if they are bitten or scratched by a bat, they should first wash the wound with soap and running water, apply some antiseptic, and then attend a health facility for review, which is actually going to involve immunisation.
"There's only been two cases of lyssavirus in Australia, and both those cases have been fatal.
"The [symptoms are] usually central-nervous system symptoms, so you might have blurring of your vision, be unable to walk, it might be paralysis." | <urn:uuid:0642c6b1-6418-438e-90ed-b2eafb1e2656> | CC-MAIN-2015-32 | http://mobile.abc.net.au/news/2009-01-22/bat-rescuers-warned-of-fatal-disease/2586498?pfm=sm | s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042985647.51/warc/CC-MAIN-20150728002305-00294-ip-10-236-191-2.ec2.internal.warc.gz | en | 0.949216 | 195 | 2.984375 | 3 |
Will a school of fish accept a robot fish as its leader? Seems impossible, but experiments at the New York Polytechnic Institute have revealed otherwise.
Using golden shiner fish and an artificial water body acting like a river, researchers tested whether the shiners would respond to the presence of an artificial fish, despite its size. With the robot fish present, the live fish lined up behind the robot and slowed their swimming.
The experiment raised an important question: why did the live fish accept the artificial fish, let alone as their leader? Even with this question open, researchers were confident that robot fish could be used to lead schools of fish away from danger, such as environmental disasters. It could also lead to new aquaculture practices, with robot fish used as shepherds in the open ocean rather than in unhealthy, enclosed spaces.
via The VergeFiled Under: Technology News | <urn:uuid:7a1b9e0d-ed48-45ae-8a86-fcade7d7819a> | CC-MAIN-2018-51 | https://www.geeky-gadgets.com/school-of-fish-follows-a-robot-fish-in-an-experiment-25-02-12/ | s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376828018.77/warc/CC-MAIN-20181216234902-20181217020902-00300.warc.gz | en | 0.955017 | 188 | 3.171875 | 3 |
Dry eyes result from insufficient moisture on the eyes, caused by inadequate quality or quantity of tears. When the eyes are not lubricated enough, they become irritated, uncomfortable and unprotected.
Low quantity of tears:
Decreased quantity of tears is a result of aging, hormones, some medical conditions, certain medications, eye surgery, and damage to the tear gland, just to name a few.
Poor quality tears:
Your tear film consists of three basic layers: fatty oils, water and mucus. These tear film layers keep the surface of your eyes clear, smooth and protected. Problems with any of these layers can produce poor-quality tears, which allow dry spots to form on your eye, leading to irritation.
Typical causes of inadequate tears:
- Certain medical conditions, including diabetes, rheumatoid arthritis, lupus, scleroderma, Sjogren’s syndrome, thyroid disorders and vitamin A deficiency
- Certain medications, including antihistamines, decongestants, hormone replacement therapy, antidepressants, and drugs for high blood pressure, acne, birth control and Parkinson’s disease
- Laser eye surgery, though symptoms of dry eye related to this procedure are usually temporary
- Tear gland damage from inflammation or radiation
Typical causes of increased tear evaporation:
- Wind, smoke or dry air
- Blinking less often, which tends to occur when you’re concentrating, for example, or while reading, driving or working at a computer
- Eyelid problems, such as out-turning of the lids (ectropion) and in-turning of the lids (entropion)
Typical causes of imbalance in tear composition:
The most common imbalance occurs when the tears don't contain enough oil, because of blocked oil glands near the base of the eyelashes. This is also the most common cause of blepharitis.
Blepharitis is an inflammation of the eyelids. It commonly occurs when tiny oil glands near the base of the eyelashes become clogged, leading to red, irritated eyes.
Dry Eye can be a very painful condition. Be Eye Wise and schedule an appointment with the Dry Eye & Blepharitis Center at Northwest Eye. Because the more you know, the better you see. | <urn:uuid:0c45b5a5-17a5-40af-abdc-5f3a9f906156> | CC-MAIN-2018-34 | http://nweyeclinic.com/conditions/dry-eyes/dry-eye-causes/ | s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221212768.50/warc/CC-MAIN-20180817182657-20180817202657-00102.warc.gz | en | 0.917306 | 499 | 2.8125 | 3 |
Shopping and food safety
When you order food by mail or online, you should be aware of food hygiene standards and make sure the food is safe to eat. Food hygiene is also important when buying packaged or unpackaged food in shops and supermarkets.
When you're shopping for food, it's important to:
- notice overloaded or overly-warm chilled or frozen food cabinets
- check shop staff wash their hands carefully between handling raw and cooked foods
- check shop staff clean or change their utensils between handling raw and cooked foods
- report unhygienic practices to the shop management or environmental health department in the local council
To make sure the food you buy is safe:
- don't buy any packets that have been damaged or opened
- don't buy food from counters where cooked and raw meat is not separated
- put chilled and frozen food into the right storage as soon as possible to avoid defrosting and spoilage
Your rights when buying food online
When you buy food online in the UK, the business must:
- give you clear information about the goods or services offered
- send you confirmation after you buy
- give you a cooling-off period of seven working days to cancel your order, unless you ordered something perishable
If you buy products from businesses in other European Union (EU) countries, your rights are similar to your rights in the UK. Outside the EU, your rights will vary in different countries.
Complaints about buying food
When a supermarket delivers food you didn't order
If you order a product that isn't available, the supermarket might substitute another product. Before you order, check the supermarket's policy on substitutions.
When you order goods but you're unhappy with the service
To complain, you should write to the business and give them a chance to put it right. When you write to complain you should include:
- date of advertisement, catalogue or website information where you found out about the product
- date you ordered
- information about what you ordered
- amount you paid
- how you paid
- any reference numbers, such as order or customer number
- reason for complaint
- any other relevant information
- how you want them to resolve the situation
Local trading standards services have powers under the Consumer Protection (Distance Selling) Regulations 2000 to apply for an injunction against any person or business that seems to have broken the regulations. They must investigate any complaint about breaking the regulations.
If your complaint is about food safety or labelling issues and you are not satisfied with the response you get, you could contact the local council where the company is based.
Complaining about advertising for food
If you think advertising about food is misleading, contact the environmental health office in the local council where the business is based.
Advertising rules vary according to the country where the business is based. Claims about products could be less reliable outside the European Union (EU).
Packaging and delivery
Ordering meat and dairy products from abroad
You can order meat and dairy products within the EU. From countries outside the EU, you shouldn't order meat and dairy products, including:
- canned meat
- dried meat
Getting food delivered by post or courier
If foods that need refrigerating (such as fish, meat products, cooked foods, many dairy products and ready-prepared salads) are sent by post or courier, they should be delivered as quickly as possible, ideally overnight, and they should be kept cool until delivery.
When you place an order, make sure you know when to expect delivery. If foods that need refrigerating are delivered late, this might mean they haven't been kept cool enough. For this reason, it's better not to accept food after the intended delivery time printed on the package.
How food sent by post should be packaged
Foods that need refrigerating (such as fish, meat products, cooked foods, many dairy products and ready-prepared salads) must be kept cool while they're being transported. Sometimes they'll be packed in an insulated box with a coolant gel, or in a cool bag.
If you order food that needs refrigerating and it will be travelling a long distance, check with the supplier what they do to keep it cool until delivery.
Products that are vacuum-packed, such as smoked fish, should still be kept cool.
When food packaging is damaged
Food should be sent in packaging that is strong and intact. If a pack is open, damaged or leaking, it's best not to eat the food. You might be able to reject the delivery. Contact the supplier to tell them.
Delivery of food orders from supermarkets
Often your shopping from a supermarket will be delivered in a refrigerated van and this is good practice, because it's an effective way to keep food cool.
But it isn't always essential for food to be refrigerated while it's being transported, as long as it's delivered quickly.
If you're concerned about the way your food is delivered, contact the supermarket.
If foods that need refrigerating aren't kept cool enough during delivery, it could make you ill. If this type of food arrives and it isn't cold, you shouldn't eat it.
You might be able to reject the delivery, depending on the terms of your contract with the supplier.
Before you order, you should check the food company's delivery policy.
Food safety law
NI food hygiene and safety law applies when food is sold to customers in Northern Ireland.
This protects consumers from buying food unfit for eating through poor hygiene or safety standards.
Under food safety law, businesses must make sure that all food and feed placed on the market is safe, that its quality is what consumers would expect and that it is not labelled in a false or misleading way.
Food safety officers carry out inspections to check that food is stored and prepared safely and check that food meets safety, composition and nutrition labelling standards (for example, labelling of allergens, use by dates, nutritional and compositional information).
Labelling requirements for food products
Generally, food products marketed within NI must be labelled in a way that's easy to understand, with print that's clear enough to read.
All prepacked food must have a food label that includes certain information.
All food is bound by general food labelling requirements and food placed on the market in Northern Ireland must follow those rules.
Any labelling provided must be accurate and not misleading. The label must include:
- name of the food
- list of ingredients
- allergen Information
- Quantitative declaration of ingredients (QUID)
- net quantity
- 'use by' or 'best before' date
- any special instructions about how to store or use the product
- name and address of the manufacturer
- country of origin or place of provenance
- preparation instructions
- alcoholic strength by volume for beverages containing more that 1.2 per cent volume of alcohol
- nutritional declaration
Further information is available at:
Providing labelling information
There's no legal requirement to give labelling information online or in a catalogue - this will depend on the policy of the supplier.
'Use-by' and 'best before' dates
It is important to understanding 'best before' and 'use-by' dates on food labels when shopping.
Selling on food products you bought from home
If you buy food online or by mail order, you can sell this food to other people if the products meet all UK food law for labelling and food hygiene.
If you're planning to sell food, you may need to keep to certain food law requirements, such as registering your business with your local council. If you're not sure, ask the council for advice. If food products don't meet UK law, it could be an offence to sell them or give them away. | <urn:uuid:db2454f4-2cee-4ddb-b829-fe9fc2fc0fdf> | CC-MAIN-2023-14 | https://www.nidirect.gov.uk/articles/shopping-and-food-safety | s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945381.91/warc/CC-MAIN-20230326013652-20230326043652-00418.warc.gz | en | 0.95327 | 1,620 | 2.84375 | 3 |
PURIFICATION STRATEGIES FOR MICROBIAL PHYTASE
Aratrika Chatterjee, Angela Maria Mathew, Arvind George, Pallavi Sengupta, Priyanka Pundir, Fr. Jobi Xavier and Erumalla Venkatanagaraju *
Department of Life Sciences, CHRIST (Deemed to be University), Hosur Road, Bengaluru - 560029, Karnataka, India.
ABSTRACT: Phytase catalyzes the release of inorganic phosphate from phytic acid. Monogastric animals, which lack phytase, are incapable of digesting the phytate obtained from plants; it is excreted, and the resulting accumulation of phosphorus in the environment in the form of phytate has a detrimental effect. To combat this problem, researchers have focused on the production and purification of phytase from different microbial sources; the enzyme converts phytate into a useful form of phosphorus that facilitates plant growth. This review summarizes the various methods adopted for the isolation, production, and purification of phytases from various sources.
Keywords: Phytase, Inorganic phosphate, Phytic acid, Environment, Production, Purification
INTRODUCTION: Phytase is an enzyme that produces mineral residues and inorganic phosphate from phytic acid (phytate), which is the primary phosphate storage form in plants. Inorganic phosphate is a key mineral element for the growth, reproduction, and metabolism of animals 1. The bioavailability of minerals and proteins was enhanced by the supplementation of phytase 2, 3. This reduced the risk of eutrophication by reducing the feed additive phosphate 4. Due to extensive application of phytase, it has attracted the attention of scientists and entrepreneurs in the areas of nutrition, environmental protection and agriculture 5, 6. For commercial use, microbial production of phytase has been proven more effective than sources like plants and animals, due to high production yields and acid tolerance.
However, the amount of phosphate released may differ based on its source and physicochemical parameters 7, 8, 9, 10, 11. There are different techniques for phytase production including solid-state and submerged fermentation process 12, 13, 14, 15, 16.
Screening and Production of Phytase: Phytases have been produced from many sources, mostly microbial sources like bacteria and fungi. Phosphate-solubilizing bacteria were isolated from aerobic soil after serially diluting the soil samples. Three different populations were used for isolation: rhizosphere, non-rhizosphere and endophytic populations. For the rhizosphere population, 3 g of rice plant roots along with adhering soil was transferred to a conical flask containing 9 ml of sterile distilled water, and the contents were subjected to vortexing. A further 10-fold serial dilution was performed, and 0.1 ml aliquots were spread on selective media plates and incubated at 28 ºC. For the endophytic population, fresh roots were taken, sterilized with 70% alcohol for 5 min and treated with Clorox for 30 sec.
These surface-sterilized roots were further homogenized using mortar and pestle, and the total plate count method was used to determine population count 17.
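The plate-count arithmetic behind such 10-fold dilution series can be sketched as follows; the colony count used here is an invented example, not data from the study:

```python
# Back-calculate viable counts from a 10-fold serial dilution plate.
# The colony count below is illustrative only.

def cfu_per_ml(colonies: int, dilution: float, plated_volume_ml: float) -> float:
    """CFU per ml of the undiluted suspension.

    colonies         -- colonies counted on one countable plate
    dilution         -- dilution of the plated aliquot (e.g. 1e-3 for 10^-3)
    plated_volume_ml -- volume spread on the plate (0.1 ml in the protocol above)
    """
    return colonies / (dilution * plated_volume_ml)

# e.g. 42 colonies on a 10^-3 plate from a 0.1 ml aliquot:
print(cfu_per_ml(42, 1e-3, 0.1))  # ≈ 4.2e5 CFU/ml
```

Dividing by the grams of root tissue in the original suspension converts the result to CFU per gram.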
It was also found that the acidic thermophilic bacterium Geobacillus stearothermophilus produced microbial phytase; it was isolated by inoculating 5 ml of hot spring water into 50 ml of phytase production medium and incubating at 65 °C for 3 days. After this, the sample was streaked onto phytase growth media and incubated at 65 °C for 2 days. Bacterial colonies producing phytase were found to have a clearance zone around them 1. The phytase enzyme was also produced from Schizophyllum commune by providing soya bran, rice, coffee husk, rye, barleycorn and citric pulp as substrates in solid-state fermentation. The effect of different washes on phosphorus production was studied: WB (wheat bran without wash), WB1 (one wash), WB2 (two washes) and WB3 (three washes). The washes were done with ultra-pure water at 50 ℃ and the material was later filtered using Whatman No. 1 filter paper. The bran was further dried at 60 ℃ for 4 h, 80 ºC for 4 h and 100 ºC for 15 min to reduce microbial contamination 18. The moisture content was set to 50% w/w. The flasks were inoculated after pretreatment with pellets of S. commune, and the contents were incubated for 72 h at 30 ºC. After fermentation, the fermented material was collected and checked for phytase activity.
A special kind of phytase, a thermostable beta-propeller phytase, was produced from Bacillus licheniformis. The collected soil samples were diluted and spread onto a phytase screening culture medium. The colonies which showed a clear zone were screened again using a submerged fermentation medium. The sterile production medium was inoculated with 12 h old cultures grown on 2% LB medium and incubated at 55 °C for 24 h. The fermented broth was cleared by centrifugation at 10,000 rpm for 10 min and assayed to check phytase activity 19. Phytase was also produced using 3 strains of Bacillus subtilis. The collected soil sample was dissolved in 0.9% saline and filtered using Whatman filter paper. 100 µl of the filtered sample was spread on a phytate screening agar medium and incubated for 2 days at 30 °C.
Bacterial colonies that produced phytase had clearing zones around them. Those with the largest clearing zones were re-plated onto phytase-producing media. A single colony was streaked on LB media and incubated overnight at 30 ºC. A loop of this culture was then transferred into phytase-producing medium and incubated at 30 ºC in an orbital shaker at 600 rpm for 6 days, after which the culture was centrifuged and the supernatant was used as the enzyme source to study phytase activity 20. Phytase was also extracted from an Aeromonas species isolated from a soil sample 21.
Phytase was also produced from Anoxybacillus sp. MHW14, isolated from a hot spring in the Chiang Mai area of Thailand and stored in 15% (v/v) glycerol at -80 °C. It was cultured in nutrient broth and streaked to a single colony before use, then used for production in modified Atlas medium containing 0.1% ammonium sulphate, 0.01% magnesium sulphate, 0.1% D-glucose, 0.01% sodium citrate and 1.0% phytic acid. The liquid medium was adjusted to pH 7.0. Fermentation was carried out on a rotary shaker at 150 rpm and 45 °C for 24 h, and the culture broth was taken to determine phytase activity 22.
Phytase was also produced from Shigella by inoculating PSM broth and incubating at 37 ± 1 ºC and 120 rpm for 5 days, after which the broth was centrifuged for 10 min at 10,000 rpm. The cell-free supernatant was separated and tested for phytase activity. The reaction mixture consisted of 0.8 ml acetate buffer (0.2 M, pH 5.5, containing 10 mM sodium phytate) and 0.2 ml of supernatant. This was incubated for 30 min at 37 °C, and the reaction was stopped by adding 1 ml of 10% trichloroacetic acid. Then 0.5 ml of the assay mixture was taken in a fresh set of tubes and mixed with 4 ml of a 2:1:1 v/v mixture of acetone, 10 mM ammonium molybdate and 5 N sulfuric acid, plus 0.4 ml of citric acid (1 M).
The amount of free phosphate released was determined spectrophotometrically at 355 nm. One unit of phytase activity was defined as 1 µmol of phosphate produced per min per milliliter of culture filtrate under the defined assay conditions 23. Enteric bacteria from cow-dung also produce phytase.
Pure cultures were isolated from the different collected samples by the serial dilution method, followed by plating onto phytase screening medium (PSM) at 30 °C. Based on the results of qualitative and quantitative enzymatic analysis, two potent phytase-producing strains were selected. The pure cultures were isolated, screened for phytase production in the phytase production medium (PPM) at 37 °C under shake-flask culture, and analyzed for intracellular and extracellular phytase activity 2. Phytase was also isolated from Schizophyllum 18.
Phytase was also produced using straw mushrooms grown on various substrates such as soil, cereal grains and fruits. Nineteen fungal strains were isolated from soil using serial dilutions: 1 g of soil from the municipality of Rionegro (Antioquia, Colombia) was dissolved in 9 ml of sterile distilled water, and the 10-2, 10-3 and 10-4 dilutions were plated onto sterilized potato dextrose agar (300 g potato, 20 g dextrose, 20 g agar per liter) containing 100 µg/ml ampicillin, then incubated at 25 °C in the laboratory for about 1 week. Fungi growing on the agar plates were subcultured on fresh PDA medium until pure colonies were observed. Seven environmental isolates from various locations in Medellín (Universidad Nacional de Colombia Sede Medellín and the district of Belen) were collected in moist chambers made from 250 ml Styrofoam cups with a layer of wet filter paper at the bottom; grains (rice, wheat and a mix of wheat, barley and oat flakes) and fruits (lemon and orange) were used as substrates. These chambers were left open at the site of collection for about 1 day, then closed and incubated at room temperature.
Fungal growth was observed within 1 week of incubation. Isolation and purification were done on PDA plates. About 26 fungal isolates were obtained in pure culture by single-spore transfer onto PDA plates and stored at 4 ºC. The isolates were preserved by immersing the mycelium in 20% sterile glycerol and storing it in a -80 ºC freezer. The isolated fungi were further identified on the basis of morphological characteristics using various updated taxonomical keys 14.
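The serial-dilution plating described above implies the standard back-calculation of viable counts: the colony count on a plate is divided by the plated volume and the dilution factor. A sketch in Python with hypothetical plate counts (the cited work reports isolates, not CFU values):

```python
def cfu_per_gram(colony_count, dilution, plated_vol_ml):
    """Estimate viable count (CFU/g of soil) from a single countable plate.

    dilution:      dilution factor of the plated suspension (e.g. 1e-3)
    plated_vol_ml: volume of the diluted suspension spread on the plate
    """
    return colony_count / (plated_vol_ml * dilution)

# Hypothetical: 42 colonies on a 10^-3 dilution plate, 0.1 ml plated
estimate = cfu_per_gram(42, 1e-3, 0.1)  # ≈ 4.2e5 CFU/g
```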
Phytase was extracted from fruiting bodies with cold distilled water for 4 h, followed by centrifugation at 12,000 rpm for 15 min; the supernatant was collected and subjected to ultrafiltration. Ion-exchange chromatography was performed with NH4HCO3 buffer (pH 9.5). The fraction showing phytase activity was dialyzed against NH4OAc buffer and again subjected to ion-exchange chromatography. Further fractions with phytase activity were subjected to gel filtration by FPLC. The molecular mass of the purified enzyme was determined using SDS-PAGE and FPLC gel filtration. Various buffers of different pH values (3.0-9.0) were used to find the pH of maximum activity, and the reaction mixtures were incubated at 20, 30, 37, 45, 50, 60, 70, 80 and 100 ºC for 15 min. A 20.7% enzyme recovery was obtained with 34.6-fold purification. The phytase activity was found to be 3.11 U/mg. The molecular mass was found to be 14 kDa using SDS-PAGE and FPLC. The optimum pH was found to be 5.0 and the optimum temperature was 37 °C 18.
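Figures like the 20.7% recovery and 34.6-fold purification quoted above come from the standard purification-table arithmetic: fold purification is the ratio of specific activities, and recovery is the percentage of total activity retained. A sketch in Python (the crude-extract value below is a hypothetical back-calculation, not a figure from the cited study):

```python
def fold_purification(sa_purified_u_per_mg, sa_crude_u_per_mg):
    """Fold purification = specific activity after the step / crude specific activity."""
    return sa_purified_u_per_mg / sa_crude_u_per_mg

def percent_recovery(total_u_purified, total_u_crude):
    """Yield: total activity carried through the purification, as a percentage."""
    return 100.0 * total_u_purified / total_u_crude

# 3.11 U/mg purified vs. a hypothetical crude specific activity of ~0.09 U/mg
fold = fold_purification(3.11, 0.09)       # ≈ 34.6-fold
yield_pct = percent_recovery(20.7, 100.0)  # 20.7%
```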
A new phytase was produced from Aspergillus niger using citric pulp by solid-state fermentation. The pH was fixed using sodium citrate buffer containing ammonium citrate, potassium chloride, magnesium sulfate and zinc sulfate. The inoculum used was a pellet suspension of A. niger. The columns were incubated at 30 ºC for 97 h in a water bath, and the crude phytase obtained upon fermentation was centrifuged to remove the precipitate 25.
Rhizosphere soil samples collected during the rainy season were cultured for about 24-30 h at 30 ºC (for mesophilic isolates) in nutrient broth media containing 0.004% w/v calcium phytate, 0.05% w/v potassium chloride, 0.05% w/v MgSO4, 0.001% w/v ferrous sulphate and 0.001% w/v manganese sulphate as trace elements required for growth. Then 4 ml of culture was added to 50 ml of enzyme production medium, and the cultures were incubated at 30 °C for about 24-48 h in an incubator shaker. The culture contents were then centrifuged at 5000 g for 15 min at 2-4 °C.
The supernatant and the cell pellet were collected, and both were used to check phytase activity. For the fungal isolates, the supernatant was assayed for three consecutive days. To determine the effect of metal ions and inhibitors on the activity of the crude phytase, different metals and inhibitors, i.e., ferric chloride, magnesium sulphate, zinc sulphate, cobalt chloride, copper sulphate, sodium chloride, silver nitrate, barium chloride, mercury chloride, sodium nitrite, calcium chloride, lead nitrate, manganese chloride, urea, dithiothreitol, ethylenediaminetetraacetic acid, phenylmethylsulphonyl fluoride and polyethylene glycol, were all tested at a final concentration of 1 mM 8.
Purification of Phytase: After production from the source, phytase is obtained in crude form and has to be purified. There are several purification techniques for phytase, and they may vary for phytase isolated from different sources. The phytase obtained from Flammulina velutipes was subjected to SDS-PAGE analysis, which showed that the isolated phytase had a molecular mass of 14.8 kDa. Phytase activity was measured by incubating the enzyme solution with Tris-HCl buffer (pH 7.0); the assay was run at 37 ºC and stopped with the addition of 5% trichloroacetic acid. The released phosphate was measured at 700 nm after the addition of a colour reagent. The amount of enzyme required to release 1 µmol of Pi per min under standard assay conditions was defined as 1 unit of enzyme activity. Protein determination was done by the Bradford method using a protein assay kit with bovine serum albumin as the standard. The results suggest that the extracted phytase has much lower activity than other fungal phytases. Various phosphorylated substrates were used to determine the substrate specificity of the purified phytase: AMP, ADP, ATP, fructose-6-phosphate, glucose-6-phosphate and β-glycerophosphate were each added to the assay buffer at 5 mM. The release of inorganic phosphate was determined as mentioned earlier, and the specificity of the enzyme was found to be quite low. The enzyme activity was tested over a range of temperatures (20-100 ºC) and pH values (3.0-9.0), with the optimal temperature and pH determined by subjecting the solutions to the different temperatures and pH values, respectively. The optimum temperature was found to be 45 ºC and the optimum pH 5.0 14.
Crude protein extracts from isolates 9B (Bacillus) and 15C (Geobacillus) were used to analyze the phytase properties. Diluted crude protein extracts were used to determine the optimum temperature (30-90 ºC), thermal stability under standard assay conditions (at 37 ºC), optimum pH (using buffers of pH 1.0-9.0 along with sodium phytate) and pH stability. Molecular effectors such as ethylenediaminetetraacetic acid and CaCl2 were used to study the enzyme activity. A combination of anion-exchange chromatography (AEC) and high-performance liquid chromatography (HPLC) was used to evaluate the phytase action according to Sandberg and Adrienne. Crude extract in ammonium acetate buffer (pH 5.0) with sodium phytate as substrate was incubated, and aliquots were taken out at different time points, cooled and stored at -20 ºC.
Inositol phosphates were extracted from the samples by AEC and stored at 4 ºC; the solvents were evaporated, and the inositol phosphates were resuspended, homogenized, filtered and stored at -20 ºC. Reverse-phase HPLC was performed for the separation and quantification of the inositol phosphates. The optimum temperatures were found to be 60 ºC and 50 ºC, respectively. The phytase from the Geobacillus isolate showed a lower thermal coefficient and energy of activation than the Bacillus phytase, suggesting that the Geobacillus phytase is better able to work at high temperatures. The optimum pH was found to be 5.0 for both protein isolates. Relatively higher phytase activity was observed in the presence of the molecular effector CaCl2 than with EDTA. The appearance of partially phosphorylated myo-inositol phosphates was observed in the first hour for 9B, whereas for the 15C isolate these appeared in the fourth hour, indicating a lower degradation rate 26.
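The comparison of activation energies mentioned above rests on the Arrhenius relationship: plotting ln(rate) against 1/T gives a line of slope −Ea/R. A sketch of that estimation in Python (synthetic rate data for illustration, not values from the cited study):

```python
import math

R = 8.314  # gas constant, J mol^-1 K^-1

def activation_energy_kj(temps_c, rates):
    """Least-squares Arrhenius fit: slope of ln(rate) vs 1/T(K) equals -Ea/R."""
    xs = [1.0 / (t + 273.15) for t in temps_c]
    ys = [math.log(r) for r in rates]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return -slope * R / 1000.0  # kJ/mol

# Synthetic rates generated with Ea = 50 kJ/mol; the fit recovers ≈ 50 kJ/mol
temps = [30.0, 40.0, 50.0, 60.0]
rates = [1e8 * math.exp(-50_000.0 / (R * (t + 273.15))) for t in temps]
ea = activation_energy_kj(temps, rates)
```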
For Anoxybacillus, the bacterium was grown in an optimized medium on a 150 rpm rotary shaker at 37, 45, 50, 55, 60 and 65 ºC for 12 h. To determine the effect of pH, the bacterium was grown in the optimized medium at pH values ranging from 5.0-10.0 on a 150 rpm rotary shaker at 45 ºC for 12 h. The collected samples were used to determine phytase activity. The highest phytase activity was found in the cultures incubated at 45 and 50 ºC, whereas those incubated at 37, 60 and 65 ºC showed less activity. The optimum pH range was found to be 6.0-9.0 1, 20.
An ascorbic acid method modified from Kim et al. (1998) was used to determine phytase activity with sodium phytate as substrate. Phytase solution and sodium phytate in 0.1 M Tris-HCl buffer (pH 7.5) were incubated at 55 ºC for half an hour, and the reaction was stopped by adding 10% (w/v) trichloroacetic acid (TCA). The Pi released was measured by incubating with a colour reagent at 55 ºC for half an hour, and the absorbance was measured at 820 nm. The amount of enzyme required to release 1 µmol of Pi per min under standard assay conditions was defined as 1 unit of enzyme activity. The maximum phytase activity obtained from the optimized media was found to be 0.20 U/ml 22. For Aspergillus niger, an unoptimized medium with agricultural residues was moistened in an Erlenmeyer flask, sterilized at 121 ºC for 30 min, cooled, inoculated with 1% spore suspension and incubated for a week at 30 ºC. The optimized fermentation medium contained wheat bran in an Erlenmeyer flask along with glucose, dextrin, sodium nitrate and magnesium sulfate, moistened with water and sterilized at 121 ºC for half an hour. The medium was cooled, inoculated with 1% spore suspension and incubated at 30 ºC for 4 days. Optimized media along with wheat bran were moistened, sterilized, inoculated with spore suspension in enamel-coated metallic trays and incubated at 30 ºC for 5 days.
Phytase enzyme activity was determined at 50 ºC. Cellulase and xylanase activities were determined, and α-amylase activity was determined using McCleary and Sheehan's method. The presence of reducing sugars was determined using the DNS method, and protein concentration was calculated using Lowry's method with BSA as standard. Biomass was determined by measuring the glucosamine content using the Reissig method. The phytase enzyme extract was subjected to (NH4)2SO4 precipitation, centrifuged and dissolved in acetate buffer (pH 6.0). Sephadex G-25 was used for desalting, and the fraction was assayed for phytase activity. PAGE was performed at room temperature and 200 V for 2-3 h, and protein bands were visualized by silver staining. Gel filtration was done on a Sephadex S-200 column equilibrated with acetate buffer (pH 5.5) for molecular weight estimation, with cytochrome C, BSA, carbonic anhydrase, alcohol dehydrogenase and β-amylase used as standard proteins according to Andrew's method. The purification resulted in 69% enzyme recovery and a specific activity of 49.83 IU/ml of protein.
Partially purified phytase was characterized after precipitation and desalting of the enzyme. The effect of pH on enzyme activity was determined by treatment with buffers of pH ranging from 2.0-10.0 at 50 ºC, and the enzyme was subjected to a range of temperatures to determine the optimum temperature. Enzyme activity was studied under standard assay conditions and the values were compared with a standard devoid of inoculation. The optimum pH was found to be 6.0 and the optimum temperature 55 ºC. The effect of metal ions was studied by treating the culture with various metal ions at 50 ºC for 30 min under standard assay conditions; phytase activity was moderately enhanced by Ca2+, Fe2+, Fe3+, Ba2+ and Pb2+ 15. Enteric bacteria were also found to produce phytase, and the enzyme was purified using techniques based on solubility and chromatographic methods. The best purification strategy was found to be a combination of (NH4)2SO4 precipitation, gel filtration and ion-exchange chromatography. The amount of protein was determined using the Bradford method with BSA as standard, and the phytase fractions were separated by SDS-PAGE according to the Laemmli method. A single protein band was obtained for each purified enzyme, and the molecular masses were found to be 45 kDa and 43 kDa for the strains Klebsiella sp. RS4 and Shigella sp. W3, respectively 16. Phytase was also produced from lactic acid bacteria isolated from dairy products. Non-lactic acid bacteria, E. coli isolates from the faecal flora of a healthy donor, were used as a negative control. The isolates were grown overnight in a modified MRS broth in which the only available phosphate source was 0.1% sodium phytate, then centrifuged at 400 rpm, and the supernatant was collected and tested for phytase activity. A complete phytase assay system was used to determine phytase activity in the probiotic isolates according to the manufacturer's instructions.
The phytase activity was found to be much higher in the bacterial strains (4.0-5.4 mU) than in the probiotic isolates 26.
The phytase enzyme was also produced by bacteria inhabiting the roots of mangroves. Partial purification was done by (NH4)2SO4 precipitation followed by dialysis, carried out at 4 ºC. Five isolated strains were incubated in phytase production media in a shaker incubator at 200 rpm and 37 ºC for 3-5 days. The fermented broth was centrifuged at 10,000 rpm for 10 min at 4 ºC, and the supernatant was collected and fractionated by (NH4)2SO4 precipitation and saturation. The fraction was incubated overnight, centrifuged at 9000 rpm for 30 min at 4 ºC and dissolved in acetate buffer, followed by enzyme assay. The enzyme fraction with the highest activity was desalted using a dialysis bag and subjected to SDS-PAGE according to a modified Laemmli method to check homogeneity. The purified isolate showed a high enzyme activity of 5.583 U/mg and 72% enzyme recovery with 10.3-fold purification. The Rf value was calculated as the ratio of the distance traveled by each isolate to the distance traveled by the dye front, and the molecular weight of the sample was determined from the standard curve obtained by plotting the standard proteins against their Rf values. The molecular weight was estimated to be in the range of 16-22 kDa. To study the effect of temperature on enzyme activity, the sample was incubated at 50, 60, 70 and 80 ºC and cooled to 4 ºC before assay; to study the effect of pH, the sample was treated with buffers ranging from pH 2.0-9.0 and then assayed. The optimum temperature and pH were found to be 45-55 ºC and 5.0, respectively 8.
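The molecular-weight estimate above uses the usual SDS-PAGE calibration: log10(MW) of the marker proteins is approximately linear in Rf, so an unknown band's MW is read off the fitted line. A sketch in Python (the marker ladder below is hypothetical, chosen only to illustrate the fit):

```python
import math

def mw_from_rf(rf_unknown, standards):
    """Estimate molecular weight from relative mobility on SDS-PAGE.

    standards: list of (rf, mw_kda) for marker proteins; fits
    log10(MW) = a*Rf + b by least squares, then evaluates at rf_unknown.
    """
    xs = [rf for rf, _ in standards]
    ys = [math.log10(mw) for _, mw in standards]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    b = my - a * mx
    return 10 ** (a * rf_unknown + b)

# Hypothetical marker ladder as (Rf, kDa) pairs
markers = [(0.15, 97.0), (0.35, 66.0), (0.55, 45.0), (0.75, 30.0), (0.90, 20.1)]
mw = mw_from_rf(0.85, markers)  # estimated kDa for a band at Rf = 0.85
```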
Geobacillus stearothermophilus is a thermophilic bacterium that has been found to produce phytase. Strain DM12 was grown on a phytase production medium and incubated in a shaker incubator at 200 rpm and 65 °C for 48 h. The phytase medium consisted of 1.5% glucose, 0.5% ammonium sulphate, 0.05% potassium chloride, 0.01% sodium chloride, 0.01% calcium chloride (hydrated), 0.001% ferric sulphate, 0.001% manganese sulphate and 0.5% sodium phytate, with the pH adjusted to 7.0. The culture used was 16 hours old. After incubation, the culture was centrifuged at 10,000 rpm for 10 min at 4 °C, and the supernatant, which served as the crude enzyme, was collected. This supernatant was then fractionated by stepwise precipitation with ammonium sulfate powder at different saturations. These samples were again centrifuged at 12,000 rpm for 20 min, and the precipitate was collected, dissolved in 1 ml of 0.1 M acetic acid buffer and assayed for enzymatic activities.
The aliquot with the highest phytase activity was dialyzed overnight against Tris-HCl buffer of pH 7.5 to remove the remaining salt. The samples were then loaded onto a Q-Sepharose column (1.5 × 24 cm) that had been pre-equilibrated with 20 mM Tris-HCl buffer of pH 7.5, and the column was eluted with a linear gradient of 0.1 M NaCl at a flow rate of 1 ml/min. These fractions were checked for phytase activity; this is the purified form of the enzyme. The purified fractions were then subjected to SDS-PAGE to check homogeneity. Phytase activity was detected by incubating the gel in a 4 mM sodium phytate solution in 0.1 M sodium acetate buffer for 30 min; after washing with water, the phytase bands were detected by immersing the gel in a colouring agent (malachite green) for 1-2 h until green bands were visible 1.
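The stepwise ammonium sulfate precipitation in the Geobacillus protocol relies on the standard nomogram arithmetic for how much solid salt to add to move between saturation levels. A sketch in Python; the 533 g/L constant is the commonly tabulated value for roughly 20 °C and the formula is an approximation, not a figure from the cited study:

```python
def ammonium_sulfate_g_per_l(s1_percent, s2_percent):
    """Grams of solid (NH4)2SO4 to add per litre of solution to raise
    saturation from s1% to s2% (standard ~20 degC approximation)."""
    return 533.0 * (s2_percent - s1_percent) / (100.0 - 0.3 * s2_percent)

# e.g. a 0 -> 40% cut followed by a 40 -> 80% cut
first_cut = ammonium_sulfate_g_per_l(0, 40)    # ≈ 242 g/L
second_cut = ammonium_sulfate_g_per_l(40, 80)  # ≈ 280 g/L
```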
Different bacterial colonies were obtained from the soil samples after inoculation on nutrient agar media. Their phytase activities were screened by re-plating each single colony on wheat bran extract agar plates and observing the clearance zones. The most efficient isolates were inoculated into a 10% concentration of wheat bran medium; pre-sterilized 0.2% calcium chloride was added just before inoculation. The contents of the flask were mixed and incubated in a shaker incubator at 200 rpm and 37 °C for 72 h. The fermented broth was centrifuged at 6000 rpm for 30 min at 4 °C, and the collected supernatant was used to check phytase activity. The greater the amount of phosphate released, the higher the phytase activity 21.
For the phytase obtained from Bacillus subtilis MJA, the supernatant obtained after centrifugation of the culture was dialyzed and concentrated using a filtration system. It was then transferred to an anion-exchange chromatographic column equilibrated with 20 mM Tris-HCl buffer of pH 8. After washing the column, the enzyme was eluted at a flow rate of 1 ml/min using a linear gradient of 0 to 100% of 1 M NaCl in 20 mM Tris-HCl buffer of pH 8 and collected in fractions of 5 ml each. These fractions were checked for phytase, and the ones containing phytase activity were pooled, dialyzed and concentrated using the TFF filtration system. The concentrated enzyme was applied to a gel filtration system pre-equilibrated with 50 mM phosphate buffer and 200 mM NaCl at pH 8 and eluted with the same buffer at a flow rate of 1 ml/min. Fractions were collected every 5 min, and once again the fractions with high phytase activity were pooled. During the purification processes, all the collected and pooled fractions were tested for absorption (280 nm), total protein (595 nm) and phytase activity (355 nm) 21.
Phytase was also produced from an Aeromonas species. The Aeromonas sp. culture was first cultivated at 37 °C, 200 rpm and pH 7 in 2 L of MPB medium for 48 h. The culture broth was then centrifuged, and the supernatant obtained was the crude enzyme solution. The supernatant was then precipitated with 80% saturated ammonium sulfate and resuspended in 10 mM Tris-HCl buffer of pH 8. This was dialyzed, and the dialyzed fractions were loaded onto a DEAE-Sephacel column and eluted with 200 mM NaCl. The enzyme activity was assayed in 0.1 M Tris-HCl buffer at pH 7 containing 2 mM sodium phytate at 37 °C for 30 min, and the reaction was stopped by adding 250 µl of 10% (w/v) trichloroacetic acid 13. For the enzyme isolated from Bacillus licheniformis, 100 µl of enzyme and 900 µl of substrate (sodium phytate in 0.2 M Tris-HCl buffer of pH 7.0 supplemented with 1 mM calcium chloride) were incubated together at 55 °C for 30 min. The protein concentration was then measured using Bradford's dye-binding assay method 6.
The activity of phytase from phosphate-solubilizing bacteria was measured using a modified method of Fiske and Subbarow. After 48 h of incubation, exactly 150 µl of the PSB (phosphate-solubilizing bacteria) culture was added to 600 µl of substrate and incubated at 45 °C; 750 µl of 5% trichloroacetic acid was added to stop the reaction, and the released inorganic phosphate was measured spectrophotometrically 19. Phytase was obtained from Lactobacillus plantarum by ammonium salt precipitation. Once the supernatant was obtained after centrifugation of the bacterial broth, it was subjected to ammonium sulfate precipitation. The precipitates obtained were dissolved in small amounts of 0.1 M Tris-HCl buffer of pH 5.5 and checked for phytase activity. The active fractions were dialyzed against the same buffer, pooled and allowed to stand at 4 °C, yielding the pure form of the enzyme 28.
Citric pulp fermentation with Aspergillus niger FS3 was found to produce a new kind of phytase, which was purified by cationic-exchange chromatography, anionic-exchange chromatography and chromatofocusing. In the first step, cationic-exchange purification of the crude extract was conducted with SP Sepharose pre-equilibrated with 25 mM glycine-HCl of pH 2.85. The column was loaded with the crude extract at a 1:3 dilution in 25 mM glycine-HCl of pH 2.85 and washed with the same buffer. The enzyme proteins were then eluted with a linear gradient of 0 to 0.5 M NaCl in the same buffer, and fractions of 10 ml were collected. All the fractions were checked for phytase activity, and the ones showing the highest activity were pooled and dialyzed against 50 mM Tris-HCl of pH 7.8. The second step, anionic-exchange chromatography, was conducted on a Mono Q column using 50 mM Tris-HCl buffer of pH 7.8 for equilibration. The pooled samples obtained in the first step were loaded onto the Mono Q column and eluted using a linear gradient of 0 to 0.5 M NaCl.
The eluted volume was collected into 50 tubes of 6 ml each. These fractions were checked for phytase activity, and the ones showing activity were pooled. The last step in purification was chromatofocusing: the second pooled sample was loaded onto a Mono P column, and the separation was performed using 25 mM imidazole-HCl of pH 6.2 over a pH gradient of 6.2 to 4.0 inside the column. The elution buffer, diluted 8 times in 25 mM imidazole-HCl with the pH adjusted to 4.0 with HCl, was applied over 30 min to elute the bound proteins according to their isoelectric points. The phytase-active fractions were collected, pooled and lyophilized. From SDS-PAGE analysis, the molecular weight of the pure phytase was found to be 108 kDa; it had an optimum pH of 5.0 to 5.5 and an optimum temperature of 60 °C, displayed a high affinity towards phytate, and had a high phytate-degrading activity 28.
Phytase purification from Aspergillus tamari was usually done by DEAE-cellulose column chromatography. The crude phytase extract produced was purified by DEAE-cellulose column chromatography, with an alcohol treatment performed prior to the chromatography. The enzyme activity was studied in sodium acetate buffer (pH 5.0) with sodium phytate at 37 ºC, and the amount of Pi released was measured according to a modified ammonium molybdate method. The amount of enzyme required to release 1 µmol of Pi per min under standard assay conditions was defined as 1 unit of enzyme activity. The purified phytase was found to have an enzyme recovery of 20.3% with 51-fold purification. The pH stability and optimum pH were determined using buffers over a pH range of 1.0-10.0, and the enzyme was subjected to different temperature ranges.
The phytate-degrading enzyme was purified by FPLC run at 25 ºC with a flow rate of 1 ml/min. The enzyme was also precipitated using ammonium sulfate, then suspended in Tris-HCl buffer, dialyzed against the same buffer and centrifuged. The dialyzed precipitate was loaded onto a DEAE-Sepharose CL-6B column, and the required enzyme fraction was pooled after fractionation. This fraction was loaded onto a 16/60 Sephacryl S-200 HR column, and the resulting fraction was dialyzed against sodium acetate buffer (pH 5.0). This fraction was further applied to a Mono S HR 5/5 column equilibrated with sodium acetate buffer (pH 5.0), and the phytase-active fraction was pooled. The enzyme activity was found to be 133 U mg-1.
SDS electrophoresis was performed according to the Laemmli technique, and the molecular weight of the purified protein was determined by gel filtration on 16/60 Sephacryl S-200 HR. Gel filtration indicated a molecular mass of 85,000 ± 2500 Da, while SDS-PAGE estimated it to be 85,000 Da 15. Phytase was also obtained from Saccharomyces cerevisiae (baker's yeast), which was shown to express phyA genes responsible for the production of the phytase enzyme. 1 ml of the yeast cell culture was centrifuged at 5000 rpm for 10 min, and the supernatant contained the crude extracellular phytase. Phytase activity was assayed by pre-incubating 0.8 ml of sodium phytate at 65 °C for 5 min and adding 0.2 ml of the phytase. The mixture was incubated at 65 °C for 30 min, after which the reaction was stopped by the addition of 1 ml of 50 g/L trichloroacetic acid. The mixtures were then eluted by column chromatography and the pure form of the phytase enzyme was obtained. Yeast cell surfaces have the special quality of presenting the phytase enzyme with highly homogeneous quality even without purification 15.
Protein sequence analysis showed that the pI value of the phytase enzyme from Aspergillus fumigatus was slightly higher than that of phytases from other sources. Because it was also more basic than the other proteins present in the culture broth, purification could be done in a one-step process: the enzyme was subjected to cation-exchange chromatography at pH 5.0, and the phytase was eluted as a single peak at 500 mM NaCl in the step gradient. The purified enzyme was analyzed by SDS-PAGE 25.
CONCLUSION: Since there is an increasing demand for phytase owing to the excessive accumulation of phosphates in the environment, many studies have been carried out to extract the enzyme from different sources. In the majority of the experiments conducted, microorganisms were used as the sources for the production of the phytase enzyme, most likely because microorganisms are easily manageable and have a rapid rate of multiplication.
ACKNOWLEDGEMENT: The authors would like to thank the VC, Pro VC and CHRIST (Deemed to be University) management for supporting this work.
CONFLICTS OF INTEREST: The authors declare no conflicts of interest.
- Shivanna GB and Venkateswaran G: Phytase Production by Aspergillus niger CFR 335 and Aspergillus ficuum SGA 01 through submerged and solid-state fermentation. The Scientific World Journal 2014; 1-6.
- Ji SM, Nyeo KJ, Ah CE, Hoon P, Jong CH and Ryang PU: Purification and characterization of a novel extracellular alkaline phytase from Aeromonas sp. Journal of Microbiology and Biotechnology 2004; 15(4): 745-48.
- Parhamfar M, Dalfard AB, Khaleghi M and Hassanshahian M: Purification and characterization of an acidic, thermophilic phytase from a newly isolated Geobacillus stearothermophilus strain DM12. Progress in Biological Sciences 2015; 5(1): 61-73.
- Salmon NX, Piva LC, Binati RL, Rodrigues C, Vandenberghe LPS, Soccol CR and Spier MR: A bioprocess for the production of phytase from Schizophyllum commune: studies of its optimization, profile of fermentation parameters, characterization and stability. Bioprocess and Biosystems Engineering 2012; 35(7): 1067-79.
- Spier MR, Fendrich RC, Almeida PC, Noseda M, Greiner R, Konietzny U, Woiciechowski AL, Soccol VT and Soccol CR: Phytase produced on citric byproducts: purification and characterization. World Journal of Microbiology and Biotechnology 2010; 27(2): 267-74.
- Wang Y, Yichun W, Han J, Shijun F, Shijin G and Zhiqiang S: Production and characterization of a thermostable beta-propeller phytase from Bacillus licheniformis. African Journal of Biotechnology 2013; 7(17): 1745-51.
- Panhwar QA, Othman R, Rahman ZA, Meon S and Ismail MR: Isolation and characterization of phosphate-solubilizing bacteria from aerobic rice. African Journal of Biotechnology 2012; 11(11): 2711-19.
- Toukhy NMK, Youssef AS and Mikhail MGM: Isolation, purification and characterization of phytase from Bacillus subtilis. African Journal of Biotechnology 2013; 12(20): 2957-67.
- Sreedevi S and Reddy NB: Screening for efficient phytase producing bacterial strains from different soils. International Journal of Biosciences 2013; 3(1): 76-85.
- Zhu MJ and Wang HX: Purification and identification of a phytase from fruiting bodies of the winter mushroom, Flammulina velutipes. African Journal of Biotechnology 2011; 10(77): 17845-52.
- Singh B and Satyanarayana T: Production of phytate-hydrolyzing enzymes by thermophilic moulds. African Journal of Biotechnology 2012; 11(59): 12314-24.
- Jorquera MA, Gabler S, Inostroza NG, Acuña JJ, Campos MA, Blackburn DM and Greiner R: Screening and characterization of phytases from bacteria isolated from Chilean Hydrothermal Environments. Microbial Ecology 2018; 75(2): 387-99.
- Kanpiengjai A, Unban K, Pratanaphon R and Khanongnuch C: Optimal medium and conditions for phytase production by thermophilic bacterium, Anoxybacillus sp. MHW14. Food and Applied Bioscience Journal 2013; 1(3): 172-89.
- Bhavsar KP, Khire J and Kumar VR: High level phytase production by Aspergillus niger NCIM 563 in solid state culture: response surface optimization, up-scaling, and its partial characterization. Journal of Industrial Microbiology and Biotechnology 2011; 38(9): 1407-17.
- Shah KB and Trivedi R: Purification and characterization of an extracellular phytase from Aspergillus tamari. International Journal of Pharma and Bio-sciences 2012; 3(2): 775-83.
- Roy MP and Ghosh S: Purification and characterization of phytase from two enteric bacteria isolated from cow dung. Proceedings of 5th International Conference on Environmental Aspects of Bangladesh 2014; 57-59.
- Khodaii Z, Natanzi MM, Naseri MH, Goudarzv M, Dodson H and Snelling AM: Phytase activity of lactic acid bacteria isolated from dairy and pharmaceutical probiotic products. International Journal of Enteric Pathogen 2013; 1(1): 12-16.
- Zhang GQ, Wu YY, Ng TB, Chen QJ and Wang HX: A phytase characterized by relatively high pH tolerance and thermostability from the Shiitake Mushroom Lentinus edodes. BioMed Research International 2013; 1-7.
- Patki JM, Singh S and Mehta S: Partial purification and characterization of phytase from bacteria inhabiting the mangroves of the western coast of India. International Journal of Current Microbiology and Applied Sciences 2015; 4(9): 156-69.
- Akyurek H, Ozduven ML, Okur AA, Koc F and Samli HE: The effects of supplementing an organic acid blend and/or microbial phytase to a corn-soybean based diet fed to broiler chickens. African Journal of Agricultural Research 2011; 6(3): 642-49.
- Bahadoran R, Gheisari A and Toghyani M: Effects of supplemental microbial phytase enzyme on performance and phytate phosphorus digestibility of a corn-wheat-soybean meal diet in broiler chicks. African Journal of Biotechnology 2011; 10(34): 6655-62.
- Park I and Cho J: The phytase from Antarctic bacterial isolate Pseudomonas JPK1 as a potential tool for animal agriculture to reduce manure phosphorus excretion. African Journal of Agricultural Research 2011; 6(6): 1398-1406.
- Sasirekha B, Bedashree T and Champa KL: Optimization and partial purification of extracellular phytase from Pseudomonas aeruginosa. European Journal of Experimental Biology 2012; 2(1): 95-104.
- Roy MP, Poddar M, Singh KK and Ghosh S: Purification, Characterization and properties of phytase from Shigella CD2. Indian Journal of Biochemistry and Biophysics 2012; 49: 266-71.
- Pasamontes L, Haiker M, Wyss M, Tessier M and Loon AP: Gene cloning, purification and characterization of a heat-stable phytase from the fungus Aspergillus fumigatus. Applied and Environmental Microbiology 1997; 63(5): 1696-1700.
- Sarubuga E, Nadaroglu H, Dikbas N, Senol M and Cetin B: Purification, characterization of phytase enzyme from Lactobacillus plantarum bacteria and determination of its kinetic properties. African Journal of Biotechnology 2014; 13(23): 2373-78.
- Xu L, Zhang G, Wang H and Ng TB: Purification and characterization of phytase with wide pH adaptation from common edible mushroom Volvariella volvaceae (Straw mushroom). Indian Journal of Biochemistry & Biophysics 2012; 49(1): 49-54.
- Borda-Molina D, Vital M, Sommerfeld V, Rodehutscord M and Camarinha-Silva A: Insights into broilers gut microbiota fed with phosphorus, calcium and phytase supplemented diets. Frontiers in Microbiology 2016; 7: 1-13.
How to cite this article:
Chatterjee A, Mathew AM, George A, Sengupta P, Pundir P, Xavier FJ and Venkatanagaraju E: Purification strategies for microbial phytase. Int J Pharm Sci & Res 2020; 11(1): 25-34. doi: 10.13040/IJPSR.0975-8232.11(1).25-34.
© 2013 International Journal of Pharmaceutical Sciences and Research. All rights reserved. This Journal is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.
A. Chatterjee, A. M. Mathew, A. George, P. Sengupta, P. Pundir, F. J. Xavier and E. Venkatanagaraju *
Department of Biotechnology, Bannari Amman Institute of Technology, Sathyamangalam, Erode, Tamil Nadu, India.
23 March 2019
01 August 2019
06 November 2019
01 January 2020 | <urn:uuid:91f002dc-b288-48a0-9c06-4e79f5093d8e> | CC-MAIN-2022-27 | https://ijpsr.com/bft-article/purification-strategies-for-microbial-phytase/?view=fulltext | s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103034170.1/warc/CC-MAIN-20220625034751-20220625064751-00051.warc.gz | en | 0.957047 | 10,938 | 2.765625 | 3 |
Published in July 2018.
This module is a resource for lecturers
This section provides a list of (mostly) open access materials that the lecturer could ask the students to read before taking a class based on this Module.
Key concepts, issues and challenges
- United Nations, General Assembly (2006). Basic Principles and Guidelines on the Right to a Remedy and Reparation for Victims of Gross Violations of International Human Rights Law and Serious Violations of International Humanitarian Law . 21 March. A/RES/60/147.
- United Nations, General Assembly (1985). Declaration of Basic Principles of Justice for Victims of Crime and Abuse of Power . 29 November. A/RES/40/34.
- United Nations, Human Rights Council (2016). "They came to destroy": ISIS Crime Against the Yazidis . 15 June. A/HRC/32/CRP.2.
- United Nations, Office of the High Commissioner for Human Rights (OHCHR) (2011). Realizing the rights of victims of terrorism . 22 August.
- United Nations, General Assembly, Human Rights Council (2012). Report of the Special Rapporteur on the promotion and protection of human rights and fundamental freedoms while countering terrorism, Ben Emmerson: Framework principles for securing the human rights of victims of terrorism. 4 June. A/HRC/20/14.
- African Commission on Human and Peoples' Rights (2015). Principles and Guidelines on Human and Peoples' Rights while Countering Terrorism in Africa .
- Organization for Security and Cooperation in Europe (OSCE) (2004). Permanent Council Decision No. 618 on Solidarity with victims of terrorism . 1 July. PC.DEC/618. Paras. 1-2; recognition of the need for appropriate support.
- Hoffman, Bruce, and Anna-Britt Kasupski (2007). The Victims of Terrorism: An Assessment of Their Influence and Growing Role in Policy, Legislation, and the Private Sector . Santa Monica: RAND Center for Terrorism Risk Management Policy - CTRMP.
- Shapland, Joanna, and Matthew Hall (2007). "What do we know about the effects of crime on victims?" International Review of Victimology, vol. 14, pp. 175-217.
- Schmid, Alex (2006). " Magnitudes and Focus of Terrorist Victimization." In Uwe Ewald, and Ksenija Turkovic, eds. Large-Scale Victimisation as a Potential Source of Terrorist Activities. IOS Press.
International/regional legal framework
- United Nations Office on Drugs and Crime (2015). Good Practices in Supporting Victims of Terrorism within the Criminal Justice Framework . New York: UNODC.
- UNODC (1999). Handbook on Justice for Victims . New York: UNODC. Provides guidance as to the establishment of a social solidarity fund for victims of terrorism.
- United Nations, General Assembly (2011). Promotion and protection of human rights and fundamental freedoms while countering terrorism . 18 August. 66/310. Addresses the rights of victims of terrorism.
- United Nations, General Assembly (2017). Protecting human rights and fundamental freedoms while countering terrorism - Report of the Secretary-General . 11 August. A/72/316.
- Goldscheid, Julie (2004). " Crime Victim Compensation in a Post 9/11 World." City University of New York (CUNY) Academic Works.
- Sommer, Hillel (2003). " Providing Compensation for Harm Caused by Terrorism: Lessons Learned in the Israeli Experience." Indiana Law Review, vol. 36, no. 2, p. 335-365.
- Moréteau, Oliver (2008). " Policing the Compensation of Victims of Catastrophes: Combining Solidarity and Self-Responsibility ." Journal Articles, 299.
- Grey, Betsy J. (2005/6). " Homeland Security and Federal Relief: A Proposal for a Permanent Compensation System for Domestic Terrorist Victims." New York University Journal of Legislation and Public Policy, vol. 9, no. 2, pp. 663-750.
- OSCE (2005). Background Paper on Solidarity with Victims of Terrorism . Oñati, 9-10 March. | <urn:uuid:2695b940-9391-4824-ab9d-383b6cfcc107> | CC-MAIN-2019-30 | http://www.unodc.org/e4j/es/terrorism/module-14/core-readings.html | s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195529664.96/warc/CC-MAIN-20190723193455-20190723215455-00352.warc.gz | en | 0.752044 | 873 | 2.546875 | 3 |
In an innovative trial beginning this spring, Trees for Life will harness the power of local mushrooms to boost reforestation at its Dundreggan Conservation Estate in Glenmoriston.
The conservation charity’s experts and volunteers are to introduce a special mix of spores collected from mushrooms on the Highland estate when planting native trees on the hills and when growing seedlings in Dundreggan’s tree nursery during this spring.
A pinch of the black granules containing the spores will be added to the planting holes of 20,000 trees in one section of the estate, and will also be applied to a selection of seedlings.
The results of this trial will be monitored to see if treating selected trees and seedlings in this way improves their growth and decreases the need for fertiliser application. It is hoped the trees will have greater resistance to drought and heat, and protection against pests.
“In tough, windswept environments such as those where we plant, newly planted trees need all the help they can get – especially in their early years. This magical mushroom mixture could speed up the return of the Caledonian Forest and its wildlife,” said Doug Gilbert, Trees for Life’s Operations Manager at Dundreggan.
Mycorrhizal fungi live underground on tree roots in a mutually beneficial relationship that has evolved over 400 million years. Many plants cannot survive without the fungi. In the autumn, some of these fungi form fruiting bodies, or mushrooms, above the ground.
The trees provide sugars for the fungi and in return, the fungi’s powerful enzymes break down and release nutrients such as phosphorus and iron which helps feed the trees. Young trees inoculated with mycorrhizal fungi suffer much less from heat stress, drought and the shock of planting.
Natural forest soils are full of these important fungi. But in very deforested areas such as the Highlands, forests still containing mushrooms are rare, small or fragmented, and are often separated by huge swathes of farmland and moorland. This means it can take years for fungi spores to land in the right place by newly planted trees – by which time the trees may be stunted or dead.
Last autumn, the first batch of a new mycorrhizal fungi treatment was made containing 59 species collected from the old-growth forests at Dundreggan by expert Jacob Whitson. Commercially available mycorrhizal treatments for trees are usually made from only a few mushroom species that may not be adapted to conditions in Scotland.
“Mycorrhizal fungi are one of our greatest allies for reforesting degraded landscapes – but they have been lost from soils because of issues including deforestation and overgrazing. By reintroducing them we can help trees,” said Jacob, who runs Chaos fungorum, a business supplying native mycorrhizal mixes for trees.
Trees for Life will now begin using the spore-laden granules during its popular volunteer Conservation Weeks running this spring. During these volunteering weeks, participants plant some 130,000 trees a year at Dundreggan, and also grow more than 60,000 trees a year in the estate’s tree nursery.
Trees for Life’s award-winning restoration of the Caledonian Forest also includes taking action for the return of rare woodland wildlife and plants, and carrying out innovative scientific research and education programmes. To find out more, see www.treesforlife.org.uk. | <urn:uuid:d547a3c6-10c9-44f7-938c-61e42bad1346> | CC-MAIN-2019-13 | http://www.walkfife.com/mysterious-mushroom-mixture-set-to-boost-reforestation-of-the-highlands/ | s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912202324.5/warc/CC-MAIN-20190320085116-20190320111116-00222.warc.gz | en | 0.942528 | 713 | 2.890625 | 3 |
The Orthodox Church has a rich tradition of iconography as well as other church arts: music, architecture, sculpture, needlework, poetry, etc. This artistic tradition is based on the Orthodox Christian doctrine of human creativity rooted in God’s love for man and the world in creation.
Because man is created in the image and likeness of God, and because God so loved man and the world as to create, save and glorify them by His own coming in Christ and the Holy Spirit, the artistic expressions of man and the blessings and inspirations of God merge into a holy artistic creativity which truly expresses the deepest truths of the Christian vision of God, man, and nature.
The icon is Orthodoxy’s highest artistic achievement. It is a gospel proclamation, a doctrinal teaching and a spiritual inspiration in colors and lines.
The traditional Orthodox icon is not a holy picture. It is not a pictorial portrayal of some Christian saint or event in a “photocopy” way. It is, on the contrary, the expression of the eternal and divine reality, significance, and purpose of the given person or event depicted. In the gracious freedom of the divine inspiration, the icon depicts its subject as at the same time both human and yet “full of God,” earthly and yet heavenly, physical and yet spiritual, “bearing the cross” and yet full of grace, light, peace and joy.
In this way the icon expresses a deeper “realism” than that which would be shown in the simple reproduction of the physical externals of the historic person or happening. Thus, in their own unique way the various types of Orthodox icons, through their form and style and manner of depiction as well as through their actual contents and use in the Church, are an inexhaustible source of revelation of the Orthodox doctrine and faith.
Musical expression may be added to the icon as a source of discovering the Orthodox Christian worldview. Here, however, there is greater difficulty because of the loss in recent years of the liturgical and spiritual meaning of music in the Church. Just as the theological meaning of the traditional Orthodox icon is being rediscovered, so is the traditional doctrinal significance of Orthodox music. The process in the latter case, however, is much slower, much more difficult, and much less evident to the average person.
The traditional Orthodox architecture also expresses the doctrine of the Church, particularly in its emphasis on “God with us” and the complete communion of men and the world with God in Christ. The use of domed ceilings, the shape and layout of the buildings, the placing of the icons, the use of vestments, etc., all express the teachings of the Church. The traditional Orthodox church architecture and art work are expressions of the Orthodox Christian doctrines of creation, salvation and eternal life.
It is a very important spiritual exercise for Christians to study the holy icons and the hymns of the Church’s liturgy. One can learn much about God and His gracious actions among men by a careful and prayerful contemplation of the artistic expressions of Church doctrine and life (see Worship). | <urn:uuid:047e4c77-0962-4490-af6d-cb74b36a9cf1> | CC-MAIN-2014-35 | http://oca.org/orthodoxy/the-orthodox-faith/doctrine/sources-of-christian-doctrine/church-art | s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500835822.36/warc/CC-MAIN-20140820021355-00069-ip-10-180-136-8.ec2.internal.warc.gz | en | 0.949253 | 642 | 2.625 | 3 |
The Texas A&M Institute of Renewable Natural Resources has unveiled a new comparison component to the Texas Land Trends database that will help public and private decision makers make more informed decisions about conservation of working rural lands in Texas, according to an official at the institute.
Texas is home to more than 142 million acres of private farms, ranches and forest lands, thus leading the nation in land area devoted to privately-owned working lands. These lands account for 84 percent of the state's entire land area and provide substantial economic, environmental and recreational resources to the benefit of the state's entire population.
Members of the institute's Land Information System demonstrated the new tool that compares land use, market value, population and ownership size side-by-side for different regions of the state at a recent Land Trends workshop in San Antonio, organized by the Texas Agricultural Land Trust.
The Texas Land Trends database, www.texaslandtrends.org, is an interactive website detailing current land-use trends within the state. The institute and American Farmland Trust developed the database and website.
Amy Snelgrove, the institute's land information manager, said the new component allows users to compare specific trends such as ownership within a county to the river basin it falls in, along with statewide trends.
"As conservation planners use Texas Land Trends to make informed decisions, the need to compare different regions within the state side-by-side has become increasingly important," said Snelgrove, who also is a geographic information system specialist for Texas AgriLife Research.
According to accumulated data from county appraisal districts, from 1997 to 2007 more than 2.1 million acres of farms, ranches and forestlands were converted to other uses.
"Approximately 40 percent of this land conversion was related to growth and development associated with population expansion in the state's 25 highest growth counties," said Blair Fitzsimons, Texas Agricultural Land Trust executive director. "During this period, 861,765 acres were lost from the agricultural land base in these counties."
Fitzsimons said this rapid urbanization and fragmentation threaten Texas' $73 billion agriculture industry, its sources of drinking water, and the habitat upon which a $10.9 billion wildlife-recreation industry depends.
"Large-scale infrastructure projects are also often planned with little analysis of the public costs—economic, social and environmental—of losing vibrant rural lands, Fitzsimons said.”This new tool, along with the amount of useful information already available through Texas Land Trends, will help stakeholders define focus areas for long-term strategic planning."
Sponsors of the project are The Brown Foundation, Houston Endowment, Shield-Ayers Foundation, Magnolia Charitable Trust and The Jacob and Terese Hershey Foundation. | <urn:uuid:e38262a3-f5cc-48e8-810d-13ff8deee809> | CC-MAIN-2014-52 | http://southwestfarmpress.com/print/management/institute-unveils-database-tool-help-conservation-planning | s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1418802768561.127/warc/CC-MAIN-20141217075248-00129-ip-10-231-17-201.ec2.internal.warc.gz | en | 0.924144 | 564 | 2.875 | 3 |
NIST Releases Federal Risk Assessment Guide
Federal technology standards body issues new guidelines for evaluating cyber security vulnerabilities.
The federal organization for creating technology standards has released new guidance to help agencies assess risk within their IT systems as part of an overall strategy to instill more prevention in federal cybersecurity.
The National Institute of Standards and Technology (NIST) is currently seeking comments through Nov. 4 on its Guide for Conducting Risk Assessments, which updates an original version published nine years ago.
The guide is aimed at helping agencies evaluate the current threat landscape as well as identify potential vulnerabilities and the adverse impacts they may have on agency business operations and missions, according to NIST.
Risk assessment is one of four steps in agencies' general security risk-management strategy, according to NIST. Assessment helps agencies determine the appropriate response to cyberattacks or threats before they happen and guides their IT investment decisions for cyber-defense solutions, according to the organization.
It also helps agencies maintain ongoing situational awareness of the security of their IT systems, something that is becoming more important to the federal government as it moves from a mere reactionary or compulsory security approach to one that proactively addresses risks and takes more consistent, preventative measures.
Indeed, in testimony Wednesday before Congress, a federal IT official noted the government's new focus on risk mitigation as key to its future security measures, particularly as they pertain to cloud computing and its security risks.
The government is "shifting the risk from annual reporting under FISMA to robust monitoring and more mitigation" in an attempt to strengthen the security of federal networks, said David McClure, associate administrator for the General Services Administration's office of citizen services and innovative technologies during a House subcommittee on technology and innovation hearing.
To this end, NIST has been working to provide cybersecurity guidelines and standards to agencies as they work to better lock down federal IT systems.
Changes also have been made to how agencies report their security compliance. Agencies recently were required to report security data to an online compliance tool called CyberScope as part of fiscal year 2011 requirements for the Federal Information Security Management Act (FISMA), a standard for federal IT security created and maintained by NIST.
| <urn:uuid:626684ee-53a5-4727-8a0a-0e273269576e> | CC-MAIN-2018-26 | http://www.darkreading.com/risk-management/nist-releases-federal-risk-assessment-guide/d/d-id/1100284?piddl_msgorder=asc | s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267863206.9/warc/CC-MAIN-20180619212507-20180619232507-00162.warc.gz | en | 0.947227 | 557 | 2.546875 | 3 |
As 'fountain of honour' in the UK, The Queen has the sole right of conferring titles of honour on deserving people from all walks of life, in public recognition of their merit, service or bravery. The most well-known honours are probably MBEs, OBEs and CBEs, but there are a whole range of other honours that The Queen awards in addition to these, such as The Order of Merit, or The Order of St Michael and St George.
Recipients collect their awards from The Queen or another Member of the Royal Family at an Investiture ceremony.
Most honours are awarded on the advice of the Cabinet Office, and anybody can make a recommendation if they know someone they believe to be worthy (see 'Honours nomination').
Sometimes, on the advice of the Foreign and Commonwealth Office, honorary decorations are awarded to people who are not British or Commonwealth nationals but who have made a significant contribution to relations between the UK and their own country.
Honours recipients are announced twice a year, once in the New Year's Honours List, and once on The Queen's official birthday.
Sometimes Orders are exchanged between The Queen and overseas heads of state as formal and official awards by which one nation honours another.
History of the Honours System
Throughout history, monarchs have rewarded those who have shown service, loyalty or gallantry with gifts or titles.
After medieval times, physical gifts such as land or money were replaced by the awarding of knighthoods and of membership within Orders of chivalry, accompanied by insignia such as gold or silver chains.
As the UK Government evolved and Parliament's legislative role grew in the eighteenth and nineteenth centuries, the Cabinet took over the role of selecting honours recipients.
Until the beginning of the nineteenth century, only members of the aristocracy and high-ranking military figures could be appointed to an Order of chivalry, but from then onwards appointments were drawn from a wider variety of backgrounds. In 1917 the Queen's grandfather, George V, developed a new order of chivalry, called the Order of the British Empire, as a way of rewarding both men and women who had made an outstanding contribution to the WWI war effort. Nowadays the Order of the British Empire rewards service in a wide range of areas, from acting to charity work, with honours that include the well-known MBE and OBE.
Types of Honours
The Order of the Garter
This is the oldest and most senior order of chivalry in Britain; members are selected and appointed personally by The Queen.
The Order of the Thistle
Recognising sixteen knights by personal gift of The Queen, this is the highest order of chivalry in Scotland.
Order of St Patrick
The national Order of Ireland, this lapsed in 1974 with the death of the last surviving recipient.
Order of the British Empire
Instituted in 1917 by George V to reward outstanding contribution to the war effort, this Order now rewards people from all walks of life with well-known honours such as MBEs and OBEs.
Order of Merit
The sole gift of the Sovereign to 24 members at any one time, this rewards those who have achieved greatly in the arts, learning, literature and science.
Companions of Honour
Sometimes regarded as a junior class of the Order of Merit, this Order rewards 65 individuals at any one time who have made a longstanding contribution to arts, science, medicine or government.
Order of the Bath
Including past members such as Nelson and Wellington, this Order recognises the work of senior military officials and civil servants.
Order of St Michael and St George
This order rewards service in a foreign country, or in relation to foreign and Commonwealth affairs.
Royal Victorian Order
The personal gift of the Sovereign, this honour is awarded to those who have served The Queen or the monarchy in a particular way.
Royal Family Orders
These are small portraits of the Sovereign attached to ribbon, gifted to Members of the Royal Family.
Commonwealth citizens can also receive UK awards, and Commonwealth countries have their own honours, which are sometimes awarded to UK citizens.
Military Honours and Awards
There are several different awards exclusively for rewarding bravery and outstanding service in the Armed Forces, the highest of which are The Victoria Cross and The George Cross. The newest of these is The Elizabeth Cross, instigated in 2009 to recognise families who have lost loved ones as a result of conflict or terrorism.
Anybody in the UK can make a recommendation for a British national to receive an honour. This ensures that many people who are not in the public eye are recognised for their valuable service and contribution, perhaps to charity, to the emergency services, or to their industry or profession.
Honours are awarded on the advice of the Cabinet Office, who deal with all nominations.
To find out more, or to nominate someone for an honour, please visit https://www.gov.uk/honours. | <urn:uuid:0bedfacd-4dc7-44f0-9c0a-2e48de2f2bdf> | CC-MAIN-2017-22 | https://www.royal.uk/queen-and-honours | s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463608665.33/warc/CC-MAIN-20170526124958-20170526144958-00298.warc.gz | en | 0.965425 | 1,019 | 3.046875 | 3 |
When playing, children have an innate curiosity that, when fostered, can turn into a life-long love of learning. Through play, children learn problem solving and critical thinking: "How do I make this puzzle piece fit?" "What happens when I do this?" Scaffolding can really help children learn. Supports such as hand-over-hand assistance, modeling, verbal prompts, turn taking and imitation help a child learn new skills when you are playing with them.
Children become self-motivated through play and learn determination with encouragement of loved ones to keep going. You can do it! And finally, when they accomplish, celebrate by saying “You did it!” | <urn:uuid:b6797612-ffbf-4775-8735-74c8165a4b22> | CC-MAIN-2023-40 | http://www.komie-assoc.com/blog/intellectual-competence | s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233506329.15/warc/CC-MAIN-20230922034112-20230922064112-00788.warc.gz | en | 0.950642 | 162 | 2.953125 | 3 |
Intel announced on Monday that it will be presenting a paper at Siggraph 2008 about its "many-core" Larrabee architecture, which will be the basis of future Intel graphics processors.
The paper itself, however, has already been published, and I was able to get a copy of it. (Unfortunately, as you'll see at that link, the paper is normally available only to members of the Association for Computing Machinery.)
The paper is a pretty thorough summary of Intel's motives for developing Larrabee and the major features of the new architecture. Basically, Larrabee is about using many simple x86 cores--more than you'd see in the central processor (CPU) of the system--to implement a graphics processor (GPU). This concept has received a lot of attention since Intel first started talking about it last year.
The paper also answers perhaps the biggest unanswered question about Larrabee--what are the cores, and how can Intel put "many" of them on a chip when desktop CPUs are still moving from two to four cores?
Intel describes the Larrabee cores as "derived from the Pentium processor," but I think perhaps this is an oversimplification. The design shown in the paper is only vaguely Pentium-like, with one execution unit for scalar (single-operation) instructions and one primarily for vector (multiple-operation) instructions.
That's the basic answer: Larrabee cores just have less going on. A quad-core desktop processor might have six or more execution units, and a lot of special logic to let it reorder instructions and execute code past conditional branches just in case it can guess the direction of the branch correctly. This complexity is necessary to maximize performance in a lot of desktop software, but it's not needed for linear, predictable code--which is what we usually find in 3D-rendering software.
But the vector unit in Larrabee is much more powerful than anything in older Intel processors--or even in the current Core 2 chips--because 3D rendering needs to do a lot of vector processing. The vector unit can perform 16 single-precision floating-point operations in parallel from a single instruction, which works out to 512 bits wide--great for graphics, though it would be overkill for a general-purpose processor, which is why the vector units in mainstream CPUs are 128 or 256 bits wide at most.
The new vector unit also supports three-operand instructions, probably including the classic "A * B + C" operation that is so common in many applications, including graphics. With three operands and two calculations per instruction, the peak throughput of a single Larrabee core should be 32 operations per cycle, and that's just what the paper claims.
I say "probably" because the Siggraph paper doesn't describe exactly what operations will be implemented in the vector unit, but I suspect this part of the Larrabee design is related to Intel's Advanced Vector Extensions, announced last April. The first implementations of AVX for desktop CPUs will apparently begin with a 256-bit design, another indication of how unusual it is for Larrabee to have a 512-bit vector unit.
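The throughput claim is easy to sanity-check with back-of-the-envelope arithmetic. In this Python sketch the core count and clock speed are purely hypothetical, since Intel has announced neither:

```python
# Peak single-precision throughput for a hypothetical Larrabee part.
# The paper's figure: 16 SIMD lanes x 2 ops (multiply + add) = 32 ops/cycle/core.
VECTOR_LANES = 16   # 512-bit vector unit / 32-bit floats
OPS_PER_LANE = 2    # a three-operand multiply-add counts as two FP ops

def peak_gflops(cores, clock_ghz):
    """Peak GFLOPS = cores x lanes x ops-per-lane x clock (GHz)."""
    return cores * VECTOR_LANES * OPS_PER_LANE * clock_ghz

print(peak_gflops(1, 1.0))    # -> 32.0, the paper's 32 ops/cycle for one core
print(peak_gflops(16, 2.0))   # -> 1024.0 for an assumed 16-core, 2 GHz chip
```

Whatever the final clock turns out to be, the point is that throughput scales linearly with both lane width and core count, which is exactly the bet Larrabee makes.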
The multithreading factor
Intel also built four-way multithreading into the Larrabee cores. Each Larrabee core can save all the register data from four separate threads in hardware, so that most thread-switch operations can be performed almost instantly rather than having to save one set of registers to main memory and load another. This approach is a reasonable compromise for reducing thread-switching overhead, although it probably consumes a significant amount of silicon.
Note that this kind of multithreading in Larrabee is very different from the Hyper-Threading technology Intel uses on Pentium 4, Atom, and future Nehalem processors. Hyper-Threading (aka simultaneous multi-threading) allows multiple threads to execute simultaneously on a single core, but this only makes sense when there are many execution units in the core. Larrabee's two execution units are not enough to share this way.
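A toy model shows why holding four contexts in hardware matters. The cycle costs below are invented for illustration, not measured Larrabee figures:

```python
# Toy cost model for thread switching on a core with four resident contexts.
HW_SWITCH_CYCLES = 1       # flip to another resident hardware context (assumed)
MEMORY_SWAP_CYCLES = 100   # spill one register set to memory and reload another (assumed)

def rotation_cost(num_threads, resident_contexts=4):
    """Cycles spent on switches while rotating once through all threads."""
    if num_threads <= resident_contexts:
        return num_threads * HW_SWITCH_CYCLES
    # Once threads outnumber contexts, every switch goes through memory.
    return num_threads * MEMORY_SWAP_CYCLES

print(rotation_cost(4))   # -> 4: all four threads stay resident
print(rotation_cost(8))   # -> 800: register state overflows to memory
```

A real scheduler would keep some switches cheap even past four threads, but the two orders of magnitude separating the resident and non-resident cases is the point.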
All of these differences prove rather conclusively that Larrabee's cores are not the same as the cores in Intel's Atom processors (also known as Silverthorne). That surprised me; the Atom core seemed fairly appropriate for the Larrabee project. All that really should have been necessary was to graft a wider vector unit onto the Atom design. But now I suppose the Atom and Larrabee projects have been completely independent from one another all along.
Intel won't say how many cores are in the first chip. The paper describes an on-chip ring network that connects the cores. The network is 512 bits wide. Interestingly, the paper mentions that there are two different ring designs--one for Larrabee chips with up to 16 cores, and one for larger chips. That suggests Intel has chips planned with relatively small numbers of cores, possibly as few as four or eight. Such small implementations might be appropriate for Intel's future integrated-graphics chip sets, but as such they will be very slow by comparison with contemporary discrete GPUs, just as Intel's current products are.
Larrabee provides some graphics-specific logic in addition to the CPU cores, but not much. The paper says that many tasks traditionally performed by fixed-function circuits, such as rasterization and blending, are performed in software on Larrabee. This is likely to be a disadvantage for Larrabee, since a software solution will inevitably consume more power than optimized logic--and consume computing resources that could have been used for other purposes. I suspect this was a time-to-market decision: tape out first, write software later.
The paper says Larrabee does provide fixed-function logic for texture filtering because filtering requires steps that don't fit as well into a CPU core. I presume there's other fixed-function logic in Larrabee, but the paper doesn't say.
Larrabee's rendering code uses binning, a technique that has been used in many software and hardware 3D solutions over the years, sometimes under names such as "tiling" and "chunking." Binning divides the screen into regions and identifies which polygons will appear in each region, then renders each region separately. It's a sensible choice for Larrabee, since each region can be assigned to a separate core.
Binning also reduces memory bandwidth, since it's easier for each core to keep track of the lower number of polygons assigned to it. The cores are less likely to need to go out to main memory for additional information.
The numbers crunch
The paper gives some performance numbers, but they're hard to interpret. For example, game benchmarks were constructed by running a scene through a game, then taking only widely separated frames for testing on the Intel design. In the F.E.A.R. game, for example, only every 100th frame was used in the tests. This creates an unusually difficult situation for Larrabee; there's likely to be much less reuse of information from one frame to the next.
But given that limitation of the test procedure, the results don't look very good. To render F.E.A.R. at 60 frames per second--a common definition of good-enough gaming performance--required from 7 to 25 cores, assuming each was running at 1GHz. Although there's a range here depending on the complexity of each frame, good gameplay requires maintaining a high frame rate--so it's possible that F.E.A.R. would, in practice, require at least a 16-core Larrabee processor.
In other words, unless Intel is prepared to make big, hot Larrabee chips, I don't think it's going to be competitive with today's best graphics chips on games.
Intel can certainly do that--no other semiconductor company on Earth can afford to make big chips the way Intel can--but that would ruin Intel's gross margins, which are how Wall Street judges the company. Also, Intel's newest processor fabs are optimized for high-performance logic, like that used in Core 2 processors. Larrabee runs more slowly, suggesting it could be economically manufactured on ASIC product lines... but Intel's ASIC lines are all relatively old, refitted CPU lines.
Nvidia, by comparison, gets around this problem by designing its chips from the beginning to be made in modern ASIC factories, chiefly those run by TSMC. Although these factories are a generation behind Intel's in process technology, they're much less expensive to operate. So this may be a situation where Intel's process edge doesn't mean as much as it does in the CPU business.
The Larrabee programming model also supports nongraphics applications. Since it's fundamentally just a multicore x86 processor, it can do anything a regular CPU can do. Intel's paper even uses Sun Microsystems' term, Throughput Computing, for multicore processing.
The Larrabee cores aren't nearly as powerful as ordinary notebook or desktop processors for most applications. Real Larrabee chips will likely be faster than the 1GHz reference frequency used in the paper, but they still don't have as many execution units for the scalar operations that make up the bulk of operating-system and office software. That means a single Larrabee core could feel slow even when compared with a Pentium III processor at the same frequency, never mind a Core 2 Duo.
But with such a strong vector unit, a Larrabee core could be very good at video encoding and other tasks, especially those that use floating-point math. At 1GHz, a single Larrabee core hits a theoretical 32 GFLOPS (32 billion floating-point operations per second). A 32-core Larrabee chip could exceed a teraflop--roughly the performance of Nvidia's latest GPU, which has 240 (very simple) cores.
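The peak figures fall straight out of the vector width; a quick back-of-the-envelope check, using the paper's 1GHz reference frequency and an assumed 32-core part:

```python
lanes = 16           # single-precision lanes in the 512-bit vector unit
flops_per_lane = 2   # multiply + add from one fused instruction
clock_hz = 1.0e9     # the paper's 1GHz reference frequency

core_gflops = lanes * flops_per_lane * clock_hz / 1.0e9
chip_gflops = 32 * core_gflops  # hypothetical 32-core Larrabee

print(core_gflops)  # 32.0
print(chip_gflops)  # 1024.0 -- just over a teraflop
```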
But I don't expect to see that kind of performance from the first Larrabee chips. The power consumption of a 32-core design with all the extra overhead required by x86 processing would be very high. Even with Intel's advantages in process technology, such a large Larrabee chip would probably be commercially impractical. Smaller Larrabee designs may find some niche applications, however, acting as number-crunching coprocessors much as IBM's Cell chips do in some systems.
And although a Larrabee chip could, in principle, be exposed to Windows or Mac OS X to act as a collection of additional CPU cores, that wouldn't work very well in the real world and Intel has no intention of using it that way. Instead, Larrabee will be used like a coprocessor. In that application, Larrabee's x86 compatibility isn't worth very much.
The bottom line
So...what's Larrabee good for, and why did Intel bother with it?
I think maybe this was a science project that got out of hand. It came along just as AMD was buying ATI and so positioning itself as a leader in CPU-GPU integration. Intel had (and still has) no competitive GPU technology, but perhaps it saw Larrabee as a way to blur the line distinguishing CPUs from GPUs, allowing Intel to leverage its expertise in CPU design into the GPU space as well.
Intel may have paid too much attention to some of its own researchers, who have been touting ray tracing as a potential alternative to traditional polygon rendering. I wrote about this in some depth back in June. But ray tracing merits just one paragraph and one figure in this paper, which establish merely that Larrabee is more efficient at ray tracing than an ordinary Xeon server processor. It falls well short of establishing that ray tracing is a viable option on Larrabee, however.
Future members of the Larrabee family may be good GPUs, but from what I can see in this paper, the first Larrabee products will be too slow, too expensive, and too hot to be commercially competitive. It may be several more years beyond the expected 2009/2010 debut of the first Larrabee parts before we find out just how much of Intel's CPU know-how is transferable to the GPU market.
I'll be at Siggraph again this year, and I'll have more to say after I've read this paper through a few more times and had a chance to speak with some of the folks I know at AMD, Nvidia, and other companies in the graphics market.
This is how our clothing affects the empathy that other people feel towards us.
Wearing less revealing, and more comfortable clothing, makes other people feel more empathy towards you, according to new research.
However, when women in the study wore a short dress, heels and heavy make-up, it reduced how much empathy others felt towards them. The same was true whether it was a man or woman observing the other woman.
When the women wore comfortable trousers, a jersey, ballet flats, and light make-up, others felt more empathy towards them.
The results are likely because revealing more skin tends to make people see us more as a sexual object, rather than a person.
In order to come to this conclusion, the researchers had study participants play a cyber game where they tossed a ball to different actors: “sexualized” women (in a dress, high heels, and heavy makeup), “personalized” women (in jeans, a t-shirt, and light makeup), and themselves. The virtual ball-tossing game was used to elicit negative emotions by excluding the actors from the game at different points, and positive emotions by including them. The results revealed the study participants were far less likely to feel empathy for the sexualized woman when she was excluded from the ball-tossing game, and they felt less intense positive emotions when she was included.
Dr Giorgia Silani, who led the study, said: “The results suggests that the underlying mechanism may be a reduced activation of the brain’s empathy network.”
Sexual objectification also robs a person, in the eye of the beholder, of their apparent capacity to plan their actions and to exercise moral judgment.
The results come from a study in which 41 people (20 women, 21 men) watched a video designed to test empathic reactions. Brain scans measured how they reacted to the sexualised and non-sexualised targets.
The researchers measured the brain activity of study participants, and found that fewer regions associated with empathy lit up when it came to the sexualized actor. "This reduction in empathic feelings towards sexually objectified women was accompanied by reduced activity in empathy related brain areas. This suggests that observers experienced a reduced capacity to share the sexualized women's emotions," Silani explained.
Furthermore, the authors note in the study that "the emotional intensity reported [by study participants] for the self was the highest, followed by the personalized targets, and the objectified targets as the lowest." Meaning, the participants were less likely to empathize and relate to the sexualized women — whether they were feeling positive emotions or negative emotions. However, when the researchers modified the clothing on the sexualized model and covered more skin, they found the participants became more empathetic.
The purpose of this study was to examine the attitudes and beliefs of preservice teachers concerning inclusive education for students with severe disabilities. Individual interviews were conducted with 35 preservice teachers to determine their attitudes and beliefs concerning inclusion of students with severe disabilities and to examine the factors that influenced these attitudes and beliefs. Following qualitative data analysis procedures, findings indicated that the preservice teachers were relatively evenly divided on their opinions about where students with severe disabilities should receive educational services. The most significant finding of this study was that the preservice teachers attributed the underlying basis of their beliefs about inclusive education to prior experiences in their schools, families, and communities. These findings suggested that teacher educators should consider the far-reaching impact of the training they provide. The future of inclusion may depend upon preparing thoughtful practitioners whose positive attitudes and beliefs are modeled in their classrooms and in their communities. These teachers will have the power to influence the attitudes and beliefs of the members of the "villages" in which they teach.
Garriott, P. P., & Ringlaben, R. If it Takes a Village, Then We'd Better Educate the Villagers: Preservice Teachers' Attitudes and Beliefs about the Inclusion of Students with Severe Disabilities. Electronic Journal for Inclusive Education, 1.
Arduino 101: Introduction to Micro-Programming
Arduino is a microcontroller-based circuit board with associated software that allows you to create, automate, and customize a variety of projects. It is an efficient and relatively easy way to add electronic "brains" to electro-optical and electro-mechanical systems.
No previous electronics or programming experience is necessary. Through hands-on exercises, the workshop will introduce you to the Arduino board hardware and software and show you how to interface it to circuits that you build. The workshop fee includes an Arduino board and all of the necessary electronics for the exercises. Please bring a computer with the Arduino IDE installed (arduino.cc).
For more information please visit http://www.fablabtacoma.com/ai1ec_event/arduino-101-workshop
FabLab Tacoma
1938 Market St
Tacoma, WA 98402
Kid Friendly: Yes!
Dog Friendly: No
Relapse does not mean treatment was a failure. It indicates that a person needs supportive services and a modified treatment plan.
Relapse may seem like a failure, but it is actually a part of the recovery process. If you are in treatment for addiction, it is important to know that relapse and recovery can occur together as part of an ongoing journey. Here are five reasons why a relapse doesn't mean you've failed.
Addiction is a Disease
One reason that relapse is not indicative of failure is that drug or alcohol addiction is a chronic relapsing illness. As experts have reported, research supports the conclusion that addiction is a brain disease because it weakens the brain's ability to experience pleasure and motivation, increases a person's response to stress, creates cravings and unpleasant emotions when cravings go unsatisfied, and impairs functioning of brain regions associated with controlling inhibitions, making decisions, and regulating behavior.
Because addiction is a disease that affects the way your brain works, relapse is also part of the disease.
Relapse is Common
Relapse is a common part of the recovery process. According to the National Institute on Drug Abuse (NIDA), relapse statistics show that 40-60% of people relapse after completing treatment. This relapse rate is comparable to that seen with physical illnesses, such as asthma and high blood pressure, for which the rate of relapse is between 50-70%.
The chances of relapse after rehab are moderately high, indicating that relapse is a normal part of recovery and not an individual failure.
The Recovery Village recently surveyed 2,136 American adults who either wanted to stop drinking alcohol or had already tried to (successfully or not). Of those, only 29.4% reported not relapsing at all. The largest group (32.3%) relapsed back to alcohol use within the first year after stopping. With perseverance, your chances of relapsing decrease the longer you stay sober: 21.4% relapsed in their second year in recovery, but only 9.6% relapsed in years three through five, and only 7.2% did so after their fifth year in recovery.
Recovery is a Lifelong Journey
It’s important to remember that recovery is a lifelong journey. One expert explains that recovery is more like a form of remission, where relapse is still possible. Recovery can mean that someone is making progress but is not cured.
Your recovery journey is an ongoing process, not a single event where you’re suddenly cured and will never experience a relapse.
Relapse is a Sign You Need to Alter Treatment
Instead of seeing relapse as a failure, relapse can be a sign that it’s time to make some changes to your treatment plan. Per the National Institute on Drug Abuse, a relapse indicates that a person in recovery needs to have a discussion with a professional about altering their treatment or perhaps even returning to treatment. Developing a plan that includes relapse prevention strategies can be helpful and reduce the risk of future relapses.
Researchers have found that a successful relapse prevention plan should help people to identify the early signs of relapse as well as develop coping skills for dealing with stressors, cravings, and thoughts of using drugs. Cognitive therapy and relaxation techniques can be helpful interventions for preventing relapse.
Recovery Involves Building a New Life
Recovery involves creating a sober lifestyle and completely changing past habits, and it is understandable that there may be relapses during the course of building a new life. Addiction experts explain that changing your life is the first step in the recovery process, and this involves avoiding people you used drugs with as well as the places you went to use drugs. Building a new life also requires changing unhealthy thought processes associated with substance abuse.
Change can be difficult, and there may be relapses along the way, but it is possible to lead a sober life with recovery support. People in recovery may benefit from working with a peer support specialist to assist them in building their new sober life.
If you are struggling with an addiction to drugs or alcohol and are ready to create a sober lifestyle, The Recovery Village offers comprehensive treatment services, including aftercare and relapse prevention planning, to meet your needs. Contact our admissions department today to begin your journey toward recovery.
Volkow, Nora et al. “Neurobiologic advances from the brain disease model of addiction.” The New England Journal of Medicine, January 28, 2016. Accessed August 20, 2019.
National Institute on Drug Abuse. “Treatment and Recovery.” July 2018. Accessed August 20, 2019.
Melemis, Steven. “Relapse prevention and the five rules of recovery.” Yale Journal of Biology and Medicine, September 2015. Accessed August 20, 2019.
Substance Abuse and Mental Health Services Administration. “Peers.” August 12, 2019. Accessed August 20, 2019.
The Recovery Village aims to improve the quality of life for people struggling with substance use or mental health disorder with fact-based content about the nature of behavioral health conditions, treatment options and their related outcomes. We publish material that is researched, cited, edited and reviewed by licensed medical professionals. The information we provide is not intended to be a substitute for professional medical advice, diagnosis or treatment. It should not be used in place of the advice of your physician or other qualified healthcare providers.
1. Molecular or Gene Cloning: Molecular or gene cloning, the process of creating genetically identical DNA molecules, provides the foundation of the molecular biology revolution and is a fundamental and essential tool of biotechnology research, development, and commercialization.
Methods of Cloning
i) Artificial Embryo Twining (AET)
- In AET, the natural process of creating identical twins is mimicked, but in a lab instead of the mother's womb. Scientists separate a very early embryo into individual cells and then allow each cell to divide and develop on its own. The resulting embryos are placed into a surrogate mother, where they are carried to term and delivered. Since all the embryos come from the same zygote, they are genetically identical.
ii) Somatic Cell Nuclear Transfer (SCNT)
- SCNT involves the isolation of a somatic (body) cell, which is any cell other than those used for reproduction (sperm and egg, known as the germ cells). In mammals, every somatic cell has two complete sets of chromosomes, whereas the germ cells have only one complete set. To make Dolly (the first cloned sheep), scientists extracted the nucleus of a somatic cell taken from an adult female sheep and transferred it to an egg cell from which the nucleus had been removed. After some chemical manipulation, the egg cell, with the new nucleus, behaved like a freshly fertilized zygote. It developed into an embryo, which was implanted into a surrogate mother and carried to term, giving birth to Dolly.
- There are considerable technical issues and risks involved in the technique. (Note: Dolly died at the age of six while a sheep usually lives for 10 to 16 years. Also, Dolly had a fast rate of ageing.)
- No matter how many species are cloned, human cloning will remain an experiment, because different species respond differently to cloning; even if the problems of ape or monkey cloning were solved, the cloning of humans would still face its own difficulties.
- Another problem is the relatively low rate of success in cloning and the fact that many of the cloned animals are born with deformities or congenital defects.
- Cloning of human babies is likely to become feasible, but many regard this as both dangerous and morally wrong.
- Using cloning and genetic engineering, it is possible to change the genetic makeup of an embryo, giving rise to the possibility of the "designer baby". Some people see nothing wrong in this; they argue it is a better investment than spending huge sums of money on schools. But it is also true that such technology would be available only to a few wealthy people.
- Considering the “Designer Baby” scenario, it is also important to remember that no technology is ever precisely predictable, and our current understanding of genetic engineering is very limited.
- Religious groups have also raised serious objections, as cloning conflicts with basic religious principles.
Your IP Address is an identifier for a device on a network.
An IP address is written as four sets of numbers separated by periods.
Within a local network, it can be assigned randomly, as long as each one is distinct.
If several devices are on the same network, they can share one public IP address, so a trace may lead to any one of them or show only the network's server.
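Python's standard `ipaddress` module makes this structure easy to see (the address below is just an example, not your address):

```python
import ipaddress

# An IPv4 address: four numbers (octets), each 0-255, separated by periods.
addr = ipaddress.ip_address("192.168.1.10")
print(addr.version)     # 4
print(addr.is_private)  # True -- 192.168.0.0/16 is reserved for local networks

# Devices on one local network share a network prefix; only the last
# portion needs to be distinct within that network.
net = ipaddress.ip_network("192.168.1.0/24")
print(addr in net)      # True
```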
The Truth About Your IP Address
- When you go online, you automatically have an active IP address
No matter what device (laptop, smartphone, etc) you use to go online with an internet connection, you automatically have an active IP address. IP stands for Internet Protocol: The protocols are connectivity guidelines and regulations that govern computer networks.
- IP addresses are assigned to the device, not the people
The IP address detected when you're connected to a network is actually assigned to the device you use. It does not reveal your personal details or your exact current location.
- Your Internet Service Provider (ISP) is the only one who knows your personal details that corresponds to an IP address
Your ISP will keep your real information private and will not disclose the IP addresses (or names and addresses) of clients to just anyone asking for them. However, they would disclose that information to law enforcement agencies if legally required.
- Yes, you can hide your actual IP address
To be precise, you can show a different IP address from the one you're actively using. You can do that by using proxies configured in your browser, by browsing through a web proxy site, or by using a Virtual Private Network (VPN).
- A Virtual Private Network (VPN) is the best and easiest way to hide your real IP address
A VPN also offers other online safety benefits, including keeping snooping eyes out of your computer and away from your personal and financial activity. Some VPNs are free, but the better ones charge a small monthly fee.
Cable car 2
A cable car rises at an angle of 41° and connects the upper and lower stations, which differ in altitude by 1175 m. How long is the cable car's track?
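The track is the hypotenuse of a right triangle whose opposite leg is the altitude difference, so track = 1175 / sin 41°. A quick check in Python:

```python
import math

angle_deg = 41.0  # climb angle of the cable car
rise_m = 1175.0   # altitude difference between the stations

# sin(angle) = rise / track  =>  track = rise / sin(angle)
track_m = rise_m / math.sin(math.radians(angle_deg))
print(round(track_m))  # 1791 -- the track is about 1791 m long
```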
Next similar math problems:
The largest angle at which the lift rises is 16°31'. Give climb angle in permille.
- Equilateral triangle
What is the minimum radius of a circular plate from which an equilateral triangle with a 19 cm side can be cut?
A circular reflector on a 33 m tower throws a light cone with a vertex angle of 49°. The axis of the light beam makes an angle of 30° with the axis of the tower. What is the maximum length of the illuminated strip on the horizontal plane?
The longer leg of a 30°-60°-90° triangle measures 5. What is the length of the shorter leg?
- Reference angle
Find the reference angle of each angle:
- An angle
In triangle ABC, angle x is opposite side AB, which measures 10; side AC, the hypotenuse, measures 15. Calculate angle x.
A flowerbed has the shape of an obtuse isosceles triangle. Its arm measures 5.5 meters, and the angle opposite the base is 94°. What is the distance from the base to the opposite vertex?
- SAS triangle
The triangle has two sides of lengths 7 and 19 and an included angle of 36°. Calculate the area of this triangle.
How tall is a tree observed at a visual angle of 52°, if I stand 5 m from the tree and my eyes are two meters above the ground?
- High wall
I have a wall 2 m high. I need a 15-degree upward angle to a second wall 4 meters away. How high must the second wall be?
- Triangle 42
In triangle BCA, the angles are A = 119°, B = (3y + 14)°, and C = (4y)°. What are the measures of angles B and C?
- Internal angles
Find the internal angles of triangle ABC if the angle at vertex C is twice the angle at vertex B, and the angle at vertex B is 4 degrees smaller than the angle at vertex A.
- Height 2
Calculate the height of the equilateral triangle with side 38.
- Theorem prove
We want to prove the statement: If a natural number n is divisible by six, then n is divisible by three. From what assumption do we start?
- Center traverse
Is it true that the middle traverse (midsegment) bisects the triangle?
In ▵ ABC, if sin(α)=0.5 and sin(β)=0.6 calculate sin(γ)
State and Local Climate and Energy Program
Assessing Air Quality, Greenhouse Gas, and Public Health Benefits
What are the Air Quality, Greenhouse Gas, and Public Health Benefits of Clean Energy?
Electricity generation from fossil fuels is a major source of various types of air pollution. Many states and localities are exploring or implementing clean energy policies to achieve reductions in greenhouse gases (GHGs) and criteria air pollutants, such as particulate matter, ground level ozone, carbon monoxide, sulfur oxides, and nitrogen oxides. While GHGs have a global effect, contribute to climate change, and can last more than 100 years, criteria air pollutants have a local to regional effect on air quality and human health and can dissipate in hours or days. Clean energy measures that reduce criteria air pollutants, therefore, can result in almost immediate local improvements in air quality and human health.
Tools and methods are available to help states estimate the impact of clean energy policies on criteria air pollutant emissions, ambient air quality, and the related environmental and health impacts.
Steps in Estimating Benefits
- Develop and project a baseline emissions inventory: Select a method, compile emissions from available sources into an inventory, and develop a forecast.
- Quantify emission reductions: Estimate emissions under clean energy policies using energy savings estimates, load profiles, emission factors, and control technology or fuel data, then compare against the baseline.
- Estimate changes in air quality resulting from emission reductions: Use criteria air pollutant data to estimate changes in air quality with an air quality model.
- Estimate human health and related economic effects of air quality changes: Use data on air quality changes together with epidemiological and population information to estimate health effects, then apply economic values of avoided health effects to monetize benefits.
Developing an Emissions Inventory Forecast Baseline
States can use many sources of data as they compile top-down or bottom-up inventories. Some of these data sources focus specifically on criteria air pollutants, some focus on GHGs, and some include both. More information is available about Developing a GHG Inventory.
|Date Source||Type of Air Pollutant or GHG Emissions||Approach|
|National Emissions Inventory (NEI)||√||√||√||√||√|
|Emissions Collection and Monitoring Plan System (ECMPS)||√||√||√||√|
|World Resources Institute Climate Analysis Indicators Tool||√||√||√|
|State GHG Inventories||√||√||√|
|Local GHG Inventories||√||√||√|
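The first step, compiling source-level emissions into a base-year inventory and projecting it forward, can be sketched like this (all source names, tonnages, and the growth rate below are hypothetical):

```python
# Toy bottom-up NOx inventory: sum emissions by source, then project a
# no-policy baseline with a simple constant growth rate.
sources_tons_nox = {
    "power_plant_a": 1200.0,
    "power_plant_b": 800.0,
    "industrial_boilers": 450.0,
}

base_year = 2010
base_year_tons = sum(sources_tons_nox.values())  # 2450.0 tons

growth_rate = 0.01  # assumed 1% annual growth absent new policies
forecast = {
    year: base_year_tons * (1 + growth_rate) ** (year - base_year)
    for year in range(base_year, base_year + 6)
}
print(round(forecast[2015], 1))  # 2575.0 tons projected for 2015
```

A real inventory would draw these totals from sources such as the NEI rather than hand-entered figures.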
Evaluation of the Wisconsin Focus on Energy Program's energy efficiency and renewable energy projects funded by the Utility Public Benefits fund shows that from program inception in July 2001 through June 30, 2006, the state displaced annual emissions from power plants and utility customers of about:
- 5.8 million pounds of NOX
- 2.6 billion pounds of CO2
- 11.4 million pounds of SO2
- 46 pounds of mercury (Hg)
In 2004, the Texas Commission on Environmental Quality evaluated the Texas Emissions Reduction Plan and calculated that it achieves an annual reduction of NOx emissions of 346 tons through energy efficiency and renewable energy. NOx reductions over the period 2007—2012 are projected to range from 824 tons per year in 2007 to 1,416 tons per year in 2012.
Quantifying Emission Reductions
States can use a range of tools and approaches to quantify the emissions changes from clean energy policies, which can then be compared to the baseline emissions inventory to determine emission benefits.
Basic approaches provide policymakers with approximate estimates of emission reductions they can use for preliminary short-term studies and program evaluation or design. They are often less expensive than more complicated models, but because of their simplicity they are unable to provide the levels of detail that some policymakers require.
Sophisticated approaches are more complex and can offer a wider range of modeling options. These tools provide policymakers with richer insight into the range of emission results and are appropriate to use for regulatory decisions and long-term analyses. They are more costly to run, however, and can require significant technical expertise.
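To make the basic approach concrete, the arithmetic usually reduces to multiplying estimated energy savings by an emissions factor for the displaced generation and comparing the result against the baseline inventory. The sketch below uses entirely illustrative numbers (the savings, factors, and baseline are assumptions, not values from any state inventory):

```python
# Basic (non-modeled) estimate of emission reductions from an
# energy-efficiency program. All numbers are illustrative assumptions.

def avoided_emissions(mwh_saved, emission_factor_lb_per_mwh):
    """Avoided emissions (lb) = energy saved (MWh) x marginal emission
    factor (lb/MWh) for the displaced generation."""
    return mwh_saved * emission_factor_lb_per_mwh

# Assumed annual program savings and factors for a coal-heavy grid region.
mwh_saved = 120_000                      # MWh saved per year (assumption)
factors = {"CO2": 1_600.0,               # lb/MWh (assumptions)
           "NOx": 1.8,
           "SO2": 3.5}

baseline = {"CO2": 9.0e9, "NOx": 1.0e7, "SO2": 2.0e7}  # lb/yr (assumption)

for pollutant, ef in factors.items():
    avoided = avoided_emissions(mwh_saved, ef)
    pct = 100.0 * avoided / baseline[pollutant]
    print(f"{pollutant}: {avoided:,.0f} lb avoided ({pct:.2f}% of baseline)")
```

A real analysis would replace the flat emission factor with hourly load profiles and dispatch data, which is precisely what distinguishes the sophisticated approaches described below.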
Modeling Air Quality Changes
Clean energy policies that reduce both primary (e.g., NOX) and secondary (e.g., ozone) air pollutants may improve ambient air quality. States can use models to estimate changes in ambient air quality, such as those currently used for State Implementation Plans, as required by the Clean Air Act.
Modeling ambient air quality impacts can be a complex task requiring sophisticated air quality models and extensive data inputs (e.g., meteorology). States can use one of three types of models to conduct this type of analysis: dispersion models, photochemical models, and receptor models. All of the models require location-specific information on emissions and source characteristics, although they may represent photochemistry, geographic resolution, and other factors to very different degrees. States can learn more information about these models through EPA's Support Center for Regulatory Atmospheric Modeling (SCRAM).
Estimating Health Effects and Related Economic Value
Where clean energy measures improve air quality or avoid damage to air quality, they may prevent adverse health outcomes, such as illnesses and deaths. States can use basic and sophisticated modeling approaches to estimate the human health effects of air quality changes and the monetary value of avoided health effects, a key component of a comprehensive economic benefit-cost analysis. This information can help states compare across alternative program options and communicate some of the most important advantages of clean energy.
Tools & Resources
Assessing the Multiple Benefits of Clean Energy
Assessing the Multiple Benefits of Clean Energy: A Resource for States provides an overview of the multiple benefits of clean energy and their importance. It includes information on:
- The importance of and approaches to calculating or estimating energy savings as the foundation for deriving multiple benefits
- A range of tools and approaches to estimating energy systems, environmental, and economic benefits across varying levels of rigor
- How states have supported the use of clean energy through the estimation of multiple benefits
Clean Air and Climate Protection Software Tool (CACPS)
CACPS calculates and tracks emissions and reductions of greenhouse gases (carbon dioxide, methane, nitrous oxide) and criteria air pollutants (NOX, SOX, carbon monoxide, volatile organic compounds, PM10, PM2.5) associated with electricity, fuel use, and waste disposal.
Co-Benefits Risk Assessment (COBRA) Screening Model
COBRA is a screening tool that enables users to:
- Roughly estimate the impact of emission changes on ambient air pollution
- Further translate this into health effect impacts
- Monetize the value of those impacts
- View the estimated county-level results in tables and maps
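The COBRA chain (emission change, air quality change, health effects, dollars) can be sketched with the log-linear concentration-response form commonly used in this kind of screening. All inputs below are placeholders for illustration, not COBRA's actual coefficients or values:

```python
import math

def avoided_cases(baseline_rate, beta, delta_conc, population):
    """Log-linear concentration-response function: cases avoided when the
    ambient concentration falls by delta_conc (ug/m3).
    baseline_rate: annual incidence per person; beta: C-R coefficient."""
    return baseline_rate * (1.0 - math.exp(-beta * delta_conc)) * population

# Placeholder inputs (assumptions, not actual COBRA/BenMAP values).
baseline_mortality = 0.008     # annual deaths per person
beta = 0.006                   # per ug/m3 of PM2.5 (illustrative)
delta_pm25 = 0.5               # ug/m3 improvement attributed to the policy
county_pop = 250_000
value_per_case = 9_000_000     # assumed monetized value per avoided death, $

deaths_avoided = avoided_cases(baseline_mortality, beta, delta_pm25, county_pop)
benefit = deaths_avoided * value_per_case
print(f"{deaths_avoided:.1f} deaths avoided, ~${benefit:,.0f}/yr in benefits")
```

In practice the same function form is applied per county and per health endpoint (mortality, asthma attacks, work-loss days, and so on), then summed.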
E-Calc
E-Calc is a Web-based calculator that allows government and building industry users to design and evaluate a wide range of projects for energy savings and emissions reduction potential. This tracking tool was developed by Texas A&M University's Energy Systems Laboratory in response to legislative incentives to quantify emissions reductions from building energy savings and distributed renewable technology. E-Calc evaluates residential, commercial, retail, and municipal building energy and emissions savings, as well as savings from renewables like solar heating, solar PV, and wind power.
Environmental Benefits Mapping and Analysis Program (BenMAP)
BenMAP is a tool for estimating the health and economic benefits of air pollution reduction strategies. It combines air pollution monitoring data, air quality modeling data, census information, and population projections to calculate a population's potential exposure to ambient air pollution. BenMAP is used primarily to estimate benefits from changes in particulate matter and ozone concentrations, but it can also be adapted for other pollutants. BenMAP runs on most Windows-based computers.
OTC Workbook
The OTC Workbook is a free tool developed for the Ozone Transport Commission to help local governments prioritize clean energy actions. The Workbook uses a detailed Microsoft Excel spreadsheet format based on electric power plant dispatch and on the energy savings of various measures to determine the air quality benefits of various actions taken in the OTC Region. This tool is simple, quick, and appropriate for scenario analysis. It can calculate predicted emission reductions from energy efficiency, renewables, energy portfolio standards (EPSs), and multi-pollutant proposals.
Power Profiler
States can use this tool to evaluate the environmental benefits of choosing cleaner sources of energy. The Power Profiler is a Web-based tool that allows users to evaluate the air pollution and greenhouse gas impact of their electricity choices. Using only a ZIP code, the tool generates a report describing the characteristics of one's electricity use.
Support Center for Regulatory Atmospheric Modeling (SCRAM)
EPA's SCRAM website provides descriptions and documentation for three types of air quality models (dispersion, photochemical, and receptor models); modeling guidance and support for applying air quality models in regulatory applications; and information on meteorological data used in air quality models, as derived from both ambient measurements and meteorological models.
Cancer patients have another weapon in their fight against the deadly disease: immunotherapy. It is changing cancer care, and making treatment more powerful with fewer side effects.
Immunotherapy uses the body’s natural defense system to fight against cancer. Unlike chemotherapy, which destroys both cancer cells and healthy cells, immunotherapy can be targeted at specific cells.
“Targeted immunotherapies work by attacking certain proteins on the surface of cells that identify cancer and spur the body’s immune system to destroy it,” says Mark A. Meadors, DO, medical oncologist, Saint Francis Medical Partner. “In general, immunotherapies stimulate the body’s immune system to fight cancer more rigorously.”
Cancer cells are notorious for mutating, which makes them less responsive to treatments that previously worked. However, immunotherapy can trigger a cascade of immune cells that can recognize cancer cells with certain antigens – even as they mutate.
“This is a very exciting area in cancer treatment,” says Meadors. “We are constantly learning about new developments, which allow us to give our patients more hope than ever before. Tumors that have not been responsive to other types of treatment are decreasing in size, and often disappearing completely, after being exposed to immunotherapy.”
“As researchers find new ways to fight cancer, our patients benefit,” says Meadors.
For more information, call 573-331-3996 or visit Saint Francis Medical Center’s Cancer Institute webpage. | <urn:uuid:176b0094-487b-4778-9224-0914ac80fd55> | CC-MAIN-2019-18 | https://www.sfmc.net/health-page/18933/ | s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578530176.6/warc/CC-MAIN-20190421040427-20190421062427-00316.warc.gz | en | 0.95474 | 326 | 2.953125 | 3 |
During 2006, 242 confirmed cases of cryptosporidiosis (4.7 per 100,000) were reported. This equals the highest number of cases ever reported in Minnesota, and is 42% higher than the median number of cases reported annually from 1996 to 2005 (median, 170 cases; range, 81 to 242). The median age of case-patients in 2006 was 24 years (range, 1 month to 97 years). Children 10 years of age or younger accounted for 29% of cases. Fifty-one percent of cases occurred during July through October. The incidence of cryptosporidiosis in the Southwestern, West Central, South Central, Southeastern, and Northeastern districts (14.6, 11.4, 9.4, 8.3, and 7.1 cases per 100,000, respectively) was significantly higher than the statewide incidence. Only 59 (24%) reported cases occurred among residents of the metropolitan area (2.1 per 100,000). Forty-three (18%) case-patients required hospitalization, for a median of 4 days (range, 1 to 56 days). Six cases were known to be HIV-infected. One outbreak of cryptosporidiosis was identified in 2006, accounting for 17 laboratory-confirmed cases. This outbreak was associated with multiple school swimming pools.
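The rates quoted above are crude incidence rates. The arithmetic is simple; here it is sketched in Python, with Minnesota's approximate 2006 population supplied as an assumed input (it is not stated in the report):

```python
def incidence_per_100k(cases, population):
    """Crude incidence rate per 100,000 population."""
    return cases / population * 100_000

mn_population_2006 = 5_170_000   # approximate statewide population; assumption
rate = incidence_per_100k(242, mn_population_2006)
print(f"{rate:.1f} cases per 100,000")   # close to the reported 4.7
```

The district-level rates in the paragraph come from the same formula applied to each district's case count and population.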
- Note: For up to date information see: Cryptosporidiosis (Cryptosporidium spp.)
- Go to full issue: Annual Summary of Communicable Diseases Reported to the Minnesota Department of Health, 2006 | <urn:uuid:f7d91024-e276-45a6-8159-709500c485b8> | CC-MAIN-2013-48 | http://www.health.state.mn.us/divs/idepc/newsletters/dcn/sum06/cryptosporidiosis.html | s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386163052712/warc/CC-MAIN-20131204131732-00005-ip-10-33-133-15.ec2.internal.warc.gz | en | 0.937731 | 316 | 2.578125 | 3 |
In last month’s blog I explained that for many gemstone beads, the country of origin may be difficult to determine. Fossil turritella agate is an exception because it comes exclusively from the Green River Formation in Wyoming, USA.
Turritellas are marine snails (gastropods) with spiral shells. When fossil snails were found in the Green River Formation, they looked like turritellas and were given that name. However, it turned out that the snails were in fact an extinct freshwater variety and were renamed Elimia tenera. By that time, the incorrect name had already become commonly used, and Elimia tenera has never managed to displace the turritella name.
Turritella fossils are among my favorite fossils to incorporate into my jewelry designs; the fossil snails are creamy to white spirals in a rich brown to almost black matrix. The polish can be uneven on these stones due to the natural variation in the fossils, but that’s part of what makes each one unique.
The following is directly from http://geology.com/gemstones/turritella/:
“How did Turritella Agate Form?
About 50 million years ago, during the Eocene epoch, the young Rocky Mountains were almost finished growing, and the landscape of what is now parts of Colorado, Utah, and Wyoming consisted of rugged mountains separated by broad intermountain basins. Rains falling on the slopes of these mountains ran off of the land and collected into streams that carried sand, silt, mud, and dissolved materials down into the lakes that occupied the intermountain basins. Over time, these sediments began filling the lakes, and many types of fossils were preserved within them.
Abundant plants and algae grew on the margins of these lakes, providing a perfect habitat and food source for Elimia tenera, the freshwater snail. When the snails died, their shells sank to the bottom of the lake. The snails were so prolific that entire lenses of sediment were composed almost entirely of their shells.
After these layers were buried, groundwater moved through the sediments. Small amounts of microcrystalline silica that were dissolved in the groundwater began to precipitate, possibly in the form of a gel, within the cavities of the snail shells and the empty spaces between them. Over time, the entire mass of fossils was silicified, forming the brown fossiliferous agate (also known as chalcedony) that we know today as Turritella agate.” | <urn:uuid:df1804ed-3180-42d5-9e22-4586e765e968> | CC-MAIN-2019-35 | https://mochisgifts.com/tag/turritella/ | s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027317688.48/warc/CC-MAIN-20190822235908-20190823021908-00424.warc.gz | en | 0.973901 | 532 | 3.21875 | 3 |
Quasi-Static Eocene–Oligocene Climate in Patagonia Promotes Slow Faunal Evolution and Mid-Cenozoic Global Cooling
New local/regional climatic data were compared with floral and faunal records from central Patagonia to investigate how faunas evolve in the context of local and global climates. Oxygen isotope compositions of mammal fossils between c. 43 and 21 Ma suggest a nearly constant mean annual temperature of 16 ± 3 °C, consistent with leaf physiognomic and sea surface studies that imply temperatures of 16–18 °C. Carbon isotopes in tooth enamel track atmospheric δ13C, but with a positive deviation at 27.2 Ma, and a strong negative deviation at 21 Ma. Combined with paleosol characteristics and reconstructed Leaf Area Indices (rLAIs), these trends suggest aridification from 45 Ma (c. 1200 mm/yr) to 43 Ma (c. 450 mm/yr), quasi-constant MAP until at least 31 Ma, and an increase to ~800 mm/yr by 21 Ma. Comparable MAP through most of the sequence is consistent with relatively constant floral compositions, rLAI, and leaf physiognomy. Abundance of palms reflects relatively dry-adapted lineages and greater drought tolerance under higher pCO2. Pedogenic carbonate isotopes imply low pCO2 = 430 ± 300 ppmv at the initiation of the Eocene–Oligocene climatic transition. Arid conditions in Patagonia during the late Eocene through Oligocene provided dust to the Southern Ocean, enhancing productivity of silicifiers, drawdown of atmospheric CO2, and protracted global cooling. As the Antarctic Circumpolar Current formed and Earth cooled, wind speeds increased across Patagonia, providing more dust in a positive climate feedback. High tooth crowns (hypsodonty) and ever-growing teeth (hypselodonty) in notoungulates evolved slowly and progressively over 20 Ma after initiation of relatively dry environments through natural selection in response to dust ingestion. 
A Ratchet evolutionary model may explain protracted evolution of hypsodonty, in which small variations in climate or dust delivery in an otherwise static environment drive small morphological shifts that accumulate slowly over geologic time.
Kohn, Matthew J.; Strömberg, Caroline A.E.; Madden, Richard H.; Dunn, Regan E.; Evans, Samantha; Palacios, Alma; and Carlini, Alfredo A. (2015). "Quasi-Static Eocene–Oligocene Climate in Patagonia Promotes Slow Faunal Evolution and Mid-Cenozoic Global Cooling". Palaeogeography, Palaeoclimatology, Palaeoecology, 435, 24-37. http://dx.doi.org/10.1016/j.palaeo.2015.05.028 | <urn:uuid:33128db2-01cb-495d-b25c-2315c4b884fe> | CC-MAIN-2018-51 | https://scholarworks.boisestate.edu/geo_facpubs/295/ | s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376824912.16/warc/CC-MAIN-20181213145807-20181213171307-00354.warc.gz | en | 0.814691 | 599 | 2.53125 | 3 |
For this section of the course, you need to know:
Classical Conditioning Pavlov's research
Operant Conditioning Types of reinforcement and Skinner's research
The behaviourist approach emerged at the beginning of the 20th century and became the dominant approach in psychology for half of that century. It is also credited as being the driving force in the development of psychology as a scientific discipline.
Behaviourist approach - a way of explaining behaviour in terms of what is observable and in terms of learning.
Classical conditioning - learning by association. A neutral stimulus, when paired with a second stimulus can, by association, elicit the same response as the second stimulus could by itself. E.g. a dog salivates in response to seeing food. A ringing bell could become associated with food through repeated presentation of both simultaneously, such that eventually the bell alone could produce salivation.
Operant conditioning - a form of learning in which behaviour is shaped and maintained by its consequences. Possible consequences include positive reinforcement, negative reinforcement and punishment.
Reinforcement - a consequence of behaviour that increases the likelihood of that behaviour being repeated.
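Learning by association, as defined above for classical conditioning, can be simulated. The Rescorla-Wagner model is not part of these notes, but it is the standard formal account: on each bell-food pairing, the bell's associative strength V moves a fraction alpha of the way toward the maximum lambda the food supports.

```python
def rescorla_wagner(trials, alpha=0.3, lam=1.0):
    """Associative strength of the CS (bell) over repeated CS-US pairings.
    dV = alpha * (lam - V): learning slows as V approaches the asymptote."""
    v, history = 0.0, []
    for _ in range(trials):
        v += alpha * (lam - v)
        history.append(round(v, 3))
    return history

print(rescorla_wagner(5))  # strength rises toward 1.0 with each pairing
```

The curve captures Pavlov's observation that conditioning builds gradually over repeated pairings rather than in a single exposure.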
Assumptions of the behaviourist approach -
1) The behaviourist approach is only interested in studying behaviour that can be observed and measured. It is not concerned with investigating mental processes. Behaviourists rejected introspection as it involved too many vague and immeasureable concepts. As a result, behaviourists tried to maintain more control and objectivity within their research relying on lab experiments as a way of achieving this.
2) We are born 'blank slates' and all behaviour is learned from the experiences a person has, including reinforcements and punishments.
3) Behaviourists suggested that the basic processes that govern learning are the same in all species. This meant that in behavioural research, animals could replace humans as experimental subjects. Behaviourists identified two important forms of learning; classical and operant conditioning.
CLASSICAL CONDITIONING - PAVLOV
Classical conditioning is 'learning through… | <urn:uuid:7a2f9bdc-c5eb-41ec-b293-ef5a19ed58ce> | CC-MAIN-2017-43 | https://getrevising.co.uk/revision-notes/the-behaviourist-approach | s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187825812.89/warc/CC-MAIN-20171023073607-20171023093607-00842.warc.gz | en | 0.958307 | 415 | 3.875 | 4 |
Types of Volcano Eruptions
A volcano is a mountain that opens downward to a pool of molten rock below the surface of the earth. A volcano is a vent in the Earth from which molten rock (magma) and gas erupt. The molten rock that erupts from the volcano (lava) forms a hill or mountain around the vent.
All volcanic eruptions are not alike. Some eruptions, are quiet, with lava slowly oozing from a vent. Other eruptions are very violent, with lava and other materials being hurled hundreds of miles into the air. Gases from within the earth’s interior mix with huge quantities of dust and ash and rise into the air as great dark clouds that can be seen from many kilometers away.
Some dark-colored lava is thin and runny, and tends to flow. Explosive eruptions are caused when lava in the vents hardens into rock. Steam and lava build up under the rocks. When the pressure of the steam and new lava becomes great, a violent explosion occurs. When pressure builds up, eruptions occur. Gases and rock shoot up through the opening and spill over or fill the air with lava fragments. Volcano eruptions have been known to knock down entire forests. An erupting volcano can trigger tsunamis, flashfloods, earthquakes, mudflows and rockfalls.
Fresh volcanic ash, made of pulverized rock, can be harsh, acidic, gritty, glassy and smelly.
Types of Volcanoes
The form of a volcano is determined by the ingredients of the erupting magma. Their shapes are determined by the explosivity of the eruptions and to the amount of water in the magma.
Shield Volcanoes – are large volcanic forms with broad summit areas and low, sloping sides. This type of volcano is caused by slow, basaltic lava flows. A good example of a shield volcano is the island of Hawaii (the “Big Island”).
Cinder cones are mounds that are formed by streaming gases that carry lava blobs and ribbons into the atmosphere to form lava fountains. The lava blobs commonly harden during flight through the air before landing on the ground. If gas pressure drops, the final stage of building a cinder cone may be a lava flow that breaks through the base of the cone. The longer the eruption, the higher the cone. Some are no larger than a few meters and others rise to as high as 610 meters or more, such as Parícutin volcano in Mexico, which erupted nearly continuously from 1943 to 1952 and eventually destroyed the nearby village.
Composite Volcanoes (Stratovolcanoes)
Composite volcanoes are constructed from multiple eruptions, sometimes recurring over hundreds of thousands of years, sometimes over a few hundred. Most of the tallest volcanoes are composite volcanoes. These form from a cycle of quiet eruptions of fluid lava followed by explosive eruptions of viscous lava. The fluid lava creates an erosion-resistant shell over the explosive debris, forming strong, steep-sided volcanic cones.
Volcanoes are rather unpredictable phenomena. Some volcanoes erupt fairly regularly; others have not erupted within modern history. In order to indicate the relative activity of volcanoes, scientists classify them as active, dormant, or extinct. An active volcano is one that erupts either continually or periodically.
Some mild eruptions merely discharge steam and other gases, whereas other eruptions quietly extrude quantities of lava. The most spectacular eruptions consist of violent explosions that blast great clouds of gas-laden debris into the atmosphere.
The type of volcanic eruption is often labeled with the name of a well-known volcano where characteristic behavior is similar–hence the use of such terms as “Strombolian,” “Vulcanian,” “Vesuvian,” “Pelean,” “Hawaiian,” and others. Some volcanoes may exhibit only one characteristic type of eruption during an interval of activity–others may display an entire sequence of types.
The most powerful eruptions are called “plinian” and involve the explosive ejection of relatively viscous lava. Large plinian eruptions, such as that of Mount St. Helens on 18 May 1980, can send ash and volcanic gas tens of miles into the air. The resulting ash fallout can affect large areas hundreds of miles downwind. Fast-moving deadly pyroclastic flows (“nuées ardentes”) are also commonly associated with plinian eruptions.
“Phreatic” (or steam-blast) eruptions are driven by explosive expanding steam resulting from cold ground or surface water coming into contact with hot rock or magma. The distinguishing feature of phreatic explosions is that they only blast out fragments of preexisting solid rock from the volcanic conduit; no new magma is erupted.
In a Strombolian-type eruption observed during the 1965 activity of Irazú Volcano in Costa Rica, huge clots of molten lava burst from the summit crater to form luminous arcs through the sky. Collecting on the flanks of the cone, lava clots combined to stream down the slopes in fiery rivulets.
“Hawaiian” eruptions may occur along fissures or fractures that serve as linear vents, such as during the eruption of Mauna Loa Volcano in Hawaii in 1950; or they may occur at a central vent such as during the 1959 eruption in Kilauea Iki Crater of Kilauea Volcano, Hawaii. In fissure-type eruptions, molten, incandescent lava spurts from a fissure on the volcano’s rift zone and feeds lava streams that flow downslope.
In contrast, the eruptive activity of Parícutin Volcano in 1947 demonstrated a “Vulcanian”-type eruption, in which a dense cloud of ash-laden gas explodes from the crater and rises high above the peak. Steaming ash forms a whitish cloud near the upper level of the cone.
In a “Vesuvian” eruption, as typified by the eruption of Mount Vesuvius in Italy in A.D. 79, great quantities of ash-laden gas are violently discharged to form cauliflower-shaped cloud high above the volcano.
In a “Peléan” or “Nuée Ardente (glowing cloud) eruption, such as occurred on the Mayon Volcano in the Philippines in 1968, a large quantity of gas, dust, ash, and incandescent lava fragments are blown out of a central crater, fall back, and form tongue-like, glowing avalanches that move downslope at velocities as great as 100 miles per hour.
Some Volcano Facts:
The May 18, 1980 eruption of Mount St. Helens in the Cascade Range of Washington State happened after more than 100 years of dormancy (a time when the volcano was “asleep”). When the volcano erupted, it took the lives of 58 people and caused $1.2 billion in damage.
There are more than 500 active volcanoes in the world.
The rock debris carried by a lateral blast of Mt. St. Helens traveled as fast as 250 miles per hour.
Crater Lake in Oregon formed from a high volcano that lost its top after a series of tremendous explosions about 6,600 years ago.
More than 80 percent of the earth’s surface is volcanic in origin. The sea floor and some mountains were formed by countless volcanic eruptions. Gaseous emissions from volcanoes formed the earth’s atmosphere.
- Publius Project
- Essays & conversations about constitutional moments on the Net collected by the Berkman Center.
Realising the Value of Public information
Governments and public bodies have always been in the business of managing information - as creators, controllers, distributors, and more. Until the mid-1990s, however, most states took on only two main roles as holders of information. First, they took responsibility for ensuring that information on matters of national security was held securely and beyond the reach of potential miscreants. Second, they assumed the job of making sure that full records of public affairs were maintained, archived, and accessible to authorized persons.

Since the mid-1990s, there has been a clear shift in government policy in many countries in relation to information generated from within or on behalf of the public sector. In summary, many governments have expressed a commitment to making official information more widely available. There are two main strands of thinking here. One is that government should be more open; and this has given rise to freedom of information (FOI) regimes. This is about providing access to information. The other strand is that public sector information (PSI) can and should be re-used where benefits can accrue.

FOI and PSI re-use together are the fundamental building blocks of government in the Internet age. This is not merely about making formal government publications available online. It is about capturing, nurturing, and maintaining much of the information generated by public sector bodies as a common and easily accessible good for all of society. At a policy level, these developments will combine to bring about an entirely new landscape for the management and control of information and knowledge in the public sector.
It is far from clear, however, that senior officials and politicians around the world are yet alive to the cumulative shift in policy and practice. Nor is there evidence of much in-depth analysis of the long-term implications of these changes.
That said, the last decade has undeniably witnessed enormous change. To a large extent, this change has been catalysed by the advent of the Internet, which is steadily, fundamentally, and globally changing the relationship between the individual and the state.
Before the 1990s, most government was closed government - official information was made available, largely, on a need-to-know basis. Restricting the flow of information was clearly central to totalitarian rule, for example. But benevolent democracies also held back, adopting a paternalistic posture, releasing information sparingly. Perhaps it was not in people’s interests to know too much. Anti-paternalists claim the problem was, rather, that there were no effective channels for fuller information flows between citizen and government. But this changed in the 1990s, with the coming of the Internet and the Web. Suddenly, information could be shared widely and cheaply; and the business of government was changed for ever.
In the UK, for example, in 1996 and 1997, the Conservative and Labour governments both stated their commitment to providing official information on the web. Why? Was it that the Internet made it all but impossible for government to resist greater openness? Or was there, coincidentally, some new political will to make public affairs more transparent? Either way, open government arrived.
There are two types of open government. A reactive open government is one which, when faced with a request for access to official information, will respond favourably. Request leads to access. In contrast, a proactive open government believes that an integral part of the job of government is to make all information created in the process of governing available to the people.
Proactive open government is much more than meeting, more or less willingly, a request for access. Instead, proactive open government regards the provision, usually online, of all official information as part of the very business of government. Withholding information is looked upon as exceptional and requires justification.
The UK government, for instance, is currently moving, albeit at a modest pace, from being reactively to proactively open. One sign of this is the drive to provide more useful and better stocked web sites. Another is that, under its freedom of information legislation, all public authorities must maintain publication schemes that indicate what information will be made available proactively.
Looking forward, full scale proactivity will require a positive effort on the part of governments and public bodies actually to maximise the value of their information, in both economic and in social terms. Realising the value of public information should be a central task of tomorrow’s governments.
Professor Richard Susskind is IT Adviser to the Lord Chief Justice of England and Wales. He is the author of numerous books, including The Future of Law (1996), Transforming the Law (2000), and The End of Lawyers? (2008), all published by Oxford University Press. He has also written over 100 columns for The Times. He is Emeritus Law Professor at Gresham College, London, where he was also appointed the first and only Honorary Professor in 400 years. From 2003 until 2008, he was Chair of the UK's Advisory Panel on Public Sector Information, a non-departmental public body set up by the Cabinet Office and now sponsored by the Ministry of Justice.
Greenhouse solution: sucking the CO2 straight out of the atmosphere
By Loz Blain
May 28, 2007
May 29, 2007 Since industry is constantly proving it's unwilling to address Global Warming from an emissions standpoint, creative science is looking at attacking atmospheric carbon dioxide levels from the other side - sucking the greenhouse gas out of the atmosphere. Researchers have just successfully demonstrated air extraction technology that could be employed to reduce global carbon dioxide levels in the atmosphere back to the levels that Climate Change scientists say we need to aim for to prevent global catastrophe.
Global Research Technologies, LLC (GRT), a technology research and development company, and Klaus Lackner from Columbia University have achieved the successful demonstration of a bold new technology to capture carbon from the air. The "air extraction" prototype has successfully demonstrated that indeed carbon dioxide (CO2) can be captured from the atmosphere. This is GRT’s first step toward a commercially viable air capture device.
This technology debuts at a critical juncture where recent findings of an esteemed array of global experts — including former Vice President Al Gore, Sir Nicholas Stern, and the eminent scientists and practitioners serving on the Intergovernmental Panel on Climate Change — have concluded that man-made climate change is indeed upon us. One of the most critical challenges we face is the dramatically increasing and completely unprecedented level of carbon dioxide in the earth’s atmosphere. The air extraction device is one critical solution to help the world reduce dangerous amounts of CO2 in the air.
The carbon capture technology was developed by GRT and Klaus S. Lackner, a professor at Columbia University’s Earth Institute and the School of Engineering and Applied Sciences. The Tucson-based technology company began development of the device in 2004 and has recently successfully demonstrated its efficacy. The air extraction device, in which sorbents capture carbon dioxide molecules from free-flowing air and release those molecules as a pure stream of carbon dioxide for sequestration, has met a wide range of performance standards in the GRT research facility.
"This is an exciting step toward making carbon capture and sequestration a viable technology," said Lackner. "I have long believed science and industry have the technological capability to design systems that will capture greenhouse gases and allow us to transition to energies of the future over the long term."
GRT's demonstration could have far-reaching consequences for the battle to reduce greenhouse gas levels. Unlike other techniques, such as carbon capture and storage from power plants, air extraction would allow reductions to take place irrespective of where carbon emissions occur, enabling active management of global atmospheric carbon dioxide levels. The technology shows, for the first time, that carbon dioxide emissions from vehicles on the streets of Bangkok could be removed from the atmosphere by devices located in Iceland. This could present a solution to three problems that until now have posed intractable obstacles for advocates of greenhouse gas reduction: how to deal with the millions of vehicles that together represent over 20 percent of global CO2 emissions, how to manage the emissions from existing infrastructure, and how to connect the sources of carbon to the sites of carbon disposal.
"This significant achievement holds incredible promise in the fight against climate change," said Jeffrey D. Sachs, director of The Earth Institute, "and thanks to the ingenuity of GRT and Klaus Lackner, the world may, sooner rather than later, have an important tool in this fight."
A device with an opening of one square meter can extract about 10 tons of carbon dioxide from the atmosphere each year. If a single device were to measure 10 meters by 10 meters it could extract 1,000 tons each year. On this scale, one million devices would be required to remove one billion tons of carbon dioxide from the atmosphere. According to the U.K. Treasury’s Stern Review on climate change, the world will need to reduce carbon emissions by 11 billion tons by 2025 in order to maintain a concentration of carbon dioxide at twice pre-industrial levels.
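The scaling arithmetic in the paragraph above is easy to sanity-check. The sketch below is illustrative Python of my own; the only inputs are the figures quoted in the article itself (10 tons of CO2 per square metre of opening per year, and the Stern Review's 11-billion-ton target):

```python
# Sanity check of the air-capture scaling figures quoted above.
# The capture rate is the article's figure, not an independently
# verified number.

RATE_PER_M2 = 10  # tons of CO2 captured per square metre of opening per year

def annual_capture(width_m, height_m):
    """Tons of CO2 one device with the given opening captures per year."""
    return width_m * height_m * RATE_PER_M2

per_device = annual_capture(10, 10)   # a 10 m x 10 m device
print(per_device)                     # 1000 tons per year

# Devices needed to remove one billion tons of CO2 per year:
print(1_000_000_000 // per_device)    # 1000000 (one million devices)

# Devices implied by the Stern Review's 11-billion-ton reduction:
print(11_000_000_000 // per_device)   # 11000000 (eleven million devices)
```

The last figure simply extrapolates the article's own numbers to the Stern Review target.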
Experts have long highlighted the potential of air extraction, arguing that it could have a vastly greater impact than the renewable energy sources that currently operate on a small scale. To date, however, the transport sector has resisted many carbon-reducing technologies. Although carbon capture is possible at power plants through flue-gas scrubbing, designing millions of cars, trucks, and trains to capture CO2 from their exhaust streams is simply not practical. Hauling a "trailer" behind every passenger car to collect exhaust emissions would exacerbate traffic congestion, reduce gasoline mileage and increase fuel consumption. Simply put, CO2 emissions from the transportation sector are going to end up in the atmosphere and can only be removed from the atmosphere with a device like the one GRT has developed.
Air capture devices are small and require much less land area than the windmills that would be needed to offset an equal amount of CO2 emissions. Indeed, if the CO2 carried by the air streams used to drive windmills were captured, then on an energy-equivalent basis the CO2 capture would reduce emissions a hundred times more than a windmill of equal sweep area. Like wind turbines, the GRT devices would be deployed in coordinated formations, but would extract the air's carbon dioxide, not its kinetic energy.
A major challenge facing scientists working to extract and sequester carbon from the atmosphere has been the fact that it is too expensive to re-outfit many of the world’s existing power plants to make them more eco-friendly. In general, building new technologies is easier and cheaper than adding retrofits to existing infrastructure. Another exciting benefit of the GRT device is that it faces down this challenge by capturing the emissions from existing power plants without imposing retrofit costs.
Air capture offers a third important benefit. The CO2 capture device can be located at the point of CO2 end-use or sequestration, eliminating the current need to match CO2 sources with sinks. For example, the CO2 originating from all those vehicles in Bangkok can be captured in an oil field in Alberta, Canada, where it could be used on-site for enhanced oil recovery (EOR) operations or it could be captured in South Africa to feed a growing demand in that country for feed stocks for petrochemical production. If the goal is to sequester a given quantity of CO2 in a specific geological formation, the air capture system could be located at that physical location. Within the United States, formations in Ohio, Oklahoma and Michigan, among other sites, appear to hold promise for long-term CO2 storage underground. Air extraction could also offer a new window in negotiations between developed and developing countries over how to deploy carbon reducing technologies.
Going forward, GRT plans to begin demonstrating its air capture system on a larger scale. Extensive deployment of the GRT air capture system makes it possible to envision an actual reduction of CO2 levels in the atmosphere, perhaps even to pre-industrial levels. That is the exciting promise of air capture and precisely what has just been demonstrated by GRT.
The iSchool Initiative
I think this initiative is a great idea for teachers and
students. The seventeen-year-old has made a difference by traveling the country
and embracing mobile learning. With this program, I think students would look
forward to getting on their iPod touch for school. It was a great point he made
when he said the program would block non-educational sites so students would
not be able to browse all over the web. This video was a really good way of
describing the pros of using the iSchool format when teaching.
Eric Whitacre’s Virtual Choir
This video is absolutely amazing. These people are from all
over the world and have never met before, but they have made a wonderful work
of art. Eric Whitacre is a genius. I cannot even imagine how long this took him
to put together. This video gives you a look at what else you can do with technology.
Who would have thought you could have so many people on one video from all
over the world?
Teaching in the 21st Century by Kevin Roberts
This video is very educational and something I believe all educators
should watch. The video says that teachers are not the source of knowledge
anymore, the internet is. It says that the teachers are the filter. I agree
that teachers should show students ways to use the internet as a reliable
source for information. The video also wants educators to inform students about unreliable websites. This was a great video.
Flipping the Classroom
I think flipping your classroom is a good idea for all
teachers. I agree with Dr. McCammon in the video Dr. Lodge McCammon's FIZZ - Flipping the Classroom that there is too much lecture in the classroom and students are not engaging. The teachers post their videos online for students to watch before class so they have some knowledge about what they will go over in
class. Students can work ahead or review material they do not understand. This is a brilliant idea for teachers and students. | <urn:uuid:78aeceb6-b0e4-49a1-9586-65988f2c2bc9> | CC-MAIN-2018-30 | http://snowashley310.blogspot.com/2012/09/blog-post-5.html | s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676589932.22/warc/CC-MAIN-20180717222930-20180718002930-00346.warc.gz | en | 0.965651 | 431 | 2.578125 | 3 |
The liquid waxes occur in the blubber of the sperm whale, and in the head cavities of those whales which yield spermaceti; this latter is obtained by cooling the crude oil obtained from the head cavities.
The richest quality yields about 100 to 130 gallons of crude oil per ton, or 17,000 to 18,000 cub.
Various arrangements have been proposed and patented for the continuous distillation of petroleum, in which crude oil is supplied to a range of stills as fast as the distillates pass off.
Petroleum has very long been known as a source of light and heat, while the use of crude oil for the treatment of wounds and cutaneous affections, and as a lubricant, was even more general and led to the raw material being an article of commerce at a still earlier date.
In most petroleum-producing countries, however, and particularly where the product is abundant, the crude oil is fractionally distilled, so as to separate it into petroleum spirit of various grades, burning oils, gas oils, lubricating oils, and (if the crude oil yields that product) paraffin.
These factories were worked by crude oil from the Baku wells.
Pop. (1901), 4135 It is in the midst of the oil region of Canada, and numerous wells in the vicinity have an aggregate output of about 30,000,000 gallons of crude oil per annum, much of which is refined in the town.
But all these are insignificant in comparison with the mineral oil industry of Baku, which in normal times yields annually between ten and eleven million tons of crude oil (naphtha).
Annually), shipped from northern California, Oregon and Washington, and in crude oil and general merchandise.
It is interesting to find that a rude pipe-line formerly existed in this field for conveying the crude oil from the wells to the river; this was made of bamboos, but it is said that the loss by leakage was so great as to lead to its immediate abandonment on completion. | <urn:uuid:f54d0dc5-dd48-45b7-8648-ffaaba784d95> | CC-MAIN-2016-44 | https://www.all-dictionary.com/sentences-with-the-word-crude%20oil | s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988719416.57/warc/CC-MAIN-20161020183839-00305-ip-10-171-6-4.ec2.internal.warc.gz | en | 0.979732 | 420 | 2.609375 | 3 |
Updated On: 19-9-2020
What is Quadratic Equation ?
General Form of the Quadratic Equation and Roots of the Quadratic Equation.
Determine whether a given equation is quadratic or not (i) `x^2-6x+4=0` (ii) `x^2+2sqrt(x)+3=0`
Determine the unknown involved in a quadratic equation when its roots are given: `kx^2+2x-3=0; x=2`
Determine whether the given values are solutions of the given equation (i) `3x^2-2x-1=0; x=1` (ii) `x^2-x+1=0; x=1; x=-1`
Algorithm to Solve the equation by factorization method `x/(x+1)+(x+1)/x=34/15`
Algorithm to find the solution of quadratic equation by completing the square : `9x^2-15x+6=0`
What is the quadratic formula to find the solution of a quadratic equation
Solve the quadratic equations having real roots by the quadratic formula: (i) `9x^2+7x-2=0` (ii) `6x^2+x-2=0`
Determine the discriminant of the quadratic equations (i) `3x^2+2x-1=0` (ii) `x^2-4x+2=0`
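The discriminant and quadratic-formula items in the list above can be illustrated with a short, generic Python helper. This is my own sketch, not code from the lesson, and the function names are arbitrary:

```python
import math

def discriminant(a, b, c):
    """D = b^2 - 4ac for the quadratic ax^2 + bx + c = 0."""
    return b * b - 4 * a * c

def solve_quadratic(a, b, c):
    """Real roots of ax^2 + bx + c = 0 via the quadratic formula.

    Returns a tuple of roots; empty if the discriminant is negative.
    """
    d = discriminant(a, b, c)
    if d < 0:
        return ()  # no real roots
    r = math.sqrt(d)
    return ((-b + r) / (2 * a), (-b - r) / (2 * a))

# Exercises from the list above:
print(discriminant(3, 2, -1))     # 16  (for 3x^2 + 2x - 1 = 0)
print(discriminant(1, -4, 2))     # 8   (for x^2 - 4x + 2 = 0)
print(solve_quadratic(9, 7, -2))  # roots 2/9 and -1
print(solve_quadratic(6, 1, -2))  # roots 1/2 and -2/3
```

A positive discriminant, as in all four examples, is exactly the condition under which the formula yields two distinct real roots.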