According to the American Association of Endodontists, tooth pain can be a symptom of a wide variety of dental problems including decay, injury or infection. While mild sensitivity can be linked to receding gums and poses little harm, other dental pain can signify a more serious problem and requires a trip to the dentist for a diagnosis and treatment plan.
Different types of tooth pain can signify different types of problems with your teeth. The American Association of Endodontists explains that sharp, localized pains when biting down on foods can indicate a crack or decay in your tooth. Leaving this type of pain untreated will cause it to intensify over time.
Sensitivity to hot and cold foods can be troublesome, but typically using a soft toothbrush and a toothpaste designed for sensitive teeth mitigates some of this pain. The American Association of Endodontists points out that tooth pulp decay or trauma to the tooth is likely to blame for pain that lingers for 30 seconds or more after eating hot or cold foods. This type of condition necessitates a trip to a dentist.
According to Medline Plus, any constant or severe pain in your tooth or gums, often accompanied by swelling of the surrounding gum, can indicate an abscessed tooth. Once infected, an abscessed tooth will require a trip to the dentist to save the tooth and relieve the pain.
The below article was written by Litterati and shared here as an inspiration for schools and students who want to combat litter and help clean their communities.
Our planet faces many environmental problems. It often feels overwhelming and hard to know how we can make a difference. Well here’s one way you can. Join the Litterati - a global community that’s cleaning the planet - one piece of litter at a time. This mobile app (iOS & Android) allows anyone to identify, map, and collect litter in their community.
Litter is tangible, approachable and easy to understand. Litterati’s Educational Program provides a service-learning model that involves students in a range of experiences which benefit their community, while advancing their classroom skills. The program empowers them to build a more sustainable planet.
In California, 7th and 8th grade students from Spencer Avenue Elementary School picked up and documented 2,902 pieces of litter in the Island Lake Conservation Area. The Litterati data revealed the prevalence of cigarette butts as well as more surprising discarded objects, including shopping carts, couch cushions, and even a lawnmower. Litterati helped students understand the negative impact littering can have on their community, a message they shared at a schoolwide assembly. The students also sent letters to neighborhood businesses, offering suggestions about what they could do to improve their litter footprints.
Arturo Soria school in Madrid used Litterati during an end-of-course activity, cleaning up local parks near their school. Thanks to the Litterati app they were able to tag, track and log the litter they collected. From the data and maps they created, the students analyzed the problem and proposed several solutions to the local community. Some involved making posters, pins and artwork to raise awareness. Other solutions involved asking local businesses to install ashtrays. Students then wrote letters to the municipal cleaning services suggesting how they could pitch in and help.
It's easy to get started:
Create a club for your class or school
Invite your students to join the club
Photograph a piece of litter
Recycle or throw out the piece of litter. (Repeat the last two steps.)
Each photo is full of data. Geotags map problem areas. Timestamps indicate when we see specific types of litter. And tags identify the most commonly found brands and products. This data can then be used to influence product innovation, promote sustainable packaging, and educate consumers. We all have a role to play.
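As a rough illustration of how such tagged photo data might be aggregated, the sketch below counts the most common litter tags across a handful of records. The field names and values are invented for illustration; this is not Litterati's actual data format.

```python
from collections import Counter

# Hypothetical records mimicking the three data fields the article describes
# for each Litterati photo: a geotag, a timestamp, and descriptive tags.
photos = [
    {"geo": (37.77, -122.41), "time": "2019-04-03T08:15", "tags": ["cigarette", "plastic"]},
    {"geo": (37.77, -122.42), "time": "2019-04-03T09:02", "tags": ["cigarette"]},
    {"geo": (40.41, -3.68), "time": "2019-06-20T17:40", "tags": ["wrapper", "plastic"]},
]

# Tally tags to find the most commonly found kinds of litter.
tag_counts = Counter(tag for photo in photos for tag in photo["tags"])
print(tag_counts.most_common(2))  # → [('cigarette', 2), ('plastic', 2)]
```

A real analysis would also group geotags into map cells and bucket timestamps by hour or season, but the counting idea is the same.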
Litterati has been featured at TED, is supported by the National Science Foundation, and works in partnership with the United Nations Environment Programme.
Download the Litterati app today and join the movement. Individually you can make a difference. Together we create an impact. If you want more information contact us at email@example.com
Measles Topic Guide
Immunization Schedule, Children
Vaccinations are some of the most important tools available for preventing disease. Most children get all their shots during childhood. Parents should consult their doctors about which vaccines their children should have and when. Keep track of your children's immunizations yourself.
The number of Facebook friends you have and the size of particular brain regions are directly linked, say researchers from University College London.
The study – which involved 125 university students who were all active users of the social networking site – found a correlation between the number of Facebook friends the student had, and the size of three regions of their brain.
It also showed that the more Facebook friends the student had, the more ‘real-world’ friends they had.
“Online social networks are massively influential, yet we understand very little about the impact they have on our brains,” said Professor Geraint Rees, a Wellcome Trust Senior Clinical Research Fellow at UCL. “This has led to a lot of unsupported speculation that the internet is somehow bad for us.”
“Our study will help us begin to understand how interactions with the world are mediated through social networks. This should allow us to start asking intelligent questions about the relationship between the internet and the brain.”
The research found a strong correlation between the number of Facebook friends an individual had and the amount of grey matter in several regions of the brain. Three regions – the right superior temporal sulcus, the left middle temporal gyrus and the right entorhinal cortex – correlated with the size of participants' online social networks, suggesting that the more friends a person had, the larger these areas were.
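The strength of a relationship like this is typically summarised with a correlation coefficient. The toy calculation below computes Pearson's r for entirely made-up numbers; it illustrates the statistic only and does not use the study's data.

```python
import math

# Synthetic illustration: Facebook friend counts and an invented
# grey-matter measure for five hypothetical participants.
friends = [50, 120, 300, 450, 700]
grey_matter = [1.1, 1.3, 1.6, 1.8, 2.2]

n = len(friends)
mean_f = sum(friends) / n
mean_g = sum(grey_matter) / n

# Pearson's r: covariance divided by the product of the standard deviations.
cov = sum((f - mean_f) * (g - mean_g) for f, g in zip(friends, grey_matter))
var_f = sum((f - mean_f) ** 2 for f in friends)
var_g = sum((g - mean_g) ** 2 for g in grey_matter)
r = cov / math.sqrt(var_f * var_g)
print(round(r, 3))  # → 0.996 (close to +1: a strong positive correlation)
```

An r near +1 means the two quantities rise together; it says nothing about which causes which, a caveat the researchers themselves raise.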
Previous research showed the amygdala – associated with processing memory and emotional responses – was larger in people with a larger network of real and online friends.
“We have found some interesting brain regions that seem to link to the number of friends we have – both ‘real’ and ‘virtual’,” said Dr Ryota Kanai, first author of the study published in Proceedings of the Royal Society B.
“The exciting question now is whether these structures change over time – this will help us answer the question of whether the internet is changing our brains.”
Rees said the finding supported the idea that most Facebook users use the site to support their existing social relationships, maintaining or reinforcing these friendships, rather than just creating a network of entirely new, virtual friends.
Heat has critical influences on machining. To some extent, it can increase tool wear and thus reduce tool life, give rise to thermal deformation, and cause environmental problems, among other effects. But due to the complexity of machining mechanics, it is hard to predict the intensity and distribution of the heat sources in an individual machining operation. In particular, because the properties of materials used in machining vary with temperature, the mechanical process and the thermodynamic process are tightly coupled together. Since early this century, many efforts in theoretical analyses and experiments have been made to understand this phenomenon, but many problems remain unsolved.
The pure analytical approaches, in general, yielded the average temperature on the shear plane and at the tool/chip interface. The temperature distribution along the shear plane and the tool/chip interface was also obtained by some of the following approaches:
Since the 1920s, many experimental methods have been devised to measure the tool, chip or workpiece temperature and its distribution:
Numerical methods have been successfully applied in calculating the temperature distribution and thermal deformation in the tool, chip and workpiece. In particular, the finite element and boundary element methods can deal with very complicated geometry in machining, so they have great potential to solve the problems met in practice. These methods are listed in the following:
In this class of methods, some information such as chip surface temperature or temperature distribution in workpiece is first obtained experimentally. Then the temperature distribution and/or thermal deformation in chip, and sometimes in the tool and workpiece as well are calculated analytically. The inverse heat transfer problem in machining is an example of these methods.
Almost all of the heat generation models were established under orthogonal cutting conditions. But in practice, there are various machining operations which cannot satisfy this condition, such as oblique turning, boring, drilling, milling, grinding, etc.
Generally, the intensity of heat sources in real machining operations can be determined approximately from the external work applied; however, the distribution of the heat sources is hard to obtain by either theoretical or experimental methods.
The following are simplified heat-source models for real operations:
There are several types of heat source in machining:
Heat generated in this zone is mainly due to plastic deformation and viscous dissipation. But in classical machining theory, the rate of heat generation is taken as the product of the shear-plane component, Fs, of the resultant force and the shear velocity, Vs; i.e., the shear energy is completely converted into heat.
If the heat source is uniformly distributed along the shear plane, the intensity of the shear-plane heat source, Ip, satisfies the following relation:
where b is the cutting width and t1 the uncut depth.
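The relation itself appears to have been lost from this copy of the text. A standard formulation, assuming (as stated above) that the shear energy Fs·Vs is converted entirely to heat and released uniformly over the shear-plane area b·t1/sin(φ), where φ is the shear angle, gives Ip = Fs·Vs·sin(φ)/(b·t1). A minimal sketch under that assumption, with illustrative numbers only:

```python
import math

def shear_plane_intensity(Fs, Vs, b, t1, phi):
    """Mean heat-source intensity on the shear plane (W/m^2), assuming the
    shear energy Fs*Vs is converted entirely to heat and released uniformly
    over the shear-plane area A = b*t1/sin(phi)."""
    area = b * t1 / math.sin(phi)
    return Fs * Vs / area

# Illustrative values only (not from the article): 1 kN shear force,
# 2 m/s shear velocity, 3 mm cutting width, 0.2 mm uncut depth,
# 30-degree shear angle.
Ip = shear_plane_intensity(Fs=1000.0, Vs=2.0, b=3e-3, t1=0.2e-3, phi=math.radians(30))
print(f"Ip = {Ip:.2e} W/m^2")
```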
In this region, because of the complexity of the plastic deformation, this part of the heat was ignored in much previous theoretical research.
Boothroyd has shown that the secondary plastic zone is roughly triangular in shape and that the strain rate, Ė, in this region varies linearly from an approximately constant value along the tool/chip interface, given by
where Vc is the chip velocity and dt the maximum thickness of the zone.
Hence the maximum intensity of heat source in this zone is proportional to the strain rate.
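Boothroyd's expression is also missing from this copy; the commonly quoted form is a strain rate of roughly Vc/dt at the interface. That form is an assumption here, taken from standard machining references rather than from the original text:

```python
def interface_strain_rate(Vc, dt):
    """Approximate shear-strain rate (1/s) at the tool/chip interface,
    using the commonly quoted form strain_rate ~ Vc / dt. This is an
    assumption, since the exact expression is missing from the text."""
    return Vc / dt

# Illustrative values: 1 m/s chip velocity, 25-micrometre zone thickness.
rate = interface_strain_rate(Vc=1.0, dt=25e-6)
print(f"strain rate ~ {rate:.1e} 1/s")  # → 4.0e+04 1/s
```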
Heat is generated at the tool/chip interface by friction. The intensity, Ic, of the frictional heat source is approximated by
where F is the friction force, Vx the sliding velocity of the chip along the interface, and h is the plastic contact length.
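The formula was again dropped in extraction. If the friction power F·Vx is assumed to be dissipated uniformly over the contact area b·h (with b the cutting width defined earlier), the intensity is Ic ≈ F·Vx/(b·h). A hedged sketch with illustrative numbers:

```python
def frictional_intensity(F, Vx, b, h):
    """Mean frictional heat-source intensity (W/m^2) at the tool/chip
    interface, assuming the friction power F*Vx is dissipated uniformly
    over the contact area b*h (b = cutting width, h = contact length)."""
    return F * Vx / (b * h)

# Illustrative values only: 500 N friction force, 1 m/s chip sliding
# velocity, 3 mm cutting width, 0.5 mm plastic contact length.
Ic = frictional_intensity(F=500.0, Vx=1.0, b=3e-3, h=0.5e-3)
print(f"Ic = {Ic:.2e} W/m^2")  # → 3.33e+08 W/m^2
```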
Heat generation is not well investigated in the following areas:
The three types of heat transfer, conduction, convection and radiation, all exist in the machining operations.
Heat transfer inside the chip and workpiece, the tool and toolholder is by conduction.
Heat transfer between coolant/air and the chip/tool/workpiece is by convection.
Radiation is rarely investigated in traditional machining operations. But radiation-based techniques are widely applied in measuring the temperature distribution in various machining operations.
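For the convective part, a first-order estimate follows Newton's law of cooling, Q = h·A·(Ts − Tf). The numbers below are illustrative assumptions, not measured values from any particular operation:

```python
def convective_heat_rate(h, area, T_surface, T_fluid):
    """Convective heat-transfer rate in watts: Q = h * A * (Ts - Tf),
    where h is the convective heat-transfer coefficient (W/m^2.K)."""
    return h * area * (T_surface - T_fluid)

# Assumed values for illustration: flood coolant with h = 5000 W/m^2.K
# over a 1 cm^2 tool face at 400 C, coolant at 25 C.
Q = convective_heat_rate(h=5000.0, area=1e-4, T_surface=400.0, T_fluid=25.0)
print(f"Q = {Q:.1f} W")  # → 187.5 W
```

In practice h varies strongly with coolant type and delivery method, which is exactly what the Heat Transfer Performance Module mentioned later aims to predict.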
Cutting fluids' effects on heat transfer are, in gerneral, classified as:
In practice, there are other types of heat source involved in machining, such as ambient heat sources. They may cause some thermal deformation in the lathe and so on.
Heat influences the cutting forces mainly because:
Figure: Variation of tool life with workpiece bulk temperature when milling Cr-Ni-Mo steel at speeds of (1) 150 fpm and (2) 200 fpm. (After Krabacher and Merchant, 1951)
Heat gives rise to thermal deformation in the workpiece, which finally takes on the form of surface roughness.
Thermal deformation in the lathe is the so-called thermal error in precision machining.
Interested? Please take a look at Health Issues in Environmentally Conscious Machining.
Predictive heat-generation models for orthogonal cutting and other operations
A Heat Transfer Performance Module, which can predict the convective heat-transfer coefficients of several kinds of coolants used in typical machining operations, is accessible.
An energy and mass flow model of the cutting-fluid circulation system is a very important issue in environmentally conscious machining. Sometimes, the disposal of chips and coolants requires much more energy than the actual cutting operations. Developing an effective way to utilize energy should be under consideration.
Other than research issues mentioned above, there are still some areas listed here:
The Evolution of Campus Communication
It is an understatement to say that students today have it a lot easier than students did 40 years ago when it comes to communicating. Over the past few decades, technology has advanced so drastically that it has altered the ways in which students interact, collaborate, and communicate with one another.
The evolution of technology can be seen in the ways we now communicate. In the past, if a student wanted to get a message to another person close-by, he would have to walk across campus to relay it and if the individual was not within walking distance, then “snail mail” would be required—a process that took forever.
With advancements over time and the widespread use of email in the 1990s, the time it took for someone to deliver and receive a message decreased. In the late 1960s Ray Tomlinson, an MIT graduate, developed technologies for the military communications network called ARPANET. He was later given the task of finding a way to send messages between two different computers—a very advanced concept at the time. In October 1971, Tomlinson made history by successfully delivering a message from his own computer to another computer. On December 3, 1992, the first text message was sent by Neil Papworth, an engineer working in the UK that read, “Merry Christmas”. By the late 1990s, students no longer had to wait to hear something from someone via word-of-mouth, mail, or landline phone. Emailing was beginning to be widely used among college students and campuses. With no walking required, a message was now sent with a push of a button over the Internet or cell phone.
The idea of communicating with people and getting a message to them in such a short amount of time (almost instantly) arguably would never have been conceived of in the past. But since the inception of the mobile phone and Internet, the ways in which we communicate have greatly changed. Students are not only able to directly connect and talk to someone, but also to a mass group of people all at the same time. For example, students can work by sending group texts, scheduling virtual meetings, conference calls, or chatting online. This method of communicating and collaborating is now considered to be the norm to many students.
It is rare to be on campus and not see a student communicating on a laptop or texting each other on their phones. According to a survey conducted by Nielsen in 2010, those between the ages of 18-24 send on average about 1,630 texts a month—around three texts per hour. A Pew Research Center survey showed that 88% of undergrads in college own a laptop computer. Students are more wired-in than ever and are in constant contact with one another through a variety of mediums.
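The per-hour figure implies waking hours rather than a full 24-hour day. A quick arithmetic check, assuming roughly 18 waking hours per day (an assumption not stated in the survey):

```python
texts_per_month = 1630          # Nielsen figure quoted above
days_per_month = 30
waking_hours_per_day = 18       # assumption; the survey does not state this

texts_per_day = texts_per_month / days_per_month
texts_per_waking_hour = texts_per_day / waking_hours_per_day
print(round(texts_per_day, 1), round(texts_per_waking_hour, 1))  # → 54.3 3.0
```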
Year over year, technology is changing the way we interact with one another. While the old methods of communicating are still used, new breakthroughs have made it possible to do the same things, but more effectively and conveniently. With advancements in technology to come, what do you think the future of communicating will be like?
The Ray (or Mineral Creek) district is in northeastern Pinal County about 17 miles south of Miami. It lies between the Dripping Springs Range to the east and the Tortilla Range to the west. Copper is the major commodity of this district; gold is a byproduct.
The district was organized by silver prospectors, probably before 1873, and the first locations were made about 1880 (Arizona Bureau of Mines, 1938, p. 80-81). The first copper company was organized in 1883, but attempts at exploitation over the next 23 years failed, owing to the generally low grade of the ore. In 1906 some high-grade copper ore was mined. In 1907 the Ray Consolidated Copper Co. was organized, and extensive surface drilling and underground exploration revealed enormous copper ore bodies which were mined on a large scale in the spring of 1911 (Ransome, 1919, p. 17-19). Ray Consolidated soon became the largest producer in the district. The property continued to be an important source of copper, though ownership was changed to Ray Division of Kennecott Copper Corp.
The Ray district has produced a surprisingly small amount of gold, considering the large production of copper. Total gold production through 1959 was about 35,250 ounces.
The rocks exposed in the Ray district are similar to those of the Globe-Miami district. The oldest rocks are granitic intrusives and Pinal Schist of Precambrian age. Unconformably overlying them are altered sedimentary rocks of the Apache Group and the Troy Quartzite of late Precambrian age. Great sills of diabase were intruded into the Apache Group and the older rocks (A. F. Shride, oral commun., 1962). In the eastern part of the district lower Paleozoic sedimentary rocks are exposed in a few fault blocks. Dikes, sills, and irregular bodies of quartz diorite, quartz monzonite, and granite, of probable early or middle Tertiary age intrude the Precambrian and Paleozoic rocks. Conglomerate and a dacite flow of late Tertiary age and the Gila Conglomerate of Tertiary and Quaternary age discordantly overlap the older rocks (Ransome, 1919, p. 123-126). The rocks in the eastern part of the district are displaced by a mosaic of normal faults. West of Mineral Creek, which is in general parallel to the Ray fault (the major structural element in the district), Precambrian and Tertiary rocks are exposed and are considerably less faulted than the rocks east of Mineral Creek (Ransome, 1919, p. 127, 128).
The ore deposits consist of disseminated chalcocite of secondary origin associated with primary pyrite and are chiefly in the Pinal Schist and in diabase adjacent to quartz monzonite intrusives and in the intrusives themselves. The primary deposits, which underlie the chalcocite ore, contain pyrite and chalcopyrite. The chalcocite ore is generally overlain by a leached capping of variable thickness which locally is rich in chrysocolla and malachite. The ore bodies are undulate, flat-lying masses of irregular outline and thickness (Ransome, 1919, p. 12).
Originally Posted by groakes
Originally Posted by SolipsismX
1) My comment went way, way over your cranium.
2) If you want to discuss the most abundant metal in the Earth's crust then the term aluminum predates aluminium. It was the British that decided after the fact and without consideration to any standards body to make the word use the -ium suffix to more closely match other metals on the periodic table.
3) Your argument that every American is pretentious for not converting their language in the last 24 years because of what the IUPAC wants is ridiculous.
Not that I really care but....http://www.worldwidewords.org/articles/aluminium.htm
The metal was named by the English chemist Sir Humphry Davy ..., even though he was unable to isolate it: that took another two decades’ work by others. He derived the name from the mineral called alumina, which itself had only been named in English by the chemist Joseph Black in 1790. Black took it from the French, who had based it on alum, a white mineral that had been used since ancient times for dyeing and tanning, among other things. Chemically, this is potassium aluminium sulphate...

Sir Humphry made a bit of a mess of naming this new element, at first spelling it alumium (this was in 1807) then changing it to aluminum, and finally settling on aluminium in 1812. His classically educated scientific colleagues preferred aluminium right from the start, because it had more of a classical ring, and chimed harmoniously with many other elements whose names ended in –ium, like potassium and magnesium, all of which had been named by Davy.

The spelling in –um continued in occasional use in Britain for a while, though that in –ium soon predominated. In the USA, the position was more complicated. Noah Webster’s Dictionary of 1828 has only aluminum, though the standard spelling among US chemists throughout most of the nineteenth century was aluminium; it was the preferred version in The Century Dictionary of 1889 and is the only spelling given in the Webster Unabridged Dictionary of 1913. Searches in an archive of American newspapers show a most interesting shift. Up to the 1890s, both spellings appear in rough parity, though with the –ium version slightly the more common, but after about 1895 that reverses quite substantially, with the decade starting in 1900 having the –um spelling about twice as common as the alternative; in the following decade the –ium spelling crashes to a few hundred compared to half a million examples of –um.
Abolitionist John Brown played a major role in the lead-up to the Civil War. Brown was a strong supporter of committing violence against the South to end slavery, and on October 17, 1859, Brown, along with 19 other followers armed with weapons, led a raid on the federal armory located at Harper's Ferry, Virginia. The plan was to capture the weapons stored there and pass them along to local slaves. A force of U.S. Marines put down the raid, and Brown was captured, convicted and hanged. (Varon 326-34).
Scientists announce malaria breakthrough
Malaria kills about 800,000 people a year, most of them children in sub-Saharan Africa. And, while researchers haven’t found a cure, they’ve made a huge leap in our understanding of how the disease works - one that could finally lead to an effective vaccine. By SIMON ALLISON.
Before you can cure a disease, you must understand how it works. Much of the multibillion-dollar industry of vaccine development is not focussed on dreaming up a vaccine, but on discovering how a disease enters and then functions in the human body. Once that’s understood, a cure can be contemplated.
That’s why the breakthrough announced by scientists from the Sanger Institute is so important. They’ve been able, for the first time, to pinpoint exactly how malaria enters the bloodstream. Once you’re bitten by the carrier mosquito, the researchers discovered that the malaria parasite relies on a single protein receptor to gain entry to the red blood cells where it multiplies. Once it’s in the blood, it becomes dangerous to humans. Knowing how this process works might mean drugs can be developed to target the particular protein receptor, which researchers describe as the “lock” into which the malaria parasite inserts its “key” to enter. All they need do now is change the lock.
“Our research seems to have revealed an Achilles heel in the way the parasite invades our red blood cells,” said Gavin Wright, co-leader of the study. “Our findings were unexpected and completely changed the way in which we view the invasion process. The great hope is that this breakthrough will facilitate the path towards a more effective vaccine.” This could still take a long time – up to a decade – but it’s an important new direction for researchers.
This is the second major development around malaria this year. Last month pharmaceutical company GlaxoSmithKline released the test results of its experimental RTS,S vaccine, which appeared to halve the risk of malaria in children. DM
- Malaria: beginning of the end? in the Independent.
Zoologger is our weekly column highlighting extraordinary animals – and occasionally other organisms – from around the world
One of the most important things any animal can do is to tell potential mates about themselves. They have all sorts of ways to do it, from peacocks' ridiculously large and ornamented tails to the sharp suits and gym-honed bodies of posing human males. If it weren't a matter of life and death, it would all seem very, very silly.
Many of these signals come in the form of secondary sexual characteristics: parts of the body that aren't directly involved in producing offspring, but are nevertheless associated with the process. The peacock's train is one example; in humans, male body hair is a signal of reproductive maturity, and large female breasts are renowned for attracting male attention.
But female red-spotted newts may need to look a little more closely when they choose their mates. Specifically, if they want a good one, they would be well-advised to take a look at his kidneys. The same may be true of many salamanders.
Kidneys before sex
Red-spotted newts have a peculiar way of mating. In common with many other salamanders, the male produces a blob of jelly called a spermatophore, which carries a consignment of sperm. The female stores it until she is ready to reproduce.
But according to Dustin Siegel of Southeast Missouri State University in Cape Girardeau, before all this happens, the male's kidneys have to do their bit.
Siegel captured two or three adult males every month for a year and dissected their kidneys. During the mating season, the rear portions of the kidneys secreted an unidentified liquid, but didn't do so for the rest of the year.
The tubes running through the kidneys also changed shape for the mating season, developing thicker walls. The changes in the kidneys mirrored those in other secondary sexual structures – for instance, the males' tails became thicker for the mating season.
That is good evidence that the kidneys are indeed involved in sex, but what they are doing remains a mystery. The liquid they make is a glycoprotein – a protein bonded to a carbohydrate – but it has not been specifically identified.
Siegel points out that the liquid from the kidneys drains into the Wolffian ducts that house the sperm, so he suspects that it helps to boost the sperm's motility or lifespan. "The secretions do mix with sperm during spermatophore formation," says Siegel. "It could be to do with sperm motility or morphology, or viability."
The substance could also be a pheromone. But it probably doesn't contribute to the spermatophore, which is made by glands in the cloaca – the urinary and reproductive opening.
Kidneys with sex-related functions are an overlooked component in two other animal groups besides salamanders, Siegel says.
The kidneys of snakes and lizards look similar to those of the red-spotted newt, and also secrete unknown liquids that seem to be involved in sex. They may help make the copulatory plugs that males of some species use to prevent other males from mating with females, but that remains to be seen. And male sticklebacks use their kidneys to produce a glue that holds their nests together.
One can only hope that kidney-based pornography won't emerge as a genre anytime soon.
Journal reference: Journal of Herpetology, DOI: 10.1670/11-013
Teachers who choose to use the Iditarod in the classroom DO NOT teach “Iditarod”. They teach their school’s content curriculum. They align their lessons and activities to standards, objectives, and 21st-century teaching goals. Teachers choosing to use the Iditarod as a vehicle of instruction use research-based methods of instruction and technology.
Please notice our curriculum is listed by PreSchool/Kindergarten, Elementary, Middle School, and High School levels, and the content area. Standards are included in our lessons. This should not limit use of our lessons, but expand use, as many are easily adaptable to other grade levels and content areas.
Literacy plays a key role in our curriculum.
Findings from the groundbreaking research were published in Nature Communications.
The world’s critical system of ocean currents might be on the brink of collapse sooner than anticipated, presenting a possible global catastrophe due to the escalating climate crisis, says a new study.
The study suggests that the Atlantic Meridional Overturning Circulation (AMOC), a crucial system that includes the Gulf Stream, could potentially cease around the mid-century or possibly as soon as 2025.
Renowned climate scientists unaffiliated with the study have agreed on the decreased stability of the current, though they also advise a level of restraint when interpreting the research outcomes.
The AMOC, often depicted as an oceanic conveyor belt, comprises currents that transport warm waters from the northern to the southern hemisphere and vice versa. The circulation process happens over an extended period, facilitating a significant cycle within the Atlantic Ocean. Additionally, this system is responsible for carrying essential nutrients vital for sustaining marine life.
The Gulf Stream, part of the AMOC and an extensively studied segment of it, is a wind-driven current crucial for maintaining a warm climate across much of Europe and along Florida’s east coast, according to the National Oceanic and Atmospheric Administration (NOAA).
The NOAA affirms that England would experience a significantly “colder climate” in the absence of the warm waters facilitated by the Gulf Stream.
According to experts, the predicted collapse of the AMOC raises “major concern” since it is identified as one of the vital tipping elements within the Earth’s climate system.
For their research, scientists from the University of Copenhagen utilised sea surface temperature data dating back to 1870. This data was used to trace changes in the Gulf Stream’s currents across the years and predict when a potential tipping point could occur.
In the past, scientists have raised concerns over several studies that indicated a swift deceleration of the AMOC.
However, it’s worth noting that the UN’s Intergovernmental Panel on Climate Change (IPCC) predicts that the Gulf Stream isn’t likely to collapse within this century. The panel expects the current to “weaken but not cease,” adding a somewhat optimistic note to the unsettling predictions. | <urn:uuid:f42ed0fd-29bb-4894-81ed-660c4e10d96e> | CC-MAIN-2023-40 | https://dohanews.co/crucial-ocean-currents-system-faces-catastrophic-collapse/ | s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233511021.4/warc/CC-MAIN-20231002200740-20231002230740-00743.warc.gz | en | 0.936889 | 461 | 3.546875 | 4 |
A specialist in global development policy says efforts to tackle some of the world's biggest problems are virtually ineffective because they cannot be measured.
Amir Attaran made his comments as world leaders gather at the UN this week to discuss progress towards the Millennium Development Goals (MDGs) — internationally agreed targets aimed at addressing problems such as disease, poverty, and hunger.
Attaran, whose views are published today (13 September) in PLoS Medicine, says that unless the UN uses scientifically valid measurements, the millions of lives that the eight MDGs were designed to save could be lost.
Laudable as the goals are, he says, the scientific evidence at their core is so unreliable that the targets themselves are almost meaningless.
For example, when the MDGs were set in 2000, scientists knew little about how many people are infected with, and die from, malaria. So with no baseline figure to work from, argues Attaran, it is "futile" to talk of reducing the number of people dying from malaria.
Measuring the effect of tuberculosis is similarly difficult, he says. No country counts the numbers of new patients with the disease, a measure that the MDGs relating to tuberculosis focus on. The World Health Organization's method is too weak to be used with any confidence, says Attaran, who is based at Canada's University of Ottawa.
The implication of this, he states, is that scientists have little idea of whether the MDGs for malaria and tuberculosis are on schedule.
Tomorrow, global leaders will meet to discuss progress on the goals. The UN has ordered that they should not be "distracted by arguments over the measurement of the MDGs". Attaran says the order is "illogical and sabotages the MDGs' chances of success".
Attaran adds that this delay means that the next chance for UN scientists to present more accurate measurements will probably be in 2010. This is just five years away from when the goals are meant to be achieved.
In view of the difficulty in measuring the MDGs, making them international goals to which almost all scientific effort is now directed is misguided, he says.
In a response to Attaran's comments published today on the SciDev.Net website, Jeffrey Sachs and colleagues at the UN Millennium Project say "although Attaran raises important points on the poor quality of data for measures of progress on the MDGs, he uses these findings to draw the wrong conclusions" (see Millennium Development Goals 'not doomed to fail').
Attaran does not suggest discarding the goals, but altering them to ones that can be properly measured.
He adds that the suggestion by some scientists that the MDGs are 'aspirational' and not a measuring exercise is "shameful". European or US leaders, he points out, would not describe efforts to reduce unemployment or teen pregnancy in their own countries as mere aspirations.
If the UN wishes to maintain its credibility, and save the lives it has promised to, says Attaran, it must engage in "more thoughtful and timely action".
The first step in furniture construction involves transforming rough timber (cut in a paddock from a fallen tree and then air dried) into flat and square timber. This process is called 'dressing' the timber or, because it is often done by machine, 'machining'.
Step 1: Jointing
The first step in this process is called 'jointing'. The key concept is to find a surface you know is flat and then make your timber surface parallel to it. A long-bed plane is used to remove the peaks in the rough timber. (The ideal length of the bed is twice the length of your timber so it is supported at all times.)
In this set up I am using a 3hp 12" woodfast combo machine set up in jointer mode with a roller outfeed table. The rotating cutter (below the aluminum guard) is beneath a large flat iron surface. Moving the rough timber over this surface removes small amounts of the peaks of the board (~1mm). By repeating this process eventually all the peaks are removed and you end up with a flat surface.
This process can also be done by hand. (The next time you see a piece of very old timber building or furniture, remember that all that timber was prepared by hand.) A long hand plane (normally a No. 7 or 8) is used to remove the peaks. This requires constant checking to monitor your surface and stay square. (Winding sticks, a straight edge, and a small square are invaluable accessories at this stage.)
Occasionally, you'll have a board that is twisted: opposite corners lift, causing the board to rock. If you don't recognize this, the board can rock on the jointer, which will then fail to flatten it. The fastest fix is to use a jack plane (No. 5) to quickly knock down the high corners. Another trick for cupped or warped boards: always plane with the curve concave-side down so the board doesn't rock in the middle.
Step 2: Thicknessing
After you have jointed two surfaces (a face and an edge), forming one square corner, you can move on to thicknessing. Thicknessing is the process of planing the opposite face, making it parallel to the face you just jointed.
A machine thicknesser (see the Woodfast combo, now in thickness-planing mode) uses a cutter head above the piece you are planing to cut at a set distance from your flat reference surface. If your reference surface is not flat, you simply copy that error to the other side.
Step 3: Sawing
Now we have three surfaces square and flat. (If you can balance the board on its thin edge in the thicknesser, you can use it to set the width of your board. Normally, however, the board will be too thin to do this safely.)
To cut (rip) the board to width I use my circular saw and a guide set at the required distance. (This technique is obviously less accurate than a table saw but much faster than cutting by hand.)
Now you have a board that's square and flat on four sides, also known as 'four square'.
The Shaker Table
The process of dressing allows me to take rough timber which has twisted and warped as it's dried...
...and form beautiful flat and square timber ready for joinery. | <urn:uuid:f78be930-e1fb-4594-8991-3795431e957a> | CC-MAIN-2016-07 | http://theloveofwood.blogspot.com/2010/07/getting-dressed-in-morning.html | s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701158609.98/warc/CC-MAIN-20160205193918-00075-ip-10-236-182-209.ec2.internal.warc.gz | en | 0.946771 | 683 | 3.046875 | 3 |
Evaluating the risks posed by synthetic biology
Last week, nearly 70 experts from around the world gathered at EPFL for a workshop on potential threats arising from synthetic biology. Technologies developed in this field aim primarily at treating diseases and combating the effects of climate change, but they can also have unintended consequences or even be used for malicious purposes.
This four-day workshop was organized by EPFL’s International Risk Governance Center (IRGC@EPFL). We spoke with Marie-Valentine Florin, the Center’s executive director, about IRGC’s role at EPFL.
How can synthetic biology be used improperly or maliciously?
Synthetic biologists are able to construct new biological systems and functions with applications in energy, health care and farming. Yet these same technologies can also be hijacked to create potentially dangerous pathogens for which there is no known treatment.
So as researchers, we need to ask ourselves what we can do to advance science and technology for the common good while at the same time managing the risks of “dual-use” research, or research that can be turned against people or the environment.
The EPFL workshop, which was funded by NATO’s Science for Peace and Security Programme, gave scientists, national and international regulators, security agencies and businesses a chance to pool their expertise in order to clarify the causes and potential consequences of these risks and refine existing response strategies.
The working groups formed during the workshop will now turn their discussions into a report and a book discussing the various problems, current and potential future solutions, existing obstacles and recommendations. The workshop participants explored both the failure of well-intentioned researchers to take necessary precautions and the impact of their negligence, as well as the danger that knowledge or material will be used for malevolent purposes.
What is IRGC@EPFL?
IRGC was created by EPFL in 2017 to serve as a forum for cross-disciplinary dialogue on complex, uncertain and ambiguous risks – which are often a counterpart to opportunities. Our goal is to give policymakers the information they need to make decisions based on solid scientific and technological foundations.
Most technologies are aimed at reducing existing risks – think climate change, diseases and natural disasters – and this is something that should be promoted by public policies. It is also essential to create positive incentives through targeted research programs, financial support, and standards that reward performance gains. But new dangers can always arise, and this is where IRGC comes in. Our recommendations are meant to highlight key risks and challenges and help identify possible strategies to deal with them.
We have a dual mission. First, we seek to develop widely applicable risk governance concepts and instruments. Risk governance refers to the processes, standards, regulations and structures that come into play when risk-related decisions have to be made. This includes assessing, managing and communicating about risks with the involvement of the various stakeholders. We then issue recommendations, mainly for public policymakers, about how to manage the risks posed by some emerging technologies.
Why does EPFL need IRGC?
All major universities around the world have an institute or center that studies the link between technology and public policy. The concept of risk is central because it is what justifies public intervention. Yet the risk governance approach that we take encompasses more than simple risk management. For a university like EPFL, it means creating the conditions necessary for new technologies to be adopted. For example, we commonly recommend that new technological applications must not only improve existing performance in some way, but they must also be economically viable and generally responsible, i.e. socially acceptable and environmentally respectful.
Our role at EPFL is to answer researchers’ questions about such topics. We can also be more proactive in certain areas, such as insisting on the importance of taking cultural differences into account when assessing risk acceptability – in genome editing, for example – and raising awareness of the role and place of ethics in researchers’ work.
What is IRGC currently working on?
Our work is focused on two of EPFL’s areas of expertise: digitalization and life sciences. In digitalization, this year we are working on deepfakes, which is when text, audio or video content is falsified in order to mislead or manipulate people. We’re looking at what steps can be taken to ensure machine learning is used to improve diagnostics, predictions and decisions without being applied to actually producing deepfakes as well. In the life sciences, we have set up a program to help decision-makers achieve the social, regulatory and economic conditions needed to promote fairness in the burgeoning field of precision medicine. This year, our focus is on value creation in this field. | <urn:uuid:957119e4-062a-4ff2-96d2-cc4eb5244a65> | CC-MAIN-2020-50 | https://actu.epfl.ch/news/evaluating-the-risks-posed-by-synthetic-biology/ | s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141164142.1/warc/CC-MAIN-20201123182720-20201123212720-00526.warc.gz | en | 0.948706 | 947 | 2.703125 | 3 |
People Who Follow 4 Simple Habits Sharply Reduce Risk of Serious Disease.
What are those 4 habits? Here they are:
1. Never smoke.
2. Exercise at least 3.5 hours per week.
3. Maintain a body mass index (BMI) under 30.
4. Follow a diet high in fruits, vegetables, and whole-grain breads, and limited in meats.
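Habit 3 refers to BMI without defining it; the standard formula (not stated in the post itself) is weight in kilograms divided by height in meters squared. A minimal sketch:

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight (kg) divided by height (m) squared."""
    return weight_kg / height_m ** 2

# Example: an 85 kg person who is 1.80 m tall
print(round(bmi(85, 1.80), 1))  # 26.2 -- under the 30 threshold in habit 3
```

The example person and numbers are illustrative only, not from the article.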
This is a mnemonic for exercises that can be done with just body weight: PLSS
4 Healthy Habits That Cut Disease Risk. WebMD, 2009.
Pessimism and Cynicism Appear to Increase Risk of Heart Disease and Death http://bit.ly/synPO
Healthy Living Is the Best Revenge: Adhering to 4 simple healthy lifestyle factors prevents chronic diseases http://bit.ly/564PR
How to stay healthy while traveling | <urn:uuid:f59f42c0-8670-468e-8b40-9e494a9a47e3> | CC-MAIN-2021-25 | http://casesblog.blogspot.com/2009/09/4-healthy-habits-sharply-reduce-risk-of.html | s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488539480.67/warc/CC-MAIN-20210623134306-20210623164306-00428.warc.gz | en | 0.8336 | 198 | 2.578125 | 3 |
Kartik Purnima is literally defined as the Purnima or the full Moon night of the Kartik Month. It is a very auspicious festival celebrated by Hindus, Sikhs and Jains all over India. It is also well known by the names of Tripuri Purnima or Dev Diwali meaning Diwali for the Gods. The famous Pushkar fair or the Pushkar Mela ends on this day, the most auspicious day of the fair. It is celebrated with decorating and gifting deep and diyas. Thus, the day is also known as Kartika Deeparatna.
Story Behind Kartik Purnima
The story behind Kartik Purnima revolves around the demon son of Tarakasur, Tripurasur. He had become very powerful and had even defeated the Gods. He had even managed to create three cities in the heavens called Tripura. Even the Gods feared him. Lord Shiva then incarnated as Tripurantaka or ‘killer of Tripurasur’, and destroyed him with his cities in a single shot of the arrow. The Gods were overjoyed and celebrated the day as a festival of illumination. Thus, the name Dev Diwali.
Kartik Purnima is the birthday of Matsya, Lord Vishnu’s fish incarnation.
Why is it Celebrated
A holy bath at Pushkar or in the Ganges river, especially at Varanasi is deemed very auspicious. Kartik Purnima is the most popular day for bathing in the Ganges at Varanasi. This auspicious Snaan is called Kartik Snaan. A ritual bath on Kartik Poornima in the Pushkar Lake is considered to lead one to salvation.
The full-moon day of Shukla Paksha in the month of Kartik is known as Kartik Purnima. The shubh Purnima tithi lasts from 11:17 pm on 13th November 2016 to 7:22 pm on 14th November 2016.
Many festivities and fairs including the Pushkar fair that begin on Prabodhini Ekadashi end on Kartik Purnima. Known as the day of illumination, Kartik Purnima fills life of one and all with the light of prosperity and luxuries. The holy dip or the ‘Kartik Snaan’ is believed to eliminate all sins.
It is a very auspicious day of the year, when worshipping Lord Vishnu is believed to eliminate all sins and evil. The rituals include:
On Kartik Purnima day, lamps are lit at intersections, under Peepal trees, in temples, etc.
Kartik Purnima is considered to be the day of Lord Shiva, Lord Brahma, Lord Vishnu, Lord Angira, and the Sun God.
The ancestors are offered prayers on this day.
Devotees try to take a holy dip on this day, as the Gods are believed to descend into the holy waters.
Donations (daan) are given to the poor as per the devotee's means.
Rudraabhishek is done with yagna to appease Lord Vishnu and Goddess Lakshmi to attain eternal wealth and prosperity. | <urn:uuid:5df098d4-3085-4aaa-9597-0f144f628ace> | CC-MAIN-2017-22 | http://kamiyasindoor.com/Kartik-Purnima-2016.html | s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463608652.65/warc/CC-MAIN-20170526090406-20170526110406-00115.warc.gz | en | 0.9399 | 686 | 2.734375 | 3 |
- Get informed. Acknowledge that the system has changed, and use all the resources available, from websites to infosessions to meetings with guidance counselors, to gather information about applying to college today, especially regarding the schools your child is interested in. Don’t assume—look it up.
- Talk to other parents. Parents whose children are slightly older than yours and who have been through the process recently will have valuable information and personal insight to share. Ask them what they wish they’d known.
- Start early. Many application tasks, especially those that are research-related, can be started well in advance of your child’s senior year. Your plans don’t need to be set in stone years in advance, but it’s a good idea to have a sense of direction.
- Stay realistic. It’s okay for your child to have big dreams and apply to some reach schools, but you should help them manage their expectations about application outcomes and match their accomplishments to appropriate colleges. You probably think that your student is pretty great, but admission to selective colleges can be brutal, and many qualified applicants are rejected. Think practically, and have a backup plan.
- Help your child build a strong applicant profile. Starting early in high school, ask questions and offer opportunities for your child to figure out what subjects or career paths interest them most. Encourage them to maintain strong academic performance and extracurricular involvement, and address academic or other problems early before they become major obstacles.
- Openly discuss paying for college. You may be uncomfortable talking to your child about money, but it’s essential that your child knows what the family can afford. Like it or not, cost is a major factor in choosing a college, and your student needs to know which schools are practical possibilities and how much financial aid might be required.
- Be prepared to deal with practical tasks. During application season, you may be able to help in innumerable small and large ways, from arranging for tutoring and assistance to checking in about deadlines to helping to gather information. Individual needs depend upon your student—talk to them to figure out where you can be most helpful.
- Always offer personal support, encouragement, and love. You’re a parent, not an admissions coach, and your close personal relationship with your child is important. While you may have to push your child to be focused and responsible sometimes, you also need to look out for their health (mental and physical) during this stressful time, help them build good habits, and respect their need to make independent choices (and sometimes mistakes) about their intended adult path.
- A Parent’s Guide to College Planning
- What If My Student Only Wants To Go To One College and Won’t Apply Anywhere Else?
- What Parents Need to Know About SAT and ACT Studying Prep
- Your Parents’ Application Experience Was a Lot Different from Yours
- 6 Things You Wish You Could Say to Parents During College Application Season
Parent Perspective: What You Need to Know About Today’s College Applications
Post-secondary education is an ever-evolving field. It’s generally understood that someone who applied to college in the 1950s or ‘60s encountered an admissions environment very different from the one you’ll find today, with different application processes, different admission standards, and tuition rates that seem impossibly low from today’s perspective.
What you might not realize is that even if you went to college more recently, in the ‘80s or ‘90s, your experience also doesn’t reflect the current state of college admissions. Acceptance rates, standardized tests, admissions requirements, and college costs have all changed significantly within the space of a generation. At the same time, it’s more important than ever before that young people go to college—the career and income benefits are substantial.
If you’re a parent who is currently helping a student navigate the world of college admissions, and especially if that child is your first to go through the application process, it’s important that you update your expectations and assumptions about what that process will hold. In order for your child to successfully get admitted to a college that’s a great fit for them, they’ll need your informed help. Here’s our advice for what you need to learn and how you can get up-to-date.
The Changing Face of College Applications
Now more than ever, a bachelor’s degree is a highly valuable asset when thinking about career opportunities. You yourself may have come of age in a world where attractive career prospects didn’t necessarily require a college education, but the country is changing. Employers are raising their educational requirements for entry-level jobs, both because those jobs now require additional skills, and because the competitive job market allows them to be more choosy.
With more students applying to more colleges, and applicant pools at top-tier schools continuing to grow, it’s also gotten harder for a student to get into their college of choice. As we’ve covered many times on the CollegeVine blog in posts like Why are Acceptance Rates so Low?, the percentage of students accepted to top-tier colleges has declined precipitously.
For example, in 2016, Stanford University, currently the most selective school in the nation, accepted fewer than 5% of undergraduate applicants, less than one in twenty. Compare this to the 1995-1996 application season, in which Stanford accepted nearly 16% of applicants—still highly selective, but quite a different figure.
One important takeaway is that there’s always some element of chance to elite college admission. Competitive schools have to turn down many qualified applicants simply because of space constraints, and your child may very well be one of them. There’s simply no way to be sure, so you’ll need to be realistic about your expectations and help your child be realistic about theirs.
Overall, the college application process is just more serious than it used to be. Parents and students routinely spend considerable time and money perfecting applications, preparing for standardized tests, and piling on extracurriculars, leadership positions, specialized summer programs, and other enrichment experiences. Today’s aspiring college applicants tend not to get much unscheduled free time.
The stakes are high, and there’s a lot of pressure involved. Many college admissions advisors, including those here at CollegeVine, encourage today’s applicants to think strategically when applying to college. It’s no longer just about finding a few schools your student likes—it’s about compiling a carefully chosen list of schools to maximize your student’s chances of getting admitted somewhere that’s a good fit for them.
How you choose to approach this new set of realities is up to you and your student. However, you need to both be aware that getting accepted to a top-tier school is difficult, and your student will be competing with applicants who have put this kind of concentrated effort into making themselves compelling candidates for college admission.
Paying for College: The New Realities
Many different factors come into play when figuring out which colleges might be strong matches for your student, but one of the most significant is cost. Need-based financial aid, scholarships, and loans can help make a college more affordable, but the bottom line is that without a way to pay for a college, your student can’t attend.
You’ve probably already heard that the cost of getting a college education has gone up dramatically over time, and that’s true even within the last few decades. In fact, college tuition increases have outpaced inflation, meaning that not only is the dollar amount higher, but it’s actually harder for the average family to afford college than it used to be.
The average cost of tuition plus room and board at four-year colleges was $23,600 for the 2014-2015 school year, compared with $7,602 for the 1990-1991 school year. As of 2017, at a few especially expensive colleges, the yearly estimated cost of attendance for an average student is nearing $70,000.
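To put the two averages quoted above in perspective, a quick calculation using only those figures (the growth-factor and annualized-rate arithmetic is my own, not from the article, and ignores inflation adjustment):

```python
cost_1990 = 7_602   # average tuition + room and board, 1990-1991 (from the article)
cost_2014 = 23_600  # average tuition + room and board, 2014-2015 (from the article)
years = 24          # school years between the two data points

growth_factor = cost_2014 / cost_1990           # ~3.1x in nominal dollars
annual_rate = growth_factor ** (1 / years) - 1  # compound annual growth rate

print(f"{growth_factor:.1f}x overall, {annual_rate:.1%} per year")
# prints: 3.1x overall, 4.8% per year
```

A roughly 4.8% nominal annual increase is what "outpaced inflation" looks like in practice, since US consumer inflation averaged well below that over the same period.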
For top-tier colleges, it’s close to impossible for most students to pay their own way based on savings and part-time or summer income, as many students did in the past. Financial aid can help a great deal, but most well-regarded colleges award financial aid based at least in part upon the family’s financial need, so your income and assets will be taken into account when assessing your student’s aid eligibility.
What does this mean for you as a parent? It means that if you make assumptions about college costs based on your own experiences from two decades ago, you and your student likely won’t be adequately prepared for the realities of college costs today. Updated information is essential if you’re going to make informed decisions about saving for college, seeking financial aid and scholarships, and choosing colleges your family can afford.
Top Tips for Parents: Getting Up to Date and Helping Your Student
If your established notions about college admissions process were formed a long time ago, you may feel a little overwhelmed by the amount you need to learn. However, many resources exist to help get you on the right track. Here’s a selection of our best advice for getting informed, adapting to the current state of the admissions world, and helping your child make wise choices throughout the application process.
For More Information
Here are some of the other parent-focused posts you’ll find on the CollegeVine blog and our sister blog, CollegeVine Zen.
The college application process can certainly be stressful, but having a trusted ally with experience navigating the admissions world can help tremendously. Our admissions consultants can help with everything from choosing colleges to perfecting essays to polishing your student’s overall applicant profile. To learn more about the services we offer, visit the CollegeVine College Application Guidance Program on our website. | <urn:uuid:6c86ed34-b2e2-4192-a8d4-87caf38b4606> | CC-MAIN-2018-09 | https://blog.collegevine.com/parent-perspective-what-you-need-to-know-about-todays-college-applications/ | s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891813088.82/warc/CC-MAIN-20180220185145-20180220205145-00456.warc.gz | en | 0.955682 | 2,130 | 2.640625 | 3 |
Walking in Stride
How Using the UNH Campus Walking Guide Can Help You Get Your Daily Steps!
As students we hustle to our classes every day throughout campus, but have you ever wondered how far you have walked in a day? Now you can figure out how far you have walked using the UNH Campus Walking Guide! This guide uses Google Maps to help track how far you have walked in a day. The guide is simple to use and even has the option to use this map with your cell phone!
Walking is a very important part of our lives and a great, relaxed form of physical activity. The US Surgeon General recommends that adults walk about 10,000 steps per day, which is about five miles. Using the UNH Campus Walking Guide, you can estimate how many miles you walk in a day to see if you meet the US Surgeon General's daily recommendation.
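The guide's own numbers (10,000 steps ≈ five miles) imply a conversion factor of roughly 2,000 steps per mile. A small sketch of that conversion; the constant below is derived from the article's figures, not an official stride-length standard:

```python
STEPS_PER_MILE = 2_000  # implied by "10,000 steps per day, which is about five miles"

def steps_to_miles(steps: int) -> float:
    """Convert a step count to approximate miles walked."""
    return steps / STEPS_PER_MILE

print(steps_to_miles(10_000))  # 5.0  -- the daily recommendation
print(steps_to_miles(7_500))   # 3.75 -- a typical partial day
```

Actual stride length varies by person, so treat the result as a rough estimate.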
Here are some tips on how you can help make sure you achieve your steps goal every day!
1. Listen to music while walking, it makes it more relaxing!
2. Bring a friend with you for a walk!
3. Take the long way to class if you can! | <urn:uuid:950ac165-7acc-4298-891d-6b6831d72586> | CC-MAIN-2021-04 | https://www.unh.edu/healthyunh/blog/physical-activity/2016/04/walking-stride | s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610704833804.93/warc/CC-MAIN-20210127214413-20210128004413-00677.warc.gz | en | 0.937527 | 242 | 2.6875 | 3 |
Anyone can care for a thriving flower garden; although it does take some time and effort. Below are some tips to keep your flower garden happy and healthy.
- The Three Necessities: Water, Sunlight and Fertile Soil
Your flower garden must have an adequate supply of water, sunlight, and fertile soil. Any lack of these basic necessities will affect your garden's health. For watering and sunlight, the best thing to do is follow the plant tag suggestions. During hot and dry spells it's okay to water more frequently, but too much water can cause your flowers to rot. It's best to water at the base of each plant, as overhead watering can encourage the spread of diseases. Add organic matter and fertilizer to your garden yearly: your flowers, as well as earthworms and microbes, feed on the added organic matter, leaving reserves low for the following year.
- Flower Selection
There are four types of flowers: annuals, biennials, perennials, and bulbs. Annuals typically grow and bloom for one season. For biennials, the leaves and stems grow the first year and the flowers are produced the second year. Perennials and bulbs bloom and grow for several years. The best thing to do when planting your garden is to mix all four types to ensure that something is always blooming. Also, switching out annuals yearly can add interest and different colors and textures in front of your backdrop of perennials.
Most insects are beneficial to your garden, while a few can be detrimental. Insects such as bees and butterflies not only pollinate your flowers but also help fertilize them. As for the “bad” insects, look for clues in the damage they leave behind, such as chewed leaves. When choosing an insecticide, choose one that yields the highest pest death rate with the least impact on the environment and beneficial critters.
Deadheading keeps gardens neat and blooming. It’s the snipping off of the flower head after it wilts; this allows a new flower to grow and bloom.
LIME (Citrus X Aurantifolia)
Lime is a citrus fruit, typically round in shape and green in color. It is widely grown in tropical and subtropical areas. It is a key ingredient in certain pickles and chutneys, and lime juice is used to flavor drinks, food, and confections.
- It contains 0.5 g of protein.
- It is a good source of vitamin C, calcium, and potassium.
- It contains 1.5 g of fiber.
- It works as an immunity booster.
- It helps in lowering the risk of cancer.
- It can prevent kidney stones.
Varieties: Balaji (Tenali Selection), Rasraj (IIHR hybrid)
Welcome to Emotional Intelligence Guide
Emotional Intelligence Selfawareness Examples Article
Emotional Intelligence Used Against Depression and Anxiety (from: www.SelfImprovementsGuide.com)
An effective weapon against depression and anxiety is emotional intelligence. Emotional intelligence is the ability of a person to perceive and understand emotions. Through emotional intelligence, depression and anxiety can stop being problems.
How does one use emotional intelligence against depression and anxiety?
Understanding - This is the key to using emotional intelligence against depression and anxiety. Because of emotional intelligence, people can understand their emotions. This is the first step in breaking the hold of these emotions. People use emotional intelligence against depression and anxiety by understanding these emotions. They learn the source of their emotion and thus, learn the most effective way to fix it.
Distinction - A person with emotional intelligence can use it to distinguish what emotions are. They are able to distinguish their feelings from their thoughts. They are able to distinguish the part of emotion that comes from them as opposed to what they think is causing their feelings. It is this distinction that enables a person to use emotional intelligence against depression and anxiety. They are able to take control of their emotions and use them properly.
Cultivation - Depression and anxiety are often caused by unfulfilled emotional needs. Emotional intelligence is all about cultivating emotions. People have been ignoring their emotions for years, but they fail to realize that it has to come out sooner or later. Through emotional intelligence, people combat depression and anxiety by letting emotions grow naturally in a controlled manner.
There are other ways how emotional intelligence is used against depression and anxiety. However, these few ways should be able to give you an insight on how emotional intelligence works and why it is good for you.
There are episodes in a person's life when the world just seems so bleak. There are times when one's existence just seems so pointless. For some people, these episodes are short and temporary. For these people the emotions of depression and anxiety are nothing to be alarmed about. However, there are those who suffer from these emotions and consider them as threats to their goal of living a happy life.
There are people for whom depression and anxiety are barriers to their success. These people find that their problems of depression and anxiety aren't mere episodes. They consider depression and anxiety as threats to their lives. Why is this so?
For starters, anxiety and depression are very negative emotions. These emotions, when they come in fits, can have very detrimental effects on a person's life. When you are depressed, you won't be able to think clearly. Your perspective on things will be distorted, which means you will not be able to make objective decisions.
Anxiety prevents a person from thinking things out properly. A person suffering from anxiety often finds that he or she is unable to do anything about his or her situation. People who are overcome with anxiety often experience a feeling of helplessness and do one of two things:
a) They resign themselves to their situation. Because of anxiety, a person's mind will shut down and focus on his or her anxiety instead of the problem. Because of anxiety they will be unable to act and just wait for what they think is inevitable.
b) They act rashly. People who are overcome by anxiety often lose focus of the world around them. They make decisions that are fueled by their anxiety. Because of this, they often end up making all the wrong moves.
Christine P Gray is a recognized authority on the subject of creativity. Her website www.selfimprovementsguide.com provides a wealth of informative articles and resources on everything you will need to know about self improvement. All rights reserved. Articles may be reprinted as long as the content and links remains intact and unchanged.
Movile (B): Building New Venture Opportunities
The (B) case explained how Movile expanded their business, moving into the smartphone market from their core business of feature phones in Latin America. The case described the two parts of Movile’s strategy. The first was Movile’s development and launch of a smartphone app, PlayKids, which, as of the end of 2014, was the number-one children’s app in the world (by sales in the Apple app store). The second aspect of this strategy involved investing in mobile-enabled offline start-ups in Brazil.
The (B) case requires the (A) case to be read beforehand.
The learning objective of the (B) case was for students to examine how a non-U.S. start-up has successfully expanded internationally and pivoted their business model in the face of a declining market. | <urn:uuid:cd08875b-ad66-4cbe-b1c8-64e993fe3862> | CC-MAIN-2021-43 | https://www.gsb.stanford.edu/faculty-research/case-studies/movile-b-building-new-venture-opportunities | s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585120.89/warc/CC-MAIN-20211017021554-20211017051554-00562.warc.gz | en | 0.950638 | 185 | 2.640625 | 3 |
Metaball modelling rests on the polygonization of an isosurface. The isosurface is defined as the set of all points in space where the force function is precisely equal to some chosen constant threshold. The polygonization of the surface is a set of polygons which attempt to approximate the form of the surface to the best of their resolution. The surface is not guaranteed to be accurate; shapes on the surface that fall below the resolution of the polygon mesh will not be represented accurately. But overall, polygonization is an effective way of displaying the isosurface, and so care must be taken to compute the polygon mesh as quickly and accurately as possible.
In the following discussion, a point is hot if the value of the force function at that point is above the system threshold. A point is cold if the value of the force function at that point is below the threshold. An edge or face is hot if it has one or more hot vertices and one or more cold vertices.
The polygon mesh is generated by refining an Octree over the bounding cube of the model. An Octree is a cube of space. Cubes have eight corners; consider an edge between any two. If the edge is hot, then somewhere in between the two, F() must be exactly equal to the 0.5 threshold. This is because F() is a continuous function, because it is the sum of a set of continuous functions. Interpolation techniques can be used to quickly approximate where along the edge F() = 0.5. An effective function has proven to be simple linear interpolation,

P = P1 + ((0.5 - F(P1)) / (F(P2) - F(P1))) * (P2 - P1)

where P is now an interpolation between the two vertices. (Obviously, this fails if both vertices are hot or both are cold.) P is the vertex of a polygon in the polygon mesh.
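In code, this edge interpolation can be sketched as follows. Linear interpolation is assumed here as one common choice, with points represented as plain (x, y, z) tuples:

```python
def interpolate_vertex(p1, p2, f1, f2, threshold=0.5):
    """Estimate where F crosses the threshold along the edge p1-p2.

    p1, p2 are (x, y, z) corner positions; f1, f2 are the force function
    values sampled there. One corner is assumed hot (value above the
    threshold) and the other cold, so f1 != f2 and the crossing point
    lies somewhere on the edge.
    """
    t = (threshold - f1) / (f2 - f1)  # fraction of the way from p1 to p2
    return tuple(a + t * (b - a) for a, b in zip(p1, p2))
```

With f1 = 0 at one end and f2 = 1 at the other, this lands exactly halfway along the edge, matching the intuition that the surface sits where F() = 0.5.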
If only one corner of the cube is hot, then we will interpolate three points in space where F() = 0.5; these three points uniquely determine an oriented polygon. If only two neighboring corners were hot, we would find a squarish polygon separating those two vertices from the rest of the cube. Any configuration of hot and cold corners gives rise to one or two polygons. So any cube in space in which one or more corners are different temperatures from the rest of the cube can be used to produce polygons whose vertices are on the surface. Eight corners, each either hot or cold, means that there are 2^8 = 256 possible combinations of hot and cold corners, and hence 256 possible polygon cases. Both implementations currently use pre-calculated arrays of constants to find which vertices are applicable to which cases.
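The 256-way case lookup starts by packing the corner temperatures into an index. A minimal sketch (the corner-to-bit ordering here is an arbitrary assumption; any fixed convention works as long as the pre-calculated tables use the same one):

```python
def cube_case_index(corner_values, threshold=0.5):
    """Pack the eight corners' hot/cold states into an index 0..255.

    corner_values: the force function sampled at the cube's 8 corners,
    in some fixed order. Each hot corner (value above the threshold)
    sets one bit, so the result selects one row of a 256-entry table
    of polygon cases.
    """
    index = 0
    for bit, value in enumerate(corner_values):
        if value > threshold:  # this corner is hot
            index |= 1 << bit
    return index
```

Cases 0 (all cold) and 255 (all hot) produce no polygons; every other index names a cube crossed by the surface.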
A single Octree produces a single polygon. When we refine an Octree, we break it up into eight smaller Octrees, one for each octant. Several of these will not contain any part of the surface; they will not be refined any further. Those which are crossed by the surface (some corners are hot but some are cold) will produce polygons; they can be queued for refinement at a later date. Thus starting from a single seed Octree, we can produce a smooth, finely faceted surface. Parts of this surface can be removed and updated dynamically as changes occur to the underlying model.
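The refinement loop can be sketched recursively. Everything below (the field's inverse-square falloff, the seed cube, the depth limit) is an illustrative assumption, not taken from the implementations described here:

```python
def corner_offsets(size):
    """The eight corner offsets of a cube of the given size."""
    return [(dx, dy, dz)
            for dx in (0, size) for dy in (0, size) for dz in (0, size)]

def refine(field, x, y, z, size, depth, threshold=0.5):
    """Return the leaf cells (x, y, z, size) that the isosurface crosses."""
    hot = [field(x + ox, y + oy, z + oz) > threshold
           for ox, oy, oz in corner_offsets(size)]
    if all(hot) or not any(hot):
        return []                    # entirely hot or cold: skip this octant
    if depth == 0:
        return [(x, y, z, size)]     # crossed leaf cell: would emit polygons
    half = size / 2
    cells = []
    for ox, oy, oz in corner_offsets(half):  # the eight child octants
        cells += refine(field, x + ox, y + oy, z + oz,
                        half, depth - 1, threshold)
    return cells

# One metaball at the origin with an (assumed) inverse-square falloff.
ball = lambda x, y, z: 1.0 / (x * x + y * y + z * z + 1e-9)

# Seed cube chosen so its corners straddle the surface (one corner is hot).
leaves = refine(ball, 0, 0, 0, 2, depth=2)
```

Note that the seed must straddle the surface: a cube whose corners are all cold is skipped even if the surface lies wholly inside it, which is exactly the resolution caveat mentioned above.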
The following sequence demonstrates refinement in action.
These images were taken from meshes exported to Data Explorer:
An important consideration in implementing an octree refinement scheme is that sampling the force function F() may be a very expensive operation. Care should be taken to avoid repeating it more often than necessary. If possible, neighboring octrees should share vertex values between them and parent octrees should share vertex values with their children.
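One simple way to avoid resampling, as a sketch: memoize F() by corner coordinate, so any octree that shares a corner with a neighbour or parent reuses the stored value rather than re-evaluating the field.

```python
_samples = {}  # corner coordinate -> cached field value

def sample(field, point):
    """Evaluate the force function at a corner, caching by coordinate."""
    if point not in _samples:
        _samples[point] = field(*point)
    return _samples[point]
```

Exact float coordinates repeat here because child corners are derived from parent corners by halving, so the dictionary keys match; a production version might key on integer lattice coordinates instead.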
More Information on Surface-Fitting Algorithms
An Overview of Metaballs/Blobby Objects by Matthew Ward, WPI CS Department
Example Dinosaur image.
Example animation by John Isidoro created by using the Marching Pyramids algorithm.
Main Modeling Page
HyperGraph Home page. | <urn:uuid:6a4dd17d-2f02-48e7-aad7-b74c5bdda8e5> | CC-MAIN-2016-36 | http://www.siggraph.org/education/materials/HyperGraph/modeling/metaballs/metaballs.htm | s3://commoncrawl/crawl-data/CC-MAIN-2016-36/segments/1471982290752.48/warc/CC-MAIN-20160823195810-00189-ip-10-153-172-175.ec2.internal.warc.gz | en | 0.93012 | 813 | 3.78125 | 4 |
Fetal ventriculomegaly refers to the presence of dilated cerebral ventricles in utero.
Important in itself, it is also associated with other CNS anomalies.
Using the current sonographic cut-off criteria (see radiographic features below), the estimated prevalence may be ~0.9% of all pregnancies 14. There may be a slightly increased male predilection.
Development of lateral ventricles
- first trimester
- the choroid plexus regularly fills the entire lateral ventricle, bilaterally
- second trimester
- the choroid plexus begins to recede posteriorly but remains in close contact with the medial and lateral walls of the bodies and atria of the ventricles
- likewise, the lateral cerebral ventricle is large relative to the cerebral hemispheric width
See the article: fetal ventriculomegaly (differential)
While many fetuses with mild ventriculomegaly have a normal outcome, there are also a large number of congenital syndromes associated with enlarged ventricles.
Ultrasound is the screening modality of choice for initial evaluation 8.
The measurement should be in the true axial plane at the atria of the lateral ventricle and glomus of the choroid plexus. The ventricle is measured from inner margin of the medial ventricular wall to inner margin of the lateral wall.
Fetal ventriculomegaly is defined as:
- >10 mm across the atria of the posterior or anterior horn of lateral ventricles at any point in the gestation
- alternatively, a separation of more than 3 mm of the choroid plexus from the medial wall of the lateral ventricle 2 may be used
The severity of ventriculomegaly can be further classified as 7:
- mild/borderline fetal ventriculomegaly: lateral ventricular diameter between 10-12 mm
- moderate fetal ventriculomegaly: 12.1-15 mm
- severe fetal ventriculomegaly (also sometimes classified as fetal hydrocephalus): lateral ventricular diameter >15 mm 13
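As a compact restatement of these cut-offs (a sketch only; the function name and the handling of exact boundary values are assumptions, and this is no substitute for formal reporting criteria):

```python
def classify_atrial_width(width_mm):
    """Grade fetal ventriculomegaly from the atrial diameter in mm,
    using the cut-offs listed above."""
    if width_mm <= 10:
        return "within normal limits"
    if width_mm <= 12:
        return "mild/borderline ventriculomegaly"
    if width_mm <= 15:
        return "moderate ventriculomegaly"
    return "severe ventriculomegaly"
```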
When ventriculomegaly is pronounced, the choroid plexus will no longer lie in an almost parallel fashion against the lateral ventricular wall. Tethered at the foramen of Monro, the free-hanging choroid will "hang down" and appear to "dangle" within the dilated ventricle. This appearance is often termed the dangling choroid sign. The ventricle to cerebral hemisphere ratio would also increase as a result.
Fetal brain MRI
MRI may be useful for evaluation of additional anomalies.
Significance when detected on ultrasound
Even when noted without an associated structural anomaly, mild fetal ventriculomegaly is often considered a soft antenatal marker for underlying chromosomal abnormalities. Therefore, a careful search for other sonographic abnormalities is recommended.
Careful ultrasound evaluation of the posterior fossa is also critical to look for a potential cause of obstructive hydrocephalus.
In borderline to mild prenatally detected ventriculomegaly without additional abnormalities or an abnormal karyotype, the majority of children have been reported to have normal development 10-11.
Treatment and prognosis
The prognosis, as well as management, largely depend on the aetiology and on the presence of associated abnormalities.
- pseudo-hydrocephalus: if the ventricle appears enlarged, but there is no dangling choroid, the cerebrum may just be hypoechoic
- investigate closely, try different angles to find both hyperechoic lines of the lateral ventricle
- 1. Twining P, Jaspan T, Zuccollo J. The outcome of fetal ventriculomegaly. Br J Radiol. 1994;67 (793): 26-31. doi:10.1259/0007-1285-67-793-26 - Pubmed citation
- 2. Wax JR, Bookman L, Cartin A et al. Mild fetal cerebral ventriculomegaly: diagnosis, clinical associations, and outcomes. Obstet Gynecol Surv. 2003;58 (6): 407-14. doi:10.1097/01.OGX.0000070069.43569.D7 - Pubmed citation
- 3. Mercier A, Eurin D, Mercier PY et al. Isolated mild fetal cerebral ventriculomegaly: a retrospective analysis of 26 cases. Prenat. Diagn. 2001;21 (7): 589-95. doi:10.1002/pd.88 - Pubmed citation
- 4. Entezami M, Albig M, Knoll U et al. Ultrasound Diagnosis of Fetal Anomalies. Thieme. (2003) ISBN:1588902129. Read it at Google Books - Find it at Amazon
- 5. Zimmerman RA, Bilaniuk LT. Magnetic resonance evaluation of fetal ventriculomegaly-associated congenital malformations and lesions. Semin Fetal Neonatal Med. 2005;10 (5): 429-43. doi:10.1016/j.siny.2005.05.008 - Pubmed citation
- 6. Girard N, Ozanne A, Chaumoitre K et al. [MRI and in utero ventriculomegaly]. J Radiol. 2003;84 (12 Pt 1): 1933-44. J Radiol (link) - Pubmed citation
- 7. Morris JE, Rickard S, Paley MN et al. The value of in-utero magnetic resonance imaging in ultrasound diagnosed foetal isolated cerebral ventriculomegaly. Clin Radiol. 2007;62 (2): 140-4. doi:10.1016/j.crad.2006.06.016 - Pubmed citation
- 8. Mehta TS, Levine D. Imaging of fetal cerebral ventriculomegaly: a guide to management and outcome. Semin Fetal Neonatal Med. 2005;10 (5): 421-8. doi:10.1016/j.siny.2005.05.002 - Pubmed citation
- 9. D'Addario V, Pinto V, Di Cagno L et al. The midsagittal view of the fetal brain: a useful landmark in recognizing the cause of fetal cerebral ventriculomegaly. J Perinat Med. 2005;33 (5): 423-7. doi:10.1515/JPM.2005.075 - Pubmed citation
- 10. Patel MD, Filly AL, Hersh DR et al. Isolated mild fetal cerebral ventriculomegaly: clinical course and outcome. Radiology. 1994;192 (3): 759-64. Radiology (abstract) - Pubmed citation
- 11. Bromley B, Frigoletto FD, Benacerraf BR. Mild fetal lateral cerebral ventriculomegaly: clinical course and outcome. Am. J. Obstet. Gynecol. 1991;164 (3): 863-7. - Pubmed citation
- 12. Kazan-Tannus JF, Dialani V, Kataoka ML et al. MR volumetry of brain and CSF in fetuses referred for ventriculomegaly. AJR Am J Roentgenol. 2007;189 (1): 145-51. doi:10.2214/AJR.07.2073 - Free text at pubmed - Pubmed citation
- 13. Wyldes M, Watkinson M. Isolated mild fetal ventriculomegaly. Arch. Dis. Child. Fetal Neonatal Ed. 2004;89 (1): F9-13. Arch. Dis. Child. Fetal Neonatal Ed. (link) - Free text at pubmed - Pubmed citation
- 14. Salomon LJ, Bernard JP, Ville Y. Reference ranges for fetal ventricular width: a non-normal approach. Ultrasound Obstet Gynecol. 2007;30 (1): 61-6. doi:10.1002/uog.4026 - Pubmed citation
With the delta variant surging in the United States, doctors are urging everyone who is eligible to get vaccinated -- including the more than 30 million people who have already had COVID-19.
Despite these recommendations, some high-profile political figures have insisted that prior infection is enough, and there's no need to get a COVID-19 vaccine for those who have already recovered.
Understandably, some Americans, having now recovered from COVID-19, are left conflicted with the mixed messaging and are unsure what to do next.
“For those who have had COVID and are wondering whether or not to get vaccinated, I would absolutely encourage them to do so now to protect themselves and others,” said Dr. Simone Wildes, an infectious disease physician at South Shore Health and an ABC News Medical contributor.
While the benefits of vaccination after infection are well-documented, there are still many Americans who have neither been vaccinated nor infected, and they also have a choice to make.
Not only is getting a vaccine far safer than being infected with the COVID-19 virus, but studies also show that vaccine-induced immunity may be superior to post-infection immunity. In fact, a recent study published in Science Translational Medicine demonstrated that antibodies induced by the vaccine may better combat a wider range of new viral variants when compared to antibodies induced by infection.
“This is particularly important, as now we are seeing an increase in cases due to the delta variant,” Wildes said.
Experts agree that getting vaccinated after recovering from infection is safe -- and the best way to protect yourself from COVID-19.
However, there are some important instructions the CDC has released for specific groups. Patients who received monoclonal antibodies or convalescent plasma should wait for 90 days before vaccination. Children who were diagnosed with multisystem inflammatory syndrome should also wait for 90 days after the date of diagnosis.
As the delta variant becomes rampant in unvaccinated communities, and more and more Americans find themselves at a crossroads after infection, experts say it's crucial for everyone to consider vaccination -- even those who were previously infected.
Priscilla Hanudel, M.D., is an emergency medicine physician in Los Angeles and a contributor to the ABC News Medical Unit. | <urn:uuid:d2e81059-4d76-4095-a78d-b1a1bdf94649> | CC-MAIN-2021-31 | https://abcnews.go.com/Health/covid-19-vaccines-protect-infection-doctors/story?id=78849841 | s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046151672.96/warc/CC-MAIN-20210725111913-20210725141913-00292.warc.gz | en | 0.969318 | 468 | 2.78125 | 3 |
Study shows around 4 million Americans out of work due to long COVID
According to research that was just released this week by the Brookings Institution, there may be around 4 million people in the United States who are out of work as a result of long COVID.
This might amount to at least 170 billion dollars' worth of lost earnings annually, according to the findings of the report.
According to federal data, approximately 12 million Americans worked full-time or the equivalent of full-time hours prior to developing long COVID.
From there, it estimated the number of individuals who had to give up their jobs or work less as a direct result of continuing health problems brought on by their COVID infection. The research used the definition of long COVID from the Household Pulse Survey conducted by the Census Bureau. This definition states that long COVID is described as symptoms that last for three months or more and were not present prior to COVID.
A number of studies have been conducted in an attempt to determine the effect that long COVID has on employment. According to an estimate that was published in a working paper by the Federal Reserve Bank of Minneapolis last month, by the middle of 2021, 26% of workers who had long COVID were either unemployed or working fewer hours. According to the findings of a worldwide study, 22% of people with long COVID were no longer working due to their illness, and 45% worked reduced hours as of 2020. A survey conducted in the United Kingdom indicated that between April and May 2021, 16% of people with long COVID had reduced hours, and 20% were on paid sick leave.
The Lydian mode is the 4th mode of the major scale. It is a frequently used mode in modern music across a number of styles. The only difference between a major scale and the lydian scale is that it contains a sharp 4. It therefore sounds quite similar to a major scale, but with perhaps a ‘brighter’ sound.
In this post we are going to explore exactly what the lydian mode is and how to construct it.
If you have read the post on guitar modes explained, or if you have read any of the other posts on the individual modes, you should have a good understanding of how modes are constructed and the theory behind them. Of course, I will go into as much detail as possible in this post just to drive home the theory.
If you have not read any of the above posts, I highly recommend reading guitar modes explained. If not, a solid understanding of major scales is the main requirement for understanding modes. You need to understand what a major scale is, what it sounds like and how to play it in any key. If you do not know this, read the post on major scales before reading on.
Parallel vs Derivative Lydian Mode
The key to understanding any mode is to understand the parallel approach and the derivative approach. At the moment, we know two things about the lydian mode:
- It is the 4th mode of the major scale.
- It contains a raised 4 (or sharp 4, depending on what terminology you wish to use).
Let’s explore what this means and how it applies to the two approaches. Let’s first look at the derivative approach. As I mentioned, the lydian mode is the 4th mode of a major scale. To put it simply, that means that if you play a major scale and start on the 4th note, you are playing the lydian mode. Here are a few examples:
The C Major scale contains the following notes:
C – D – E – F – G – A – B
The 4th note of C Major is F. Therefore, if we play the C Major scale starting on F, we get the following:
F – G – A – B – C – D – E
What we have there is an F Lydian mode. Simple! Let’s do another example.
The A flat major scale has the following notes:
Ab – Bb – C – Db – Eb – F – G
Db is the 4th note of the A flat major scale. Therefore, if we play the A flat major scale and start on D flat, we get the following:
Db – Eb – F – G – Ab – Bb – C
We have just constructed the D flat lydian mode. Both of these examples have used the derivative approach. This is because we ‘played’ the lydian mode by deriving it from a major scale. The lydian mode is the 4th mode of a major scale, therefore we can derive the lydian mode by playing a major scale and starting on the 4th note.
Let’s just look at one more thing before moving on to the parallel approach. Suppose we want to play a G lydian scale. If we were to use the derivative approach, we would need to know what major scale produces the note ‘G’ as the 4th note. If you are familiar with major scales, you would know that G is the 4th note of the D major scale (D – E – F# – G – A – B – C#). This means that to play a G lydian scale, all we have to do is play the D major scale and start on the 4th note:
G – A – B – C# – D – E – F#
Let’s now look at the parallel approach. The lydian mode contains a ‘raised 4th’. This is the information we need to play the lydian scale using the parallel approach. Put simply, it means that to play the lydian mode, you need to play a major scale and then raise the 4th note by a semitone. Again, let’s test this out with a few examples.
The C major scale has the following notes:
C – D – E – F – G – A – B
F is the 4th note of the C major scale. If we raise the 4th note (F) by a semitone, we get F#. Therefore, C lydian is simply a C major scale with an F# instead of an F:
C – D – E – F# – G – A – B
It’s quite simple. The lydian mode is actually quite a simple mode for a number of reasons. Firstly, the fact that the only difference between it and the major scale is the 4th note means that it is very easy to construct using the parallel approach. The other simplicity is that because it is similar to the major scale, it does not sound very unusual to the ear and can therefore be used a lot more easily than other modes.
Let’s do another example using the parallel approach. Suppose we want to play Bb lydian. Bb major contains the following notes:
Bb – C – D – Eb – F – G – A
The 4th note is Eb. If we raise Eb by a semitone we get E. Therefore, Bb lydian looks like this:
Bb – C – D – E – F – G – A
It is important to remember that both approaches produce the same results. To prove this, let’s look at one more example using both approaches. Suppose we want to play E lydian. The derivative approach tells us that we need to know which major scale contains E as the 4th note. It is in fact B major:
B – C# – D# – E – F# – G# – A#
If we play the B major scale and start on the 4th note (E), we get the following:
E – F# – G# – A# – B – C# – D#
We have just produced the E lydian mode. Now let’s get the same result by using the parallel approach. E major has the following notes:
E – F# – G# – A – B – C# – D#
The parallel approach requires us to raise the 4th note (A) by a semitone. If we raise A to A# we get the following:
E – F# – G# – A# – B – C# – D#
As you can see, both approaches have produced the same result.
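The two approaches can even be checked against each other with a little semitone arithmetic. A sketch (the sharps-only note spelling is an assumption, so flat keys come out spelled enharmonically, e.g. Bb as A#):

```python
NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
MAJOR_STEPS = [2, 2, 1, 2, 2, 2, 1]  # whole- and half-step pattern of a major scale

def major_scale(root):
    """The seven notes of the major scale starting on `root`."""
    i = NOTES.index(root)
    scale = []
    for step in MAJOR_STEPS:
        scale.append(NOTES[i % 12])
        i += step
    return scale

def lydian_derivative(root):
    """Derivative approach: find the major scale whose 4th note is `root`,
    then play that scale starting from `root`."""
    parent = NOTES[(NOTES.index(root) - 5) % 12]  # the 4th sits 5 semitones above its parent root
    scale = major_scale(parent)
    k = scale.index(root)
    return scale[k:] + scale[:k]

def lydian_parallel(root):
    """Parallel approach: major scale on `root` with the 4th raised a semitone."""
    scale = major_scale(root)
    scale[3] = NOTES[(NOTES.index(scale[3]) + 1) % 12]
    return scale

print(lydian_parallel("C"))    # ['C', 'D', 'E', 'F#', 'G', 'A', 'B']
print(lydian_derivative("C"))  # the same seven notes
```

Both functions agree for every root, mirroring the worked examples above.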
We won’t go into too much detail in this post about using the lydian mode for improvising and composition etc. The main thing is that you understand what the lydian mode is, how to construct it and how to play the mode in every key.
Individual Lydian Mode Keys
Here is a list of Lydian modes in every key:
- A Flat Lydian
- A Lydian
- A Sharp Lydian
- B Flat Lydian
- B Lydian
- B Sharp Lydian (impractical)
- C Flat Lydian
- C Lydian
- C Sharp Lydian
- D Flat Lydian
- D Lydian
- D Sharp Lydian
- E Flat Lydian
- E Lydian
- E Sharp Lydian
- F Flat Lydian
- F Lydian
- F Sharp Lydian
- G Flat Lydian
- G Lydian
- G Sharp Lydian | <urn:uuid:6c52613a-d4f1-48bd-8da2-5c266a09327b> | CC-MAIN-2021-49 | https://onlineguitarbooks.com/lydian-mode/ | s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363157.32/warc/CC-MAIN-20211205100135-20211205130135-00605.warc.gz | en | 0.943969 | 1,647 | 3.765625 | 4 |
If a tree falls in a forest and no-one hears it, does it make any sound?
This great mystery of the unheard falling tree in the deserted forest has puzzled philosophers for thousands of years. For most of that time it was regarded as a classic unsolvable conundrum, as statements of belief in either direction were irrefutable.
Note: "Irrefutable" doesn't mean "correct", it just means that it can't be disproved. For example, "God lives in my teapot" is an irrefutable statement provided you don't allow non-believers to desecrate the sanctity of the internal space in the teapot.
Meanwhile, on the allegedly silent falling trees in unaudienced forests, the puzzle went on being puzzling for an immensely long time but things started to change with the invention of the phonograph, and then the tape recorder, and then the digital audio ambiance capture module with dynamic paraphragm microphone for capturing audio holograms.
Well let's put this to the test. Suppose there's an old tree in a deserted forest, and we know it's going to fall next week, so we go along there and set up some sort of tape recording machine and see what happens. We leave the scene with the sound recorder running and come back in a couple of weeks. For sure, the tree is now fallen, and the recording machine is still running, and we know that no-one has visited and heard the tree fall because we put tamper-evident dust on all the tracks anywhere near the tree.
Taking the tape (or other sound recording media) back to the lab, what happens now?
One option is to PLAY it. That would have two possible outcomes: either there'd be a sound of a falling tree on it, or mysterious silence. This would be very revealing, and would settle the whole thing once and for all, but perhaps there's more to it.
The other option gives rise to some even more interesting concepts. Suppose we DO NOT play the recording, but instead delete it. Now, that would raise some very curious points. Do we delete it or not? We have freedom of choice to delete the tape, or to play it, so this reveals two intriguing possibilities:
1. If all trees falling in forests make a noise regardless of whether anyone hears them, then the deleting option would do nothing more than to delete a perfectly ordinary sound of a falling tree.
2. (the more interesting option). If trees falling unheard in deserted forests make no noise unless they are later heard on a tape recorder, then our choice of whether to delete the tape would retrospectively cause the tree to have made a noise or not! This would be time travel as it would have an effect whose cause was in the future.
I think this is all very interesting as it offers new insight into a very old problem, and could potentially be a source of a research project.
However, I take a scientific view which is that all trees falling in forests (regardless of audience turnout for the event), all make a noise. If they didn't, there would be something wrong with the scientific way of thinking, and besides, I think that the philosophical notion to suggest that unsupervised falling trees are silent is a devil's advocate type of philosophical comment designed to cause debate rather than progress.
Then again, if I can listen to a few silent tapes witnessing falling trees, I may be convinced there is something more to be investigated, and it may be worth putting some CCTV cameras up to see if the falling trees are invisible too.
Plus, if my scientific belief that falling trees always make sound turns out to be proven false, there may be industrial applications. For example, when using a chainsaw to cut down trees (in renewable forestry, of course!), much energy could be saved if automatic unsupervised chainsaws could be set up to cut the trees down (before planting some new trees), as there would be no energy wasted by the chainsaw having to make that awful sound and scare the wildlife.
Also see ecology , concepts , furniture made of wood , science , belief , and audio-related stuff.
Also, What Sound does a Falling Bomb make? | <urn:uuid:2ec5c112-92e4-49c7-b533-feafcb735eae> | CC-MAIN-2022-21 | http://www.zyra.global/www.zyra.tv/tfshush.htm | s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652663021405.92/warc/CC-MAIN-20220528220030-20220529010030-00478.warc.gz | en | 0.960836 | 884 | 2.59375 | 3 |
Battered Woman Syndrome:
The repeated episodes of physical assault on a woman by the person with whom she lives or with whom she has a relationship, often resulting in serious physical and psychological damage to the woman.
Such violence tends to follow a predictable pattern. The violent episodes usually follow verbal argument and accusation and are accompanied by verbal abuse. Almost any subject -housekeeping, money, child rearing- may begin the episode. Over time, the violent episodes escalate in frequency and severity.
Most battered women report that they thought that the assaults would stop; unfortunately, studies show that the longer the women stay in the relationship the more likely they are to be seriously injured. Less and less provocation seems to be enough to trigger an attack once the syndrome has begun. The use of alcohol may increase the severity of the assault. The man is more likely to be abusive as the alcohol wears off.
Battering occurs in cycles of violence. In the first phase the man acts increasingly irritable, edgy, and tense. Verbal abuse, insults, and criticism increase, and shoves or slaps begin. The second phase is the time of the acute, violent activity. As the tension mounts, the woman becomes unable to placate the man, and she may argue or defend herself. The man uses this as the justification for his anger and assaults her, often saying that he is "teaching her a lesson." The third stage is characterized by apology and remorse on the part of the man, with promises of change. The calm continues until tension builds again.
Caring for and counseling a battered woman often require great patience because she is usually ambivalent about her situation and may be confused to the point of believing that she deserves the assaults she has suffered.
How Do You Heal From Battered Wife Syndrome?
Keep in mind that all of these ideas might not apply to you or your situation–you decide what fits best for you.
First priority is your physical safety and the physical safety of your children, if there are children involved. Child Protective Services and Family Services agencies in your area will be able to give you contact information for shelters where you can go and be safe from the abuser in your life. If you don’t value yourself enough to seek protection, then at least do it for your children.
Next you need to think about breaking the cycle of abuse. The components of the cycle are unmet needs, anxiety, seeking love, finding relief, pleasing and appeasing, control and abuse, anger and fear, reconciliation and "back to normal."
You break the cycle by taking responsibility for your safety (and your children’s safety if they’re part of it), rather than worrying about whether “he will get better” or focusing on the fact that you love him.
You break the cycle by respecting yourself enough to only maintain relationships in which you are treated with care and respect. You begin to recognize that you are a good person and you are worthy of respect in your relationships.
One of the best ways out of the battered wife syndrome is with healthy anger. Anger is a protective emotion, and you need to have some healthy anger if you and/or your children are being abused. You are your own best anger management resource.
If you don’t take care of yourself, no one else can! In other words, you have to take the first steps, to reach out for help, then there will be others to help you.
If you just stay in the cycle, the abuse will only get worse, and could even become fatal.
Fact Source: Anger Management Source
Fact Source: Medical Dictionary
To read from the beginning… #MyStory starts here. | <urn:uuid:0c5c5d07-ee0f-4c1b-a17a-6e387aabe649> | CC-MAIN-2017-47 | https://bwseekingbl.wordpress.com/2014/10/24/31-facts-in-31-days-day-24/ | s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934806736.55/warc/CC-MAIN-20171123050243-20171123070243-00281.warc.gz | en | 0.953364 | 771 | 2.578125 | 3 |
-- Robert Preidt
WEDNESDAY, Aug. 7 (HealthDay News) -- Although it's been said that elephants never forget, it now appears that dolphins may have them beat in the memory department.

Even after being separated for more than 20 years, dolphins can recognize former tank mates' whistles, new research shows.

The study demonstrates the longest social memory ever recorded for a nonhuman species. Dolphins' long-term memory for other dolphins' whistles may be even more long-lasting than humans' ability to remember other people's faces, the report suggested.

Previous research has shown that each dolphin has its own unique "signature" whistle that appears to function as a name.

The new study, published in the current issue of the Proceedings of the Royal Society of London B, included 53 bottlenose dolphins at six facilities that are part of a breeding consortium that has rotated dolphins between sites for decades, and kept records on which ones lived together.

When hearing recordings of individual signature whistles, the dolphins had much stronger responses to the whistles of dolphins they once knew -- even if it was decades ago -- than to whistles of unfamiliar dolphins, according to the findings.
"This shows us an animal operating cognitively at a level that's very consistent with human social memory," study author Jason Bruck, who conducted the study while working on his Ph.D. at the University of Chicago's program in Comparative Human Development, said in a university news release.
However, it's not clear what signature whistles signify in a dolphin's mind.
"We know they use these signatures like names, but we don't know if the name stands for something in their minds the way a person's name does for us," Bruck said. "We don't know yet if the name makes a dolphin picture another dolphin in its head."
He'll attempt to get answers to that question in his next round of research.

For this round, he used data from dolphins at six facilities, including Brookfield Zoo near Chicago and Dolphin Quest.
"This is the kind of study you can only do with captive groups when you know how long the animals have been apart," Bruck said. "To do a similar study in the wild would be almost impossible."
Earthtrust has more about dolphins.
Please be aware that this information is provided to supplement the care provided by your physician. It is neither intended nor implied to be a substitute for professional medical advice. CALL YOUR HEALTHCARE PROVIDER IMMEDIATELY IF YOU THINK YOU MAY HAVE A MEDICAL EMERGENCY. Always seek the advice of your physician or other qualified health provider prior to starting any new treatment or with any questions you may have regarding a medical condition.
Copyright © EBSCO Publishing. All rights reserved. | <urn:uuid:3b537193-d6ef-42c2-95ca-f29c9cd80871> | CC-MAIN-2016-18 | http://www.wkhs.com/Cancer/Education-Resources/News.aspx?chunkiid=868671 | s3://commoncrawl/crawl-data/CC-MAIN-2016-18/segments/1461860114649.41/warc/CC-MAIN-20160428161514-00209-ip-10-239-7-51.ec2.internal.warc.gz | en | 0.954264 | 590 | 2.796875 | 3 |
Erich Maria Remarque (1898-1970) was born to a working-class family in Osnabrück, Germany. In November 1916 he was drafted, and after military training at Osnabrück and Celle he was sent to the Western Front in June 1917, where he served in an entrenchment unit at the Somme and in Flanders. On 31 July 1917 Remarque was severely wounded near Houthulst, Flanders, and sent to a military hospital in Duisburg, where he stayed until the armistice. After the war Remarque worked as an elementary school teacher and then after 1921 as a journalist in Hannover and Berlin.
His anti-war novel Im Westen nichts Neues (All Quiet on the Western Front), first published in 1928, became an international success and provoked a controversial discussion on the representation and interpretation of World War I in Germany. The subsequent banning of the movie All Quiet on the Western Front (USA 1930), directed by Lewis Milestone (1895-1980) in December 1930 as a consequence of National Socialist protests forced Remarque to go into exile in Switzerland in 1931.
In May 1933 Remarque’s books were burnt by the National Socialists, and in 1938 he lost his German citizenship. Remarque remained in exile, living in Switzerland, France, and the U.S., where he became a U.S. citizen in 1947. He returned to Europe in 1948, and lived in Switzerland, New York and Italy until his death.
In the novels, short stories, plays, and film scripts he wrote after Im Westen nichts Neues, Remarque focused on the fate of ordinary people in times of war, persecution and inhumanity. His immense international success and reputation was shaped by his strong commitment to pacifism and humanism.
Other selected writings↑
Der Weg zurück / The Road Back, novel 1930, Drei Kameraden /Three Comrades, novel 1936, Arc de Triomphe /Arch of Triumph, novel 1945, Der Funke Leben / Spark of Life, novel 1952, Zeit zu leben und Zeit zu sterben / A Time to Live and a Time to Die, novel 1954, Der letzte Akt /The Last Act, film script 1955, Die Nacht von Lissabon / The Night in Lisbon, novel 1961.
Thomas F. Schneider, Erich Maria Remarque Peace Center/Osnabrück University, Germany
Section Editor: Christoph Nübel
- Gilbert, Julie Goldsmith: Opposite attraction. The lives of Erich Maria Remarque and Paulette Goddard, New York 1995: Pantheon Books.
- Glunz, Claudia / Schneider, Thomas F. (eds.): Remarque-Forschung 1930-2010. Ein bibliographischer Bericht, Göttingen 2010: Vandenhoeck & Ruprecht.
- Murdoch, Brian: The novels of Erich Maria Remarque. Sparks of life, Rochester 2006: Camden House.
- Sternburg, Wilhelm von: 'Als wäre alles das letzte Mal'. Erich Maria Remarque. Eine Biographie, Cologne 1998: Kiepenheuer und Witsch. | <urn:uuid:088d29d6-c473-4aa2-9088-7629829ed994> | CC-MAIN-2023-50 | https://encyclopedia.1914-1918-online.net/article/remarque_erich_maria | s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100912.91/warc/CC-MAIN-20231209134916-20231209164916-00755.warc.gz | en | 0.891042 | 696 | 2.546875 | 3 |
The Education Ministry has decided to bridge the divide between Jews and Arabs with a new plan, entitled 'Education for coexistence'.
According to the new initiative, teachers of grades 1-12 will study about the culture, language and heritage of the other race. In addition, the ministry will encourage Arab teachers to teach at Jewish schools, and vice-versa.
The plan, which was initiated due to concern over racism in schools, will also include outings and joint projects such as movies, plays, sporting events, and school trips.
Ynet learned Monday that an Education Ministry committee has focused over the past two years on helping Jewish and Arab teens to coexist. The committee heard from professionals in the fields of conflict resolution and education in order to develop a plan.
The committee handed its recommendations to Education Minister Gideon Sa'ar a week ago, and he is expected to approve the plan – which will be employed in basic subjects such as Civics and Social Studies – shortly.
The committee also recommended teaching tolerance in other subjects, such as Literature, History, Geography, and Art, as well as arranging meetings between faculty members of both races.
Recent polls have found that racism abounds among Jewish and Arab high school students in Israel, and the ministry fears a further exacerbation of fear, lack of trust, and hostility.
"This is a very brave step by the Education Ministry, and its success will depend on the good will of many different people," said Dr. Bat Chen Weinheber of the Beit Berl Academic College.
"We are at a point in which we don't have the option of giving up, it's a matter of survival for the State of Israel." | <urn:uuid:034f1a57-d7a5-48a5-9b95-7cebd7bbfa23> | CC-MAIN-2015-27 | http://www.ynetnews.com/articles/0,7340,L-4084934,00.html | s3://commoncrawl/crawl-data/CC-MAIN-2015-27/segments/1435375098990.43/warc/CC-MAIN-20150627031818-00017-ip-10-179-60-89.ec2.internal.warc.gz | en | 0.973249 | 346 | 2.734375 | 3 |
Raspberry Pi is a popular single-board computer designed for DIY enthusiasts, educators, and hobbyists. With its compact size, low cost, and versatility, it has become a go-to option for many projects. However, it is essential to know the power consumption of the Raspberry Pi to ensure it operates efficiently and safely.
The power consumption of the Raspberry Pi varies depending on the model, usage, and connected peripherals. In general, the newer Raspberry Pi models consume less power than the older ones. For instance, the Raspberry Pi 4 Model B has a power consumption of around 3.5W when idle and up to 7.5W when running a CPU-intensive task.
To help you understand the power consumption of various Raspberry Pi models, we have compiled a table below. The table contains the power consumption of the most popular Raspberry Pi models when idle and operating at maximum load. Note that the values are approximate and can vary depending on the specific use case and connected peripherals.
| Model | Idle Power Consumption | Maximum Power Consumption |
| --- | --- | --- |
| Raspberry Pi 4 Model B | 3.5W | 7.5W |
| Raspberry Pi 3 Model B+ | 2.5W | 4.5W |
| Raspberry Pi 3 Model A+ | 1.5W | 2.5W |
| Raspberry Pi Zero W | 0.7W | 1.3W |
To summarize, understanding the power consumption of the Raspberry Pi is crucial for efficient and safe operation. By referring to the table above, you can get an idea of the power requirements of different Raspberry Pi models and plan your project accordingly.
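To make the table above easier to reason about, it can be restated as data and converted into annual energy use. This is only a sketch: the watt values below simply mirror the approximate figures in the table, and real draw varies with workload and peripherals.

```python
# Approximate power draw in watts from the table above: (idle, maximum).
POWER_W = {
    "Raspberry Pi 4 Model B": (3.5, 7.5),
    "Raspberry Pi 3 Model B+": (2.5, 4.5),
    "Raspberry Pi 3 Model A+": (1.5, 2.5),
    "Raspberry Pi Zero W": (0.7, 1.3),
}

def annual_kwh(watts: float) -> float:
    """Energy used over one year (8,760 hours) at a constant draw."""
    return watts * 24 * 365 / 1000

for model, (idle_w, max_w) in POWER_W.items():
    # e.g. "Raspberry Pi 4 Model B: 30.7-65.7 kWh/year"
    print(f"{model}: {annual_kwh(idle_w):.1f}-{annual_kwh(max_w):.1f} kWh/year")
```

Even at its maximum, a Pi 4 uses well under 100 kWh a year, which is why the board is popular for always-on projects.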
How much power does a Raspberry Pi 4 consume?
The Raspberry Pi 4 is a powerful and versatile single-board computer that has become increasingly popular among hobbyists, educators, and professionals alike. When it comes to raspberry pi power consumption, the Pi 4 requires more power than its predecessors due to its higher processing power and increased number of ports and features. According to official documentation, the Raspberry Pi 4 Model B requires a 5V DC input voltage and a minimum of 3A current to operate reliably.
The actual power consumption of the Raspberry Pi 4 will vary depending on various factors, such as the workload, peripherals connected, and software running. However, in general, the Pi 4 consumes anywhere from 3W to 6W under typical usage scenarios. It’s worth noting that some power supplies may not be able to provide the required amount of current, leading to stability issues and potential damage to the board. Therefore, it’s essential to use a high-quality power supply that meets the Pi’s power requirements and provides a stable and reliable power source.
Overall, understanding raspberry pi power consumption is crucial for any Pi user, as it can affect the performance, stability, and lifespan of the board. By using a suitable power supply and monitoring the power usage, users can ensure optimal performance and avoid potential issues. For more information on power consumption, check out the official Raspberry Pi documentation and community forums.
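To put the supply specification above in perspective: 5 V at 3 A is a 15 W budget, roughly double the Pi 4's worst-case board draw, and the remainder is broadly what is available for USB peripherals. A minimal sketch (the 7.5 W peak figure comes from the table above):

```python
def supply_budget_w(volts: float, amps: float) -> float:
    """Maximum power a supply can deliver at its rated voltage and current."""
    return volts * amps

budget = supply_budget_w(5.0, 3.0)  # official Pi 4 supply rating: 15.0 W
peak_board_draw = 7.5               # worst-case Pi 4 draw from the table above
print(f"Headroom for peripherals: {budget - peak_board_draw:.1f} W")  # 7.5 W
```

This is why an underrated supply can appear to work at idle yet brown out under load: the budget shrinks below what the board plus its peripherals momentarily demand.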
How much electricity does it cost to run a Raspberry Pi 24 7?
Raspberry Pi power consumption is a key factor to consider when using the device, especially if it will be running 24/7. The power consumption of a Raspberry Pi depends on the model, as well as the use of peripherals such as monitors and USB devices. For example, the Raspberry Pi 3 Model B+ has an average power consumption of around 3.7 watts, while the Raspberry Pi Zero W uses only 0.5 watts.
To calculate the cost of running a Raspberry Pi 24/7, you need to know the electricity rate in your area. Assuming an average rate of $0.12 per kilowatt-hour, running a Raspberry Pi 24/7 would cost around $3.89 per year for the Raspberry Pi 3 Model B+ (at its average 3.7-watt draw), and only about $0.53 for the Raspberry Pi Zero W. However, if you are using peripherals that consume more power, such as a monitor, the cost could be significantly higher.
Overall, the power consumption of a Raspberry Pi is relatively low, making it an energy-efficient option for projects that require constant use. By taking into account the electricity rate and the power consumption of your specific Raspberry Pi model and peripherals, you can calculate the cost of running the device and make an informed decision.
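The arithmetic behind annual running costs is a one-line formula: annual cost = watts / 1000 * 8,760 hours * rate per kWh. A minimal sketch, assuming the flat $0.12/kWh rate used above (local rates vary):

```python
def annual_cost(watts: float, rate_per_kwh: float = 0.12) -> float:
    """Cost of running a constant load 24/7 for one year."""
    hours_per_year = 24 * 365  # 8,760
    return watts / 1000 * hours_per_year * rate_per_kwh

# Wattages quoted above: ~3.7 W for a Pi 3 Model B+, ~0.5 W for a Zero W.
print(f"Pi 3 Model B+: ${annual_cost(3.7):.2f}/year")  # about $3.89
print(f"Pi Zero W:     ${annual_cost(0.5):.2f}/year")  # about $0.53
```

Remember to add the draw of any attached peripherals (a monitor alone can dwarf the board itself) before plugging numbers into the formula.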
How much power does a Raspberry Pi sleep consume?
The Raspberry Pi is a popular single-board computer that is widely used for various applications. One of the key factors that determine its efficiency is its power consumption. The Raspberry Pi consumes very little power in sleep mode, making it an ideal choice for low-power applications. According to official documentation, the Raspberry Pi 4 Model B has a maximum power consumption of 7.5W, with a typical power consumption of around 3.5W under normal usage conditions. However, when in sleep mode, it consumes only 1.6W, which is significantly lower than its typical power consumption.
The low power consumption of the Raspberry Pi is due to its efficient power management system. It has a built-in power management chip that can switch off power to various components when they are not in use, thereby reducing power consumption. Additionally, the Raspberry Pi has various power-saving options that can be configured to further reduce power consumption. For example, the “suspend to RAM” option in the Raspberry Pi OS can reduce power consumption to as low as 0.5W, making it an ideal choice for battery-powered applications.
In conclusion, the Raspberry Pi consumes very little power in sleep mode, making it an efficient choice for low-power applications. Its power management system and power-saving options ensure that power consumption is minimized while maintaining optimal performance, and its low power draw makes it well suited to battery-powered projects and other applications where power efficiency is a critical factor.
How many watts does a Raspberry Pi 3 use?
When it comes to power consumption, the Raspberry Pi 3 is known for being a low-power device. According to official specifications, the Raspberry Pi 3 Model B+ uses between 2.5 and 3.0 watts of power under normal use. This is significantly less than other popular devices, such as desktop computers, which can use upwards of 100 watts.
The low power consumption of the Raspberry Pi 3 makes it an ideal device for a wide range of applications, including IoT projects and portable computing. However, it’s important to note that power consumption can vary based on factors such as the number of peripherals connected, the use of overclocking, and the types of software running.
To ensure optimal power efficiency, it’s recommended to use a high-quality power supply that meets the Raspberry Pi’s voltage and amperage requirements. Additionally, monitoring power usage through tools such as a power meter or software can help identify any potential issues or areas for improvement.
In conclusion, when it comes to raspberry pi power consumption, it is important to consider the different factors that can influence it. From the type of model being used, to the peripherals connected and the software being run, there are many variables that can affect power usage. However, with careful planning and optimization, it is possible to reduce power consumption and extend the lifespan of your raspberry pi.
For those interested in learning more, there are many resources available online. The official raspberry pi website offers a wealth of information on power management and optimization techniques, including tips on reducing power usage in different scenarios. Additionally, forums and online communities such as the Raspberry Pi subreddit can be a valuable source of advice and support for those looking to optimize their raspberry pi power consumption. By taking the time to research and understand the factors that affect power usage, users can ensure that their raspberry pi operates efficiently and effectively for years to come.
How do you write a sci paper?
Steps to organizing your manuscript
- Prepare the figures and tables.
- Write the Methods.
- Write up the Results.
- Write the Discussion. Finalize the Results and Discussion before writing the introduction.
- Write a clear Conclusion.
- Write a compelling introduction.
- Write the Abstract.
- Compose a concise and descriptive Title.
What format do you write science papers in?
- Most journal-style scientific papers are subdivided into the following sections: Title, Authors and Affiliation, Abstract, Introduction, Methods, Results, Discussion, Acknowledgments, and Literature Cited, which parallel the experimental process.
How do you write a tear-jerking story?
6 Tips for Writing a Sad Story
- Tap into your own emotionality.
- Know the difference between sentimentality and truth.
- Leave room to be surprised by specific detail.
- Pair strong emotions with ordinary ones.
- Use backstories to add weight.
- Use sad moments to further character development.
How do you start a sad love story?
Use an embedded narrative, like Noah and Allie’s flashback love story in The Notebook. Start with the end of a relationship, like in (500) Days of Summer or Marriage Story. To ensure your script stays in the tragic romance realm, work towards a tragic twist that explains why the romance ended.
How do you express anger?
One 2010 study found that being able to express your anger in a healthy way can even make you less likely to develop heart disease.
- Take deep breaths.
- Recite a comforting mantra.
- Try visualization.
- Mindfully move your body.
- Check your perspective.
- Express your frustration.
- Defuse anger with humor.
- Change your surroundings. | <urn:uuid:b869b20d-7387-4742-8460-f15832743e37> | CC-MAIN-2022-05 | https://www.mvorganizing.org/how-do-you-write-a-sci-paper/ | s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320304954.18/warc/CC-MAIN-20220126131707-20220126161707-00139.warc.gz | en | 0.78169 | 431 | 3.390625 | 3 |
America’s “inner-ring” suburbs – the group of small, independent municipalities that surround the largest US cities – are undergoing a remarkable transformation. In the 25 years or so that followed the second world war, these neighbourhoods were the classic aspirational destination. People moved to the suburbs to purchase their slice of the good life – a spacious home, with a quiet yard, near a good school. The suburbs represented the American ideals of homeownership, education, low crime and complete autonomy. They represented, in other words, insulation from the perceived ills of urban living. Now it is that very insulation, which made them attractive in their early years, that may be sealing their doom.
The first inner-ring suburbs developed between about 1900 and 1930 – towns like Brookline and Somerville outside of Boston, and University City adjacent to St Louis. They were often called “streetcar suburbs”, after their principal mode of access to the downtown core. Their development stalled during the Great Depression and the war, but soon restarted: veterans received federally backed low-cost mortgages, and the interstate highway network opened acres of land to new housing.
There was another incentive for some of these new suburbanites: a desire to escape the complex social tensions at work in large cities in the postwar era. When faced with growing minority populations, particularly of African-Americans, white city dwellers often chose to pull up stakes. It wasn’t always the primary reason for moving, but it was often a part of the migration equation.
This pattern of “white flight” to the suburbs was characteristic of American metro areas until the 1970s and 1980s, when newer suburbs – bigger, more spacious, more contemporary – began stealing residents away from the older inner-ring suburbs. And by the 1990s, more minorities were beginning to follow the same aspirational path as the former white city dwellers before them. Just as previous generations did, minorities sought larger homes, quieter environments and better schools. And white residents who craved insulation from the perils of urban living now saw it coming to their front lawns – again.
The recent events in Ferguson, Missouri have brought this tension into sharp focus. Ferguson, an inner-ring suburb about 10 miles northwest of St Louis, was a city in transition long before officer Darren Wilson shot the unarmed black teenager Michael Brown. In 1990, three-quarters of Ferguson’s 22,000 residents were white; just 20 years later, by 2010, nearly three-quarters of them were black. These two groups of Fergusonians share little in common. In 2012, the median age of white residents in Ferguson was nearly 49; for black residents, it was only 29. The median household income of whites was nearly $52,000; for blacks, less than $30,000. The story of Ferguson is truly a tale of two suburbs.
But Ferguson is hardly unique. It was simply the place where a flashpoint exposed the tragedy of American inner-ring suburbs, conspired against by large-scale migration and development trends. In recent years, young, college-educated adults have begun to move into cities in great numbers, attracted by jobs and urban amenities; meanwhile, the suburban sprawl machine that created the inner-ring suburbs in the first place continues to expand, making newer, more desirable places even further from downtown.
To understand the implications of white flight and “resegregation”, look no further than the north side of St Louis. It was the primary destination for early black migrants, but quickly became an impoverished, isolated enclave. In recent years St Louis has been successful in broadening its citywide appeal as an immigrant destination, but few of those immigrants are interested in moving to black-majority areas. In the words of black St Louis alderwoman Sharon Tyus: “No one wants to live next to black people.”
Studies document this sentiment. Indiana University doctoral student Samuel Kye examined census data from 1990-2010, and found that, as affluent minority populations in the suburbs grow, “white flight” continues. White residents in these transitioning suburbs are “especially sensitive” to racial and ethnic change, he argues: “Ethnoburbs [Kye’s term for suburbs with large numbers of racial or ethnic minorities] have lost a steady flow of white residents over the past 20 years.” The end result? African-American suburban migration has only led to greater segregation, creating ethnic pockets: whites in one, blacks in the other.
This has been an active decision. As black people move into their suburban idylls, longtime white residents flee to other suburbs, or retreat to the highest value enclaves in town. They take other measures, too.
They limit the expansion of rental housing to restrict affordable housing options. They develop a strong law and order environment. And they do their best to insulate themselves, physically and socially, from minority transition. It works, after a fashion – until something like Ferguson shows the cracks.
It wasn’t always this way. In the 1960s, two suburbs took a decidedly different approach to racial transition: Oak Park, Illinois and Shaker Heights, Ohio (inner-ring suburbs of Chicago and Cleveland). When faced with the possibility of a destabilising resegregation in the 60s, both communities elected to take a proactive, race-conscious tack. They established community relations commissions, to develop and foster ongoing conversations around race. They worked hard to dispel rumours related to racial transition. They actively sought out white residents who would welcome black neighbours. They encouraged the dispersion of black residents to prevent clustering. They passed local open-housing ordinances. They even established equity assurance programmes to insure residents against declining property values. Both communities weathered the racial transition of the 60s and 70s well. They’re proud of their accomplishments. They should be.
Because today, there are no inner-ring suburbs who follow a similar path. A number of metro areas have undergone significant black suburbanisation over the last two decades, but with very little of the same sensitivity. Why are there no more Oak Parks or Shaker Heights? Perhaps leaders believe that the “problem” of resegregation was “solved” during the civil rights era, and addressing it now is moot. Yet the spread of suburban enclaves has mirrored the patterns of segregation you can see in the large cities – south Cook County outside of Chicago; Prince George’s County near Washington, DC; parts of Delaware County adjacent to Philadelphia; eastern Cuyahoga County near Cleveland; parts of DeKalb County outside of Atlanta; and parts of North St Louis County, outside of St Louis – including good old Ferguson.
Meanwhile, everyone’s talking about downtown cores: a back-to-the-city movement led by well-educated young adults seeking the vigour and dynamism of urban living. Rapid gentrification – a predominantly white phenomenon – is associated with bold new ideas about city life. “Big data” can create a technology revolution, it is argued. Apps can make cities run with greater efficiency. A more pedestrian-oriented environment is the way to make your neighbourhood attractive. Many of the best of these ideas have been filtering through to the newest suburbs, too.
In the middle sit America’s inner-ring suburbs. They don’t enjoy the same attention. They are rapidly growing more diverse yet more impoverished – and are poorly equipped to handle transition, because adaptation simply wasn’t included in their development fabric. Will the time ever come to address their ills? The home of the American dream is ailing.
Pete Saunders is a leading demographics and urban planning consultant based in Chicago and publisher of The Corner Side Yard, about the redevelopment of the American Rust Belt | <urn:uuid:4c480536-5c31-4a2e-a3af-b0800a8a3bb7> | CC-MAIN-2020-50 | https://www.theguardian.com/cities/2014/sep/05/death-america-suburban-dream-ferguson-missouri-resegregation | s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141717601.66/warc/CC-MAIN-20201203000447-20201203030447-00110.warc.gz | en | 0.963085 | 1,602 | 3.15625 | 3 |
The Importance of Being Extra Safe As Therapists While Remaining Compassionate for Our Patients
Coronavirus (COVID-19) and influenza are upon us. The people most adversely affected by COVID-19 are the elderly and those with preexisting conditions. Those are the folks who come to us, so we have an extra responsibility to prevent contagion. This is why the following is important. Touch therapy practitioners have one of the highest risks in society of contracting or spreading disease-causing microorganisms, as we frequently work with our hands directly on the skin. Respiratory disease is airborne, transmitted by micro water particles that a sneeze or cough can broadcast 3 to 6 feet. Those particles can linger on the face or in the hair if you or your client covers the mouth with a hand while sneezing; they are then on the hand and in the air. The patient then lies down and the therapist does cranial work to the head. The therapist now has the germs on their hands, touches a doorknob or scratches their own nose or face, and goes on to touch someone else. Most of the time, if we touch our own face or hair while in session, we are the ones transmitting pathogens to ourselves or others. Particles can also linger on a doorknob, the arm of a chair, the side of a treatment table, the waiting room countertop, the water cooler button, the pen the client uses to write a check or the one we use to do notes.
Massage therapy, PT, OT, visceral mobilization, lymphatic work and craniosacral work all involve lots of skin-on-skin touch techniques. In the age of COVID-19 and during influenza season, here are some precautions:
- Reschedule, or refer to a doctor, any client who presents with fever, persistent cough, sinus drainage or other obvious signs of communicable infection. When scheduling new patients, ask how they are feeling and refer them if they are sick.
- Wash your hands (a lot): before and after each patient/client, before meals, after bathroom breaks and before using the office computer or devices. A handwashing is a scrub: front and back of the hands, around the fingernails and past the middle of the forearm. Wash your hands first thing on arriving at work and last thing before leaving.
- Wear short-sleeved tops or shirts so you can wash (scrub) to the mid-forearm before and after sessions. Change tops daily.
- Once in session do not touch your own face, eyes, ears, hair or clothing.
- Use masks if coughing begins for you or your client. Masks help keep germs in and other germs out. Use gloves as needed for all mouth work, with infants, and any time when deemed appropriate (skin lesions, rashes, hygiene issues, questionable illness assessments). Never reuse gloves. Don't put used gloves in your pocket. Remove gloves inside out, and throw away immediately.
- In session, move away from a pending cough or sneeze if you can. Encourage the client to turn toward their elbow if possible. AND REMEMBER: you can stop a session if coughing ensues or if either you or your client becomes uncomfortable or concerned.
- Use single-use washable linens, pillow cases and towel covers for plinths, treatment tables, massage chairs and especially head and face cradles.
- Wipe tables, chair arms, doorknobs, headrests and face cradles with alcohol or sanitizer, as well as hand-held exercise equipment, weights, canes, crutches and walkers. Wipe your cell phone with a baby wipe at least three times a day. Also wipe your file drawer handles, countertops, stapler, landline phone (if you still have one) and the pen you use to do notes. Periodically do the bathroom door, railings, sink and toilet handles as well. Keep clean any surface in your clinic where hands go.
- Have everyone in your home use a wipe on their hands before they come in. Wash clothes often. Give the kids a small hand sanitizer for school and drill them in how to use it. Keep anyone who is sick at home. Contact doctor if conditions worsen, especially if fever, headache and breathing difficulties increase to concern levels.
- Use hand sanitizer lotion with grocery carts, public handles, gas pumps and card-swipe buttons; open doors with your elbow when possible. Use less cash.
So if it seems like a lot, it is. But I remember MERS, SARS, HIV, West Nile and Zika. We will manage and make it through this. Remember, most people just get sick; some don't even know they are sick. The elderly with preexisting conditions seem to be the most severely affected. Our job is to not spread it to those coming to us for therapy. Many are elderly and therefore need the extra precautions you see here. Sanitizing agents include but are not limited to: baby wipes, commercial sanitizing lotion, isopropyl alcohol wipes or alcohol applied to a Kleenex, and a solution of bleach and water (one teaspoon to 16 ounces of water)* applied to a paper towel or Kleenex. Avoid the eyes with all wipes; flush with plain water if irritated or itchy. If you are sick, remember the value of water -- stay hydrated and stay home. Water makes up more than 90 percent of blood, CSF and lymph. To consider the importance of water: where would soap be without it?
Finally, remain compassionate. Don't avoid touch, but do it in a way that is kind and caring, safe and healing, while ethically reducing the chance of spreading communicable disease to you, your patients or your community. In a society where touch is often inhibited we must continue our caring touch, but in public we must be extra safe in times of community infections. We can and should lead by example as we move CST forward in a world that is in need of CST now more than ever. Stay kind, stay smart, stay strong. Please share far and wide to fellow colleagues, friends and family.
Don Ash, PT, CSTA-CP
* Statistics from the New Hampshire Department of Health and Human Services:
The sanitation range of bleach/water ratio solutions is 50 to 200 parts per million.
- 1 teaspoon bleach per pint (16 oz.) of water, or 5 ml bleach per 500 ml of water
- 1 - 2 tablespoons bleach per gallon (128 oz.) of water, or 15 ml - 30 ml bleach per 3785 ml of water
Getting to know your menstrual cycle can help you learn about what’s happening inside your body. There’s more to it than your period, and those additional aspects can provide a wealth of knowledge when it comes to health patterns, possible illnesses and your fertility.
We spoke about all things cyclical with Rachel Urrutia, MD, an obstetrician-gynecologist at UNC Medical Center and assistant professor at the UNC School of Medicine.
The Menstrual Cycle
First, the basics on your cycle: A menstrual cycle is the series of events that occur between the first day of a period and the start of the next period. During that time the body goes through a cycle in which estrogen increases and spurs ovulation, which then produces progesterone to support a possible pregnancy. If there is no pregnancy, the hormone levels will drop and menstruation will occur.
The average menstrual cycle lasts 28 days, but cycles are still considered regular if they happen every 21 to 38 days. Most every woman’s cycle varies in some way, whether it’s how often ovulation happens, how long periods last or the severity of PMS (premenstrual syndrome) symptoms such as cramping, mood swings and fatigue.
The length of a woman’s cycles can differ from month to month and might change with age. The same can be said for period length and the amount of menstrual bleeding. The average period length is five days, but three to eight days is considered normal.
Many doctors “say ovulation is supposed to happen on day 14 of a cycle, but that only happens in 10 percent of all cycles,” Urrutia says. “Just because your pattern is different from the average doesn’t mean there is something abnormal that requires treatment. But for people who are having menstrual abnormalities, tracking their cycle can really help them get a better sense of their own pattern.”
Of course, tracking a menstrual cycle helps you time sex if you’re trying to conceive. But even for women who aren’t trying to get pregnant, tracking the cycle can reveal patterns of overall health.
For example, you might notice that you get migraines or feel fatigued during the same part of your cycle every month. Or, you might find out that you might have a different ovulation pattern—information that could help you if you want to get pregnant in the future. Getting to know your cycle can also help you identify changes that could be indications of a health issue.
A caveat: Women who use hormone-based contraception methods like the pill or IUDs (intrauterine devices) will not be able to get an accurate picture of their fertility or health from keeping track of their cycle. These hormonal methods stop ovulation and alter or prevent typical cycle symptoms. Still, it’s a good idea for women using birth control to pay attention to bleeding patterns so they can spot anything unusual.
How to Track Your Menstrual Cycle
There are multiple ways to track a menstrual cycle and different tools to help understand that information. Here’s what to know:
The most basic way to track your cycles is to note the beginning and ending date of each period and how heavy or light it is. Keeping up with that information can help you pinpoint any changes or irregularities in cycle length.
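As a rough sketch of that arithmetic (the dates below are made up, and the 21-38 day window is the "regular" range quoted earlier; none of this is medical advice), cycle length is simply the number of days from one period's first day to the next period's first day:

```python
from datetime import date

# Hypothetical period start dates from a paper calendar or app
starts = [date(2024, 1, 3), date(2024, 2, 1), date(2024, 3, 4), date(2024, 4, 18)]

# Cycle length = days between consecutive period start dates
lengths = [(b - a).days for a, b in zip(starts, starts[1:])]

# Flag any cycle outside the 21-38 day "regular" range cited above
flagged = [n for n in lengths if not 21 <= n <= 38]

print(lengths)   # [29, 32, 45]
print(flagged)   # [45]
```

A flagged value is only a prompt to look closer at the pattern over several months, not a diagnosis.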
A step beyond keeping track of the period is monitoring vaginal secretions, or cervical mucus, during a cycle.
A telltale sign of ovulation is mucus, Urrutia says. Before ovulation, estrogen levels are very high, and “there are receptors in the cervix that bind to the estrogen and create a different type of mucus, very wet and slippery. This usually happens three to seven days leading up to ovulation.”
After ovulation, progesterone levels rise, producing a thicker mucus, which may lead to women observing no mucus at all. About two weeks later, you should get your period.
Basal Body Temperature
Basal body temperature is the body’s lowest temperature during sleep. It’s usually taken right after waking up in the morning, using a digital oral thermometer or one specially designed to measure basal body temperature.
Taking this temperature every day is used to track changes in progesterone, Urrutia says.
“The increase in progesterone after ovulation can increase the body temperature very slightly,” Urrutia says. “When your body temperature is taken every morning consistently, you can figure out an average preovulatory basal temperature range. For most women, there should be about a half a degree increase from that basal temperature range after ovulation. This shift in temperature is the surest indication that you have ovulated.”
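As a toy illustration of that shift-detection idea (the readings, the six-day baseline window and the half-degree threshold here are illustrative assumptions, not a clinical rule):

```python
def detect_shift(temps_f, baseline_days=6, rise=0.5):
    """Return the index of the first reading at least `rise` degrees F above
    the average of the first `baseline_days` readings, or None if no shift."""
    baseline = sum(temps_f[:baseline_days]) / baseline_days
    for i, t in enumerate(temps_f[baseline_days:], start=baseline_days):
        if t >= baseline + rise:
            return i
    return None

# Hypothetical morning readings: a flat preovulatory phase, then a sustained rise
temps = [97.2, 97.3, 97.1, 97.2, 97.3, 97.2, 97.3, 97.9, 98.0, 98.1]
print(detect_shift(temps))  # 7 (the first clearly elevated reading)
```

Real charting would also account for the confounders mentioned below, such as disrupted sleep, which a naive threshold check cannot.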
Basal body temperature can be affected by things like unusual sleep patterns or drinking too much alcohol the night before.
Some of the hormones responsible for your menstrual cycle can be measured in your urine. One important hormone for determining fertility is the luteinizing hormone (LH), which triggers the release of an egg from the ovary for ovulation. A big increase of LH in urine indicates that ovulation will happen within 24 to 48 hours, which are the two most fertile days for conceiving.
“This method is most helpful if you are tracking fertility to achieve or avoid pregnancy,” Urrutia says. “There are digital monitors available over the counter that will test urine and display hormone level results.”
What Your Cycle Can Tell You
Beyond being an indication of when to attempt to get pregnant—a woman is most likely to conceive just before or during ovulation—a menstrual cycle can be a window into overall health. After tracking several cycles, patterns might start to appear. Those seemingly random headaches and mood swings could be attributed to hormone fluctuation. Or, it might turn out that what is perceived as irregular discharge is actually quite normal mucus. However, Urrutia says the reverse is true as well.
“Irregular bleeding patterns can indicate a health issue. If you find that you are bleeding more frequently than every 21 days, bleeding less frequently than every 40 days, or having heavier than average periods that last eight days or longer, you should see a doctor,” she says.
Irregularity or absence of ovulation might be an indication of an issue like polycystic ovarian syndrome (PCOS), and is reason to talk to a doctor as well.
If a doctor’s visit is needed, Urrutia says it will usually include an ultrasound, Pap smear and testing for an infection. Some of the common causes of long or irregular periods include PCOS, polyps, fibroids, infection or pregnancy.
Urrutia says, “Tracking your cycles can also shed light on health issues that could happen in the future. Really long periods and PCOS are associated with increased rates of heart disease, diabetes and other long-term health risks.”
Getting Started with Tracking
First, start by keeping up with your period’s beginning and ending dates. You can do that on a paper calendar or use one of many apps available. But be careful when it comes to any predictions the app makes about your cycle.
“Many of the period tracker apps out there only track dates of your period and then predict a day that it thinks you ovulated. For women with very regular cycles it might be accurate,” Urrutia says, but it won’t help women with irregular cycles and very few of these apps have undergone testing to be sure that their predictions are accurate.
Some apps allow you to keep track of mucus and basal body temperature, which together produce a more accurate picture of what’s happening in your body.
While technology may be convenient, Urrutia advises that if you are tracking your cycle as a way to prevent pregnancy, there is only one app—Natural Cycles—that has been approved by the FDA as a fertility awareness-based method of contraception. Urrutia has led research on how little information is available to those who practice fertility awareness-based contraception, and she urges all women to learn more about contraception methods.
And if you’re looking to learn more about your body ahead of a possible pregnancy, tracking cycles is a great place to start.
“For women who want to learn more about their fertility, it’s a good way to gain more knowledge about how everything works,” Urrutia says.
Looking for an OB-GYN? Find one near you. | <urn:uuid:78156838-f8c3-491a-965e-e0af9de01d77> | CC-MAIN-2020-50 | https://healthtalk.unchealthcare.org/what-cycle-tracking-can-tell-you-about-your-health/ | s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141204453.65/warc/CC-MAIN-20201130004748-20201130034748-00312.warc.gz | en | 0.939583 | 1,786 | 3.5625 | 4 |
This track depicts gaps in the assembly. These gaps — with the
exception of intractable heterochromatic gaps — will be closed during the finishing process.
Gaps are represented as black boxes in this track.
If the relative order and orientation of the contigs on either side
of the gap is known, it is a bridged gap and a white line is drawn
through the black box representing the gap.
This assembly contains the following principal types of gaps:
- Clone — gaps between clones in the same map contig. These
may be bridged or not.
- Contig — non-bridged gaps between map contigs.
- Centromere — non-bridged gaps from centromeres.
- Telomere — non-bridged gaps from telomeres.
- Heterochromatin — non-bridged gaps from large blocks of heterochromatin.
- Short Arm — non-bridged long gaps on the short arm of the chromosome.
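Since this page accompanies the table-schema view of the gap track, here is a minimal sketch of tallying gap records by type and bridged status. The sample rows and the column order used here (bin, chrom, chromStart, chromEnd, ix, n, size, type, bridge) are assumptions about the standard tab-separated table layout, not data taken from this page:

```python
from collections import Counter

# Made-up rows in the assumed tab-separated gap-table layout:
# bin, chrom, chromStart, chromEnd, ix, n, size, type, bridge
rows = [
    "585\tchr1\t124535434\t142535434\t1271\tN\t18000000\theterochromatin\tno",
    "585\tchr1\t121535434\t124535434\t1270\tN\t3000000\tcentromere\tno",
    "586\tchr1\t3845268\t3995268\t47\tN\t150000\tcontig\tno",
    "586\tchr1\t9995268\t10004268\t50\tN\t9000\tclone\tyes",
]

by_type = Counter()
bridged = Counter()
for line in rows:
    fields = line.split("\t")
    gap_type, bridge = fields[7], fields[8]
    by_type[gap_type] += 1
    bridged[bridge] += 1  # 'yes' = the gap box is drawn with a white line through it

print(dict(by_type))  # {'heterochromatin': 1, 'centromere': 1, 'contig': 1, 'clone': 1}
print(dict(bridged))  # {'no': 3, 'yes': 1}
```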
The Feb. 2009 human reference sequence (GRCh37) was produced by the
Genome Reference Consortium. | <urn:uuid:d1778819-b11b-42ee-8d01-17c72fc7d8ba> | CC-MAIN-2023-50 | https://genome-euro.ucsc.edu/cgi-bin/hgTables?db=hg19&hgta_group=map&hgta_track=gap&hgta_table=gap&hgta_doSchema=describe+table+schema | s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100164.15/warc/CC-MAIN-20231130000127-20231130030127-00465.warc.gz | en | 0.858445 | 236 | 3.484375 | 3 |
Although nowhere near rivaling the largest living species of whale -- the blue whale -- the largest extinct whale that's been discovered is known as Leviathan melvillei. The name is in homage to Herman Melville, the author of "Moby Dick," as it's thought this extinct species was fearsome and ferocious. It lived in the oceans of our planet roughly 12 to 13 million years ago.
The fossilized skull of Leviathan melvillei was found in a coastal desert region of Peru -- an area once covered by ocean -- in 2010. Scientists had previously found large fossilized teeth, which suggested there was an extinct whale species they hadn't yet discovered, but they hadn't come across any evidence beyond that. The team that discovered the creature was led by Olivier Lambert, a paleontologist from the Museum National d'Histoire Naturelle in Paris.
The skull that these paleontologists discovered was 10 feet long. Although the entire skeleton wasn't found, it's estimated that Leviathan melvillei measured roughly 60 feet in length. There were still teeth remaining in the fossilized skull, and some of these measured up to 14 inches long, suggesting that this whale was a fierce -- and somewhat scary -- predator.
Although it's impossible to be completely certain what Leviathan melvillei ate, experts believe that smaller whales, such as baleen whales, were probably its main source of food. The wear lines on this whale's teeth are vertical, and there's a big gap in the skull that would've held a large jaw muscle. Due to these attributes, it's safe to say Leviathan melvillei bit its prey, much like the killer whale, rather than sucking it in, like sperm whales and many other large whales.
Although not a direct ancestor of today's sperm whales, Leviathan melvillei was definitely related to them, more likely as a distant cousin. Although they were roughly the same size as modern sperm whales, there are several evolutionary differences between them. For instance, sperm whales have no teeth in their top jaws. They have no need for them, since they suck in squid and other prey as they swim along, rather than hunting for and biting their dinner.
Defining the PCMH
The medical home is a concept first introduced by the American Academy of Pediatrics (AAP) in 1967. In its initial version, the AAP defined the medical home as the center of a child’s medical records. At the time, the care of children with special health care needs was the primary focus of the medical home concept. Over time, however, the definition of the medical home has evolved to reflect changing needs and perspectives in health care.
The modern medical home expands upon its original foundation, becoming a home base for any child’s medical and non-medical care. Today’s medical home is a cultivated partnership between the patient, family, and primary provider in cooperation with specialists and support from the community. The patient/family is the focal point of this model, and the medical home is built around this center. Another key factor is that the focus of the medical home has shifted to include all children and adults, not just children with special health care needs. In the 2002 revision (PDF – 45KB) of its 1992 statement (PDF – 32KB) on the medical home, the AAP reiterated and enhanced its explanation of the medical home’s crucial characteristics. These guidelines stress that care under the medical home model must be accessible, family-centered, continuous, comprehensive, coordinated, compassionate, and culturally effective. In 2007, the AAP joined with the American Academy of Family Physicians (AAFP), the American College of Physicians (ACP), and the American Osteopathic Association (AOA) to form the Joint Principles of the Patient Centered Medical Home. Under this collaborative effort, the characteristics of the medical home have been defined within these 7 principles:
1. Personal physician:
- Each patient has an ongoing relationship with a personal physician trained to provide first contact, continuous and comprehensive care.
2. Physician directed medical practice:
- The personal physician leads a team of individuals at the practice level who collectively take responsibility for the ongoing care of patients.
3. Whole person orientation:
- The personal physician is responsible for providing for all the patient’s health care needs or taking responsibility for appropriately arranging care with other qualified professionals. This includes care for all stages of life; acute care; chronic care; preventive services; and end of life care.
4. Care is coordinated and/or integrated:
- Across all elements of the complex health care system (e.g., subspecialty care, hospitals, home health agencies, nursing homes) and the patient’s community (e.g., family, public and private community-based services). Care is facilitated by registries, information technology, health information exchange and other means to assure that patients get the indicated care when and where they need and want it in a culturally and linguistically appropriate manner.
5. Quality and safety are hallmarks of the medical home:
- Practices advocate for their patients to support the attainment of optimal, patient-centered outcomes that are defined by a care planning process driven by a compassionate, robust partnership between physicians, patients, and the patient’s family.
- Evidence-based medicine and clinical decision-support tools guide decision making.
- Physicians in the practice accept accountability for continuous quality improvement through voluntary engagement in performance measurement and improvement.
- Patients actively participate in decision-making, and feedback is sought to ensure patients’ expectations are being met.
- Information technology is utilized appropriately to support optimal patient care, performance measurement, patient education, and enhanced communication.
- Practices go through a voluntary recognition process by an appropriate non-governmental entity to demonstrate that they have the capabilities to provide patient centered services consistent with the medical home model.
- Patients and families participate in quality improvement activities at the practice level.
6. Enhanced access to care:
- Is available through systems such as open scheduling, expanded hours and new options for communication between patients, their personal physician, and practice staff.
7. Payment:
- Appropriately recognizes the added value provided to patients who have a patient-centered medical home. The payment structure should be based on the following framework:
- It should reflect the value of physician and non-physician staff patient-centered care management work that falls outside of the face-to-face visit.
- It should pay for services associated with coordination of care both within a given practice and between consultants, ancillary providers, and community resources.
- It should support adoption and use of health information technology for quality improvement;
- It should support provision of enhanced communication access such as secure e-mail and telephone consultation;
- It should recognize the value of physician work associated with remote monitoring of clinical data using technology.
- It should allow for separate fee-for-service payments for face-to-face visits. (Payments for care management services that fall outside of the face-to-face visit, as described above, should not result in a reduction in the payments for face-to-face visits.)
- It should recognize case mix differences in the patient population being treated within the practice.
- It should allow physicians to share in savings from reduced hospitalizations associated with physician-guided care management in the office setting.
- It should allow for additional payments for achieving measurable and continuous quality improvements.
The Maternal and Child Health Bureau (MCHB) at the Health Resources and Services Administration (HRSA) has identified specific criteria to establish whether a child’s health care meets the definition of a medical home. This criteria include:
- Whether the child has at least one personal doctor or nurse who knows him or her well and a usual source of sick care;
- Whether the child has no problems gaining referrals to specialty care and access to therapies or other services or equipment;
- Whether the family is very satisfied with the level of communication among their child’s doctors and other programs;
- Whether the family usually or always gets sufficient help coordinating care when needed and receives effective care coordination;
- Whether the child’s doctors usually or always spend enough time with the family, listen carefully to their concerns, are sensitive to their values and customs, provide any information they need, and make the family feel like a partner in their child’s care;
- Whether an interpreter is usually or always available when needed.
A medical home is an important mechanism for uniting the many segments of a child’s care, including behavioral and oral health, to accomplish these goals. Furthermore, Drs. David Kibbe of the American Academy of Family Physicians and Joseph Kvedar of the Center for Connected Health at Partners HealthCare believe that the medical home model of care works synergistically with participatory medicine (PDF – 455KB) models in which the active role of the patient is emphasized. | <urn:uuid:9aabe942-21ef-4e0c-9f63-97ee7f05e085> | CC-MAIN-2017-39 | http://arkansasaap.org/pcmh/ | s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818689897.78/warc/CC-MAIN-20170924062956-20170924082956-00073.warc.gz | en | 0.946775 | 1,386 | 3.4375 | 3 |
Located in the southwest part of Wajima City, Monzen-machi-Kuroshima-machi once flourished as a port of call for Kitamae ship trading. Running a shipping agency, the Kadomi family was at their peak of prosperity from the closing days of the Tokugawa shogunate through the early Meiji period. They owned seven Kitamae ships in their heyday. The existing Kadomi residence was built in 1872. The main building is a single-story wooden structure with a pantile gable roof. Its main entrance is on the side, and there is a passage connecting the entrance to the backyard. There are four plastered storehouses: a three-storied one for the family’s possessions and three more that are two-storied and used for salt, adzuki beans and rice. This architectural style is typically seen in the houses of shipping agents. The 2007 Noto Peninsula Earthquake totally destroyed the house, but it was reconstructed in 2011. The house, which contains possessions of the family, is open to the public. There are other shipping agents’ houses in this area, which has been designated as an important preservation district for groups of traditional buildings by the national government. | <urn:uuid:faeb0238-6b51-47ea-8b07-16f3d2f7a193> | CC-MAIN-2023-40 | http://noto-satoyamasatoumi.jp/detail_en.php?tp_no=303 | s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510942.97/warc/CC-MAIN-20231002001302-20231002031302-00705.warc.gz | en | 0.978053 | 255 | 3.15625 | 3 |
WelCom November 2017:
“People who are deeply depressed are transformed when they know they are really loved.” – Jean Vanier
World Mental Health Day is observed annually on October 10. This year, the Church sought to celebrate this day as an opportunity to encourage a truly pastoral view that embraces our total community as the living Body of Christ. Following a recent international report that reveals the rate of adolescent suicide to be higher in New Zealand than anywhere else, Bishop Peter Cullinane explores the spirituality of suicide prevention.
A recent Unicef Report showed New Zealand has the highest rate of adolescent suicide of any country. What a record to have! If we are living in the real world, we are going to want to know why.
Much commentary on suicide rates and suicide prevention recites statistics and demographics, trying to identify the risk factors. This is an essential part of what needs to happen; but only part. It is commonly acknowledged that risk factors include loneliness, bullying, mental illness, trauma and deprivation. There is also risk from some illnesses which have an organic origin and these require a more specialised discussion than is possible here.
“People need to know their life is worth living no matter what is happening to them.”
A consequence of the risk factors considered here can be the feeling that one’s life is no longer worth living. Somehow, therefore, people need to know their life is worth living no matter what is happening to them.
Before looking more closely at this, it might be useful to identify some of the obstacles that get in the way of them being sure of this. I shall name four characteristics of our national culture that are not helpful:
1. Intellectual superficiality
I support the view that there should be public discussion of this topic, which, obviously, needs to be accurately informed and responsibly conducted. But in a culture with a diminishing regard for careful argument, preferring headline-speak and the blips of information available through social media, popular discussion often falls short of being a true ‘discussion’.
Moreover, in this cultural context, clear understanding and good judgment can be impeded by the way actual examples of pain and suffering, which we all find distressful, can distort careful argument. Within a popular culture that is intellectually superficial, even the social sciences find it hard to compete with the pulling power of emotion.
2. Double standards and ambiguity in society’s attitudes
It is not easy to convey the idea that one’s life always matters regardless of what is happening, if at the same time society is proposing that sometimes life is no longer worth living. Whatever the case for or against euthanasia, ultimately, that is the message of legalised euthanasia.
“If youth suicides are to be discouraged, and assisted suicide made legal, the question has to be asked: what makes them different?”
3. How people are valued
If youth suicides are to be discouraged, and assisted suicide made legal, the question has to be asked: what makes them different? Here we come face to face with what it means to live within a culture that values people not on the intrinsic dignity of being human but on their ability to function, that is, ‘their ability to be successful, productive, independent and in control’. (Kleinsman, Dr J Nathaniel Report, August 2017, p 3.) The ability to function becomes the basis of differentiation between lives that are worth living and lives that are deemed not to be. Society needs to face up to what this way of categorising people implies, even apart from the question of suicide.
4. Loss of a sense of transcendence
We live in a culture that doesn’t even look for reasons why life might still be worthwhile when it is no longer useful or has become a burden. Are there reasons that transcend the criteria of functionality? To see no further than what people can be useful for, or how well they can still manage, is a stunted way of looking at people and at human life. This brings us to the spiritual dimension of suicide prevention.
The Spiritual Dimension
It is not enough to analyse why a person might not want to live. We need to reflect on what usually makes people want to live.
The desire to live depends on, more than anything else, the experience of being loved. This experience carries with it the experience of belonging, and a sense of self-worth, that normally come through the tangible experience of other people’s love for us – starting with one’s own parents. The absence of this experience of being loved can be damaging, and devastating. Fortunately, the experience of being really loved, even where it has previously been lacking, can still be a powerful source of healing. One who ought to know, having given his own life over to helping the most troubled and most needy, namely Jean Vanier, has said ‘People who are deeply depressed are transformed when they know they are really loved.’
The Catholic tradition dares to say God’s love is made present to us in human love. The ‘spiritual’ dimension of human well-being is deeply human!
Unfortunately, the human experience of being loved can fail so easily. When the experience of being loved, especially by those who know us best, is lacking, we become unsure of ourselves, self-doubting and prone to anxiety. There is more than enough evidence of how marriage failure can affect children, and spouses.
The mystery of suicide is more complex yet, because some of its victims come from seemingly good family backgrounds. As young people begin to move out on their own – the normal development of autonomy – the bonds that helped them to know their self-worth become looser. But they still have a deep human need to know they are truly loveable.
It comes down to this: whether we come out of strong family life or weak family life, our sense of self-worth and the value of our life, need to have roots in a love that cannot fail us.
It is not being suggested here that ‘religion is the answer’. On the contrary, there are distortions of ‘religion’ that can do the damage. But, ultimately, the love God has for us is of the kind that cannot fail us. Unlike every other love, God’s love for us, revealed in the Person of Christ and the events of his life, is unconditional and everlasting. God’s mercy pursues us even when we have let ourselves – and perhaps everybody else – down. Christian revelation is above all the revelation of how much we mean to God – and that can mean more to us than anyone or anything that would make us think less of ourselves.
It is this game-changing love that is denied to people by widespread failure to give them a formation in life-giving, joyous faith. This lack deprives them of the greatest reason for believing in themselves and believing their lives really matter. They need to know this, especially in times of difficulty. Without this deep sense of reassurance, some will look for other ways of escaping the pain of a life that seems cruel and unfair, when opportunities constantly elude them, and then self-blame makes it worse. Short of suicide there are drugs and other ways of trying to forget. At a deeper level, what they are trying to escape is meaninglessness. What they need is meaning – over-arching, all-encompassing, unassailable meaning!
Like all false prophets, the deniers have much to answer for. It is an illusion on their part to think secular ideology is the touchstone of truth. Most of humankind applauds the work of Mother Teresa and the very many others like her, from all religions, whose work is pointless if people are to be valued only in terms of their usefulness, or ability to manage for themselves. Those who do see the point, know that human beings have a value that reaches beyond the short horizons of our life-spans, which is what makes them so special even during this life.
A spirituality that is ‘deeply human’ is not somewhere ‘up in the sky’. It is earthed in all that makes up human life. Its raw material includes the planet we are made from, as well as the events of our daily lives. What we do, socially, culturally, artistically, economically – no matter how small or seemingly insignificant – has a value that goes beyond our short life-times. ‘All the good fruits of human nature, and of human enterprise, cleansed and transfigured, we shall find again.’ (Second Vatican Council, Church in Modern World, n. 39). Again, we cannot fully taste and savour our lives without a sense of transcendence.
But what about situations that can only be described as bad? People rightly try to escape poverty, oppression and hardship in all its forms. Bad is bad, and an authentic spirituality never tries to bless what is bad or unjust. On the contrary, it works for justice, peace and human development. So, in what sense can we still claim that every life is worth living, even when things are going very wrong?
Again, just as a sense of transcendence is the only way to see past the limited and limiting criteria of functionality, so here too, a sense of transcendence is the only way to see beyond the ills that oppress all people in one way or another. Hope is not a mere assurance that things will turn out right. Rather, it is deep down knowing that ultimately all will be well even when things don’t turn out right! But this is a God-given awareness; it presupposes a person’s openness to God, an intimate familiarity with God and God’s ways. And this is what young people are deprived of in an environment of religious indifference and disregard. Does this have something to do with our high rate of youth suicide?
If people are to know their lives are still worth living even when the odds seem hopelessly against them, they will need to have reasons that don’t collapse when everything else does; transcendent reasons; God-given reasons.
April 20, 2011
New Theory Of Evolution For Spiral Galaxy Arms
A study of spiral patterns found in galaxies like our Milky Way could overturn the theory of how the spiral arm features form and evolve. The results are being presented by postgraduate student, Robert Grand, at the Royal Astronomical Society's National Astronomy Meeting in Llandudno, Wales this week.
Since the 1960s, the most widely accepted explanation has been that the spiral arm features move like a Mexican wave in a crowd, passing through a population of stars that then return to their original positions. Instead, computer simulations run by Grand and his colleagues at University College London's Mullard Space Science Laboratory (MSSL) suggest that the stars actually rotate with the arms. In addition, rather than being permanent features, the arms are transient, breaking up and re-forming over a period of about 80-100 million years.

"We have found it impossible to reproduce the traditional theory; instead, in our simulations the stars move with the spiral pattern at the same speed. We simulated the evolution of spiral arms for a galaxy with five million stars over a period of 6 billion years. We found that stars are able to migrate much more efficiently than anyone previously thought. The stars are trapped and moved along the arm by its gravitational influence, but we think that eventually the arm breaks up due to the shear forces," said Grand.
In the simulations, Grand found that some stars gradually move outwards and inwards along the spiral arms. Stars travelling at the leading side of the spiral arm slide in towards the center of the disc, whereas the stars travelling at the trailing side are kicked out to the edges.
"This research has many potential implications for future observational astronomy, like the European Space Agency's next corner stone mission, Gaia, which MSSL is also heavily involved in. As well as helping us understand the evolution of our own galaxy, it may have applications for regions of star formation," said Grand.
Image Caption: Snapshots of a face-on view of a simulated disc galaxy. A brighter color indicates higher density. The image shows two examples of star particles: the red stars are travelling at the leading side of the arm, and the blue stars at the trailing side. It can be seen that the blue and red stars interchange their radial distances, with rapid migration within 40 million years. The dotted lines trace circles with radii of 4,000, 5,000 and 6,000 parsecs (1 parsec = 31 trillion kilometers), to guide the eye.
Has The Tsunami In Japan Destroyed The Japanese Economy?
The entire world is in a state of mourning today as details regarding the horrific damage caused by the massive tsunami in Japan continue to trickle in. The magnitude 8.9 earthquake that caused the tsunami was the largest earthquake that Japan has ever experienced in modern times. Waves as high as 30 feet swept over northern Japan. The tsunami waters reached as far as 6 miles inland, and authorities have already recovered hundreds of dead bodies. Those of us that have seen footage of this disaster on television will never forget it. But this nightmare is not over yet. There have been dozens of aftershocks, and many of them have been quite large. In fact, there have been 19 earthquakes of at least magnitude 6.0 in the area over the last 24 hours. So what is this disaster going to do to the 3rd largest economy in the world? Japan already had a national debt that was well over 200 percent of GDP. Could this be the “tipping point” that pushes the Japanese economy over the edge and into oblivion?
It is hard to assess the full scope of the damage to Japan at this point, but virtually everyone agrees that much of northern Japan is a complete and total disaster area at this point. Many towns have essentially been destroyed. Some are estimating that the economic damage from this disaster will be in the hundreds of billions of dollars. Others believe that the final total will be in the trillions of dollars.
Fortunately, major cities such as Tokyo came through this event relatively unscathed and most of the major manufacturing facilities are not in the areas that were most directly affected by the earthquake and the tsunami.
But let there be no doubt, this was a nation-changing event. Japan will never quite be the same again.
Also, it isn’t just Japan that will be affected by this. The truth is that economic ripples from this event will be felt all over the world.
An economist from High Frequency Economics, Carl Weinberg, told AFP the following about the economic consequences of this disaster…. | <urn:uuid:958f4f5b-e45b-4698-bf42-5be627ffacfa> | CC-MAIN-2016-07 | http://revolutionradio.org/?p=12916 | s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701152130.53/warc/CC-MAIN-20160205193912-00186-ip-10-236-182-209.ec2.internal.warc.gz | en | 0.97481 | 423 | 2.515625 | 3 |
One in five Americans experienced some sort of mental illness in 2010, according to a new report from the Substance Abuse and Mental Health Services Administration. About 5 percent of Americans have suffered from such severe mental illness that it interfered with day-to-day school, work or family.
Women were more likely to be diagnosed with mental illness than men (23 percent of women versus 16.9 percent of men), and the rate of mental illness was more than twice as likely in young adults (18 to 25) than people older than 50.
About 11.4 million adult Americans suffered from severe mental illness in the past year, and 8.7 million adults had serious thoughts of suicide. Among them, more than 2 million made suicide plans and about 1 million attempted suicide.
Nearly 2 million teens, or 8 percent of the adolescent population, experienced a major depressive episode in the past year. The research defined a major episode as at least a two-week period when a person is depressed with a loss of interest or pleasure in daily activities, while also experiencing at least four of seven symptoms defined in the fourth edition of the Diagnostic and Statistical Manual of Mental Disorders.
Only about 60 percent of people with mental illness get treatment each year, according to the report, and whites and Native Americans were more likely to seek help than African-Americans, Latinos and Asians.
Researchers drew the findings from nearly 70,000 surveys on mental health and addiction among children and adults.
"Mental illnesses can be managed successfully, and people do recover," Pamela S. Hyde, head of Substance Abuse and Mental Health Services Administration, said in a news release. "Mental illness is not an isolated public health problem. Cardiovascular disease, diabetes, and obesity often co-exist with mental illness and treatment of the mental illness can reduce the effects of these disorders. The Obama Administration is working to promote the use of mental health services through health reform. People, families and communities will benefit from increased access to mental health services."
Dessa Bergen-Cico, assistant professor of public health, food studies and nutrition at Syracuse University in New York, said there are several aspects of mental health treatment that should be improved in this country, including better access to preventive mental health care, which should include coverage for evidence-based prevention, intervention programs and counseling. An example of such a program is Mindfulness-Based Stress Reduction (MBSR), an eight-week secular mindfulness and meditation training program that teaches and prepares people to develop lifelong skills for dealing with anxiety, stress, depression, post-traumatic stress disorder and chronic illness.
"Despite legislation calling for coverage of mental health and addictions, not much has changed in insurance coverage for prevention or treatment," Bergen-Cico said. "Whereas health care providers are readily prepared to practice medicine, [and] by this I mean write appropriate prescriptions for medication to treat depression, anxiety, ADHD, etc., they are not trained as counselors and do not and should not fill that role."
Mental illness cost about $300 billion in 2002 alone in the United States, according to the report.
"What is missing is the approach to mental health problems with a comprehensive ongoing strategy much like what we do for physical injury for which health care providers commonly employ a robust treatment that in addition to surgery would include any or all of the following: physical therapy, medication, preventative education and long term follow-up," Bergen-Cico said. | <urn:uuid:73a05ed4-d50e-45e2-aa7c-2d04e4854de3> | CC-MAIN-2017-13 | http://abcnews.go.com/blogs/health/2012/01/19/1-in-5-americans-suffer-from-mental-illness/ | s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218187227.84/warc/CC-MAIN-20170322212947-00006-ip-10-233-31-227.ec2.internal.warc.gz | en | 0.959743 | 700 | 2.90625 | 3 |
The World of Sir Thomas Malory
Sir Thomas Malory (1415 – 1471) was an English writer, most famous as the author or compiler of a reworking of existing tales about the legendary King Arthur, Guinevere, Lancelot, Merlin, and the Knights of the Round Table. Malory took existing French and English stories about these figures and added original material, effectively defining a base of stories known as Le Morte d'Arthur from which many other authors have drawn through the centuries. Le Morte d'Arthur was first published in 1485 by William Caxton. Modern Arthurian writers who have used Malory as their principal source include T. H. White (The Once and Future King) and Alfred, Lord Tennyson (The Idylls of the King). Sir Thomas Malory of Newbold Revel in Warwickshire, was a knight, land-owner, and Member of Parliament.
These quotes, mainly from Le Morte d’Arthur but including some from other works, show not only his wit and talent but give us a glimpse of the thinking of the Middle Ages too:
'But in those days, as rare old Chaucer tells, All Britain was fulfilled of miracles. So, as I said, the great doors opened wide. In rushed a blast of winter from outside, And with it, galloping on the empty air, A great green giant on a great green mare.’
'The very purpose of a knight is to fight on behalf of a lady.'
'Laugh if you will, my queen, but let me be a woman still. You fairies love where love is wise and just; We mortal women love because we must.’
'We shall now seek that which we shall not find.’
'And therein were many knights and squires to behold, scaffolds and pavilions; for there upon the morn should be a great tournament: and the lord of the tower was in his castle and looked out at a window, and saw a damosel, a dwarf, and a knight armed at all points.’
'Yet some men say in many parts of England that King Arthur is not dead, but had by the will of our Lord Jesu into another place; and men say that he shall come again, and he shall win the holy cross.'
'But she pursued them through their tangled lair And caught them, and put fire-flies in their hair; And then they all joined hands, and round and round They danced a morris on the moonlit ground.'
'The sweetness of love is short-lived, but the pain endures.'
'Do thou thy warste,' seyde Mordred, 'and I defyghe the!'
'In the midst of the lake Arthur was ware of an arm clothed in white samite, that held a fair sword in that hand.’
'And therefor, sir,' seyde the Bysshop, 'leve thys opynyon, other ellis I shall curse you with booke, belle and candyll.'
'For I have promised to do the battle to the uttermost, by faith of my body, while me lasteth the life, and therefore I had liefer to die with honour than to live with shame ; and if it were possible for me to die an hundred times, I had liefer to die oft than yield me to thee; for though I lack weapon, I shall lack no worship, and if thou slay me weaponless that shall be thy shame.'
'O Merlin', said Arthur, 'Here hadst thou been slain for all thy crafts had I not been.' 'Nay,' said Merlin, 'Not so, for I could save myself an I would; and thou art more near thy death than I am, for thou goest to the deathward, an God be not thy friend.'
'Enough Is as Good as a feast.'
'And when matins and the first mass was done, there was seen in the churchyard, against the high altar, a great stone four square, like unto a marble stone; and in midst thereof was like an anvil of steel a foot on high, and therein stuck a fair sword naked by the point, and letters there were written in gold about the sword that said thus:—Whoso pulleth out this sword of this stone and anvil, is rightwise king born of all England.'
'They both laughed and drank to each other; they had never tasted sweeter liquor in all their lives. And in that moment they fell so deeply in love that their hearts would never be divided. So the destiny of Tristram and Isolde was ordained.'
'Then he looked by him, and was ware of a damsel that came riding as fast as her horse might gallop upon a fair palfrey. And when she espied that Sir Lanceor was slain, then she made sorrow out of measure, and said, 'O Balin ! two bodies hast thou slain and one heart, and two hearts in one body, and two souls thou hast lost.'
'And thus it passed on from Candlemass until after Easter, that the month of May was come, when every lusty heart beginneth to blossom, and to bring forth fruit; for like as herbs and trees bring forth fruit and flourish in May, in like wise every lusty heart that is in any manner a lover, springeth and flourisheth in lusty deeds. For it giveth unto all lovers courage, that lusty month of May, in something to constrain him to some manner of thing more in that month than in any other month, for divers causes. For then all herbs and trees renew a man and woman, and likewise lovers call again to their mind old gentleness and old service, and many kind deeds that were forgotten by negligence. For like as winter rasure doth alway arase and deface green summer, so fareth it by unstable love in man and woman. For in many persons there is no stability; for we may see all day, for a little blast of winter's rasure, anon we shall deface and lay apart true love for little or nought, that cost much thing; this is no wisdom nor stability, but it is feebleness of nature and great disworship, whosomever useth this. Therefore, like as May month flowereth and flourisheth in many gardens, so in like wise let every man of worship flourish his heart in this world, first unto God, and next unto the joy of them that he promised his faith unto; for there was never worshipful man or worshipful woman, but they loved one better than another; and worship in arms may never be foiled, but first reserve the honour to God, and secondly the quarrel must come of thy lady: and such love I call virtuous love. But nowadays men cannot love seven night but they must have all their desires: that love may not endure by reason; for where they be soon accorded and hasty heat, soon it cooleth. Right so fareth love nowadays, soon hot soon cold: this is no stability. 
But the old love was not so; men and women could love together seven years, and no licours lusts were between them, and then was love, truth, and faithfulness: and lo, in like wise was used love in King Arthur's days. Wherefore I liken love nowadays unto summer and winter; for like as the one is hot and the other cold, so fareth love nowadays; therefore all ye that be lovers call unto your remembrance the month of May, like as did Queen Guenever, for whom I make here a little mention, that while she lived she was a true lover, and therefore she had a good end.'
'I shall bere your noble fame, for ye spake a grete worde and fulfilled it worshipfully.'
'Now, said Sir Ector to Arthur, I understand ye must be king of this land. Wherefore I, said Arthur, and for what cause? Sir, said Ector, for God will have it so; for there should never man have drawn out this sword, but he that shall be rightwise king of this land'
'for it is better that we slay a coward, than through a coward all we to be slain.'
'...and there encountered with him all at once Sir Bors, Sir Ector, and Sir Lionel, and they three smote him at once with their spears, and with force of themselves they smote Sir Lancelot's horse reverse to the earth. And by misfortune Sir Bors smote Sir Lancelot through the shield into the side...' | <urn:uuid:f81dd5c1-207b-4956-85ca-a7296c61ebd2> | CC-MAIN-2023-40 | https://www.clarendonhousebooks.com/single-post/2018/03/11/the-world-of-sir-thomas-malory | s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233511220.71/warc/CC-MAIN-20231003192425-20231003222425-00125.warc.gz | en | 0.967233 | 1,814 | 3.046875 | 3 |
Duluth Harbor North Breakwater, MN
Description: An act of June 3, 1896, unified the harbors of Duluth, Minnesota and Superior, Wisconsin and provided over $3 million for improvements. Part of this money was used to widen the Duluth Canal and replace the existing piers with substantial structures of timber and monolithic concrete. Butler Ryan Company of St. Paul was contracted for the construction of the substructure and superstructure for the new north pier, and work started in April 1898. The south pier was completed in 1900 and marked the following year by a pair of range lights, while the north pier was completed in 1901 and was not lit.
In 1908, the Lighthouse Board acknowledged the need to mark the north pier:
The approach to Duluth Harbor is one of the worst and most dangerous on the whole chain of lakes. The entrance piers are only 300 feet in width, and the north pier is so close to the shore that a vessel making a mistake in judging the width would be immediately on the rocks. The Lake Carriers' Association considers this a matter of such importance that it has made arrangements for the exhibition of private lights for the balance of the season of navigation in 1908.
Congress appropriated $4,000 on March 4, 1909, and after plans and specifications were prepared for a metal tower, the lowest bid for furnishing and delivering the metalwork was accepted. A conical tower consisting of latticed steel columns covered with a 5/16” steel shell was erected on the outer end of the north pier and lit for the first time on April 7, 1910. The lighthouse stands thirty-seven feet tall and tapers from ten feet six inches at its base to eight feet at the base of the octagonal lantern room.
A fifth-order Henry-Lepaute Fresnel lens was mounted on a pedestal in the lantern room, and a motor connected to the city’s electric lighting system was used to drive a clockwork that produced the light’s characteristic of fixed white two seconds, eclipse two seconds. Three keepers were assigned to care for the range lights on the south pier along with the pierhead light on the north pier. The head keeper lived in a frame building that had been built in 1874, when the first light on the south pier was established, and the two assistants rented houses in the city until a redbrick duplex was built across from the old keeper’s dwelling in 1912/1913.
The Duluth Canal piers are a dangerous location during a storm. On the night of April 30, 1967, two sixteen-year-old twins and their seventeen-year-old brother were challenging ten- to fifteen-foot waves on the north pier, when witnesses observed a huge wave sweep one of them away. Boatswain’s Mate First Class Edgar Culbertson, Boatswain’s Mate Second Class Richard R. Callahan, and Fireman Ronald C. Prei from the local Coast Guard base braved the storm and ventured out on the pier to rescue the two boys reportedly stranded at the pierhead light. The men tethered themselves together, with a spacing of twenty-five feet, and by the light of hand lanterns, proceeded to the end of the pier.
After finding no trace of the boys at the lighthouse, the coastguardsmen headed back. While making their way along the pier, a twenty-foot wave swept Culbertson off his feet and carried him over the breakwater wall and into the turbulent Lake Superior waters. Despite a valiant effort by his crewmates, Culbertson perished. Culbertson was posthumously awarded the Coast Guard Medal, and a plaque on the north pier commemorates his sacrifice.
In 2014, an LED beacon replaced the active Fresnel lens in the lantern room. The change reduced the range of the light from about fourteen nautical miles to ten-and-a-half nautical miles, but the historic lens will no longer be subjected to temperature fluctuations and ultraviolet rays that can cause the lens to deteriorate.
Head Keepers: Ernest R. Jefferson (1873 – 1888), James Prior (1888 – 1908), Alexander Shaw (1908 – 1910), Charles Lederle (1910 – at least 1918), Edwin C. Bishop (at least 1920 – at least 1930), John Woods (at least 1935 – at least 1940).
Located at the end of the northern breakwater, marking the entrance to the canal in Duluth. The lighthouse is owned by the Coast Guard. Grounds open, tower closed.
Pictures on this page copyright Kraig Anderson, used by permission. | <urn:uuid:9aa6f151-5723-4356-9c46-10e1cfc6be60> | CC-MAIN-2015-14 | http://www.lighthousefriends.com/light.asp?ID=266 | s3://commoncrawl/crawl-data/CC-MAIN-2015-14/segments/1427131300799.9/warc/CC-MAIN-20150323172140-00190-ip-10-168-14-71.ec2.internal.warc.gz | en | 0.959382 | 982 | 3.015625 | 3 |
Causes:
- Infection (viruses, bacteria)
- Inflammatory causes such as chemicals, fumes, dust, and debris
- Oral-genital contact with someone who might be infected with a sexually transmitted disease (STD) such as chlamydia, gonorrhea, or herpes
Prevention:
- Follow good hygiene, such as washing hands regularly.
- Use proper eye protection when in conditions that might increase your risk, such as working in dusty or fume-filled areas.
- Avoid allergic influences that might affect you, such as perfumes, weeds, mold, etc.
Self-care:
- Wash hands frequently so as not to contaminate others or reinfect yourself.
- Separate your towels and washcloths so that others will not be at risk.
- If itching is the most irritating feature, apply cold compresses.
- If swelling is bothersome, apply cold compresses.
- If there is a lot of discharge, especially if mucous-like, use warm compresses.
- If there is aching and/or pain, use warm compresses.
- Wash the eyelids very gently and soak off debris; do not pick at it.
- Never rub the eyes, as this can spread the problem.
- Do not share contact lens paraphernalia with an affected person.
Note: Do not rub or touch your eyes when you get a cold or upper respiratory infection, as this can spread the disease to the eyes.
See a doctor if:
- Pain is increasing.
- Vision is worsening.
- There is blistering and/or rash on the eyelids.
- Swelling is increasing.
- There is a lot of thick mucus secreting.
- The condition is not getting better within a week. | <urn:uuid:a874e84b-c0e0-4a47-bd7c-9a7b4103d6d0> | CC-MAIN-2014-15 | http://www.skinsight.com/adult/conjunctivitisPinkEye.htm | s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00601-ip-10-147-4-33.ec2.internal.warc.gz | en | 0.900049 | 357 | 3.125 | 3 |
As Russian bombs rain down on Ukraine, there is a huge focus on the ‘oligarchs’ and business figures close to Vladimir Putin who have profited from the corrupt underpinnings of Russia’s political economy.
Russia was dubbed a kleptocracy by American political scientist Karen Dawisha in her 2014 book Putin’s Kleptocracy: Who Owns Russia? But what exactly is a kleptocracy and in what ways is the term appropriate for Russia?
Definition of a kleptocracy
Most explanations of kleptocracy – derived from the Greek for ‘thief’ and ‘rule’ – stress the aspect of ‘grand corruption’ whereby high-level political power is abused to enable a network of ruling elites to steal public funds for their own private gain using public institutions.
Kleptocracy is therefore a system based on virtually unlimited grand corruption coupled with, in the words of American academic Andrew Wedeman, ‘near-total impunity for those authorized to loot by the thief-in-chief’ – namely the head of state.
Often oligarchs are seen as characteristic of Russia’s kleptocracy, but the Russia of the 1990s was not a kleptocracy as the oligarchs represented a power base outside of the Kremlin, one that Putin had to dismantle by exiling or jailing those who opposed him.
In a true kleptocracy, the oligarchs are the politicians themselves – often referred to as ‘poligarchs’. The former head of the Russian state railways Vladimir Yakunin – whose mansion famously had a whole room dedicated to storing his wife’s fur coats – is a good example, as is Dariga Nazarbayeva from Kazakhstan, the eldest daughter of the country’s first president, who rose to the rank of chair of the Senate while sitting on a $595 million fortune.
The undeniably vast wealth of each of these ‘PEPs’ (politically exposed persons) is explained, they have said, by legitimate personal business earnings and salary. Nazarbayeva successfully resisted a UK court process to explain the source of expensive real estate linked to her.
Some definitions of kleptocracy introduce the concept of illegality – for example a ‘rent-seeking state where favouritism happens illegally’ although this poses problems as kleptocratic regimes do not apply the law evenly.
Illegal activity on behalf of government officials is either ignored, allowing corrupt funds to flow out of the country, or ruled legal by a corrupted legal system – ‘legalized’ illicit financial flows.
Such nations also provide ample opportunities for rent-seeking by awarding lucrative contracts to family members or friends of those in power, which is corrupt but may not be illegal under the laws of that country.
Self-enrichment not the only motivating factor
Clearly self-enrichment is a driving force behind kleptocracies but kleptocratic overreach – stealing too much – may be the death knell for a regime. In a story relayed in Sarah Chayes’ Thieves of State, Tunisian president Ben Ali ‘went berserk’ in his quest to capture the country’s wealth, causing an ‘unimaginable’ development gap and ultimately leading to the overthrow of his government.
Wealth will be lost if the ruling elite cannot remain in power. If they succeed in ousting the incumbents, opposition political factions look to confiscate assets of the previous regime and close opportunities for them to further enrich themselves.
Therefore, a ‘well-functioning’ kleptocracy maintains the system by controlling the money-making enterprises and natural resources, with the head of state attempting to avoid intra-elite conflict by dividing the spoils between various groups or family members.
This is manifested through the country’s top enterprises being controlled by economic ‘frontmen’ who use a network of offshore shell companies to funnel earnings out of the company – and then the country – on behalf of their patrons, the more senior members of the regime.
Some kleptocratic gains are reinvested in political campaigns or in media companies to help frame the kleptocrat’s narrative. Dariga Nazarbayeva’s ownership of Kazakhstan’s largest media company Khabar is a good example of how the domestic narrative can be reframed.
Money is also invested in ‘safe’ assets, such as real estate overseas, or simply hoarded in foreign bank accounts to be used in emergencies – a war chest for a political campaign – or in relocating if the ruling elite is ever removed from power. Hoarding also prevents potential economic and political rivals from getting their hands on capital that could be used to oust the current regime.
It is not surprising that one of the only serious challenges to the presidency of Nursultan Nazarbayev in Kazakhstan came from former government minister and bank manager Mukhtar Ablyazov who was accused of siphoning $5 billion from a bank he managed. Without any possibility of challenging leadership through democratic means, opponents often try to accrue wealth in the same fashion.
The most successful kleptocracies are those which, rather than strip the house bare, occupy it and allow other members of the household to generate their own income while paying ‘rent’ to the landlord – the godfather-like head of state.
This is why the structure of kleptocracies is often compared to that of organized crime groups – unlike in a democracy, where capital by and large flows down to the people, money in a kleptocracy is passed up the chain from junior ministers to ministers, then to the head of state and his family.
The relationship between kleptocracies and dictatorships
Kleptocracies may appear stable for decades but are ultimately fragile. In January 2022, peaceful protests in Kazakhstan concerning increased fuel costs descended into violence, leading to the deaths of more than 225 people.
Kazakhstan observers reported the violence may have been provoked by, amongst others, a nephew of former president Nazarbayev, as it was likely their money-making opportunities were being threatened by his successor Kassym-Jomart Tokayev. With no rules-based approach as to who should control what, such seemingly spontaneous conflagrations are to be expected.
A successful kleptocracy either provides just enough for the national economy to prevent popular uprisings, or is protected by repressive state security services so that uprisings are quashed. It is of course easier to maintain control over a country’s resources if absolute power can be exerted. Turkmenistan, a country ruled for 30 years by dictators of ludicrous proportions, sees 80 per cent of its gas revenues disappear into a black hole.
But not all kleptocracies are dictatorships. Ukraine under President Yanukovych resembled a kleptocracy – Yanukovych’s luxurious private residence made headlines with its golden toilet and bathroom decorated with €350,000 worth of semiprecious stones – yet the election that saw him come to power in 2010 was contested against bitter rival Yulia Tymoshenko. And Nigeria is a country which is politically plural to some extent yet has lost billions over the decades from kleptocratic practices, especially in relation to its oil sector.
And being an autocracy does not necessarily make a country a kleptocracy. Communist dictatorships provide a different model of economic control as there are no private economic actors.
Elites in communist regimes do find ways of garnering private wealth corruptly – in 1980s USSR, high-ranking agents in the KGB generated money by smuggling in embargoed goods from Europe – but the system itself is not built on private control of what should be state assets.
The effects of kleptocracy are felt in other countries
Sometimes debate over whether a particular country is or is not a kleptocracy can detract attention away from how liberal democracies support such regimes through providing financial services to them.
Journalist Oliver Bullough says it is ‘pointless’ to ask whether Russia is a kleptocracy, but rather is more appropriate to ‘examine how Russia’s elites are part of a kleptocratic system by which their thefts from the national budget are connected, via Scottish limited partnerships and Moldovan or Latvian banks, to the London property market’.
Kleptocracies need other systems to survive, and recent academic research rightly stresses their ‘transnational’ aspect – UK academic John Heathershaw defines kleptocracy as a state supported by ‘cross-border ties, typically in the form of non-state networks, by which authoritarian elites gain and keep power and wealth’.
As described in the December 2021 Chatham House paper, kleptocracies rely on professional services provided by the UK and other democracies to legalize, legitimize, and hide dubiously acquired wealth.
Wealth managers and solicitors suggest tax optimization schemes, company formation agents help kleptocrats create complex networks of offshore companies to make their assets hard to trace, real estate agents help them invest in luxury property with few questions asked, and PR agents suggest donations to universities and charities to help launder their reputation.
For many years the UK has been happy to accept cash from kleptocracies but, following Russia’s attack on Ukraine, the folly of the ‘no questions asked’ approach is starting to be more widely understood. | <urn:uuid:c62ac780-4f13-41b8-be3b-7927d3d17300> | CC-MAIN-2024-10 | https://www.chathamhouse.org/2022/07/what-kleptocracy-and-how-does-it-work | s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474795.48/warc/CC-MAIN-20240229071243-20240229101243-00770.warc.gz | en | 0.955358 | 1,982 | 2.578125 | 3 |
A team of scientists has looked to the heavens for a way to double the energy produced by microbial fuel cells (MFCs), using bacteria usually found 30km from the Earth’s surface.
If sci-fi B-movies are anything to go by, then life-forms from outer space are not usually the benevolent type. However, researchers reckon that Bacillus stratosphericus could in fact be used to help save the planet by generating electricity, like another famous extraterrestrial.
The bacterium is usually found in high concentrations way up in the stratosphere, in the realm of satellites. Having found the microbe down here on terra firma in the Wear Estuary, County Durham, the researchers at Newcastle University were able to engineer a biofilm that could generate a lot more energy than is usually possible with an MFC.
Previously it had been possible to generate around 105 watts per cubic metre; the biofilm has been able to reach 200 watts per cubic metre.
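Taken at face value, those two figures bear out the "double the energy" claim from the opening paragraph; a quick back-of-the-envelope check:

```python
previous_w_per_m3 = 105.0  # typical MFC output quoted above, watts per cubic metre
biofilm_w_per_m3 = 200.0   # output reached with the engineered biofilm

# Ratio of the two power densities
print(round(biofilm_w_per_m3 / previous_w_per_m3, 2))  # 1.9, i.e. nearly double
```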
This may not be a massive amount but the environmentally sound method of energy production is enough for small applications, such as providing a light source. The researchers reckon that this could mean it would be very useful in developing countries where there is little electricity grid infrastructure, and little access to basics such as light.
The microbe is thought to have reached the bed of the River Wear after dropping down to earth due to atmospheric cycle processes. This allowed it to be processed and isolated from the numerous other microbes found, presumably separating it from the other junk, such as disused trollies, broken tellies, or David Walliams, that accumulates in rivers these days.
By manipulating the microbial mix the team was able to engineer a biofilm. While this is not new in itself, the ability to generate much more power was innovative, using the usual MFC process of converting organic compounds into electricity via bio-catalytic oxidation.
This works by coating the electrodes of the MFC in the stratospheric microbial ooze, with the bacteria producing electrons as they feed, generating electricity.
In addition to B. Stratosphericus the team were able to add more names to TechEye’s list of favourite electricity producing microbes, with Bacillus altitudinis – another space-bound microbe – and Bacteroidetes also used. | <urn:uuid:4fa8477a-2a0a-49f6-9950-156000240728> | CC-MAIN-2014-23 | http://news.techeye.net/science/space-bound-bacteria-doubles-bio-battery-power | s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510266597.23/warc/CC-MAIN-20140728011746-00050-ip-10-146-231-18.ec2.internal.warc.gz | en | 0.947264 | 475 | 3.671875 | 4 |
Grazing Lands Management
Rangeland condition monitoring, aka residual dry matter or RDM monitoring, is an essential, collaborative task involving the Conservancy, Tejon Ranch Company and representatives of both of Tejon’s cattle-grazing operations. Measurements are generally made in the autumn of each year just prior to the arrival of winter precipitation.
The overall intent of the monitoring is to ensure that the structural integrity of the Ranch’s grazing lands is maintained in a way that promotes their long-term sustainability, both from an operational and an ecological perspective.
To achieve this, sufficient biomass, generally in the form of palatable grasses and forbs, must be retained such that the risks of soil erosion and landscape degradation are minimized, if not eliminated. Because soil erosion prevention is a key objective, steep terrain generally requires more residual dry matter, with gently sloping areas requiring less. Soil type and depth are key factors as well. On a landscape as topographically complex as Tejon Ranch’s, and with pasture sizes ranging from a few hundred to tens of thousands of acres, residual dry matter can vary significantly, even within the same pasture. | <urn:uuid:1b587a6c-965e-4549-9ee7-130e99dfcfa2> | CC-MAIN-2020-40 | https://www.tejonconservancy.org/11-grazeland | s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600401614309.85/warc/CC-MAIN-20200928202758-20200928232758-00430.warc.gz | en | 0.938872 | 239 | 3.078125 | 3 |
Now more than ever, we understand why taking care of the environment around us matters. Below you will learn many ways that you can apply green energy techniques in your home today.
When you’re coming up with a design for your outdoor lighting project, incorporate solar-powered lamps. These lamps aren’t costly and do not need additional power besides sun exposure. This saves you a lot of energy! It also saves you money by not having to go outside and wire up outdoor lights to your home.
Government grants help consumers invest in renewable energies. Contact your local government in order to see the programs that exist locally. You may qualify for help with installation costs or a tax deduction.
Be socially responsible, and cut your home energy usage by unplugging your electronic chargers when they are not in use. Chargers for devices like phones, mp3 players, laptops and other devices draw some power when they are plugged in, even if they aren’t charging your device.
Solar water heaters are a great option for heating your home’s water. If you live in an area where freezing temperatures are not a problem, try installing a system that uses a solar heater. However, it’s best to keep a traditional heater for times when you need a lot of heated water, or if the sun does not come out for some time.
Research different energy sources available in your community. Compare the costs of operating your home using several of these utilities, and take current or pending energy legislation into consideration. You may find that switching to well water or another utility may provide a reduction in energy use and cost.
Running your dishwasher only when it is full will save you money and energy. Don’t run it with only a few dishes. You may be shocked to learn the number of dishes your dishwasher can handle in one load.
When planning a home solar system, think about how much energy will be produced during winter. This prevents any unanticipated effects during the winter months, and it will have you entering the summer safely without energy concerns as well.
This prevents energy from dissipating in the cables.
Use a tankless water heater instead of a tank-style heater to be more green. Tankless heaters still require energy to heat water, but they heat only the water that is needed instead of keeping a huge tank of water hot constantly. Tankless heaters can supply the whole house or even just a single water faucet with hot water.
Learn about the difference between passive and active solar power. Active solar power uses equipment, such as panels and pumps, to collect and store the sun’s energy, while passive solar power simply uses the sun’s heat, stored in the thermal mass of your walls, to warm your house.
Use LED holiday lights to decorate for the holidays instead of traditional strand lights. According to a study by the United States Department of Energy, the resulting savings would be enough power to run more than 200,000 homes with electricity for a whole year. You can save money on your energy bills!
If you live in a neighborhood with lots of children, try setting up a carpool and trading rides with other parents in the neighborhood. You can even carpool to the supermarket with friends that live near you.
Replace your old toilet with a water-saving model. Some estimates show that up to half of the water consumption in your home is from the toilet. An older model uses about 5 gallons of water per flush, versus 1.6 gallons for a water-saving model, a saving of about 70 percent on your yearly toilet water use.
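The roughly 70 percent figure follows directly from the two flush volumes quoted above:

```python
old_flush_gal = 5.0   # gallons per flush, older toilet model
new_flush_gal = 1.6   # gallons per flush, water-saving model

# Fractional saving per flush
saving = (old_flush_gal - new_flush_gal) / old_flush_gal
print(f"{saving:.0%}")  # 68%, close to the 70 percent cited
```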
If it is one of your priorities to treat the environment better you can do so by utilizing some of these tips, which will increase the energy efficiency of your home. The money you’ll save is an added plus. | <urn:uuid:dd82b45f-5a7b-4afb-af22-6985bae8f0a4> | CC-MAIN-2020-29 | http://www.tecno-solar.com/seeking-informative-ideas-on-green-energy-look-at-these-ideas/ | s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655902377.71/warc/CC-MAIN-20200709224746-20200710014746-00328.warc.gz | en | 0.940465 | 736 | 3.1875 | 3 |
Introduction to Zero Trust
The traditional security model, which is based on the assumption that everything inside the network can be trusted, is no longer effective. As the threat landscape continues to evolve and expand, it is becoming increasingly clear that organizations must adopt a new security approach that assumes everything outside and inside the network is not trusted. This new approach is known as Zero Trust.
Zero Trust is a security model that emphasizes the need to verify and authenticate every access request before granting access to a resource. It is an approach that assumes that no user, device, or application can be trusted until it is authenticated and authorized to access a resource. In other words, Zero Trust is a security model that requires organizations to adopt a mindset of “never trust, always verify.”
Traditional Security Approaches
Traditional security approaches are based on the assumption that everything inside the network can be trusted. This approach relies on a perimeter-based security model that is designed to keep external threats out of the network. However, this approach has several weaknesses. First, it assumes that the internal network is safe and trustworthy, which is no longer true. Second, it is designed to protect against external threats, but it does not provide sufficient protection against internal threats. Third, it does not take into account the mobility of users and devices, which can move between internal and external networks.
The Evolution of Zero Trust
Zero Trust is not a new concept. It was first introduced by Forrester Research in 2010. However, it has evolved significantly over the years. Today, Zero Trust is more than just a security model; it is a comprehensive security strategy that includes policies, procedures, and technologies designed to protect an organization’s assets and data.
The Core Principles of Zero Trust
Zero Trust is based on four core principles, which are:
a. Verify Explicitly
Every access request must be explicitly verified and authenticated before access is granted to a resource. This means that every user, device, and application must be identified and authenticated before access is granted.
b. Use Least Privilege Access
Access should be granted on a need-to-know basis. This means that users, devices, and applications should only be granted access to the resources that they need to perform their job functions.
c. Assume Breach
Organizations should assume that they have already been breached and that attackers are already inside the network. This means that organizations should adopt a proactive approach to security, which involves continuous monitoring and analysis of network traffic.
d. Never Trust, Always Verify
Organizations should never trust any user, device, or application until it has been explicitly verified and authenticated. This means that access requests should be verified and authenticated every time, even for users and devices that have been previously granted access.
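As a rough illustration only, the first two principles can be combined in a tiny access-decision function. The roles, resources and policy table below are invented for this sketch; a real deployment would use an identity provider and a policy engine rather than a hard-coded dictionary.

```python
from dataclasses import dataclass

# Hypothetical least-privilege policy: each role lists only the resources
# it needs to perform its job function. These names are illustrative.
ROLE_PERMISSIONS = {
    "hr-analyst": {"hr-database"},
    "developer": {"source-repo", "ci-server"},
}

@dataclass
class AccessRequest:
    user: str
    role: str
    resource: str
    authenticated: bool  # did this request pass identity verification?

def authorize(request: AccessRequest) -> bool:
    """Never trust, always verify: every request is re-checked every time."""
    # Verify explicitly: unauthenticated requests are denied outright.
    if not request.authenticated:
        return False
    # Least privilege: only resources the role actually needs are allowed.
    return request.resource in ROLE_PERMISSIONS.get(request.role, set())

# Even a previously approved user is re-evaluated on each request.
print(authorize(AccessRequest("alice", "hr-analyst", "hr-database", True)))  # True
print(authorize(AccessRequest("alice", "hr-analyst", "source-repo", True)))  # False
print(authorize(AccessRequest("bob", "developer", "ci-server", False)))      # False
```

Note that the function grants nothing by default: an unknown role maps to an empty permission set, which is the "never trust" posture.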
Frequently Asked Questions

How does Zero Trust differ from traditional security?
Traditional security is based on the assumption that everything inside the network can be trusted, while Zero Trust assumes that nothing can be trusted and requires verification and authentication of every access request.

What are the core principles of Zero Trust?
The core principles of Zero Trust are verify explicitly, use least privilege access, assume breach, and never trust, always verify.

Can Zero Trust be implemented in a hybrid cloud environment?
Yes, Zero Trust can be implemented in a hybrid cloud environment by segmenting the network, controlling access, and continuously monitoring and logging activity.

What are the benefits of implementing Zero Trust in a mixed estate?
The benefits of implementing Zero Trust in a mixed estate include improved security, better visibility, enhanced compliance, and reduced risk.

What are the challenges of implementing Zero Trust in a mixed estate?
The challenges of implementing Zero Trust in a mixed estate include complexity, resistance to change, and lack of resources.
Implementing Zero Trust in a Mixed Estate
Implementing Zero Trust in a mixed estate, which includes on-premises, cloud-based, and hybrid environments, can be challenging. However, there are several steps that organizations can take to implement Zero Trust in a mixed estate, including:
a. Identify the Assets and Data
The first step in implementing Zero Trust in a mixed estate is identifying the assets and data that need protection. This includes identifying the devices, applications, and data critical to the organization’s operations.
b. Segmenting the Network
Once the assets and data have been identified, the next step is to segment the network. Network segmentation involves dividing the network into smaller segments and controlling the flow of traffic between these segments. This helps to limit the impact of a security breach and prevent attackers from moving laterally through the network.
c. Controlling Access
Controlling access is a critical component of Zero Trust. Access should be granted on a need-to-know basis, and users, devices, and applications should only be granted access to the resources needed to perform their job functions. Access should also be continuously monitored and analyzed for any suspicious activity.
d. Monitoring and Logging
Continuous monitoring and logging are essential components of Zero Trust. This involves monitoring network traffic, analyzing access logs, and detecting suspicious activity. It also involves collecting and analyzing data from various sources, such as endpoints, servers, and applications, to detect and respond to security threats.
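A toy sketch of the log-analysis side, assuming a simple threshold rule; the log entries, field layout and threshold below are hypothetical, and a real deployment would feed a SIEM rather than an in-memory list:

```python
from collections import Counter

# Hypothetical access-log entries: (user, resource, outcome).
access_log = [
    ("alice", "hr-database", "allowed"),
    ("mallory", "hr-database", "denied"),
    ("mallory", "source-repo", "denied"),
    ("mallory", "ci-server", "denied"),
    ("bob", "source-repo", "allowed"),
]

def flag_suspicious(entries, threshold=3):
    """Return users whose denied requests reach the threshold."""
    denials = Counter(user for user, _, outcome in entries if outcome == "denied")
    return sorted(user for user, count in denials.items() if count >= threshold)

print(flag_suspicious(access_log))  # ['mallory']
```

A user repeatedly probing resources they are not entitled to is exactly the lateral-movement pattern this kind of monitoring is meant to surface.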
The Benefits of Zero Trust in a Mixed Estate
Implementing Zero Trust in a mixed estate has several benefits. These include:
a. Improved Security
Zero Trust provides a more secure environment by assuming that everything inside and outside the network is untrustworthy. This helps to prevent security breaches and limit their impact.
b. Better Visibility
Zero Trust provides better visibility into network activity, making detecting and responding to security threats easier. This includes real-time monitoring of network traffic and access logs.
c. Enhanced Compliance
Zero Trust can help organizations meet compliance requirements by better controlling access to sensitive data and resources. It also helps to ensure that access is granted on a need-to-know basis.
d. Reduced Risk
Implementing Zero Trust can help to reduce the risk of security breaches and limit their impact. This can help minimize the financial and reputational damage resulting from a security breach.
The Challenges of Implementing Zero Trust
Implementing Zero Trust in a mixed estate can be challenging. Some of the challenges include the following:
a. Complexity

Implementing Zero Trust requires a significant investment in time and resources. It can also be complex, especially in mixed estates that include on-premises, cloud-based, and hybrid environments.
b. Resistance to Change
Implementing Zero Trust requires a significant shift in mindset and culture. This can be challenging, especially in organizations that are resistant to change.
c. Lack of Resources
Implementing Zero Trust requires significant time and resources, including personnel, tools, and technologies. This can be a challenge for organizations with limited resources.
Zero Trust is a security model emphasizing the need to verify and authenticate every access request before granting access to a resource. It is an approach that assumes that no user, device, or application can be trusted until it is authenticated and authorized to access a resource. Implementing Zero Trust in a mixed estate can be challenging, but it provides several benefits, including improved security, better visibility, enhanced compliance, and reduced risk. | <urn:uuid:92d2723e-e40e-476b-a31a-7b5f7fa6e2d3> | CC-MAIN-2023-40 | https://terrazone.io/zero-trust-building-a-mixed-estate/ | s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510219.5/warc/CC-MAIN-20230926175325-20230926205325-00319.warc.gz | en | 0.941883 | 1,437 | 3.234375 | 3
In The Heart Of Cygnus, NASA's Fermi Reveals A Cosmic-ray Cocoon
30 November 2011
Tour the Cygnus X star factory. This video opens with wide optical and infrared images of the constellation Cygnus, then zooms into the Cygnus X region using radio, infrared and gamma-ray images. Fermi LAT shows that gamma rays fill cavities in the star-forming clouds. The emission occurs when fast-moving cosmic rays strike hot gas and starlight. Credit: NASA/Goddard Space Flight Center
The constellation Cygnus, now visible in the western sky as twilight deepens after sunset, hosts one of our galaxy's richest-known stellar construction zones. Astronomers observing the region in visible light see only hints of this spectacular activity thanks to a veil of nearby dust clouds forming the Great Rift. The Great Rift is a dark lane that appears to split the bright band of the Milky Way's central plane.
Gamma-ray emission detected by Fermi LAT fills bubbles of hot gas created by the most massive stars in Cygnus X. The turbulence and shock waves produced by these stars make it more difficult for high-energy cosmic rays to traverse the region. When the particles strike gas nuclei or photons of starlight, gamma rays result. Credit: NASA/DOE/Fermi LAT Collaboration/I. A. Grenier and L. Tibaldo
Located in the vicinity of the second-magnitude star Gamma Cygni, the star-forming region was named Cygnus X when it was discovered as a diffuse radio source by surveys in the 1950s. Now, a study using data from NASA's Fermi Gamma-ray Space Telescope finds that the tumult of star birth and death in Cygnus X has managed to corral fast-moving particles called cosmic rays.
Cosmic rays are subatomic particles, mainly protons, that move through space at nearly the speed of light. In their journey across the galaxy, the particles are deflected by magnetic fields, which scramble their paths and make it impossible to backtrack the particles to their sources.
Yet when cosmic rays collide with interstellar gas, they produce gamma rays, the most energetic and penetrating form of light, which travel to us straight from the source. By tracing gamma-ray signals throughout the galaxy, Fermi's Large Area Telescope (LAT) is helping astronomers understand the sources of cosmic rays and how they're accelerated to such high speeds. In fact, this is one of the mission's key goals.
The galaxy's best candidate sites for cosmic-ray acceleration are the rapidly expanding shells of ionized gas and magnetic field associated with supernova explosions. For stars, mass is destiny, and the most massive ones, known as types O and B, live fast and die young.
Cygnus X hosts many young stellar groupings, including the OB2 and OB9 associations and the cluster NGC 6910. The combined outflows and ultraviolet radiation from the region's numerous massive stars have heated and pushed gas away from the clusters, producing cavities of hot, lower-density gas. In this 8-micron infrared image, ridges of denser gas mark the boundaries of the cavities. Bright spots within these ridges show where stars are forming today. Credit: NASA/IPAC/MSX
They're also relatively rare because such extreme stars, with masses more than 40 times that of our sun and surface temperatures eight times hotter, exert tremendous influence on their surroundings. With intense ultraviolet radiation and powerful outflows known as stellar winds, the most massive stars rapidly disperse their natal gas clouds, naturally limiting the number of massive stars in any given region.
Which brings us back to Cygnus X. Located about 4,500 light-years away, this star factory is believed to contain enough raw material to make two million stars like our sun. Within it are many young star clusters and several sprawling groups of related O- and B-type stars, called OB associations. One, called Cygnus OB2, contains 65 O stars, the most massive, luminous and hottest type, and nearly 500 B stars.
Astronomers estimate that the association's total stellar mass is 30,000 times that of our sun, making Cygnus OB2 the largest object of its type within 6,500 light-years. And with ages of less than 5 million years, few of its most massive stars have lived long enough to exhaust their fuel and explode as supernovae.
Intense light and outflows from the monster stars in Cygnus OB2 and from several other nearby associations and star clusters have excavated vast amounts of gas from their vicinities. The stars reside within cavities filled with hot, thin gas surrounded by ridges of cool, dense gas where stars are now forming. It's within the hollowed-out zones that Fermi's LAT detects intense gamma-ray emission, according to a paper describing the findings that was published in the Nov. 25 edition of the journal Science.
"We are seeing young cosmic rays, with energies comparable to those produced by the most powerful particle accelerators on Earth. They have just started their galactic voyage, zig-zagging away from their accelerator and producing gamma rays when striking gas or starlight in the cavities," said co-author Luigi Tibaldo, a physicist at Padova University and the Italian National Institute of Nuclear Physics.
The energy of the gamma-ray emission, which is measured up to 100 billion electron volts by the LAT and even higher by ground-based gamma-ray detectors, indicates the extreme nature of the accelerated particles. (For comparison, the energy of visible light is between 2 and 3 electron volts.) The environment holds onto its cosmic rays despite their high energies by entangling them in turbulent magnetic fields created by the combined outflows of the region's numerous high-mass stars.
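For a sense of scale, the two energies quoted above can be compared directly (taking 2.5 eV as a midpoint for visible light):

```python
gamma_ray_ev = 100e9   # up to 100 billion electron volts, as measured by the LAT
visible_ev = 2.5       # visible light lies between 2 and 3 electron volts

# Ratio of gamma-ray photon energy to a visible-light photon
print(f"{gamma_ray_ev / visible_ev:.0e}")  # 4e+10: tens of billions of times more energetic
```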
"These shockwaves stir the gas and twist and tangle the magnetic field in a cosmic-scale jacuzzi so the young cosmic rays, freshly ejected from their accelerators, remain trapped in this turmoil until they can leak into quieter interstellar regions, where they can stream more freely," said co-author Isabelle Grenier, an astrophysicist at Paris Diderot University and the Atomic Energy Commission in Saclay, France.
The well-known Gamma Cygni supernova remnant - so named for its proximity to the star - also lies within this region; astronomers estimate its age at about 7,000 years. The Fermi team considers it possible that the supernova remnant spawned the cosmic rays trapped in the Cygnus X "cocoon," but they also suggest an alternative scenario where the particles became accelerated through repeated interaction with shockwaves produced inside the cocoon by powerful stellar winds.
"Whether the particles further gain or lose energy inside this cocoon needs to be investigated, but its existence shows that cosmic-ray history is much more eventful than a random walk away from their sources," Tibaldo added.
Fermi is providing a never-before-seen glimpse of the early life of cosmic rays, long before they diffuse into the galaxy at large. Astronomers know of a dozen stellar clusters at least as young and rich as Cygnus OB2, including the Arches and Quintuplet clusters near the galaxy's center. Energetic gamma rays are detected in the vicinity of several of them, so perhaps they also corral cosmic rays in their own high-energy cocoons.
The Imagine Team
Project Leader: Dr. Barbara Mattson
Curator: Meredith Gibb
Responsible NASA Official: Phil Newman
All material on this site has been created and updated between 1997-2014.
This page last updated: Wednesday, 11-Jan-2012 15:27:57 EST | <urn:uuid:5601f460-6298-4437-9a29-4cd31464e30e> | CC-MAIN-2014-35 | http://imagine.gsfc.nasa.gov/docs/features/news/30nov11.html | s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500809686.31/warc/CC-MAIN-20140820021329-00027-ip-10-180-136-8.ec2.internal.warc.gz | en | 0.934901 | 1,661 | 3.65625 | 4
Increasingly, dental technologies and innovations are changing the way dentists operate and treat patients. These innovations will democratize dental care and make it cheaper and more accessible.
Some of the most important innovations involve computer-assisted design and manufacturing. This allows dentists to customize tooth restorations, including crowns and implants. In addition, it can reduce the cost of dental laboratory work and allow more procedures to be completed in one sitting.
Another originality involves 3D printing. This technique allows dental surgeons to create custom teeth enhancements with regards to patients, that may be used in plastic dental steps. Currently, dentist have to make a mould of the dental, then send this to a dental laboratory to make a permanent overhead.
Advanced imaging technology is likewise making a difference in dental hygiene. These devices allow dentists to see the entire enamel, including the bone tissue, at once. This is resource important in detecting early on symptoms of gingivitis.
Other improvements in the field of dentistry include intraoral cameras. These are small gadgets that are cordless and hook up into a computer. These types of cams allow dentist to see within the mouth, which will alleviate not comfortable situations and improve the patient’s experience.
A lot of intraoral cameras are equipped with LED lights. This will make it easier to view the teeth and conditions from the mouth. Several also have a wifi connection, so that patients can watch the image.
In addition, there are several firms that develop intraoral cameras. These include Durrdental, Kapanu, and Carestream Tooth. | <urn:uuid:71f6b15a-44a0-4925-8721-5d25a7f501e4> | CC-MAIN-2023-14 | http://youandfamily.xyz/dentistry-technologies-and-innovations/ | s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945315.31/warc/CC-MAIN-20230325033306-20230325063306-00000.warc.gz | en | 0.934032 | 337 | 2.90625 | 3 |
The National Day of Remembrance and Action on Violence against Women is commemorated in Canada on December 6, the anniversary of the 1989 École Polytechnique Massacre, in which 14 women were singled out for their gender and murdered. In December 1999, the 54th session of the United Nations General Assembly declared November 25 the International Day for the Elimination of Violence against Women (later also known as White Ribbon Day). Honouring this cause, in which men in Canada pledged to fight violence against women, 57 countries in the world prepare, plan and execute various activities relating to White Ribbon Day. It is often marked by vigils, discussions and other reflections on violence against women, and is followed by the 16 Days of Activism against Gender Violence.
SNEHA has always believed in bridging the gender gap by not only having a positive bias towards women but also by ensuring that men get involved in the fight against gender bias. The community team of SNEHA intervenes in the cases of gender violence in the community. It also ensures effective sensitisation and involvement of men folk in the community towards the issue.
25th November primarily focuses on the involvement of men in the crusade against gender violence.
At SNEHA, 25th November is organized on a mass scale. Men associated with SNEHA are encouraged to perform dances, songs and short plays that depict the hazards of all types of gender violence and its repercussions on the family and society. SNEHA was able to elicit the participation of many men to express solidarity for White Ribbon Day. The Programme Director of the Prevention of Violence against Women and Children (PVWC) programme at SNEHA, Dr. Nayreen Daruwalla, expressed the need for the involvement of men in the issue of violence against women, which will help ensure equality between the sexes. Giving the example of stalwarts from Maharashtra, viz. Chhatrapati Shivaji Maharaj, Chhatrapati Shahu Maharaj, Mahatma Jyotiba Phule and Dr. B.R. Ambedkar, who worked for the cause of the upliftment of women, she urged women to take cognizance of the work done by men in this regard.
The programme unfolded with dance, some recited songs they had penned on the issue of gender violence and the youth group performed skits that carried the message of showing respect to women and ending violence against them in whichever way one can. The highpoint of the programme was a fashion show in which leaders of Maharashtra state who have uplifted the status of women through their work and policy formulation were impersonated by the men from SNEHA. These included Chhatrapati Shivaji Maharaj, Chhatrapati Shahu Maharaj, Mahatma Jyotiba Phule, and Dr. B.R. Ambedkar.
Mr. Hasmukh, a community member, said in his speech: "We live in a culture where we are socialized to live with and accept atyachaar (violence). One keeps quiet to save one's culture. But we should not live in a culture of silence; instead, we should speak up and lodge a complaint with the police. The family should also support the victim of abuse and violence in filing a complaint, and should not blame the victim herself." Citing his own example, he said, "I am going to teach my daughter till she is a graduate. I have not studied, but I will ensure that she will study. She is not parayaa dhan. If I educate her, she will be able to handle all kinds of hardships she would face." His parting words ring true: "Atyachaar karna nahi, atyachaar sahana nahi," which means do not perpetrate violence and do not keep mum about it.
Ms. Madhuri (a transgender woman) performed dances, one of which showed what a woman does to meet both her family's needs and her own. Another depicted the importance of being a woman.
The program ended with a candlelight pledge taken by all. The pledge asked everyone to accept the need to respect women, to keep eyes and ears ever open to any kind of violence happening around them, to say no to violence, and to strive to end violence in everyone's lives. | <urn:uuid:66aa5d26-aa9a-48dd-9f0e-58fedfbdbd7b> | CC-MAIN-2017-17 | http://snehamumbai.org/events/event-details.aspx?EventId=2 | s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917118519.29/warc/CC-MAIN-20170423031158-00103-ip-10-145-167-34.ec2.internal.warc.gz | en | 0.971786 | 875 | 3.34375 | 3 |
1635 – John Cotton, a Puritan minister, establishes America’s first public school, Boston Latin, with fewer than 10 students.
1852 – In Massachusetts, Horace Mann helps pass the first compulsory attendance law in the nation for children of elementary-school age. New York follows in 1853.
1857 – The National Education Association, the first national teachers group — and now the largest labor union in the United States — is founded. The American Federation of Teachers follows in 1916.
1897 – The National Parent Teacher Association is founded. It now has about 5.5 million members, down from 12 million in the late 1960s.
1946 – The Richard B. Russell National School Lunch Act subsidizes low-cost or free lunches for qualified students.
1946 – The country's first organized teachers strike begins in St. Paul, Minnesota, in single-digit winter weather. More than 1,000 teachers picket in front of the 77 schools in the district for nearly six weeks. Many students join their teachers.
1954 – With Brown v. Board of Education of Topeka, the Supreme Court finds that “separate educational facilities are inherently unequal.”
1965 – The Elementary and Secondary Education Act provides federal funds to expand educational opportunities for low-income students. It has been reauthorized since, including most recently as part of No Child Left Behind.
1968 – McCarver Elementary in Tacoma, Washington, becomes the nation’s first magnet school, inviting students from anywhere in the city to enroll. This is seen as a way to end de facto segregation.
1970 – Test scores are reported to the government and the public for the first time and become a tool for measuring school performance.
1980 – Several federal offices are combined into the Department of Education.
1983 – A Nation at Risk, a federal report, indicates very low academic achievement and declining American test scores. It spurs most states to mandate curricula and more frequent testing.
1990 – Milwaukee is the first city to offer vouchers to its students to attend schools outside the traditional public-school system.
1991 – Boston’s Thomas Menino becomes the first mayor to assume control over a public-school system. He appoints a seven-member committee, which then names a superintendent. Voters reaffirm mayoral control in 1996.
1992 – The nation’s first charter school, City Academy in St. Paul, Minnesota, opens with roughly 40 students in attendance.
2002 – George W. Bush signs the No Child Left Behind Act, with goals of increasing parental choice and accountability for states and schools.
2007 – On June 1, President Bush signs legislation that hands control of Washington’s public schools to Mayor Adrian Fenty. The following week, Fenty appoints Michelle Rhee to head the D.C. schools. | <urn:uuid:bce467f5-51de-4069-92b7-207dbcbad532> | CC-MAIN-2019-04 | https://www.fastcompany.com/958575/class-actions | s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583668324.55/warc/CC-MAIN-20190119135934-20190119161934-00018.warc.gz | en | 0.94398 | 586 | 3.484375 | 3 |
Thermodynamic properties of heavy water (D₂O) like density, melting temperature, boiling temperature, latent heat of fusion, latent heat of evaporation, critical temperature and more.
Heavy water (deuterium oxide, ²H₂O, D₂O) is a form of water that contains a larger than normal amount of the hydrogen isotope deuterium (= heavy hydrogen = ²H = D), rather than the common hydrogen-1 isotope (¹H = H = protium) that makes up most of the hydrogen in normal water.
Thermodynamic properties of heavy water, D₂O:
- Boiling temperature (at 101.325 kPa): 101.40 °C = 214.52 °F
- Bulk modulus of elasticity (at 25°C): 2.10 × 10⁹ Pa (N/m²)
- Critical density: 0.356 g/cm³ = 0.691 slug/ft³ = 3.457 lbm/gal(US)
- Critical pressure: 213.88 atm = 216.71 bar = 21.671 MPa (MN/m²) = 3143 psi (lbf/in²)
- Critical temperature: 370.697 °C = 699.255 °F
- Ionization constant, pKw (at 25°C): 14.951
- Latent heat of evaporation (at 101.4°C): 41.521 kJ/mol = 2073.20 kJ/kg = 891.32 Btu(IT)/lb
- Latent heat of fusion: 6.132 kJ/mol = 306.2 kJ/kg = 131.64 Btu(IT)/lb
- Maximum density (at 11.23°C): 1105.9 kg/m³ = 2.1460 slug/ft³ = 10.74048 lbm/gal(US)
- Melting temperature (at 101.325 kPa): 3.81 °C = 38.86 °F
- Molar mass: 20.02751 g/mol
- pD (~pH) (at 25°C): 7.43
- Specific heat (Cp), liquid (at 20°C): 4.219 kJ/(kg K) = 1.008 Btu(IT)/(lbm °F) or kcal/(kg K)
- Specific weight (at 11.23°C): 10.8452 kN/m³ = 69.0391 lbf/ft³
- Surface tension (at 25°C): 71.87 dyn/cm
- Triple point pressure: 0.00652 atm = 0.00661 bar = 661 Pa = 0.0959 psi (lbf/in²)
- Triple point temperature: 3.82 °C = 38.88 °F
- Vapor pressure (at 25°C): 20.6 mmHg = 0.027 atm = 0.028 bar = 2750 Pa = 0.398 psi
- Viscosity (at 20°C): 1.251 cP (= mPa·s)
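The per-mole and per-kilogram latent heats above are related through the molar mass, and the specific heat fixes the sensible heat needed to warm the liquid from room temperature to the normal boiling point. A quick Python sanity check of the listed values (variable names are my own; treating the specific heat as constant over the range is a rough approximation):

```python
# Cross-check the heavy-water (D2O) property values listed above.

M = 20.02751e-3         # molar mass, kg/mol
h_vap_molar = 41.521e3  # latent heat of evaporation, J/mol (at 101.4 C)
h_vap_mass = 2073.20e3  # latent heat of evaporation, J/kg
h_fus_molar = 6.132e3   # latent heat of fusion, J/mol
h_fus_mass = 306.2e3    # latent heat of fusion, J/kg
cp = 4.219e3            # specific heat of the liquid, J/(kg K) (at 20 C)

# Molar and per-mass latent heats should agree via the molar mass.
assert abs(h_vap_molar / M - h_vap_mass) / h_vap_mass < 1e-3
assert abs(h_fus_molar / M - h_fus_mass) / h_fus_mass < 1e-3

# Sensible heat to warm 1 kg of liquid D2O from 20 C to the normal
# boiling point (101.4 C), then the latent heat to evaporate it.
q_warm = cp * (101.4 - 20.0)  # J per kg
q_boil = h_vap_mass           # J per kg

print(round(q_warm / 1e3, 1), round(q_boil / 1e3, 1))  # → 343.4 2073.2
```

Both assertions pass with the values as printed, which is a useful internal-consistency check when transcribing property tables.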
See also more about atmospheric pressure, and STP (Standard Temperature and Pressure) and NTP (Normal Temperature and Pressure),
as well as thermophysical properties of: Acetone, Acetylene, Air, Ammonia, Argon, Benzene, Butane, Carbon dioxide, Carbon monoxide, Ethane, Ethanol, Ethylene, Helium, Hydrogen, Hydrogen sulfide, Methane, Methanol, Nitrogen, Oxygen, Pentane, Propane, Toluene and Water. | <urn:uuid:0018304b-a6ce-48e1-9a94-c542abd185d5> | CC-MAIN-2023-50 | https://www.engineeringtoolbox.com/heavy-water-thermodynamic-properties-d_2003.html | s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100081.47/warc/CC-MAIN-20231129105306-20231129135306-00241.warc.gz | en | 0.69844 | 760 | 2.875 | 3 |
By AMERICAN HEART ASSOCIATION NEWS
For the first time in history, prevalence of high blood pressure is higher in low- and middle-income countries, according to new research.
In a 2010 data analysis involving more than 968,000 people from 90 countries, researchers found that more than 30 percent of adults worldwide live with high blood pressure — and 75 percent of them live in low- and middle-income countries.
High blood pressure is a major risk factor for heart disease and stroke, and is the leading preventable cause of premature death and disability worldwide.
Using sex- and age-specific high blood pressure prevalence from 131 past reports, researchers found:
- In 2010, 31.1 percent (1.39 billion) of the global adult population had high blood pressure — 28.5 percent (349 million) from high-income countries and 31.5 percent (1.04 billion) from low- and middle-income countries.
- High blood pressure prevalence decreased by 2.6 percent in high-income countries while increasing 7.7 percent in low- and middle-income countries between 2000 and 2010.
- In high-income countries, significant high blood pressure improvements occurred from 2000 to 2010: awareness increased from 58.2 percent to 67 percent, treatment rates improved from 44.5 percent to 55.6 percent and control increased from 17.9 percent to 28.4 percent.
- In low- and middle-income countries, awareness slightly improved, from 32.3 percent to 37.9 percent, and treatment increased from 24.9 percent to 29 percent. But high blood pressure control worsened, from 8.4 percent to 7.7 percent.
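As an arithmetic check (variable names are hypothetical), the two subgroup counts in the bullets reproduce both the 1.39 billion global total and the roughly 75 percent share living in low- and middle-income countries quoted earlier:

```python
# 2010 counts of adults with high blood pressure, from the figures above.
high_income_millions = 349.0          # 28.5% of high-income adults
low_middle_income_millions = 1040.0   # 1.04 billion, 31.5% of LMIC adults

total_millions = high_income_millions + low_middle_income_millions
lmic_share_pct = 100.0 * low_middle_income_millions / total_millions

print(total_millions)            # → 1389.0 (i.e. ~1.39 billion)
print(round(lmic_share_pct, 1))  # → 74.9 (the "75 percent" in the text)
```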
“Aging populations and urbanization, which is often accompanied by unhealthy lifestyle factors such as high sodium, fat and calorie diets and lack of physical activity, may play an important role in the epidemic of hypertension in low- and middle-income countries,” said Jiang He, M.D., Ph.D., senior study author and a researcher at Tulane University School of Public Health and Tropical Medicine in New Orleans.
“Healthcare systems in many low- and middle-income countries are overburdened and do not have the resources to effectively treat and control hypertension,” he said. “In addition, because hypertension is symptomless and many people in low- and middle-income countries do not have access to screenings or regular preventative medical care, it is often underdiagnosed.”
Most of the world’s population is represented in the study, but more than half of the countries didn’t have data on hypertension prevalence. So some of the regional and global estimates of adults living with high blood pressure may be inaccurate, researchers said.
“Hypertension needs to be a public health priority in low- and middle-income countries to prevent future cardiovascular and kidney disease, and associated costs to society,” said Katherine T. Mills, Ph.D., the study’s lead author and a researcher at Tulane University. “Collaboration is needed from national and international stakeholders to develop innovative and cost-effective programs to prevent and control this condition.”
The study appears in the American Heart Association journal Circulation. | <urn:uuid:79cfb47d-5f39-4942-a6d0-7775d67d1c09> | CC-MAIN-2019-13 | https://newsarchive.heart.org/high-blood-pressure-more-prevalent-in-low-middle-income-countries/ | s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912205163.72/warc/CC-MAIN-20190326115319-20190326141319-00428.warc.gz | en | 0.926573 | 667 | 3.078125 | 3 |
String Theory: Phantom Energy and Rogue Universes
In string theory, the assumption is that the separate, parallel universes don't normally interact with each other, but some approaches over the years have called this into question. One of the most recent is a 2008 paper in the journal Physical Review D by Eduardo Guendelman and Nobuyuki Sakai, in which they examine the idea of bubble universes expanding without the need for a big bang.
To make the equations work, Guendelman and Sakai had to introduce a repulsive phantom energy, which is possibly similar to dark energy. They found two types of stable solutions:
The child universe, which is isolated from the parent universe (essentially a universe inside a black hole)
A rogue universe, which is not isolated from the parent universe
This second kind of universe is troublesome, because as it begins to go through its inflation cycle, it does so by devouring the space-time of the parent universe. The parent universe is swept away as the rogue universe expands in its place — and it does so faster than the speed of light, so there’s no warning.
Fortunately, there’s no evidence that this phantom energy actually exists, or, if it does, it’s possible that it exists in the form of dark energy (or inflation energy), which means that we may be one of these rogue universes ourselves. As our universe expands, it may be devouring some other, larger universe! | <urn:uuid:5741c354-ba65-460c-bf1e-e1bcb1ff04f3> | CC-MAIN-2016-30 | http://www.dummies.com/how-to/content/string-theory-phantom-energy-and-rogue-universes.navId-811234.html | s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257828283.6/warc/CC-MAIN-20160723071028-00159-ip-10-185-27-174.ec2.internal.warc.gz | en | 0.933831 | 303 | 2.578125 | 3 |
The BRICS countries, Brazil, Russia, India, China, and South Africa, are emerging economies that have shown remarkable growth and development over the past few decades. Together, they represent over 40% of the world’s population and a combined GDP of over $16 trillion. However, despite their impressive economic growth, the BRICS countries remain heavily dependent on the US dollar for international trade and finance.
In recent years, the BRICS nations have been pushing for de-dollarization: reducing their dependence on the US dollar and increasing their economic autonomy. This blog post will explore the reasons behind this push, the potential impact on the global economy, the challenges to de-dollarization, and the BRICS nations' efforts towards the development of a new currency.
What are the BRICS Countries?
The BRICS countries are a group of five emerging economies that have shown significant economic growth and development over the past few decades. The term was first coined in 2001 by economist Jim O'Neill to describe these nations, which he believed would become some of the world's largest economies by the year 2050. Initially, only Brazil, Russia, India, and China were part of the group, known as "BRIC", but in 2010 South Africa was added, creating the BRICS as it is known today.
Brazil, Russia, India, China, and South Africa represent a significant portion of the world’s population, resources, and economic potential. These countries are characterised by their large domestic markets, abundant natural resources, and increasing integration into the global economy.
Despite the good signs at the beginning, the BRICS countries' more recent economic progress seems to stray from the original ideals on which the group was established. Of the five member states, only China has been able to attain consistent and significant expansion since the beginning. While China's gross domestic product soared from $6 trillion in 2010 to about $18 trillion in 2021, the economies of Brazil, South Africa, and Russia displayed lacklustre growth. Although India's GDP also increased from $1.7 trillion to $3.1 trillion, it was eclipsed by China's immense growth.
Why do the BRICS Countries Want to De-Dollarize?
The BRICS countries want to reduce their dependence on the US dollar and increase their economic autonomy for several reasons. First, the US dollar’s dominance in international trade and finance gives the US significant power and influence over the global economy. This has led to concerns that the US could use its position to impose economic sanctions or other measures that could harm the BRICS countries’ economies.
Second, the US dollar's role as the world's reserve currency means that the BRICS countries must hold large amounts of US dollars to facilitate international trade and finance. This has led to significant dollar-denominated debt for some of the BRICS nations, which could pose a risk to their economies if the value of the dollar were to decline sharply.
Finally, the BRICS countries believe that reducing their dependence on the US dollar could increase their economic autonomy and strengthen their position in the global economy. By reducing the role of the US dollar in international trade and finance, the BRICS nations could create more opportunities for their own currencies to be used and accepted globally.
How Would De-Dollarization Affect the Global Economy?
The push towards de-dollarization by the BRICS countries could have significant implications for the global economy. One potential impact is the reduced role of the US dollar in international trade and finance. This could lead to a shift towards a more multipolar international monetary system, with multiple currencies playing a larger role in global trade and finance.
Another potential impact is the increased use and acceptance of the BRICS countries’ currencies in international trade and finance. This could lead to a more diversified global economy, with more opportunities for emerging markets to participate in global trade and finance.
However, there could also be challenges associated with de-dollarization. For example, the BRICS countries may face difficulty in developing and promoting their own currencies, or a new common currency, as viable alternatives to the US dollar. There may also be geopolitical tensions and conflicts between the BRICS countries and the US and other Western countries that could impact their efforts towards economic autonomy.
What are the Challenges to De-Dollarization?
There are several challenges that the BRICS countries must overcome to achieve their goal of de-dollarization. One significant challenge is political and economic instability. Several of the BRICS nations have experienced significant political turmoil and economic instability in recent years, which could impact their efforts towards economic autonomy.
Another challenge is corruption and the unequal distribution of wealth. These issues could hinder the development and promotion of the BRICS countries’ own currencies and limit their potential to participate in global trade and finance.
Finally, there is the challenge of developing a viable alternative to the US dollar. While the BRICS countries have made some progress in this area, such as trading in their own currencies, there is still a long way to go before a viable alternative, such as a new common currency, could be created. This would require significant investment in infrastructure, technology, and financial institutions.
BRICS Nations and the Development of a “New Currency”
One potential solution to the challenges of de-dollarization is the development of a new currency that could serve as an alternative to the US dollar and the BRICS nations are already moving ahead with plans. According to an announcement made by a Russian official last week, this development is yet another indication that the dominance of the US dollar in global trade is diminishing. Alexander Babakov, who serves as the deputy chairman of Russia’s legislative assembly, the State Duma, has stated that the shift towards using national currencies for settlements is just the first step. In recent times, we have already witnessed examples of this approach being employed in oil deals between India and Russia that were settled using currencies other than the US dollar.
The idea behind a new currency is to create an alternative that is not tied to the US dollar or any other existing currency. This could reduce the BRICS nations' dependence on the US dollar, increase their economic autonomy, reduce costs, promote more significant bilateral trade and facilitate investment.
However, developing a new currency would require significant investment, infrastructure, and institutional support. It remains to be seen whether the BRICS countries will be able to overcome these challenges and successfully develop a new currency or even opt for a digital currency. The summit for the discussion is scheduled for August 2023.
China and Brazil Trades Eliminated the US Dollar
As per the newly announced agreement, Brazil and China will conduct trade by directly exchanging their currencies, the real and the yuan respectively, without the need to convert them into US dollars.
This marks another small deviation from the prevalent dominance of the US dollar, and is aimed at reducing costs, promoting even greater bilateral trade and facilitating investment.
In the past few years, multiple nations have endeavoured to reduce their reliance on the US dollar. Due to the reckless borrowing, expenditure, and monetary creation carried out by the US government, trust in the American currency has been consistently declining. Additionally, America’s deployment of the US dollar as a tool of foreign policy has instilled misgivings among several countries, prompting them to be cautious about relying solely on the dollar.
Creating a New World Bank Model
In 2014, the BRICS countries established the New Development Bank with a seed capital of $50 billion (around €46 billion). The purpose of the bank was to provide an alternative to the World Bank and the International Monetary Fund, which many developing and emerging economies had criticised for their structural adjustment programs and austerity measures. Along with the bank, the BRICS nations created the Contingent Reserve Arrangement, a liquidity mechanism designed to provide financial support to members facing payment difficulties.
The establishment of the BRICS bank and the Contingent Reserve Arrangement was a significant development that attracted many countries beyond the BRICS nations. These countries were interested in joining the group as they saw the bank as a viable alternative to the World Bank and the IMF. The bank is open to new members, and in 2021, Egypt, the United Arab Emirates, Uruguay, and Bangladesh took up shares. However, these countries’ investment amounts were significantly lower than the founding members’ $10 billion investment.
The BRICS countries’ push towards de-dollarization is a significant development in the global economy. The shift towards a more multipolar monetary system could have far-reaching implications for global trade and finance. However, there are significant challenges to overcome, including developing a viable alternative to the US dollar and addressing political and economic instability.
Ultimately, the success of the BRICS countries’ efforts towards de-dollarization will depend on their ability to work together and invest in infrastructure, technology, and institutions that can support their economic autonomy. The BRICS nations’ future will be shaped by their ability to adapt to these challenges and harness their economic potential to promote sustainable and inclusive growth. | <urn:uuid:45bee37c-1007-49f1-9fe7-f5445983aff0> | CC-MAIN-2024-10 | https://opendeclaration.com/acw/brics-multipolar-vs-unipolar-world/ | s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947475833.51/warc/CC-MAIN-20240302152131-20240302182131-00250.warc.gz | en | 0.954792 | 1,898 | 3.046875 | 3 |
Starting a new theme here: Garden Beneficials.
I would say “beneficial insects” but so many organisms that keep us alive and healthy are from other orders. You’ll see as we get further into this!
Here are this week’s guests of honor
These two tiny millipedes are front-line shredders, so you find them in leaf litter. They chew up organic matter (old stems, fruit, leaves and such) so that other organisms can get to work on turning cellulose-heavy plant parts into loamy soil. Other chewing organisms enlarge their tiny holes, and fungi and algae have an easier way in to join the decomposition fest.
You might at first think they're earthworms, but close up their many tiny legs are moving incessantly as they try to find a dark spot to hide. These young ones above rolled up when I disturbed them – their bodies will acquire harder shells later after several molts, but rolling into a ball gives them extra protection. They can also exude noxious substances to deter predators.
According to Life in the Soil (James B. Nardi), "Millipedes take a year or more to mature and may live for several years". He also notes that fox sparrows are one of the few birds that can down a millipede without getting sick from the chemical defenses it secretes.
Later on, these guys will develop more color and harder shells. They don’t move too fast, but they can roam around in cracks and crevices looking for dead material, fungi and algae to eat. So when you see them at work – remember – they are not eating your live plants or roots! They’re just the recyclers. Thank them for the nice soil you enjoy. | <urn:uuid:5e7760cd-5080-438b-a56d-e5a0b19e0c87> | CC-MAIN-2018-13 | https://taylorgardensnw.com/2012/10/18/beneficial-organism-of-the-week/ | s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257647649.70/warc/CC-MAIN-20180321121805-20180321141805-00133.warc.gz | en | 0.958066 | 376 | 2.90625 | 3 |
Professor Alexis Vallée-Bélisle of the University of Montreal Department of Chemistry has worked with Professor Francesco Ricci of the University of Rome Tor Vergata and Professor Kevin W. Plaxco of the University of California at Santa Barbara to improve a new biosensing nanotechnology. The results of the study were recently published in the Journal of American Chemical Society (JACS).
Toward a new generation of screening tests
"Nature is a continuing source of inspiration for developing new technologies,” says Professor Francesco Ricci, senior author of the study. “Many scientists are currently working to develop biosensor technology to detect—directly in the bloodstream and in seconds—drug, disease, and cancer molecules."
"The most recent rapid and easy-to-use biosensors developed by scientists to determine the levels of various molecules such as drugs and disease markers in the blood only do so accurately when the molecule is present within a certain concentration range, called the concentration window," adds Professor Vallée-Bélisle. "Below or above this window, current biosensors lose much of their accuracy."
To overcome this limitation, the international team looked at nature: "In cells, living organisms often use inhibitor or activator molecules to automatically program the sensitivity of their receptors (sensors), which are able to identify the precise amounts of thousands of molecules in seconds," explains Professor Vallée-Bélisle. "We therefore decided to adapt these inhibition, activation, and sequestration mechanisms to improve the efficiency of artificial biosensors."
The researchers put their idea to the test by using an existing cocaine biosensor and revising its design so that it would respond to a series of inhibitor molecules. They were able to adapt the biosensor to respond optimally even with a large concentration window. “What is fascinating,” says Alessandro Porchetta, a doctoral student at the University of Rome, “is that we were successful in controlling the interactions of this system by mimicking mechanisms that occur naturally.”
“Besides the obvious applications in biosensor design, I think this work will pave the way for important applications related to the administration of cancer-targeting drugs, an area of increasing importance," says Professor Kevin Plaxco. “The ability to accurately regulate biosensor or nanomachine’s activities will greatly increase their efficiency.”
Source: Université de Montréal
| <urn:uuid:9c0b7530-5fe4-413c-b980-5f4a6c234929> | CC-MAIN-2015-06 | http://www.nanowerk.com/news2/newsid=29001.php | s3://commoncrawl/crawl-data/CC-MAIN-2015-06/segments/1422118551401.78/warc/CC-MAIN-20150124165551-00076-ip-10-180-212-252.ec2.internal.warc.gz | en | 0.9364 | 543 | 3.109375 | 3 |
What is CSS (Cascading Style Sheets) is a stylesheet language | EP #8 Coding Talk Show Podcast
CSS (Cascading Style Sheets) is a stylesheet language used to control the appearance of web content.
It is used to define the layout, font, and color of web pages, and it is typically used in conjunction with HTML (Hypertext Markup Language), which is used to structure and format the content of a webpage.
CSS allows developers to separate the content of a webpage from its presentation, which makes it easier to maintain and update the look and feel of a website. It is a key tool in the development of responsive websites, which are designed to adapt to different screen sizes and devices.
To use CSS, developers write style rules that specify how different elements on a webpage should be displayed. These rules can be applied to specific HTML elements or groups of elements using selectors, and they can be written in an external stylesheet file or embedded directly in the HTML code of a webpage.
Overall, CSS is a powerful and widely used language that is essential for creating visually appealing and well-designed websites. | <urn:uuid:ab4d3126-c809-4bae-a795-f172c758f1e3> | CC-MAIN-2023-23 | https://ddsry.medium.com/what-is-css-cascading-style-sheets-is-a-stylesheet-language-ep-8-coding-talk-show-podcast-517d8932f5bb | s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224653183.5/warc/CC-MAIN-20230606214755-20230607004755-00519.warc.gz | en | 0.893194 | 241 | 3.796875 | 4 |
Between the years 1979 and 1983, although no new versions of the Apple II were released, it enjoyed a broad popularity and annually increasing sales. The open architecture of the computer, with its fully described hardware and firmware function via the Reference Manual, made it appealing both to hardware and software hackers. Third-party companies designed cards to plug into the internal slots, and their function varied from making it possible to display and use 80-column text, to clocks and cards allowing the Apple II to control a variety of external devices. During this time there was also an explosion of new software written for this easily expandable machine, from the realm of business (VisiCalc and other spreadsheet clones), to utilities, to games of all types. Each month a host of new products would be available for those who wanted to find more things to do with their computer, and the Apple II was finding a place in the home, the classroom, and the office.
At Apple Computer, Inc., however, the Apple II was not viewed with the same degree of loyalty. Although it had continued to be a sales leader, there were sentiments within the company as early as September, 1979 that it was unlikely the II could continue to be a best seller for more than another year or two. Since Apple Computer was a business, and not just a vehicle for selling the Apple II computer, they began to enlarge the engineering department to begin designing new products. The beginning of these new design efforts began in 1978, and one of the earliest projects was an enhanced Apple II that used some custom chips; however, that project was never finished. They also began work on a different, more powerful computer that would use several identical microprocessor chips sharing tasks. The main advantage would be speed, and the ability to do high precision calculations. This computer was code-named Lisa, and because it was such a revolutionary type of design, they knew it would take many years to come to actual production. Because of the power it was to have, Apple executives felt that Lisa would be the future of the company.,
Because they knew that the Lisa project would take a long time to complete, and because the Apple II was perceived to have only a short remaining useful life as a product, they began a new computer project called the Apple III. Instead of building upon the Apple II as a basis for this new computer, they decided to start from scratch. Also, although Wozniak made most of the design decisions for the II, a committee at Apple decided what capabilities the Apple III should have. They decided that the Apple III was to be a business machine, and not have the home or arcade-game reputation that the II had. It was to have a full upper/lowercase keyboard and display, 80-column text, and a more comprehensive operating system. They also decided that since it would be a while before many application programs would be available for this new computer, it should be capable of running existing Apple II software. In some ways this handicapped the project, since it was then necessary to use the same microprocessor and disk drive hardware as was used in the Apple II.
Apple executives also decided that with the introduction of the Apple III they wanted a clear separation between it and the Apple II in regard to marketing. They did not want any overlap between the two. The III would be an 80-column business machine and was predicted to have ninety percent of the market, while the Apple II would be a 40-column home and school machine and would have ten percent of the market. Apple’s executives were confident that after the release of the Apple III, the Apple II would quickly lose its appeal.
Because of their desire for a strong and distinct product separation, the Apple II emulation mode designed into the Apple III was very limited. The engineers actually added hardware chips that prevented access to the more advanced features of the Apple III from Apple II emulation mode. Apple II emulation couldn’t use 80 columns, and had access to only 48K memory and none of the better graphics modes. As a result, it wouldn’t run some of the better Apple II business software, during a time when there wasn’t much new business software for the Apple III.
The Apple III engineers were given a one-year target date for completion. It was ready for release in the spring of 1980, but there were problems with both design and manufacturing. (It was the first time that Apple as a company tried to come out with a new product; the Apple II had been designed and built by Wozniak when he was the engineering department). The first Apple III computers were plagued with nearly 100% defects and had to be recalled for fixes. Despite the efforts that Apple took to fix these problems, even taking the unprecedented step of repairing all of the defective computers at no charge, they never recovered the momentum they lost with that first mistake, and the III did not become the success Apple needed it to be.
Although all of the bugs and limitations of the Apple III were eventually overcome, and it became the computer of choice within Apple, it did not capture the market as they had hoped. At that point, they weren’t sure exactly what to do with the II. They had purposely ignored and downplayed it for the four years since the II Plus was released, although without its continued strong sales they would not have lasted as a company. In a 1985 interview in Byte magazine, Steve Wozniak stated:
When we came out with the Apple III, the engineering staff cancelled every Apple II engineering program that was ongoing, in expectation of the Apple III’s success. Every single one was cancelled. We really perceived that the Apple II would not last six months. So the company was almost all Apple III people, and we worked for years after that to try and tell the world how good the Apple III was, because we knew … If you looked at our advertising and R&D dollars, everything we did here was done first on the III, if it was business related. Then maybe we’d consider doing a sub-version on the II. To make sure there was a good boundary between the two machines, anything done on the II had to be done at a lower level than on the III. Only now are we discovering that good solutions can be implemented on the II… We made sure the Apple II was not allowed to have a hard disk or more than 128K of memory. At a time when outside companies had very usable schemes for adding up to a megabyte of memory, we came out with a method of adding 64K to an Apple IIe, which was more difficult to use and somewhat limited. We refused to acknowledge any of the good 80-column cards that were in the outside world–only ours, which had a lot of problems.
Wozniak went on in that interview to say that at one time he had written some fast disk routines for the Pascal system on the Apple II, and was criticized by the Apple III engineers. They didn’t think that anything on the II should be allowed to run faster than on a III. That was the mindset of the entire company at the time.
Apple has been much maligned for the attention they gave the Apple III project, while suspending all further development on the Apple II. They pegged their chances for the business market in 1980 on the Apple III. Even Steve Wozniak had stated in another interview, “We’d have sold tons of if we’d have let the II evolve … to become a business machine called the III instead of developing a separate, incompatible computer. We could have added the accessories to make it do the business functions that the outside world is going to IBM for.” Part of the problem was the immaturity of the entire microcomputer industry at the time. There had never been a microcomputer that had sold well for more than a couple of years before it was replaced by a more powerful model, usually from another company. The Altair 8800 and IMSAI had fallen to the more popular and easier to use Apple II and TRS-80 and Commodore PET, as well as other new machines based on the Intel 8080 and 8088 processors. It is entirely understandable that Apple’s attitude between 1978 and 1980 would be of panic and fear that they wouldn’t get a new computer out in time to keep their market share and survive as a company. However, during the entire time when Apple was working on the III as a computer to carry the company through until Lisa would be ready, and during the entire time that the Apple II was ignored by its own company, it continued to quietly climb in sales. It is a credit to both the ingenuity of Wozniak in his original design, and to the users of the Apple II in their ingenuity at finding new uses for the II, that its value increased and stimulated yet more new sales. The Apple II “beat” the odds of survival that historically were against it.
When Apple saw that the sales on the Apple II were not going to dwindle away, they finally decided to take another look at it. The first new look at advancing the design of the II was with a project called “Diana” in 1980. Diana was intended primarily to be an Apple II that had fewer internal components, and would be less expensive to build. The project was later known as “LCA”, which stood for “Low Cost Apple”. Inside Apple this meant a lower cost of manufacturing, but outsiders who got wind of the project thought it meant a $350 Apple II. Because of that misconception, the final code name for the updated Apple II was “Super II”, and lasted until its release. (Click on this link for a picture and description of a prototype for the Super II.)
Part of the IIe project grew out of the earlier work on custom integrated circuits for the Apple II. When they finally decided to go ahead and improve the design by adding new features, one of the original plans was to give the Apple II an 80-column text display and a full upper/lowercase keyboard. Walt Broedner at Apple did much of the original hardware planning, and was one of those at Apple who pushed for the upgrade in the first place. To help maintain compatibility with older 40-column software (which often addressed the screen directly for speed), he decided to make 80-columns work by mirroring the older 40 column text screen onto a 1K memory space parallel to it, with the even columns in main memory and the odd columns in this new “auxiliary” memory. To display 80-column text would require switching between the two memory banks. Broedner realized that with little extra effort he could do the same for the entire 64K memory space and get 128K of bank-switchable memory. They put this extra memory (the 1K “80-column card”, or a 64K “extended 80-column card”) in a special slot called the “auxiliary” slot that replaced slot 0 (the 16K Language Card was going to be a built-in feature). The 80-column firmware routines were mapped to slot 3, since that was a location commonly used by people who bought 80-column cards for their Apple II computers, and was also the place where the Apple Pascal system expected to find an external terminal. The auxiliary slot also supplied some special video signals, and was used during manufacture for testing on the motherboard.
The engineers who worked on the IIe tried hard to make sure that cards designed for the II and II Plus would work properly in the new computer. They even had to “tune” the timing on the IIe to be slightly off (to act more like the II Plus) because the Microsoft Z-80 Softcard refused to function properly with the new hardware. A socket was included on the motherboard for attaching a numeric keypad, a feature that many business users had been adding (with difficulty) to the II Plus for years. The full keyboard they designed was very similar to the one found on the Apple III, including two unique keys that had first appeared with the III–one with a picture of an hollow apple (“open-apple”) and the other with the same apple picture filled in (“solid-apple”). These keys were electrically connected to buttons 0 and 1 on the Apple paddles or joystick. They were available to software designers as modifier keys when pressed with another key; for example, open-apple-H could be programmed to call up a “help” screen. The newer electronics of the keyboard also made it easier to manufacture foreign language versions of the Apple IIe.
Over all, Broedner and Peter Quinn (the design manager for the IIe and later the IIc projects) and their team managed to decrease the number of components on the motherboard from over one hundred to thirty-one, while adding to the capabilities of the computer by the equivalent of another hundred components.
Peter Quinn had to beg for someone to help write the firmware revisions to the Monitor and Applesoft for the IIe. He finally got Rich Auricchio, who had been a hacker on the Apple II almost from the beginning. Quinn said in a later interview, “You cannot get someone to write firmware for this machine unless he’s been around for three or four years. You have to know how to get through the mine field . He was extremely good. He added in all the 80-column and Escape-key stuff.” Quinn also got Bryan Stearns to work on the new Monitor.,
Changes were made in the ROMs to support the new bank-switching modes made necessary by having two parallel 64K banks of RAM memory. To have enough firmware space for these extra features, the engineers increased the size of the available ROM by making it bank-switched. This space was taken from a location that had previously not been duplicated before–the memory locations used by cards in the slots on the motherboard. Ordinarily, if you use the Monitor to look at the slot 1 memory locations from $C100 through $C1FF, you get either random numbers (if the slot is empty), or the bytes that made up the controller program on that card. Any card could also have the space from $C800 through $CFFF available for extra ROM code if they needed it. If a card in a slot did a read or write to memory location $CFFF, the $C800-$CFFF ROM that belonged to that card would appear in that space in the Apple II memory. When another card was working, then its version of that space would appear. On the IIe, they made a special soft-switch that would switch out all the peripheral cards from the memory, and switch in the new expanded ROM on the motherboard. The firmware in the new bank-switched ROM space was designed to avoid being needed by any card in a slot (to avoid conflicts), and much of it was dedicated to making the 80-column display (mapped to slot 3) work properly.
Also added were enhancements to the ESC routines used to do screen editing. In addition to the original ESC A, B, C, and D, and the ESC I, J, K, and M added with the Apple II Plus, Auricchio added the ability to make the ESC cursor moves work with the left and right arrow keys, and the new up and down arrow keys. The new IIe ROM also included a self-test that was activated by pressing both apple keys, the control key, and RESET simultaneously.
The new Apple IIe turned out to be quite profitable for Apple. Not only was it more functional than the II Plus for a similar price, but also Apple was able to sell it to dealers for about three times what it cost to manufacture. They had gotten their “Low Cost Apple”, and by May of 1983 the Apple IIe was selling sixty to seventy thousand units a month, over twice the average sales of the II Plus. Christmas of 1983 saw the IIe continue to sell extremely well, partly resulting from the delayed availability of the new IBM PCjr. Even after the Apple IIc was released in 1984, IIe sales continued beyond those of the IIc, despite the built-in features in the IIc.
Early Apple IIe motherboards were labelled as “Revision A”. Engineers determined soon after its introduction that if the same use of parallel memory was applied to the hi-res graphics display as was done with the text display, they could create higher density graphics. These graphics, which they called “double hi-res”, also had the capability of displaying a wider range of colors, similar to those available with the original Apple II lo-res graphics. The IIe motherboards with the necessary modifications to display these double hi-res graphics were labelled “Revision B”, and a softswitch was assigned to turn on and off the new graphics mode.
Later versions of the IIe motherboards were again called “Revision A” (for some reason), although they had been modified for double hi-res graphics. The difference between the two “Revision A” boards was that the latter had most of the chips soldered to the motherboard. An original “Revision A” board that had been changed to an Enhanced IIe was not necessarily able to handle double hi-res, since the change to the Enhanced version involved only a four-chip change to the motherboard, but not the changes to make double hi-res possible.
This version of the Apple IIe was introduced in March of 1985. It involved changes to make the IIe more closely compatible with The Apple IIc and II Plus. The upgrade kit (for previous IIe owners) consisted of four chips that were swapped in the motherboard: The 65c02 processor, with more assembly language opcodes, replaced the 6502; two more chips with Applesoft and Monitor ROM changes; and the fourth chip was a character generator ROM that included graphics characters (first introduced on the IIc) called “MouseText“. The Enhanced IIe ROM changes fixed most of the known problems with the IIe 80-column firmware, and made it possible to enter Applesoft and Monitor commands in lower-case. The older 80-column routines were slower than most software developers wanted; they disabled interrupts for too long a time. Also, there were problems in making Applesoft work properly with the 80-column routines. These problems were solved with the newer ROMs.
For those who purchased the Enhanced IIe new, there were modifications to the appearance of the keyboard, including a darker color to the keys, a smaller size to the characters on the keys, a change to black color for the keycap text, and movement of the character to the upper part of the key. Also, the power light had the word “Enhanced” added to it, to help distinguish it from the original Apple IIe. (This sticker was also included in the upgrade kit).
Monitor changes also included a return of the mini-assembler, absent since the days of Integer BASIC. It was activated by entering a “!” command in the Monitor, instead of a jump to a memory location as in the older Apple II. Features also added included the ability to enter ASCII characters directly into memory, and an “S” command to make it possible to search memory for a byte sequence. Interrupt handling was also improved. However, the “L” command to disassemble 6502 code still did not handle the new 65c02 opcodes as did the IIc disassembler.
Applesoft was modified in the Enhanced IIe ROMs to let commands such as GET, HTAB, TAB, SPC, and comma tabbing work properly in 80-column mode.
The new MouseText characters caused a problem for some older programs at first, until they were upgraded; characters previously displayed as inverse upper-case would sometimes display as MouseText instead.,
The Platinum Apple IIe, introduced in January 1987, had a keyboard that was the same as the IIGS keyboard, but the RESET key was moved above the ESC and “1” keys (as on the IIc), and the power light was above the “/” on the included numeric keypad (the internal numeric keypad connector was left in place). The CLEAR key on the keypad generated the same character as the ESC key, but with a hardware modification it could generate a Ctrl-X as it did on the IIGS. The motherboard had 64K RAM in only two chips (instead of the previous eight), and one ROM chip instead of two. An “extended 80-column card” with 64K extra memory was included in all units sold, and was smaller than previous versions of that memory card.
No ROM changes were made. The old shift-key modification was installed, making it possible for programs to determine if the shift-key was being pressed. However, if using a game controller that actually used the third push-button (where the shift-key mod was internally connected), pressing shift and the third push-button simultaneously caused a short circuit that shut down the power supply.
In November 1993, the sad news hit the online services (GEnie, America Online, etc) that the Apple IIe had been removed from the latest price lists distributed by Apple, effectively discontinuing the last remaining Apple II computer from production.
In early 1991, Apple introduced a hardware add-on card for the Macintosh LC computer (the first low cost Mac that could display color ) which allowed it to emulate an 128K Apple IIe. This Apple IIe-on-a-card cost only $199, but the Mac LC needed to use the card sold for $2,495, which made the combination the most expensive Apple II ever made.
Apple engineers managed to put the function of an entire IIe onto a card smaller than the old Disk II controller card. With version 2.0 of the Apple II interface software (which ran on the Mac and accessed the features of the card), more of the memory allocated to the Macintosh could be used by the IIe. However, unlike all previous versions of the IIe, there were no hardware-based slots on the IIe card; instead, it used software-based slots that were allocated by moving icons that represent various peripherals into “slots” on the Mac screen.
To use 5.25 disks with this Apple IIe, there was a cable that attached to the card. The cable would split into a game connector (for paddles or joystick operation) and a connector that accepted IIc and IIGS style 5.25 drives. The IIe card ran at a “normal” (1 MHz) speed and a “fast” (2 MHz) speed. It had limitations, however. For a 1991 Apple II, it was limited in being unable to be accelerated beyond 2 MHz (a Zip Chip could run a standard IIe at 8 MHz), and the screen response seemed slow, since it was using a software-based Mac text display instead of the hardware-based Apple II character ROM. As a Macintosh it lacked the power and speed of the newer Macintosh II models (which also ran color displays). But if having a Apple II and a Mac in one machine was important, this was the best way to do it. This card lasted longer than the real Apple IIe, not being discontinued until May 1995.
The start and end dates for each model of the Apple IIe and Apple III:
(Many thanks to Peltier Technical Services, Inc. for assistance in creation of this chart.) | <urn:uuid:65d96928-ced6-45c8-8e31-e2bcd79940cb> | CC-MAIN-2022-40 | https://www.apple2history.org/history/ah07/ | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334596.27/warc/CC-MAIN-20220925193816-20220925223816-00738.warc.gz | en | 0.977934 | 4,957 | 3.328125 | 3 |
In the eastern region of Kyoto, Japan, there lies an area named Higashiyama, filled with shrines, temples, and the Kyoto National Museum. It was here in Higashiyama that Nintendo built an office complex of adjacent buildings in which the company's greatest designers worked. Almost everything videogame-related that Nintendo developed before the year 2000 came from the complex known as 60 Kamitakamatsu-cho—from the original Game & Watch and Nintendo Entertainment System (NES), to Donkey Kong (1981), Super Mario Bros. (1985), The Legend of Zelda (1986), and Metroid (1986). But while these games can still be played, the buildings they were created in are now gone.
At one point in time there were approximately five buildings at 60 Kamitakamatsu-cho, including Nintendo's headquarters and its iconic Research Center. The construction and exact dates of establishment for each of these buildings remain a mystery. What is known, however, is that in the past 15 years, two of these five buildings have been demolished. This absence coincides with recent concerns related to videogame preservation. As more buildings are torn down, we are urged to question the historical value of the physical birthplaces of iconic videogames.
The discussion among those in the game preservation community, including the IGDA, concerns whether game companies should begin preserving their physical legacy in the form of museums and archives on company property. Japanese corporations including Casio, Mitsubishi, NEC, Panasonic, Seiko, Sharp, Sony, TDK, and Toshiba all have corporate museums and archives open to the general public. They are regularly visited by tourists, educators, students, and prospective job applicants. Many of these same companies have manufactured countless integrated circuits, display screens, and other components for videogame hardware over the years.
By establishing a museum or archive, a videogame company moves towards acknowledging the value of its legacy, making it both publicly observable and ensuring its ongoing protection through a continuous cataloguing of production material. Keeping all of this in a single location deemed permanent and secure adds to the seriousness of the commitment. Yet the unsettling prospect remains: where does every single element from a finished videogame go once production and publishing are complete? The fear is that it remains scattered throughout departments and development staff workstations, unorganized and vulnerable. Making use of company real estate to establish a place to organize, store, and even display these elements is the simple but often overlooked solution to a medium's disappearing history.
Due to confidentiality, we don’t know the full extent of the work that went on in the buildings at 60 Kamitakamatsu-cho. When the time comes to tell the stories behind Nintendo’s earliest games, those stories are often pieced together from various sources, that is, if they can be pieced together at all. All that’s currently left are fading memories from former and current staff as well as some archived media, both of which are often difficult to track down. And so, the limited history of Nintendo’s 60 Kamitakamatsu-cho complex is the result of bringing together various tourist photographs, Google Maps images, and news media snapshots and video. All this first-hand evidence is vital to establishing the true history of these buildings. For instance, Nintendo’s official history on its corporate website says it merged all its playing card manufacturing facilities to this location in 1952, and then moved its headquarters to the complex in 1959. However, a stone plaque hanging down the street from the guarded entrance gate to the complex in Higashiyama tells a different story—it commemorates its establishment as 1954. Unfortunately, such historical clarity isn’t always possible. Nintendo’s headquarters once sat at the left of the entrance gate, and on the far left was what is assumed to be a manufacturing facility, with Nintendo’s kanji logo displayed in blue on white signage. Both of these buildings are believed to have been built sometime in the 1970s. At the center of this complex was Nintendo’s three-story Kyoto Research Center, built in the late 1980s. Then there are two smaller buildings that stand at the back of the complex, but their history and current usage remain a mystery.
Nintendo ended up outgrowing this area, and in 2000 moved its headquarters to its present location: 11-1 Hokotate-cho in the Minami-ku ward of Kyoto. Within a few years (the exact date is unknown) the former Nintendo headquarters building at 60 Kamitakamatsu-cho was demolished and a green patch of land replaced it. Intelligent Systems, a Kyoto developer that has close relations with Nintendo, would move into Nintendo’s Kyoto Research Center at 60 Kamitakamatsu-cho in 2002. It was here that Intelligent Systems developed Fire Emblem, Paper Mario, and WarioWare as well as development tools for the Nintendo DS and 3DS handhelds. In 2013, Intelligent Systems moved out and transferred its entire operations into its own building, an eight-minute walk from Nintendo’s current headquarters. At around this time, Nintendo began constructing a new development center in the same area of its headquarters. The Nintendo Development Center, a seven-floor eco-friendly structure complete with rooftop solar panels and a rainwater recycling system, opened in June 2014 with reported construction costs of 19 billion yen ($186.3 million USD).
One year later, in 2015, another of Nintendo’s larger buildings at 60 Kamitakamatsu-cho would be quietly demolished. The three-story rectangular building iconically displaying Nintendo’s blue kanji logo to surrounding neighborhoods was gone. What remains is the former three-story Kyoto Research Center. On occasion, it has been mistaken for Nintendo’s headquarters by the media. The Kyoto Research Center is now home to the Nintendo subsidiary Mario Club, which focuses on game debugging services for the company. Nintendo Co., Ltd. and Nintendo of America declined to comment on the history and future of the 60 Kamitakamatsu-cho complex in Kyoto.
Nintendo has certainly never ignored its past, and in fact has chronicled much of its history in the former Iwata Asks series of website interviews with senior Nintendo staff. Nintendo New York (previously known as Nintendo World) and Pokémon Centers in Japan serve as both company stores and showrooms for new games. Nintendo New York in particular has exhibited items from Nintendo’s legacy, from game design documents to its vast line of handhelds. However, this has not stopped tourists in Japan from trekking to its various office buildings in Kyoto to pay tribute. While some may see these buildings as basic offices with fluorescent lighting and a maze of workstations, fans see them as attractions worth visiting. They pose for photos with character merchandise outside company entrance gates to share on social media; some hope to catch a glimpse of game development (or a game developer personality), while others ask security guards at the gate if they can enter and take pictures at a better angle, only to have their request turned down.
Having traveled across Hyrule on the quest in The Legend of Zelda, these fans seem inspired to go on a similar pilgrimage to Kyoto, to the origin of their enthusiasm. Instead of talking to townspeople, they talk to train station attendants and ask for directions that lead them to the white block towers of Nintendo’s buildings in the city. Along with that same sense of adventure, game players seek a connection with companies like Nintendo to display appreciation and gratitude, and to take in the same physical environment that hosted and informed the games they play, hoping to be inspired in some way. However, there may not be anything left to visit or view, the kicker being that the absence of a museum or archive means their arrival probably wasn’t ever considered. Just as Warner Bros. Studios in Burbank, California has plaques hanging on each of its soundstages listing the motion pictures and TV shows filmed there, game development offices hold a somewhat similar allure, albeit without the official recognition.
The big difference between film and game companies is that the latter don’t need soundstages and backlots to produce games; they just need technology, efficiency, and secrecy to stay ahead of competitors. Upholding these tenets has encouraged many Japanese videogame companies to recently sell off old real estate and bring their operations under a single roof, with all the modernizations they required. Notably, this contrasts with the structure adopted during the ’80s and ’90s, when some Japanese game developers, handling both consumer and coin-op game production, opted to spread operations across different buildings. Some even had game development, sales, marketing, and distribution spread across different cities for various reasons. The shake-up to this system has been caused by big, often unexpected changes across many Japanese game makers in recent years. Several small studios have gone bankrupt and closed. And larger development houses and publishers have seen a dramatic shift in operations—not just Nintendo. Many have sold off property in other areas of Japan in order to focus their efforts in Tokyo and Osaka.
The obvious reason for this shift would be to raise cash to acquire new real estate. It also keeps companies up to date with Japanese building standards in a country vulnerable to earthquakes. But there’s more to it. In the transition to digital, retail disc assembly isn’t required to the extent it once was, eliminating the need for warehouse space to store inventory. Coin-op game development is not what it once was, either. Factories and plants used to build game cabinets in the downsized coin-op market can be closed, with the logistical work outsourced entirely. Some would argue that certain buildings destined for demolition are relics of decades past, eyesores inside and out, and do not reflect an industry that’s all about moving forward. This may be why numerous game companies are not holding on to old real estate. But in doing so they deny any possibility of turning these properties into corporate archive or museum facilities. Those on the other side of the debate see this as a depressing state of affairs, given that a part of both a company’s and videogaming’s legacy practically vanishes as a result.
Taito, Konami, and Sega are Japanese game companies with one thing in common: very early in their business operations they were selling jukeboxes in Japan in addition to videogames. Sega also manufactured slot and pinball machines in Japan in its infancy. Namco once produced mechanical rocking-horse rides with Disney characters on them in the late 1960s, and also distributed Atari arcade games in Japan starting in the mid-70s. The entire inventory of amusement and consumer gaming items manufactured by these four companies over the past several decades is enormous.
What doesn’t end up stored in warehouses is sold off to buyers or discovered by collectors. And what isn’t discovered is left outside for trash collectors to pick up and toss into a landfill. Anything that breaks and is considered irreparable is trashed by arcade operators or distributors who may deem the repair a waste of time, effort, and money. To them, the entertainment these games provide is short-term, and each year countless replacements for this entertainment are presented to the marketplace—that is their protocol. No such protocol exists within the gaming industry for cataloging game assets, and it’s an issue rarely discussed at industry events, if at all.
When videogame development buildings are demolished, the mystery remains: what is deemed important to keep, what is thrown out in the trash, and who is put in charge of making these decisions? Design documents, marketing materials, artwork, source code, music files, ROMs, circuit boards, spare components, contracts, legal documents—the responsibility and work that would go into organizing all of this is extensive and comes at a cost. It’s unclear if game companies can afford the warehouse space, security, and personnel needed to keep archived games catalogued and in working order, and whether these are worthwhile operating costs is currently in question. Certain pieces of Taito, Konami, and Sega real estate, owned for decades by each of these companies in Japan, have already been sold off, and some of the buildings that once sat on these properties have been demolished. What’s clear is that the opportunity to turn these properties into an archive or museum vanishes with the buildings themselves.
Last year, Konami sold off the nine-story high-rise in downtown Sapporo—which was once the long-time headquarters of its Hudson Soft subsidiary—taking with it the famous Hudson Soft bee mascot that once sat atop its entrance awning. The legacy that Hudson Soft built, which included such titles as Bomberman (1983), Adventure Island (1986), Bonk’s Adventure (1989), and Bloody Roar (1997), disappeared from the snowy city of Sapporo, where the company originally began in 1973. Konami did, however, make headlines in 2013 by purchasing an old hotel and theater complex in the Ginza district of Tokyo for a reported 17.8 billion yen ($222.5 million USD), demolishing it to make way for a new complex tentatively named “Konami Creative Center Ginza.” In statements made to the press, Konami said it intends to use the facility for the production of new content and “intends for it to be a hub for production of its content and for communication between the Konami Group and its customers.” Konami declined to comment on the completion date or to elaborate on what type of production (videogames or casino gaming) the building would be used for. It is currently under construction.
In 2013, Sega sold off its five-story headquarters annex across the street from Sega Haneda Buildings 1 & 2, still sitting on land Sega established itself on over five decades ago in Tokyo’s Ota ward. The building bore a towering neon sign of the Sega logo for decades before it was sold off and demolished to make way for a new apartment building. That same year, Sega also sold off Sega Building 3, a complex once home to its internal development teams AM2, Hitmaker, Sega Rosso, Amusement Vision, and WOW Entertainment. The building, once the headquarters of AKAI (before Sega acquired it in 1996), was just a 10-minute walk from Sega’s Haneda Buildings 1 & 2. It was demolished and will reportedly make way for a furniture superstore. The Haneda 1 and 2 buildings are still in use and are now occupied by Sega Games (its mobile, PC, and home console game division) and Sega Interactive (its coin-op amusement division).
BANDAI NAMCO Games moved into a new 13-story high-rise this year in Tokyo’s Minato ward, leaving the famous trapezoid-shaped eight-floor office it had occupied in Shinagawa since May 2007—it will be demolished for a 19-story apartment building. The Shinagawa building’s lobby once prominently displayed the company’s legacy of arcade cabinets, ranging from Pac-Man (1980) and Xevious (1982) to Tekken (1997). In the process of bringing operations into a single complex, four of Namco’s branch offices in Tokyo’s Ota ward, originally built in the 1980s, were sold off, and some were demolished. One of these buildings was nicknamed “Xevious” because profits from that game’s Nintendo Famicom release helped fund its construction. The most recent Ota ward departure took place in September 2014, when Namco moved out of its Yaguchi office, once prominently featured on the Japanese cover of “We Love Katamari.” The Yaguchi office was built in 1985 and previously served as Namco’s long-time headquarters.
It hasn’t all been about expansion. Some Japanese game companies have, in fact, opened their doors to welcome game players into shops on company property (similar to Nintendo New York), and have even lent company buildings and games to museums and exhibits. While these are not corporate museums or archives per se, they are a step in the right direction, opening up to the public to a certain degree.
Square Enix sold off its Hatsudai Building last year; built in 1996, it was formerly the headquarters of ENIX itself before the merger with Square. The Hatsudai Building once famously housed a Square Enix Character Goods Shop open to the public on its first floor. Square Enix moved into Shinjuku Eastside Square in 2012, occupying several floors alongside subsidiary Taito Corporation. Square Enix also opened an all-new character goods shop, café, and bar named ARTNIA on the ground floor of Shinjuku Eastside Square, complete with souvenirs and limited-edition items. The company calls ARTNIA “an area that serves as a bridge between our goods and our customers.”
Taito Corporation closed its Ebina Development Center and factory in 2014. The site was originally established in 1979 in Kanagawa Prefecture, in the midst of the Space Invaders (1978) craze. It was here that cabinets for countless coin-operated games, from Darius (1986) to Chase H.Q. (1988), were manufactured; the site also served as development offices for numerous games. Another Taito building where consumer and coin-op development occurred for many years in Yokohama, known as the Taito Central Research and Development Laboratory, was also demolished in 2014, reportedly after sitting unoccupied for a number of years. Taito does lend support to the game museum sector in Japan: it currently lends its warehouse and former development office in Saitama Prefecture, known as the Taito Kumagai Building, to a game museum that showcases arcade cabinets on a regular basis.
Videogames have solidified themselves as part of the lives of players and are passed on to new generations. However, they still struggle to be taken seriously by governments, educators, and those who would dismiss them as harmful, unhealthy, and a waste of time. Walking into a corporate museum to experience the personal storytelling of the design, artistry, and engineering that goes into game development could educate these critics on the value of videogames while further engaging those who already enjoy them.
Much of the game development work produced in these demolished buildings remains a mystery to game audiences in and outside Japan, and it is easy to take the end product, and the effort that went into it, for granted. On occasion, during interviews, game designers and programmers will tell stories of the places they worked in, the camaraderie they shared with co-workers, and the numerous challenges, both personal and technical, they faced during development. These development operations were places where countless designers, programmers, and engineers worked endless hours creating games that went on to be played for years and still are today. They are a part of our culture, bringing people together, and some hold an inspirational meaning for individuals. What future generations of game players and creators can learn from them is invaluable.
Whether the game industry is open to putting its interactive legacies on display for an audience of game players, aspiring game designers, and visiting tourists is now in question. Time will tell if game industry real estate can be turned into museums or archives, and if the game industry, game players, and the general public are ready to support it.
Header image: The Sega Annex Building, an office that had been a part of Sega for decades with its enormous signage, was recently demolished for a residential building. Courtesy of Twitter user: @t_arai2012
“There are complaints from parents and staff members that you are lazy and unprofessional. The perception is you don’t seem to care about your students or the people who work with you.”
“Well, how would you know? You don’t have any idea how difficult it is to teach these days with so many unrealistic expectations. You spend all day in the office or at meetings, bark out a bunch of orders from your phone or desk, and are completely out of touch with what it is really like in the classroom now.”
Fear of difficult conversations with teachers, like the example above, affects the growth of school districts, which improve or fail based on the behaviors of those whose job it is to carry out instruction and affect student learning. When teachers and staff members are committed to their craft and doing everything they can to further the district’s mission, growth is significantly more likely.
But what if some staff members are not committed to their role and exhibit behaviors that inhibit growth, both among the students they are responsible for and within their professional team? How to have difficult conversations with teachers is a skill school leaders need to become comfortable with.
In an ideal world teachers and staff members would welcome feedback regarding their performance and work behaviors, positive or negative. Ideally, praise would encourage more of the same behaviors and constructive criticisms would be accepted with an open mind, lead to self-reflection, and a promise to do better in the future. As most of you know, negative feedback is seldom accepted in this way. People tend to seek feedback when they have performed well and avoid feedback when they have performed poorly (Moss, Valenzi, & Taggart, 2003).
The Paradox of Becoming a Principal
Most principals began their careers as teachers. The personalities of teachers are not confrontational in nature. The question of how to have difficult conversations with teachers isn’t on the radar. Educators are more concerned with how to have difficult conversations with parents, and that topic isn’t covered in most college classes either. Teachers are, for the most part, nurturing and supportive individuals. They work to establish relationships and develop confidence in those they work with. The transition to school leader can seem like a divergence from that role.
Principals must deal with conflict head-on. Conflict delivers itself in many forms to the principal’s office: conflict between students and teachers, teachers and parents, teachers and teachers, principal and parents, and finally (and likely most concerning for new administrators) between teachers and principals. When preparing for difficult conversations with teachers, the fear of an employee becoming defensive and the potential for a strained working relationship can prevent school leaders from engaging in the honest feedback needed to improve district performance.
Research supports school leaders’ fears about difficult conversations with teachers. Kluger and DeNisi (1996) found that people are demotivated when they receive feedback that threatens their self-confidence. Other organizational psychologists found in later research, “If leaders believe that he or she can successfully manage the performance of subordinates in a way that performance improves, they will be more likely to actually influence subordinate performance” (Corbett & Anderson, 2001).
The sooner school leaders can ease their discomfort in having difficult conversations with teachers and develop the confidence and strategies to deliver meaningful and constructive feedback, the more quickly the staff and district as a whole can improve.
Why do Leaders Avoid Difficult Conversations with Teachers?
An article from the Academy of Management Executive titled Are Your Employees Avoiding You?: Managerial Strategies for Closing the Feedback Gap explored different types of leaders and how they have affected the outcomes of major events ranging from the 2003 Columbia Space Shuttle tragedy, the Iraq War, and the spread of the SARS epidemic. One type of leader profiled in the article was the “Conflict Avoider.”
The conflict avoider was described as uncomfortable with giving bad news to others. These types of managers, as the name would suggest, deliberately avoid giving bad news. The authors contend conflict avoiders often delay giving feedback when they deem the conversation could be uncomfortable or contentious. As a result, the delay causes a disconnect in employees’ minds between the perceived poor performance and the tardy feedback. Conflict avoiders may also distort feedback or sugarcoat poor performance so it will seem less severe than it really is. The result is a message that fails to address the performance problem and will most likely fail to result in improved performance.
The same article also addressed the nurturing style of management that conflict avoiders may resort to. It found that some conflict avoiders do not overtly try to avoid conflict; they just want to be supportive, often to a fault. In this case leaders may focus on providing support, consolation, and reaffirmation of the employee’s competence, as well as on the need for approval among their employees. This need to provide nearly unconditional positive feedback can result in ignoring glaring indicators of needed improvement.
The nurturing style of management becomes even more problematic when attempting to remove a poor performer from their position if their job evaluations have been positive in nature and fail to address serious performance concerns that should have been documented and previously addressed.
Avoiding Difficult Conversations with Teachers is Natural
It’s human nature to back away from, or at least be apprehensive about, difficult conversations. Humans approach situations they fear will hurt them or others very carefully. Remember fight or flight from Psychology 101? Difficult conversations with teachers are not predictable. As a result, many people would rather avoid them than walk in unarmed, and who can blame them? After all, it is part of our psychological makeup, which has to be rewired through training and cannot change just because a job title changes from teacher to principal.
Effective school leaders do not choose to fight, nor do they avoid difficult conversations and opt for flight. Rather, they respect their responsibility as instructional leaders of their school as much as they respect the teachers they lead. Over time and with practice they become more comfortable with having difficult conversations with teachers.
We are going to look at how to have difficult conversations with teachers from two perspectives. The first perspective, provided in this article, is obviously that of an ineffective leader. It provides a surefire way to lose friends and alienate people.
At the end of the article we are offering a complimentary handbook, How to Open a Can of Worms: A Principal’s Guide for Having Difficult Conversations with Teachers.
10 Tips on How to Make Difficult Conversations with Teachers . . . More Difficult
If you want to blow up a professional relationship and sabotage any hope of working collaboratively with the teachers and staff you lead, take careful notes and follow these steps when you prepare for difficult conversations with teachers. (Look for the truth behind these tips, which are dripping in sarcasm, and find more information in the handbook, How to Open a Can of Worms.)
1. Do not establish an outcome objective before a difficult conversation with a teacher begins. Ignore warning signs when your difficult conversation is going off track.
2. Assure yourself that the employee whom you are about to speak with bears 100% of the blame for their poor performance, or whatever the issue is. Assume mentor teachers, who may have also dodged difficult conversations, previous leaders, college advisers, or factors unknown to you have no impact on the behaviors that need to be addressed.
3. Assume you have all the facts before you ever meet with the teacher. Forget about the need to gather their perspective and refuse to look outside of the preconceived frame in which you have already shaped the situation. Hit them with the truth as you see it and be closed-minded to any other reality.
4. Ease into a difficult conversation, or plan to trick or manipulate your subordinate so that you can hoodwink them and get your way using an indirect approach. Here’s an example:
Shawna, who has been principal for nine years in a busy suburban elementary school, has some difficult news to share with Dalton, a second year 3rd grade teacher. Shawna has observed Dalton’s performance over the past couple of years and has decided he may not be a good fit for her building. Dalton seemed to have a lot of potential when he first started. He interacts well with students, but after the first few months of his second year of teaching he just cannot seem to get it together. He teaches with a veteran team of long term teachers, so Shawna knows he has the support he needs. Shawna has recently learned about an upcoming opening for a middle school gym teacher at a nearby school in the same district. She knows that Dalton is certified to teach physical education, though he has never expressed an interest in teaching it at her school, or any other school for that matter. Let’s watch as Shawna tries to coerce Dalton to step into her frame and follow her plan.
Shawna: “Hey Dalton! Sit down. I wanted to visit with you about an opportunity. (Dalton sits and smiles thinking he has done something well to deserve an “opportunity”). Do you sometimes feel like teaching 3rd grade is soooo overwhelming and nearly impossible to get your arms around?”
Dalton: “Well it is far more difficult than I imagined when I was in college, but I think I’ve learned a lot in my second year and I have a lot of good ideas that I want to implement next year.”
Shawna: “That’s great! But do you ever wish that maybe you would have used that physical education certification and coached instead of being trapped in a room with a bunch of 8- and 9-year-old children all day?”
Dalton: “Not really. I hope I haven’t given you the impression that I don’t like teaching here. I love working with the students. There are many parts of the job that are challenging for sure, and I wish I had more time to spend talking with some of my peers, but I feel like I’m starting to get the hang of it.”
Shawna: “I see. But, what if you could work with students and not have to grade papers every night? They have a physical education job opening over at Westview Middle School and with your experience working with students and your certification in that field, I think you would do great!”
(Notice how Shawna has ignored the possible sources of frustration Dalton has revealed and keeps pushing her solutions instead of addressing her performance concern and exploring its causes.)
Dalton: “It sounds like you want me out of the classroom. Why? I haven’t complained about grading papers and I don’t think I’ve ever expressed an interest in being a P.E. teacher. I know I haven’t been as good a teacher as I had hoped, but I really feel like I have gotten better this year.”
Shawna: “No, no, Dalton. I just wanted to make sure you were aware of some upcoming job openings and that you were enjoying teaching 3rd grade.”
Dalton: “I do.”
Well, that didn’t work out as planned. Shawna ensured this disastrous outcome by framing the issue as a job that Dalton did not want to begin with. If Dalton really was a bad teacher, Shawna missed the opportunity to discuss her performance concerns with him and begin interventions to improve instruction. She further sabotaged their relationship by throwing trust out the window.
Shawna implied a level of dissatisfaction with Dalton’s performance and urged him to apply for another job. When Dalton called her on it, she reversed course and acted as though everything was fine. Dalton will likely question Shawna’s motives and honesty for the next several years, and may well share this experience with a few co-workers.
5. Do not show any personal interest or investment in the person you are speaking with. The more impersonal or distant you seem the more you can alienate the person you are speaking with. Make them feel like no one else has ever made similar mistakes or faced the same struggles.
6. Compare the teacher’s behavior or shortcomings to their peers, not to an objective standard of expectations. For example, “Becky, when I compare your test scores to those on your grade level team, you consistently have the lowest scores.”
7. When having difficult conversations with teachers, fail to connect how the undesired behavior or lack of performance affects the building overall and the mission of the district. That way, the teacher is sure to think the difficult conversation is personal and remain blind to the larger impact of their performance.
8. For the sake of everyone’s comfort, don’t dig below the surface of the problem. Assume the undesired behavior, especially when it seems to occur frequently throughout the district, is a product of the current generation of new hires, bad karma, changes in society, or any other defensive explanation unrelated to deeper-level issues that need to be resolved within the school building or district. Discourage organizational reflection and deeper learning. Assume that systemic negative behaviors point to other people’s problems, not issues that need to be addressed within the district itself.
9. Delay feedback as long as possible when preparing to have difficult conversations with teachers. When leaders wait long enough, they can ensure subordinates see no connection between their behavior and the feedback it prompted. Remain clueless about effective methods of providing performance feedback.
While this last tip does not apply to having a difficult conversation with a particular teacher, it does offer a surefire way for the leader to feel like they have addressed an issue without really having to deal with the actual work of providing specific and difficult feedback.
10. Manage with a bullhorn to blast messages to the entire staff when the message is really intended for a particular person. Make a generalized statement at a staff meeting or send out an all staff email to address negative behaviors that are only applicable to a couple of staff members. When you do this you can ensure conscientious and effective teachers will wonder what they have done to draw criticism, while the target audience completely misses the message and assumes it was intended for someone else.
A Free Resource to Help School Leaders Have Difficult Conversations with Teachers
This is not an exhaustive list by far. There are hundreds of school leaders who follow the K12 HR Solutions Blog who have additional insights on how to make sure difficult conversations with teachers go terribly wrong. Feel free to join in the sarcasm and offer your comments below. Your insights can help other school leaders avoid mistakes that experienced school leaders have witnessed or experienced firsthand.
Now that you know how to wreck difficult conversations with teachers, allow us to share some insights on how to prepare for and navigate them successfully. This article is already long, and we want to provide school leaders with a useful guide (free of sarcasm) packed with advice from organizational behavior research and proven techniques to improve performance in the midst of difficult conversations with teachers.
Click on the link below to receive your free guide, and we encourage you to share a link to this blog article on your favorite social media platform to let other school leaders know about this resource, which we will only be offering for a limited time.
Click here to receive your free guide of
How to Open a Can of Worms:
A Principal’s Handbook for Having Difficult Conversations with Teachers.
Corbett, A.T., & Anderson, J.R. (2001). Locus of feedback control in computer-based tutoring: Impact on learning rate, achievement and attitudes. In Proceedings of ACM CHI 2001 Conference on Human Factors in Computing Systems. New York: Association for Computing Machinery Press.
Kluger, A.N., & DeNisi, A. (1996). The effects of feedback interventions on performance: A historical review, a meta-analysis, and a preliminary feedback intervention theory. Psychological Bulletin, 119 (2), 254-284.
Moss, S. E., Valenzi, E. R., & Taggart, W. (2003). Are you hiding from your boss? The development of a taxonomy and instrument to assess the feedback management behaviors of good and bad performers. Journal of Management, 29, 487–510.
Quantum fields and how they function dumbfound scientists, yet science understands how they work. Now it’s your turn.
Quantum fields are certainly one of the most fascinating subjects in the realm of physics, if not in all knowledge. I will illustrate what I mean with an example from Inventory of the Universe. In chapter 7, Human Life, we broached protein folding: how it’s possible for a chain of hundreds of amino acids to fold into an intricate protein machine in a millionth of a second. Amazingly, when I wrote that a few years ago I had no idea how it was possible.
(Audit of Humankind, chapter 5.10)
Well, quantum fields are the answer. Not that this is any less amazing; to the contrary. In this chapter we’ve discussed the controversy around science, but let’s not forget there’s been tremendous progress, and research has made discoveries with far-reaching implications. Quantum fields are key to what is unfolding with The Explanation. The next books are Origin of the Universe and Origin of Humankind. We cannot discuss origins without understanding what is at the very base of the pyramid: what substance, or, as it’s referred to, what fields, quantum fields, is the Universe composed of? I’ll be referring back to quantum fields in Origin of Humankind. Let’s understand what you and I are composed of.
As science and physics have progressed over the past couple of centuries, scientists have searched for answers, always trying to find the simplest common denominator that neatly ties everything together. The most fundamental well-known structure of matter is the atom. Here’s the Bohr model, which is what a lot of us are familiar with.
The Bohr model changed as electronic equipment peered deeper and deeper into the atom. Electron microscopes and the very sophisticated Large Hadron Collider revealed that protons and neutrons (in the nucleus of an atom) are made of even tinier particles called quarks. Quarks and electrons are the tiniest visible, measurable pieces of our Universe. They are known as fundamental particles. Because they are particles, I entitled this Sensory Science: quarks and electrons can be detected by our senses. They are obviously invisible to the naked eye, but we can detect them with the right equipment. This is in opposition to Spiritual Science, which I’ll evoke next week.
The next jump was understanding that the particles are not only point particles but ALSO waves. Frankly, this is where it begins to get weird. Why? Because particles are physical or material, whereas waves are not material; they’re invisible. The term quanta, from which we get quantity, means small packets, in this case, of energy. That’s how we now describe quarks and electrons. These minute packages of energy are the building blocks of our Universe.
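The wave-particle link described above can be made concrete with two textbook relations (a hedged sketch for reference; the videos may present this differently). The energy of a single quantum is set by its wave frequency, and a particle’s momentum corresponds to a wavelength:

```latex
% Planck–Einstein relation: energy of one quantum of frequency \nu
E = h\nu
% de Broglie relation: wavelength of a particle with momentum p
\lambda = \frac{h}{p}
% h is Planck's constant, h \approx 6.626 \times 10^{-34}\ \mathrm{J\,s}
```

The tiny size of h is why these packets of energy only show their wave character at atomic scales.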
Here’s another short video that graphically explains the basis of quantum physics.
This next video by David Tong, Quantum Fields: The Real Building Blocks of the Universe, is a must-see. It’s a lecture that brings quantum physics to an understandable level while covering a lot of ground, one you almost need to listen to a few times to absorb the important basic scientific knowledge he disseminates. If you haven’t heard of quantum fields, or are not aware of them, they will turn your world upside down.
Here are a few pointers:
The fundamental building blocks, out of which all the material universe is made, are nebulous and abstract. They are fluid-like substances, the quantum fields, which are spread throughout the entire universe and ripple in strange and interesting ways. The ripples and waves of this fluid get tied into little bundles of energy by the rules of quantum mechanics, and those bundles of energy are what we call particles, like electrons. Get your mind around that.
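One standard way to picture how ripples get “tied into bundles” (a textbook sketch, not something taken from the lecture itself): each vibrational mode of a field behaves like a harmonic oscillator, and quantum mechanics only permits such an oscillator a discrete ladder of energies:

```latex
% Allowed energies of a single field mode with angular frequency \omega
E_n = \left(n + \tfrac{1}{2}\right)\hbar\omega, \qquad n = 0, 1, 2, \ldots
```

The integer n counts the quanta, that is, the particles, present in that mode. Notice that even the empty state n = 0 carries energy ½ħω, which is one way to understand the “seething” vacuum Tong describes.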
David Tong explains that what we think of as a vacuum, when all air is removed and nothing remains, is actually seething with activity. There are quantum vacuum fluctuations. You’ll see an animation of this about 23 minutes into the video, and he says the fluctuations are very complicated. They can be measured, so we know they’re there. These are the quantum fields, and they are so complicated we don’t know where to begin explaining them. Just try to get your mind around the fact that a vacuum is full of activity; there’s plenty of stuff there.
The bottom line here is that the reality of quantum fields is a revolutionary idea regarding the understanding of what solid matter really is. Not solid at all. But rather vibrations of intermingled fields throughout the universe. Frankly, it’s a concept that is very difficult to grasp.
This last video (yes, there are a number today because they do an excellent job explaining such a complex subject) is also a must-see. Professor Al-Khalili has a whole series of videos in the Spark series and they’re all worth watching. This one is very practical because it shows us graphically quantum fields in action: how robins use their eyes to capture information from the Earth’s magnetic field to navigate during migration.
There’s the example of the method by which a tadpole transforms into a frog and how quantum fields play a role in breaking down the collagen in their tails. This is very similar to apoptosis, or the loss of cells of the web-like hands of a fetus to form the fingers, that I discussed in Inventory of the Universe. The most amazing part of this video, in my opinion, is at 37 minutes, when discussing photosynthesis and how energy particles from the sun travel across a plant by wave smearing. Instead of blindly trying to find their way to their destination, they ripple in every direction at the same time. Each energy particle follows ALL paths to a cell at the same time. You can’t say it goes from A to B in a straight line; it doesn’t. The featured image above shows this quantum field wave and how it spreads out in every direction at the same time. Consider that these vibrational waves are jiggling billions of atoms and molecules simultaneously.
David Tong ends his lecture by displaying a short, but complicated, mathematical equation. He says that every single experiment ever done by scientists can be explained by this equation. That it’s the best, most complete presentation of the universe we have today. As I wrote above, we want to encapsulate our understanding in the simplest presentation possible.
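For the curious, the equation in question is the Lagrangian of the Standard Model. Tong does not write it out in the excerpt above, so treat what follows as an illustration rather than a transcription of his slide; the compact schematic form often quoted (with many details hidden inside the symbols) is:

```latex
\mathcal{L} \;=\; -\tfrac{1}{4}\, F_{\mu\nu} F^{\mu\nu}
\;+\; i\,\bar{\psi}\, \gamma^{\mu} D_{\mu}\, \psi
\;+\; \bar{\psi}_i\, y_{ij}\, \psi_j\, \phi \;+\; \mathrm{h.c.}
\;+\; \bigl| D_{\mu} \phi \bigr|^{2}
\;-\; V(\phi)
```

Reading left to right: the first term describes the force-carrying fields (photons, gluons, and the W and Z bosons); the second describes the matter particles and their interactions with those forces; and the remaining terms describe the Higgs field and how particles acquire mass through it.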
But, scientists know, and David Tong admits, there is much that is not understood, many problems that have not been resolved. It seems the deeper we go, the more complex it gets. And indeed that’s the case. Next week we’ll see there’s a spiritual science dimension that most sensory scientists are not willing to assume.
In fact, while preparing this post I ran across the following comment on the Physics Stack Exchange website. It is directly related to vibrating quantum fields.
There is a theory, that all matter is energy in vibration in its core. I have read a lot of articles and watched a lot of videos and documentaries about that topic and they all state, that there is basically no matter and all is vibration. What we call matter is basically the energy in the low states of vibration.
Vibration is energy. Energy is vibration. What does vibrate then?
Sound is the vibration of the air molecules (mechanical wave of pressure and displacement, through a medium). That means, that the sound is the vibration of matter (air). If matter is also vibration, then air must be a vibration of something else (because how can vibration exist without a medium?). That means, that sound is the vibration of the vibration of something, which should also be a vibration… But what does actually vibrate?
The moderator of the physics Q and A section closed comments on this question.
This question appears to be off-topic. The users who voted to close gave this specific reason:
- “We deal with mainstream physics here. Questions about the general correctness of unpublished personal theories are off topic, although specific questions evaluating new theories in the context of established science are usually allowed. For more information, see Is non mainstream physics appropriate for this site?.” – knzhou, Community, John Rennie, David Z
Yes, it is probably not mainstream physics. But what if we asked the question: what is dark matter or dark energy? Just like vibrations and quantum fields, we know that dark matter and dark energy are there, but we’ve never seen or measured them directly. They are mathematical constructs that fit into the quantum physics equations. They beg answers which, today, we are not able to supply. But tomorrow there will be answers.
This blog post is an excerpt from chapter 5.10 of the book Audit of Humankind.
Manatees are as much a part of Florida as alligators, Miami Vice and Key Lime Pie. Nowhere else in the USA can you come as close to these animals as here. But what are the best spots to observe manatees? When can you see them, and which places let you swim with manatees? We will tell you in this article.
Manatees are Florida’s biggest marine mammals and possess some remarkable features. Here are the most important manatee facts:
The name “manatee” denotes a particular kind of “Sirenia”, commonly known as sea cows. More precisely, when speaking about manatees here, we mean the West Indian manatee, whose range extends from the southern USA through the Caribbean and as far south as Brazil and Venezuela.
The name stems from the pre-Columbian Taíno word “manatí”, meaning “breast”. This name choice can be explained by the mammary glands manatees have under their flippers.
Adult manatees can reach 8.2–13 ft in length. Their weight normally ranges from 440 to 992 lbs. However, there have been reports of manatees weighing up to 1,323 lbs.
Manatees living in the wild have no natural predators. Thus, the animals can live up to 60 years. In captivity, their lifespan is even longer. A good example is Snooty, a 69-year-old male manatee that lived at the Parker Manatee Aquarium.
Manatees are strict herbivores that feed on seaweed, grasses and leaves. In the past, they were suspected of eating fish, but this turned out to be false. In order to feed their enormous appetite, manatees have to consume 4–10% of their body weight in food every day.
Fun Fact: “Did you know that manatees spend up to 50 % of their days sleeping? The animals have to come to the surface to breathe every 20 minutes. However, they are able to do so without waking up.”
Manatees usually lead a solitary life and only sporadically gather in groups – for example, at the warm springs in Florida’s interior. An exception is the mating season when multiple males are competing for one female. Manatees communicate with their peers via squeaking and whistling sounds. The animals have an excellent sense of hearing.
After a gestation period of 12 to 14 months, manatee cows mostly give birth to one calf, which they nurse for up to two years. Twins also occur, but they are rare. Manatee babies can weigh up to 66 lbs and measure 3.3 ft in length. They are able to swim right after birth but are often carried on their mother’s back.
Manatees in Florida are one of the biggest tourist attractions. However, the species remains endangered. There are a number of threats to these animals, prompting the state to react with strict conservation efforts.
Florida’s native inhabitants used to hunt manatees for their meat and fat. Even though hunting the animals is illegal now, the human impact on manatees is significant, and sea cows are facing numerous threats.
As an endangered species, manatees are protected in Florida. To save them from extinction, the state created special zones that are off-limits for boats. Hunting, disturbing or feeding manatees is illegal in Florida and can result in hefty fines or even jail time.
These conservation efforts have already proved effective: in the last 25 years, Florida’s manatee population has increased by 400%.
Even though you can always see manatees in Florida, the best time to go manatee watching is during the winter months from December to March. Then, many manatees from northern parts of the US come to Florida for the warm water. They do not only swim along the coast but also make their way upstream to warm springs. In summer, the animals are only rarely found in fresh water. An exception is the Wakulla River.
Tip: “The best time for manatee watching is the early morning hours, when the animals are particularly active.”
Many visitors to the Sunshine State are not content with watching the manatees from land. Swimming with manatees is a popular activity that is offered on the Crystal River. There, the animals are known to gather around warm springs.
First, your guide will take you to the river by boat and scan the water for ripples that indicate the presence of manatees. Then, you put on your snorkel and dive right in. Manatees are curious creatures that often swim close to humans – especially if they are not moving. Getting kissed by a manatee is definitely an unforgettable experience.
However, there are some important rules to follow when interacting with manatees.
If you want to swim with manatees, you should choose a legitimate and experienced company, e.g. Florida Manatee Tours, Gulf Coast Expeditions or Nature Coast Manatee Tours.
In addition to manatees in zoos and aquariums, there are about 6,000 animals living in the wild in Florida. Their habitat includes the whole state. However, in the following places the chances of seeing manatees are especially high:
If you want to see manatees in Miami, you have come to the right place. Nowhere else in the state are there more spots for manatee watching. The following places are particularly worthwhile:
The “River of Grass” offers ample opportunities to watch wild animals – among them the iconic sea cows. Especially in the winter months, manatees in the Everglades are a common sight. You have the choice: Do you want to watch the animals on an airboat tour or on a hiking trip?
Guests in Everglades City should try their luck at the canals near the Gulf. A well-kept secret among manatee fans is the Port of the Islands Resort. The port of Flamingo is another perfect spot to see manatees in the water.
There is no better place to see manatees in Ft Myers than the Lee County Manatee Park. There, the animals frolic in the warm waters of a power plant. Multiple observation platforms, picnic tables and canoe rentals are available at the Manatee Park Fort Myers.
If you want to see manatees in Fort Myers Beach, you should head for the secluded Lovers Key State Park. There, you can rent a kayak and observe the animals on the water.
Like in other cities on the Gulf of Mexico, you might see manatees in the canals of Naples if you are lucky. The animals come here mostly in the summer and autumn months.
However, if you want to increase your chances of spotting manatees in Naples, you should book a tour with Manatee Sightseeing Eco-Tourism Adventure. This company knows the perfect spot for manatee watching, and will even give you your money back if you do not see any manatees on the tour.
If you want to see manatees in Tampa, you should take a trip to the Apollo Beach Power Plant. Granted, a power plant might be the last spot where you would expect manatees; however, the animals love swimming in the warm water and can be seen from the Manatee Observation Tower.
And you do not have to drive far: Even when taking a stroll on the Tampa Riverwalk, you should keep your eyes open. It is not rare for manatees to stick their snouts out of the water of the Hillsborough River.
Do you want to combine manatee watching with a boat trip in the Tampa Bay? Then, Anna Maria Island is the perfect place for you. This barrier island is one of the state’s best spots to observe the animals. It is no coincidence that the area where Anna Maria is located is called Manatee County.
Speaking of which: visitors in the town of Bradenton can see manatees in the Parker Manatee Rehabilitation Habitat. Here, the sea cows live in 60,000 gallons of fresh water. The Habitat focuses on treating and studying the animals. In addition, guests can watch manatees above water and through glass walls.
This nature reserve with the difficult name is a hotspot for kayaking. The crystal-clear water makes it easy to see manatees in Weeki Wachee. Especially in winter, the animals seek refuge around the park’s warm springs, so that is where you should steer your boat.
In general, fresh water springs are a favorite winter refuge for manatees. In addition, this State Park has a remarkable characteristic: Manatees in Wakulla Springs stay for the whole year. What is more, females use this place to give birth to their young.
Blue Springs State Park is a popular recreational area north of Orlando. Big groups of manatees are a common sight in the shallow, crystal-clear waters of these springs. The winter months in particular are a perfect time to see manatees in Blue Springs State Park. Oftentimes, injured manatees are brought here for rehabilitation. From the wooden boardwalk, you will enjoy amazing views, and if you are lucky, you might see females with their calves.
At the moment, Ellie Schiller Homosassa Springs Wildlife State Park houses two manatees that live there permanently. The park also serves as a rehabilitation center for injured animals that are released back into the wild after treatment. In winter, you can observe even more manatees in Homosassa Springs.
Thanks to their warm, tropical waters, the Florida Keys are an ideal place for manatees, at the beach as well as in marinas. In winter, you can even see them in the canals of Key West. Generally, the best thing is to go where water temperatures are the highest; manatees in the Florida Keys prefer these spots.
Of course, you cannot only observe the marine mammals in the wild. If you want to have a hundred percent chance of seeing manatees, you should visit one of Florida’s zoos and aquariums that feature manatees.
Manatees in the wild can live for up to 60 years. However, the oldest manatee in captivity reached the age of 69.
The short answer: everywhere on the coast, even in marinas, canals and the basins of power plants. As a rule of thumb: Bodies of water that are warmer than the nearby coast provide a good chance of seeing manatees.
It depends on where you are staying. On the coast, you can theoretically see the animals the whole year round. However, keep in mind that only about 6,000 manatees remain in the wild and that they are spread across a large area. In winter, manatees tend to concentrate in Florida’s warmer waters, where they can easily be observed.
If you want to swim or snorkel with manatees, you can take a trip to the Crystal River. King’s Bay is the only place in Florida where you can legally swim with the animals, and there are many tour providers.
No. As herbivores, manatees do not harm humans. Should a manatee swim close to you, just remain calm. The animals are very curious and like to feel foreign objects with their snouts. Nonetheless, you should not swim towards manatees, but keep your distance in order to not disturb them. | <urn:uuid:1dad175e-4611-4d78-abd8-59b3d7e7ac65> | CC-MAIN-2020-50 | https://vacation-in-florida.net/sightseeing-tourist-attractions/wildlife-zoo/manatee/ | s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141188146.22/warc/CC-MAIN-20201126113736-20201126143736-00004.warc.gz | en | 0.954586 | 2,481 | 2.59375 | 3 |
The 2009–2011 project titled 'Cereal fibre consumption and vascular function in overweight individuals' examined the effect of a wholegrain diet on the vascular function of 17 males aged 40 to 65 over a 12-week period.
The literature review in year one demonstrated that wholegrain cereal products give the greatest protection against cardiovascular disease, compared with the combination of refined and wholegrain cereals. The protective effects of wholegrains are likely to originate from the synergistic action of compounds contained within wholegrain cereals, including fibre, but also a number of other bioactive components.
The human intervention study in year two aimed to investigate the effect of supplementing the diet with three servings (48g) of wholegrain per day on flow-mediated dilation (FMD) over a 12-week period. The secondary objective was to assess the correlation between FMD measurements and other markers of cardiovascular function.
"The key idea was to take a sample of the population with risk factors for cardiovascular disease and give them three daily servings of wholegrain to see if following US dietary guidelines would benefit them in the short to medium term," said Sarah Kuczora, a nutritionist in nutrition research at Leatherhead.
The findings indicate a significant association between FMD and two markers of inflammation (tumour necrosis factor-α and C-reactive protein), diastolic blood pressure and the cholesterol:HDL ratio.
Kuczora believes this association would help manufacturers to carry out significant trials to support any health claims they want to make. This would enable them to gather scientific evidence in line with the requirements of the European Food Safety Authority (EFSA).
"It centres around the link between the blood biomarkers, which appear to be associated with FMD," said Kuczora. "It's early research and it's not been investigated before, but significant associations would be a real help."
FMD is not something that's been measured widely in wholegrain research, but now EFSA has approved it as a biomarker for vascular function. The study introduces manufacturers to the idea of FMD, which enables them to carry out more low-cost studies, according to Kuczora.
"Part of this research intended to reduce the costs of trials by using cheaper blood biomarkers and if you see a positive impact, you can design a study," she said.
"Wholegrain is still an interesting area to research; it's just a case of finding the right study design," said Kuczora. "No claims have been approved for fibre and wholegrain and vascular function. There are a few surrounding beta-glucan and cholesterol, but it's a limited area."
For manufacturers, it's all about knowing EFSA's requirements and having a firmly designed protocol. All of this increases the chance of getting health claims approval. | <urn:uuid:2cbc5735-bb2f-47cd-a0df-59de5d7e61e8> | CC-MAIN-2015-06 | http://www.foodmanufacture.co.uk/Sectors/Healthy-foods/Get-the-heart-of-wholegrain-health-claims | s3://commoncrawl/crawl-data/CC-MAIN-2015-06/segments/1422115862015.5/warc/CC-MAIN-20150124161102-00055-ip-10-180-212-252.ec2.internal.warc.gz | en | 0.95043 | 589 | 2.84375 | 3 |
E-cigarettes are not safer and healthier than regular cigarettes. Both of them are bad for your lungs, heart and overall health.
If you were among those who considered e-cigarettes a safer alternative to tobacco cigarettes, we are happy to bust this misconception. Many people switched to e-cigarettes, despite these being expensive, thinking that this would minimize health damage. Not anymore! According to new research, e-cigarettes are just as hazardous to health as tobacco ones. The electronic version poses nearly the same health risks as combustible cigarettes. This finding can prove to be an eye-opener for a lot of people who used to consider e-cigarettes healthy.
Know From The American Heart Association
Cigarettes directly affect the lungs and indirectly cause damage to other vital organs of the body, including the heart. Smoking is one of the most dangerous lifestyle habits, and it is more harmful than drinking alcohol. It not only damages the lungs but also triggers serious health problems that slowly push a person towards death. After lung diseases, heart ailments are the most common among smokers. Many people switched to electronic cigarettes to minimize this risk, but health specialists have a different opinion, as per the study published in the Journal of the American Heart Association.
"Many people believe e-cigarettes are safer than combustible cigarettes. In fact, most e-cigarette users say the primary reason they use e-cigarettes is that they think e-cigarettes pose less of a health risk," says the lead author of this study, Jessica L. Fetterman, a research scholar and assistant professor of medicine at Boston University School of Medicine.
"Meanwhile, the evidence from scientific studies is growing that e-cigarettes might not be a safer alternative to smoking traditional cigarettes when it comes to heart health. Our study adds to that evidence," she added.
The team studied more than 400 men and women aged between 21 and 45 years without any heart-related problems. The randomly selected participants included non-smokers, cigarette smokers, e-cigarette users and dual users. The team found that the arteries of the heart were equally stiff in tobacco cigarette users and electronic cigarette users.
As told by Fetterman, "Stiffening of the arteries can cause damage to the small blood vessels, including capillaries, and puts additional stress on the heart, all of which can contribute to the development of heart disease.”
“We studied measures of blood vessel function in e-cigarette and dual users who had been using e-cigarettes for at least three months. Most studies to date have looked at the impact of the acute use of e-cigarettes on blood vessel function measured right before and after use, whereas our study evaluated blood vessel function in chronic e-cigarette use among young, healthy adults," she added.
The Equal Damage
The research also found that endothelial cells, which line the blood vessels, were damaged in both cases. Because of this damage, the body cannot produce enough nitric oxide, which protects the heart. Other components of the cells, namely proteins and DNA, were damaged as well.
There is no evidence that switching to electronic cigarettes decreases the risk of heart problems, including cardiovascular injuries.
Writing ideas for the unstructured homeschool classroom
WHENEVER I talk to parents who homeschool, I’m surprised at how many still treat the experience as if their children were in a traditional, structured classroom.
As a homeschooled child myself, I remember my folks being pretty adamant about making the process one of fairly unstructured personal and academic discovery. In terms of writing, my mother took full advantage of the fact that we didn’t have to be glued to our desks while attending class!
No matter your homeschool style, try these four dynamic writing prompts to inspire your kids as they learn the basics of writing and expressing themselves in a perhaps less-than-traditional way:
Have your child pretend she’s a reporter by interviewing people in the community.
It’s important for children to learn early on that the process of writing is deeply connected not just with their own thoughts, but with the opinions of other people. The best way to learn this is by talking to others about a specific topic or issue.
For example, ask your students to interview older neighbors about what life was like when they were children, or talk with a community worker about his or her job, and then compile these interviews into an article, story, or essay.
Go on an outdoor adventure, and then have children write a descriptive essay about what they saw and felt.
Inexperienced writers can forget to include descriptions of scene and setting in their work, often because they’re stuck indoors where they must rely on their own imagination.
To ameliorate this problem, consider taking them to a local park, zoo, or wildlife sanctuary. Have them take notes about their surroundings, and then later write a short essay or story containing details about what they saw, smelled, heard, and felt.
Allow kids to choose their own books and write reports or reviews about them.
Too often, kids become disenchanted with writing because they can’t really pick what they’d like to write about. The same goes for reading. While most standard reading curricula are well intentioned, they can’t account for young readers’ diverse tastes.
Put the ball in their court by encouraging them to pick their own books and write summaries, reviews, or book reports about their selections. If a particular child needs boundaries, you might give him three books from which to choose the one he wants.
Invite children to write a short play and perform it together.
Whether they’re young or old, it can be very difficult for writers to master an ear for spoken language. To improve this specific skill in a fun way, have your kids write and act out a short, five- or ten-minute play.
When they can hear out loud what they’ve written on paper, they start to understand how to make their writing sound more natural and conversational. It’s a great way to improve your students’ speaking abilities as well.
When you aren’t stuck in a traditional classroom, the sky is the limit as far as learning goes. Take advantage of the opportunities afforded by homeschooling, and think of as many different, off-the-beaten-path learning methods as you can!
This guest post is contributed by Barbara Jolie, who enjoys writing about trends in the academic world. Even when she’s not blogging, Barbara is always contemplating and considering issues concerning education and modern society. You can reach her at firstname.lastname@example.org. | <urn:uuid:047093bd-1dbc-466f-830c-252582aee68f> | CC-MAIN-2017-26 | https://writeshop.com/writing-ideas-for-the-unstructured-homeschool-classroom/ | s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128320823.40/warc/CC-MAIN-20170626152050-20170626172050-00028.warc.gz | en | 0.970675 | 736 | 3.375 | 3 |
The cultural impacts of tourism on the host communities
The impact that tourism has on the cultural lives of communities is one of the most important issues debated by tourism researchers and academics today. There is a growing concern that tourism development is leading to destinations losing their cultural identity by catering for the perceived needs of tourists. Although they take longer to appear, the cultural consequences of tourist activity have the potential to be much more damaging in the long term than environmental or social effects. In many countries, tourists are not sensitive to local customs, traditions and standards. Offence is given without intent, as tourists are short-stay visitors carrying with them their own cultural norms and behavioural patterns. They are usually unwilling to change these norms for a temporary stay, and may be unaware that these norms are offensive to the host community. Commercialisation of traditional cultural events and customs is leading to 'fake folklore' performed for the tourists but, more importantly, with no cultural value for the local population or the visitors. The issue is the potential conflict between economic and cultural interests, leading to culture being sacrificed for reasons of promoting tourism, i.e. creating an additional economic value at the price of losing a cultural value.
Leonard J. Lickorish, Carson L. Jenkins (1997), An Introduction To Tourism
Tourists are sometimes presented with a commercialised and stylised version of a destination's cultural identity, which may lack authenticity. This is currently happening in parts of South America, for example. The region is becoming ever more popular, and in places such as Paraguay the cultures and traditions are in danger of disappearing. The native Indians have become mercenary, changing their traditional dances for the tourists' benefit. The dancers now put on a show for the tourists and are dressed in a...
Bibliography:
· http://www.biodiversity.ru/coastlearn/tourism-eng/why_socioimpacts.html
· Ray Youell (1998), Tourism: An Introduction
· Leonard J. Lickorish, Carson L. Jenkins (1997), An Introduction to Tourism
· D. Pearce (1996), Tourism Development
· Geoffrey Wall, Alister Mathieson (2005), Tourism: Change, Impacts and Opportunities
The Radiation Laboratory in Berkeley, California, was the birthplace of particle accelerators, radioisotopes, and modern big science. This first volume of its history is a saga of physics and finance in the Great Depression, when a new kind of science was born.
Here we learn how Ernest Lawrence used local and national technological, economic, and manpower resources to build the cyclotron, which enabled scientists to produce high-energy particles without using high voltages. The cyclotron brought Lawrence forcibly and permanently to the attention of the leaders of international physics in Brussels at the Solvay Congress of 1933. Ever since, the Rad Lab has played a prominent part on the world stage.
The book tells of the birth of nuclear chemistry and nuclear medicine in the Laboratory, the discoveries of new isotopes and the transuranic elements, the construction of the ultimate cyclotron, Lawrence's Nobel Prize, and the energy, enthusiasm, and enterprise of Laboratory staff. Two more volumes are planned to carry the story through the Second World War, the establishment of the system of national laboratories, and the loss of Berkeley's dominance of high-energy physics.
J. L. Heilbron is Class of 1936 Professor of History and History of Science at the University of California, Berkeley. He is the author of Dilemmas of an Upright Man: Max Planck as Spokesman for German Science (California, 1986), among many other books.
Since 1985, Robert W. Seidel has been the Administrator of the Bradbury Science Museum at the Los Alamos National Laboratory. He holds a Ph.D. in history from the University of California, Berkeley, and has written numerous articles on the history of the DOE national laboratories and on the history of military laser research and development.
"This is a first-rate contribution to the history of science and—in view of the central importance of physics for modern civilization—to the history of the twentieth century in general."—Spencer R. Weart, Center for History of Physics at the American Institute of Physics
We cannot reverse the clock and return to the medieval era, but we can re-look at the architecture of the rural past and learn lessons.
None of us knows exactly when humans, after long being cave dwellers, started protecting themselves by constructing houses. Surely, this saga of sheltering the self is as exciting as the story of civilisation itself, with shelters of a million types across the globe. These stories showcase wide possibilities in the manipulation of land, extended use of resources, and the human potential to modify contexts and enrich the idea of living.
We may assume that the act of shelter-making must have started to seek safety from wild animals, then adding the idea of protection from the vagaries of nature. As the nomadic life gave way for the settled, need for storage and areas for specific activities must have emerged. These four — safety, storage, activity and protection — might have defined the basic home which continue to be the essence even today, enabled through design, materials and construction.
Today we have deviated from the original, contextual, vernacular approaches to shelter making. We build comfortable, complex and luxurious homes that look nothing like the hut-like historic houses.
Unfortunately, today we claim more embodied energy, spend more money and consume more resources than in the past. Given this shift and the context of construction industry being among the major contributors for greenhouse gas emissions, it becomes relevant to re-examine our approaches from the criteria of climate change.
We cannot reverse the clock and return to the medieval era, but can we re-look at the architecture of the rural past and learn lessons? Of course we can. Among those lessons, building with grass and straw appears to be a universal practice, still relevant in India. Modern architects have been rediscovering this wonder material, even if mostly for resorts, roadside facilities and temporary structures.
Grass or straw, as an individual strand, has neither the strength nor the durability to shape a shelter, but hundreds of strands twisted together like a rope become a linear fibrous material that acts like a beam. In a thick form, it becomes a mat-like surface to roof a space or form the wall of a room. If dense enough, a grass surface becomes a water-proof layer that withstands rain for at least a decade. Being porous in nature, grass roofs breathe out hot air. They keep the indoors warm during winter and cool during summer. And finally, when old and rotten, grass joins the mother earth again!
Of course, all these virtues do not make grass a flawless material. Across long spans, it may sag and eventually crack when dried. Fire hazard is always a risk. Local availability, both of materials and of skilled workers, is another challenge which, if unmet, rules out grass structures.
We may list the problems of a local and traditional material and rule it out. Alternatively, we can also solve those problems, look at the positive qualities and build on that strength. It is the latter we need to follow today.
In the last few days, we’ve all heard about fruits, whole grains, watermelon, and saturated fat. But do you know what they are and how you can get the most out of them? There’s a better way! Read on for more tips! Healthy eating is all about balance. Instead of eating the same unhealthy food every day, you can replace it with something healthier and incorporate more physical activity. Here are a few examples of foods to eat on a daily basis.
Aside from providing essential vitamins and minerals, fruits are high in fiber and antioxidants. In fact, eating a variety of fruits regularly may reduce your risk of developing many chronic conditions. These foods can also improve your digestive health, reduce your risk of constipation, and promote a healthy heart. Here are some examples of fruits to add to your daily diet:
When you eat cereal grains, look for the “whole grain” label. Whole grains contain the endosperm, germ, and bran of the seed. Refined grains are stripped of the germ and bran, leaving only the endosperm. You can get more nutritional benefit from whole grains by eating them at least three times a day. But how can you know which whole grains are the best choice? Read on to learn more.
Many studies lump saturated fats together, but the truth is that different types of saturated fat play different roles in the body. People don’t consume saturated fats in isolation, however. Instead, they select foods with a variety of different fats. The same type of saturated fat may have different effects depending on your diet. Dairy and poultry contain neutral amounts of saturated fats, while certain vegetable oils are beneficial to the heart.
Although most people think that consuming watermelon is unhealthy, this fruit is loaded with beneficial antioxidants and anti-inflammatory compounds. For instance, watermelon is high in L-citrulline, an amino acid precursor to arginine, which is required for protein synthesis. Also, watermelon contains citrulline, an amino acid that smooths out blood flow and reduces blood pressure. All of these benefits make watermelon a healthy choice for people of all ages and health conditions.
Rooibos is a herbal tea that has health benefits similar to green tea. The primary difference is its taste and colour. Although rooibos is a natural product, it’s not a substitute for green tea. Most people can safely drink rooibos tea without any side effects. However, before making the switch, it’s important to know what exactly rooibos tea is and how it can be used as a health drink.
REVIEW OF THE RELATED LITERATURE
This chapter presents, in the following sequence, the various literature and related studies critically reviewed by the researchers in the course of conducting this study.
The complex process of thinking is divided into higher order thinking and lower order thinking. Higher order thinking is used when someone relates stored and new information to solve novel and difficult problems or to generate new ideas. Higher order thinking skills include contextualization, metacognition, creativity, insight, intelligence, problem solving, and critical thinking. Lower order thinking, by contrast, is used to carry out daily routines and mechanical processes. Critical thinking means applying criteria, analyzing, inferring, and explaining and developing arguments (King, Goodson and Rohani, 2009; Pearson, 2011).
Many authors talk about Higher Order Thinking Skills (HOTS). King and others (2009) traced their historical development and mentioned several key movers in this regard: Dewey explained how thinking is evoked by problems, and Bruner argued that inquiry is necessary in the learning process. Piaget clarified that these skills are needed in the last developmental stages of thinking; on the other hand, Bloom explained how HOTS require previous levels of knowledge. Gagne put HOTS in the top of his taxonomy, and Marzano situated these skills as a dimension of learning. Glaser declared HOTS are the type of thinking for problem solving, and Vygotsky affirmed that HOTS are necessary to move into the zone of proximal development. Furthermore, Haladyna sustained that HOTS are a level of mental processes, and Gardner declared HOTS are developed by our multiple intelligences (as cited in King et al., 2009). Definitely, each theory posits a different way of understanding thinking and how to develop HOTS. There are also theories about the different skills themselves. However, one of the most important skills is critical thinking, divided also into other skills such as analyzing and solving problems, as well as creating new arguments (Beyer, 1990; Pearson, 2011).
In fact, critical thinking has been studied by different disciplines. Philosophers like Bailin, Ennis, Lipman, McPeck, and Paul focused on what people are capable of doing under the best circumstances to get at the truth. Psychologists such as Halpern, Sternberg, and Willingham focused on how people actually think. Finally, educators like Bloom and Marzano explained critical thinking based on research about their own experience in the classroom and observation of student learning (King et al., 2009; Lewis and Smith, 1993; Pearson, 2011).
Critical thinking skills and education have been researched in different fields since the age of Socrates (Fahim, 2012). In the last fifteen years, however, the majority of studies have added pedagogical elements to improve these skills. Other studies have tried to identify whether critical thinking is related to demographic characteristics, cognitive aptitudes, or environment. Finally, a few studies have described how to demonstrate and assess critical thinking in the classroom.
Critical thinking is an important and necessary skill because it is required in the workplace; it can help people to deal with mental and spiritual questions, and it can be used to evaluate people, policies, and institutions, thereby avoiding social problems (Hatcher and Spencer, 2005). Critical thinking is considered important in the academic fields because it enables students to analyze, evaluate, explain, and restructure their thinking, thereby decreasing the risk of adopting, acting on, or thinking with a false belief. However, even with knowledge of the methods of logical inquiry and reasoning, mistakes can happen due to a thinker’s inability to apply the methods or because of character traits such as egocentrism. Critical thinking also gives students the ability not only to understand what they have read or been shown but also to build upon that knowledge without incremental guidance. It further teaches students that knowledge is fluid and builds upon itself. It is not simply rote memorization or the ability to absorb lessons unquestioningly.
According to a review of critical thinking studies conducted by Pascarella and Terenzini (1991), attending college has a positive influence on the development of students’ critical thinking. It is important to develop students’ graduate attributes across the curriculum and across the three years of a degree. Hughes and Barry (2010) suggest that assessing these attributes is critical in ensuring that students understand their importance. Students need to grasp that it is essential for them to develop a critical approach in order to be skilled employees who are able to adapt to new situations in the workplace (Forrester, 2008). It is especially important that students develop their meta-cognitive skills in their application of critical thinking in order to be successful at university (Jones and Ratcliff, 1993; Johnson, Archibald, and Tenenbaum, 2010).
Critical thinking is a necessary skill all students need to develop in order to fully understand information presented in lessons (Lambert and Cuper, 2008). Students that fail to develop their critical thinking skills accordingly typically suffer with lower academic grades (Quitadamo, Faiola, Johnson, and Kurtz, 2008). Understanding the disconnection between the information presented and the students’ ability to deduce the information is a vital component to change teaching methods and approaches in the classroom (Dewey and Bento, 2009; Lucariello, 2012).
Duran and Sendag (2012) define critical thinking as relating and drawing conclusions from notions and events. Furthermore, these authors say it involves different cognitive processes such as problem solving, reflecting, and criticizing, all of them skills necessary to live in today’s world. These authors say that thinking begins with a physical or psychological inconvenience stemming from lacking the solution to a problem, whose solution then becomes the objective for the individual. Higher order thinking skills, like critical thinking and problem solving, are considered necessary skills for 21st century individuals, and all educational institutions should be developing them. Learners need higher order thinking skills if education is to make any sense. Shannon and Bennett (2012) cite a number of authors who observed that critical thinking evolves through the following stages: (1) the application level, with two sub-levels, namely giving an example and applying concepts; (2) the analysis level, with six sub-levels, namely interpreting data, classifying, interpreting diagrams, making comparisons, drawing conclusions, and making inferences; (3) the synthesis level, with five sub-levels, namely developing hypotheses, designing experiments, developing models, making predictions, and using the writing process; and (4) the evaluation level, with two sub-levels, namely evaluating and making judgments.
Educators have long been aware of the importance of critical thinking skills as an outcome of student learning. More recently, the Partnership for 21st Century Skills has identified critical thinking as one of several learning and innovation skills necessary to prepare students for post-secondary education and the workforce.
Lewis and Smith (1993) ask whether there is a difference between lower-order and higher-order thinking skills. In fact, the term “higher order” thinking skills seems a misnomer in that it implies that there is another set of “lower order” skills that must come first. Newman (1990), in order to differentiate between the two categories of skills, concludes that the lower skills require simple applications and routine steps. In contrast, according to Newman (1993), higher order thinking skills “challenge students to interpret, analyze, or manipulate information”. However, Newman argues that the terms higher and lower skills are relative: a specific subject might demand higher skills from one student while requiring only lower skills from another. Splitting thinking skills into two categories helps educators develop activities that can be done by slow learners before they move on to more sophisticated skills, as well as activities that can be performed by fast learners placed at their appropriate level. Furthermore, this split helps educators construct remediation programs for slow learners consisting of drill and practice. Through remediation by repetition, students are expected to master the lower order thinking skills, which will help them in later stages to master the higher order skills.
Moreover, breaking skills down into simple and higher-level skills helps curriculum developers design subject content accordingly, focusing on basic skills in the lower grades and building students’ competences and higher-order thinking skills in later grades. Educators consider higher-order thinking to occur when a student obtains new knowledge, stores it in memory, and then correlates, organizes, or evaluates this knowledge to achieve a specific purpose. These skills include sub-skills such as analysis, synthesis, and evaluation, which are the highest levels in Bloom’s cognitive taxonomy.
In spite of efforts to better define the purposes and role of laboratory work in science education, research has shown that teachers see laboratory activities as contrived (Tan, 2008; Tobin, 1986). In general, teachers cannot see laboratory activities as conceptually integrated with theoretical science lessons. In addition, teachers fail to understand that laboratory activities may provide opportunities for students to produce new knowledge through scientific investigations. According to a research conducted by Kang and Wallace (2005), teachers perceive laboratory work solely as an activity for the purpose of verification. Researchers have also uncovered that teachers do not think of the laboratory as an environment where scientific knowledge claims are discussed.
Different reasons have been shown for the problems relating to laboratory work (Tan, 2008). According to Bencze and Hodson (1999), problems in laboratory work arise when students blindly follow the instructions of the teachers. Some researchers, on the other hand, claim that the laboratory, instead of being a place for science and experiments, has become a place where tasks set by the teacher are carried out. No attention is given to the methods or purposes during laboratory work, only the set tasks are carried out (Hart et al., 2000; Jimenez-Aleixandre et al., 2000). Wilkinson and Ward (1997a; b) have connected the problems with laboratory work to a poor evaluation of the purposes of the tasks undertaken in the laboratory.
Tobin (1990) suggested that meaningful learning is possible in the laboratory if the students are given opportunities to manipulate equipment and materials in an environment suitable for them to construct their knowledge of phenomena and related scientific concepts. This allows students to explore science concepts and understand them better compared with a plain discussion in the classroom. Four years later, Roth (1994) suggested that although laboratories have long been recognized for their potential to facilitate the learning of science concepts and skills, this potential has yet to be realized. Tobin (1990) wrote that “Laboratory activities appeal as a way of allowing students to learn with understanding and, at the same time, engage in a process of constructing knowledge by doing science”.
“Learning by Doing” is about the history of experimentation in science education. The teaching of science through experiments and observation is essential to the natural sciences and their pedagogy. These have been conducted both as demonstrations and as student exercises. The experimental method is seen as giving the student vital competence, skills, and experiences, both at the school and at the university level (Heering and Wittje, 2010).
Active learning can make the course more enjoyable for both teachers and students, and, most importantly, it can cause students to think critically. For this to happen, educators must give up the belief that students cannot learn the subject at hand unless the teacher covers it. While it is useful for students to gain some exposure to the material through pre-class readings and overview lectures, students really do not understand it until they actively do something with it and reflect on the meaning of what they are doing (Duron et al., 2006).
Proponents and Views of Higher Order Thinking Skills
Jean Piaget’s View
According to Piaget, the developmental stages are the key to cognitive development. School-age and adolescent children develop operational thinking and the logical and systematic manipulation of symbols. As adolescents move into adulthood, they develop skills such as logical use of symbols related to abstract concepts, scientific reasoning, and hypothesis testing. These skills are the foundation for problem solving, self-reflection, and critical reasoning (Crowl et al., 1997; Miles, 1992). Recent research shows that children perform certain tasks earlier than Piaget claimed, vary in how rapidly they develop cognitively, and seem to remain in transition between stages longer than Piaget’s stage model suggests (Crowl et al., 1997). However, research also shows that biological development, together with instructional techniques, affects the rate of movement from one stage of learning to the next.
Jerome Bruner’s View
According to Bruner, learning processes involve active inquiry and discovery, inductive reasoning, and intrinsic motivation. Stages of cognitive development are not linear; they may occur simultaneously. Bruner introduced the “spiral curriculum” in which learners return to previously covered topics within the context of new information learned. Both Piaget and Bruner focus on active learning, active inquiry and discovery, inductive reasoning, intrinsic motivation, and linkage of previously learned concepts and information to new learning. Stages include enactive (hands-on participation), iconic (visual representations), and symbolic (symbols, including math and science symbols) (Crowl et al., 1997).
Benjamin Bloom’s View
In each of Bloom’s three taxonomies (cognitive, affective, and psychomotor), lower levels provide a base for higher levels of learning (Bloom, 1956; Kauchak and Eggen, 1998). Comprehension and application form linkages to higher order skills; here, the learner uses meaningful information such as abstractions, formulas, equations, or algorithms in new applications in new situations. Higher order skills include analysis, synthesis, and evaluation and require mastery of previous levels, such as applying routine rules to familiar or novel problems (McDavitt, 1993). Higher order thinking involves breaking down complex material into parts, detecting relationships, combining new and familiar information creatively within limits set by the context, and combining and using all previous levels in evaluating or making judgments. There also appears to be some interaction across taxonomies. For example, the highest level of the psychomotor taxonomy involves the use of our body’s psychomotor, affective, and cognitive skills to express feelings or ideas as in the planning and execution of a dance performance or song designed to convey a particular message.
Robert Gagné’s View
According to Gagné, intellectual skills begin with establishing a hierarchy according to skill complexity. Within this structure, discriminations are prerequisites for concrete and defined concepts, simple rules, complex higher order rules, and then problem solving. Cognitive strategies may be simple or complex (Gagné, 1985; Briggs and Wager, 1981; Gagné, Briggs, and Wager, 1988). Attitudes and motor skills, related varieties of learning, may involve lower as well as higher order thinking – spanning from a simple application of a tool to a complex systems analysis and evaluation. Bloom (1956) and Gagné and Briggs (1974) allow for greater possibilities of teaching complex skills to younger learners and the possibility that learners can be “young” at any age, starting at lower levels and connecting to higher levels of thinking. This variation for learning capabilities does not fit as well in Piaget’s and Bruner’s frameworks.
Robert Marzano’s View
To Marzano, the dimensions of thinking feed into dimensions of learning, both of which build upon contributions from other scholars and researchers (Marzano et al., 1988). For example, Gagné refers to the generalizations that describe relationships between or among concepts as “rules” (Gagné, 1974; Gagné, Briggs, and Wager, 1988), while Marzano calls them “principles” (Marzano et al., 1988, p. 37). The book Dimensions of Thinking has been designed as a practical handbook with definitions, examples, and classroom applications.
Lev Vygotsky’s View
Vygotsky (cited in Crowl et al., 1997) seems to have consolidated major concepts of cognitive development. Cognitive development progresses as children learn; biological maturity accounts for “elementary processes” such as reflexive responses. When learning a specific skill, students also perceive the underlying principles. Social interaction and social culture play major roles in learning and cognitive development; children internalize knowledge most efficiently when others, such as teachers, parents, or peers, guide and assist them; significant people in an individual’s life contribute to the development of “higher mental functions”; people’s cognitive processes function differently when working on their own versus working in groups. Everyone has a “zone of proximal development,” and asking certain questions or giving suggestions will move the individual toward potentially higher levels; such support helps students in solving problems until they can solve them independently and may include hints, questions, behavior modeling, rewards, feedback, information giving, self-talk, or peer tutoring (pp. 69–71).
Thomas Haladyna’s View
Haladyna (1997) expressed the complexity of thinking and learning dimensions by classifying four levels of mental processes (understanding, problem solving, critical thinking, and creativity) that can be applied to four types of content (facts, concepts, principles, and procedures). Applying a set of skills across dimensions of content fits well with the actual complex, recursive, and systemic processes of higher order thinking.
Howard Gardner ‘s View
According to Gardner (1983), multiple intelligences form a major part of an individual’s dispositions and abilities. These intelligences are independent of each other and account for the spectrum of abilities used in different fields of work, such as teaching, surgery, athletics, dancing, art, or psychotherapy. Gardner’s theory, which regards intelligence as having seven dimensions, has been receiving recent attention related to teaching (Kauchak and Eggen, 1998). Schools are shifting curricula and teaching methods to accommodate the diverse abilities and talents of students (Crowl et al., 1997). Teachers may have a greater impact by creating lessons that “use the various types of intelligence in classroom activities” (p. 187).
Although Gardner is commonly credited with theories related to multiple intelligences, others also have developed models of thinking that reflect the multifaceted nature of intelligence.
Certain components of models or theories of intelligence are similar to factors identified in models and theories of learning. For example, Guilford’s products (cited in Crowl et al., 1997, p. 184) resemble the learning outcomes described by Gagné, Briggs, and Wager (1988). “Units” are like the lower levels of discriminations and verbal information, “classes” are like the classification of concepts, “relations” are like the rules formed by relating one concept to another, and “systems” are like the systems of rules integrated into problem-solving strategies.
Similarly, Guilford’s “content areas” are ways of receiving and perceiving information and instruction, and Guilford’s “operations” parallel the mental processes that teaching strategies attempt to influence. There also are parallels with the notion of learning capabilities, in that Gagné and Briggs refer to stating, classifying, demonstrating, generating, and originating as the functions associated with different learning outcomes (i.e., stating verbal information, classifying concepts, demonstrating rules, generating problem solving, and originating cognitive strategies). These functional terms guide instructional designers in their specification of learning strategies and test items and have meanings that are similar to Guilford’s terms of cognition, memory retention, memory recording, and divergent and convergent production.
Atmospheric Pressure as Demonstrated in Atmospheric Pressure Apparatus
Atmospheric pressure is defined as the force per unit area exerted against a surface by the weight of the air above that surface. Atmospheric pressure at high altitudes is lower than at sea level. Atmospheric pressure is measured quantitatively with an instrument called a barometer, which is why it is also referred to as barometric pressure (Jarantilla, 2008).
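The decrease of pressure with altitude can be sketched with the isothermal barometric formula, P(h) = P0 * exp(-M*g*h / (R*T)). The following is a rough illustration only; the constants and the 1,500 m sample altitude are assumed standard-atmosphere values, not taken from the cited source.

```python
import math

# Isothermal barometric formula: P(h) = P0 * exp(-M * g * h / (R * T)).
# All constants below are assumed standard-atmosphere values.
P0 = 101_325.0   # sea-level pressure, Pa
M = 0.0289644    # molar mass of dry air, kg/mol
g = 9.80665      # gravitational acceleration, m/s^2
R = 8.314        # universal gas constant, J/(mol*K)
T = 288.15       # assumed uniform temperature, K

def pressure_at(height_m: float) -> float:
    """Estimate atmospheric pressure (Pa) at a given altitude."""
    return P0 * math.exp(-M * g * height_m / (R * T))

print(f"Sea level: {pressure_at(0):.0f} Pa")
print(f"1500 m:    {pressure_at(1500):.0f} Pa")
```

Under these assumptions, the pressure at 1,500 m comes out roughly 16% below the sea-level value, which is why a barometer can double as a crude altimeter.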
Electrolytes and Non-Electrolytes as Demonstrated in the Electric Conductivity Apparatus
Solutes that exist as dissociated ions in aqueous solutions are called electrolytes. Solutes that are present as neutral molecules, not as ions, in solution are called nonelectrolytes. Electrolytes conduct electricity and nonelectrolytes do not. Ions in the solid state are not able to conduct electricity because they are locked into position in their crystal structure and are not able to move (Jarantilla, 2008).
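As a toy illustration of what the conductivity apparatus demonstrates (the solute list and its classifications here are standard textbook examples chosen for illustration, not data from the cited source), the bulb lights only when the dissolved species are mobile ions:

```python
# Toy classification: electrolytes dissociate into mobile ions in water and
# conduct electricity; nonelectrolytes dissolve as neutral molecules and do not.
SOLUTES = {
    "NaCl (table salt)": "electrolyte",
    "HCl (hydrochloric acid)": "electrolyte",
    "C12H22O11 (sucrose)": "nonelectrolyte",
    "C2H5OH (ethanol)": "nonelectrolyte",
}

def bulb_lights(solute: str) -> bool:
    """The apparatus bulb lights only if mobile ions carry current through the solution."""
    return SOLUTES[solute] == "electrolyte"

for name in SOLUTES:
    state = "on" if bulb_lights(name) else "off"
    print(f"{name}: bulb {state}")
```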
Radiant Energy Absorption by Soil/Sand and Water as Demonstrated in Differential Thermoscope
All bodies are continually radiating energy and are also continually absorbing radiant energy. If a body is radiating more energy than it is absorbing, its temperature decreases; if a body is absorbing more energy than it is emitting, its temperature increases. A body that is warmer than its surroundings emits more energy than it receives and therefore cools; a body colder than its surroundings is a net gainer of energy, and its temperature therefore increases. A body that receives no radiant energy will radiate away all of its available energy, and its temperature will approach absolute zero. The rate at which a body radiates or absorbs radiant energy depends on the nature of the body and the difference between its temperature and the surrounding temperature. Emission and absorption take place at the surface of a body. A rough surface is a better absorber and emitter since, microscopically, it has more surface area. If the surface is hotter than the surrounding air, it becomes a net radiator and cools (Hewitt, 1977).
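Why sand in a differential thermoscope warms faster than water under the same lamp can be illustrated with Q = mcΔT. The specific heat values below are approximate handbook figures, assumed here for illustration rather than taken from the cited source.

```python
# Q = m * c * dT  ->  dT = Q / (m * c)
# Specific heat capacities are approximate handbook values (an assumption here):
C_WATER = 4186.0  # J/(kg*K)
C_SAND = 830.0    # J/(kg*K)

def temp_rise(energy_j: float, mass_kg: float, c: float) -> float:
    """Temperature rise of a sample absorbing energy_j joules of radiant energy."""
    return energy_j / (mass_kg * c)

Q = 10_000.0  # same radiant energy absorbed by each sample, J
m = 1.0       # mass of each sample, kg

print(f"Water: +{temp_rise(Q, m, C_WATER):.2f} K")
print(f"Sand:  +{temp_rise(Q, m, C_SAND):.2f} K")
```

For equal masses absorbing the same radiant energy, sand's lower specific heat gives roughly five times the temperature rise of water, which is what the two arms of the thermoscope make visible.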
Tyndall Effect as Demonstrated by the Tyndall Effect Apparatus
Unlike solutions, colloidal suspensions exhibit light scattering. A beam of light or laser, invisible in clear air or pure water, will trace a visible path through a genuine colloidal suspension, e.g. a headlight on a car shining through fog. This is known as the Tyndall effect (after its discoverer, the 19th-century British physicist John Tyndall) and is a special instance of diffraction. The effect is often used as a test for the existence of a colloid and is visible in colloids as dilute as 0.1 ppm (parts per million). However, there are exceptions. For example, the effect cannot be seen with milk, which is a colloid.
Tyndall scattering occurs when the dimensions of the particles that cause the scattering are larger than the wavelength of the radiation that is scattered. It is caused by reflection of the incident radiation from the surfaces of the particles, reflection from the interior walls of the particles, and refraction and diffraction as the radiation passes through the particles (Jarantilla, 2008).
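In the opposite limit, for particles much smaller than the wavelength, scattered intensity follows Rayleigh's law, proportional to 1/λ⁴. A quick sketch (the red and blue wavelengths are representative values assumed here, not taken from the source) shows why shorter wavelengths scatter much more strongly:

```python
# Rayleigh limit (particles much smaller than the wavelength): I ~ 1 / lambda^4.
# 650 nm (red) and 450 nm (blue) are assumed representative wavelengths.
def relative_scatter(wavelength_nm: float, reference_nm: float = 650.0) -> float:
    """Scattered intensity relative to a reference (red) wavelength."""
    return (reference_nm / wavelength_nm) ** 4

print(f"Blue (450 nm) scatters {relative_scatter(450.0):.1f}x more than red (650 nm)")
```

This wavelength dependence is also why a Tyndall beam often looks faintly bluish when viewed from the side.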
Thermal Expansion of Liquids as Demonstrated in Water and Alcohol Thermoscope
When the temperature of a substance is increased, its molecules are made to jiggle faster. The more energetic the collisions between molecules, the greater the force pushing them apart, resulting in an expansion of the substance. All forms of matter (solids, liquids, gases and plasma) generally expand when heated and contract when cooled (Hewitt, 1997). The most famous exception is water, which contracts as it is warmed from 0 °C to 4 °C. This is actually a good thing, because as freezing weather sets in, the coldest water, which is about to freeze, is less dense than slightly warmer water (Fowler, 2006).
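The expansion described above is commonly modeled as ΔV = βV₀ΔT. The expansion coefficients below are approximate room-temperature values assumed for illustration, and a constant β deliberately ignores water's anomalous contraction between 0 °C and 4 °C.

```python
# Volumetric thermal expansion: dV = beta * V0 * dT.
# Coefficients are approximate room-temperature values (assumed, not from the source).
# A constant beta misses water's anomalous contraction from 0 to 4 deg C.
BETA = {"water": 2.1e-4, "ethanol": 1.12e-3}  # per kelvin

def volume_change(liquid: str, v0_ml: float, d_temp: float) -> float:
    """Volume change (mL) of a liquid column heated by d_temp kelvin."""
    return BETA[liquid] * v0_ml * d_temp

for liquid in BETA:
    print(f"{liquid}: {volume_change(liquid, 100.0, 10.0):+.3f} mL per 10 K rise")
```

With these assumed coefficients, the alcohol column of a thermoscope climbs several times farther than the water column for the same temperature change, which is why alcohol makes the more sensitive indicator.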
A wealth of related studies shows that student teachers are not aware of the benefits of laboratory work in having students confront their own misconceptions. These results support those of Ottander and Grelsson (2006). The scientific discussions held during laboratory work help to reveal the misconceptions entertained by students. Furthermore, laboratory work provides concrete experiences and opportunities for students to face their own misconceptions (Lazarowitz and Tamir, 1994). As a matter of fact, it has been shown that students’ positive attitudes toward science increase with laboratory work (Freedman, 1997). According to Kang and Wallace (2005), it is likely that teachers with naive epistemological beliefs will prefer the delivery of information as the prime teaching goal.
Hofstein and Naaman (2007) reviewed and reported several studies conducted in various countries about laboratory applications. In their evaluation, they stated that laboratory applications aimed to enhance students’ science process and problem-solving skills and their interest in and attitudes toward scientific approaches in accordance with the objectives of basic science education. Garnett and Hackling (1995) argued that laboratories will contribute to improving students’ conceptual understanding, application skills and techniques, and ability to analyze inter-variable relationships and chemical analyses-syntheses. The study aimed to demonstrate the importance of laboratory work in chemistry education for chemistry instructors. The authors highlighted the need to use student-active laboratory approaches so as to enhance students’ research skills including problem analysis, research plans, research management, data recording, and interpretation of the findings.
A careful study reported by Reif and St. John (1979) showed that students in a college-level physics laboratory course based on inquiry training developed high level skills more successfully than did students in a conventional physics laboratory course. The students in this laboratory course used instructional materials that presented information in a carefully organized way and incorporated specific features stimulating students to think independently.
Another research tendency is to understand which demographic factors are related to critical thinking skill. In these kinds of studies, researchers analyze significant numbers of participants from different schools, chosen according to specific characteristics. Edman, Robey, and Bart (2002) selected a sample of 232 college and university students, Mahiroglu (2007) studied a sample of 134 schools from Turkish provinces, and Yang and Lin (2004) selected 1119 male senior high school students from military schools. These studies sought to determine whether such demographic elements, isolated from others, generate a disposition for critical thinking, using instruments specially designed to identify critical-reasoning disposition, such as the Minnesota Test of Critical Thinking II, a demographic information sheet, or a general survey. Demographic studies have been carried out in the United States (Edman, Robey, and Bart, 2002), Taiwan (Yang and Lin, 2004), and Turkey (Mahiroglu, 2007). They found that demographic differences such as gender, age, region, school, class, grades, or parents' education level are significantly related to critical thinking disposition.
Ramasamy (2011), on the other hand, considered the age, discipline, program, grade point average, and number of reading hours of the participants. LaPoint-O'Brien (2013) analyzed understanding and reasoning. The findings of these studies sustain only that discipline, program, and age directly and positively influence the results of critical thinking skill tests. In fact, Ramasamy (2011) concludes that age is an essential part of developing critical thinking. According to her, this is because age is related to maturity, and only maturity helps in making critical and complex judgments.
A study by the University of Arkansas also discovered that field trips contribute to the development of students' critical thinking skills and increase their knowledge of art and culture. According to Greene and others (2013), enriching field trips contribute to the development of students into civilized young men and women who possess more knowledge about art, have stronger critical-thinking skills, exhibit increased historical empathy, display higher levels of tolerance, and have a greater taste for consuming art and culture.
Patrick (2010) proposed that field trips should be woven into the teaching schedule, as this provides an opportunity for students to view information for themselves and use their own senses to touch or feel materials that they had previously only heard about. Patrick's study considered the effects of field experiences on students' knowledge in relation to their science achievement, in particular in biology. Patrick found that there was a significant difference in test scores between the students who had participated in field trip experiences and those who had not. Patrick concluded that these field trip experiences significantly improved the students' understanding of science and also improved their motivation and attitude towards the subject, which subsequently increased their overall achievement in biology (Patrick, 2010).
The environment also provides a valuable asset to be considered when teaching critical thinking. A study conducted by Nelson Laird (2005) identified that students exposed to diversity and other various interactions demonstrate greater propensity toward critical thinking. Those students typically are found to be more open-minded, and therefore willing to exhibit greater flexibility when solving problems or understanding larger aspects of complex skills. Ernst and Monroe (2006) conducted a similar study on how the environment affects critical thinking skills and dispositions, and they arrived at a similar conclusion.
Critical thinking is a process that helps us conceptualize, apply, analyze, synthesize, and evaluate information gathered from or generated by observation, experience, reflection, reasoning, or communication, as a guide to belief and action. Laboratory activities play a vital role in improving critical thinking skills. They aim to enhance students' process skills and understanding in science education, and they also improve students' conceptual understanding and cognitive skills. Students have different levels of critical thinking skills as they go along the process of learning in the school environment. Field trips also contribute to developing students' critical thinking skills; they improve students' motivation and attitude towards the subject and increase their overall achievement, especially in biology.
Science instruments are instruments used for scientific purposes; they are used to give students a better understanding of science concepts. Science DIY instruments are homemade materials used to replace apparatus that is not available in the science laboratory for science activities or experiments. These devices are less costly, but they work exactly the same as the laboratory apparatuses. Evaluating the critical thinking of the respondents in utilizing Do-It-Yourself equipment and laboratory activities was essential in order to determine whether the DIY apparatuses can really help the respondents develop their critical thinking skills, what to improve about the apparatuses, and whether they are effective to use inside the classroom.
Invariably, horses must spend time inside their horse stall. Appreciating the consequences of inadequate stall maintenance, which creates poor air quality, is imperative to the health and longevity of your animals. A poorly maintained stall can lead to serious health issues for your horse.
Stall confinement can lead to respiratory problems, especially when horses remain in the stall for long periods. Inadequately maintained horse stalls are a primary cause of breathing issues in horses. Here is how you can improve the air quality in your horse stalls and avoid these problems.
Cause of Respiratory Problems in a Horse Stall
Bacteria are responsible for respiratory and breathing ailments in horses, and a horse stall is a natural breeding ground for the particles that contain them. They are inherently a part of feed, bedding, and footing materials. Poor maintenance practices in a horse stall allow these harmful bacteria to proliferate.
When foot traffic or any type of air movement disturbs the bacteria, they become airborne as endotoxins. These harmful endotoxins compromise your horse's breathing and lead to serious respiratory issues over time. Here are some preventative measures that will reduce the amount of bacteria, and therefore the prevalence of harmful endotoxins, in the air within your stable.
- Air Circulation
Circulating the air inside the stable is one way to help reduce the elements that create respiratory problems in horses. However, you must use caution. Blowing bacteria-filled air around inside the horse stall is not a wise solution; all you do is circulate the problem, whipping up dust in a whirlpool effect. This actually makes the problem worse. Floor fans, even upright types, will move the air around, but they also send harmful particles into the air that would otherwise not become airborne.
When you set up a ventilation system for your horse stalls, make sure it circulates the air, but does not stir up dust. The best way to reduce the dust factor, but still keep the air moving, is to use ventilation fans that pull air out of the entire stable. The fans should be situated at least midway from the floor to the peak of the roof. Two fans at adjacent ends of the barn, will help circulate air, bring in fresh air, and keep the air inside the stalls from becoming stale and breeding bacteria.
- Clean & Dry
Damp bedding and feed are very prone to bacteria. Bedding needs to be changed at least once a day, and often twice daily during the hot, humid summer months. While lightly wetting hay for feeding helps keep the number of airborne bacteria down, any uneaten hay should be cleared away immediately after feeding.
The manure produced by horses is full of natural bacteria, and it begins to dry almost immediately. If left in the stall for more than a couple of hours, the bacteria will become airborne, releasing endotoxins into the air. This almost instantly compromises the air quality for your horses. Be vigilant with your schedule for keeping the stalls clean and dry.
Since the consequences of poor air quality in a horse stall are so serious, be sure to keep the air fresh and the area clean and dry. Your horses will breathe better, and you won't have to deal with serious respiratory problems caused by poor air quality in your horse stall. Check out a stall dealer, like Rarin' To Go Corrals, for more help.
The artistic technique values local knowledge and represents a new way to generate income through the sustainable use of timber resources.
By Clara Machado
Timber is present in the daily life of those who live in the forest, whether to make canoes, oars, boards for houses, harpoons, or household utensils. Timber is collected on a small scale in the forest and worked by skillful hands, distinguished by a specific knowledge capable of turning it into items of daily community life.
Thinking of this audience, which has the knowledge of how to work timber, Instituto Juruá, in partnership with Associação de Produtores Rurais de Carauari (Asproc), offered the first course on marquetry in the Mid-Juruá. The course took place on November 24-29 at the Casa Familiar da Floresta Campina (CFFC), in the rural zone of Carauari (AM). The course was supported by ICMBio through the RESEX Mid-Juruá manager, Manoel Cunha, who contributed to the planning and lecturing and also participated in classes.
The course aimed to foment alternatives for the sustainable use of timber in the region and to expand the income-generating capacity of the riverside families involved in the program. While valuing timber workers' previous knowledge, such as carpentry and joinery, the course sought to teach marquetry, an artistic technique of making objects through angled cutting, fitting, gluing, and refining. In this way, the possibility of adding value to residual timber emerges, generating income from small-scale sustainable forest management.
The training lasted 40 hours, encompassing five practical and theoretical classes lectured by masters Edielson Bezerra and Manoel Silva, from Nov’Arte Association (Novo Airão – AM). Twenty-eight timber workers from 21 communities from Mid-Juruá region participated, representing the territories of the Mid-Juruá Extractivist Reserve, Uacari’s Sustainable Development Reserve, Deni Indigenous Land and the Carauari’s Fishing Agreement Area.
“Students were pleased, they took books, Personal Protective Equipment, and other materials home as well as the handicraft produced such as small boxes, boards, and decorative pieces”, Nathalia Messina said. She is Instituto Juruá’s socioenvironmental analyst and participated in the preparation and delivery of the course.
During the course, some tools and marquetry items were raffled, and there was a miter saw giveaway. The machine makes it possible to cut angular pieces quickly and precisely, and it is under the responsibility, use, and custody of the communities.
According to Nathália, “to make better use of the results achieved through the course, the intention is for this to be the initial activity of a timber exchange project to be developed by artisans and community-based organizations from the Mid-Juruá, with the support from Instituto Juruá”. | <urn:uuid:86c28f43-185e-456a-bf75-583b42488628> | CC-MAIN-2023-06 | https://institutojurua.org.br/en/marquetry-course-is-offered-to-mid-jurua-timber-workers/ | s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499713.50/warc/CC-MAIN-20230129112153-20230129142153-00626.warc.gz | en | 0.946323 | 631 | 2.6875 | 3 |
x = 1
Work Step by Step
-2(3x - 4) = 2x
Apply the distributive property: (-2)(3x) + (-2)(-4) = 2x
Simplify: -6x + 8 = 2x
Subtract 2x from both sides: -8x + 8 = 0
Subtract 8 from both sides: -8x = -8
Divide both sides by -8: x = 1
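As a quick numerical check, the same steps can be verified in a short script. This is only an illustrative sketch; the helper function and its name are our own, not part of the textbook solution:

```python
def solve_linear(a, b, c, d):
    """Solve a*x + b = c*x + d for x, assuming a != c."""
    return (d - b) / (a - c)

# -2(3x - 4) = 2x expands to -6x + 8 = 2x + 0:
x = solve_linear(-6, 8, 2, 0)
print(x)  # 1.0

# Substitute back into the original equation to confirm both sides match:
assert -2 * (3 * x - 4) == 2 * x
```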
My exploration into science literacy began with a sense of wonder for the world around me. Today we'll explore wonder and why it's one of the greatest tools both a scientist and a science-literate citizen can have.
It’s no coincidence that many great minds have commented on the value in having a sense of wonder for the world around you. Here are some of their thoughts:
Wonder is the beginning of wisdom.Socrates
I was a young man with unformed ideas. I threw out queries, suggestions, wondering all the time over everything; and to my astonishment the ideas took like wildfire.Charles Darwin
The more clearly we can focus our attention on the wonders and realities of the universe about us, the less taste we shall have for destruction.Rachel Carson
The feeling of awed wonder that science can give us is one of the highest experiences of which the human psyche is capable. It is a deep aesthetic passion to rank with the finest that music and poetry can deliver.Richard Dawkins
What does having a sense of wonder mean for a scientist today? To wonder is, in my own words, to marvel with curiosity. It means you ask questions because something about the universe impresses or astounds you. Why is this thing or phenomenon the way it is? What causes it? How? These are good questions from which you can formulate a research question, a hypothesis, or simply set out to learn more about something if the question has already been answered. This is how wonder drives both science and science literacy.
One of my favorite books that is in many ways about wonder is Carl Sagan's The Demon-Haunted World. His book explores some of the most wonder-provoking questions: Are there other forms of intelligent life, or are we alone in the universe? What happens when we die, and can we speak with the dead? Sagan treats each and every odd question and conspiracy theory like a legitimate scientific investigation, reasoning through with evidence and a healthy dose of skepticism (and a little hope that, just maybe, something extraordinary might be true). But as he says, extraordinary claims require extraordinary evidence, and in most cases, this evidence doesn't seem to exist.
As science communicators and as science-literate citizens, we can balance our wonder with skepticism without losing our awe of the universe and our world. There is no reason we can't be amazed by the human body, evolution, and our existence even if we weren't created by a divine being. There's no reason to think less of the stars if they don't have planets with intelligent life orbiting them that we know of. It's inspiring to ask exciting, controversial questions. Until we have good evidence for or against, we shouldn't try to make firm conclusions. We can say, "I don't believe this, but when we have evidence to support it, I'd be very interested," or, "I feel comfortable believing that we haven't yet been visited by extraterrestrials – the evidence seems insufficient."
Wonder can inspire and ignite curiosity to learn about the world. Indeed, wonder is what drove me to read popular science books in the first place. It’s important to cultivate it in children and young adults, and to retain that sense of wonder through college and graduate school. At least, I’m aiming to. This fall, I start the next half of my undergraduate studies at Oregon State University. I’ll be a microbiology major delving into research and STEM for the first time. Wonder about the microcosmos, the world of invisible microorganisms and their ecosystems, is what drives me toward my degree.
Stephen Hawking was another scientist who understood the value of wonder. In a tribute to his life on Space.com, 'He Inspired Us All to Wonder': Scientists and the Public Remember Stephen Hawking, people recall how Hawking valued and encouraged having a sense of wonder. Hawking's commitment to being in awe of the universe is one of the (many) attributes that make him such a memorable scientist. So I want to end on one of his quotes, which never fails to bring tears to my eyes:
Remember to look up at the stars and not down at your feet. Try to make sense of what you see and wonder about what makes the universe exist. Be curious. And however difficult life may seem, there is always something you can do and succeed at.Stephen Hawking | <urn:uuid:88877bf0-9788-4e10-b5b0-f21081eb0a89> | CC-MAIN-2020-05 | https://readmorescience.com/2019/07/02/why-a-sense-of-wonder-is-your-greatest-scientific-tool/ | s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251690095.81/warc/CC-MAIN-20200126165718-20200126195718-00329.warc.gz | en | 0.949203 | 928 | 2.78125 | 3 |
Why is good writing important?
Good writing is writing that clearly communicates your research. Scientists are busy people, so if your manuscript is poorly written and difficult to understand, they might not take the time to read it (or cite it later). Writing well helps others understand the work you’ve done and helps strengthen your own comprehension of your research.
High-quality writing has the following benefits:
• increases the chances of acceptance for publication
• increases the impact of a manuscript within the research community
• accelerates understanding and acceptance of the research
• increases the faith of readers in the quality of the research
Poorly written manuscripts annoy journal editors, peer reviewers and readers, and hinder their understanding of complex scientific concepts. | <urn:uuid:0015b13b-d3da-483f-a993-3800650b6852> | CC-MAIN-2023-06 | https://www.springer.com/it/authors-editors/authorandreviewertutorials/writinginenglish/overview/10252642 | s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499541.63/warc/CC-MAIN-20230128090359-20230128120359-00734.warc.gz | en | 0.923678 | 150 | 3.125 | 3 |
Abraham Lincoln single-handedly changed the world in the best ways possible just by following his heart and wanting the absolute best for his people and for his country. Abraham Lincoln was our 16th president and arguably the most impactful one because of how much he did for the U.S. and how many lives he helped. He was a self-made man, liberator of slaves, and the savior of the Union. Lincoln is also known for his famous speech, the Gettysburg Address. Lincoln unfortunately had a short life due to his assassination; he died on April 15, 1865 at the Petersen House in Washington, D.C. Abraham was a very controversial man, loved by most but hated by many. The 16th president of this country completely changed millions of lives just by his morals.
Lincoln was the first president who made his opposition to slavery open and clear. He did not care about how many southern voters he would lose when it came time for Election Day, because he stood up for what he believed in. In 1863, Lincoln issued the Emancipation Proclamation, which freed all of the slaves that were under Confederate control. Lincoln saved over 20,000 slaves from Confederate rule and gave people the chance to live their own lives. He gave them a chance to start over, have a real home and a real family, have family dinners, smile, have unforgettable moments, and have the smallest things mean the most, just like we do every day. These people were being tortured in hot fields, not being fed properly, being beaten and used, and were always sleeping on cold ground or even in the fields where they worked; Lincoln gave them a shot at a regular life.
Abraham Lincoln was the president who successfully ended the Civil War and preserved the nation. Abraham was an extremely strong leader; his ambition and his beliefs won us a war, and he kept the promises that he made to the nation. The last battle was fought in Texas in May 1865. Robert E. Lee ended up surrendering the last of the Confederate army to Ulysses S. Grant. Abraham was so committed to his choice to end slavery that he didn't back down. That commitment and that confidence won us the war and preserved the Union.
Lincoln changed many brokenhearted people's points of view with the Gettysburg Address. He connected the sacrifices of our fallen soldiers with the desire for "a new birth of freedom," and he cited the importance of human equality found in the Declaration of Independence. With his speech alone, he told all of the family and friends of the fallen soldiers that their deaths weren't for nothing and that they had an important role in the new country to come. He made sure that everyone knew that he supported equality and that everything would be made right.
Lincoln was such a big inspiration that when he was assassinated, the nation was devastated. Within days of his death, his life was being compared to that of Jesus Christ, because he was such a huge inspiration to so many people and was looked at almost as a god. A volume of condolences was published with collected responses from every corner of the globe by people who cared for him, were inspired by him, wanted to thank him, loved him, and even wanted to praise him. His death was a test of the country and its strength, to see if losing such a big inspiration would weaken us or make us stronger. Abraham Lincoln's death impacted so many people because of his accomplishments for the U.S., which shows that he did a lot for his people and had such a major impact on their lives that they were absolutely heartbroken and crushed, as if they had lost a family member.
Lincoln was given many important, life-changing decisions to make during his presidency, and he was strong enough not to fold under pressure and to do what his heart believed was right. Many people were telling Lincoln to end the war early because it wasn't worth the deaths and the loss of money, but Lincoln didn't buckle. He was widely hated during his presidency for his opposition to slavery, and that hate towards him only made his will stronger. Many people said they wished they had Lincoln's ambition and his courage, because through everything he followed his own heart, and by doing that he changed millions of people's lives forever. Lincoln faced many extremely difficult tasks and angry people just to pursue what he believed was best for his people and his country, and then he won us the war. He also helped many bleeding hearts still healing from the loss of family in the war, and helped free all of the African Americans whom the Confederates had used for their own selfish desires.
Millions and even billions of lives were changed by just one man and his decision to make the world a better place. Abraham Lincoln was the first president to be open about his opposition to slavery, and he issued the Emancipation Proclamation. He preserved the Union by winning the Civil War, and he gave a heartfelt speech, the Gettysburg Address, about how the soldiers who sacrificed their lives in the war did so for a purpose and were heroes for giving their lives to such an important cause. His main concern, through all of the hate and all of the pressure, was his people and his country. Lincoln was among the most kindhearted and confident presidents in history; he accomplished some of the most amazing things during his time in office and even lost his life fighting for us.
The solar wind is swirly (18 December 2012)
Using ESA's Cluster quartet of satellites as a space plasma microscope, scientists have zoomed in on the solar wind to reveal the finest detail yet, finding tiny turbulent swirls that could play a big role in heating it.
Earth's magnetosphere behaves like a sieve (24 October 2012)
ESA's quartet of satellites studying Earth's magnetosphere, Cluster, has discovered that our protective magnetic bubble lets the solar wind in under a wider range of conditions than previously believed.
Earth's magnetic field provides vital protection (08 March 2012)
A chance alignment of planets during a passing gust of the solar wind has allowed scientists to compare the protective effects of Earth's magnetic field with that of Mars' naked atmosphere. The result is clear: Earth's magnetic field is vital for keep...
Cosmic particle accelerators get things going (16 November 2011)
ESA's Cluster satellites have discovered that cosmic particle accelerators are more efficient than previously thought. The discovery has revealed the initial stages of acceleration for the first time, a process that could apply across the Universe.
'Dirty hack' restores Cluster mission from near loss (30 June 2011)
Using ingenuity and an unorthodox 'dirty hack', ESA has recovered the four-satellite Cluster mission from near loss. The drama began in March, when a crucial science package stopped responding to commands – one of a mission controller's worst fears.
Ten years flying in formation: The legendary Cluster quartet (01 September 2010)
Today marks the 10th anniversary of the start of formation flying for the four satellites of ESA's Cluster quartet, one of the most successful scientific missions ever launched.
How pets support child development
People say that a dog is a man’s best friend – they are also a child’s best friend and good teachers.
Finding a comfortable resting place
Finding the perfect resting place. There's something so catching about the pure, warm smile on your newborn's face; when he's happy, you can't help but smile back. This young man seems to have found his way onto the furry back of his great friend Buddy, the family Labrador. The dog doesn't seem to mind the little friend on his back, and the little kid seems happier than ever to have something so warm and nice! He has your back!
Dogs can also help support the child’s later development.
Many pet owners feel that pet involvement improves, among other things, the child's social skills, mental development, self-esteem, and sense of responsibility, and helps the child express his or her feelings.
Central to a person's sense of self is self-esteem, that is, how one perceives one's own worth and significance. Self-concept also encompasses self-awareness and self-esteem. Good self-esteem implies accepting one's existence and, above all, being aware of one's strengths and weaknesses, while maintaining a positive self-image and a sense of satisfaction with oneself.
Central to the development of self-esteem and self-concept is how the child feels that other people who are close and important to him or her see and think of him or her. The child's self-esteem and self-concept evolve gradually, and the most sensitive developmental age is 6-13 years.
Pets also give the child positive and genuine feedback, which supports the child's social development. Pets usually have a very trusting attitude towards people and can show their feelings to children very genuinely. The love and presence that are important to the child are shown when they are together. The child's social skills with other people and animals thus develop.
Pets influence a child's mental development in many ways. Owning a pet that is a family member supports the child's mental development. Pets accept the child as a child. The child gains experiences of success while working with the pet, and it is easy for the pet to show its feelings. All of the above-mentioned things strengthen the child's mental development.
During the first three years of life, the child develops a sense of self, individuality, and uniqueness. The child develops a basic sense of how people interact with one another and learns how he or she can regulate his or her mood and control his or her behavior. Learning all this requires close interaction with people who are sensitive and emotionally important to the child, as well as animals.
Pets can also be an integral part of a child's growth environment. Being together contributes to the child's emotional development, because the child and the pet can truly show their feelings to each other. Both of their feelings are acceptable, and both learn from each other. Parents need to ensure a safe and nurturing environment for both their child and their pets.
Credits: Minna Hautala, Pets to Support Child Development.
What is Zellweger Syndrome? Why don’t we call it Zellweger syndrome anymore?
What is Zellweger syndrome?
Zellweger syndrome is the most severe form of peroxisome biogenesis disorder-Zellweger spectrum disorder (PBD-ZSD). Peroxisomal disorders are rare, genetic, terminal conditions that affect all major organ systems of the body. A peroxisomal disorder on the Zellweger spectrum (formerly referred to as Zellweger syndrome) means that the peroxisomes in your cells aren’t working properly, are absent, or are severely decreased. Peroxisomes are necessary for cell function, normal brain development, and the formation of myelin.
What’s in a name?
Zellweger syndrome is the most commonly known peroxisomal disorder. It is named after Hans Zellweger, a pediatrician and professor of pediatrics and genetics, who in the mid-1960’s researched the disorder and noticed the familial disorder among siblings. Zellweger syndrome was later named for him in recognition of his discovery.
Today, it is known that Zellweger syndrome actually falls on a spectrum. That means Zellweger syndrome is part of a group of disorders that all have the same genetic mutations. These mutations cause certain features and symptoms that vary from mild to severe.
While you may hear the names Zellweger syndrome (ZS), neonatal adrenoleukodystrophy (NALD), infantile Refsum disease (IRD), and Heimler syndrome used to describe or diagnose a patient, it is now known that all of these names are actually the same disorder with different levels of severity. Today, these groups of disorders are known as peroxisome biogenesis disorder-Zellweger spectrum disorder (PBD-ZSD).
Why do peroxisomal disorders have so many different names?
You might hear peroxisome biogenesis disorder-Zellweger spectrum disorder (PBD-ZSD) referred to by other names, including Zellweger syndrome (ZS), neonatal adrenoleukodystrophy (NALD), infantile Refsum disease (IRD), and Heimler syndrome. In recent years, disorders that were formerly grouped into separate diseases have come to be understood as a continuum of PBD-ZSD with varying degrees of disease severity.
As the understanding of this disorder has grown, there has been a movement away from the original disease categories towards a continuum of disease severity for PBD-ZSD, ranging from most severe (Zellweger syndrome) through intermediate (neonatal adrenoleukodystrophy) to mild (infantile Refsum disease and Heimler syndrome). Although less common now, you may occasionally hear PBD-ZSD referred to by any of these former names.
Why does it matter what we call it?
For many of our patients, it would be inaccurate to state that they will be firmly classified as “mild” “moderate” or “severe” throughout their lifespan. Simply saying a patient has Zellweger syndrome ignores the understanding that a patient can move through the spectrum of this disorder depending on disease progression. When we use the accurate diagnosis of peroxisome biogenesis disorder-Zellweger spectrum disorder (PBD-ZSD), we acknowledge that a patient can fall on different parts of the spectrum throughout their lifespan.
Disease progression can cause negative movement on the spectrum, moving from mild to severe. While a patient may appear to be on the mild end of the spectrum at birth, they can develop serious medical problems as they age, placing them on different points on the spectrum during their lifespan. Conversely, some patients that are born with severe symptoms, particularly with what appears to be very severe liver involvement, can actually have improved liver function during their lifetime and move toward a more mild presentation of the disorder.
It’s important for physicians and families to use inclusive language about this spectrum disorder to better understand that PBD-ZSD can be ever changing for patients and very difficult to predict.
For more information
For further reading on peroxisome biogenesis disorder-Zellweger spectrum disorder (PBD-ZSD).
Are you the caregiver of a patient diagnosed with PBD-ZSD? Join our patient registry!
To stay up to date on the GFPD’s news and research updates, follow us on social media on Facebook @GlobalFoundPD | <urn:uuid:59be63d7-12d8-48aa-a6a5-6087ae12612c> | CC-MAIN-2020-45 | https://www.thegfpd.org/single-post/2018/07/23/what-is-zellweger-syndrome-and-why-don-t-we-call-it-zellweger-syndrome-anymore | s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107912807.78/warc/CC-MAIN-20201031032847-20201031062847-00117.warc.gz | en | 0.927369 | 946 | 3.546875 | 4 |
Posted by Cicilia on Wednesday, March 7, 2012 at 5:48pm.
What are the verbs in these sentences? I just want to check my work.
Gasping, Matty called for his gift to come. There was no sense of how to direct it. He became aware, suddenly, that he had been chosen for this.
Thanks in advance!
- English Grammar: Verbs - Ms. Sue, Wednesday, March 7, 2012 at 5:55pm
I'll be glad to check your answers.
- English Grammar: Verbs - Cicilia, Wednesday, March 7, 2012 at 5:59pm
These are my answers:
had been chosen
I'm not sure about "to come" and "to direct". Are the ones that I listed right? Did I miss any?
Thanks Ms. Sue!
- English Grammar: Verbs - Ms. Sue, Wednesday, March 7, 2012 at 6:08pm
All except gasping are correct. There are only the last four verbs you listed.
In this sentence, "gasping" is a present participle used as an adjective. "To come" and "to direct" are infinitives used as adjectives in these sentences.
- English Grammar: Verbs - Cicilia, Wednesday, March 7, 2012 at 6:12pm
Oh, okay! Thanks!
Just curious- What words do "to come" and "to direct" describe?
- English Grammar: Verbs - Ms. Sue, Wednesday, March 7, 2012 at 6:24pm
To come describes gift. To direct describes how.
IBM Standard Modular System
The Standard Modular System (SMS) was a system of standard transistorized circuit boards and mounting racks developed by IBM in the late 1950s, originally for the IBM 7030 Stretch. SMS was used throughout IBM's second-generation computers and peripherals, including the 7000 series, the 1400 series, and the 1620. It was superseded by Solid Logic Technology (SLT), introduced with System/360 in 1964; however, SMS cards remained in use in legacy systems through the 1970s.
Many IBM peripheral devices that were part of System/360, but were adapted from second-generation designs, continued to use SMS circuitry instead of the newer SLT. These included the 240x-series tape drives and controllers, the 2540 card reader/punch and 1403N1 printer, and the 2821 Integrated Control Unit for the 1403 and 2540. A few SMS cards used in System/360 peripheral devices even had SLT-type hybrid ICs mounted on them.
SMS cards were constructed of individual discrete components mounted on single-sided paper-epoxy printed circuit boards. Single-width cards were 2.5 inches wide by 4.5 inches tall by 0.056 inches thick, with a 16-pin gold plated edge connector. Double width cards were 5.375 inches wide by 4.5 inches tall, with two 16-pin gold plated edge connectors. Contacts were labeled A–R (skipping I and O) on the first edge connector, and S–Z, 1–8 on the second.
The cards were plugged into a card-cage back-plane and edge connector contacts connected to wire wrap pins. All interconnections were made with wire-wrapped connections, except for power bus lines. The back-plane wire-wrap connections were mostly made at the factory with automated equipment, but the wire-wrap technology facilitated field-installation of engineering changes by customer engineers.
Some card types could be customized via a "program cap" (a double rail metal jumper bar with 15 connections) that could be cut to change the circuit configuration. Card types with a "program cap" came with it precut for the standard configuration and if a customer engineer needed a different configuration in the field he could make additional cuts as needed. This feature was intended to reduce the number of different card types a customer engineer had to carry with him to the customer's site.
The card type was a two to four letter code embossed on the card (e.g., MX, ALQ). If the card had a "program cap" the code was split into a two letter card type code and a two letter "cap connection" code (e.g., AK ZZ).
When SMS was originally developed, IBM anticipated a set of a couple hundred standard card types would be all that would be needed, making design, manufacture and servicing simpler. Unfortunately that proved far too optimistic as the number of different SMS card types soon grew to well over 2500. Part of the reason for the growth was that multiple digital logic families were implemented (ECL, RTL, DTL, etc.) as well as analog circuits, to meet the requirements of the many different systems the cards were used in.
Another 1401 SMS card, this one with power transistors. It was used to drive print hammers on an IBM 1403 line printer.
Press Release 12-195
National Science Foundation Dedicates Wyoming Supercomputing Center
NSF’s National Center for Atmospheric Research will manage the facility
October 15, 2012
The National Science Foundation (NSF) dedicated the NCAR-Wyoming Supercomputing Center (NWSC), its first facility in decades in Wyoming and one of the world's most powerful supercomputers, as part of dedication ceremonies held in Cheyenne today.
"The NCAR-Wyoming Supercomputing Center will offer researchers the opportunity to develop, access and share complex models and data at incredibly powerful speeds," said NSF Director Subra Suresh. "This is the latest example of NSF's unique ability to identify challenges early and make sure that the best tools are in place to support the science and engineering research communities."
The NWSC will be managed by NSF's National Center for Atmospheric Research (NCAR). The supercomputer, known as "Yellowstone," has the ability to work at 1.5 petaflops--equal to 1.5 quadrillion (a million billion) mathematical operations per second. Its speed is comparable to 7 billion people (the world population) each simultaneously conducting 200,000 calculations a second.
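The population comparison in the release is easy to sanity-check with a line of arithmetic: 7 billion people each performing 200,000 calculations per second gives 1.4 quadrillion operations per second, in the same range as Yellowstone's 1.5 petaflops. A quick sketch:

```python
# 7 billion people each performing 200,000 calculations per second
world_rate = 7 * 10**9 * 200_000

# 1.5 petaflops = 1.5 quadrillion floating-point operations per second
yellowstone = 1.5 * 10**15

print(world_rate)               # 1400000000000000 (1.4 quadrillion)
print(world_rate / yellowstone) # ~0.93, i.e. roughly comparable
```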
Yellowstone's capabilities will improve scientific understanding of climate change, severe weather, air quality, and other atmospheric and geosciences research. It allows researchers to address research challenges with software, data storage and management, and data analysis and visualization.
Based in Cheyenne, the NWSC is located on a 24-acre site. It houses high-performance computers, mass storage (data archival) systems, and required mechanical and electrical infrastructure. It is a LEED-certified building, showcasing sustainable technologies as well as energy-efficient design and operation. A main component is a public visitor center that illustrates the types of computational science research that will be carried out by scientists across the nation and explains the impact of that research.
Wyoming is part of NSF's Experimental Program to Stimulate Competitive Research (EPSCoR), which allows the agency to strengthen research and education in science and engineering throughout the United States and improve R&D capacity and competitiveness.
In addition to NSF Director Suresh, the dedication ceremony included Wyoming Governor Matt Mead, University Corporation for Atmospheric Research President Thomas Bogdan, NCAR Director Roger Wakimoto, University of Wyoming Vice President for Research Bill Gern and NSF's Atmospheric and Geospace Sciences Division Director Michael Morgan.
The NWSC, funded by NSF with additional support from the state of Wyoming and a broad public-private consortium, will be a mainstay of U.S. geoscience computing for decades to come. Its extraordinary computing power will enable scientists to capture many aspects of our planet's workings in unprecedented detail. The results will improve forecasting of hurricanes, tornadoes, and other severe storms; map critical supplies of water; boost predictions of wildfire behavior; help protect society from solar disruptions; and address many other concerns.
Dana Topousis, NSF, (703) 292-7750, email@example.com
NCAR-Wyoming Supercomputing Center: http://nwsc.ucar.edu
NCAR-Wyoming Supercomputing Center Opens: https://www2.ucar.edu/atmosnews/news/8122/ncar-wyoming-supercomputing-center-opens
The National Science Foundation (NSF) is an independent federal agency that supports fundamental research and education across all fields of science and engineering. In fiscal year (FY) 2014, its budget is $7.2 billion. NSF funds reach all 50 states through grants to nearly 2,000 colleges, universities and other institutions. Each year, NSF receives about 50,000 competitive requests for funding, and makes about 11,500 new funding awards. NSF also awards about $593 million in professional and service contracts yearly.
In an ideal world our kids and youth players would all be intrinsically motivated to play basketball and improve their skill level. Playing basketball and working on their skills at home would make them feel good and provide a sense of accomplishment versus being extrinsically motivated for rewards or an adverse punishment. However, this is not always the case. Below are some ways to improve intrinsic motivation as well as when extrinsic motivation can be utilized and useful.
Ideas to improve intrinsic motivation:
Best uses of extrinsic motivation or small rewards:
Once basic skill levels and initial intrinsic motivation have been established, external motivators should be phased out, as they can be detrimental to long-term participation: basketball and skill development may start to feel like work or an obligation.
Use them both
By utilizing both intrinsic and extrinsic motivational factors and finding the right balance for your child, skill level and enjoyment of the game can grow.
Legend and Symbol: An Etymology Lesson
6th - 12th CCSS: Adaptable
How are the words tradition and traitor related? A thorough language arts resource explores the ways Latin and Greek words are represented in traditional legends and artwork, and reinforces word meaning with several grammar exercises.
The Cambodian Myth of Lightning, Thunder, and Rain
5 mins 9th - 12th CCSS: Adaptable
Life—plants, animals, people—depend on water. Its importance is evident in the number of myths in global cultures that offer explanations for the origins of rain, thunder, and lightning. A sacred dance drama that represents the Cambodian... | <urn:uuid:b9f56622-7244-4d42-bacb-0dfce11eb2ff> | CC-MAIN-2018-47 | https://www.lessonplanet.com/search?concept_ids%5B%5D=25156 | s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039743717.31/warc/CC-MAIN-20181117164722-20181117190722-00291.warc.gz | en | 0.909936 | 145 | 2.953125 | 3 |
Find the area and perimeter of each polygon below. Show all work.
Divide the figure into two separate shapes.
Afterwards, find the area for each shape and then add them together.
The piece shared between the trapezoid and the rectangle is 5 - 2 = 3 because the red rectangle has side 5. This makes the base of the trapezoid 8.
Area of Shape = Area of Trapezoid + Area of Rectangle
Total area = 36 units² | <urn:uuid:6935c33e-e7a0-4c7e-920b-84474a531bed> | CC-MAIN-2019-39 | https://homework.cpm.org/cpm-homework/homework/category/CCI_CT/textbook/Int1/chapter/Ch10/lesson/10.1.2/problem/10-38 | s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514574050.69/warc/CC-MAIN-20190920155311-20190920181311-00343.warc.gz | en | 0.908413 | 105 | 3.765625 | 4 |
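The decomposition above can be sketched in Python. The general formulas are standard; the specific heights and widths below are illustrative assumptions chosen to sum to the stated 36 square units, since the original figure is not reproduced here.

```python
def rectangle_area(width, height):
    # Area of a rectangle: width * height
    return width * height

def trapezoid_area(base1, base2, height):
    # Area of a trapezoid: average of the parallel bases times the height
    return (base1 + base2) / 2 * height

# Hypothetical dimensions: a 5 x 4 rectangle plus a trapezoid
# with bases 8 and 2 (as in the problem text) and an assumed height.
total = rectangle_area(5, 4) + trapezoid_area(8, 2, 3.2)
print(total)  # 36.0
```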
Ten to fifteen minute-long whiteboard introductions to Common Core State Standards (CCSS) ideas, geometry constructions, proofs, pre-college algebra, college algebra, and more.
Levels: Middle School (6-8), High School (9-12), College
Resource Types: Video, Tutorials
Math Topics: Basic Algebra, Linear Algebra, Euclidean Plane Geometry
Math Ed Topics: Common Core
The Math Forum is a research and educational enterprise of the Drexel University School of Education. | <urn:uuid:ddd81764-d7be-41c2-af28-59d26338109e> | CC-MAIN-2015-11 | http://mathforum.org/library/view/77210.html | s3://commoncrawl/crawl-data/CC-MAIN-2015-11/segments/1424936461944.75/warc/CC-MAIN-20150226074101-00239-ip-10-28-5-156.ec2.internal.warc.gz | en | 0.696675 | 152 | 2.71875 | 3 |
How to Display Image in HTML?
Images are very important to beautify a web page as well as to depict many complex concepts in a simple way. This tutorial will take you through the simple steps needed to use images in your web pages.
You can insert any image in your web page by using the <img> tag. Following is the simple syntax for this tag.
<img src="Image URL" ... attributes-list/>
The <img> tag is an empty tag, which means that it can contain only a list of attributes and has no closing tag.
To try the following example, let's keep our HTML file test.htm and image file test.png in the same directory:
<!DOCTYPE html>
<html>
<head>
<title>Using Image in Webpage</title>
</head>
<body>
<p>Simple Image Insert</p>
<img src="test.png" alt="Test Image" />
</body>
</html>
The result: the browser displays the image after the paragraph text.
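The <img> tag also accepts sizing attributes such as width and height, alongside the alt text used for accessibility. A minimal sketch (the file name and pixel sizes here are illustrative):

```html
<!DOCTYPE html>
<html>
<head>
<title>Image with Attributes</title>
</head>
<body>
<!-- alt text is shown if the image cannot load; width/height reserve layout space -->
<img src="test.png" alt="Test Image" width="200" height="100" />
</body>
</html>
```

Specifying width and height lets the browser reserve space for the image before it loads, which reduces layout shifts on the page.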
By Kathy Hubbard
Henry VIII had it. So did Benjamin Franklin, Alexander the Great, Beethoven, both of my brothers, my father and my friend, Jim. Although post-menopausal women can get it, it isn’t as common as men having a flare-up of this inflammatory arthritis called gout.
“The signs and symptoms of gout almost always occur suddenly, and often at night,” Mayo Clinic tells us. “It’s characterized by sudden, severe attacks of pain, swelling, redness and tenderness in the joints, often the joint at the base of the big toe.”
Ouch. Gout isn’t just about toes, either. It can affect any joint commonly ankles, knees, elbows, wrists and fingers. The pain is most severe within the first four to twelve hours after it starts.
“After the most severe pain subsides, some joint discomfort may last from a few days to a few weeks. Later attacks are likely to last longer and affect more joints,” Mayo says.
What causes it? Let’s turn to Medicinenet.com for their explanation: “Gout is caused by too much uric acid in the bloodstream and accumulation of uric acid crystals in tissues of the body. Uric acid crystal deposits in the joint cause inflammation of the joint leading to pain, redness, heat, and swelling.
“Uric acid is normally found in the body as a byproduct of the way the body breaks down certain proteins called purines. Causes of an elevated blood uric acid level (hyperuricemia) include genetics, obesity, certain medications such as diuretics (water pills), and chronic decreased kidney function.”
There are four stages of gout. The first is asymptomatic hyperuricemia when the blood uric acid levels are high, crystals are forming in the joints, but you have no symptoms. D’uh. Asymptomatic.
The second is acute gout or gout attack which we’ve already described. Interesting to note, is that as many as 84 percent of sufferers may have another gout attack within three years. The third level is called interval gout which is the time between attacks. The patient has no pain, but the low-level inflammation is still there and damaging the joints.
“Chronic gout develops in people with gout whose uric levels remain high over a number of years,” Arthritis Foundation explains. “Attacks become more frequent and the pain may not go away as it used to. Joint damage may occur, which can lead to a loss of mobility. With proper management and treatment this stage is preventable.”
Without a flare-up it’s unlikely you would know if your uric acid levels are high. And, even if you knew that they are, it doesn’t guarantee that you’ll have a gout attack. Confusing? Add to that the fact that you can have low uric acid and still have a gout attack is really baffling.
Your healthcare provider will put you through some tests to diagnose whether or not you actually have gout. They include a blood test; a joint fluid test where a needle draws fluid from the affected area and examined under a microscope; x-rays; ultrasound and/or a CT scan.
There are medications today that can help ease the pain of the attack and to prevent future ones. Your medico will walk you through the choices available to you and discuss how you want to proceed. One thing is for certain, you will want to prevent another attack and that prevention will require you to make some lifestyle changes primarily in your diet.
“Reaching and maintaining a proper weight is an important part of managing gout. Not only does losing weight help reduce the uric acid in the blood, it can lessen the risk of heart disease or stroke, both common in people who have gout, Arthritis Foundation says. They also say that keeping active is important and that you and your medical team should make a plan of action, literally.
Gout used to be called the rich man’s disease because it can be exacerbated by eating lots of red meat, organ meats, shellfish such as shrimp and lobster, sugary beverages and excessive alcohol. If you’ve ever had, or thought you had a gout attack you’ve probably read all the info on what you can eat and drink to avoid another attack, if you’re interested search “gout diet” for lots of good advice.
Kathy Hubbard is a member of Bonner General Health Foundation Advisory Council. She can be reached at email@example.com. | <urn:uuid:884a0726-6246-4509-ad25-99d6fa555d19> | CC-MAIN-2023-40 | https://bonnergeneral.org/all-about-gout/ | s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233506429.78/warc/CC-MAIN-20230922234442-20230923024442-00237.warc.gz | en | 0.961673 | 994 | 2.546875 | 3 |
Gender identity is defined as "a person's sense, and subjective experience, of their own gender" (Wikipedia). Unlike your biological sex, which is the sex you were born as, gender identity does not have to mirror your sex. Striker mentions gender identity in the work "Transgender Story". Striker states, "For most people there is congruence between the category one has been assigned to and trained in. Transgendered people demonstrate that this is not always the case." (Striker 13) Striker's work also speaks of a "sense of one's self not of the gender role they are assigned to." (Striker 13) Gender identity is specific to each individual. It is not binary, nor is it limited to only male and female. Gender identity is also separate from an individual's sexuality.
Towle and Morgan bring up the term third gender in the “Romancing The Transgender Narrative”. In the reading, it is explained that third gender is a western concept that breaks away from the idea that there are two genders. Third gender is uniquely western and is used as more of a psychologized, medicalized, and moralistic term. The reading mentions that “western binary systems are not universal or innate.” Like gender identity, third gender acknowledges that transgendered people do not have to identify with the sex they were born as, or accept the gender role they were assigned to. Although third gender addresses that a dichotomous gender system does not work, it is a blanket term and very general. Third gender does not acknowledge the many variations of identity outside of male and female. This takes away the individuality of those who fall under the third gender category. Third gender excludes many identities, and not everyone who is neither innately male nor female will identify with the term third gender.
One issue I have with third gender is that when you use one blanket term to categorize a certain type of people, those who fall under that category will be judged, stereotyped, and generalized. When you categorize and label a certain type of people as being of the third gender, they are subject to discrimination whether or not they themselves identify with third gender. Any image established with third gender, good or bad, will be associated with everyone who falls under the third gender category. Aside from the generalizations associated with third gender, it is important to understand that gender identity is a concept that goes much deeper than simply male, female, and third gender. It is important to continue to educate people in the western world on gender identity opposed to third gender.
It's great that more and more information on transgender studies is becoming available. Old and dated information is constantly being reviewed, revised, and adapted to fit present-day society and culture. I look forward to learning more about gender identity in class.
Everyone could use another tool in the toolbox when it comes to handling stress. So the next time you’re feeling stress or anxiety, center yourself with one of these breathing techniques. Most people feel improvements in as little as four to six breath cycles.
1. Resonance breathing
True resonance breathing is inhaling for six seconds and exhaling for six seconds. But if this is too hard, try inhaling for four seconds and exhaling for six seconds, or five seconds for both. And work up to the six-second mark. The main thing is to simply focus on exhaling longer than your inhale.
2. “Bee breath” or bhramari from pranayama yoga
For this technique, find a comfortable position and close your eyes and mouth and relax your lips, jaw, and base of your tongue. Then take a slow, controlled breath through your nose. Exhale through your nose while making a humming sound.
You can even try humming higher or lower pitches to see how that changes the effect. You don’t have to worry about a specific cadence or count. Just focus on slowing your breath and extending your exhales longer than your inhales.
3. Box breathing
Just think of a box with four sides. Start with a slow inhale through your nose for four seconds. Hold your breath for four seconds. Exhale slowly through your nose or mouth for four seconds. Then hold your breath for four seconds before inhaling and starting the pattern over again.
4. 6 – 4 – 10 breath
Remember to pause and notice your breath before you start. Then inhale for six seconds, hold for four seconds, and exhale for 10 seconds. Work on making that exhale nice and long.
Disclaimer: If your breathing rate is 20 times per minute or higher, consult a physician. People who have low blood pressure or are on medication to lower it, people with diabetes, and pregnant women need to exercise caution with breathing exercises. Slow, deep breathing exercises are not recommended for people with very low blood pressure or for anyone prone to fainting. | <urn:uuid:1849fe23-ae56-4b3c-b782-2fba696ac03b> | CC-MAIN-2023-14 | https://www.mercyfitness.net/recovery-take-a-breather-to-reduce-your-stress/ | s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945315.31/warc/CC-MAIN-20230325033306-20230325063306-00379.warc.gz | en | 0.936161 | 436 | 2.828125 | 3 |
This book offers a new interpretation of the spatial-political-environmental dynamics of water and irrigation in long-term histories of arid regions. It compares ancient Southwest Arabia (3500 BC-AD 600) with the American West (2000 BC-AD 1950) in global context to illustrate similarities and differences among environmental, cultural, political, and religious dynamics of water. It combines archaeological exploration and field studies of farming in Yemen with social theory and spatial technologies, including satellite imagery, Global Positioning System (GPS), and Geographic Information Systems (GIS) mapping. In both ancient Yemen and the American West, agricultural production focused not where rain-fed agriculture was possible, but in hyper-arid areas where massive state-constructed irrigation schemes politically and ideologically validated state sovereignty. While shaped by profound differences and contingencies, ancient Yemen and the American West are mutually informative in clarifying human geographies of water that are important to understandings of America, Arabia, and contemporary conflicts between civilizations deemed East and West.
This book presents the results of the extensive excavation of a small, rural village from the period of emerging cities in upper Mesopotamia (modern northeast Syria) in the early to middle third millennium BC. Prior studies of early Near Eastern urban societies generally focused on the cities and elites, neglecting the rural component of urbanization. This research represents part of a move to rectify that imbalance. Reports on the architecture, pottery, animal bones, plant remains, and other varieties of artifacts and ecofacts enhance our understanding of the role of villages in the formation of urban societies, the economic relationship between small rural sites and urban centers, and status and economic differentiation in villages. Among the significant results are the extensive exposure of a large segment of the village area, revealing details of spatial and social organization and household economics. The predominance of large-scale grain storage and processing leads to questions of staple finance, economic relations with pastoralists, and connections to developing urban centers.
This volume assembles scholars working on cuneiform texts from different periods, genres, and areas to examine the range of social, cultural, and historical contexts in which specific types of texts circulated. Using different methodologies and sources of evidence, these articles reconstruct the contexts in which various cuneiform texts circulated, providing a critical framework to determine how they functioned.
Legal texts recording the purchase or exchange of entire settlements are among the most important cuneiform tablets discovered at Old Babylonian/Middle Bronze Age (Level VII) Alalah. Following the Man of Yamhad is the first book-length study of these legal texts and the socio-economic practice that they document. The author explores the nature of the alienated settlements, the rights enjoyed by their owners, the underlying system of land tenure, and the larger political context in which the transactions occurred. The study is supported by extensive collations and up-to-date editions of relevant legal and administrative texts. Its conclusions will be of interest to anyone working on the history, society, and economy of the Bronze Age Near East.
Communities of Style examines the production and circulation of portable luxury goods throughout the Levant in the early Iron Age (1200–600 BCE). In particular it focuses on how societies in flux came together around the material effects of art and style, and their role in collective memory.
This volume publishes the proceedings of the Theban Symposium that took place in May 2010, in Granada, Spain, at the Institute for Arabic Studies of the Spanish National Research Council (CSIC), on the general theme of “Creativity and Innovation in the Reign of Hatshepsut.” The volume contains nineteen papers that present new perspectives on the reign of Hatshepsut and the early New Kingdom. The authors address a range of topics, including the phenomenon of innovation, the Egyptian worldview, politics, state administration, women’s issues and the use of gender, cult and rituals, mortuary practices, and architecture.
Groundbreaking for the study of Hatshepsut’s reign and the beginning of the Eighteenth Dynasty, this volume will become an important reference for scholars and lay readers interested in the history, culture, and archaeology of the time of Hatshepsut and the early New Kingdom.
Critical Approaches to Ancient Near Eastern Art concentrates on the visual, material, and built aspects of the Ancient Near East from the fourth millennium BCE to the Hellenistic period. Presenting innovative theoretical approaches to Ancient Near Eastern art history, this volume will be of value to scholars of the Ancient Near East, as well as to those interested in contemporary art historical and anthropological approaches to visual culture.
Mapping Archaeological Landscapes from Space offers a concise overview of air and spaceborne imagery and related geospatial technologies tailored to the needs of archaeologists. Leading experts including scientists involved in NASA’s Space Archaeology program provide technical introductions to five sections: Historic Air and Spaceborne Imagery, Multispectral and Hyperspectral Imagery, Synthetic Aperture Radar, Lidar, Archaeological Site Detection and Modeling.
Each of these five sections includes two or more case study applications that have enriched understanding of archaeological landscapes in regions including the Near East, East Asia, Europe, Meso- and North America. Targeted to the needs of researchers and heritage managers as well as graduate and advanced undergraduate students, this volume conveys a basic technological sense of what is currently possible and, it is hoped, will inspire new pioneering applications.
Particular attention is paid to the tandem goals of research (understanding) and archaeological heritage management (preserving) the ancient past. The technologies and applications presented can be used to characterize environments, detect archaeological sites, model sites and settlement patterns and, more generally, reveal the dialectic landscape-scale dynamics among ancient peoples and their social and environmental surroundings. In light of contemporary economic development and resultant damage to and destruction of archaeological sites and landscapes, applications of air and spaceborne technologies in archaeology are of wide utility and promoting understanding of them is a particularly appropriate goal at the 40th anniversary of the World Heritage Convention.
The occurrence of textual variation is a significant but frequently neglected aspect of the study of Sumerian literary compositions. The correct evaluation of textual variants and the proper understanding of how and why they occur is essential to producing reliable editions of such texts. Such explorations also provide invaluable evidence for the written transmission of Sumerian literary works and a wealth of data for assessing aspects of Sumerian grammar. Drawing from a detailed analysis of the different types of textual variants that occur in the numerous duplicates of a group of ten compositions known collectively as the Decad, this book aims to provide a much needed critical methodology for interpreting textual variation in the Sumerian literary corpus which can be applied to editing and analysing these compositions with improved accuracy.
What is sacrifice? How can we identify it in the archaeological record? And what does it tell us about the societies that practice it? Sacred Killing: The Archaeology of Sacrifice in the Ancient Near East investigates these and other questions through the evidence for human and animal sacrifice in the Near East from the Neolithic to the Hellenistic periods. Drawing on sociocultural anthropology and history in addition to archaeology, the book also includes evidence from ancient China and a riveting eyewitness account and analysis of sacrifice in contemporary India, which engage some of the key issues at stake. Sacred Killing vividly presents a variety of methods and theories in the study of one of the most profound and disturbing ritual activities humans have ever practiced.
The manuscript consists of seven papers presented at the Theban Workshop, 2006. Within the temporal and spatial boundaries indicated by the title, the subjects of the papers are extremely diverse, ranging from models of culture-history (Manning and Moyer), to studies of specific administrative offices (Arlt), a single statue type (Albersmeier), inscriptions in a single temple (DiCerbo/Jasnow, and McClain), and inscriptions of a single king (Ritner). Nonetheless, all the papers are significant contributions to scholarship, presenting new interpretations and conclusions. Two papers (DiCerbo/Jasnow and McClain) are useful preliminary reports on long-term projects. The cross-references in Arlt's and Albersmeier's papers, and in Manning's and Moyer's, attest to the value added by presentation at the workshop.
This volume presents a series of papers delivered at a two-day session of the Theban Workshop held at the British Museum in September 2003. Due to its political and religious prominence throughout much of pharaonic history, the region of ancient Thebes offers scholars a wealth of monuments whose physical remains and extant iconography may be combined with textual sources and archaeological finds in ways that elucidate the function of sacred space as initially conceived, and which also reveal adaptations to human need or shifts in cultural perception. The contributions herein address issues such as the architectural framing of religious ceremony, the implicit performative responses of officiants, the diachronic study of specific rites, the adaptation of sacred space to different uses through physical, representational, or textual alteration, and the development of ritual landscapes in ancient Thebes.
From the Euphrates Valley to the southern Peruvian Andes, early complex societies have risen and fallen, but in some cases they have also been reborn. Prior archaeological investigation of these societies has focused primarily on emergence and collapse. This is the first book-length work to examine the question of how and why early complex urban societies have reappeared after periods of decentralization and collapse.
Ranging widely across the Near East, the Aegean, East Asia, Mesoamerica, and the Andes, these cross-cultural studies expand our understanding of social evolution by examining how societies were transformed during the period of radical change now termed “collapse.” They seek to discover how societal complexity reemerged, how second-generation states formed, and how these re-emergent states resembled or differed from the complex societies that preceded them.
Art and international relations during the Late Bronze Age formed a symbiosis as expanded travel and written communications fostered unprecedented cultural exchange across the Mediterranean. Diplomacy in these new political and imperial relationships was often maintained through the exchange of lavish art objects and luxury goods. The items bestowed during this time shared a repertoire of imagery that modern scholars call the first International Style in the history of art.
The composition, which the editors entitle the “Book of Thoth”, is preserved on over forty Graeco-Roman Period papyri from collections in Berlin, Copenhagen, Florence, New Haven, Paris, and Vienna. The central witness is a papyrus of fifteen columns in the Berlin Museum. Written almost entirely in the Demotic script, the Book of Thoth is probably the product of scribes of the “House of Life”, the temple scriptorium. It comprises largely a dialogue between a deity, usually called “He-who-praises-knowledge” (presumably Thoth himself) and a mortal, “He-who-loves-knowledge”. The work covers such topics as the scribal craft, sacred geography, the underworld, wisdom, prophecy, animal knowledge, and temple ritual. Particularly remarkable is one section (the “Vulture Text”) in which each of the 42 nomes of Egypt is identified with a vulture. The language is poetic; the lines are often clearly organized into verses. The subject-matter, dialogue structure, and striking phraseology raise many issues of scholarly interest; especially intriguing are the possible connections between this Egyptian work, in which Thoth is called “thrice-great”, and the classical Hermetic Corpus, in which Hermes Trismegistos plays the key role. The first volume comprises interpretative essays and discussion of specific points such as the manuscript tradition, script, and language. The core of the publication is the transliteration of the Demotic text, translation, and commentary. A consecutive translation, glossary, bibliography, and indices conclude the first volume. The second volume contains photographs of the papyri, almost all of which reproduce their original size.
This book is the first comprehensive presentation of the archaeology of Syria from the end of the Paleolithic period to 300 BC. Although Syria has been the focus of intensive excavations for decades, no large-scale review of the results of these excavations has ever appeared until now. Syria is one of the prime areas of excavation and archaeological field work in the Middle East, and Peter Akkermans and Glenn Schwartz outline the many important finds yielded by Syria, before providing their own perspectives and conclusions.
This volume accompanies an exhibition of the same name, which includes artefacts from nearly 2000 years before the Christian era. Objects such as coffins, tombs, masks, jewellery, papyri, sarcophagi and monumental and small-scale sculpture reveal the reverence and awe with which the Egyptians considered the mystery of death. The essays in this book explore Egyptian art history, customs and worship, with specific focus on the Amduat, a book devoted to the pharaoh’s 12-hour journey to the afterlife. Additional writings detail the background of the collection and focus upon the role of art in ancient Egypt.
Volumes in Writings from the Ancient World provide teachers, literary critics, historians, general readers, and students direct access to key ancient Near Eastern writings that date from the beginning of the Sumerian civilization to the age of Alexander the Great. Volumes typically offer historical and literary background to the writings, the original text and English translation, explanatory or textual notes, and a bibliography.
The SBL Press WAW editorial board is led by series editor Theodore J. Lewis.
An introduction to the goals and methods of textual criticism of the Bible, intended to give students of Hebrew the necessary tools to study the text. The principles of textual criticism are explained in terms of both their usefulness and their limitations, and are illustrated with examples from the Bible.
Creditors have always sought the protection of the law to secure themselves against loss if the debtor cannot or will not pay the debt. This volume examines the legal instruments of security available to creditors in the earliest known legal systems, their use and abuse, and the ways in which the law sought to satisfy the differing interests of creditors, debtors, and society in general, with varying degrees of success. The book covers all the major legal systems of the ancient Near East, from Sumer to Ptolemaic Egypt, as well as comparative historical developments up to the present day. Twelve scholars have each contributed a study of their special period of expertise, while the general issues that arise from their research are discussed in a concluding chapter.
More than 500 years before the Odyssey and the Iliad, before the biblical books of Genesis or Job, masters of the epic lived and wrote on the Mediterranean coast. The Ugaritic tablets left behind by these master scribes and poets were excavated in the second quarter of the 20th century from the region of modern Syria and Lebanon, and are brought to life here in contemporary English translations by five of the best known scholars in the field. Included are the major narrative poems, “Kirta,” “Aqhat,” and “Baal,” in addition to 10 shorter texts, newly translated with transcriptions from photographs using the latest techniques in the photography of epigraphic materials (sample plate included).
The book presents the publication of papers delivered at a conference held in 1991 at Johns Hopkins honoring the centennial of the birthday of William F. Albright, on the subject of the future of ancient Near Eastern Studies in the 21st century CE. New ways of considering data, likely challenges to the field, and suggestions for new paths to follow are provided by scholars discussing a wide variety of ancient Near Eastern cultures, methods, and intellectual approaches.
Essays in Egyptology in honor of Hans Goedicke, edited by Betsy M. Bryan and David Lorton.
Working against the traditional focus of archaeology on the urban and elite, this volume presents a set of studies that focus on rural settlements and rural life during the formation and early history of urban societies in both the ancient Near East and Mesoamerica. The papers discuss the role that villages played in the development of urban societies, the emergence and character of social complexity in rural communities, and the changes in those communities during periods of urbanization.
Published in conjunction with the exhibition held at the Cleveland Museum of Art, July 1–September 27, 1992; the Kimbell Art Museum, Fort Worth, October 24, 1992–January 31, 1993; and the Galeries nationales du Grand Palais, Paris, March 2–May 31, 1993.
This booklet stresses the value of various academic studies (e.g., history, language, art, archaeology) as prerequisites for a career in Egyptology, by depicting real women whose careers provide inspirational role models. The first section is a text designed for use by elementary students and presents the career of Egyptology from a woman’s point of view. Both female and male students are encouraged to view Egyptology as a potential career choice. The second section provides the teacher with three lesson plans for classroom use. The lesson plans are aimed at exploring: (1) the processes involved in archaeology; (2) Egyptian art; and (3) the relationship between ancient Egyptian funerary practices and beliefs. Each lesson format includes a purpose or objective, materials, procedures, and conclusions.
The fully illustrated catalogue of a major exhibition organized by the Cleveland Museum of Art in collaboration with the Reunion des Musees Nationaux, Paris, Egypt’s Dazzling Sun is an exceptional contribution to scholarship on the art and history of the reign of Amenhotep III (1391-1353 BC), the pharaoh who called himself the “Dazzling Sun Disk.” Ruling in a period of unprecedented peace, Amenhotep III commissioned splendid temples and sponsored royal workshops in many media. His aesthetic and technical innovations resound in the styles of his direct descendant, Tutankhamen, and in Egyptian art of all centuries. Comprehensive essays along with discussions of 143 objects, drawn from collections in the United States, Europe, and Egypt, offer a remarkably complete view of this golden age of Egyptian art. A range of new research methodologies assist in unveiling the remarkable variety and superb quality of the best work of Amenhotep III’s reign.
Bryan’s account of the reign of Thutmose IV, King of Egypt in the early 14th century BC, is derived largely from inscriptions and decorations found in temples and tombs. His reign is presented in six chapters on the length of his reign, his position as heir apparent before his accession, the female members of the royal family, royal monuments, people employed by Thutmose, and the major historical issues of his reign. Extensive bibliographies appear at the conclusion of each chapter.
Four lectures presented at a symposium sponsored by the Resident Associate Program, Smithsonian Institution, on October 27, 1990. Speakers and lecture titles include: Hershel Shanks, “The Excitement Lasts: An Overview”; James C. VanderKam, “Implications for the History of Judaism and Christianity”; P. Kyle McCarter, Jr., “The Mystery of the Copper Scroll”; and James A. Sanders, “Understanding the Development of the Biblical Text.” An 11-page panel discussion follows the lectures.
This book presents a detailed quantitative analysis of pottery from a 2000-year sequence of strata excavated at Tell Leilan in northeastern Syria, from the Ubaid period (ca. 4500 BC) to the mid-third millennium BC. Using statistical techniques and qualitative studies, the book sets forth a system of chronological periods and presents the characteristic pottery types of each, intended as a chronological framework for subsequent research in the region.
II Samuel completes P. Kyle McCarter, Jr.’s study of the book of Samuel. Based upon the introduction and commentary of his first volume, McCarter continues the discussion of textual and literary sources as they relate to a reconstruction of historical events.
A key issue for McCarter is accounting for the historical circumstances that led to the composition of the book of Samuel. In dialogue with major schools of thought pertaining to the origin and transmission of the book, the author offers his scholarly opinions on its composition. McCarter presents a unique new translation based upon the latest and most extensive textual sources available, including scrolls and fragments from Qumran. Furthermore, he resolves the complicated textual history of Samuel. | <urn:uuid:60d05b92-7172-4622-b723-b77eae56e60a> | CC-MAIN-2018-17 | http://neareast.jhu.edu/about/faculty-books | s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125945724.44/warc/CC-MAIN-20180423031429-20180423051429-00087.warc.gz | en | 0.932228 | 4,334 | 2.984375 | 3 |