How Do Rotary Screw Air Compressors Work?
In modern industrial machinery, screw compressors are one of the most widespread technologies. Known for their reliability and flexibility, screw compressors are the driving force behind many industrial processes and applications.
There are two basic principles of compression in air compressors.
Rotary Screw Air Compressor
One of them is the principle of positive displacement. Many types of compressors use this method, with screw compressors being the most popular.
As the name suggests, screw compressors use a rotary motion to compress air. The compressor has a pair of male and female rotors, designed so that when they rotate in unison, air is drawn in between them.
The male rotor has convex lobes and the female rotor has concave cavities; in this way, they can mesh without contact to achieve compression.
How rotary screw compression works, step by step
• The inlet valve opens to draw gas into the compression chamber. The two screw rotors sit in the chamber; when the motor runs, they rotate at high speed.
• As the rotors turn, they trap and isolate air in the cavities between them, moving it down the chamber.
• Each trapped pocket of air shrinks as it moves away from the inlet valve, and pressure rises as volume decreases (a rough numeric sketch of this pressure-volume relationship follows this list).
• The rising air pressure opens the compressor's discharge valve, and the compressed air flows to a receiver or other storage tank.
• From there, the compressed air can be sent to downstream devices such as dryers and oil/water separators to dry it and remove impurities.
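To make the "pressure rises as volume decreases" step concrete, here is a minimal numeric sketch. It is illustrative only and not from the article: it assumes ideal adiabatic (isentropic) compression of air with a heat-capacity ratio of 1.4, whereas a real oil-injected screw compressor runs cooler and closer to polytropic behaviour.

```python
# Illustrative sketch: discharge pressure of an ideal adiabatic compression,
# using P1 * V1**gamma == P2 * V2**gamma. Assumes air is an ideal gas with
# gamma = 1.4 (an assumption; real screw compressors behave polytropically).

GAMMA = 1.4  # heat-capacity ratio of air

def discharge_pressure(p_inlet_bar: float, compression_ratio: float) -> float:
    """Pressure after a rotor cavity shrinks by `compression_ratio` (V1/V2)."""
    return p_inlet_bar * compression_ratio ** GAMMA

# Example: atmospheric air (1.0 bar) squeezed to a quarter of its volume.
print(f"{discharge_pressure(1.0, 4.0):.2f} bar")  # ~6.96 bar
```

Squeezing a cavity to a quarter of its volume raises the pressure roughly sevenfold in this idealised model, which is the shrinking-cavity effect the steps above describe.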
Find Out What Makes A Good Air Compressor Pump
An air compressor pump is essential in modern industry, where machines have changed the mode of production and how work gets done. Almost all of these machines have gone through a series of evolutions to meet the changing needs of many industries.
Like most other machines, air compressors come in types that serve a particular industry or even a hobby. This evolution has not just diversified air compressor pumps; it has made them more reliable as well.
Make sure that your air compressor pump is the most appropriate one for your tank. This way you can use your equipment to the fullest and avoid injuries at work. It is important to identify the best pump for the nature of your work, because there are many types of air compressor pump available.
The most common is the piston compressor. It is a positive displacement compressor, available from fractional horsepower up to very high horsepower. A positive displacement air compressor works by filling a space with air and then reducing the volume of that space.
Other types that fall under the positive displacement category are reciprocating, rotary screw, and rotary sliding vane compressors. A positive displacement compressor such as a piston compressor works much like an internal combustion engine in reverse.
The other form is the non-positive displacement compressor; the centrifugal compressor falls under this category.
Most industries prefer industrial air compressors for their convenience, but homeowners can also keep air compressor pumps around for their own needs. There is a wide range of portable air compressors available.
|
Where cats belong–and where they don’t
From ANIMAL PEOPLE, June 2003:
KISSIMMEE, Florida–Depending on who you listen to, the
Florida Fish & Wildlife Conservation Commission either declared war
on feral cats at a May 30 meeting in Kissimmee, or clarified their
position that they have no intention of so doing.
Claiming the support of the American Bird Conservancy,
National Audubon Society, and National Wildlife Federation, Florida
Wildlife Division director Frank Montalbano talked like a man going
to war in a March interview with Orlando Sentinel outdoors writer Don Wilson.
“We estimate there are 5.3 million feral and free-ranging
domestic cats in the state,” Montalbano said. “We’re going to take
an aggressive policy toward eliminating the feral cat impact on lands
this agency manages. Cats roaming free in wildlife management areas
will be taken into captive management or euthanized. We may have to
get involved in euthanasia,” Montalbano reiterated, “in situations
where [nonprofit] corporations are maintaining colonies of feral cats
near populations of native endangered species.”
Montalbano, said Wilson, “was referring to a group of cats kept by
condominium owners on Key Largo, home of the Key Largo wood rat.”
Montalbano’s remarks touched off a furor, especially in
south Florida, where trap/neuter/return of feral cats, called TNR
for short, has taken hold in a big way.
Data developed separately by the FFWCC and by University of
Florida at Gainesville researcher Julie Levy agrees that Florida now
has 2.7 million to 2.8 million feral cats, amounting to 44% of the
total cat population–about twice as many cats per 1,000 human
residents and twice as high a percentage of ferals as the current
U.S. norms. The Florida climate enables cats to go through two and
even three successful breeding cycles per year, against the norm of
one in the snowbelt states.
Yet Florida used to have even more feral cats.
Since local TNR programs began in south Florida during the early
1990s, animal control killing per 1,000 human residents has dropped
by half, and reductions in the numbers of cats killed are believed
to account for most of the improvement.
In 2001, for instance, all shelters combined in the Fort
Lauderdale/Miami corridor killed 14.1 cats and dogs per 1,000
humans, less than the national average of 15.7, and down from 33.0
per 1,000 as recently as 1997.
In Tampa, where TNR has not taken hold, shelters
collectively killed 32.4 cats and dogs per 1,000 humans in 2001. St.
Petersburg, right across Tampa Bay, with several active TNR groups,
killed 13.7.
Florida Fish and Wildlife Conservation Commission field
biologist Dwayne Carbonneau and northeast regional director Dennis
David followed Montalbano in spooking cat people when in mid-May they
accidentally left a conversation between them on the answering
machine of Alley Cat Allies in Washington D.C.
Neither realized that David’s telephone was still connected
to the answering machine, after David left a brief message about the
May 30 FFWCC meeting.
“Should I wear my uniform when I’m shooting these
neighborhood cats?” asked Carbonneau.
“Only after we adopt this policy,” David said.
Policy adopted
Softening their initial tone as cat defenders bared their
claws, but perhaps using doublespeak, the FFWCC unanimously voted
“To pursue staff recommendations and all of the strategies outlined
and to oppose TNR only when it is a threat to native wildlife and
then in the most socially acceptable way we can.”
The approved strategies include:
* “A comprehensive education program to increase public
awareness of the impacts that feral and free-ranging cats present to
wildlife,” which feral cat advocates read as mounting an anti-cat
propaganda blitz.
* Identifying “ways for cat owners to minimize impacts,”
meaning keeping cats indoors.
* Informing “cat owners of laws prohibiting the release or
abandonment of cats to the wild,” read by many TNR practitioners as
an attempt to legally define them as owners and arrest them. This
tactic has failed when attempted in other states.
* Eliminating “the threat cats pose to the viability of
local populations of wildlife, particularly species listed as
endangered, threatened or of special concern,” perhaps hinting at
an escalation of catch-and-kill.
* Prohibiting “the release, feeding or protection of cats
on lands managed by the Florida Wildlife Commission and strongly
opposing programs and policies that allow the release, feeding or
protection of cats on public lands that support wildlife habitat.”
This much was already public policy and is also the policy of the
National Park Service, U.S. Fish & Wildlife Service, U.S. Navy,
U.S. Postal Service, and other federal agencies.
* Providing “technical advice, policy support and
partnerships to land management agencies in order to prevent the
release, feeding or protection of cats on public lands that support
wildlife habitat,” read by TNR practitioners as a mandate for
creating an interagency cat extermination force.
* Opposing creation and supporting “elimination of TNR
colonies and similar managed cat colonies wherever they potentially
and significantly impact local wildlife populations,” which some TNR
practitioners read as meaning anywhere, although the phrase
“potentially and significantly” leaves room for tolerating low-level
predation on rodents and common bird species in developed areas,
where other predators such as coyotes and gulls are either few or absent.
* Evaluating “the need for new rules to minimize the impacts
of cats on native wildlife.”
The FFWCC tried to mollify cat defenders by stating that it
“is not making drastic plans to kill cats; rather it is looking to
employ the least-restrictive methods possible to accomplish the
agency’s mission to protect wildlife.”
The FFWCC also indicated that it would not take the active
role that some cat advocates fear in conducting feral cat roundups:
“Commissioners agreed that local governments have the primary
responsibility for managing domestic animals, including cats, and
the FFWCC will concentrate its efforts on coordinating with them and
other affected parties.”
In other words, catch-and-kill on land not under direct FWC
management is still delegated to local animal control agencies,
whose policies and activities are still under the direction of local
elected officials.
Elaborated FFWCC spokesperson Joy Hill to Associated Press
writer Mike Schneider, “We’re not forming a cat Nazi-patrol. That’s
not what this is about. It’s about protecting wildlife.”
Skeptical, Alley Cat Allies challenged the new FFWCC policy with a
June 10 lawsuit.
How great a difference the new FFWCC policy will actually
make remains to be seen. Although it lends itself to extremes of
interpretation, it really does little more than restate the
longstanding perspectives and policies of wildlife agencies all over
the U.S.
It also marks the first major state level escalation of a
policy debate already underway in communities with both active TNR
programs and active birders who blame cats for declines of
ground-nesting birds and songbirds. Friction over the alleged impact
of feral cats on a small reintroduced population of California
quail in Golden Gate Park has raged for more than a decade.
A similar confrontation in Akron, Ohio, brought the
extermination of 969 cats trapped by cat-unfriendly residents during
the latter half of 2002.
While the Florida debate was underway, comparable
resolutions were under discussion in Oakland, Michigan, and
Richmond, Indiana.
Maverick Cats
Few cities and counties and even fewer states have existing
written feral cat policies because historically feral cats were not
recognized as a presence, much less a problem. Feral cats were not
covered in the model animal control ordinances circulated by national
animal advocacy groups as recently as the early 1990s; there is no
corpus of common law pertaining to them; and felis catus, their
species, is not even mentioned in the Bible, even though cats were
and are native to the Middle East.
Recognition of the existence of feral cats in great numbers
may be traced to the 1982 first publication of Maverick Cats, by
Ellen Perry Berkeley.
Feral cats at the time were still generally seen–if seen at
all–as a rural phenomenon, haunting dairy barns where they hunted
mice in haylofts and begged for milk.
Urban feral cats were presumed to be strays, and urban cats
dumped in rural habitat were believed to have a very low survival
rate. At Tilden Park in the hills above Berkeley, California, for
example, the ranger lecture given to visiting schoolchildren during
the 1960s and early 1970s included inspecting cat bones and hearing
about how cruel it was to dump unwanted cats to “give them a chance”
because a typical urban cat could not catch enough mice and birds to
feed herself.
Discussion of the possible impact of feral cats on rare
resident birds and reptiles was added after the passage of the
federal Endangered Species Act in 1973.
The Walt Disney film Lady & The Tramp (1955) marked the
apparent turning point in a battle begun with the passage of the
first U.S. animal control ordinances to persuade Americans to confine
dogs at home and have them wear identity collars. The popularity of
the film apparently accomplished what more than 200 years of
municipal dog-catching and 100 years of humane society lecturing had
not. Within the next 25 years allowing dogs to run at large passed
from being the American norm to being a socially unacceptable act in
most parts of the country, but not even Ellen Perry Berkeley seems
to have given thought to what the disappearance of free-roaming dogs
might mean to feral cats.
What happened was that confining dogs opened habitat and
diurnal hunting and travel opportunities to a self-sustaining cat
population who until then had been confined to places where dogs
could not go, hunting and traveling mostly by night.
Coyotes, foxes, raccoons, deer, and opossums also took
advantage of the absence of dogs to claim urban territory, but cats
had the dual advantages of already being there, albeit mostly
unseen, and of having by far the greatest fecundity, enabling them
to rapidly breed up to approximately the same biomass as the dogs
whose jobs as refuse raiders and rodent-catchers they took over.
Between 1960 and 1985, available records indicate, the
numbers of “stray” cats killed by U.S. animal control agencies
approximately tripled, even as dog intake leveled off and began to decline.
In gist, each free-roaming dog weighing 30 pounds on average
was replaced by three 10-pound cats.
Feral cats became the most abundant and reproductively
prolific mammalian predator/scavenger in the urban environment.
That in turn brought feral cats to the attention of animal
advocates and wildlife researchers.
“Fewer than a dozen research papers [about feral cats] had
been published by the mid-1970s,” recalls Ellen Perry Berkeley in
the new final chapter of a 2001 reissue of Maverick Cats. “We now
have more than 20 times that number.”
Most of the new studies focus on the relatively obvious
predatory role of outdoor cats, but a few researchers have also
recognized the importance of cats as prey.
Coyotes and foxes often take urban habitat niches from feral
cats by force. A 1998 study by the late Martha Grinder (killed in a
1999 car accident) and Paul Krausman, of the University of Arizona
in Tucson, found that feral cats were among the main prey of urban
coyotes. A 1999 study by Kevin Crooks and Lee McClenaghan, of San
Diego State University, affirmed the Grinder/Krausman work by
discovering cat remains in 21% of the coyote scats they found in
canyons near San Diego.
As hawks, owls, and eagles recovered from the reproductive
depression of the 1950s through the 1970s caused by exposure to the
pesticide DDT, many species–including bald eagles–surprised
ornithologists by thriving as readily in some cities as out in the
wild. Cats, it seems, have also become a big part of urban
raptors’ prey base.
The common view of cats as a top predator in the wildlife
food pyramid because they are wholly carnivorous is true of most wild
species, but not of felis catus, who shares with coyotes the
distinction of being among the few predators with the fecundity of a
prey species.
During the peak years of the U.S. government Animal Damage
Control coyote-killing campaigns of the 1950s through the 1970s,
biologists found that the average coyote litter size in Texas grew
from four pups to seven. This occurred because the intense ADC
hunting pressure on coyotes shifted the odds of pup survival from
favoring the pups who got the most maternal care to favoring the
offspring of the coyote mothers who could produce the greatest
abundance of pups, among whom some might elude the killers.
In addition, with food competition artificially reduced,
the coyotes wily enough to survive were able to feed more pups.
The ancestors of felis catus were chiefly the African desert cat,
with some apparent genetic input from the Pallas cat of Asia Minor
and the closely related Scots wildcat and Norwegian skaucat. All are
still capable of hybridizing with felis catus, but all normally bear
just two kittens. That was also true of the felis catus specimens
who were mummified by the ancient Egyptians circa 4,000 years ago,
and was probably still true of felis catus as recently as the 14th century.
Between 1334 and 1354, however, bubonic plague killed up to
75% of the human population of Europe and Asia. Brought to Europe by
flea-infested black rats who stowed away aboard the vessels of
Crusaders returning from the Middle East, the so-called Black Death
attacked most virulently after terrified cities blamed it on
“witchcraft” and purged from their midst both the majority of people
who had medicinal skill (mostly older women) and their “familiars,”
mostly the cats who provided rat control.
Cat-eating was first reported in Guangzhou, China, in 1346,
putting the Asian population of felis catus under similar pressure,
continuing in much of China, Korea, and some other Asian nations to
this day.
Human predation on cats waned in Europe for several centuries
after the Black Death, but resurged during a British purge of
“witches” in 1665, just before The Great Plague of London.
Intensive human predation on felis catus in the Americas
peaked with the height of catch-and-kill animal control in the U.S.
during the 1970s–much of it done, then and now, by humane workers
who believe they are “euthanizing” helpless abandoned cats to save
them from suffering.
Regardless of motive, the effect on the feral cat population
replicates natural predation: the most frequent victims are the very
young and the least wary, while warier cats elude the killers if they can.
Responding to the intensified mortality, felis catus now
bears an average litter of four. Nearly seven centuries of killing
cats doubled the fecundity of the species.
Why TNR works
TNR is biologically effective in reducing cat numbers, while
predation is not, because it inhibits the reproductive potential of
the survivors. When at least 70% of the potential breeders in any
species, from viruses to advanced mammals, are vaccinated, or
sterilized, which amounts to vaccination against pregnancy, the
remainder have difficulty reproducing at more than the replacement
level. This is because the potentially reproductive population is
not only diminished, but also isolated: the remaining breeders are
scattered among specimens of the same species who hold habitat but
whose sterility is not evident.
Each vaccination or sterilization above 70% further reduces
the reproductive potential of the target species. The species can
even be eradicated, as smallpox was during the 1970s (at least
outside of laboratories), if there is not a favorable vacant habitat
into which the fecund few can expand and resume high-volume breeding.
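To see why 70% acts as a threshold, here is a toy projection, a minimal sketch rather than ANIMAL PEOPLE's own model. The litter size, litters per year, kitten survival, and adult mortality below are illustrative assumptions, loosely based on figures in this article and chosen so the break-even point lands near 70%.

```python
# Toy feral colony model (illustrative numbers, not ANIMAL PEOPLE's data).
# Assumptions: litters of 4, ~1.5 litters/year, 50% kitten survival,
# 45% adult annual mortality, half of all cats female.

KITTENS_PER_FEMALE = 4 * 1.5 * 0.5   # litter size * litters/yr * survival
ADULT_MORTALITY = 0.45               # assumed annual death rate

def project(colony: float, sterilized: float, years: int) -> int:
    """Colony size after `years`, holding the sterilized fraction constant."""
    for _ in range(years):
        births = colony * (1 - sterilized) / 2 * KITTENS_PER_FEMALE
        deaths = colony * ADULT_MORTALITY
        colony += births - deaths
    return round(colony)

for frac in (0.0, 0.5, 0.7, 0.9):
    print(f"{frac:.0%} sterilized: {project(100, frac, 5)} cats after 5 years")
```

Below the threshold the colony keeps growing despite the surgeries; near it, births and deaths roughly balance; above it, numbers fall through attrition, which is the pattern described above.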
If feral cats were to be eliminated from the U.S., hawks,
owls, eagles, foxes, and coyotes would eventually capture their
prey base–but feral cats reproduce at from two to six times the rate
of any of these rival predators. Until the rival predators are
numerous enough to eat any feral cats who try to reclaim a vacant
habitat niche, the animals most likely to fill open niches are more cats.
Critical to understand is that this is not a matter of cats
exercising territoriality. Few predators are more gregarious with
each other than felis catus. Even dominant toms who drive away rival
toms during mating season may befriend them outside of mating season.
Feral cats hold habitat niches by consuming the available food supply
and occupying the safe cover. They surrender habitat niches to other
predators through attrition, as the other predators become able to
take the niches away from them.
How many cats?
“Of the 73 million pet cats in the United States,” Heidi
Ridgley declared in the April 2003 edition of the National Wildlife
Federation membership magazine National Wildlife, “an estimated 40
million roam outside unsupervised. Throw in feral cats-the
unsocialized offspring of discarded or lost pets-and as many as 100
million cats are on the loose. ‘These cats could easily be killing
100 million songbirds a year,’ says Al Manville, wildlife biologist
at the U.S. Fish and Wildlife Service Migratory Bird Management Office.”
Ridgley succinctly presented the worst fears of birders and
conservationists about feral cats, but much of her information was
either outdated or contextually misplaced.
ANIMAL PEOPLE estimated in 1992 that there were about 26
million feral cats in the U.S. at the low end of the annual
population cycle in the depth of winter, and about 40 million at the
summer peak of kitten season.
These estimates were projected from information about the
typical numbers of cats found in common habitat types, gleaned from
a national survey of cat rescuers sponsored by Carter Luke of the
Massachusetts SPCA, and were cross-compared with animal shelter
intake data.
TNR was then just beginning to be practiced in the U.S., and
was not even called TNR yet.
After a decade of intensive TNR in much of the country, 40 million
is now very close to being the upper-end plausible estimate of all
free-roaming cats in the U.S., including both pets and ferals, and
then only at the height of “kitten season,” when about half of the
total feral cat population are still too young to hunt, with
approximately a 50% chance of living long enough to ever hunt.
In 1996, based on a follow-up survey of the same cat
rescuers who were polled in 1992, ANIMAL PEOPLE estimated that the
feral cat population had probably peaked in 1993 or 1994 before
beginning a downward trend.
ANIMAL PEOPLE projected the annual rate of decrease in the
feral cat population since peak at a maximum of 11% per year, if TNR
was performed with uniform vigor throughout the U.S.
ANIMAL PEOPLE also projected the maximum rate of assimilation
of feral cats into homes, over and above the historical rate of
about 25% found by many other researchers, as also being 11%.
Since 1994 the actual rates of decrease in the feral cat
population and of assimilation of feral cats as pets appear to have
been about half the maximum, because the maximum potential for using
TNR effectively has only been half realized. Thus the winter feral
cat population is now probably no more than 24 million.
Zero growth
There is indirect confirmation of these numbers from other sources.
The American Animal Hospital Association estimated in 1997, based on
veterinary client surveys, that there were about 59 million pet cats
in the U.S. One year later the American Pet Product Manufacturers
Association estimated that there were 63 million pet cats.
The parallel surveys have shown similar increases in the pet
cat population ever since. Currently the AAHA projects that there
are 78 million pet cats in the U.S., for a 32% rise in six years.
Yet even a decade ago separate studies by the Tufts
University Center for Animals & Public Policy, the Massachusetts
SPCA, and Karen Johnson of the National Pet Alliance found that the
owned cat population, including cats deliberately bred by the pet
industry, appeared to be reproducing at only 70% of their own
replacement level.
Even then, up to 85% of all pet cats had already been
sterilized, amounting to 60% of the estimated total U.S. cat
population of about 100 million.
The pet cat population was maintaining itself and growing
only through taming and adoption of ferals. Surveying 20,000
California households in the San Diego and San Jose areas during
1993-1994, Johnson learned that at least 28% of the cats kept as
pets were apparently born feral–a slight rise from the findings of
the Tufts and MSPCA studies, which were done in 1991, but
consistent with the trend reported by other researchers since 1981.
Johnson also learned that about 10% of all the surveyed
households fed feral cats, who also amounted to about 10% of the
total cat population, and that about 9% of the feral cats had been sterilized.
Overall, 64% of the San Diego and San Jose cats could no
longer reproduce, bringing the total cat population close to the 70%
threshold for zero growth.
No comparable surveys have been done in the rest of the U.S.
yet, but as of 1996, according to American Veterinary Medical
Association data, the number of pet cats in the U.S. acquired from
all sources and the number of cat sterilization surgeries performed
balanced, at 8.4 million of each.
At that point the pet cat population could no longer
reproduce at even replacement level. Up to a third of all pet cats
now appear to be recruited from the feral population–and the volume
of sterilizations performed each year may exceed recruitment.
The bottom line is that while the pet cat population has
grown by 32%, the total cat population, ferals included, is still
no more than the 100 million who inhabited the U.S. in 1992, and is
very likely less.
How many birds?
The estimate of feline predation on birds at about 100
million per year that Al Manville gave to National Wildlife, at
approximately one per cat, is probably low. It is certainly a much
more conservative projection than most.
In early 2000, in perhaps the most thorough study of cat
predation on birds to date, albeit analytically flawed, Carol Fiore
of the Wichita State University Department of Biological Sciences put
the annual pet cat toll on birds in the U.S. at anywhere from 134
million, if half of all pet cats roam (about 34 million), to 269
million, if every pet cat roams.
Fiore did not try to estimate the numbers of birds killed by
feral cats, but even her lower estimate markedly overprojected the
number of owned cats who are allowed to roam. This happened because
Fiore decided, based on a survey of Wichita residents, that about
half of all cat-keepers allow their cats to roam, and presumed that
could be extrapolated to mean that half of all pet cats roam.
ANIMAL PEOPLE has much more extensive data about cat-keeping
norms on file, from various other studies, which indicates that
cat-keepers whose cats do not roam have, on average, from two to
three times more cats than those whose cats can roam.
In other words, more than two-thirds and perhaps 75% of all
pet cats do not roam. The roaming pet cat population would therefore
be no larger than 26 million.
There is a fairly obvious reason for the greater abundance of
non-roaming cats, in that cats kept from running at large tend to
live much longer, avoiding cars, wild predators, and capture by
animal control officers.
Ferals kill fewer
Accordingly, even Fiore’s lowest estimate of pet cat
predation on birds may be twice too high. If Fiore was correct that
free-roaming pet cats kill an average of 4.2 birds per year, the
toll by pet cats would be 109 million.
The feral cat toll on birds is unlikely to be more than half
as high as the pet cat toll.
First, there may be twice as many free-roaming pet cats as
ferals old enough to hunt for a living.
Second, ferals who hunt for a living tend to hunt mice by
night, not birds, who are mostly not out at night.
Third, feral cats appear to hunt no more than they can eat,
and perhaps less, since killing more prey than can be eaten is a
pointless waste of energy. Conservation
of energy is a critical concern of predators, who typically sleep
about twice as much as primarily plant-eating prey species (except
when prey species hibernate).
Only the well-fed cat can afford the energy expenditure
involved in hunting just for fun– especially when the prey is not to
be eaten, like the lizards, shrews, and chipmunks commonly killed
and abandoned by pet cats.
Finally, relatively few cats are even capable of
successfully hunting birds.
Perhaps the best-known study of predation by individually
monitored cats was published by the British-based Mammal Society in
February 1998, based on their Mammal Action Cat Survey. Eight
hundred British cat-keepers recorded their cats’ kills for six
months: 144,000 cat-days of activity.
The most active feline killer was Missy, with 125 kills in 180 days,
including 28 birds. Almost all the rest were mice, voles, and
other small rodents.
The runner-up was Kipper, with 82 kills in 180 days,
including six birds. The two most predatory cats (by far) among the
entire sample base killed only 34 birds between them in 360 cat-days
of hunting. They managed to kill birds at a rate amounting to 16% of
their total prey, and succeeded in killing a bird on only 9.4% of
the days they hunted.
Only about one cat in 10 has the vertical visual acuity to
catch a bird who takes flight–a hypothesis easily tested with a wad
of paper on a string. Most cats will easily catch the paper when it
moves horizontally, like a mouse, but nine of 10 will lose track of
it if it is jerked up into the air like a startled bird.
Cats, in short, are rarely the primary cause of the death
of the birds they catch. Bird-hunting cats obey the same rules of
predation as all other animals who hunt for a living, dispatching
primarily the sick, the injured, the elderly, and the very young,
especially fledglings who try to fly too soon. Cats also finish
birds who become drunk from eating fermented berries, poisoned by
pesticide ingestion (typically with recently sprayed insects), or
who collide with human-created obstacles.
The ecological role of cats in preventing the spread of bird
disease by killing and eating those brought to the ground by
infection has barely been studied, but it may be that feline
predation is overall more beneficial to birds than harmful.
Examining the spleens of 500 birds who were either caught by
cats, flew into windows, or were hit by cars, researchers Anders
Moller and Johannes Erritzoe of the Universite Pierre et Marie Curie
in Paris reported in June 2000 that the spleens in the cat-killed
birds were a third smaller on average, in 16 of 18 species, than in
the birds killed in accidents. In part this was because 70% of the
cat-killed birds were juveniles; only half of the others were. But
a more important factor, Moller and Erritzoe suggested, was that
“Birds succumbing to lots of infections, or inundated with
energy-sapping parasites, have smaller spleens than healthy birds.”
Who killed Cock Robin?
All considered, the Fiore data suggests that contrary to her
own conclusions, pet and feral cats combined probably kill no more
than 163 million birds per year in the U.S.
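The arithmetic behind that figure is compact enough to restate. This sketch simply re-runs the inputs given above (26 million roaming pet cats, Fiore's 4.2 birds per cat per year, and a feral toll of at most half the pet toll); the rounding in the final line lands within a million of the article's total.

```python
# Re-running the article's own inputs.
roaming_pet_cats = 26_000_000        # upper-bound estimate derived above
birds_per_cat_per_year = 4.2         # Fiore's average for free-roaming pets

pet_toll = roaming_pet_cats * birds_per_cat_per_year
feral_toll = pet_toll / 2            # "unlikely to be more than half as high"

print(f"pet cats: {pet_toll / 1e6:.0f} million birds/year")      # ~109 million
print(f"combined: {(pet_toll + feral_toll) / 1e6:.0f} million")  # ~164 million
```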
By comparison, human hunters shoot at least 74.4 million
wild birds per year, including about 35 million mourning doves.
University of Pennsylvania researcher Daniel Klem estimates that
about 100 million birds per year die in collisions with window glass,
exclusive of birds who hit the glass first and then are caught by
cats. Another four million birds per year die in collisions with
cellular telephone transmission towers, also exclusive of birds
scavenged by cats.
The Dr. Splatt and Strah Poll roadkill counts indicate that
about 11 million to 18 million birds whose remains are big enough to
be seen from a car and/or cause a road hazard are roadkilled by cars
each year.
National Wildlife Federation vice chair and Virginia Wildlife
Center director Edward Clark recalls that, “A study done by the U.S.
Fish & Wildlife Service of pesticide mortality shows that even with a
grid search of a field in which dead birds had been planted 24 hours
earlier, the discovery was only about 5%, which means that 95% were
either removed by scavengers or went unnoticed.”
If the same ratio applies to roadkilled birds, the vehicular
toll would be 220 million per year.
Interrupted attacks
Clark, an outspoken critic of TNR, told Heidi Ridgley of
National Wildlife that the Virginia Wildlife Center treats about 600
cat-injured animals per year, of whom under 20% recover.
“We have no way of knowing if cats are to blame for the
orphaned animals we get,” Clark added.
Wrote Heidi Ridgley, citing Clark, “The ‘fortunate’ few
whom people pry out of their cat’s claws and turn loose fare no
better. With 60 different kinds of bacteria in a cat’s saliva, even
a tiny puncture packs a lethal punch.”
Claimed Clark, “People are woefully mistaken if they think
they can turn an injured creature loose and it will survive.”
Clark also stressed in discussion with ANIMAL PEOPLE the fate
of “those who die from the infections associated with the attack that
fails to produce a direct kill. I won’t toss around any assumptions
about the percentage of success cats have in making direct kills,”
Clark said, “but if we apply the generally accepted success rate of
wild predators of one kill in 4 tries, the number of actual cat
victims skyrockets. The true number is certainly much higher than is
currently counted. We receive plenty of birds with missing tail
feathers who have bite or claw marks consistent with a cat attack.”
But Clark missed the obvious: the 600 cat-wounded birds he
sees are among the few who are rescued by humans, typically because
the humans intervene to break off the cat attack. That changes the
predator/prey dynamic. The cat has no opportunity to finish the kill
because of the human intervention.
Otherwise, the injuries he described would impair flight,
and would lead to a cat meal. These are not failures of predation,
but successes, interrupted, comparable to what happens when a hyena
chases a cheetah off a half-dead gazelle and appropriates the meal
for himself.
The true failures of predation rise into the air and get away
unscathed. The Clark hypothesis that large numbers of birds are
dying in the wild of cat-inflicted injuries and infections is simply
not supported by evidence–whereas, roadkilled birds and the remains
of birds who collide with windows, transmission towers, and power
lines, as well as those who succumb to pesticides, have all been
collected and studied by researchers in bucketloads.
The nonhuman mammal most responsible for declining birds in
the U.S. during the past 20 years is not any predator, but rather
the gentle-mannered Virginia whitetailed deer, whose main food is
“browse,” the brushy hardwood forest understory used as nesting
habitat by most neotropical migratory songbirds.
From the 1950s through the 1980s most states introduced “buck
laws” designed to boost the deer population for the pleasure of human
hunters by exempting does from being hunted. Thus the overwintering
herd came to have a gender ratio sometimes as high as 20 does to one buck.
Because shooting up to 85% of the buck population each fall
made winter browse relatively abundant, more does were able to bear
and raise twin fawns.
By the early 1990s the Virginia whitetailed deer population
was believed to have exceeded pre-Columbian levels, and it has
continued to grow, despite the reintroduction of doe hunting,
increased bag limits, and experiments with contraception.
Comparing the range maps of declining neotropical migratory
songbird species with deer counts confirms the obvious: deer are
eating the birds out of house and home. The only role cats have in
the plight of the birds is that birds unable to find good nesting
habitat sometimes resort to nesting in more vulnerable
locations–where they are exposed to the full range of woodland predators.
Temple & barns
Many of the other common claims about cat predation are
comparably weak. Summarized Ridgley of the findings most often cited
by foes of ferals, “A University of Wisconsin study in the early
1990s found that the estimated 1.4 million to 2 million cats that
range freely in rural areas of the state kill 31.4 million small
mammals and 7.8 million birds a year-at a minimum. ‘We knew the
study would be controversial so we went with the most conservative
estimates,’ says biologist Stanley Temple, coauthor of the study.”
Actually Temple used grossly inflated estimates of cat
numbers. The standard method of estimating the owned cat population,
based on AVMA U.S. Pet Ownership & Demographic Sourcebook data, is
human population divided by 2.65 (people/household), x .568 (ratio
of cats to people).
That would put the owned cat population of Wisconsin in the
early 1990s at just under 1.6 million. If feral cats were 40% of
the total cat population, the maximum plausible estimate of the
total number of cats in all of Wisconsin, not just the rural areas,
would have been 1.9 million.
Between ferals and free-roaming pet cats, there were
probably not more than 750,000 free-roaming cats in Wisconsin,
barely more than half of Temple’s low-end estimate.
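Written out, the standard estimation method described above is a two-step calculation. The human population used below is hypothetical, purely to show the mechanics; the 2.65 people-per-household and .568 cats-to-people figures, and the 40% feral share, are the ones this article cites.

```python
# The AVMA-sourcebook-style estimate as described in the text.
PEOPLE_PER_HOUSEHOLD = 2.65
CATS_TO_PEOPLE = 0.568               # ratio used in the article

def owned_cats(human_population: int) -> float:
    """Owned cat estimate: population / 2.65 * 0.568."""
    return human_population / PEOPLE_PER_HOUSEHOLD * CATS_TO_PEOPLE

def total_cats(human_population: int, feral_share: float = 0.40) -> float:
    """Owned plus feral cats, if ferals are `feral_share` of all cats."""
    return owned_cats(human_population) / (1 - feral_share)

population = 10_000_000              # hypothetical state, not Wisconsin
print(f"owned: {owned_cats(population):,.0f}")   # ~2.1 million
print(f"total: {total_cats(population):,.0f}")   # ~3.6 million
```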
“In parts of rural Wisconsin,” Temple told Ridgley,
“roaming cat densities can reach 114 cats per square mile.”
Yet if every barn in Wisconsin housed feral cats at the
average density of the barn colonies whose populations ANIMAL PEOPLE
surveyed in 1992, when barn colonies appeared to be at their peak
size, the 68,000 barns in Wisconsin would have housed 816,000 cats,
which would work out to 15 cats per square mile.
“The billboard effect”
There is support, however, for the view of San Francisco
quail advocate Alan Hopkins that TNR encourages cat abandonment–a
view shared by DELTA Rescue sanctuary founder Leo Grillo, who
believes that any visible presence of feral cats or feeding stations
creates a “billboard effect” which encourages people to drop cats
off to “give them a chance,” rather than take them to a shelter
where they may be killed.
Overall, pet abandonment was at an all-time high circa 1970,
when U.S. shelters were killing 115 dogs and cats per 1,000 human
residents, about half of them picked up at large. Cats were about
40% of the toll.
By 2002, shelter killing of dogs and cats was down to 15.7
per 1,000 human residents. Cats now account for about two-thirds of
the toll, but the total number of cats killed has fallen from circa
10 million per year to three million per year.
Clearly, the advent of TNR and no-kill sheltering have
reduced abandonment–but not at all sites. Complaints about TNR
programs typically begin when the numbers of cats fail to visibly
drop after several years, and perhaps even increase. Challenged,
the TNR program administrators usually blame abandonment, but resist
the suggestion that the site may be too conspicuous for TNR to succeed.
A second valid claim of TNR critics is that the practice of
feeding feral cats changes their hunting behavior from that of wild
predators to that of pets. Birders are often correct in asserting
that the cat toll on wildlife increases after a TNR program starts in
a park or conservation area, partly because feeding the cats means
they need no longer conserve energy, and partly because taking cats
out of the breeding cycle reduces wandering that puts them at risk
from other predators and vehicular traffic.
This means each cat can not only hunt more, but can also
hunt longer–and this is among the biggest reasons why ANIMAL PEOPLE has
recommended since 1992 that TNR should not be practiced in sensitive
wildlife habitat.
The Prime Directive
ANIMAL PEOPLE publisher Kim Bartlett was instrumental in
introducing TNR to the U.S., beginning in 1991 with a seven-month
trial of the method in northern Fairfield County, Connecticut.
Several cats who were removed from inappropriate habitat are still
part of the ANIMAL PEOPLE household.
From the beginning, the goal was to reduce the feral cat population
at the target sites to zero as rapidly as possible.
There are two preconditions for zeroing out a feral cat
colony through TNR, and both were stringently observed:
1) At least 70% of the cats and preferably 100% must be
sterilized. Before the 70% figure is reached, there will be no net
reduction. ANIMAL PEOPLE made every effort to trap and sterilize
100% of the cats at each site as rapidly as they could be identified.
2) The colonies must be monitored to ensure that all
newcomers are identified, caught, and sterilized.
In addition, Bartlett stipulated as fundamental humane
considerations that “All cats and kittens who can be socialized for
adoption should be; no ill, elderly, or disabled cats should ever
be released; all cats should be properly vaccinated”; and, as the
Prime Directive for practicing TNR successfully without rousing
politically problematic opposition, “no cat should be released into
hostile habitat,” such as places of high vehicular traffic, places
where the cats will be obvious to the public and will therefore
attract abandonments, places where the TNR practitioner does not
have permission of the property owner to work, and places where the
neighbors may shoot, poison, or otherwise harm the cats.
“The impact of feral cats on wildlife cannot be ignored,”
Bartlett added in her post-project review, “and should be a major
concern. Feral cats may fit as predators, especially in the urban
environment, taking the place of those long gone, but the balance
is delicate. I’m not at all sure how to compare a cat to a fox, but
I suspect the cat will kill many more animals than the fox, mostly
for sport. I’m certain that the predator/prey ratio is askew in
virtually all feral cat colonies. A feral who lives alone would be a
more natural fit.”
Between the Connecticut experiment, which handled 338 cats
in all, and the findings from our 1992 survey of rescuers, ANIMAL
PEOPLE projected that TNR might be suitable in only 12% of the
locations where feral cats are found– but, largely because the 12%
were hospitable to feral cats, they included nearly half the feral
cat population.
The Florida conflict, and many like it, seem to have
resulted from disregard of the Prime Directive. The outcome of
trying to “save” cats in unsuitable locations may be that not only
those cats but many more will be caught and killed.
|
SIEF is supporting Forests for the future: making the most of a high CO2 world.
The challenge
A forest of tall trees.
Identifying tree species with strong, positive growth responses to elevated CO2
For the past ten thousand years, atmospheric CO2 has been relatively stable, but over the past 150 years CO2 has risen 40% from this long-term value, and is projected to be at least double this historical value by the end of this century. While the rise in atmospheric CO2 presents a global challenge, it also offers opportunities to increase forest production and bio-sequestration. One consequence of this rapid rise in CO2 is that photosynthesis has increased, generating increased carbon gain and plant production on a global scale.
The response
To develop a novel strategy that rapidly identifies tree species that exhibit a strong, positive growth response to elevated CO2, and the genetic attributes underlying these responses. This will be the first comprehensive attempt to link genomic and phenomic approaches to large-scale assessment of plant responses to elevated CO2.
The collaboration
The Forests for the Future Project is a strong collaboration between The Australian National University, University of Western Sydney and CSIRO which will place Australia in the forefront of climate-change related biological science spanning from laboratory to greenhouse to plantation.
Projected impact
The outcomes of this project will have impact in a number of areas:
• Widespread application of the Project’s products by end-users will greatly improve the capacity to identify genotypes that are responsive to elevated CO2 in all plants, including trees and crops. This will lead to better choices to achieve greater economic output from the forest industry.
• Development of less expensive and less labour-intensive procedures will add to the economic benefits of this technology and increase its commercial application.
• Environmental impact includes an increase in plantation forests that grow well despite the effects of rising CO2 levels, including higher temperatures, physiological responses and changes in water and carbon use, thus aiding sequestration of CO2 and the greening of Australia.
Download: Forests for the Future brochure [pdf · 112kb]
|
6 Benefits Of Learning Korean
Nowadays, many people, young and old, are getting hooked to the Korean wave, also known as Hallyu or K-Pop, whether it’s movies, series, or music. Due to its popularity and high demand, many are interested in learning the Korean language.
Unfortunately, Korean is one of the most challenging foreign languages to learn and master. If you want to succeed in learning Korean, you have to be highly motivated to make progress and improve your grammar and vocabulary in no time.
Thankfully, watching Korean variety shows and drama series may help improve your conversational Korean, according to hanakorean.com.sg. It is essential if you want to travel and talk to locals like a true Korean. You may also consider listening to entertaining music and trying to translate it on your own to see any improvements.
Benefits Of Learning Korean
But why would people go through such lengths to learn a challenging language when others are pretty simple, such as Spanish, Japanese, and German? In this article, you’ll discover the excellent benefits of studying Korean you’d possibly want to consider, such as:
1. You Can Talk To 80 Million People Or More
The Korean language is considered the 22nd most spoken mother tongue, with an estimated 80 million speakers worldwide. This number includes both halves of the Korean peninsula, North and South, as well as Koreans in foreign countries such as the Philippines and the United States.
On a side note, the numbers may not be as prominent as you would expect, considering the other 21 native tongues across the world. However, the language has become one of the most interesting languages to master because of the growing influence of Korean culture across the globe.
2. South Korea Has Extensive Global Connections
The popularity and economic success of South Korea are known worldwide, giving the country a name of its own on the international stage and allowing it to attract investment from other powerful nations. That’s why Korean has become a prevalent language in the global arena despite not being used extensively.
According to experts, the Korean economy continues to grow and develop at a steady pace. In 2021, South Korea was one of the largest goods exporters, boosting its GDP and taking a seat among the top economies in the world.
Because of its growing economic power and strong relationships with other countries, South Korea has made a name for itself. That said, learning Korean may come in handy someday, especially when you want to level up your business and establish international relations.
3. Learning Korean Opens Many Opportunities
Career Opportunities for Learning Korean Language
In previous decades, Korea was one of the poorest and least developed countries because of continuous war. Today, it has made a name for itself and has become one of the richest countries in the world, with a GDP of around USD$1.65 trillion.
This growing economy means a lot of opportunities and endless possibilities, giving many people a stable and decent job to feed their families.
In addition, a lot of career opportunities exist in a wide range of industries, including textile, car manufacturing, technology, shipbuilding, tourism, hospitality, research and development, and electronics.
That’s why learning Korean can be very important if you plan to work in one of these industries to develop your skills and knowledge. If you know how to speak Korean and put it on your resume, many companies might be attracted to work with you.
4. Learn Chinese And Japanese Easily
Historically speaking, Chinese, Korean, and Japanese come from an interrelated family of languages. That’s why it’s easier for Korean speakers to learn Chinese and Japanese than it is for speakers of most other languages.
Once you have achieved a high level of Korean fluency, around level four or five of the TOPIK test, you have an extra advantage in learning the other two languages. But why is this so?
It’s because Korean, Japanese, and Chinese share similar grammar functions, vocabulary, sentence structure, and use of punctuation and markers. Also, these three languages share the same ancient writing form, called Hanja (Chinese characters), though Hanja is no longer widely used in Korean because of Hangul, the Korean alphabet.
5. Work With Multinational Companies
South Korea is the birthplace of many brands you surely love and appreciate, such as Hyundai, Samsung, and LG. So, if you’re planning to get a job in these multinational companies, mastering Korean may give you a good advantage.
By studying Korean, you could work at other Korean corporations with offices and stores worldwide, including Lotte, Hyundai Motors, Kia, and Hanwha. However, learning Korean alone will not secure you a spot in these companies. You’d still need to work on your achievements, portfolio, and qualifications to land and secure a great position.
These international companies prefer someone who has excelled in their field and is willing to learn more. But if you’re the only candidate who knows Korean, all other factors being equal, that might tip the job in your favor.
Knowing Korean will also allow you to do business with local Korean business organizations, especially those that can’t converse properly in English. That is an advantage you have over those who don’t speak the language.
6. Study In Korea
South Korea may not be an English-speaking country, but its universities are well-known internationally across many fields of study. These include a wide range of science degrees, business degrees, and more. That said, you might want to consider studying in Korea if you intend to pursue a degree abroad.
However, you must learn how to speak Korean before studying abroad. It will help you talk to students, professors, and locals, as it allows you to understand everyone around you.
Furthermore, if you want to test your proficiency in Korean, you may take the TOPIK test. This test can also be an additional requirement since Korea is not an English-speaking country.
Final Words
Korean is among the most beautiful languages you can ever hear, with its unique tone and fascinating words. And with the dawn of Korean pop culture, people are intrigued to study this language.
Learning Korean is one of the best things you can achieve in life. It opens a different realm of opportunities and possibilities, especially when you want to work or study in Korea. Also, it’s a good language to learn if you’re planning to study Chinese and Japanese due to their historical interrelatedness.
However, mastering Korean will not be as easy as you think. You need a good amount of motivation and patience to succeed. But with perseverance, you’ll eventually learn the language.
|
Belt system in Taekwondo
A Taekwondo student’s progress is marked by his attainment of coloured belts. The beginner starts at white belt with the aim of achieving black belt. For senior students and instructors, a new journey begins at black belt. You become a 1st Dan when you pass your black belt, and there are eight more exams to take if you wish to achieve the ultimate Grand Master status at 9th Dan.
In order to progress from one belt to the next, the student must undergo grading exams. An exam involves a demonstration of his learning to date, and a strict curriculum demands that he show his experience in Poomsae (forms), breaking, kicking, and hand techniques. Breaking boards successfully demonstrates the power and speed of his techniques, while Poomsae proves his memorisation of individual techniques in a set sequence.
Each grade level is known as a geup, or kup, and there are ten exams to pass to get to black belt. Each belt has a different colour with its own symbolic meaning.
Belt Order and Names:
Beginners wear a white belt, called 10th Kup, then progress through a series of coloured belts:
10th Kup
9th Kup
8th Kup
7th Kup
6th Kup
5th Kup
4th Kup
3rd Kup
2nd Kup
1st Kup
1st Dan
|
Attention Difficulties
What is attention?
Attention is the cognitive process of focusing on one part of the environment while ignoring other things taking place at the same time.
Attention includes listening to the teacher in the classroom while ignoring background noises or conversations taking place, or concentrating on the teacher’s instruction instead of on other activities happening at the same time.
Children who have attention difficulties display the following behaviours:
• make careless mistakes in written work
• have a short attention span, finding it difficult to sustain attention during a task or when playing
• appear inattentive when spoken to directly
• carry out only part of the instructions given
• hand in incomplete homework
• have difficulty organising complex tasks
• forget things easily
• lose or misplace important items
• avoid or dislike activities that need long periods of concentration
Fix an appointment for your child to undergo our Dynamic Diagnostic Assessment (DDA™) to identify your child’s learning and developmental strengths and weaknesses.
Bridge Learning offers specialised early intervention programmes to address attention difficulties.
|
Mastering the Art of Photography
Lesson 23 of 39
Creating A Visual Sense Of Mood
Chris Weston
Lesson Info
23. Creating A Visual Sense Of Mood
Great photographs reveal more than the physical nature of things; they elicit an emotional response, too. In this lesson, Chris heads out in the middle of winter to show you how to use light and color to add mood to a photograph.
Creating A Visual Sense Of Mood
Warm and cold, soft and hard are adjectives used to describe the different types of light. They are also adjectives used to describe different types of people and places. So, in terms of visual storytelling, you can use the color and quality of light to set the mood of a photograph.
Compare these two images. For the purpose of this example, I've taken color away, so just look at the subject: swans flying against a woodland background and rising mist. Both were shot at roughly the same time, of the same subject, at the same location. Now let's add some color to one of them. How does color affect your emotional response to the image? Forget whether you like it or not, that's not important. How is color making you feel? Okay, let's do the same with the second image. How do you feel now?
Color is an incredibly powerful tool for conveying emotion. Warm colors such as yellow and orange evoke feelings of happiness, optimism and energy. Think about how you respond when the sun comes out. Red does the same, but also adds an edge of danger. Cool colors, blues and purples, tend to be calming but can also induce sadness, which is where the saying "feeling the blues" comes from. Blue also makes you feel physically cold. Color intensity also affects the mood of an image: bright, vivid colors are uplifting and energizing, while softer pastel colors are soothing.
Now, to put color theory into practice, I'm back at the waterfall I visited earlier to show you an example of using color to create mood. Sitting at home watching this, you have no idea how cold I am. It's the middle of winter, I can barely feel my toes, and just looking at the water is making me shiver. But how do I convey that information, that emotion, in a photograph? Well, here's a photo I took a couple of hours ago, when we first turned up and I could still feel my feet. What can I say about it? It's a record shot of a waterfall. It tells you what it looks like and not much else. Technically, there's nothing wrong with it: it's well exposed and in focus. But where's the emotion? There isn't any. As I said, it's a half-decent snapshot.
So I'm going to add some color. There's a control in your camera called the white balance control, the technical aspects of which I talk a lot about in the very first TCP course. You can think of white balance as a set of colored filters. Think of the low white balance settings, such as 2000 or 3000 kelvin, or the tungsten/incandescent and fluorescent presets, as blue filters: the lower the value, the deeper the blue. At the other end of the scale, the high numbers, or the cloudy and shade presets, are red filters: the higher the number, the deeper the red.
Now, for this shot I want to evoke the sense of cold I'm feeling. Blue is a cold color, so I'm going to switch my white balance setting to incandescent, which is around 3000 kelvin. And here's my new image, after two hours of standing in the freezing cold. Compositionally, I haven't changed much, but by shifting the color balance from a neutral tone to blue, I've evoked in you at home the same feelings I'm experiencing standing here. You are no longer a passive observer; you're here with me in the moment. And that, for me, is the very essence of photography: capturing the moment, not the external event but the internal dialogue that connects you, the photographer, with the subject.
I'm reminded of a quote from one of my favorite photographers, Freeman Patterson, who said, "The camera always points both ways. In expressing your subject, you also express yourself." Photography is self-expression, and therefore requires a level of self-awareness before you press the shutter. That, in part, is what mindfulness is: self-awareness. So practice mindfulness when you're out taking pictures, and instantly your photography will improve. Rather than photographing simply what a subject or scene looks like, you'll be adding emotion to the composition, and that will raise the visual power of your images tenfold, overnight.
Class Description
• See images with a creative eye.
• Capture artistic photographs of the most popular subjects.
• Choose the right lens and camera settings for the image you want to create.
• Recognize and capture the “decisive moment”.
• Add visual mood and emotion to your photographs.
• Develop your own unique photographic style.
• Find what inspires you and apply that inspiration to your image-making.
• Fine-tune color, tone, and visual presence with easy-to-learn Adobe Lightroom adjustments.
Once you’ve mastered basic camera craft and photo-technique, what is the next step in advancing your photographic skillset? In this in-depth course, award-winner Chris Weston shares an approach to photography that has creativity at its heart, and reveals the secrets and professional techniques that will get you creating photographs that ‘sing’.
Taking you on a step-by-step journey, from vision to print, Chris shows you how to: tap into your natural creative instincts; ‘see’ much-photographed and everyday subjects with a unique vision; set a creative intention and get the camera to capture it authentically; and, with a few simple techniques, process superb print-ready photographs. Through ‘in-the-field’ examples and inspirational case studies, he reveals the nuances of composition that can make or break a photograph, and describes the creative tools that turn snapshots into stunning photographs good enough to adorn any wall.
Delivered in an easy-to-follow, down-to-earth style, using ‘real-life’ examples and ‘live’ tuition, this course builds on the practicalities of camera technique to equip you with the creativity and vision to see, capture and process compelling photographs time after time, whatever your camera or level of experience.
• Beginners who want to create better photographs.
• Intermediate photographers who want to refine their image-making and be more creative.
• All photographers looking for inspiration and creativity.
• Outdoor photographers interested in travel, landscape/cityscape, nature, sport, and wildlife photography.
Ratings and Reviews
I loved this course - in particular the latter part of it in which he demonstrated how post processing lets you really tell the story of the image. Another fabulous course. Thanks Chris & thanks Creative Live.
Abdullah Alahmari
Thanks a lot to Mr. Chris Weston. This course is great and it is a 🌟 🌟 🌟 🌟 🌟 course for me. Together with the other course (Mastering Photographic Composition and Visual Storytelling), the two complement each other, and both are highly recommended.
Charles Ewing
Fantastic course. Great photographer, teacher and storyteller!
|
Is Unit Testing White Box Testing?
How is Unit Testing different from White Box Testing?
Developers conduct white box testing (also called transparent box testing or glass box testing) during development. They use it to check the outputs of the tested items and the internal variables that lead to those outputs. The focus can be placed on the specific code that was changed or, at a higher level, on an entire module. In this way, they verify both how a piece of code works and how it integrates with the overall internal structure.
Table of Contents:
1. What is unit testing?
2. What is white box testing?
3. Similarities
4. Differences
5. Pros and Cons of transparent box testing
6. Black box and White Box testing
7. Grey Box testing
8. Types of white box testing
9. What does white box testing focus on?
10. Box testing technique and code coverage
11. Imperva Runtime Application Self Protection
12. Does it matter what testing you use?
What is Unit Testing?
Generally, unit testing is handled by the software developer. It is a straightforward method of testing the smallest possible individual components or units (individual sections of coding logic and algorithms) for successes and failures. This testing method is an excellent way of uncovering issues within a specific feature that is being added or changed, and of catching bugs before the code is pushed to QA for complete testing.
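To make that concrete, here is a minimal, hypothetical sketch of a unit test written with Python's built-in unittest framework (the function and values are invented for illustration):

import unittest

def apply_discount(price: float, percent: float) -> float:
    # Return the price after applying a percentage discount.
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class TestApplyDiscount(unittest.TestCase):
    def test_typical_discount(self):
        # A 20% discount on 50.00 should yield 40.00
        self.assertEqual(apply_discount(50.00, 20), 40.00)

    def test_zero_discount_returns_original_price(self):
        self.assertEqual(apply_discount(19.99, 0), 19.99)

    def test_invalid_percent_raises(self):
        # Catching bad input here, before the code ever reaches QA
        with self.assertRaises(ValueError):
            apply_discount(10.00, 150)

if __name__ == "__main__":
    unittest.main()

Each test method exercises one small unit of logic in isolation, which is why a failure points directly at the feature that was just added or changed.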
What is White Box Testing?
This is an approach that enables testers to inspect and verify the inner workings of a software system; this includes its code, internal structure, and integrations with external systems. This type of box testing is crucial for automated build processes in a modern Continuous Integration/Continuous Delivery (CI/CD) development pipeline. White box testing is also often referenced in Static Application Security Testing (SAST). This approach automatically checks source code or binaries and provides feedback on bugs and possible vulnerabilities.
Below are the similarities between unit testing and white box testing:
1. Both are commonly performed by developers while writing code, rather than by external parties.
2. Both focus at the code level, as opposed to QA engineers, who focus on application functionality (function coverage) accessed through the UI.
3. Both require coding skills and an understanding of algorithms.
4. Both require knowledge of the specific source code being tested.
On the one hand, white box tests are considerably more expert-driven, time-consuming, and complex, even if you've automated some parts of the testing process. Unit testing, by contrast, is quick and straightforward to implement because it looks at small sections of code rather than the bigger picture. Unit tests can be performed manually or, more commonly, automated with a testing tool such as Selenium. White box testing can be applied at the system, integration, or unit level.
Pros and Cons of Transparent Box Testing
Pros:
• Ability to achieve complete code coverage
• Reduces communication overhead between testers and developers
• Easy to automate
• Allows continuous improvement of code and development practices

Cons:
• Sensitive to changes in the code base; automated tests require expensive maintenance
• Cannot test from the user's perspective
• Cannot test expected functionality that does not exist in the codebase
• Requires a significant effort to automate
Black Box and White Box Testing
White box testing (also called transparent or clear box testing) is often contrasted with black box testing, which involves testing a software application from the user's perspective, without any prior knowledge of its implementation:
• White box testing can uncover structural problems, hidden errors, and problems with specific components.
• Black box testing checks that the system works as expected; this is acceptance testing.
Grey Box Testing
As highlighted above, white box testing involves complete knowledge of the inner workings of a system under test, while black box testing involves none. Grey box testing is a compromise: testing a system with a partial understanding of its internals. It is commonly used in integration testing, penetration testing, regression testing, and end-to-end system testing.
This third type of box testing combines input from developers and testers, resulting in more effective testing strategies. It also reduces the overhead required to perform functional testing of many user paths, focusing testers on the paths most likely to affect users or result in a defect.
Grey box testing is unique as it combines the benefits of black box and transparent or clear box testing in the following ways:
• It ensures that tests are performed from the user’s perspective, like black-box testing.
• It leverages inside knowledge to focus on the most likely problems, identifying and resolving the system's internal weaknesses, as in transparent or clear box testing.
In Application Security Testing, this box testing approach is called Interactive Application Security Testing (IAST). The IAST combines:
1. SAST – This is used to perform testing (white/transparent/clear box testing) by evaluating static application code.
2. Dynamic Application Security Testing (DAST) performs black-box testing by interacting with running applications and discovering faults and vulnerabilities as a user or external attacker would.
The Types of White Box Testing
This type of box testing can take multiple forms, including:
1. Unit Testing — These tests are written as part of the application code, which tests that each component is working as expected.
2. Integration Testing — These tests are specifically made to check integration points that are between internal components in a software system or are present in integrations with external systems.
3. Static Code Analysis — This automatically identifies vulnerabilities or coding errors in static code, using predefined patterns or machine learning analysis.
4. White Box Penetration Testing — An ethical hacker acts as a knowledgeable insider, attempting to attack an application based on intimate knowledge of its code and environment.
5. Mutation Testing — A type of unit testing that checks the robustness and consistency of the code by defining tests, making small random changes to the code, and seeing whether the tests still pass (sketched below).
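Mutation testing is easiest to picture with a toy example. The sketch below is hypothetical and done by hand; in practice a dedicated tool (for example, mutmut or MutPy in the Python world) generates the mutants automatically:

# Original code under test
def is_adult(age: int) -> bool:
    return age >= 18

# A "mutant": one small random change has been made (">=" became ">")
def is_adult_mutant(age: int) -> bool:
    return age > 18

def boundary_test(fn) -> bool:
    # The unit test: an 18-year-old counts as an adult
    return fn(18) is True

print(boundary_test(is_adult))         # True  -- the original passes
print(boundary_test(is_adult_mutant))  # False -- the mutant is "killed"

If the test suite had still passed against the mutant, that would have revealed a missing boundary check in the tests.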
What Does White Box Testing Focus On?
White box tests can focus on discovering any of the following problems in an application's code:
1. Security gaps and weaknesses — The test checks whether security best practices were applied when coding the application and whether the code is vulnerable to known security threats and exploits.
2. Broken or poorly structured paths — The test identifies conditional logic that is redundant, inefficient, or broken.
3. Expected output — The test executes all possible inputs to a function to see whether it always returns the expected result.
4. Loop testing — The test checks single, concatenated, and nested loops for efficiency, correct handling of local and global variables, and conditional logic.
5. Data Flow Testing (DFT) — The test tracks variables and their values as they pass through the code, to find variables that are not correctly initialized, are incorrectly manipulated, or are declared but never used.
Box Testing Technique and Code Coverage
One of the main goals of white box testing is to ensure the source code is covered as comprehensively as possible. Code coverage is the metric that shows how much of an application's code has unit tests checking its functionality (function coverage).
Using concepts such as statement coverage, branch coverage, and path coverage, you can verify how much of an application's logic is actually executed and tested by the unit test suite. These concepts are elaborated on below.
Statement Coverage
This white box testing technique ensures that all executable statements in the code are run and tested at least once. For instance, if a block of code contains several conditions, each handling a specific range of inputs, the tests should execute every range of inputs to ensure all lines of code are executed. Statement coverage helps uncover unused statements and branches, missing statements referenced by part of the code, and dead code left over from previous versions.
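As a hypothetical sketch of the idea, each branch in the function below handles a different input range, so the suite needs at least one input from every range before all statements have executed. A tool such as coverage.py can then report which lines were actually run:

def shipping_fee(order_total: float) -> float:
    # Each condition handles a specific range of inputs
    if order_total >= 100:
        return 0.0       # runs only for totals of 100 or more
    if order_total >= 50:
        return 4.99      # runs only for totals from 50 up to 100
    return 9.99          # runs only for totals under 50

# One value from each input range executes every statement at least once:
assert shipping_fee(150) == 0.0
assert shipping_fee(75) == 4.99
assert shipping_fee(20) == 9.99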
Branch Coverage
Branch coverage maps the code into branches of conditional logic and ensures that unit tests cover every branch. Consider a block with several nested conditional statements:

if X then A
if Y then C
if Z then D
B

A, C, and D are conditional branches, because they execute only when their conditions are satisfied. B is an unconditional branch, because it always executes once the conditional logic completes. In a branch coverage approach, the tester identifies all conditional and unconditional branches and writes tests to execute as many branches as possible.
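A small hypothetical sketch shows what branch coverage adds over statement coverage: the first test below executes every statement in the function, yet never exercises the implicit "total is zero" branch:

def normalize(values):
    total = sum(values)
    if total != 0:
        values = [v / total for v in values]
    return values

# Executes every statement (100% statement coverage), but only the
# "true" branch of the condition -- branch coverage is 50%:
assert normalize([1, 1, 2]) == [0.25, 0.25, 0.5]

# A second test covers the remaining "false" branch:
assert normalize([0, 0]) == [0, 0]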
Path Coverage
Path testing is concerned with the linearly independent paths through the code. Testers draw a control flow diagram of the code, such as the example provided below.
Control flow diagram
A control flow diagram is used to design the tests in a path coverage approach in structural testing.
In the example below, there are several possible paths through the code:
1, 2
1, 3, 4, 5, 6, 8
1, 3, 4, 7, 6, 8
Using a path testing approach, the tester writes unit tests to execute several possible paths through the program’s control flow. The objective is to identify broken, redundant, or inefficient paths.
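As a hypothetical sketch (the function is invented and its branches do not correspond to the node numbers in the diagram above), three unit tests are enough to cover the three linearly independent paths through this control flow:

def classify(age: int, income: float) -> str:
    if age < 18:
        return "minor"          # the short path that exits early
    if income >= 50000:
        band = "standard"       # one branch of the second decision
    else:
        band = "subsidized"     # the alternative branch
    return "adult-" + band

# One unit test per linearly independent path:
assert classify(15, 0) == "minor"
assert classify(30, 60000) == "adult-standard"
assert classify(30, 20000) == "adult-subsidized"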
Imperva Runtime Application Self Protection
Runtime Application Self Protection
Runtime Application Self Protection (RASP) was designed to complement white box and black box testing by adding an extra layer of protection once the application is already in production or in a realistic staging environment.
RASP has the benefits listed below:
1. It helps you when testing applications in-depth during a fast, agile development life cycle.
2. It can test for unanticipated input flows, and inspect and control the system's response.
3. It provides analysis and detailed information on the weaknesses and vulnerabilities in your code, helping you respond quickly to attacks.
In summary, Imperva RASP keeps your applications secure and gives you crucial feedback for eliminating any additional risks. It requires no changes to code, and it is easy to integrate with your existing applications and DevOps processes, protecting you from both known and zero-day attacks.
Additionally, Imperva provides multi-layered protection to make sure websites and applications are available, easily accessible, and safe. The Imperva application security solution includes:
• DDoS Protection maintains uptime in all situations. It also prevents any DDoS attack, of any size, from preventing access to your website or network infrastructure.
• CDN enhances website performance and reduces bandwidth costs with a CDN made according to the specification of the software developers. Cache static resources are at the edge while accelerating APIs and dynamic websites.
• Cloud WAF permits legitimate traffic and prevents horrible traffic. Safeguard your applications at the edge by using an enterprise‑class cloud WAF.
• Gateway WAF keeps applications and APIs that are inside your network safe with Imperva Gateway WAF.
• Attack analytics mitigates and responds to actual security threats efficiently and accurately with actionable intelligence across all your layers of defense.
• Account Takeover Protection uses an intent-based detection process to identify and defend against attempts to take over users' accounts for malicious purposes.
• API Security protects APIs by ensuring only desired traffic can access your API endpoint, as well as detecting and blocking exploits of weaknesses.
• Bot Management analyzes your bot traffic to pinpoint anomalies, identifies bad bot behavior, and validates it via challenge mechanisms that do not impact user traffic.
Does it matter which testing you use?
By using the terms interchangeably, you may be missing out on the benefits that set these testing methods apart from one another. White (transparent) box testing tells you more about the flow and interactions of the modules, while unit testing gives you granular information about each element. They often operate in conjunction, but there is a subtle yet significant difference in what you will uncover with each approach.
|
Hideyo Noguchi
Year: 1876-1928
Dr. Hideyo Noguchi was a prominent Japanese bacteriologist who in 1911 identified the microorganism Treponema pallidum as the causative agent of progressive paralytic disease, a late stage of syphilis. Born in Inawashiro, Fukushima Prefecture in 1876, he fell into a fireplace and suffered a burn injury to his left hand when he was 18 months old. Following surgery, he regained functionality in his hand, and he decided to become a doctor to help those in need. He apprenticed under Dr. Kanae Watanabe, the same doctor who had performed his childhood surgery. After moving to the U.S. and working as an assistant at the University of Pennsylvania School of Medicine, he became a researcher at New York's Rockefeller Institute of Medical Research. He was engaged primarily in bacteriology research and is known for his work on yellow fever and syphilis. He published numerous treatises and was named as a candidate for the Nobel Prize in Physiology or Medicine three times. In 1928, while researching yellow fever on the Gold Coast of Africa, Noguchi became ill and died at the age of 51 in Accra (in what is now the Republic of Ghana).
Source: © Hideyo Noguchi Memorial Museum
|
How to find server name from ip address cmd?
1. Click the Windows Start button, then “All Programs” and “Accessories.” Right-click on “Command Prompt” and choose “Run as Administrator.”
2. Type “nslookup %ipaddress%” in the black box that appears on the screen, substituting %ipaddress% with the IP address for which you want to find the hostname.
Correspondingly, how do I find my server name in CMD? From the Start menu, select All Programs or Programs, then Accessories, and then Command Prompt. In the window that opens, at the prompt, enter hostname. The result on the next line of the command prompt window will display the hostname of the machine without the domain.
As many of you asked, how do you find the server name? Open the DOS interface of your computer by typing the letters "cmd" into the "Open" field of the Run menu. After you press Enter, a new window should open which includes the DOS command prompt. In this window, type "Hostname" and press the Enter key. Your computer's server name should appear.
People also ask, how do you find the server name from an IP address?
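If you would rather script the lookup than type nslookup by hand, a minimal sketch using only Python's standard library performs the same reverse DNS query (the IP address shown is just a placeholder; substitute your own):

import socket

# The local machine's name (equivalent to running "hostname"):
print("This machine:", socket.gethostname())

# Reverse lookup (equivalent to "nslookup <ip>"):
ip_address = "8.8.8.8"  # placeholder
try:
    hostname, _aliases, _addresses = socket.gethostbyaddr(ip_address)
    print("Server name for", ip_address, "is", hostname)
except socket.herror:
    print("No reverse DNS (PTR) record found for", ip_address)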
Besides the above, how do I find my server name and IP address?
1. Open the command prompt. Click on the Windows Start menu and search “cmd” or “Command Prompt” in the taskbar.
2. Type in ipconfig /all and press Enter. This will display your network configuration.
3. Find your machine’s Host Name and MAC Address.
Server URLs (or Uniform Resource Locators) are the names that we typically think of when we think about a server. These URLs are actually translated into server IP addresses when we navigate to a web page, because each URL is assigned to an IP address.
What is a server name or address?
Name servers are internet addresses that translate domain names into web addresses. In this way, a user is able to visit a website by typing the domain name instead of the website's IP address. The Domain Name System (DNS) relies on name servers as a critical component.
What is my server name for email?
In Outlook, click File. Then navigate to Account Settings > Account Settings. On the Email tab, double-click on the account you want to connect to HubSpot. Below Server Information, you can find your incoming mail server (IMAP) and outgoing mail server (SMTP) names.
How do I find my server alias name?
You can find out all the CNAMEs for a host in a particular zone by transferring the whole zone and picking out the CNAME records in which that host is the canonical name. You can have nslookup filter on CNAME records:

C:\> nslookup
Default Server: …
Address: …
What is the nbtstat command?
The nbtstat command is a diagnostic tool for NetBIOS over TCP/IP. It is designed primarily to help troubleshoot NetBIOS name resolution problems. The command is included in several versions of Microsoft Windows.
How do I find my server name in Windows 10?
1. Click on the Start button.
2. In the search box, type Computer.
3. Right click on This PC within the search results and select Properties.
4. Under Computer name, domain, and workgroup settings you will find the computer name listed.
How do I see all the servers on my network?
You can get the IP address of the server by running ipconfig /all on a Windows machine, and then get the MAC address by looking up that IP address using arp -a. Note that you can replace DHCP SERVER with SERVER and it will display all servers on the network.
What is a server name example?
The full name of the server on the network, also called the Domain Name System (DNS) name (for example, server1.example.com). The host name of the server is the first part of that full name (server1).
Is server name the same as IP address?
A name server is a server that returns an IP address when given a domain name. This IP address is basically the domain’s location on the Internet.
Is host name the server name?
Hostnames are unique identifiers used in different modes of communication, such as the web or email, to tell one device from another within a domain. Name servers, on the other hand, are fully qualified hostnames; these are the servers where your DNS information is actually stored.
How do I find my server URL?
1. Open the Command Prompt. Press the Windows Key and “R” to open the Run box.
2. Type “Tracert” and the Website’s Address into the Command Prompt.
3. Note the IP Address Next to the Website’s URL.
How do I find my SMTP server for Gmail?
1. Gmail SMTP server address: smtp.gmail.com.
2. Gmail SMTP name: Your full name.
3. Gmail SMTP username: Your full Gmail address (e.g., yourname@gmail.com).
4. Gmail SMTP password: The password that you use to log in to Gmail.
5. Gmail SMTP port (TLS): 587.
6. Gmail SMTP port (SSL): 465.
What is Server alias name?
ServerAlias: Alternate names for a host, used when matching requests to name-based virtual hosts. Most people simply use ServerName to set the "main" address of the website (e.g., www.example.com).
How do I get all DNS records for an IP?
1. Open the DNS lookup tool.
2. Enter the domain and select the DNS record you want to check.
3. If you want to perform the lookup of all the DNS records configured for the domain, select “ALL” from the dropdown list.
How do I use nslookup in Windows?
1. Click Start > Run (or press the Windows key + R on your keyboard)
2. In the run box enter “cmd” > OK.
3. In the command prompt enter “nslookup” without quotes > press ENTER.
4. Output will show the DNS server being used and the record lookup result.
What are NetBIOS name servers?
NBNS is a server responsible for maintaining a list of mappings between NetBIOS computer names and network addresses for a network that uses NetBIOS as its naming service. A computer registers itself with the NetBIOS name server upon startup by providing the name server with its computer name and network address.
|
Safeguarding user interest: 3 core principles of Design for Trust
Trust in technology is eroding. This is especially true when it comes to emerging technologies such as AI, machine learning, augmented and virtual reality and the Internet of Things. These technologies are powerful and have the potential for great good. But they are not well understood by end-users of tech and, in some cases, not even by creators of tech. Mistrust is especially high when these technologies are used in fields such as healthcare, finance, food safety, and law enforcement, where the consequences of flawed or biased technology are much more serious than getting a bad movie recommendation from Netflix.
What can companies that use emerging technologies to engage and serve customers do to regain lost trust? The simple answer is to safeguard users’ interests. Easier said than done.
An approach I recommend is a concept I call Design for Trust. In simple terms, Design for Trust is a collection of three design principles and associated methodologies. The three principles are Fairness, Explainability, and Accountability.
1. Fairness
There is an old saying from accounting borrowed in the early days of computing: garbage in, garbage out—shorthand for the idea that poor quality input will always produce faulty output. In AI and machine learning (ML) systems, faulty output usually means inaccurate or biased. Both are problematic, but the latter is controversial because biased systems can adversely affect people based on attributes such as race, gender, or ethnicity.
There are numerous examples of bias in AI/ML systems. A particularly egregious one came to light in September of 2021 when it was reported that on Facebook, “Black men saw an automated prompt from the social network that asked if they would like to ‘keep seeing videos about Primates,’ causing the company to investigate and disable the AI-powered feature that pushed the message.”
Facebook called this “an unacceptable error,” and, of course, it was. It occurred because the AI/ML system’s facial recognition feature did a poor job of distinguishing persons of color and minorities. The underlying problem was likely data bias. The datasets used to train the system didn’t include enough images or context from minorities to enable the system to learn properly.
Another type of bias, model bias, has plagued many tech companies, including Google. In the early days of Google, fairness was not an issue. But as the company grew and became the global de facto standard for search, more people began to complain its search results were biased.
Google search results are based on algorithms that decide which search results are presented to searchers. To help them get the results they seek, Google also auto-completes search requests with suggestions and presents “knowledge panels,” which provide snapshots of search results based on what is available on the web, and news results, which typically cannot be changed or removed by moderators. There is nothing inherently biased about these features. But whether they add to or detract from fairness depends on how they are designed, implemented, and governed by Google.
Over the years, Google has initiated a series of actions to improve the fairness of search results and protect users. Today, Google uses blacklists, algorithm tweaks, and an army of humans to shape what people see as part of its search page results. The company created an Algorithm Review Board to keep track of biases and to ensure that search results don’t favor its own offerings or links compared to those of independent third parties. Google also upgraded its privacy options to prevent unknown location tracking of users.
For tech creators seeking to build unbiased systems, the keys are paying attention to datasets, the model, and team diversity. Datasets must be diverse and large enough to provide systems with ample options to learn to recognize and distinguish between races, genders, and ethnicities. Models must be designed to properly weight factors that the system uses to make decisions. Because datasets are chosen and models designed by humans, highly trained and diverse teams are an essential component. Design for Trust is critical and it goes without saying that extensive testing should be performed before systems are deployed.
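There is no single test that certifies fairness, but simple checks are a reasonable starting point. As a minimal sketch (the data and column names are invented for illustration), comparing a model's approval rates across demographic groups, sometimes called a demographic parity check, takes only a few lines of Python:

import pandas as pd

# Hypothetical model outputs: one row per person, with a group
# attribute and the model's yes/no decision
results = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   0,   1,   1,   0,   0,   1,   0],
})

# Approval rate per group; a large gap flags potential data or model bias
rates = results.groupby("group")["approved"].mean()
print(rates)                                     # A: 0.75, B: 0.25
print("Parity gap:", rates.max() - rates.min())  # 0.5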
2. Explainability
Even as tech creators take steps to improve the accuracy and fairness of their AI/ML systems, there remains a lack of transparency about how the systems make the decisions and produce results. AI/ML systems are typically known and understood only by the data scientists, programmers and designers who created them. So, while their inputs and outputs are visible to users, their internal workings such as the logic and objective/reward functions of the algorithms and platforms cannot be examined so others can understand whether they are performing as expected and learning from their results and feedback as they should. Equally opaque is whether the data and analytical models have been designed and are being supervised by people who understand the processes, functions, steps and desired outcomes. Design for Trust can help.
Lack of transparency isn’t always a problem. But when the decisions being made by AI/ML systems have serious consequences — think medical diagnoses, safety-critical systems such as autonomous automobiles, and loan approvals — being able to explain how a system made them is essential. Thus, the need is for explainability in addition to fairness.
Take the example of the long-standing problem of systemic racism in lending. Before technology, the problem was bias in the people making decisions about who gets loans or credit and who doesn’t. But that same bias can be present in AI/ML systems based on the datasets chosen and the models created because those decisions are made by humans. If an individual feels they were unfairly denied a loan, banks and credit card companies should be able to explain the decision. In fact, in a growing number of geographies, they are required to.
This is true in the insurance industry in many parts of Europe, where insurance companies are required to design their claims processing and approval systems to conform to standards of both fairness and explainability in order to improve trust. When an insurance claim is denied, the firm must provide the criteria and a thorough explanation of why.
Today, explainability is often achieved by having the people who developed a system create documentation of its design and an audit trail of the processes it goes through to make decisions. A key challenge in explainability is that systems are increasingly analyzing and processing data at speeds beyond humans' ability to process or comprehend. In these situations, the only way to provide explainability is to have machines monitor and check the work of other machines. This is the driver behind an emerging field called Explainable AI (XAI). XAI is a set of processes and methods that let humans understand the results and outputs of AI/ML systems.
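As a small illustration of the XAI idea (a generic sketch, not any particular vendor's tooling), scikit-learn's permutation importance ranks which inputs most influenced a trained model, a human-readable starting point for explaining its decisions:

from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much accuracy drops;
# the bigger the drop, the more the model relied on that feature
result = permutation_importance(model, X_test, y_test, n_repeats=5, random_state=0)
for i in result.importances_mean.argsort()[::-1][:5]:
    print(X.columns[i], round(result.importances_mean[i], 3))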
3. Accountability
Even with the best attempts to create technology systems that are fair and explainable, things can go awry. When they do, the fact that the inner workings of many systems are typically known only by the data scientists, developers, and programmers who created them makes it difficult to identify what went wrong and trace it back to the choices made by creators, providers, and users. Nevertheless, someone or some entity must be held accountable.
Take the example of Microsoft’s conversational bot, Tay. Released in 2016, Tay was designed to engage people in dialogue while emulating the style and slang of a teenage girl. Within 16 hours of its release, Tay had tweeted more than 95,000 times with a large percentage of them being abusive and offensive to minorities. The problem was Tay was designed to learn more about language from the interactions it had with people—and many of the responses to Tay’s tweets were themselves abusive and offensive to minorities. The underlying problem with Tay was model bias. Poor decisions were made by the people at Microsoft who designed the learning model for Tay. Yet, Tay learned racist language from people on the internet, which caused it to respond the way it did. As it’s impossible to hold “people on the internet” accountable, Microsoft must bear the lion’s share of responsibility… and it did.
Now consider the example of Tesla, its AutoPilot driver-assistance system and its higher-level functionality called Full Self-Driving Capability. Tesla has long been criticized for giving its driver-assistance features a name that might lead people to think it can operate on its own and over-selling the capabilities of both systems. Over the years, the U.S. National Highway Traffic Safety Administration (NHTSA) has opened more than 30 special crash investigations involving Teslas that might have been linked to AutoPilot. In August 2021, in the wake of 11 crashes involving Teslas and first-responder vehicles that resulted in 17 injuries and one death, the NHTSA launched a formal investigation of AutoPilot.
The NHTSA has its work cut out for it because determining who is at fault for an accident involving a Tesla is complicated. Was the cause a flaw in the design of AutoPilot, misuse of AutoPilot by a driver, a malfunction of a Tesla component that had nothing to do with self-driving, or a driver error or violation that could have happened in any vehicle regardless of whether it has an autonomous driving system or not, for example, texting while driving or excessive speed?
Despite the complexity of determining blame in some of these situations, it is always the responsibility of the creators and providers of technology to 1) conform to global and local laws, regulations, and standards, and community standards and norms; and 2) clearly define and communicate the financial, legal, and ethical responsibilities of each party involved in using their systems.
Practices that can help tech providers with these responsibilities include:
• Thorough and continuous testing of data, models, algorithms, usage, learning, and outcomes of a system to ensure the system meets financial, legal, and ethical requirements and standards
• Creating and maintaining a source model and audit trail of how the system is performing in a format that humans can understand and making it available when needed
• Developing contingency plans for pulling back or disabling AI/ML implementations that violate any of these standards
In the end, Design for Trust is not a one-time activity. It is the perpetual management, monitoring, and adjustment of systems for the qualities that erode trust.
Arun ‘Rak’ Ramchandran is a corporate VP at Hexaware.
|
February 24, 2022
• Ashwagandha is an herb that has been used in the traditional system of Indian medicine (Ayurveda). It is also considered an adaptogen, a group of herbs that help regulate stress in the body.
• The Ashwagandha plant is a small evergreen shrub with red berries that grow from papery husks in the autumn season. Also known as "winter cherry", the Ashwagandha plant originates from India and Southeast Asia.
• Ashwagandha's scientific name is Withania somnifera. While the roots of the herb were often used for oral medicine, the properties of the leaves were used as topical treatments.
• It is sometimes called "Indian Ginseng" due to sharing notably similar chemical components. However, it belongs to the nightshade family with potatoes, tomatoes, peppers, and eggplant.
• Ashwagandha is a natural antioxidant with moisturizing and soothing qualities, making it an ideal ingredient in natural haircare, skincare, and makeup.
Ashwagandha is a natural herb extracted from Withania somnifera, a small shrub that grows in the drier climates of India, Pakistan, and Sri Lanka. The name Ashwagandha translates roughly to "the smell of a horse". Ashwagandha is associated with the horse because the herb's miraculous benefits are believed to help harness the strength and endurance of the horse. Having been used in Ayurvedic medicine to promote optimal physical and sexual health, the herb is still used in herbal medicine today. However, Ashwagandha is gaining popularity in the western natural health and wellness industry and is being added to healthy diets as a beneficial supplement. The ashwagandha supplement is available in a variety of forms, including capsules, tablets, tinctures, raw powder, and even in food and beverage products.
Ashwagandha herb is categorized as an adaptogen, a class of herbs that support the body in managing stress and other physical and mental challenges. Ayurvedic practitioners have historically prescribed ashwagandha to treat physical and mental exhaustion. Recent studies suggest that the herb energizes and awakens in a calming way, without overstimulating. As an adaptogen, the herb is believed to aid flexibility in multiple organs throughout the body. This applies to strengthening the heart muscle and improving cardiovascular health. Ashwagandha can also ease tension in uterine and menstrual muscles and can help treat menstrual cramps. Ashwagandha is an ideal herb for holistic health, benefiting mental, sexual, and cognitive health, while also improving athletic performance, immunity, and healthy aging.
Along with being an adaptogen, Ashwagandha is also known to possess at least 40 different withanolides, which are hormone precursors that help rebalance the body. Ashwagandha's biologically active components also include alkaloids, steroidal lactones, and saponins. This profile can be skin-enriching because Ashwagandha is known to aid in the production of skin-enriching compounds, such as hyaluronan, elastin, and collagen. Since Ashwagandha's adaptogenic properties can help improve inner health and vitality, that same synergy is likely to extend to skin and hair health. This means that the herb can potentially aid in reducing hair loss and enhance the radiance of the skin.
Adaptogens are known to:
• Increase the body's tolerance to stress instead of generating specific responses as defense mechanisms. This brings the body to homeostasis, a state of balance, by engaging multiple physiological processes at once.
• Support a healthy inflammatory response.
• Help the body buffer stress with long-term regular use, reducing strain and improving efficiency.
• Reduce mental stress.
• Improve cardiovascular function, and potentially decrease blood pressure.
• Regulate hormones.
• Help rebuild strength after physically or mentally stressful events.
Withanolides are known to:
• Convert into human physiological hormones as needed.
• Simulate a hormone's effect when more of that hormone is needed.
• Block overactive hormones by taking up space in the cell's receptor site.
• Have neuroprotective properties.
• Help soothe inflammation when used topically.
• Have chondroprotective effects.
Withaferin A has been discovered to:
• Potentially reduce neurodegenerative symptoms.
• Potentially support healthy aging by healing and restoring the body after it is exposed to stressors.
Sitoindosides VII-X:
• Similar to Withaferin A, sitoindosides can behave as anti-stress agents.
Ashwagandha is most often used as a root powder, although liquid extracts and capsules are widely available. Many supplements opt for a more concentrated ashwagandha extract. The tuberous roots of Withania somnifera are harvested in the winter, then dried and ground into a powder. The roots are cut into small pieces and spread on a stainless-steel sieve of 10 to 30 mesh. At this stage, contaminants, infected roots, and dust are sieved out or removed by hand. Once the roots are clean of all impurities, they are put into a clean bag to begin pulverization, a process in which pressure is applied to a particle to break it down and reduce its size. This practice is common in powder production.
Ashwagandha Liquid Extract (Standardized) is produced through a unique manufacturing technology. The process avoids applications that involve high amounts of alcohol, heavy toxic solvents, or high heat. Ashwagandha's active constituents are analyzed using HPLC (high-performance liquid chromatography), in which a mixture of compounds is separated to identify, quantify, or purify the individual constituents.
Ashwagandha in Health
Being an integral ingredient in Ayurvedic medicine, Ashwagandha powders, capsules, and liquid tinctures are taking the natural health industry by storm as a supplement to a healthy lifestyle. Having been the subject of numerous scientific studies, the extract is known as an adaptogen and is believed to help the body manage stress. Adaptogens, including Ashwagandha, are believed to support optimal health by cultivating a defense against environmental and emotional stressors as well as mental ones. These herbs have a naturally energizing essence that can provide alertness without overstimulating. When taken orally, Withania somnifera is also ideal for athletes. Studies show that the herb can aid the body in recovering from vigorous exercise and potentially improve cardiovascular health. While oral supplements of Ashwagandha are plentiful in the market, New Directions Aromatics' Ashwagandha Liquid Extract is recommended for external use only.
Ashwagandha in Beauty
As a naturally occurring antioxidant, Ashwagandha Liquid Extract possesses hydrating properties and an abundance of vitamins and minerals, making it an ideal supplementary ingredient for natural hair and skin care products. New Directions Aromatics' Ashwagandha Liquid Extract can add a healthy-looking sheen and help promote strength when added to hair care routines. Ashwagandha contains propanediol, an NPA-approved, petrochemical-free emollient that is extracted from corn sugar. It is known to dissolve ingredients, decrease a formula's adhesiveness, and lock in moisture. This makes propanediol an ideal substitute for propylene glycol, which can trigger eczema and other skin inflammations. To make a toner, Ashwagandha has often been paired with honey or milk to enhance its astringent properties. New Directions Aromatics' Ashwagandha Liquid Extract is safe enough to be added to toners, as well as to lotions, creams, and body butters.
Regarding natural haircare, Ashwagandha is known to stimulate the scalp, encourage blood circulation, and reduce dandruff. The herb's moisturizing benefits makes it one of the key ingredients used in Ayurvedic shampoos. New Directions Aromatics' Ashwagandha Root Extract is ideal to use when self-massaging the scalp to work the maximum number of benefits into the scalp and roots.
In natural and organic cosmetics, Ashwagandha Liquid Extract's initial pigmentation adapts seamlessly to formulas that are also naturally derived, making it an optimal additional ingredient. Ashwagandha can take the place of artificial ingredients to achieve equally potent colors and beautiful results. NDA's Ashwagandha Liquid Extract contains a standardization of 1.68% withanolides, and 1-3% is recommended when adding it to skincare products.
As with all New Directions Aromatics products, liquid extracts are for external use only. It is imperative to consult a medical practitioner before using Ashwagandha Liquid Extract for therapeutic purposes. Ashwagandha may also cause excess heat in the body, head, and heart (known as sadhaka pitta). For this reason, ayurvedic practitioners recommend pairing ashwagandha with cooling herbs such as licorice, or with cooling foods such as ghee, milk, rice, and raw sugar.
Ashwagandha Liquid Extract is not recommended for people who are pregnant or breastfeeding, immunocompromised, or have a thyroid disorder. The extent of the side effects of Ashwagandha Liquid Extract are still unknown and require more research. However, large doses of the herb can result in upset stomach, diarrhea, and vomiting. Ashwagandha has the potential to interact with drugs such as sedatives, blood thinners, thyroid supplements, and drugs for anxiety and high blood pressure. Speak with your doctor before incorporating Ashwagandha into your diet.
|
Design A Rocket
by Arvin61r58 - uploaded on June 25, 2020, 8:03 pm
Exterior design elements for a rocket. From NASA's International Space Station Activity Book page 27.
Most of the graphics were redrawn by hand, so I cannot guarantee accuracy in relation to the original document. The text is the same, with the exception of automatic font conversion, and text was changed to paths.
The ISS Activity Book was written for children, so don't feel insulted if it appears to be condescending. Also, the picture-box mentioned in the top paragraph was not included in this upload.
NASA space rocket ISS International+Space+Station InternationalSpaceStation activity+book ActivityBook design
|
Weight Training and Fat Loss
Imagine that your body is a motor vehicle. Your muscles: use energy to produce movement (like an engine); absorb impact forces that otherwise could destroy your bones, connective tissue and joint structures (like shock absorbers); and provide the framework that enables you to function physically (like the chassis). Just as mechanics know that proper maintenance keeps your car in good shape, researchers are finding that weight training plays a vital role in keeping your muscles well-tuned.
Weight training also plays a crucial part in your weight loss efforts and, more importantly, in helping you maintain your weight loss results. One of the biggest mistakes people make when starting a weight loss or body transformation program is not including weight training alongside their cardiovascular exercise and eating regimen. This is unfortunate, because when you cut calories without weight training for an extended period, you can lose muscle as well as fat. And when you lose muscle, your body becomes a lot less efficient at burning fat. When you gain muscle, however, your body will burn more fat, 24 hours a day!
The benefits of weight training for weight loss include…
Weight Training Increases Your Metabolism
Your resting metabolic rate represents the amount of energy you need on a daily basis to sustain the function of your body. Even at rest, muscle is very active tissue. Consequently, muscle loss results in a reduction in your metabolic rate. Because less muscle means lower energy requirements, calories that were previously used for muscle maintenance are now stored as fat. Sensible weight training is the best means of avoiding decreases in muscle mass and metabolic rate, and guarding against the obesity creep, i.e. weight training will maintain or increase your metabolic rate, which in turn helps you to maintain or decrease your body fat levels.
Weight Training Improves Glucose Metabolism
Researchers have reported a 23% increase in glucose uptake after four months of weight training. Because poor glucose metabolism is associated with increasing body fat and adult onset diabetes, improved glucose metabolism is an important benefit of regular weight training.
Weight Training Helps Neutralise Age-related Muscle Loss
Most adults that do not do weight training lose between 2.3 and 3.2 kg of muscle per decade. This equates to a decrease of 2-5% in metabolic rate every decade. At rest, 1 kg of muscle requires 13 calories per day for tissue maintenance, and during exercise, muscle energy utilisation increases dramatically. An InBody Scan (or other similar test) will help you monitor changes in your lean muscle mass.
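To put those figures together: losing 3 kg of muscle at roughly 13 calories per kilogram per day means about 39 fewer calories burned each day at rest, which compounds to roughly 14,000 calories over a year.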
Weight Training Improves Daily Function
Increased functional strength from weight training does wonders to help you with activities of daily living such as house work, working in the yard, moving furniture, and carrying bags of groceries, without gasping for air and tiring within minutes. If you have a medical condition such as arthritis or multiple sclerosis, lifting weights can also be a great help. The greater efficiency in performing general activities due to increased strength can also lead to an increase in the use of body fat as energy thereby helping with fat loss.
Weight Training Improves Posture
When your body is stronger (including your core), you are better able to hold yourself with good posture, your back aches less, there is less stress in your neck and your legs feel strong. You simply function better! Most people who increase their strength, also report an increase in self-confidence.
How Much Is Enough?
Although a personal trainer can help determine the best program for you, as a general rule benefits can be achieved from training 2 – 3 non-consecutive days per week for a minimum of one set per exercise for each major muscle group. You should use enough resistance to fatigue the muscle group by the end of each set.
Getting Stronger
Training your muscles does take some effort, but no matter what age you are, you’ll find that strength training will fuel a healthy lifestyle and help you function better in all aspects of your life.
|
The antenna is then connected to a novel device made out of a two-dimensional semiconductor just a few atoms thick. The AC signal travels into the semiconductor, which converts it into a DC voltage that could be used to power electronic circuits or recharge batteries.
In this way, the battery-free device passively captures and transforms ubiquitous Wi-Fi signals into useful DC power. Moreover, the device is flexible and can be fabricated in a roll-to-roll process to cover very large areas.
Source: MIT news
Name of Author: Rob Matheson
|
Analytical Efforts Toward Monitoring Groundwater in Regions of Unconventional Oil and Gas Exploration
Special Issues
Spectroscopy Supplements, Special Issues-10-02-2015, Volume 30, Issue 10
Pages: 45–51
A mix of analytical methods is required to understand the impact, if any, that UOG activity is having on groundwater.
Gas chromatography (GC), inductively coupled plasma–mass spectrometry (ICP-MS), ICP–optical emission spectrometry (ICP-OES), and other bulk analysis methods are applied to groundwater in proximity to unconventional oil and natural gas extraction activities.
The United States has experienced a dramatic shift in economic influence over the past 10 years with the widespread engineering advances that have allowed unconventional oil and gas (UOG) extraction to become more efficient and cost-effective. Small rural towns have become industry hubs overnight as a result of the hydrocarbons trapped beneath the ground. Educators have increased the number of engineering and technical programs available to students in an effort to meet the demand for a qualified workforce. Stories of multigeneration ranchers becoming millionaires overnight through the leasing of their land and mineral rights puts the television show The Beverly Hillbillies in a more current light.
Regulations related to UOG activity are currently left to each state, which creates disparities in environmental testing and monitoring across the United States. For example, Colorado and Illinois require baseline groundwater testing before drilling commences, while Pennsylvania only suggests baseline measurements in the event that a dispute arises after drilling. The list of organic compounds Colorado has chosen to monitor through chromatographic methods are total petroleum hydrocarbons, benzene, toluene, ethyl benzene, and xylenes (BTEX), polycyclic aromatic hydrocarbons (PAH) plus benzo[a]pyrene, and dissolved gases. These analytes have been the focus of analyses for years. Their presence in water is hypothesized to indicate an adverse environmental interaction with hydrocarbon extraction operations. However, their sources can still be convoluted and may not be wholly specific to UOG.
Within the past decade, a mix of analytical methods has been developed or applied to establish an understanding of the impact, if any, that UOG activity is having on groundwater in the vicinity. This article discusses chromatographic methods applied for particular organic compounds and considerations to assist in method development. A section is also dedicated to spectroscopic methods for detection and quantification of metals and ions in water samples that are relevant to UOG activity. Collections from our groundwater research are highlighted in each section to demonstrate the application.
Chromatography for the UOG Field
The ideal situation for creating a method would be to work with a system of known knowns (1), compounds that are expected to be present and are positively identified. Current researchers must also have the capability to work with unknown knowns, the ability to identify unexpected but identifiable compounds, and unknown unknowns, compounds that are unexpected and lack standards, Chemical Abstracts Service (CAS) numbers, or database entries. These may include, for example, proprietary polymers or surfactants developed primarily for the UOG field. A complication that may be encountered is known unknowns, expected compounds that are not detected. The internal debate becomes: how confident are we that the compound is a "known"? Is the concentration too low to detect in the given sample, or is the method inadequate? In the discussion to follow, there are very few "knowns" to be expected when monitoring groundwater possibly impacted by UOG activity.
Gas Chromatography
Gas chromatography (GC) methods have been at the forefront for analysis of organic compounds in groundwater and UOG wastewater (UOGWW) (2). While there is the potential for nonvolatile organic additives such as surfactants to be present, the majority of hydraulic fracturing additives or shale formation compounds of health or environmental concern are GC amenable. In a 2011 Congressional report, 24 organic hydraulic fracturing additives are listed as “Chemical Components of Concern,” of which 23 are GC amenable without the need for derivatization. Some of these include BTEX, diesel, and naphthalene, which have been suggested for baseline measurements by various states.
Numerous Environmental Protection Agency (EPA) and state regulatory methods have been established using GC for these and a multitude of other compounds of concern over the past 50 years. While a mix of regulatory methods can be found that include a subset of these compounds, the lack of a single dedicated standard approach to effectively extract and separate a probable list of compounds in groundwater or UOGWW is a complicating factor that has slowed research. Most officially standardized versions of these methods are less capable of the throughput needed to prepare and analyze a large number of samples in a limited timeframe.
Dissolved Gas Analysis
The earliest efforts to assess the impact of UOG activity on groundwater were through the measurement of dissolved gases, specifically methane, ethane, propane, butane, and pentane (C1, C2, C3, C4, and C5, respectively), in groundwater from regions within close proximity of UOG drilling sites (3). Methane is the most abundant component of natural gas extracted for energy purposes, with ethane and propane comprising the majority of the remaining small fraction. The hypothesis is that if there is a failure in the integrity of the protective casing of the UOG well (4,5), or if induced fractures in the shale create interconnectivity with the overlying aquifer, the natural gas would be the most abundant and mobile species to detect in groundwater.
Two types of methane can be measured in groundwater (6). The most common type found in shallow groundwater is biogenic methane, a by-product of bacterial metabolism. Thermogenic methane is the other type, the primary target of UOG recovery. This methane gas is formed by the presence of decomposing organic matter under high temperatures and pressures over a long period of time (that is, from deep geological formations). Because of the different implications for each type of natural gas, methane measured in shallow groundwater must include further investigations to distinguish between biogenic or thermogenic origins.
The origin of the measured methane can be determined either through isotopic abundances of carbon-13 (13C), deuterium (2H), or the methane to ethane and propane ratio (7). A ratio of methane to higher chain hydrocarbons of less than approximately 100 suggests thermogenic gas (3). Both of these approaches have even been found to not only identify thermogenic methane, but also distinguish between different natural gases produced in different geological formations (3,8,9).
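As a back-of-the-envelope sketch of that rule of thumb (the concentrations below are invented for illustration), the calculation itself is trivial:

def methane_ratio(c1: float, c2: float, c3: float) -> float:
    # Ratio of methane (C1) to ethane plus propane (C2 + C3)
    return c1 / (c2 + c3)

# Hypothetical dissolved-gas concentrations, all in the same units (e.g., mg/L):
ratio = methane_ratio(c1=12.0, c2=0.30, c3=0.10)
print(round(ratio))   # 30 -- below ~100, suggesting thermogenic gas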
GC separations coupled with flame ionization detection (FID) are most commonly used for the analysis of these light hydrocarbons. Groups measuring these light hydrocarbons do so in a targeted manner, meaning they are tailored specifically for C1–C5 gases and little else. These methods are quite sensitive and selective, but ultimately lack the ability to detect a wide suite of unknown compounds. Sample introduction is performed through either purge-and-trap techniques (10), where the water is purged with an inert gas and volatiles are trapped on a selective sorbent, or using headspace analysis (11), where the sample is heated and agitated to liberate the gas to an open headspace in the vial, which is then sampled. Column selection for this analysis leans toward the use of porous layered open tubular (PLOT) columns, primarily those with a divinylbenzene phase (10,12). As alluded to earlier, PLOT phases possess a great affinity for C1-C5 hydrocarbons, but that affinity is further extrapolated to the C6-C8 linear and branched alkanes and aromatics, which are also present in natural gas, making analysis of these larger hydrocarbons inefficient because of long retention and excessive band broadening. An example of this affinity is demonstrated in Figure 1 with a natural gas standard separated on a PLOT divinylbenzene column and a 5% diphenyl capillary column.
Intricate sample collection further complicates this analysis, which typically requires additional measurements, such as isotopic analysis, to enable sourcing of the natural gas in the water (that is, comparison of the isotope signature of natural gas in the water with that of the targeted shale or other sources). Groundwater samples are typically collected from volunteers' water wells, in which the withdrawal rate and consistency vary across the population. Agitation, along with the pressure differential on the water once it reaches the surface, can cause the water to degas and skew dissolved gas measurements below their actual values. For better control during sampling, it is recommended to use a non-gas-permeable tube with a valve connected to the water well head that flows at a constant rate into an evacuated bladder (for example, an IsoFlask sampling bladder [Isotech Laboratories]). The sampling bladder should be preloaded with a chosen biocide to reduce degradation of the gas by bacteria, and it should be stored at 4 °C for a maximum holding time of 14 days before analysis (13).
As a complementary approach, our group has also demonstrated the capabilities of a new spectroscopic detector, the VGA-100 vacuum ultraviolet (VUV) detector (14) (VUV Analytics), which measures gas-phase absorption in the VUV and ultraviolet wavelength regions (120–240 nm), for monitoring dissolved gases in water. This universal detector offers qualitative gas-phase VUV spectra to accompany quantitation of the C1–C5 hydrocarbons, along with N2, O2, and CO2 if desired (15). While this work separated the C1–C5 hydrocarbons in three water samples from the Barnett Shale with an HP-PLOT Q column (30 m × 0.32 mm, 20-µm df), the deconvolution capabilities of the acquired spectra could allow the compounds to be quantified even in the void volume of capillary columns. More work is needed to interface this detector with the various sampling protocols that exist for measuring natural gas in water to demonstrate its unique qualitative and quantitative capabilities for routine analysis. The blue chromatogram of Figure 1 is the previously discussed GC separation of a natural gas standard with the PLOT divinylbenzene column and VUV detection.
Organic Compounds
In the vast majority of UOG reservoirs, hydraulic fracturing is used to stimulate the formation. The fluids used to open fissures in low-permeability shale formations include water, sand, and a small percentage of chemicals. These chemicals are a mixture of acids, bases, salts, organic compounds, and inorganic compounds, which serve myriad purposes. Even though these chemicals make up a small percentage of the liquid used for hydraulic fracturing, they can account for a median of over 10,000 kg (16) of the national average of 2.4 million gallons of water used per UOG well (17). This massive amount of chemicals is trucked to the pad site, stored and mixed onsite, and injected during hydraulic fracturing operations. Then, up to 30% of the water resurfaces during the flowback period before production begins. The storage, use, and collection of these chemicals, mixed hydraulic fracturing fluids, other chemicals involved in equipment cleaning and drilling processes, and the resulting flowback are all possible sources of groundwater contamination through controllable surface activities (18). Casing and cement failures (19) are a subsurface route for fluid introduction to the aquifer system, an event with little operator control that occurs at varying rates, reported to affect from 3% (20) up to 12% of wells within the first five years (21).
The majority of hydraulic fracturing additives can be found in lists that have grown more populated in recent years. One of the earliest lists (22) appeared in a report by the US House of Representatives Committee on Energy and Commerce. It included over 750 unique additives found in more than 2500 products available for hydraulic fracturing from 2005 to 2009. Additional pertinent information in the report includes the number of products in which each compound is found, a table of additives that pose health or environmental risks, and usage statistics for specific compounds of concern like 2-butoxyethanol. FracFocus, instated in April 2011, is the national hydraulic fracturing chemical registry. In the US, 28 states require chemical disclosure of hydraulic fracturing fluids, of which 22 use FracFocus. It currently contains more than 80,000 disclosure documents from more than 1000 companies. This registry provides the operator, location, depths, chemicals, and mixed concentrations used in hydraulic fracturing activities (23). The companies that disclose this information are able to protect trade secrets by reporting some additives as "proprietary polymers" or under similar designations. A review of the FracFocus database from January 2011 through February 2013 (16) revealed more than 37,000 logs, which included chemical disclosure of 692 unique ingredients, of which 11% were deemed trade secrets.
GC coupled to mass spectrometry (MS) has been the workhorse used to separate, detect, and possibly identify volatile and semivolatile compounds present in groundwater, after appropriate sample preparation (2). The MS detector is practically a requirement when surveying groundwater for contaminants related to UOG, even when performing targeted analysis for specific compounds. The qualitative information gained from the MS detector is invaluable in confirmation and unknown identification. The potentially complex mixture of compounds in groundwater impacted by UOG has generated false positives when using the suggested FID in EPA methods used for general groundwater (24).
The capability for unknown identification with GC–MS overwhelmingly begins with the electron ionization (EI) source (1). The EI source generates diagnostic fragment ions of a compound in a systematic manner. The resulting spectra can then be matched across a number of mass spectral libraries, generated by the National Institute of Standards and Technology (NIST), the National Institutes of Health (NIH), and the EPA, among others. Another ionization source, chemical ionization (CI), can be used to complement the EI data. CI generally preserves the molecular ion of the compound. It can be described as a softer ionization technique; it therefore generates fewer fragments and is not used as the primary source for unknown identification. An MS detector capable of CI may also offer negative chemical ionization (NCI), a selective ionization technique effective for ionizing halogenated compounds. This selectivity is a detriment to broad surveying of unknown compounds, but it is a valuable tool for researchers investigating halogenated species.
Fragmentation information from tandem MS (MS-MS) for further confidence in identification can be generated by ion-trap or triple-quadrupole MS detectors. High resolution and accurate mass (HR-AM) analysis of ions for an additional identification vector can be achieved with time-of-flight (TOF) and orbital trap MS detectors. Hybrid MS detectors can also be found to combine the MS-MS capabilities with HR-AM on the back-end, as with a Q-TOF or Q-orbital trap.
A portion of a Texas well water study conducted by our group included identification of the volatile and semivolatile compounds in groundwater across the Barnett Shale region, with sampling locations shown in Figure 2. A GC–MS method was developed to provide appropriate sensitivity and good sample throughput. The aim of the method was to extract and separate the greatest number of compounds from a small volume of water with minimal sample preparation. Sample preparation consisted of a 2-mL ethyl acetate extraction from 5 mL of groundwater, shaken for 1 min in a screw-top vial. GC–MS analysis was performed using a 30 m × 0.25 mm, 0.25-µm df Rxi-5ms (Restek) column with a single-quadrupole MS detector and an EI source. The "5" column, or 5% diphenyl, 95% PDMS stationary phase, is typically regarded as a general-purpose column and has retention characteristics quite comparable to the columns on which Kováts retention indices were calculated. The retention index of an unknown peak can be a valuable piece of data for narrowing down the possible identities of detected unknown compounds (25). Separation and MS parameters were set for the detection of 35 target compounds in our method. These compounds were chosen based on their popularity of use, possible health and environmental effects, detection in previous research, and GC amenability, and they consisted of various alcohols, aromatics, and other hydrocarbons. MS settings, summarized in Table I, included groups of selected ion monitoring (SIM) events for the base peaks of our target compounds, coupled with full spectral scanning for the confirmation of peaks measured by SIM, as well as the possible identification of unknowns through spectral matching. These acquisition groups were typically around 2 min each in an effort to keep the SIM ion count low and maintain an effective MS duty cycle. The scanning parameters also changed with each acquisition group, in that the upper m/z limit increased from 100 to 400 over the course of the separation. The assumption was that compounds eluting earlier from the column would be lighter than those eluting later; narrowing the acquisition window reduces noise in the spectrum.
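As a rough illustration of this acquisition logic, the sketch below encodes hypothetical time-segmented SIM/scan groups; the retention windows, target ions, and scan ranges are invented for demonstration and are not the published method parameters from Table I:

```python
# Each acquisition group monitors the SIM base-peak ions of the targets
# expected to elute in that window, while the full-scan upper m/z limit
# steps upward through the run (lighter compounds elute first).
acquisition_groups = [
    # (start_min, end_min, SIM ions (m/z), scan range (m/z))
    (2.0, 4.0, [31, 45], (29, 100)),     # e.g. light alcohols
    (4.0, 6.0, [78, 91], (29, 150)),     # e.g. benzene, toluene
    (6.0, 8.0, [91, 106], (29, 250)),    # e.g. ethylbenzene, xylenes
    (8.0, 10.0, [128, 142], (29, 400)),  # e.g. heavier semivolatiles
]

for start, end, sim_ions, (low, high) in acquisition_groups:
    print(f"{start:4.1f}-{end:4.1f} min | SIM {sim_ions} | scan m/z {low}-{high}")
```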
Initial application of this method detected methanol and ethanol in some groundwater samples. These detections could not be quantified with the data at the time because of poor retention on the Rxi-5ms column and a fair amount of background noise from permanent gases like N2 and O2 when monitoring the base peak of the alcohols, m/z 31. This led our team to develop a method to address both problems.
A mid-polarity GC column was chosen for the complementary analysis. The team still wanted to maintain the ability to adequately retain linear hydrocarbons if present, so working with a 100% PEG column was nearly out of the question, even though it maintains a great selectivity for these alcohols. The 30 m × 0.32 mm, 1.20-µm df Phenomenex ZB-BAC2 column, developed and marketed for blood alcohol analysis, was chosen for its retention and selectivity for these two alcohols and other solvents, along with the possibility of using a second paired column, the Phenomenex ZB-BAC1 column, for confirmation if needed. The method also incorporated FID to help reduce the background noise while detecting the light alcohols. A static headspace injection technique was chosen to effectively extract the analytes of interest and reduce background. A salt solution was added to the water sample to reduce the solubility of the alcohols in groundwater. Samples were agitated, heated, and injected automatically using an AOC-5000plus autosampler (Shimadzu Scientific Instruments).
In our initial study of Barnett Shale groundwater in 2011 (26), 29 of the 100 samples contained methanol at concentrations as high as 329 mg/L and 12 samples contained ethanol at levels as high as 11 mg/L. These detections had no correlation with distance to UOG wells. Numerous industrial processes use these alcohols, and they can be produced through a range of biological pathways, so identifying the sources for the occurrences was not practical with the limited data.
A follow-up study in 2014 expanded the research to 550 groundwater samples across 13 counties in north Texas (27), shown in Figure 2. Beyond the methanol and ethanol seen previously, additional compounds were detected in this study. Alcohols included methanol (35 wells), ethanol (240), isopropyl alcohol (8), and propargyl alcohol (155). Ethanol and propargyl alcohol had a positive correlation with each other. Both are ingredients in hydraulic fracturing fluids and, based on chi-squared analysis, were detected at a higher frequency than expected in the most productive counties. Chloroform, dichloromethane, and trichloroethylene were detected in 330, 122, and 14 wells, respectively. These chlorinated compounds are not disclosed ingredients in hydraulic fracturing fluid, but they have been identified in UOGWW (28), and dichloromethane has been suggested (29) to be present during drilling operations as a degreaser for equipment. The study also found that 381 samples contained at least one aromatic of the BTEX class, with 10 samples containing all four species. Benzene was detected in 34 wells, toluene in 240, ethylbenzene in 22, and at least one xylene isomer in 240 water well samples. The BTEX compounds collectively can be found in hydrocarbon fuels, refined or unrefined, and some are used individually as industrial solvents, and even as hydraulic fracturing additives.
All of the compounds mentioned above can be linked directly or indirectly to UOG operations. However, these compounds are fairly common in the industrial and agricultural settings in which this research was conducted, which makes it impossible to implicate UOG as the source of the contaminants with absolute confidence. Likely the only way to conclusively attribute groundwater contamination to UOG would be through the detection of proprietary tracers (30), suggested to be fluorinated compounds exotic enough to allow each company to perform internal monitoring by MS detection.
Chromatography has been at the forefront of advanced analytical chemistry for tackling the complex mixtures related to UOG that may be encountered during research. The previously discussed approaches are appropriate for identifying individual compounds, but there are situations in which monitoring bulk chemical classes yields adequate information. Many metals and ions can also be determined spectroscopically (31). For the most part, these methods cost less to operate than chromatography–MS methods, require less technical expertise, and can even be performed portably. Spectroscopic approaches associated with UOG have included UV–vis spectroscopy, infrared (IR) spectroscopy, and optical emission spectroscopy (OES). Yet many of these methods can fall victim to interferences from chemically similar compounds or ions, because analytes are measured in bulk solution without prior sample preparation. These methods are also typically intended for oil field wastewaters or produced water, both of which commonly contain higher concentrations of the analyte than would ever be expected in compromised groundwater.
Absorbance methods measured in the UV–vis region have been used for quantitating anionic surfactants (32), barium (33), boron (34), iron (35), sulfate (36), sulfide (37), and total petroleum hydrocarbons (TPH) (38). Anionic surfactants can be monitored near 655 nm as a complex with methylene blue according to EPA Method 425.1. Chloride solutions have been shown to give false positives with this approach (39). Barium has been measured as low as 2 mg/L as a precipitate after adding a sodium sulfate mixture and quantified at 450 nm. Strontium, silica, and calcium are the most detrimental interferences that may be encountered using this approach. Boron is measured at levels above 2 mg/L as the reaction product with carmine at 605 nm. Iron is monitored with the colorimetric phenanthroline indicator at 510 nm after the reagent has converted most forms of iron to a soluble ferrous iron. Iron can be measured down to 0.1 mg/L with the most common interference being a cumulative concentration of Ba2+ and Sr2+ greater than 50 mg/L. Sulfate at levels greater than 2 mg/L can be measured at 450 nm through a turbidimetric method after precipitation as barium sulfate. However, barium, magnesium, and silica present in the water sample can interfere with the accuracy of these results. Sulfide can be detected spectroscopically down to 0.01 mg/L at 665 nm after reacting with N,N-dimethyl-p-phenylenediamine sulfate to form methylene blue. A semiquantitative method for TPH, which is a cumulative measurement of hydrocarbons ranging from C6 to C36, uses an immunoassay in which the hydrocarbons and enzyme compete to bind to antibodies immobilized on the cuvette. Measuring this absorbance at 450 nm yields a sensitivity equivalent to at least 2 mg/L diesel fuel. Chlorine present in solution can interfere with the assay.
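All of these colorimetric determinations ultimately rest on a calibration curve relating absorbance to concentration (the Beer-Lambert relationship). The following generic sketch, with invented standards and readings, shows how such a curve might be fit and inverted for an unknown sample:

```python
# Fit absorbance vs. concentration for a set of standards, then solve
# A = m*c + b for the concentration of an unknown. Values are illustrative.
import numpy as np

standards_mg_L = np.array([0.0, 1.0, 2.0, 4.0, 8.0])   # hypothetical standards
absorbance = np.array([0.01, 0.12, 0.23, 0.45, 0.89])  # at the method wavelength

m, b = np.polyfit(standards_mg_L, absorbance, 1)  # slope, intercept
unknown_abs = 0.34
concentration = (unknown_abs - b) / m
print(f"Estimated concentration: {concentration:.2f} mg/L")  # ~3 mg/L here
```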
TPH can also be measured by IR spectroscopy. Previously, a method consisting of serial extractions with fluorocarbon-113 (CFC-113) and silica drying was shown to generate an extract adequate for quantifying the absorbance of C-H stretches at 2950 cm-1. Because CFC-113 is an ozone-depleting substance, the EPA has discontinued the method and suggests using ASTM International Method D7006-04, which uses S-316 as a CFC substitute.
OES has become the technique of choice for measuring dissolved metals in UOGWW (2). Metals of interest such as Ba, Sr, Fe, Na, Ca, and Mg are easily in the milligram-per-liter concentration range, comfortably above the detection limits of inductively coupled plasma (ICP)-OES. The alkali and alkaline earth metals can be at levels of hundreds to thousands of milligrams per liter, depending on the contribution of formation water to the overall UOGWW mixture. These excessive concentrations can be detrimental to an ICP-MS, which is accepted to be more sensitive than ICP-OES. Most ICP-OES instruments offer the option to change between axial and radial viewing modes to assist in measuring samples across a wide concentration range, resulting in a much wider linear range for quantification than ICP-MS (40). Great care must be taken in wavelength selection for ICP-OES to ensure there is no spectral overlap from other metals at high concentrations or unexpected interferences from other hydraulic fracturing additives. Atomic absorption could also be implemented, but it lacks the multielement throughput of ICP-OES.
The majority of research and application notes involving metals analysis with ICP-OES have been toward profiling UOGWW. These samples are currently a national disposal issue, a challenging matrix to overcome when making measurements, and possess a set of inorganic “known knowns” like brine salts to target. Our more recent investigation (27) of water quality overlying the Barnett Shale used ICP-OES (ICPE-9000 from Shimadzu Scientific Instruments, Inc.) for measurement of 13 metals most relevant to UOG exploration and that exhibited minimal spectral interferences. Strontium was the only metal determined to be above the 4.0 mg/L maximum contaminant level (MCL) by ICP-OES. Standard addition was used for quantification to overcome unpredictable matrix effects that had previously been observed (41).
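As a rough sketch of the standard addition calculation mentioned above (not the authors' actual workflow; all numbers are invented), the sample concentration is recovered by extrapolating the spiked calibration line back to zero signal:

```python
# Spike equal aliquots of sample with increasing known amounts of analyte,
# fit signal vs. added concentration, and extrapolate to signal = 0.
# The unspiked concentration is intercept/slope.
import numpy as np

added_mg_L = np.array([0.0, 1.0, 2.0, 4.0])   # standard added to each aliquot
signal = np.array([0.42, 0.63, 0.84, 1.26])   # hypothetical emission intensities

slope, intercept = np.polyfit(added_mg_L, signal, 1)
c_sample = intercept / slope
print(f"Estimated sample concentration: {c_sample:.2f} mg/L")  # 2.00 mg/L here
```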
ICP-MS is another approach for elemental analysis, primarily for trace metals in water. The MS detector is more sensitive than OES, but it has a limited dynamic range. Our groundwater studies (26,27) have used ICP-MS (Varian 820 ICP-MS) for the quantification of arsenic and selenium. Arsenic, with an MCL of 10 µg/L, and selenium, at 50 µg/L, warrant the additional sensitivity of ICP-MS for adequate quantitation. Strontium and barium were also measured by ICP-MS in 2011 because of instrument availability (26). In 2011, As, Se, and Sr each were shown to have a negative correlation with distance to UOG wells (that is, higher values in water wells closer to UOG wells). The most plausible conclusion was that an increased pH and mechanical vibrations from the neighboring UOG activity liberated iron oxide that had complexed these metals in poorly maintained water wells. In 2011, 29 of the 100 samples exceeded the EPA MCL for arsenic, but only 10 of the 550 samples in 2014 exceeded the limit. It is hypothesized that the reduction in UOG exploration between the sampling campaigns reduced subsurface vibrations, in turn reducing the amount of dissolved arsenic. The groundwater in 2014 was also found to be a less reducing environment, which would decrease the solubility of arsenic.
Concluding Remarks
Significant efforts have been made in this decade, predominantly by academic institutions, to understand the environmental, social, and economic effects of UOG exploration. The collaborations that have formed through this multifaceted research have generated important conclusions to date, but nearly all have commenced without technical or chemical advice from industrial partners. Guidance from drilling operators would allow for more focused and efficient analytical methods and more effective conclusions, which could in turn ease public concerns about UOG. Most of the aforementioned compounds can enter groundwater through avenues other than UOG, confounding the ability to identify the source. The burden has been on the researcher to present exhaustive evidence when contamination from UOG is suggested, while operators are able to merely discredit the research and rely on the uncertainty of these other possibilities. Overcoming this hurdle will likely happen only once proprietary chemicals or tracers are incorporated, so that contamination events can be clearly attributed to a particular UOG process and operation. Until then, conclusions will continue to be deduced by disproving all other possibilities, which is not an efficient route to take.
(1) S. Stein, Anal. Chem. 84, 7274–7282 (2012).
(2) D.D. Carlton Jr., Z.L. Hildenbrand, B.E. Fontenot, and K.A. Schug, in Hydraulic Fracturing Impacts and Technologies, V. Uddameri, A. Morse, and K.J. Tindle, Eds. (Taylor & Francis Group, New York, 2016), pp. 115–132.
(3) S.G. Osborn, A. Vengosh, N.R. Warner, and R.B. Jackson, Proc. Natl. Acad. Sci. U.S.A. 108, 8172–8176 (2011).
(4) R.E. Jackson, A.W. Gorody, B. Mayer, J.W. Roy, M.C. Ryan, and D.R. Van Stempvoort, Ground Water 51, 488–510 (2013).
(5) A. Kissinger, R. Helmig, A. Ebigbo, H. Class, T. Lange, M. Sauter, M. Heitfeld, J. Klünker, and W. Jahnke, Environ. Earth Sci. 70, 3855–3873 (2013).
(6) D.A. Stolper, M. Lawson, C.L. Davis, A.A. Fereira, E.V. Santos Neto, G.S. Ellis, M.D. Lewan, A.M. Martini, Y. Tang, M. Schoell, A.L. Sessions, and J.M. Eiler, Science 344, 1500–1503 (2014).
(7) S.G. Osborn and J.C. McIntosh, Appl. Geochem. 25, 456–471 (2010).
(8) R.B. Jackson, A. Vengosh, T.H. Darrah, N.R. Warner, A. Down, R.J. Poreda, S.G. Osborn, K. Zhao, and J.D. Karr, Proc. Natl. Acad. Sci. U.S.A. 110, 11250–11255 (2013).
(9) M. Schoell, Geochim. Cosmochim. Acta 44, 649–661 (1980).
(10) L. Chambers, OI Analytical, Application Note 37920312 (2012).
(11) F. Hudson, US EPA, RSK SOP-175 (2004).
(12) Z. Ji, Agilent Technologies, Application Note 228-387 (2000).
(13) Isotech Laboratories, Collection of Groundwater Samples from Domestic and Municipal Water Wells for Dissolved Gas Analysis (2011; accessed August 6, 2014).
(15) L. Bai, J. Smuts, P. Walsh, H. Fan, Z. Hildenbrand, D. Wong, D. Wetz, and K.A. Schug, J. Chromatogr. A 1388, 244–250 (2015).
(16) Analysis of Hydraulic Fracturing Fluid Data from the FracFocus Chemical Disclosure Registry 1.0 (U.S. Environmental Protection Agency, Washington, D.C., EPA/601/R-14/003, pp. i-155, 2015).
(17) R.B. Jackson, E.R. Lowry, A. Pickle, M. Kang, D. DiGiulio, and K. Zhao, Environ. Sci. Technol. 49, 8969–8976 (2015).
(18) A. Vengosh, R.B. Jackson, N.R. Warner, T.H. Darrah, and A. Kondash, Environ. Sci. Technol. 48, 8334–8348 (2014).
(19) C. Brufatto, J. Cochran, L. Conn, and D. Ower, Oilfield Review (Autumn), 62–76 (2003).
(20) R.D. Vidic, S.L. Brantley, J.M. Vandenbossche, D. Yoxtheimer, and J.D. Abad, Science 340, 1235009 (2013).
(21) A.R. Ingraffea, M.T. Wells, R.L. Santoro, and S.B.C. Shonkoff, Proc. Natl. Acad. Sci. U.S.A. 111, 10955–10960 (2014).
(22) United States House of Representatives Committee on Energy and Commerce, Chemicals Used in Hydraulic Fracturing (2011).
(23) FracFocus Chemical Disclosure Registry, https://fracfocus.org (accessed April 1, 2014).
(24) D.C. DiGiulio, R.T. Wilkin, C. Miller, and G. Oberley, Investigation of Ground Water Contamination Near Pavillion, Wyoming (U.S. Environmental Protection Agency, Washington, D.C., 2011).
(25) Shimadzu Europa GmbH, Linear Retention Index Function (LRI) in GCMSsolution 2.4 (accessed January 23, 2013).
(26) B.E. Fontenot, L.R. Hunt, Z.L. Hildenbrand, D.D. Carlton Jr., H. Oka, J.L. Walton, D. Hopkins, A. Osorio, B. Bjorndal, Q.H. Hu, and K.A. Schug, Environ. Sci. Technol. 47, 10032–10040 (2013).
(27) Z.L. Hildenbrand, D.D. Carlton Jr., B.E. Fontenot, J.M. Meik, J.L. Walton, J.T. Taylor, J.B. Thacker, S. Korlie, C.P. Shelor, D. Henderson, A.F. Kadjo, C.E. Roelke, P.F. Hudak, T. Burton, H.S. Rifai, and K.A. Schug, Environ. Sci. Technol. 49, 8254–8262 (2015).
(28) T.D. Hayes and B.F. Severin, "Characterization of Flowback Waters from the Marcellus and Barnett," Gas Technology Institute (2012).
(29) T. Colborn, K. Schultz, L. Herrick, and C. Kwiatkowski, Hum. Ecol. Risk Assess. 20, 86–105 (2014).
(30) S.K. Ritter, C&EN 92, 31–33 (2014).
(31) Hach Company, Hydraulic Fracturing Water Analysis Handbook (2013; accessed July 29, 2014).
(32) P.R. Wright, P.B. McMahon, D.K. Mueller, and M.L. Clark, U.S. Geological Survey, Data Series 718 (2012).
(33) Hach Company, Barium, Turbidimetric Method 10251 (accessed July 29, 2014).
(34) Hach Company, Boron, Carmine Method 10252 (accessed July 29, 2014).
(35) Hach Company, Iron, Total Method 10249 (accessed July 29, 2014).
(36) Hach Company, Sulfate Method 10248 (accessed July 29, 2014).
(37) Hach Company, Sulfide, Methylene Blue Method 10254 (accessed July 29, 2014).
(38) Hach Company, TPH (Total Petroleum Hydrocarbons) Immunoassay Method 10050 (accessed July 29, 2014).
(39) A.L. George and G.F. White, Environ. Toxicol. Chem. 18, 2232–2236 (1999).
(40) PerkinElmer Inc., "Atomic Spectroscopy - A Guide to Selecting the Appropriate Technique and System" (2013; accessed September 10, 2013).
(41) D.D. Carlton Jr., B.E. Fontenot, Z.L. Hildenbrand, T.M. Davis, J.L. Walton, and K.A. Schug, Int. J. Environ. Sci. Technol., 1–10 (published online April 2015).
Doug D. Carlton Jr., and Kevin A. Schug are with the Department of Chemistry and Biochemistry and the Collaborative Laboratories for Environmental Analysis and Remediation at The University of Texas at Arlington in Arlington, Texas. Zacariah L. Hildenbrand is with the Collaborative Laboratories for Environmental Analysis and Remediation at The University of Texas at Arlington and Inform Environmental, LLC, in Dallas, Texas. Direct correspondence to:
|
'PETRAE' - Mollie's Diner (Soho House) with Ginkgo Projects
Petrae (Latin noun: the feminine plural of rock/boulder/shaped stone as used in building)
Cribbs Causeway is believed to be built over the traces of a Roman road connecting Gloucester to Sea Mills, which was one of the main routes to the crossing of the Severn Estuary, known during the Roman period as Portus Abonae. Taking the Roman history of the area as a starting point for my research, I explored the physical layering of history in our landscapes, erosion as a measure of time, artefacts, and everyday materials.
Petrae features 3 pieces, seemingly rising through the earth in the garden of Mollie’s diner, a contemporary destination surrounded by ring roads and new developments.
These extruded stone forms nod to the history of the site, and each piece features a different visual representation of erosion over time (cracked, chipped, and potholed), playing with the idea that these stones have endured the ever-changing landscape around them over the last 1500 years.
Materials include black grogged clay, Stoneycombe limestone from the Southwest, and slate and copper from reclamation yards in Bristol.
|
SMAD. An Organ of Student Opinion. 1935. Volume 6. Number 17.
Economic Situation
The economic position was well summarised by the statement that population was increasing as the square, production as the cube, and debt as the fourth power of the increment of time. Despite the manifest success of the modern productive system, the distributive-money-system was failing to enable nations to purchase the whole of their output. From this resulted the absurd, though universal, struggle to export and to resist imports, which was the basic cause of international friction. For the causes of war, formerly dynastic, were now essentially economic.
|
Politics and Human History
Division of the human family into two distinct political groups began some 12,000 years ago. Humans existed as members of small bands of nomadic hunter/gatherers. They lived on deer in the mountains in the summer and would go to the beach and live on fish and lobster in winter.
Some men spent their days tracking and killing animals to B-B-Q at night while they were drinking beer. This was the beginning of what is known as “The Conservative Movement”.
Other men who were weaker and less skilled at hunting learned to live off the conservatives by showing up for the nightly B-B-Q’s and doing the sewing, fetching and hair dressing. This was the beginning of the “Liberal Movement”. Some of these liberal men eventually evolved into women. The rest became known as ‘girley men’.
Some noteworthy liberal achievements include the domestication of cats, the trade union, the invention of group therapy and group hugs, and the concept of Democratic voting to decide how to divide the meat and beer that conservatives provided. Liberals also taught their cats how to vote and knock things off horizontal surfaces.
Over the years, conservatives came to be symbolized by the largest, most powerful land animal on earth, the elephant. Liberals are symbolized by the jackass.
Most social workers, college professors, personal injury attorneys, journalists, dreamers in Hollywood and group therapists are liberals. Liberals invented the designated hitter rule because it wasn’t “fair” to make the pitcher also bat.
Conservatives drink domestic beer. They eat red meat and still provide for their women. Conservatives are big-game hunters, rodeo cowboys, lumberjacks, construction workers, medical doctors, police officers, corporate executives, soldiers, athletes, and generally anyone who works productively outside government. Conservatives who own companies hire other conservatives who want to work for a living.
Liberals produce little or nothing. They like to “govern” the producers and decide what to do with the production. Liberals believe Europeans are more enlightened than Americans are. That is why most of the liberals remained in Europe when conservatives were coming to America. They crept in after the Wild West was tame and created a business of trying to get MORE for nothing.
Here ends today’s lesson in human history.
No attribution.
If you send or receive similar words of wisdom, please include the author's name when you repost or forward them by email.
|
Exercise 3
Maintaining Parallel Structure
Directions: In the exercise that follows, you will read sentences that contain blanks. To fill each blank, choose the option that maintains parallel structure in the sentence.
To keep track of your answers, print the accompanying handout. If you are unsure which choice to make, consult the rules.
Disclaimer: All prizes in this exercise are cyber, which means they have no physical reality and cannot be collected for use in the material world.
To avoid broken bones, stitches, tetanus shots, and emergency room visits, choose your answers with care! Now get a board!
Bangin'! Gnarly! Shreddin'!
Start here.
|
Monday, May 3, 2004
Daily Dose of Sanity
Thanks to the gentle urging of my Faithful Correspondent, I have added the link to Minnesota Public Radio’s Writer’s Almanac hosted by Garrison Keillor. This link provides the narrative to this daily radio program that acknowledges the birthdates of famous (and not-so-famous) writers, and concludes with a poem. It is an oasis of calm and literacy in a day of tumult and stress.
Today, by the way, would have been William Inge’s 90th birthday. You’ve heard of Inge, haven’t you? I understand there’s a festival or something somewhere that honors him…
And it’s also another famous writer’s birthdate:
It’s the birthday of Niccolo Machiavelli, born in Florence, Italy (1469). He was a prominent statesman, but in 1512 he was accused of conspiring against the government. Florence had just fallen into the hands of the Medicis, and Machiavelli was seen as a threat to their rule. He was tortured and imprisoned for three weeks, and then sent into exile. He went to live on his family farm and began writing a pamphlet to try to gain the favor of the Medici family. That pamphlet became his masterpiece, The Prince (1532), which is full of practical advice on how rulers can stay in power. Among other things, he advocated killing potential rebels, and said that it’s better to be feared than to be loved.
Machiavelli has never had a good reputation. Shakespeare referred to him as “Murderous Machiavel,” and others in the sixteenth century called him “Old Nick,” a nickname for Satan. In 1827, poet and philosopher Lord Macaulay wrote that he doubted “whether any name in literary history be so generally odious.” Twentieth-century philosopher Bertrand Russell called The Prince “a handbook for gangsters.” Some people say Machiavelli was a big influence on dictators like Hitler and Stalin. Today, the word “Machiavellian” has come to mean “marked by cunning, duplicity or bad faith.”
Machiavelli’s main point in The Prince is that the most important task for a ruler is to keep his country secure and peaceful, using whatever means possible. Sometimes, this means doing things that most people would consider immoral, but Machiavelli said that that’s just part of the job.
He was cynical about human nature: he argued that it was natural for most people to be selfish, and so a great ruler has to accept that he lives in an immoral world. He wrote, “A man who might want to make a show of goodness in all things necessarily comes to ruin among so many who are not good. Because of this it is necessary for a prince, wanting to maintain himself, to learn how to be able to be not good and to use this and not use it according to necessity.”
He also argued that most people value their property more than the lives of their friends and family, and so in some situations it’s okay for rulers to kill their citizens, but it’s almost never okay to take away their property. He wrote, “Men must be either pampered or crushed, because they can get revenge for small injuries, but not for grievous ones. So any injury a prince does a man should be of a kind where there is no fear of revenge.”
Despite Machiavelli’s hopes, The Prince didn’t win over the Medicis. A few years later, a new republic was established in Italy, but his name had already become so associated with evil and violence that he wasn’t able to get another government job for the rest of his life. He wrote two more books, and died in 1527.
Does Old Nick sound like anyone we know…?
|
Saving heritage – the sustainability argument
By Michael Kelly
Erskine College Main Block (1906), Island Bay, Wellington, under demolition, October 2018. Photo: Wikipedia.
Moral and emotional arguments for saving heritage can only go so far; there are many people who feel no particular attachment to the past and act accordingly. But what if there was a way to completely rethink how we manage old buildings that allows us to drastically cut CO2 emissions? After all, most people acknowledge the massive threat that climate change poses. The answer is pretty simple really and it could save most of our built heritage.
To help save the planet, we have to make a fundamental shift in our attitude to the materials already in our building stock. And that shift is to regard buildings and the materials in them as non-expendable – essentially reusable and recyclable. Construction debris makes up 50 per cent of all waste in New Zealand. In the United Kingdom it is as much as 63 per cent annually. Those are finite resources gone forever.
The building industry has a sky-high carbon footprint. The use of concrete is one of the biggest culprits, along with steel, but forming any new material generally carries a large carbon footprint. We extract finite resources out of the ground, use huge amounts of energy to turn them into building products and more energy transporting them and putting them together to form something new. Quite simply, erecting new buildings is catastrophically bad for the environment. (Building roads produces huge CO2 emissions, but that’s another story.)
So, the first principle of sustainability should be: do not demolish buildings. Refurbishing should be the default position for any building no longer needed for its original purpose. Refurbishment is not a carbon-free option, of course, but it is far better than demolishing and starting again. Not every building can be refurbished for the same, a similar, or even a compatible use, and the further you move from the original use, the greater the loss of fabric, including, of course, heritage fabric. Building fabric gets tired or worn out, so some replacement will be necessary.
Saving every heritage building is a laudable goal, but it may not be feasible to save every old building. The next step down is to deconstruct a building but then reuse most of its material in a new building. The worst thing that can happen to a building is to be turned into demolition rubble. In New Zealand, a significant amount of construction material is recycled (and turned into a less valuable product) but what is being proposed internationally goes far beyond that. A movement has begun in Europe to institute ‘materials passports’ for new buildings so that every component of a new building gets a digital record and can be identified and re-used, mostly for the same purpose, at a later date in a new building. Think of it as if the components of a building are on loan for a particular purpose and then when they have done their bit, moved on to another building. It’s not quite saving heritage, but, again, at least it’s better than demolition.
The challenge is to convince people that the materials bound up in an existing building have a value; that keeping them will significantly reduce building costs and will help the environment. A campaign in the UK called RetroFirst, run by the Architects’ Journal, champions refurbishment over demolition and rebuild. One of its targets is a peculiar anomaly in VAT (the equivalent of GST) that taxes refurbishment of a building at 20 per cent but exempts new building.
In a country like New Zealand where the relentless pursuit of the bright, shiny and new still holds sway, it would take a major cultural shift to achieve such an approach, but it’s essential if we are to survive on Planet Earth.
Michael Kelly is President of PHANZA, the Professional Historians’ Association of New Zealand/Aotearoa. Republished with permission from Phanzine vol. 26 no. 1 (May 2020).
|
Blood pressure is the force (pressure) that the blood exerts on the vessel wall from the inside, and it is one of the most variable circulatory measurements.
Blood pressure is highest when the heart is fully contracted. When the heart relaxes, no more blood is pumped into the arteries and the blood pressure drops to its lowest value.
High blood pressure (hypertension) occurs when the pressure of your blood on the walls of your blood vessels rises. High blood pressure damages the small blood vessels in your kidneys and prevents the filtering process from functioning properly. The causes of high blood pressure are very often unknown. However, it is usually associated with your general health, your lifestyle, and your eating habits.
Our daily lives involve many factors that promote the development of high blood pressure. The risk factors can be roughly divided into influenceable and uninfluenceable factors.
Uninfluenceable factors are age, gender and genetic predisposition.
Influenceable factors include smoking, stress, excessive performance demands, obesity, alcohol consumption, diabetes, and a high-fat diet. A salty diet can also sustain high blood pressure; processed (ready-made) food in particular is often very salty. Inactivity plays a major role as well, as do fat metabolism and protein excretion.
Since high blood pressure often goes undetected and does not cause any discomfort, kidney damage caused by high blood pressure is a widespread problem today. An early sign of this can be the prolonged presence of protein excretion (microalbuminuria) in urine. This is where targeted medical intervention comes into play, in order to prevent something more serious.
Related content
There are many different reasons for renal failure. Apart from high blood pressure, one of the most common ones is diabetes.
A healthy diet plays an important role at all stages of chronic kidney disease.
Treatment types
Our focus is haemodialysis, but our portfolio of treatments ranges from preventive care to transplantation services.
|
Harvard experts list 8 steps to help support a healthy immune system
Key highlights
• Vaccines are absolutely non-negotiable unless your doctor has ruled it out in your specific case.
• If you want to have a healthy and fighting immune system, you need to work on building it.
• Who better to advise us than Harvard Health experts on what steps to take to ensure a better immune system?
With the third wave of COVID-19 already here, people are relying on methods they learned in previous waves to tackle the viral scourge. The availability and administration of vaccines has changed the course of events this time around. But vaccines did not end the pandemic; they only blunted the disease to a modest but decisive extent.
So, apart from medical measures, this time around people are betting a lot on their immunity to fight off a possible infection. Reports differ; some say that the Omicron variant causes milder symptoms compared to earlier variants like Delta.
The bottom line, however, is that it is one's immunity that lets one sail through. Take a look at these 8 steps that, according to experts at Harvard Health (https://www.hsph.harvard.edu/nutritionsource/nutrition-and-immunity/), we should follow to keep our immune systems strong and in top fighting shape.
8 steps to stronger immunity:
1. Eating a balanced diet: Add a platter of rainbow-colored fruits, vegetables, lean proteins, and whole grains, plus plenty of water. A Mediterranean diet is one option that includes these types of foods. You don't have to look for exotic varieties; the regular tropical fruits available in India contain powerful antioxidants. Eat them sliced instead of juiced to keep the fiber, and keep them fresh.
2. Ask your doctor about nutritional supplements: A lack of individual nutrients can alter the body’s immune response. Animal studies have shown that deficiencies in zinc, selenium, iron, copper, folic acid and vitamins A, B6, C, D and E can alter the immune response. These nutrients help the immune system in a number of ways: they act as an antioxidant to protect healthy cells, support the growth and activity of immune cells, and produce antibodies. Lack of nutrients from food puts us at greater risk of bacterial, viral and other infections. If a balanced diet is not readily possible, a multivitamin that contains the RDA for multiple nutrients can be used.
3. Do not smoke (or stop smoking if you do): Research (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5352117/) shows that cigarette smoking is linked to numerous diseases and poses a serious challenge to the current health system around the world. Smoking affects both innate and acquired immunity and plays a dual role in regulating immunity, either worsening pathogenic immune responses or weakening immune responses. Why smoke or keep the habit if doing so puts you in direct danger?
4. Moderate your alcohol consumption: If you drink, reduce your consumption. Research (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4590612/) has shown that there is a link between excessive alcohol consumption and negative immune-related health effects such as susceptibility to pneumonia. In recent decades, this association has been expanded to include a greater likelihood of acute respiratory distress syndrome (ARDS), sepsis, alcoholic liver disease (ALD), and certain cancers; a higher incidence of postoperative complications; and slower and less complete recovery from infection and physical trauma, including poor wound healing.
5. Perform moderate regular exercise: Acute respiratory infections (ARIs) are caused by respiratory viruses and bacteria and are among the most common contagious diseases in humans. Despite the lack of precise data on how physical activity improves the immune response to the new coronavirus, there is evidence of lower ARI incidence, shorter duration and milder symptoms, and a lower risk of death from infectious respiratory diseases among people who exercise at appropriately high levels. In addition, various studies suggest that regular exercise is directly related to lower mortality from pneumonia and influenza, and to improvements in cardiorespiratory function, vaccination response, and glucose, lipid, and insulin metabolism.
6. Get enough sleep: Aim for 7-9 hours of sleep each night. Try to stick to a sleep schedule, waking up and going to bed at around the same time each day. Our internal clock, the circadian rhythm, regulates feelings of sleepiness and alertness. Working from home and the continuous use of digital screens have disrupted our body clocks and thrown them out of rhythm. The Harvard report says that keeping a consistent sleep schedule can help balance the circadian rhythm so we can sleep more deeply and restfully.
7. Strive to cope with stress: According to SimplyPsychology (https://www.simplypsychology.org/stress-immune.html), the immune system's ability to fight off antigens decreases when we are stressed, which is why we are more prone to infection. Stress hormones such as corticosteroids can suppress the effectiveness of the immune system (for example, by lowering the number of lymphocytes) and can also drive us toward unhealthy behaviors such as drinking or smoking. We are told to learn to deal with stress, but honestly, that is easier said than done. Even so, Harvard experts advise finding healthy strategies that work well for you and your lifestyle, be it exercise, meditation, a particular hobby, or talking to a trusted friend. Another tip is regular, conscious breathing throughout the day and whenever feelings of stress arise. Pranayama, the yogic practice of breathing techniques, can be learned from certified experts in online videos. It doesn't have to take long; even a few mindful breaths can help.
8. Wash your hands throughout the day: Since the COVID-19 pandemic spread around the world, most of us have consciously adopted healthier habits. Disinfect your hands with an alcohol-based sanitizer if soap and water are not available. Otherwise, it is best to wash your hands several times a day, especially after coming in from outside, before and after preparing food and eating, after using the toilet, and after coughing or blowing your nose.
Disclaimer: The tips and suggestions mentioned in this article are for general information only and should not be construed as professional medical advice. Always consult your doctor or nutritionist before starting any fitness program or changing your diet.
|
Persistence and reversibility – long-term design considerations for wild animal welfare
Crossposted from the Wild Animal Initiative blog.
When designing interventions to improve the welfare of wild animals, we want to maximize the expected benefit produced given the cost.[1] A major factor in the cost-effectiveness of interventions is the persistence of the effects. The longer they last, the higher the ratio of benefit[2] to cost, all else being equal. However, due to widespread uncertainty concerning the effects of our actions on wild animal welfare, it is possible that an intervention will turn out to do more harm than good. Reversibility can contribute to cost-effectiveness by allowing bad outcomes to be reversed, limiting the damage of an intervention gone wrong. In short, we want to optimize persistence given a good outcome while still preserving option value in case of bad outcomes. However, there is a tension between persistence and reversibility, since most factors that contribute to high reversibility will also lead to low persistence, and vice versa (Table 1). This report aims to explore the importance of persistence and reversibility to wild animal welfare interventions, how to negotiate trade-offs between them, and ways to sidestep the trade-off altogether.
My main conclusions are:
• All else equal, the ideal intervention would be both persistent in the face of natural processes and reversible.
• In practice, interventions that are both persistent in the face of natural processes and reversible seem to be rare. Designing more such interventions would be very useful.
• Although the technology is still in development, gene drives might turn out to be unusually persistent in the face of natural processes while remaining fairly reversible, making them promising tools for improving wild animal welfare. Future work should explore this technology further and try to identify responsible policies for its use.
• The feasibility and long-term viability of carrying out interventions to improve wild animal welfare is strongly influenced by public perception. This concern implies that we should avoid hard-to-reverse interventions, even if that means choosing less persistent interventions.
• In order to preserve option value, we should probably prevent hard-to-reverse changes with uncertain utility that are likely highly persistent, such as respiratory bypass in plants.
• The lower the predicted capabilities of future humans, the greater the benefit of selecting interventions that are persistent in the face of natural processes, because such interventions would be persistent in a world without much human involvement.
• Irreversibility is a strong reason to avoid species extinctions. However, as local and global extinctions become increasingly common, it is critical to understand the welfare consequences of these events. As those consequences will likely depend on the dynamics of how niches are filled, future research should consider the speed of replacement and the similarity of replaced species.
Definition of reversibility
Strictly speaking, no action can truly be undone or fully reversed. Even after a thorough effort to undo an action, the resulting world will inevitably be at least slightly different from the counterfactual world where the original action never occurred. Thus, rather than expecting complete reversal, a more reasonable objective might be to minimize the differences between the world where the action was reversed and the world where the action was never taken.
While the state of the world that maximizes utility might not necessarily correspond to either the reversal or no-reversal world (see Appendix A), reversibility can serve as a good approximation for how to incorporate option value into the prioritization of wild animal welfare interventions (cf. Schubert & Garfinkel 2017). For the purposes of this report, I define it as:
Reversibility: The fraction of the effect of an intervention that should be reversed (r) so as to maximize the expected utility of a reversal (see equation 1), given a bad outcome.
Even if an intervention with poor results could be reversed to a high degree with sufficient effort, the optimal strategy may not be to attempt the full reversal. This is because there might be other valuable actions that could be taken using the time, money, and other resources spent on reversal.[3] To understand why, consider the following example:
Box 1: Damming a river example
Imagine we dam a river in order to create more wetland habitat. Later, we realize that this was a bad idea, and that things were better before we flooded the area, so we attempt to reverse the intervention. Removing the dam is easy, and 95% of the flooded area drains right away. But there are still small ponds in 5% of the area we wanted to drain, because there are low-lying areas that do not drain back into the river. To continue to reverse the flooding, we’d have to bring in heavy machinery to dig canals that transport the water away. We might then decide that such an effort would not be worth it, if the financial cost of digging canals is greater than the utility of draining the last 5% of the land. In this case, we would say the point of optimal reversal is 95%, so the original river dam intervention turned out to be 95% reversible.
We can formalize the foregoing intuitions as follows: to maximize expected utility, the fraction “r” of some intervention A (e.g. damming a river) that one should try to reverse when a bad outcome occurs is given by the utility gained by reversing the intervention, minus the utility lost in the form of an opportunity cost. This cost is caused by not investing in some intervention B, where intervention B is the possible action with the next highest expected utility (compared to further reversing A).[4] The utility-maximizing fraction r can be described by the following equation:
Box 2: Equation 1, defining optimal reversal fraction
EU_reversal = max_r [ r × V_(r=1) − C_r × CE_intervention_B ]
EU_reversal = the maximum expected utility produced by the reversal
r = the fraction of a total reversal that is undertaken
V_(r=1) = the total amount of utility that would be produced if a full reversal were undertaken
C_r = the cost of reversing the intervention by fraction r
CE_intervention_B = the cost-effectiveness of the next best action (intervention B)
As the damming example illustrates, there are probably diminishing returns to increasing r due to the increasing costs of reversal (C_r ; see Figure 1), such that it might become increasingly costly to make marginal improvements on r as r increases.[5] Therefore, max_r is included in order to find the value of r that maximizes expected utility, where an increase in r from that point yields a lower expected utility. Equation 1 represents the optimal balance between when to keep reversing the effect of an intervention, and when to cut your losses and not invest in reversing the intervention any more, given that continuing the reversal is more costly than other things that you could use your resources for.[6] The definition of reversibility is somewhat similar to social-ecological resilience (e.g. Adger et al. 2005), although reversibility here relates to both human and (wild) animal well-being. For information on how the definition of reversibility relates to the value of information and the cost-effectiveness of an intervention, see Appendix A.
Intuitively, comparing financial costs to wild animal welfare benefits might seem odd—we might, for instance, wonder if it is even possible to assess the price of animal wellbeing. For decision-making purposes, however, we can ask a simpler question instead: Is there a way to improve welfare more with the same resources? Stated differently, we want to know the opportunity cost of spending resources on a particular project.
Figure 1. Hypothesized relationship between cost of reversal and degree of reversal (left), and the resulting function describing expected utility (right). Expected utility combines the costs and benefits of reversal into one single metric. The optimal degree of reversal is thus the r for which the expected utility is maximized.
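To make Equation 1 concrete, here is a toy numerical search (not from the original analysis; the cost curve and constants are invented) for the utility-maximizing reversal fraction:

```python
# Grid search for the r that maximizes EU(r) = r*V_full - C_r*CE_B,
# assuming a convex cost curve that grows steeply near full reversal.
import numpy as np

V_full = 100.0  # utility of a complete reversal (arbitrary units)
CE_B = 1.0      # cost-effectiveness of the next-best action, intervention B

def cost(r):
    # Hypothetical diminishing-returns cost: cheap at first, steep near r = 1
    return 5.0 * r / (1.001 - r)

r_grid = np.linspace(0.0, 0.999, 1000)
eu = r_grid * V_full - cost(r_grid) * CE_B
r_star = r_grid[np.argmax(eu)]
print(f"Optimal reversal fraction: {r_star:.2f} (EU = {eu.max():.1f})")  # ~0.78
```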
Costs and benefits of reversibility
Future knowledge
Our empirical understanding of the world generally improves over time. For example, empirical information about biological systems has fairly reliably increased over time in recent years,[7] as has our ability to process large amounts of data.[8] In the future, we will probably know more about how to effectively improve wild animal welfare than we do now. Reversibility allows us to take advantage of new knowledge by making it easier to change course if we discover that an intervention is actually suboptimal.
What these facts indicate is that the value of information about the effects of wild animal welfare interventions is likely high.[9] Value of information is generally higher the more uncertain the utility of the action is, the cheaper new information is to gather, and the more time we have to reap the benefits of acting on new information. These conditions seem to hold for wild animal welfare interventions. Right now, we know very little about how to reliably improve wild animal welfare, so the effects of our actions will often be uncertain. But the fact that wild animal welfare has received so little attention also means that there are probably high marginal returns to targeted research on the subject, making new information relatively cheap. Even in the absence of a research program dedicated to welfare biology, an increased understanding of ecology in general will be very useful for wild animal welfare interventions. The positive or negative consequences of interventions could last for decades, so the impact of reversing a suboptimal intervention could easily justify the cost of information gathering.
Future values
Just as empirical knowledge can improve, perhaps moral reasoning can improve. Looking back in time, humanity’s moral circle has gradually expanded to include ever larger groups of individuals (Singer 1981; Roser 2019). Just as our ancestors overlooked moral atrocities like racism and sexism, perhaps present humans also have moral blind spots that future humans will find. The more likely we think this is, the more valuable reversibility becomes, because future generations will have a better understanding of moral philosophy (MacAskill MS-a).
However, rather than achieving some idealized version of our current values, it is also possible that future humans stray away from our values in ways that we would not endorse (MacAskill MS-b). If this happens, reversibility is not a strength but rather a weakness, because any progress we do make is easier for antagonistic decision-makers to undo. For example, if future generations stop valuing the well-being of humans of a certain ethnicity, they might reverse earlier actions such as the Universal Declaration of Human Rights. This would constitute a value loss from the point of view of the morals of current humans.
I have so far described human values as a unified entity, but most human actions that affect wild animals are motivated by disparate and ever-changing goals, almost none of which are aimed at improving wild animal welfare. If this continues to be the case, it is possible that future actors will, intentionally or unintentionally, reverse the effects of interventions even if they are net positive from the perspective of our current values.
Public acceptance
All else equal, reversible interventions will probably be easier for the public to get behind, because non-reversible interventions might be perceived as too risky. This could dominate other considerations: even when a hard-to-reverse intervention is preferable to a reversible intervention, a reversible intervention is probably preferable to no intervention at all. Consequently, I adopt the tentative working hypothesis that reversible interventions are preferable to non-reversible interventions, especially in the wild animal welfare space, where uncertainty abounds.
Definition of persistence
I define persistence as:
Persistence: The expected duration of the counterfactual effects of an action.
Here, the duration of an intervention refers to its mean lifetime. If an intervention's effects decay exponentially over time, duration is proportional to the half-life of the decay function. The word "counterfactual" in the definition is also of key importance, as the persistence of an action is only measured relative to what would otherwise have happened. For example, if we plant a forest in a field, but a forest would have established itself in the same field without our help 25 years later, then counterfactually we can only claim credit for the first 25 years of the intervention's effects. Furthermore, interventions are persistent only when they are independently so.[10] Refilling a bird feeder once a month would be classified as a low-persistence intervention: even if we refill the feeder for many years, the effects of each discrete filling event last only for a short period, and the intervention would not persist without upkeep.
Using the terminology of dynamical systems theory, highly persistent effects can be described as attractor states[11] (Brin & Stuck 2002), which are possible to enter but difficult to leave (cf. absorbing states in Greenwell et al. 2003, and path dependence in Mahoney & Schensul 2006). Highly persistent states generally represent deep or wide basins of attraction.[12] To understand the relationship between persistence and the expected utility of our actions, consider a curve representing the utility of an intervention over time. Whether or not the expected utility of an action is positive, the distribution of expected utility per unit time is probably best represented as an exponential decay function,[13] representing the survival of intervention effects over time. The overall utility of the intervention is represented by the integral over time, and depends on both the effect size per unit time (i.e. the utility of the intervention at each time point) and the width of the distribution (i.e. the duration of the intervention's effects). Persistence influences this latter factor: the more persistent an intervention, the wider the distribution.
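As a toy numerical check of this integral picture (all numbers assumed for illustration), the total utility of an exponentially decaying effect is simply the initial effect size multiplied by the mean lifetime, so doubling persistence doubles total utility:

```python
# Toy illustration: utility per unit time decays exponentially with mean
# lifetime tau, and the integral over time equals u0 * tau. The values of
# u0 and tau are arbitrary assumptions.
import numpy as np

u0 = 2.0    # assumed utility produced per year immediately after acting
tau = 50.0  # assumed mean lifetime (persistence) of the effects, in years

t = np.linspace(0, 10 * tau, 100_000)
total_utility = np.trapz(u0 * np.exp(-t / tau), t)  # numerical integral
print(f"{total_utility:.1f}")  # ~= u0 * tau = 100.0; doubling tau doubles it
```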
As may already be apparent, persistence and reversibility are closely related. Reversibility is one factor that determines persistence, as an intervention that is reversed would be less persistent than one which is not. Critically, both reversibility and persistence are relative to human capabilities. Some interventions will be reversible only if certain technological advances are achieved. Similarly, some interventions will be persistent only if certain abilities are not achieved. However, note that just because future humans can affect the persistence of our actions does not mean that they will.[14]
Balancing reversibility and persistence
What factors determine the preferred level of reversibility and persistence?
Persistence in the face of natural processes
As discussed in the section Costs and benefits of reversibility, reversible interventions seem preferable, as they are more politically feasible and they preserve option value. Furthermore, if future humans are unable or unwilling to reverse the effect of an intervention, a persistent intervention is beneficial when the intervention has positive effects in expectation, because the expected benefits will last longer.[15] Thus, we want to find interventions that are reversible when humans are moderately to highly capable, but also persistent in situations where humans are not interfering with the interventions, for whatever reason (i.e. we should prefer the solid intervention in Figure 2). I will refer to such interventions as being persistent in the face of natural processes. If human capabilities decreased or became virtually non-existent, such interventions would have a comparatively high persistence. Thus, the importance of long-term persistence depends on the future prospects of humanity. Relatedly, it seems that if the likelihood of reduced future human capabilities is high, the utility produced by implementing reversible interventions becomes comparatively low.[16]
One way in which human capabilities could be drastically reduced is if we go extinct. Consequently, knowing roughly how likely extinction is will probably be useful when assessing the value of reversibility and long-term persistence in the face of natural processes. According to researchers studying the risk of human extinction, the probability that humans will go extinct in the next century is quite high,[17] perhaps as high as 19% (Sandberg & Bostrom 2008). Even if we think that this latter estimate is an overestimation by an order of magnitude (i.e. if we think that the real estimate is closer to ~2% risk per century), it still makes sense to account for this possibility when making decisions on interventions designed to help wild animals.
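A back-of-the-envelope calculation (assuming, purely for illustration, a constant 2% extinction risk per century) shows how quickly the probability of continued human oversight erodes, and hence why persistence in the face of natural processes carries real weight:

```python
# Sketch: with an assumed constant per-century extinction risk, the chance
# that capable humans remain around to reverse an intervention decays
# geometrically over time.
per_century_risk = 0.02  # assumed; an order of magnitude below 19%

for centuries in (1, 5, 10, 20):
    p_humans_present = (1 - per_century_risk) ** centuries
    print(f"{centuries:>2} centuries: P(capable humans remain) = "
          f"{p_humans_present:.2f}")
# Even at 2% risk per century, there is a ~33% chance that no one is left
# to manage or reverse the intervention after 20 centuries.
```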
Figure 2. Two hypothesized relationships between expected persistence of interventions and the capabilities of future actors. Each line represents a hypothetical intervention that is either persistent in the face of natural processes (solid teal line) or not (dashed mustard line). Both lines represent interventions with high reversibility when future capabilities are comparable to current human capabilities (a non-reversible intervention would be represented by a straight horizontal line regardless of persistence). That the two lines approach each other with increasing capabilities is mainly based on the assumption that high future capabilities will lead to more ambitious rearrangements of the natural world, largely without wild animal welfare in mind, causing average persistence to be low.
Scale of intervention
The greater the number and degree of the changes caused by an intervention, the more likely it becomes that the results of those changes will be persistent and hard to reverse. These considerations about larger-scale projects are one reason to run small-scale pilot tests when possible. The scale of an intervention encompasses the sum total of its effects, including its geographic scope, the number of animals affected, the size of the impact on each animal, etc. For a more thorough discussion of what is meant by scale, see Appendix A (Granularity of world-comparisons).
One reason large-scale interventions might be more persistent is that they are more likely to pass thresholds where negative feedback becomes positive feedback, making reversal considerably harder. Whether a threshold is crossed depends on the magnitude of the change in the relevant parameter value(s): push a parameter above or below a critical value and the system moves towards a new state, which can be more or less persistent. Because large-scale interventions induce larger shifts in parameter values than small-scale interventions of the same type, the likelihood of crossing any given threshold increases with the scale of an intervention.
Accounting for political feasibility
Political feasibility is one of the most important factors to consider when balancing the persistence and reversibility of wild animal welfare interventions.[18] Interventions that are persistent in the face of natural processes are likely to be met with greater public opposition, although making sure that such interventions are also highly reversible might mitigate some of this opposition. Even though humans have long been altering ecosystems on a massive scale to meet our needs (e.g. Barnosky et al. 2011; Steffen et al. 2011; The World Bank 2016), explicitly stewarding nature for specific purposes might be perceived negatively by the public (e.g. as with GMOs in Europe: Eurobarometer 2010). In other words, certain interventions might superficially seem very cost-effective, but only in the absence of political and coordination costs. If public opposition to these interventions is large enough, the cost-effectiveness of working towards implementing them will be low.
Based on the above discussion, we should prefer interventions that are reversible given current human capabilities, fairly persistent in the face of natural processes, and initially restricted in scale. In the following sections I will apply these ideas and highlight good and bad concrete examples.
An example of persistence in the face of natural processes: gene drives
We should generally expect interventions with high reversibility and high persistence in the face of natural processes to be unusual, since the factors that make interventions persistent in the face of natural processes are often similar to the factors that make them persistent despite deliberate human action (i.e. low reversibility).
After further development, gene drive technology has the potential to become an exception to this rule. It is important to note, however, that any immediate use of gene drives in the wild would likely be highly negligent. Safety measures have to be perfected and extensive laboratory testing performed (Min et al. 2017; Dhole et al. 2018) before such an implementation.
Gene-drive-induced phenotypic changes
A gene drive is a process that occurs when a specific genetic element consistently increases in frequency in a population, even if it does not confer a fitness benefit to the individuals that carry it. Gene drives have existed naturally for millions of years,[19] but scientists have recently discovered ways of creating engineered gene drives using the tools of CRISPR/Cas9 (Esvelt et al. 2014).[20] CRISPR/Cas9 is a highly precise gene editing tool, originally discovered in prokaryotes, where it serves a function in acquired immunity (Jinek et al. 2012). CRISPR/Cas9 can be constructed to constitute a gene drive, spreading almost any desired genetic change. Gene drives increase the probability that the drive complex is passed on to each offspring from the usual 50% (so-called Mendelian inheritance) to almost 100%.
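The difference between Mendelian and super-Mendelian transmission can be illustrated with a deterministic toy model of allele frequencies under random mating, ignoring fitness costs. The model and all numbers below are simplifying assumptions, not results from the gene drive literature:

```python
# Toy model (random mating, no fitness costs -- both simplifying
# assumptions). Heterozygotes pass the drive allele on with probability d:
# d = 0.5 is ordinary Mendelian inheritance; d ~ 1.0 approximates a
# CRISPR gene drive.
def next_freq(p, d):
    # Fraction of drive alleles in the next generation's gamete pool:
    # drive homozygotes (p^2) always transmit it, heterozygotes (2p(1-p))
    # transmit it with probability d.
    return p**2 + 2 * d * p * (1 - p)

for d in (0.5, 0.95):
    p = 0.01  # assumed initial release frequency: 1% of alleles
    for generation in range(25):
        p = next_freq(p, d)
    print(f"d = {d}: allele frequency after 25 generations = {p:.3f}")
# Mendelian inheritance (d = 0.5) leaves the allele at 1%; the drive
# (d = 0.95) pushes it essentially to fixation.
```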
Gene drives are probably highly persistent, because they enable a gene to be passed on until all members of a species have it. It is possible that over time, natural selection would "undo" the effects of the gene drive, as presumably the original phenotype was better for evolutionary fitness. However, even with unfavorable fitness effects, a gene drive complex could last for at least a few hundred generations (Noble et al. 2017). The type and scale of the gene drive are relevant to persistence. For instance, deleting a whole gene will probably constitute a more persistent modification than editing its sequence. Such a change will likely be less affected by mutation and selection, because mutation and selection can bring the previous function back only if the gene was not deleted. Furthermore, if there are few homologous genes that could be co-opted to perform the function of the deleted gene, it could take a long time to regain the function that the deleted gene provided. Lastly, modifications that become fixed[21] for the whole species are also likely to be more persistent, as there is less variation in the population for natural selection to act upon.
Genetic modifications to wild animals using gene drives could have long-lasting effects on wild animal welfare[22] (Johannsen 2017). If reliable and safe gene drives are developed, it will likely be possible to make many previously proposed interventions to improve wild animal welfare more persistent in the face of natural processes. For example, gene drives could help reduce parasite loads on wild animals (Ray 2017) or nonlethally prevent overpopulation by reducing fertility (Brennan 2018).
Risks posed by gene drives
Depending on how they are used, gene edits might be detrimental to the long-term viability of a species. Artificial alterations of a species' traits will push the distribution of trait values away from their current fitness optimum (as there is no reason to edit genes if no functionally relevant trait is altered). Furthermore, a population might go extinct if its size is suppressed below a certain non-zero threshold at which the population growth rate becomes negative (a so-called Allee effect), making extinction the alternative stable state (Dennis 1989; Beisner et al. 2003). The effect of reduced fitness and an accompanying drop in population size could thus be an increase in the extinction rate of gene-edited species.
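A minimal simulation sketch of a strong Allee effect (all parameter values assumed for illustration) shows how suppressing a population below the critical density leads to extinction:

```python
# Sketch of a strong Allee effect (cf. Dennis 1989): below a critical
# density A, the growth rate is negative and the population slides to
# extinction, its alternative stable state. Parameters are assumed.
def step(N, r=0.1, A=100.0, K=1000.0):
    # Strong-Allee logistic growth: negative growth for 0 < N < A.
    return max(0.0, N + r * N * (N / A - 1) * (1 - N / K))

for N0 in (120.0, 80.0):  # just above / just below the threshold A
    N = N0
    for _ in range(500):
        N = step(N)
    print(f"N0 = {N0:>5}: population after 500 steps ~ {N:.0f}")
# N0 = 120 recovers toward the carrying capacity K; N0 = 80 goes extinct,
# which is why suppression below A can finish a population off.
```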
Extinction could also be intentional. Screwworms were eradicated in North America (Vargas-Terán et al. 2005), and some people have proposed eradicating malaria-carrying mosquitoes (Matthews 2018). Eradicating a species would have high persistence in the face of natural processes (see Extinctions below).[23] It is currently impossible to bring species back from extinction, and reversing the welfare effects by other means would probably be very hard. Thus, to reduce the probability of unintentionally causing a species to go extinct, the first field trials of gene drives (after rigorous laboratory testing) should ideally be conducted on isolated (Webber et al. 2015) and non-threatened populations, since larger populations are generally less vulnerable to extinction (O’Grady et al. 2004; Fagan & Holmes 2006).
Reversibility of gene drives
Despite being persistent in the face of natural processes, gene drives might be highly reversible[24] by humans (Oye et al. 2014; Min et al. 2017). Although none are currently ready for use in the field, there are several gene drive reversal methods currently under development (Min et al. 2017; Noble et al. 2019) that, in combination, could allow for the suppression of the spread of a gene drive (although issues still remain; see Girardin et al. 2019).
One potential method would be to release a second gene drive that targets the first, reversing whatever genetic change had previously been implemented. However, this approach would still leave traces of the second gene drive in the population, because the CRISPR/Cas9 machinery would remain. Although this would likely have little effect on the ecosystem or the welfare of the animals, a way of removing all traces of the second gene drive is also under development (Min et al. 2017). If this alternative method is successfully developed, it will be significantly more complex than current gene drives. It would require the release of different types of genetically modified organisms at different stages, using underdominance[25] and so-called daisy drives[26] to prevent fixation of the gene drive elements (Min et al. 2017). Furthermore, to remove all traces of gene editing, the population(s) could be subjected to a genetic bottleneck. In such a scenario, the offspring of the (possibly few) remaining wild-type individuals would constitute the future population (Min et al. 2017), and this population would thus experience a loss[27] of the genetic diversity present in the individuals who carried the gene drive. However, new genetic diversity would be regained given enough time, which is the type of reversal that we care about (see Appendix A for a discussion about the choice of metric for reversibility).
Geographically localized gene drives
Currently available gene drives will spread continuously through the population until the gene reaches fixation. Through migration, the gene could spread to all non-isolated populations of the same species. Developing localized gene drives would help avoid these large-scale effects. This would maintain local sovereignty by giving communities the ability to opt out of the use of gene drives when they are deployed nearby. Currently proposed methods for localizing gene drives are far from implementation-ready. They seem either prohibitively expensive or too likely to spread to neighboring populations (Esvelt et al. 2017; Dhole et al. 2018), although there is no reason that such hurdles could not be overcome with future innovations.
The underdominance-coupled daisy drives mentioned earlier (Noble et al. 2019; Min et al. 2017) might be another way to reduce the geographic spread of gene drives. The daisy drive system would limit the number of generations during which the gene drive can spread by breaking down the self-replicating machinery into separate units that are dependent on each other (Noble et al. 2019). Crucially, the primary unit is introduced as a normal gene, without the ability to spread throughout the population that the other elements have (Noble et al. 2019). Since this component segregates naturally in the population, and is necessary for driving the next component in the daisy chain, it should eventually be purged from the population by natural selection. The same then happens to every element of the daisy drive that has not reached fixation: when the element that drives it disappears, it is weeded out by natural selection as well. This process ensures that a properly constructed daisy drive will have reduced persistence, potentially allowing future researchers to implement small field trials after sufficient laboratory testing. These developments are especially important given the negative public perception of genetic modification; further work on controlling the spread of gene drives might mitigate some of this anticipated opposition.
Avoiding irreversible effects
The avoidance of irreversible actions can be considered a type of highly reversible intervention. The parallel is imperfect, because the “intervention” here is the deliberate avoidance or delay of another intervention. But the benefits are similar. If the action can be undertaken just as easily in the future, then avoidance can easily be “reversed” by taking the action at a later date.
In other words, delaying irreversible actions preserves option value. If irreversible negative effects are possible, then accounting for option value can flip the sign of the expected utility of an action. In general, avoiding entering persistent states that are hard to exit seems like a good heuristic (Schubert & Garfinkel 2017). This will be a more reliable guideline in cases with high uncertainty about the action’s effects and low costs to delaying the action.
Below, I describe three actions with low reversibility, high persistence, and uncertain utility. These are cases where we might prefer avoiding the action in order to avoid highly persistent negative effects.
Extinctions
The local or global extinction of a species[28] seems to be particularly persistent and hard to reverse, even if we are looking only at the effects on wild animal welfare. However, it is possible that closely related species will fill the functions of the extinct species fairly quickly (Oliver et al. 2015), counteracting the effects of extinction and potentially lowering the persistence of the effects on animal welfare. For example, the extinction of a parasite might temporarily improve the welfare of its host species, but the welfare effects would be counteracted if a new parasite filled the niche left behind.
Below I discuss the factors that influence the likelihood of such replacements, and to what extent the replacements can be thought to nullify the effects of extinctions. I mainly focus on local or global extinction of multicellular organisms rather than pathogens and other microorganisms, although some concepts and ideas will certainly be transferable.
Functional redundancy in ecosystems and niche expansion
The competitive exclusion principle prohibits the stable coexistence of two species with exactly the same ecological niche, because one would outcompete the other (as described by Lotka-Volterra models of competition: Gotelli 2001, p. 112; Cushing et al. 2004). In reality, there are almost always slight differences in niches, even for highly similar species. In general, however, there seems to be functional redundancy in ecosystems, where the function of an extinct species is often taken over by some other species (Oliver et al. 2015), although the probability of functional replacement seems to depend on biodiversity and functional homogeneity, among other things (Fonseca & Ganade 2001; Solé et al. 2002).
The probability of such replacements is also related to the concepts of fundamental and realized niche. A realized niche is the niche that is currently occupied by the organism. In contrast, the fundamental niche is the niche that would be theoretically inhabitable in terms of abiotic factors if the interactions with other species were different,[29] for example, under less competition with other species for that niche space (Hutchinson 1957; Soberón & Peterson 2005; Holt 2009). Species distributions are often constrained by interspecific interactions (Ricklefs 2010),[30] which might lead to colonization of new niches if interspecific competition is relaxed.
It is likely the case that vacant niche spaces could be occupied by many different species. If an extinct species fed on many different types of plants, for instance, these plants might in the future be consumed by several different species. Even if the niche replacement by competitors is imperfect after an extinction event, selection would act on the species that had partially replaced the extinct species, ultimately filling the niche completely (cf. adaptive radiations: Stroud & Losos 2016; Cooney et al. 2017). The more closely related the replaced and replacing species are, the more probable it is that the replacing species would have pre-adaptations that would allow it to move into the extinct species’ phenotype space. The generation time of the replacing species is also important, because together with other factors such as population size and genetic diversity, it determines the rate at which the occupying species can adapt to the available niche space. Future work on niche replacement and functional redundancy would likely reduce the uncertainty in terms of the persistence of the welfare effects of extinction events.[31]
In cases where niche replacement is very unlikely or impossible, the average length of the existence of a species is a reasonable upper bound on the estimate of the effects of species extinctions, at least in the absence of human activity. The average lifespan of a species is estimated to be in the vicinity of a couple of million years.[32]
Climate change
Although climate change is of course not an intentional wild animal welfare intervention, reflecting on the persistence and reversibility of global warming can help shed light on how its effects might influence animal welfare over generations. Many of the persistent effects of climate change will influence wild animal welfare,[33] although the precise nature and value of those effects is unclear.[34] It is possible, for instance, that a warming climate could even cause beneficial changes for some species or populations, such as shorter winters and greater food availability. In the case of humans, the welfare effects are almost certainly negative, which could in turn prevent humans from acting to help wild animals.[35]
Anthropogenic carbon emissions can have long-lasting effects on Earth's systems. Between 65% and 80% of current marginal CO2 emissions released into the atmosphere will remain there until they are absorbed by the oceans, a process that will take somewhere between 200 and 2,000 years (Archer et al. 2009). The remaining 20% to 35% of atmospheric CO2 will be absorbed by ocean sediments as CaCO3 over tens of thousands of years (12,000 to 45,000 years, Archer et al. 2009).[36] However, these estimates of how long CO2 persists in the atmosphere describe the persistence in the face of natural processes. What timescales are reasonable given predictions about human activities? In other words, how reversible is climate change?
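Before turning to that question, the natural-process timescales just cited can be summarized with a crude two-pool decay sketch. The fractions and timescales below are midpoints of the cited ranges, and the exponential form is an assumption made for illustration, not a claim about the carbon cycle literature:

```python
# Crude two-pool decay sketch of the atmospheric lifetime figures above.
# Pool fractions and timescales are midpoints of the cited ranges; the
# exponential form is an illustrative assumption.
import numpy as np

fast_frac, fast_tau = 0.725, 1100.0    # ~65-80% absorbed over ~200-2,000 yr
slow_frac, slow_tau = 0.275, 28500.0   # remainder over ~12,000-45,000 yr

def airborne_fraction(t_years):
    return (fast_frac * np.exp(-t_years / fast_tau)
            + slow_frac * np.exp(-t_years / slow_tau))

for t in (100, 1000, 10_000, 50_000):
    print(f"after {t:>6} years: {airborne_fraction(t):.0%} of a marginal "
          f"CO2 pulse still airborne")
```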
The excess CO2 in the atmosphere, brought about mainly by the burning of fossil fuels, currently seems hard to reverse, but if we scale up carbon capture and storage (CCS) quickly enough, it may turn out to be fairly reversible relative to its scale (IPCC 2005, p. 12).[37] It is at least plausible that improvements in CCS technology, or some unknown future technology, could substantially reduce the CO2 concentration in the atmosphere within this century, although reversing ocean acidification and the melting of ice sheets might take a lot longer (Grant et al. 2014; Mathesius et al. 2015).
Although CO2 concentrations may be reversible, their effects may not be. Climate change contributes to an increased rate of extinction (~10% of all species might be lost, Urban 2015) and other seemingly persistent changes, which would have unclear effects on wild animal welfare. Therefore, the results of climate change overall are harder to reverse than the mere increase in the concentration of CO2 in the atmosphere. Thus, it seems that the persistence of the effects of climate change will likely be fairly high, even in the face of human attempts to reverse the effects.
Modifications to enhance plant energy efficiency
Energy efficiency enhancements that lead to increases in organisms’ absolute fitness are likely highly persistent, because evolution will tend to preserve and spread beneficial changes. Human actions can facilitate changes in traits that natural selection cannot, by forcing transitions across deep fitness valleys. Such modifications are currently under development in plants[38] using respiratory bypass, where inefficiencies in the photosynthetic machinery can be removed in crops using genes from bacteria (South et al. 2019).[39]
If such improvements spread to wild plants, for instance via species hybridization (Warschefsky et al. 2014), it could lead to a large increase in biologically available energy. Because plant productivity increases with increased incoming solar radiation (Nemani et al. 2003; Wright & Calderón 2006; Graham et al. 2003; Dong et al. 2012),[40] we would expect increased photosynthetic efficiency to have the same effect. Furthermore, future speciation events could spread the changes further, leading to a large shift in the global composition of plant species. This could lead to substantial increases in global biomass and biologically available energy, which could have large implications for wild animal welfare by generally increasing wild animal populations.
Although the extent of the influence of greater plant biomass on animal populations likely depends on the extent to which a population is resource limited (Power 1992), several studies have shown that food supply and experimental supplementation are correlated with larger animal populations in multiple taxa (Dempster & Pollard 1981; Prevedello et al. 2013; Ruffino et al. 2014; Curtis et al. 2015). However, it is unclear how well these isolated effects on individual species translate into an overall effect of enhanced photosynthesis across the board.[41] Generally, all animal populations have upper limits set by negative density-dependent effects (cf. Malthusian trap[42]), and increases in resource availability could lead to increases in population size if the carrying capacity is increased (Hixon 2008; Huston & Wolverton 2009; but see the paradox of enrichment: Jensen & Ginzburg 2005). Whether such potential increases in the number of wild animals are good will depend on, among other things, questions of population ethics (e.g. Greaves 2017) and the balance of suffering and happiness in nature (Horta 2010; Groff & Ng 2019).
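As a toy sketch of this bottom-up logic (all parameter values assumed), a simple logistic-growth model shows population size tracking an increase in carrying capacity:

```python
# Toy logistic-growth sketch of the bottom-up effect discussed above: if
# enhanced photosynthesis raised the carrying capacity K, population size
# would track it upward. Parameter values are assumed for illustration.
def logistic_step(N, r=0.2, K=1000.0):
    return N + r * N * (1 - N / K)

N = 1000.0  # population settled at the old carrying capacity
for _ in range(100):
    N = logistic_step(N, K=1500.0)  # K raised 50% by extra plant biomass
print(f"population after K increase: ~{N:.0f}")  # settles near the new K
```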
Energy efficiency enhancements in plants seem likely to be implemented given the benefits to human food supply, and are probably highly persistent in the face of natural processes. A remaining question is how reversible such energy efficiency enhancements are, considering that they are favored by natural selection. The answer to this question hinges on gene drive reversibility. If the energy efficiency enhancement is introduced as a single gene, it would require a single gene drive to remove it from the particular species that it had been introduced into. On the other hand, the further it spreads (via hybridization and speciation), the costlier and harder it will be to reverse. Although speciation rates are exceedingly low compared to human time scales,[43] plant hybridization events are comparatively frequent.
Conclusion
In this report, I have addressed the implications of reversibility and persistence of interventions for helping wild animals. At this time, one of my main conclusions is that interventions that are reversible and persistent in the face of natural processes look like promising options when trying to improve wild animal welfare. I have discussed an example of such an intervention category: gene drives. Future work on the topic of wild animal welfare should preferably explore how gene drives could be safely used to improve wild animal welfare and work to ensure responsible gene drive policy. For a list of factors that influence reversibility and persistence in the face of natural processes, see Table 1.
My second main conclusion is that general persistence with very low reversibility is likely suboptimal, not because the direct effects on wild animals are low in comparison, but rather because (1) reversibility confers option value, and (2) the public perception of irreversible and persistent interventions will be negative.
Table 1. List of important determinants of persistence in the face of natural processes and reversibility for interventions, as discussed in the report.
* If currently promising methods can be developed.
Acknowledgements
Will Bradshaw, Cameron Meyer Shorb, Lukas Finnveden, Jane Capozzelli, Kim Cuddington, Gustav Alexandrie, Michelle Graham, Abraham Rowe, Denis Drescher, and Luke Hecht have given very valuable feedback during the process of writing this report. This work was supported by the Effective Altruism Foundation.
References
Anzalone, A. V., Randolph, P. B., Davis, J. R., Sousa, A. A., Koblan, L. W., Levy, J. M., … & Liu, D. R. (2019). Search-and-replace genome editing without double-strand breaks or donor DNA. Nature.
Archer et al. (2009). Atmospheric lifetime of fossil fuel carbon dioxide. Annual review of earth and planetary sciences 37:117-134.
Askell, A. (2017). The moral value of information. https://articles/the-moral-value-of-information-amanda-askell/.
Barnosky, A. D., Matzke, N., Tomiya, S., Wogan, G. O., Swartz, B., Quental, T. B., … & Mersey, B. (2011). Has the Earth’s sixth mass extinction already arrived?. Nature, 471(7336), 51.
Beckstead N. (2013). On the overwhelming importance of shaping the far future (Doctoral dissertation, Rutgers University-Graduate School-New Brunswick).
Beisner, B. E., Haydon, D. T., & Cuddington, K. (2003). Alternative stable states in ecology. Frontiers in Ecology and the Environment, 1(7), 376-382.
Bostrom N, & Ord T. (2006). The reversal test: eliminating status quo bias in applied ethics. Ethics, 116(4), 656-679.
Bostrom N. (2013). Existential risk prevention as global priority. Global Policy, 4:15-31.
Brennan, O. (2018). Wildlife Contraception. https://paper/wildlife-contraception/.
Brin, M., & Stuck, G. (2002). Introduction to dynamical systems. Cambridge university press. p. 25-27.
Burt, A. & Trivers, R. (2009). Genes in conflict: the biology of selfish genetic elements. Harvard University Press.
Collins J & Page L. (2019). The heritability of fertility makes world population stabilization unlikely in the foreseeable future. Evolution and Human Behavior, 40:105-111.
Cooney, C. R., Bright, J. A., Capp, E. J., Chira, A. M., Hughes, E. C., Moody, C. J., … & Thomas, G. H. (2017). Mega-evolutionary dynamics of the adaptive radiation of birds. Nature, 542(7641), 344.
Curtis, R. J., Brereton, T. M., Dennis, R. L., Carbone, C., & Isaac, N. J. (2015). Butterfly abundance is determined by food availability and is mediated by species traits. Journal of Applied Ecology, 52(6), 1676-1684.
Cushing, J. M., Levarge, S., Chitnis, N., & Henson, S. M. (2004). Some discrete competition models and the competitive exclusion principle. Journal of Difference Equations and Applications, 10(13-15), 1139-1151.
Dempster, J. P., & Pollard, E. (1981). Fluctuations in resource availability and insect populations. Oecologia, 50(3), 412-416.
Dennis, B. (1989). Allee effects: population growth, critical density, and the chance of extinction. Natural Resource Modeling, 3(4), 481-538.
Dong, S. X., Davies, S. J., Ashton, P. S., Bunyavejchewin, S., Supardi, M. N., Kassim, A. R., … & Moorcroft, P. R. (2012). Variability in solar radiation and temperature explains observed patterns and trends in tree growth rates across four tropical forests. Proceedings of the Royal Society B: Biological Sciences, 279(1744), 3923-3931.
Eskander, P. (2018a). To reduce wild animal suffering we need to find out if the cause area is tractable. https://blog/to-reduce-wild-animal-suffering-we-need-to-find-out-if-the-cause-area-is-tractable/.
Eskander, P. (2018b). An introduction to human appropriation of net primary productivity. https://paper/an-introduction-to-human-appropriation-of-net-primary-productivity/.
Esvelt, K. M., & Gemmell, N. J. (2017). Conservation demands safe gene drive. PLoS biology, 15(11), e2003850.
Esvelt, K. M., Smidler, A. L., Catteruccia, F., & Church, G. M. (2014). Emerging technology: concerning RNA-guided gene drives for the alteration of wild populations. Elife, 3, e03401.
Eurobarometer (2010). Biotechnology report. Bruxelles, Belgium: TNS Opinion and Social.
Fagan, W. F., & Holmes, E. E. (2006). Quantifying the extinction vortex. Ecology letters, 9(1), 51-60.
Fonseca, C. R., & Ganade, G. (2001). Species functional redundancy, random extinctions and the stability of ecosystems. Journal of Ecology, 89(1), 118-125.
Girardin, L., Calvez, V., & Débarre, F. (2019). Catch me if you can: a spatial model for a brake-driven gene drive reversal. Bulletin of mathematical biology, 81(12), 5054-5088.
Gotelli, N. J. (2001). A primer of ecology. Sunderland, MA: Sinauer Associates. (p. 112).
Graham, E. A., Mulkey, S. S., Kitajima, K., Phillips, N. G., & Wright, S. J. (2003). Cloud cover limits net CO2 uptake and growth of a rainforest tree during tropical rainy seasons. Proceedings of the National Academy of Sciences, 100(2), 572-576.
Grant, K. M., Rohling, E. J., Ramsey, C. B., Cheng, H., Edwards, R. L., Florindo, F., … & Williams, F. (2014). Sea-level variability over five glacial cycles, Nat. Commun., 5, 5076.
Greaves H. (2017). Population axiology. Philosophy Compass, 12:e12442.
Greenwell, R. N., Ritchey, N. P., & Lial, M. L. (2003). Calculus with Applications for the Life Sciences—Markov Chains (online material). Boston: Addison Wesley.
Groff Z, & Ng YK. (2019). Does suffering dominate enjoyment in the animal kingdom? An update to welfare biology. Biology & Philosophy, 34:40.
Hastings, A., Abbott, K. C., Cuddington, K., Francis, T., Gellner, G., Lai, Y. C., … & Zeeman, M. L. (2018). Transient phenomena in ecology. Science, 361(6406), eaat6412.
Hixon, M. A. (2008) Carrying capacity. In: Jorgensen, S.E., Fath, Brian. Encyclopedia of Ecology. London: Elsevier Science. p. 258-260.
Horta, O. (2010). Debunking the idyllic view of natural processes: Population dynamics and suffering in the wild. Télos, 17:73-88.
Huston, M. A., & Wolverton, S. (2009). The global distribution of net primary production: resolving the paradox. Ecological monographs, 79(3), 343-377.
IPCC. (2005). IPCC special report on carbon dioxide capture and storage. Prepared by working group III of the intergovernmental panel on climate change. Metz, B., O. Davidson, H. C. de Coninck, M. Loos, and L. A. Meyer (eds.). Cambridge University Press, Cambridge, United Kingdom and New York, NY, USA. (p 12).
Jensen, C. X., & Ginzburg, L. R. (2005). Paradoxes or theoretical failures? The jury is still out. Ecological Modelling, 188(1), 3-14.
Jinek, M., Chylinski, K., Fonfara, I., Hauer, M., Doudna, J. A., & Charpentier, E. (2012). A programmable dual-RNA–guided DNA endonuclease in adaptive bacterial immunity. science, 337(6096), 816-821.
Johannsen, K. (2017). Animal Rights and the Problem of r-Strategists. Ethical Theory and Moral Practice, 20(2), 333-345.
Keck, F. (2017) Changes in number of authors in ecology journals over time. http://changes-in-number-of-authors-in-ecology-journals-over-time/.
Lindmark, R. (2018). Current Estimates for Likelihood of X-Risk? https://posts/frYYAAa5K4nHqRCPG/current-estimates-for-likelihood-of-x-risk.
MacAskill, W. Manuscript a. Human extinction, asymmetry, and option value. https://document/d/1hQI3otOAT39sonCHIM6B4na9BKeKjEl7wUKacgQ9qF8/. (p. 9).
MacAskill, W. Manuscript b. Should we expect moral convergence? https://document/d/1EaIsqexbG2wiE7WIA_tyXiZjmbOmKc1Gy7rVQDSvMtg/.
Mahoney, J., & Schensul, D. (2006). Historical Context and Path Dependence. In Goodin, R., and Tilly, C., (eds.), The Oxford Handbook of Contextual Political Analysis. Oxford: Oxford University Press. Pp. 454-71.
Manthey M, Fridley JD and Peet RK (2011) Niche expansion after competitor extinction? A comparative assessment of habitat generalists and specialists in the tree floras of south‐eastern North America and south‐eastern Europe. Journal of Biogeography, 38:840-853.
Mathesius, S., Hofmann, M., Caldeira, K., & Schellnhuber, H. J. (2015). Long-term response of oceans to CO2 removal from the atmosphere. Nature Climate Change, 5(12), 1107.
Matthews, D. (2018). A genetically modified organism could end malaria and save millions of lives — if we decide to use it. https://science-and-health/2018/5/31/17344406/crispr-mosquito-malaria-gene-drive-editing-target-africa-regulation-gmo.
Min, J., Noble, C., Najjar, D., & Esvelt, K. (2017). Daisy quorum drives for the genetic restoration of wild populations. BioRxiv, 115618.
Nath S. (2016). The thermodynamic efficiency of ATP synthesis in oxidative phosphorylation. Biophysical chemistry, 219:69-74.
Nemani, R. R., Keeling, C. D., Hashimoto, H., Jolly, W. M., Piper, S. C., Tucker, C. J., … & Running, S. W. (2003). Climate-driven increases in global terrestrial net primary production from 1982 to 1999. science, 300(5625), 1560-1563.
Noble, C., Min, J., Olejarz, J., Buchthal, J., Chavez, A., Smidler, A. L., … & Esvelt, K. M. (2019). Daisy-chain gene drives for the alteration of local populations. Proceedings of the National Academy of Sciences, 116(17), 8275-8282.
Noble, C., Olejarz, J., Esvelt, K. M., Church, G. M., & Nowak, M. A. (2017). Evolutionary dynamics of CRISPR gene drives. Science advances, 3(4), e1601964.
O’Grady, J. J., Reed, D. H., Brook, B. W., & Frankham, R. (2004). What are the best correlates of predicted extinction risk? Biological Conservation, 118(4), 513-520.
Oye, K. A., Esvelt, K., Appleton, E., Catteruccia, F., Church, G., Kuiken, T., … & Collins, J. P. (2014). Regulating gene drives. Science, 345(6197), 626-628.
Power M. (1992). Top‐down and bottom‐up forces in food webs: do plants have primacy. Ecology, 73:733-746.
Prevedello, J. A., Dickman, C. R., Vieira, M. V., & Vieira, E. M. (2013). Population responses of small mammals to food supply and predators: a global meta‐analysis. Journal of Animal Ecology, 82(5), 927-936.
Rakocevic, G., Djukic, T., Filipovic, N., & Milutinović, V. (2013). Computational medicine in data mining and modeling. New York: Springer. (p. 159).
Raup DM (1978) Cohort analysis of generic survivorship. Paleobiology, 4:1-15.
Raup D M (1991) A kill curve for Phanerozoic marine species. Paleobiology, 17:37-48.
Ray, G. (2017). Parasite load and disease in wild animals. https://paper/parasite-load-disease-wild-animals/.
Ricklefs R (2010) Evolutionary diversification, coevolution between populations and their antagonists, and the filling of niche space. Proc Natl Acad Sci USA 107:1265-1272.
Roser, M., and Ritchie, H. (2019) Technological Progress. https://technological-progress.
Roser, M. (2019) Human Rights. https://human-rights.
Ruffino, L., Salo, P., Koivisto, E., Banks, P. B., & Korpimäki, E. (2014). Reproductive responses of birds to experimental food supplementation: a meta-analysis. Frontiers in zoology, 11(1), 80.
Sandberg, A. & Bostrom, N. (2008): Global Catastrophic Risks Survey, Technical Report #2008-1, Future of Humanity Institute, Oxford University: pp. 1-5.
Shooster, J. (2017). Legal personhood and the positive rights of wild animals. https://writing-by-others/legal-personhood-positive-rights-wild-animals/.
Schubert, S., & Garfinkel, B. (2017). Hard-to-reverse decisions destroy option value. https://articles/hard-to-reverse-decisions-destroy-option-value/.
Singer, P. (1981). The expanding circle. Oxford: Clarendon Press.
Soberón J, Peterson AT (2005) Interpretation of models of fundamental ecological niches and species’ distributional areas. Biodiversity Informatics 2:1–10.
Solé, R. V., Ferrer‐Cancho, R., Montoya, J. M., & Valverde, S. (2002). Selection, tinkering, and emergence in complex networks. Complexity, 8(1), 20-33.
South, P. F., Cavanagh, A. P., Liu, H. W., & Ort, D. R. (2019). Synthetic glycolate metabolism pathways stimulate crop growth and productivity in the field. Science, 363(6422), eaat9077.
Steffen, W., Grinevald, J., Crutzen, P., & McNeill, J. (2011). The Anthropocene: conceptual and historical perspectives. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 369(1938), 842-867.
Stroud, J. T., & Losos, J. B. (2016). Ecological opportunity and adaptive radiation. Annual Review of Ecology, Evolution, and Systematics, 47.
Thomas, C. D. (2015). Rapid acceleration of plant speciation during the Anthropocene. Trends in Ecology & Evolution, 30(8), 448-455.
Todd, B. (2017). The case for reducing extinction risk. https://articles/extinction-risk/.
Tomasik, B. (2016). Scenarios for very long-term impacts of climate change on wild-animal suffering. https://scenarios-for-very-long-term-impacts-of-climate-change-on-wild-animal-suffering/.
Tomasik, B. (2018a). Climate change and wild animals. https://climate-change-and-wild-animals/.
Tomasik, B. (2018b). Net primary productivity by land type. https://net-primary-productivity-land-type/.
Tyrrell T, Shepherd J, & Castle S. (2007). The long-term legacy of fossil fuels. Tellus B: Chemical and Physical Meteorology, 59:664-672.
Urban M. (2015). Accelerating extinction risk from climate change. Science, 348:571-573.
Valentine J. W. (1970) How many marine invertebrate fossil species? A new approximation. Journal of Paleontology, 410-415.
Vargas-Terán, M., Hofmann, H. C., & Tweddle, N. E. (2005). Impact of screwworm eradication programmes using the sterile insect technique. In Sterile insect technique (pp. 629-650). Springer, Dordrecht.
Warschefsky E, Penmetsa RV, Cook DR, & von Wettberg EJ. (2014). Back to the wilds: tapping evolutionary adaptations for resilient crops through systematic hybridization with crop wild relatives. American journal of botany, 101:1791-1800.
Webber, B. L., Raghu, S., & Edwards, O. R. (2015). Opinion: Is CRISPR-based gene drive a biocontrol silver bullet or global conservation threat?. Proceedings of the National Academy of Sciences, 112(34), 10565-10567.
The World Bank. (2016). Agricultural land (% of land area). http://indicator/AG.LND.AGRI.ZS.
Wright, S. J., & Calderón, O. (2006). Seasonal, El Nino and longer term changes in flower and seed production in a moist tropical forest. Ecology letters, 9(1), 35-44.
Appendix A
Considerations and limitations of reversibility
Reversibility as a proxy for malleability
When deciding whether to reverse an intervention, we want to use a utility function that assigns value to different states of the world (such as the world in which a bad intervention is reversed, and the world in which it is not). But returning the world to the same physical state it was in before the intervention was implemented will not necessarily correspond to a maximization of utility. Even though a complete reversal of the world after a bad intervention would lead to a net gain in utility, there might be other states with higher utility, in which the effect of the intervention can be said to be cancelled but the physical state of the world is not reversed.
Given the goal of maximizing welfare, then, the assumption that we should always move back to the original state after an intervention goes poorly resembles the status quo bias (Bostrom & Ord 2006). The likelihood that the optimal configuration of the world just happens to match what we had before the intervention is fairly low (Bostrom & Ord 2006). This shortcoming of the concept of reversibility might not be a big problem, however. The ability to return the world to a state that contains the same moral value will likely correlate with how easily changes can be made to the state of the world (i.e. how malleable the system is). However, this proposed correlation is mainly conjecture and would need to be investigated further.
Granularity of world-comparisons
To conceptualize the measurement of the degree of reversal, we can consider the state of the world as a point in multidimensional state space, where every variable has its own axis and a point represents a complete description of the world in a given state. The difference between two states of the world is then measured as the Euclidean distance between the worlds in state space. When assessing reversibility, we are thus measuring how much we can reduce the distance between the two worlds in state space.
An alternative approach, which is not considered in this report, is to measure the difference between worlds in a more granular way. We could, for instance, decide to measure only a subset of parameters, and then similarly calculate the Euclidean distance between the worlds in this reduced state space. It is unclear whether it is better to conceptualize reversibility in a complete or a reduced state space, but a complete state space seems less arbitrary than choosing some parameters specific to each intervention and measuring reversibility only on those.
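A small sketch of this state-space picture, using arbitrary toy state vectors, reads the degree of reversal off as the fractional reduction in Euclidean distance to the pre-intervention world:

```python
# Sketch of the state-space picture: the degree of reversal achieved is
# the fractional reduction in Euclidean distance between the actual world
# and the pre-intervention world. State vectors are toy numbers; a reduced
# state space would simply slice out a subset of the coordinates.
import numpy as np

before = np.array([1.0, 5.0, 2.0])          # world state before intervention
after = np.array([4.0, 1.0, 2.0])           # state after the intervention
after_reversal = np.array([2.0, 4.0, 2.0])  # state after partial reversal

d_intervention = np.linalg.norm(after - before)
d_remaining = np.linalg.norm(after_reversal - before)
r = 1 - d_remaining / d_intervention  # degree of reversal achieved
print(f"degree of reversal r = {r:.2f}")
```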
Reversibility for different agents
Lastly, it might be useful to distinguish between reversibility for different agents. It is possible that agents who care about wild animals, who might be in a minority even in the future, will have a reduced ability to affect the states of natural systems, even if humanity's aggregate capabilities have increased. This could happen if one or a few agents or institutions obtain a decisive strategic advantage, largely excluding others from decision-making processes.
Appendix B
Relationship between reversibility and other important concepts
Value of information (VOI) is a concept related to reversibility. The expected utility of future information about the effects of an intervention is affected by (1) the expected utility of a reversal given the information that the intervention produced a bad outcome, and (2) the probability of a bad outcome. The probability of a bad outcome is important, since it determines how likely it is that we will get to make use of the information by reversing the intervention. Thus, the VOI of intervention A is given by:
Box 3: Value of information
VOI = P_bad_outcome × EU_reversal
VOI = value of information,
P_bad_outcome = probability of a bad outcome, and
EU_reversal = the maximum expected utility produced by a reversal (see equation 1 and Definition of reversibility for further detail)
Here we assume binary outcomes (not continuous probability distributions). The larger the probability of a bad outcome, and the larger the expected utility of reversing such an outcome, the larger the value of information. Note that, for simplicity, we are substituting the extent to which we can move to the optimal state with the extent to which we can move to the reversed state (see Reversibility as a proxy for malleability in Appendix A for a discussion of this). Furthermore, the expected utility of doing intervention A is defined as follows:
Box 4: Value of an intervention
EU_A = P_good_outcome × V_good_outcome − C_A × CE_intervention_B + P_bad_outcome × V_bad_outcome + P_bad_outcome × EU_reversal
EU_A = the expected utility of an intervention (intervention A),
V_good_outcome and V_bad_outcome = the utility that would be produced if there was a good or bad outcome, respectively,
P_good_outcome and P_bad_outcome = the probabilities of obtaining a good or bad outcome, respectively,
C_A = the cost of intervention A, and
EU_reversal = the maximum expected utility produced by a reversal given a bad outcome
EU_A is the most decision relevant quantity of the ones discussed in this Appendix, since it gives you the expected utility of intervention A including the opportunity cost of not doing the next best thing (intervention B). If EU_A is positive, and given that we assume an expected utility approach, we should do intervention A.
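To illustrate how these quantities interact, here is a worked numerical example of Boxes 3 and 4. All inputs are assumed for illustration; note how the option to reverse can flip EU_A from zero to positive:

```python
# Worked numerical example of Boxes 3 and 4 with assumed inputs, showing
# how reversibility feeds into the value of information and the expected
# utility of intervention A.
P_good, V_good = 0.7, 100.0   # assumed probability/value of a good outcome
P_bad, V_bad = 0.3, -200.0    # assumed probability/value of a bad outcome
C_A = 10.0                    # assumed cost of intervention A
CE_B = 1.0                    # assumed cost-effectiveness of intervention B
EU_reversal = 120.0           # assumed max expected utility of a reversal,
                              # i.e. the optimum of Equation 1, given a bad outcome

VOI = P_bad * EU_reversal                       # Box 3
EU_A = (P_good * V_good - C_A * CE_B
        + P_bad * V_bad + P_bad * EU_reversal)  # Box 4
print(f"VOI = {VOI}, EU_A = {EU_A}")
# Without the reversal term, EU_A = 70 - 10 - 60 = 0; the option to reverse
# (worth P_bad * EU_reversal = 36) is what makes intervention A worth doing.
```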
1. See for instance: https://concepts/cost-effectiveness-analysis/. ↩︎
2. “Benefit” here refers to the benefit in expectation, which accounts for all possible outcomes (both positive and negative) in proportion to their likelihood. ↩︎
3. Note that the action could be any type of action, and does not necessarily have to be related to wild animal welfare. ↩︎
4. I assume here that continuing to reverse A is the action with the highest expected utility, as if it were not, you would just do whatever action has the highest expected utility. ↩︎
5. Relatedly, an alternative way of defining reversibility that applies in cases without diminishing returns of r to C_r, is the value of C_r when r = 1 (e.g. Schubert & Garfinkel 2017). ↩︎
6. Specifically, the product of C_r and CE_intervention_B gives this opportunity cost, in that it represents forsaken expected utility of intervention B given the total investment in reversing intervention A. ↩︎
7. The number of papers produced in ecology has been rising at a steadily increasing (perhaps exponential) rate since the 1950s (Keck 2017). ↩︎
8. Processor speed, memory storage, and many other aspects of computer efficiency have increased at roughly exponential speeds (Roser & Ritchie 2019). ↩︎
9. See Appendix B for a discussion of how the value of information relates to reversibility. See Askell (2017) for an explanation of how to estimate the value of information. ↩︎
10. Of course, one could define interventions maintained by humans as being persistent over time, but that type of pseudo-persistence is conditional on the continued upkeep and outlay of resources. The reason for restricting the definition of persistence is that independently persistent interventions have features that “maintained” interventions do not: they are likely to be more cost effective and less reversible. ↩︎
11. Persistence can also be due to transient effects (Hastings et al. 2018): an action that changes the state of a system can have long-lasting effects even if the system moves (very) slowly back to the initial state through a transient state. Here, transient refers to the technical definition from complexity theory (Hastings et al. 2018). ↩︎
12. This is similar to how the resilience of states with regards to ecological systems is defined in Beisner et al. (2003), although they consider resilience to be a property of a system that can change over time, such that a system can be resilient currently but predictably non-resilient in the future. In this report, I take persistence to mean the expected (i.e. predicted) duration spent in a certain state, thus aggregating over all time slices. ↩︎
13. This assumes that the termination probability is constant over time, or that the strength/intensity of the effect of each intervention will exponentially decay. Other shapes of the survival function might apply, but without specific information about the shape of the distribution, we should opt for the maximum entropy distribution (the one with the fewest assumptions: Rakocevic et al. 2013, p. 159). In cases with non-negative values, such a distribution is described by an exponential decay function. ↩︎
14. Although if they are persistent because future humans approve of the intervention, we should not take credit for the full effect, since the counterfactual might have been that the intervention would have been implemented anyway. ↩︎
15. However, it is possible that such persistent interventions are more costly on average, at least in terms of research, than less persistent interventions. Intuitively it seems like making persistent changes to ecosystems is harder than making short-lasting changes, since evolutionary pressures and population dynamics often act as negative feedback loops to any fitness-decreasing changes. Furthermore, research into overcoming such negative feedback loops might be costly. These costs are probably not large enough to outweigh the added benefit of higher persistence. ↩︎
16. Assuming that there is some cost to restricting the space of possible interventions to reversible ones. ↩︎
17. For summaries of different estimates, see Todd (2017) and Lindmark (2018). ↩︎
18. For a similar discussion, see What is the likelihood of interventions being accepted and adopted? in Eskander (2018a). ↩︎
19. These selfish genetic elements copy themselves into the genomes of host organisms, with the capability of spreading among individuals within species, and even among species. The most common forms of naturally occurring gene drives are transposable elements, where multiple copies exist in the genome of most species (Burt & Trivers 2009, p. 228). ↩︎
20. An even newer and more precise gene editing technology, called prime editing, was recently described (Anzalone et al. 2019), which could replace CRISPR/Cas9 as the method of choice for gene drives. ↩︎
21. A gene is fixed within a population when all individuals share that particular gene variant. ↩︎
22. From a rights-based perspective, one might argue that modifying genes violates the autonomy of animals. However, many applications could plausibly be said to increase autonomy. For example, engineering disease resistance into animal populations would allow wild animals to live longer and fuller lives. For a broader discussion of rights-based perspectives on helping wild animals, see Shooster (2017). ↩︎
23. It might appear that an extinction is an infinitely stable change (to the extent that we cannot resurrect the species). However, this ignores the fact that the counterfactual is not that the species will exist forever; the average time until extinction for most animal taxa is a few million years (see Functional redundancy in ecosystems and niche expansion). ↩︎
24. The reason for gene drives being both persistent in the face of natural processes, and highly reversible, is probably related to the fact that the genetic code of living organisms has evolved to ‘defy’ entropy (by increasing entropy elsewhere) and retain information, while at the same time it has not evolved mechanisms to protect it against human gene modification tools. ↩︎
25. Underdominance is when heterozygotes (individuals with two different gene variants at a specific site) have lower fitness than homozygotes (individuals with the same gene variants at the specific site). ↩︎
26. A daisy drive is in many ways identical to a standard gene drive, but with a built-in self-extinguishing effect (Noble et al. 2019). This is accomplished by having a series of genetic elements driving each other in a chain-like pattern, where the first link in the chain will be inherited as any other gene, and will thus be selected against. When the first link in the chain has been eliminated from the population, there is nothing left driving the second link, which will cause it to be purged from the population by natural selection. This continues until all gene drive elements have been eliminated. ↩︎
27. Another scenario that is conceivable based on the methods of Min et al. (2017), is speciation due to mate choice based on traits that might distinguish genetically modified individuals from wild type individuals. I find this implausible because such reproductive isolation would have to be established extremely fast. ↩︎
28. Extinction will here refer to the extinction of non-human animals. ↩︎
29. This is a bit of an illusory distinction, since most species will not survive in the absence of other organisms. It might be a useful heuristic though, especially in the case discussed here where perhaps just a single species is removed from the environment. ↩︎
30. “As in the case of range occupancy, however, species often are absent from locations that otherwise are judged to be appropriate” (Ricklefs 2010, pages 84–86). ↩︎
31. It would be interesting to see empirical or theoretical work quantifying the difference between the fundamental and realized niche, and the effects of species removal on niche occupancy, ideally with a focus on the welfare of the extinct and the replacing species. Studies such as the study by Manthey et al. (2011), looking at the difference between the fundamental niche and the realized niche, would be needed to get estimates of the time to niche replacement. Furthermore, studies on the speed of evolution of phenotypic character displacement might give indications of how quickly certain characters can re-evolve once the displacement pressure disappears. ↩︎
32. The mean duration of invertebrate genera in the fossil record during the Phanerozoic Eon (i.e. from the Cambrian period, 541 million years ago, until present) is 11.1 million years (Raup 1978). According to a review of the literature by Valentine (1970), the mean duration of marine invertebrates is somewhere between 5 and 10 million years. Raup (1991) followed up his previous study on invertebrates (Raup 1978), but focused specifically on marine animals (including marine vertebrates as well), and found that the average duration of genera was 4 million years. However, he seemed to think differences in the results (11.1 and 4 million years) were due to differences in statistical methods rather than differences in the underlying data. If this is correct, the results of Raup (1991) are likely more accurate, since that estimate was produced later. It should be noted that the genus is commonly used as opposed to species when calculating extinction rates (e.g. Raup 1978, 1991), but the most relevant level for the discussion of gene drives is the species level. ↩︎
33. This has been written about previously (long-term effects: Tomasik 2016; and short-term effects: Tomasik 2018a). ↩︎
34. This is due to the difficulty of predicting the welfare implications of, for instance, changes in the distribution and population sizes of wild animals. ↩︎
35. Even if the effects of climate change would be positive for wild animal welfare, it does not follow that we should not try to mitigate the effects of climate change. From a longtermist perspective (Beckstead 2013; Bostrom 2013), actively increasing the risk of human extinction brought about by extreme climate change could be very bad (assuming that the expected utility of humanity’s continued existence is positive). ↩︎
36. Although earlier studies have produced estimates that differ substantially from Archer et al. (2009) and each other (see table 1 in Tyrrell et al. 2007). ↩︎
37. “In most scenarios for stabilization of atmospheric greenhouse gas concentrations between 450 and 750 ppmv CO2 and in a least-cost portfolio of mitigation options, the economic potential of CCS would amount to 220–2,200 GtCO2 (60–600 GtC) cumulatively, which would mean that CCS contributes 15–55% to the cumulative mitigation effort worldwide until 2100, averaged over a range of baseline scenarios.” (IPCC 2005, p. 12). ↩︎
38. It is also possible that energy efficiency enhancements could be carried out directly on animals, although I think it is fairly unlikely. The efficiency of cellular respiration in animals seems to be around 40% (Nath 2016), which indicates that there is room for efficiency improvements at the biochemical level, at least in principle. ↩︎
39. For a summary, see: https://objectives/photorespiratory-bypass. ↩︎
40. For tentative discussions on the effects of changes in net primary productivity (NPP) on wild animal welfare, see Tomasik (2018b) and Eskander (2018b). ↩︎
41. There are indirect measures suggesting a relationship between primary productivity and animal biomass (Huston & Wolverton 2009), which probably partly manifests as an increase in the total number of individual animals. By definition, with zero primary production by plants and other organisms, no heterotrophic organisms (i.e. organisms that cannot produce their own food) can be sustained. Consequently, there is definitely a positive relationship between the total number of individual animals and primary production, at least for the lower part of the primary production range. ↩︎
42. Humans are probably an exception, although similar processes might affect us as well in the future (Collins & Page 2019). ↩︎
43. These rates are on the order of 0.1 to 10 speciation events per million species-years (Thomas 2015). ↩︎
|
Rule of the Clan
Last week, David Cameron’s brother, working without pay, caused the collapse of a fraud trial on the grounds that, because of legal aid cuts, the defendants could not get proper representation. It is a challenge to government policy mounted by a senior lawyer who happens to be the Prime Minister’s brother.
It made good copy and briefly gave us all a good laugh, Cameron v Cameron and all that, but no-one really thought it was that odd. Family members disagree about all sorts of things and the legal profession has been criticising the government’s legal aid cuts for some time.
It might not look odd in this country but, in many countries, a public spat between two powerful brothers would be unusual. In some, it would be unthinkable. If the country’s leader’s brother were a top lawyer then the two of them would be in cahoots. Furthermore, people would expect them to be in cahoots and think it strange if they weren’t.
I’ve just finished reading a fascinating book Rule of the Clan by legal historian and law professor Mark S. Weiner. It is a study of societies based on extended kinship groups and the story of how some managed to move on from clan-based structures to create modern states.
His argument goes like this:
In clan-based societies, the primary relationships are between kinship groups, rather than between individuals. Authority stems from the senior members of the clan and the individual is submerged within it. A person’s rights and obligations are governed by the kinship group and their position in its hierarchy. Property is considered the property of the group rather than the individual.
Offence is taken collectively too. Therefore, if your brother steals from someone and goes into hiding, the victim and his family will come after you. If they can’t get at him, you will do. Furthermore, clan societies are characterised by the need to maintain honour and avoid collective shame. So the theft victim’s family have to be seen to take revenge. The trouble is, after they have given you a kicking, your family will feel affronted and so must be seen to hit back. In this way, feuds start.
In some societies, clan-based feuds can lead to bloodshed out of all proportion to the original crimes. Weiner describes how feuding got so bad in medieval Iceland that the country’s elders had to beg Norway, the country their ancestors had escaped from, to resume sovereignty. Only a powerful referee, in this case the Norwegian state, could put a stop to the killing.
Indeed, says Weiner, it is only when people transfer their allegiance from the clan to something wider, like the country, the nation, or the king, that the sort of society we take for granted in the West becomes possible. He describes how this happened very gradually in Anglo-Saxon England. Clan ties weakened and loyalty to a wider group, symbolised by the emerging Anglo-Saxon kings, took its place.
This progression ‘from kin to king’, says Weiner, makes a whole lot of other things possible. For example, common, widely applicable laws, an infrastructure for resolving disputes and a system for punishing people without resorting to feud. Eventually, people seek redress from the law and the state rather than from blood vengeance.
The establishment of the rule of law allows the individual to come to the fore. No longer dependent on the clan for protection, people can own property on their own account, start businesses and even marry whom they please. Crucially, they can start enterprises without fear of losing everything because a kinsman has offended someone in a neighbouring clan. Paradoxically, the strong centralised state allows individualism to flourish. Without it, people have to rely on the clan for support and must therefore accept its restrictions.
If you have read Daron Acemoglu and James Robinson’s Why Nations Fail, some of this will sound familiar. (See previous post.) Their argument is that some nations have grown prosperous because they have developed the political and economic institutions which allow enterprise and wealth creation to flourish and the benefits to be widely distributed. Rule of the Clan provides the preceding chapter to Why Nations Fail. It provides the social and historical underpinning, explaining the changes that allowed these institutions to develop.
It is a persuasive argument. The World Bank’s World Governance Indicators show the rule of law fading out once you get outside western Europe and its former colonies. In the countries coloured green, relationships are primarily individual and commercial relationships are contract-based. The dark red countries are, for the most part, clan-based societies.
[Image: World Bank Worldwide Governance Indicators map of the rule of law]
Even where there are the trappings of a liberal state, with laws, courts and constitutions, the persistence of clan rule means that their effectiveness is limited. Saudi Arabia, for example, is based on a pyramid of tribal alliances. Those tribes locked out of the inner circle wage war against the state. And why wouldn’t they? It’s not their state. It’s the Al-Saud’s state. It even says so on the label.
But, as Nicholas Roberts said in Middle East Monitor this week, we need to be careful about labelling all conflicts in the Arab world and sub-Saharan Africa as tribal. That might have been the case once but modern gangsterism and militia-based loyalty is cutting across tribal bonds. Clan loyalties are part of the story in Libya and Yemen but not all of it.
The presence or otherwise of clan-based power also doesn’t explain all the differences between the liberal and prosperous societies and the rest. As this review in the New York Journal of Books says:
Unfortunately, Mr. Weiner seems unaware of any culture that falls outside the two poles of individual freedom and clannish subjugation of the individual.
That said, I don’t think Mark Weiner was trying to give a universal explanation for why liberal societies have developed in the way they have. The decline of clans is one factor; an important one but not the only one.
Others have accused Weiner of endorsing the Whig interpretation of history. Again, a little unfair. It’s true that he discusses a steady progress towards the rule of law in European countries, but he does not say it was inevitable. If anything, he shows that much of the West’s stability and prosperity is due to an accident of history. We are just lucky that, through a combination of factors, our societies escaped from clan rule.
Rule of the Clan is a valuable book because it helps us understand that western societies are not normal, either by global standards or historical ones. For most of human history, and in much of the world now, individualism, the rule of law, contracts and property rights were unknown. This often comes as a surprise to westerners, especially when they find the property they thought they had bought in another country seized by someone else, or find the police unwilling to act against someone who has clearly committed a crime.
Rule of the Clan also contains a stark warning. If the state, and with it the rule of law, are weakened, the rule of the clan could return in a far shorter time than it took to disappear. Where the law is weak, people will try to protect themselves through extended family groups or through other groups which have similar structures and hierarchies, such as gangs. As Weiner puts it:
A decline in the state would bring chaos and catastrophe for individualism.
The book is not a dense read. You don’t need an academic knowledge of history or law to understand the points or the examples used. If you want a flavour of it, have a look at Mark Weiner’s blog, where there are plenty of links and follow up articles.
It is an important book though. It’s one of those I had to keep putting down and thinking about before I moved on to the next bit. Rule of the Clan gives another clue to that often asked question, why did Europe and its New World offspring become so much more prosperous than everywhere else. It’s not the whole answer, of course, but it fills in some of the blanks.
8 Responses to Rule of the Clan
1. Good recommendation. I can’t make much of a point until reading it. I would suggest though that examples of social groupings within Western nations which work more on a clan basis than individualism would be peasant societies, such as found in Ireland and remotish rural communities everywhere in Europe.
2. sdbast says:
Reblogged this on sdbast.
3. begob says:
Yeah, Ireland was a great example of this – used to have a clan based system, warfare was the dominant industry, and justice was a sort of privatised arbitration system.
I wonder if that’s Chris Grayling’s vision of England’s future.
4. Dave Timoney says:
The suggestion (by begob) that in Early Medieval Ireland “justice was a sort of privatised arbitration system” is misleading. Brehon Law, which had a particular focus on property and contracts, could accurately be described as a system of “common, widely applicable laws, an infrastructure for resolving disputes and a system for punishing people without resorting to feud”. In fact, it is not unreasonable to see the absence of a strong, central state as a major stimulus to its development.
There are also plenty of examples of sophisticated (albeit pre-industrial) societies, with central government and legal codes, that retained an emphasis on clan rights and duties, notably Classical Greece and Republican Rome. A society in which warfare is a “dominant industry” is also no bar to the development of trade and commerce, not to mention cultural goods, as evidenced by Feudal Japan.
I’ve not read Weiner’s book, but it looks like a case of post hoc ergo propter hoc. Agriculture places a premium on intermittent mid-scale cooperation, so pre-industrial areas tend to have societies based on clans or kinship groups (even if people live and usually work in smaller units). Industrial capitalism tends to destroy these social structures, replacing them with a form of individualism that would be otherwise impossible. But this does not mean that pre-industrial areas lack sophisticated legal systems. Given the importance of land ownership and inheritance, the opposite is often the case. The imposition of pro-capital laws is routinely accompanied by an ideological denigration of the pre-existing structures as “backward”, “superstitious” and “unenlightened”, with the implication that they never really existed at all, hence leading to the belief that capitalism is “the rule of law”.
Acemoglu and Robinson are right to stress the importance of the institutional framework in the development of Western capitalism, but we shouldn’t assume that laws and constitutions are value-free, and that therefore a simple quantity theory of law will suffice. For example, you summarise their position as: “some nations have grown prosperous because they have developed the political and economic institutions which allow enterprise and wealth creation to flourish and the benefits to be widely distributed”. Most economic historians would agree that the key legal development in modern capitalism was the invention of limited liability, which allowed losses “to be widely distributed”.
• “A privatised arbitration system”, albeit unofficial, I would say has occurred in Ireland until recent times. It is a general characteristic of inward-looking peasant societies who shun the official organs of law even where available. The difference between a peasant society and an industrial one is one of outlook. Such activity would be disapproved of by ordinary people in England, whereas in Ireland there was widespread approval of sorting things out “in-house”, so to speak, and disapproval of using the state facilities. If I interpreted Begob’s point right, as everyone becomes increasingly fearful of calling the police when they have a problem, that sets the stage for reverting to the peasant norm. If you don’t have a community who can enforce “law” for you, you will be left in a very vulnerable vacuum.
• begob says:
You’re perfectly correct on the corpus of brehon law. But its administration must have contrasted severely to that of state courts with judges appointed during good behaviour. We can’t say for sure because the evidence has been lost.
My assumption is that brehons would have been prey to the same pressures described by Elizabeth Warren when she volunteered as a member of the arbitrator panel for the US credit industry: “he who pays the piper …”
Thanks for the other interesting points. Plus your blog is unputdownable – just occurred to me that has two meanings, both accurate! And hidden in plain sight too.
5. Lucky Godot says:
Interesting reference and sounds like an important book. It may cover some common ground with Pinker’s “Better Angels of Our Nature”, albeit this has a much broader scope. Pinker describes a convincing picture of a consistent reduction in violence across human history, one explanation given being the emergence of the modern democratic state, with a monopoly on the legitimate use of violence. This does require strong democratic apparatuses and the ability to challenge the authority of the state (to minimise the state abusing its position of privileged power), but where this is present the levels of homicide are generally very low and show a generally consistent decline over time. Countries without this are likely to have much higher levels of homicide. Although the USA may not experience the same level of arms-related deaths as Somalia, its much higher level than Western European states may in part relate to a challenge to the state’s monopoly position and the insistence on its population having the right to exercise its own ability to use extreme force. In an earlier work Pinker cites research from Daly and Wilson on homicide, where the main ‘reason’ identified for a great deal of murder is something that can be covered by the term ‘argument’. Seemingly honour and revenge are a more common motivation for violence and murder than anything else.
This appears to be one of the arguments that can be made against ‘small government’ (or, as it is presented, an argument against big government). If the state is an effective manager of communal disagreement, the question for politicians may not be how to reduce the size of government (whatever that may mean) but first how to establish and assert democratic control of the government, and of the democratic system over the state apparatus, as it may be that the strong state is the most effective means of securing freedom from violence and persecution for most of the population. In a sense the strength of the state should be celebrated: but balanced by control and challenge (separation of powers, independent judiciary, free and diverse mass media etc).
|
P. 400 lower (with art)
Homer, Iliad 5.628-51
but Tlepolemus, son of Heracles, a valiant man and tall, was roused by resistless fate against godlike Sarpedon. And when they were come near as they advanced one against the other, the son and grandson of Zeus the cloud-gatherer, then Tlepolemus was first to speak, saying: “Sarpedon, counsellor of the Lycians, why must thou be skulking here, that art a man unskilled in battle? They speak but a lie that say thou art sprung from Zeus that beareth the aegis, seeing thou art inferior far to those warriors that were sprung from Zeus in the days of men of old. Of other sort, men say, was mighty Heracles, my father, staunch in fight, the lionhearted, who on a time came hither by reason of the mares of Laomedon with but six ships and a scantier host, yet sacked the city of Ilios and made waste her streets. But thine is a coward’s heart, and thy people are minishing. In no wise methinks shall thy coming from Lycia prove a defence to the men of Troy, though thou be never so strong, but thou shalt be vanquished by my hand and pass the gates of Hades.” And to him Sarpedon, captain of the Lycians, made answer: “Tlepolemus, thy sire verily destroyed sacred Ilios through the folly of the lordly man, Laomedon, who chid with harsh words him that had done him good service, and rendered him not the mares for the sake of which he had come from afar. Greek Text
Homer, Iliad 8.283-84
Telamon, who reared thee when thou wast a babe, and for all thou wast a bastard cherished thee in his own house. Greek Text
Homer, Iliad 20.144-48
So saying, the dark-haired god led the way to the heaped-up wall of godlike Heracles, the high wall that the Trojans and Pallas Athene had builded for him, to the end that he might flee thither and escape from the monster of the deep, whenso the monster drave him from the seashore to the plain. Greek Text
Diodoros Siculus 4.32.1
After this Heracles, returning to Peloponnesus, made war against Ilium, since he had a ground of complaint against its king, Laomedon. For when Heracles was on the expedition with Jason to get the golden fleece and had slain the sea-monster, Laomedon had withheld from him the mares which he had agreed to give him and of which we shall give a detailed account a little later in connection with the Argonauts. Greek Text
Diodoros Siculus 4.32.2-3
At that time Heracles had not had the leisure, since he was engaged upon the expedition of Jason, but later he found an opportunity and made war upon Troy with eighteen ships of war, as some say, but, as Homer writes, with six in all, when he introduces Heracles’ son Tlepolemus as saying:
Aye, what a man, they say, was Heracles
In might, my father he, steadfast, with heart
Of lion, who once came here to carry off
The mares of King Laomedon, with but
Six ships and scantier men, yet sacked he then
The city of proud Ilium, and made
Her streets bereft.
When Heracles, then, had landed on the coast of the Troad, he advanced in person with his select troops against the city and left in command of the ships Oecles, the son of Amphiaraus. And since the presence of the enemy had not been expected, it proved impossible for Laomedon, on account of the exigencies of the moment, to collect a passable army, but gathering as many soldiers as he could he advanced with them against the ships, in the hope that if he could burn them he could bring an end to the war. Oecles came out to meet him, but when he, the general, fell, the rest succeeded in making good their flight to the ships and in putting out to sea from the land. Laomedon then withdrew and joining combat with the troops of Heracles near the city he was slain himself and most of the soldiers with him. Heracles then took the city by storm and after slaughtering many of its inhabitants in the action he gave the kingdom of the Iliadae to Priam because of his sense of justice. Greek Text
Hesiod, Ehoiai (Catalogue of Women) fr 165 MW – Fragmenta Hesiodea, pp. 80-81, ed. R. Merkelbach and M. L. West. Oxford 1967.
Peisandros fr 11 PEG – Poetae Epici Graeci 1, p. 170, ed. A. Bernabé. Leipzig 1987.
Peisandros says that Herakles gave Telamon a most beautiful cup for the expedition against Ilios. (Transl. E Bianchelli)
Boston, Museum of Fine Arts. Corinthian krater. Herakles, Sea Monster.
Boston Museum
Pindar, Nemean 4.25-26
Heracles, with whom once powerful Telamon destroyed Troy and the Meropes. Greek Text
Pindar Isthmian 6.26-30
Telamon. The son of Alcmena led him in ships to Troy, the toil of heroes, for war that delights in bronze, as an eager ally along with the men of Tiryns because of Laomedon’s wrongdoing. Greek Text
Artistic sources edited by Frances Van Keuren, Prof. Emerita, Lamar Dodd School of Art, Univ. of Georgia, October 2017.
|
Primary Account Number (PAN) Definition
If you have a credit or debit account, you’ll have a unique identifying number to go with it. This is called a primary account number, or PAN. How are these numbers assigned and what do they mean? We’ll explore the ins and outs of PANs below.
What is a primary account number (PAN)?
Look at the front of any credit card and you’ll see a unique number embossed across it. This is the cardholder’s primary account number, shortened to PAN. A PAN can be anywhere from 14 to 19 digits in length, depending on the type of account. Also called a payment card number, this unique identifier includes valuable details that the payment processor needs to complete a transaction. In addition to credit and debit cards, PAN cards also include things like store gift cards and prepaid cards.
How do PANs work?
When a customer opens a new payment account, the issuer automatically generates a unique PAN to go with it. Although they may vary in digit length, all PANs follow the same basic structure. The first few digits identify the issuing company and card network, whether it’s Visa, MasterCard, or a different provider. The final digit is a check digit (checksum), used to catch mistyped or invalid numbers. In between, the middle digits serve as a unique identifier for each customer’s account.
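To make the check digit concrete, here is a minimal sketch of the Luhn algorithm, the checksum commonly used to validate payment card numbers. The function name and the test number are illustrative only, not part of any card scheme’s or GoCardless’s API:

```python
def luhn_valid(pan: str) -> bool:
    """Return True if a digit string passes the Luhn check.

    Working from the rightmost digit, every second digit is doubled
    (subtracting 9 whenever the doubled value exceeds 9); the number
    is valid when the total, check digit included, is divisible by 10.
    """
    digits = [int(ch) for ch in pan if ch.isdigit()]
    total = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:  # every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

# "4111 1111 1111 1111" is a widely published Visa-format test number.
print(luhn_valid("4111 1111 1111 1111"))   # True
print(luhn_valid("4111 1111 1111 1112"))   # False: check digit altered
```

A digit string that fails this check cannot be a valid PAN, so a payment form can reject obvious typos before ever contacting the card network.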
A PAN is only used on payment cards, which means it’s different from bank account codes such as the BIC (also known as the SWIFT code).
Primary card number vs secondary card number
Do you have an account with more than one authorised user? In that case, you could consider applying for separate PAN cards for both users. Some credit and debit card issuers will give you PAN cards with the same primary account number on each. Yet others will use a secondary account number instead to keep the cards separate for tracking purposes.
Business credit cards apply PANs slightly differently. The business owner opens a bank account or corporate credit card account. However, when cards are issued for employee use none of them will show the PAN. Instead, each employee receives a unique secondary account number. This is helpful for businesses to track and manage employee card use.
How to apply for PAN card online
All major credit and debit cards use PANs, so if you wish to apply for a PAN card online, you’ll simply need to follow the usual credit application steps.
To get started, figure out which credit card best suits your needs before you submit a PAN card application. Think about factors like international use, rewards, and credit limits. You’ll also need to check your credit score and eligibility to see what types of cards are available to you. Use a credit card comparison tool to see which rates you’re eligible for, comparing terms and interest rates carefully.
Finally, you’re ready to submit your application online. Be sure to follow up and track PAN card status if you haven’t received a reply within the timeframe specified.
PAN card security features
One final note to consider before submitting a PAN card application is security. How will you protect your account number from credit card theft and fraud? Most credit card companies require merchants to protect their customers’ primary account numbers.
PAN truncation is a tool that prevents merchants from storing the full PAN. This involves printing only the first or last few digits of the account number on receipts and omitting other identifying details. Account numbers should also be encrypted when travelling from Point A to Point B, making it much harder for criminals to intercept and misuse PANs.
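As a rough illustration of truncation, the sketch below masks all but the last four digits before a PAN is printed or logged. Exactly which digits a merchant may retain varies by card scheme and local rules, so the choice of “last four” here is an assumption for the example:

```python
def truncate_pan(pan: str, visible: int = 4) -> str:
    """Mask all but the last `visible` digits of a PAN for receipts or logs."""
    digits = "".join(ch for ch in pan if ch.isdigit())
    return "*" * max(len(digits) - visible, 0) + digits[-visible:]

print(truncate_pan("4111 1111 1111 1111"))  # ************1111
```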
Protect your primary account numbers and associated cards carefully. If you’re a business owner, you’ll also need to be sure that you’ve put all required precautions into place to protect your customer PANs from hackers and fraud.
|
What are two important holidays in Judaism?
Erev Pesach — Fast of the Firstborn. Observed only by a fast of the firstborn males, it marks the beginning of Passover. Erev Rosh Hashanah — Nine Nights. The celebration and festival last for nine nights and ten days, ending with the Day of Atonement (Yom Kippur).
What is the most significant Jewish holiday?
Yom Kippur
Is Passover the most important Jewish holiday?
Passover is one of the most important religious festivals in the Jewish calendar. Jews celebrate the Feast of Passover (Pesach in Hebrew) to commemorate the liberation of the Children of Israel who were led out of Egypt by Moses.
What do Jews remember at Pesach?
Pesach, sometimes called Passover, is one of the most important Jewish festivals. Jews remember how the Israelites left slavery behind them when Moses led them out of Egypt more than 3000 years ago. Moses went to see the Pharaoh many times, but each time he refused to release the Israelites.
Why do Jews celebrate Passover BBC?
Moses lived in Egypt. He saw that the Israelites were being persecuted so he went to see the pharaoh. God told Moses that the Israelites should mark their doorposts with lamb’s blood so that the angel of death could ‘pass over’ their houses and spare them from this plague. This is why the festival is called Passover.
Is the Passover and the Lord’s Supper the same?
As with the Passover, this new event, called by various names throughout the generations — the Lord’s Supper, the Eucharist, Holy Communion — was to be used as a remembrance of what Jesus had done for us on the cross.
Are mashed potatoes OK for Passover?
As the main “allowed” starch of the holiday, potatoes are something some people actually get sick of. But potatoes on Passover don’t have to get boring. Just think: potatoes can be mashed, smashed, fried, boiled, broiled, grilled, sliced, Hasselbacked, or chopped. No matter who you are, there is a way to enjoy potatoes during Passover.
|
References in classic literature
And the greatest degree of evil-doing to one's own city would be termed by you injustice?
And so of the individual; we may assume that he has the same three principles in his own soul which are found in the State; and he may be rightly described in the same terms, because he is affected in the same manner?
The objects of the Union among the States, as described in article third, are "their common defense, security of their liberties, and mutual and general welfare." The terms of article eighth are still more identical: "All charges of war and all other expenses that shall be incurred for the common defense or general welfare, and allowed by the United States in Congress, shall be defrayed out of a common treasury," etc.
When Philip was put in the study he could not help seeing that the others, who had been together for three terms, welcomed him coldly.
They were, consequently, the first dispossessed; and the seemingly inevitable fate of all these people, who disappear before the advances, or it might be termed the inroads, of civilization, as the verdure of their native forests falls before the nipping frosts, is represented as having already befallen them.
Luker's terms. After all, he had a year at his disposal, in which to raise the three thousand pounds--and a year is a long time.
The note of terms was plain, straightforward, and comprehensive, at any rate.
Thirdly, That the terms offered to the person who should undertake and properly perform these duties were four guineas a week; that he was to reside at Limmeridge House; and that he was to be treated there on the footing of a gentleman.
The employment was likely to be both easy and agreeable; it was proposed to me at the autumn time of the year when I was least occupied; and the terms, judging by my personal experience in my profession, were surprisingly liberal.
I ask to be allowed to come to terms, supposing your document is all correct.'
'Now, mark, Boffin,' returned Silas: 'Mark 'em well, because they're the lowest terms and the only terms.
I must give in to the terms. But I should like to see the document.'
|
At first glance, one might conclude that, compared with other provinces of Iran, Kerman is not a land of comprehensive attractions. But reality and facts have never been captured in a single glance, so we dig deeper and go a little further.
Kerman is one of the most amazing provinces of Iran when it comes to environment, geography, and historical heritage. The province also contains some of the richest land in the country, a main resource for Iran’s industries.
History usually comes with the most tragic events, and Kerman, which embraces Bam, is also full of terrors, as when Agha Mohammad Khan Qajar heartlessly had the eyes of many people in Bam and Kerman gouged out, just to prove that defending a hero and standing on the right side has its consequences. Before that, Kerman, including Bam, suffered major attacks and was burned to the ground, only to revive, like a golden-red phoenix.
This province holds both the hottest spot in Iran (indeed, in the world) and some of the country’s coldest, and apart from its strange and wonderful deserts, it has villages with pristine nature and beautiful rivers that add to the sights of the region.
This World Heritage Site represents one of the greatest phenomena the Kerman province has produced: the world’s largest adobe building, located in Bam, Kerman.
The city of Bam lies in the southeast of the Kerman province, on the ancient Silk Road, and it represented (and still represents) a clear route to Sistan, one of the great hubs of trade. It is 195 km away from Kerman, and UNESCO’s World Heritage Site, Arg-e Bam, is the main landmark of Bam: a historical city that was once ranked among the most desirable places to visit in Iran.
The exact date of Bam’s construction is not known. Many attribute its historical city and citadel to the Parthian and Sassanid dynasties, although the Achaemenid Empire is considered to have built the base structure. What we can deduce from some solid documents is that Bam was very popular and prosperous. Beyond some politically important matters, Bam’s economy was very special and unique: the textile industry, especially tailoring, added to the attractive factors in Bam.
It can be said that Bam has gone through one of the worst natural disasters of recent decades. Many were devastated by the shocking earthquake and many were killed, not to mention those who survived while everything they held dear was lost right in front of their eyes. The great historical city of Bam collapsed; the world’s largest adobe structure became nothing but earth.
Before the 2003 earthquake, Bam was known as the last place where Lotf Ali Khan (Zand dynasty, 1751–1794) took refuge, and for a while it was considered the capital of Iran; but upon Agha Mohammad Khan Qajar’s victory over the Zand army the city fell into his hands, and many men, women, and children of Bam were executed.
The Citadel of Bam was the largest adobe building in the world before the earthquake in 2003. This earthen city was erected around the fifth century B.C. and flourished until the middle of the 19th century A.D. It had a reputation of its own, and generation after generation kept its lineage. One of the reasons the Citadel was so famous, drawing high-ranking people, merchants, and others, was the Silk Road nearby. As a result, many caravans traveled to the city, increasing its prestige as well as its renown.
This Citadel is a complete adobe structure, and for this reason it has been severely damaged over the years. However, the general plan of the city was quite clear before the earthquake. The citadel comprised four key sectors: the residential district, the stables, the barracks for the army, and the governor’s residence (the main building).
Arg-e Bam used to have some 38 lookout towers and 4 entrance gates, and the outer defense wall was bounded by a moat. The Government Quarters stood on a hill, protected by a double fortification wall, overlooking the city. The most extraordinary sections were the Mosque, the bazaar, the Mirza Na’im ensemble, and the Mir House.
High walls surrounded the city and kept invaders and wild animals out, and the entrances to the city were tightly controlled. The houses of farmers and the poor generally had no more than two or three rooms; middle-class houses had three or four bedrooms; and the houses of the nobles and aristocrats had more rooms, porches, and facilities such as a school and a sports field. Finally, at the center of the city, well protected and with a clear view of the whole town, stood the government citadel.
Bam Citadel is included in the UNESCO World Heritage List, which is why, after the earthquake, many countries rushed to its aid and paid for the repairs. The serious damage caused by the earthquake has been partially restored, and the Citadel is almost back to its former shape. Still, those who saw Bam and the Citadel before the earthquake were very lucky; what we need now to grasp the greatness of Bam, alongside the effort to restore it, is an illustration and a little imagination.
|
Marine seals
Two species of seal are found in the coastal areas of Finland: the grey seal and the Baltic ringed seal.
The grey seal is the largest in the Baltic Sea
The grey seal, i.e. Halichoerus grypus, is the largest and most abundant seal species in the Baltic Sea. There are currently approximately 30,000 individuals residing in the Baltic Sea, one-third of which occur in the marine areas of Finland. The numbers of grey seals are estimated from assessment flights made in spring when the seals are moulting. The adult males are easy to identify because of their ”Roman noses” and they may reach up to 300 kilograms in weight. Female grey seals are paler in colour and leaner, with body weights usually remaining below 150 kilograms.
Grey seals are social animals and gather in large groups composed of as many as 1,000 individuals. On calm summer nights, their ”song” can carry dozens of kilometres. Grey seals are nomadic and in the space of a week may travel as much as 500 kilometres.
Seal pups are born in early spring
Between February and March, females give birth to a single pup weighing about 12 kilograms either on the open ice or an outer skerry. The females nurse the pups for three weeks, during which time they fatten rapidly, increasing their body weight to 40 or 50 kilograms. The fattest pups grow on the open ice, while those on islets and skerries have lower body weights.
During their nursing period, seal pups exchange their white, wavy baby fur for a shorter adult pelt. Females come into heat towards the end of the nursing period, and the adult males spend much of this time near the islets where the juvenile nurseries are. Each year between May and June, grey seals older than one year moult their fur on the outermost skerries.
A grey seal eats over five kilograms of fish each day
Although a fully-grown grey seal consumes between five and eight kilograms of fish per day, there is a large seasonal variation in their food intake. The abundant herring forms the dietary basis for grey seals of all age classes.
Salmon is also highly favoured as a food item by grey seals and they can catch it most easily from fishermen’s traps and nets. Some adult males have even become specialised in looting fishing gear. In the Baltic Sea, the highest proportion of damage caused by seals to commercial fishing is by the grey seal. The eaten catch and the broken fishing gear cause significant problems for commercial fishermen.
A grey seal and a fish trap.
A grey seal inspecting fishing gear.
The Baltic ringed seal is dependent on snow and ice for its survival
The Baltic ringed seal, i.e. Pusa hispida botnica, is the smallest of the Baltic Sea seals, growing from 50 to 120 kilograms in weight. Like the ringed seals of the Saimaa and Ladoga lake systems, the Baltic ringed seal is a subspecies of the ringed seal. It is highly dependent on ice and snow as suitable environments for reproduction and moulting, and it is capable of inhabiting fast ice in marine areas with the help of breathing holes in the ice. Therefore, climate change is one of the greatest threats to the Baltic ringed seal in the near future.
The tiny seal pups are born in a nest of snow
New-born Baltic ringed seal pups weigh about five kilograms. They are born between February and March in a lair dug by the female usually from a snowdrift on the pack ice. Females nurse their single pups for five to seven weeks when the white seal pup fur changes to the typical ringed pattern of the adult pelt. The female comes into oestrus during the nursing period. Baltic ringed seals become sexually mature from three to six years of age and may live for forty years.
Baltic ringed seals moult their fur annually towards the end of April when the last of the sea ice is melting. It is at this time when the abundance of this species is assessed by aerial surveys.
The habitats of the Baltic ringed seal occur in the northern Baltic Sea and the Bay of Bothnia
For the most part, Baltic ringed seals live in the northern parts of the Baltic Sea, north of a line running between the Stockholm Archipelago and the Gulf of Riga. Only on rare occasions are individuals encountered south of this line. Therefore, the distribution of ringed seals in the Baltic Sea reflects the sea areas most likely to be frozen annually.
Most of the Baltic ringed seal population lives in the Bay of Bothnia, where it is estimated there may be up to 20,000 individuals. At least one thousand individuals are living in the Gulf of Riga. Currently, a total of only a few hundred ringed seals are found in the Archipelago Sea and the Gulf of Finland combined. These breeding populations are considered endangered.
A Baltic ringed seal on the sea ice.
The Baltic ringed seal is dependent on winter ice cover for its survival.
Baltic ringed seals mainly eat herring
The Baltic ringed seal predominantly eats small shoaling fish about 10 centimetres long, and herring forms its main diet. In the Bay of Bothnia, it not only eats three-spined stickleback but also vendace. Occasionally, they will also eat crustaceans, such as bottom-dwelling isopods, i.e. Saduria entomon. On average, a Baltic ringed seal consumes 3.5 kilograms of food per day. This seal species fasts in the spring, while the peak of its food intake occurs in late summer and autumn when it fattens itself for the winter.
Baltic ringed seals are mainly solitary in nature and they are not particularly social. However, they may occasionally be observed resting in small groups.
There are seven seal reserves in Finnish marine areas
There are seven seal protection areas in Finnish state-owned sea areas, which specifically protect the main moulting and resting islets used by grey seals. The total area of these reserves covers 188 km², which is only 0.37% of the Finnish marine area.
In addition to species protection, seal protected areas also benefit research, as well as species monitoring. Although some protected areas may also play a role in the protection of the Baltic ringed seal, it is more difficult to identify equally clear protected areas for this ice-dependent species. Seals have also been observed in several marine Natura 2000 nature protection sites.
Grey seals on a rocky islet.
Grey seals gathered on a skerry in the outer archipelago.
Finland has a long history of seal hunting
In Finland, both the grey- and ringed seals of the Baltic Sea are considered game animals. For centuries seal hunting has been part of the archipelago culture and the income from seals was an important additional source of livelihood for fishermen. Seal pelts, meat, as well as blubber and oil were exploited in many ways. Seals were also considered as pests and their hunting was encouraged.
In Finland, a bounty was paid for killing marine seals until the mid-1970s. The decline in seal populations began to appear in seal catches as early as the 1960s. Indeed, the initial seal population collapse due to overhunting was later exacerbated by environmental toxins. Subsequently, marine seals were protected from hunting.
With the increase in population, seal damage also increased. As a result, seal hunting was resumed, first for grey seals and later for ringed seals. Finland's current annual catch quota for grey seals is 1,500 individuals, of which 450 are allocated to the marine areas around the Åland Islands. However, less than half of the total catch quota is actually filled.
Baltic ringed seals may only be hunted in the Bay of Bothnia, where the Finnish annual catch quota is 300 individuals. In addition to Finland, marine seals are also hunted in Sweden and, to a lesser extent, in Estonia and Denmark. Due to sea conditions, seal hunting is more challenging than average hunting. The EU ban on trade in seal products currently prevents the wider commercial exploitation of hunted seals.
|
A Proteomic Map of the Human Retina
Feb 21 2018
Understanding the distinct features of the fovea, macula, and periphery is crucial in the development of effective targeted treatment therapies for various conditions, including age-related macular degeneration (AMD), cystoid macular edema, retinitis pigmentosa, and diabetic retinopathy.
These areas all serve unique functions, and all are uniquely vulnerable. In a recent study, researchers examined variations in protein levels across these different regions of the human retina, and how those protein levels affect molecular predisposition to ophthalmic diseases.
“Many vision specialists are aware that certain retinal diseases, such as AMD and diabetic retinopathy, affect different anatomic regions of the retina, like the fovea and macula,” study author Gabriel Velez, BS, told MD Magazine. “However, it’s unclear what is happening on the molecular level that makes these regions susceptible to these diseases.”
The team looked at foveomacular, juxta-macular, and peripheral retina punch biopsies to determine protein levels and identify patterns of expression. They used liquid chromatography-tandem mass spectrometry (LC-MS/MS) to measure retinal protein levels, and 1-way ANOVA, gene ontology, pathway representation, and network analysis to ascertain protein expression.
Reactive oxygen species (ROS) at elevated levels can cause damage to DNA, proteins, and lipids, and in turn, can lead to apoptosis and genetic dysregulation. Because the retina is the most metabolically active tissue in the human body, it is especially vulnerable to this oxidative stress, particularly when the retina’s natural defenses against these species become less effective with age.
According to the authors, this decline happens throughout the entire retina, but many diseases are more inclined to develop in certain areas.
“In this study, we sought to catalog the thousands of proteins in the normal human retina using proteomics,” Velez said. “A detailed molecular map can give researchers and clinicians clues as to why certain areas of the human retina are more susceptible to different diseases, as well as what therapies may be most effective for treating them. For example, we found that the foveomacular region of the retina contained fewer proteins known to combat oxidative stress in the eye, which may explain why the foveomacular region is susceptible to oxidative damage in diseases like AMD.”
The authors identified “a mean of 1,974 proteins in the foveomacular retina, 1,999 in the juxta-macular retina, and 1,779 in the peripheral retina.” They said that 697 “differentially-expressed proteins included those unique to and abundant in each anatomic region.”
“We also interrogated our proteomics data for retinal antioxidant proteins that could be activated by available drugs and compounds, like ebselen and vitamin B3,” Velez said. “These neuroprotective drugs can be repurposed for different retinal diseases where oxidative stress is a contributing factor, like AMD and diabetic retinopathy."
The authors concluded that they “have developed a reliable and reproducible dissection protocol that makes use of readily available punch biopsy tools. Our focus on analyzing the proteomic profile of different retinal regions provides further insight into their molecular uniqueness and to understanding the pathophysiology of numerous regional retinal diseases.”
|
My understanding is that students find absolute value to be challenging to learn or understand. Off the top of my head, I can come up with two possible reasons for this.
1. Absolute value is a piecewise defined function. Piecewise defined functions are more difficult due to increased abstractness: they are not a simple formula, but include a conditional. (I do not know if this is true, but it sounds plausible enough.)
2. Absolute value is difficult because it combines algebra (changing the sign) with a geometric interpretation as the distance of a number from zero. And according to the thesis of Hähkiöniemi on the derivative [1], it is challenging for students to change between perspectives in a fruitful way.
Is one of these the reason for the difficulties, or maybe it lies elsewhere? As always, answers using scientific literature are the most valuable and ones relying on explicit personal experiences are also fine.
[1] Hähkiöniemi, Markus. The role of representations in learning the derivative. No. 104 in Reports of University of Jyväskylä, Department of Mathematics and Statistics. University of Jyväskylä, 2006. http://urn.fi/URN:ISBN:951-39-2639-7
• I think your first answer is most likely the main reason. Students understand functions as being things like x^2, because those are usually the only sorts of functions they’ve really had to deal with before.
– Jessica B
Aug 16 '19 at 12:48
• I agree with @Jessica B, and I think another reason (which is possibly included in her reason if interpreted broadly enough) is that absolute value manipulations are different from the standard algebraic manipulations they're used to (distributive property, combining like terms, etc.). Indeed, the concepts in your second reason are often a way of getting students to better "visualize" absolute value manipulations and to help keep students from making incorrect manipulations or incorrect deductions involving absolute value manipulations. Aug 16 '19 at 13:01
• The main difficulties I see are when students are asked to deal with absolute value inequalities. Inequalities are very hard for most students to deal with.
– Sue VanHattum
Aug 16 '19 at 16:10
• Because it's intricate. Yes, even just one or two "if then"s still makes something intricate! We are meat, not silicon. Just explaining a rule or set of rules is not adequate for us if we have to remember some intricacies. Instead of tacitly searching for some lock-and-key explanation (what is the hurdle and how do we adroitly remove it), I recommend: drill, drill, drill. And then drill some more. This is the way to familiarity and to making struggling concepts routine. It's no different in music or sports. That's the way us meat people are.
– guest
Aug 18 '19 at 18:00
• I suspect most students have no trouble with the notion of the absolute value of a number, and that those who have trouble with the absolute value function do so because, more generally, they have trouble with the function concept (the same reason they struggle with the notion of the square root function).
– Dan Fox
Aug 21 '19 at 19:11
(My answer is just a guess and not based on any formal research.)
I suspect the absolute value function may be difficult to understand because it involves "negative numbers that aren't negative." One way to define the absolute value of $x$ is:
$$|x|=\left\{\begin{array}{rl}-x, & x<0\\x, & 0\le x\end{array}\right.$$
I think the $-x$ confuses students, causing them to think that "sometimes the output of the function is negative."
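A worked instance (my addition, for concreteness): take $x=-3$, so the first branch applies, and $$|{-3}| = -(-3) = 3.$$ The formula $-x$ returns a positive number precisely because $x$ itself is negative.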
• Yes. In my experience, many students struggle to accept that for some values of x, it is true that |x| = -x.
– idmercer
Aug 22 '19 at 20:05
• This detail can be hard for students to grasp, but in many contexts the formal definition is entirely hand-waved away. I had one (weak) math major in a discrete math course a year ago, who I assume had done many absolute-value exercises in the past, who came to me after presenting the formal definition claiming "this is different from the absolute value we've been taught before", and was surprised when I could show it produced the values he expected. Aug 11 '20 at 13:56
• This is why I vastly prefer using the term "the opposite of $x$" to "negative $x$" for $-x$ in front of students, or "opposite $x$" for short. That removes a lot of the confusion about what to do with $-x$ if $x<0$.
– Forklift17
Aug 12 '20 at 19:23
My experience is that weak students latch onto the idea that the absolute value of a number is always positive. They are fine working with constants. When you introduce a variable, it all falls apart. To these students it is clear that:
$$\lvert-x\rvert= x$$
After all, the absolute value sign takes away the negative sign. If you want students to understand the absolute value of an algebraic expression, then you MUST get them past this hump. All this is before you move on to word problems and inequalities.
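A single counterexample (my addition) makes the failure concrete: for $x=-5$, $$\lvert -x \rvert = \lvert 5 \rvert = 5 = -x \neq x,$$ so the claimed rule already breaks for the first negative value a student tries.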
• This is the most concise description of the biggest problem, though Joel's answer also gets to this point fairly directly.
– kcrisman
Aug 7 '20 at 14:30
• @JoelReyesNoche thanks for catching my error
– Amy B
Aug 9 '20 at 4:11
• @JTP-ApologisetoMonica thanks for the edit
– Amy B
Aug 9 '20 at 4:12
• @kcrisman yes Joel's answer also addresses this point. He has a more formal definition of absolute value, which is often not introduced to high school students. I was thinking of how such students might fail in understanding.
– Amy B
Aug 10 '20 at 7:56
• Yes, that is exactly why I also upvoted your answer.
– kcrisman
Aug 12 '20 at 15:13
I am writing this based on pure observation (e.g., entering year four of teaching this topic to secondary school students, and having co-taught a minicourse for teachers on absolute value functions$^\star$).
There are a lot of definitions/interpretations of absolute values:
• the (abstract) axiomatic one;
• the piecewise or "case-based" one (provided by Joel Reyes Noche);
• the colloquial function one (erase any negative sign and return the result);
• the equivalent function one of $x \mapsto \sqrt{x^2}$;
• the geometric one (where $|a-b|$ is the distance on the number line between real numbers $a$ and $b$);
• the positive difference interpretation (subtract the lesser number from the greater one, i.e., $|a-b|=\max(a,b)-\min(a,b)$ for real numbers $a$ and $b$; several of these interpretations are compared numerically in the sketch after this list);
• the graphical one communicated by a $\mathsf{V}$-shaped curve.
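As a quick sanity check, the sketch below (an illustrative addition, not part of the minicourse materials) compares three of these interpretations numerically and confirms that they agree:

```python
import math
import random

def abs_piecewise(x):
    return -x if x < 0 else x  # the case-based definition

for _ in range(1000):
    a = random.uniform(-100.0, 100.0)
    b = random.uniform(-100.0, 100.0)
    d1 = abs_piecewise(a - b)     # piecewise definition applied to a - b
    d2 = math.sqrt((a - b) ** 2)  # the x -> sqrt(x^2) interpretation
    d3 = max(a, b) - min(a, b)    # the positive-difference interpretation
    assert math.isclose(d1, d2) and math.isclose(d1, d3)
```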
A complication that has already arisen in the above collection of definitions/interpretations is its ambiguity around whether we are defining a function whose input is one number or two numbers; the one number version can, of course, be viewed as the two number version where (at least) one of the entries is zero. Nevertheless, I think that sometimes the topic is introduced with multiple approaches and without drawing this distinction; e.g., if you try to help a student get a grip on absolute value functions by asking the difference in age between them and a friend, then you are effectively asking about the two input interpretation, which could be confusing if you had just introduced it as, e.g., a piecewise function.
There may also be an issue of timing: When I delve into absolute value functions, equations, and inequalities, it is in an Algebra 2 course; in our present course sequence, this means that students have done Algebra 1 already, and then spent a year on Geometry. In other cases, you may have this topic broached in an Algebra 1 course; so, students are in the throes of matching graphical representations, geometric representations, symbolic representations, and so forth.
As to covering absolute values pedagogically, I think there is a great value in viewing more general absolute value functions of the form $x \mapsto a|x-h|+k$ for real parameters $a, h, k \in \mathbb{R}$ in terms of transformations, and, in addition, connecting this to quadratic functions written in vertex form, $x \mapsto a(x-h)^2 + k$, to emphasize similarities and differences (and to reinforce terminology: intercepts, roots, vertex, concavity, end behavior, domain, range).
$\star$: I would be remiss if I failed to link our (my colleague/department chair, Liz Brennan, and my) freely available materials from a minicourse taught through Math for America:
There are a lot of materials in there, which range from a question I asked on MathOverflow (MO 301514) to a misformulated problem (p. 9 #6 is impossible!) to problems that participants formulated during our final meeting (Desmos link).
The three-meeting course linked above is by no means exhaustive: In fact, I have been thinking more about absolute values and their role in defining equations as one endeavors to impose domain restrictions; for example, see the sequence of tweets that begins here.
• $\begingroup$ What do you mean by, "the (abstract) axiomatic one", and how does it differ from "the piecewise or 'case-based' one"? $\endgroup$ Aug 11 '20 at 13:59
• $\begingroup$ @DanielR.Collins I was thinking of the four axioms at: en.wikipedia.org/wiki/Absolute_value as combined with en.wikipedia.org/wiki/Ostrowski%27s_theorem if necessary (altho that goes way beyond a first run at absolute value functions!). $\endgroup$ Aug 12 '20 at 6:37
• 1
$\begingroup$ Interesting, so a more generalized absolute value, not necessarily the standard one. Thanks. $\endgroup$ Aug 12 '20 at 13:44
In my experience, this is the first really clear example that students experience of dissonance between how something looks and what it is. One of the most common errors I see regarding absolute value - before bringing in variables, of course - looks like this:
$$|5 - 6| = 5 + 6 = 11$$
Whereas, of course, if I ask the student "what's $5 - 6$?" and then "okay, what's the absolute value of that?" they'll get the answer perfectly correct. The issue seems to be that they have a mental picture of "absolute values make negative signs positive", and they see a negative sign in the expression "$5 - 6$".
As experts, we understand that the absolute value operates on numbers, not on symbols; it cares whether the number it's given is negative, not whether the way the number is written includes a negative sign. But that's not an easy distinction to make if a student has grown up with the usual symbol-based approaches for problem-solving.
This is a similar issue to what often happens with simplifying fractions - one common mistake looks like this:
$$\frac{3 + 2}{6 + 2} = \frac{3}{6} = \frac12$$
A student who makes this mistake is thinking of cancellation as a symbolic action ("delete the same symbols from the top and bottom") instead of an arithmetic one ("multiplying the numerator and denominator by the same number is the same as multiplying by one").
Many students I've worked with get around this problem with absolute value by just memorizing the rule "simplify inside the absolute value first". That's correct, but it suddenly stops helping when variables come into play; that leads them to say things like this:
$$|-x| = x$$
The situation is made even worse when they're presented with the piecewise definition that other people have mentioned in the answers. My sense is that it's largely because they feel like they're being asked to accept a completely new meaning to a word that feels familiar; it's as if you were trying to convince them to accept that the word "elephant" now refers to a small bird. To the average student, "absolute value" means performing this very simple symbol-based process of removing negative signs; now you're telling them it's actually a complicated process of conditions and arithmetic!
At my school (a community college in California) the curriculum is set up so that students first take an algebra course in which essentially every function is a linear function, and only later do they do anything at all with nonlinear functions. Nonlinear functions are just more complicated, so they require more thought about the logic. Thinking about logic is harder than manipulating formulas according to recipes.
As an example, if I tell a student that $x^2=4$ and ask them to solve for $x$, the formula-manipulation approach would be "do the same thing to both sides," so they may get $x=2$. It requires an extra logical step to realize that there are two roots. One way to write this is $|x|=2$, but I'll see students do things like $x=|2|$, or $|x|=|2|$ (which is correct but kind of silly). Understanding why $|x|=2$ is the right choice requires some more logical thinking.
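One way to write out the extra logical step explicitly (my notation, not the original answer's) is the chain
$$x^2=4 \iff \sqrt{x^2}=\sqrt{4} \iff |x|=2 \iff x=2 \text{ or } x=-2,$$
which relies on the fact that $\sqrt{x^2}=|x|$, not $x$, for all real $x$.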
An example that comes up in physics is that students will memorize the fact that $a=-g$ for free fall, despite instruction to the effect that this depends on the coordinate system, and that they need to pick a coordinate system first. Some textbooks even encourage this. Picking a coordinate system requires an extra logical step, which is hard if you aren't used to thinking about math logically. I can tell them that $g$ is defined as $|a|$ so that we can put a value of $g$ in the book without reference to any coordinate system, but again, this requires some logical thinking about topics that they aren't used to thinking about, and have gotten the impression that they never have to think about.
• 3
$\begingroup$ "Picking a coordinate system requires an extra logical step, which is hard if you aren't used to thinking about math logically." — I think an extra complication is that in many pre-college environments the axes are drawn with two arrowheads, and it is assumed that values increase in the right or upwards direction; in this case arrowheads indicate that this is an infinite line. I believe that an alternative approach when an arrowhead indicates the direction in which value increases, and the infinite length of the axis is assumed just because it is not terminated with a point, is less ambiguous. $\endgroup$
– Rusty Core
Aug 23 '19 at 23:14
Absolute value is difficult for students because they have difficulty parsing and simplifying logical statements.
Some of the results of working with absolute value statements seem to actually hide the inner mechanics of how the logic of an absolute value statement plays out. Because of their piecewise nature, you must understand logical statements to work deeply with absolute values. Students often develop a superficial, rote approach to these expressions because it is simpler than actually describing what is going on (rote is also sufficient for most exercises that I have encountered).
The best anecdote that I can offer is the following question, asked of me by a student in 11th grade: When you solve the inequality $|x-2|\le 3$, the solution is $-1\le x\le 5$, which is an AND statement ($-1\le x$ AND $x\le 5$); but the solution process involves an OR statement ($x<2$ OR $x\ge 2$) to address both parts of the piecewise definition. Where does this change from OR to AND happen?
It is a question that really requires one to express the solution process logically and clearly in order to answer. (The answer is that the statement is always an OR statement that just happens to be equivalent to the AND one: $(x<2 \text{ AND } x\ge -1)$ OR $(x\ge 2 \text{ AND } x\le 5)$, where the outermost OR is the continuation of the two distinct possibilities from the solution process.)
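Written out as a worked derivation (my formatting of the argument above), the case analysis reads:
$$|x-2|\le 3 \iff \underbrace{(x<2 \text{ and } 2-x\le 3)}_{-1\le x<2} \ \text{ or } \ \underbrace{(x\ge 2 \text{ and } x-2\le 3)}_{2\le x\le 5} \iff -1\le x\le 5.$$
The outer connective is an OR throughout; it just happens that the union of the two case intervals can be rewritten as the single AND statement $-1\le x\le 5$.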
All of the notions of the different interpretations of absolute values are tools to aid students' abilities to intuit answers to problems involving absolute values, but do little to actually let students rigorously understand them. To do so, you would have to actually pick one as the definition and relate the others rigorously to that. This is an exercise that almost never happens in classes.
In truth, I think that students find absolute values difficult because the subject is legitimately difficult! Otherwise, why would we bother having so many ways to think about how to interpret such statements? These interpretations are shortcuts around the rather tedious legwork of working directly with the logic (my personal chosen definition). I think that most teachers probably don't know absolute values as clearly as we would like them to.
I offer the following questions as food for thought on the difficulty of absolute values:
1. When solving equations, as opposed to inequalities, it is more common in secondary school to find a countable number of solutions rather than an uncountable number, but the solution set for the equation $|2x-2|-|x|=2-3x$ is $[0,1]$. Why can you get entire intervals of solutions to such an equation? (A case-by-case check appears after this list.)
2. How do you graph $y=|x-|x-1||$?
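For the first question, a case-by-case check (my working, added for illustration) shows where the interval comes from. With breakpoints at $x=0$ and $x=1$,
$$|2x-2|-|x|=\begin{cases}(2-2x)-(-x)=2-x, & x<0\\ (2-2x)-x=2-3x, & 0\le x<1\\ (2x-2)-x=x-2, & x\ge 1\end{cases}$$
On the middle piece the equation $2-3x=2-3x$ holds identically, so every point of $[0,1)$ is a solution; the case $x\ge 1$ gives $x-2=2-3x$, i.e., $x=1$, while the case $x<0$ gives $2-x=2-3x$, i.e., $x=0$, which lies outside that case. Hence the solution set is the entire interval $[0,1]$.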
It is difficult because there are many things implicitly done behind the scenes:
• A function defined with case analysis, usually for the first time.
• Solving problems such as equations and inequalities usually involves case analysis over all variables and all their possible values, and then checking whether each candidate satisfies the initial conditions; in other words, backtracking. But teachers rarely show this general solving strategy or draw analogies, for example comparing an absolute value problem to some real-life situation. They just do an example and that's it.
• Logical statements (and, or, De Morgan's laws, etc.) when solving equations and inequalities, along with set operations that teachers don't explain well.
• Students don't have good algebra intuition: no one explains in detail what happens with the distance interpretation of $|x-y|$ (for example, what happens when $x$ is negative, when $y$ is negative, and in the other cases), or how $-x$ can be positive.
• Seeing other properties, like the reverse triangle inequality, might be intimidating.
• Graphing on the $(x,y)$ plane, seeing bent, piecewise-linear graphs for the first time, and then matching them to an algebraic equation is a challenge too.
• Convoluting things even more, for example by adding parameters and posing questions about solutions, without explaining an overall problem-solving strategy.
• Resemblance to $x^2$ and the other issues that other commenters mentioned.
There is a lot of truth in many of the answers here, but they are all addressing this from a pure math standpoint. The logic of absolute value isn't hard, though - even if there are many examples here of weak prior knowledge that can cause problems.
Absolute value is hard for students to learn because it is almost always taught in extremely abstract and boring ways that just repel the vast majority of people from math while powerfully discouraging sense-making.
Here is an example from the ironically named "Math is Fun - Absolute Value" website
[screenshot from the "Math is Fun" absolute value page]
How fun and interesting is this? If we were to poll 1000 students, would 5 of them enjoy making meaning of this? Would even one see absolute value as useful?
I doubt it. The normal reaction almost assuredly would be something like "This is just more proof that math is just a bunch of weird rules and steps to obey. More rote crap to memorize." With this as a foundation, there is almost no way to make sense of any of the other very clear and logical points made in this thread.
In order to get students interested in the meaning of absolute value they need context in which absolute value thinking is useful.
You are a purchasing agent at a pharmacy. You need to order a large amount of a generic cancer drug, and four different factories offer to sell it to you. You ask for a sample of their pills to check their quality. Each pill should have exactly 700mg of active ingredient, but no factory is perfect.
Factories A and B are brand new and not ready to produce enough pills for you. They send you pills just to get your feedback. Factories C and D are ready to sell to you now. They send a much larger sample of pills for you to analyze.
Your mission: (1) Recommend to your manager which factory to buy from. (2) Justify your answer. (3) Develop a written process for evaluating quality of all future pill samples.
Do great work, because your patients' lives depend on it!
Pill Quality Case
Instructions are on the first sheet. Data is on the second sheet. "Reflect explicitly to consolidate" is on the third sheet.
Here's a little snippet of data.
[table: a sample of the pill data]
First you can have them estimate the quality of each factory. B is clearly better than A.
Now, you can guide them through calculating, totaling, and averaging errors for each factory, in which case a pill being, say, 200mg over and another pill being 200mg under exactly cancels. And whaddaya know: Factory A is perfect and Factory B sucks! Math beats common sense! According to the total and mean error, we should go with Factory A. Maaaaaaybe grandma will scream in unnecessarily excruciating pain and the baby will go into overdose, but according to mean error calculations, that is OK. lolz
Then, the more outspoken ethical sticklers will say that they, in fact, don't want to torture grandma and kill a baby. (BUT, LIKE, MEAN ERROR CALCULATIONS YO, IT'S JUST MATH DUDE, GOTTA LISTEN TO DEM NUMBERS.)
Hmm. Maybe 200mg under and 200mg over should not cancel out when added up, because an overdosing baby doesn't really cancel out screaming grandma. So we could, if we were very careful, always make sure to do:
• Error = 700mg - (mg of active ingredient) when the mg of active ingredient is less than 700mg
• Error = (mg of active ingredient) - 700mg when the mg of active ingredient is greater than 700mg
And that way, we get no negative numbers. No more errors that offset each other.
We just have to be really careful on every single calculation. Every. Single. Pill.
That's pretty easy for Factories A and B.
Who wants to do all those calculations for the hundreds of pills from Factories C and D? Hands up for volunteers! Hands up please!
Uhh... anyone... anyone?
If only there were an easier way to get rid of those damn negative errors... Wouldn't that be awesome?
At this point, you can introduce absolute value (and clicking/dragging formulae in spreadsheets) and every student knows why it's necessary and what it means. You could go a little deeper here into this case, formally relate the two error formulae to absolute value, then gradually abstract away from this and towards all the other posts in this thread. It can now become obvious why $\lvert x-700 \rvert=400$ must have two solutions: An absolute error of 400mg can be caused by overage or underage.
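Here is a small sketch of that comparison (with made-up pill masses, not data from the linked case) showing how the mean error hides exactly what the mean absolute error exposes:

```python
# Hypothetical pill masses in mg of active ingredient; the target is 700 mg.
factory_a = [500, 900, 700, 300, 1100]  # wildly off target, but errors cancel
factory_b = [690, 705, 710, 695, 700]   # consistently close to target

def mean_error(pills, target=700):
    return sum(p - target for p in pills) / len(pills)

def mean_absolute_error(pills, target=700):
    return sum(abs(p - target) for p in pills) / len(pills)

print(mean_error(factory_a), mean_absolute_error(factory_a))  # 0.0 240.0
print(mean_error(factory_b), mean_absolute_error(factory_b))  # 0.0 6.0
```

Both factories come out with a mean error of zero; only the absolute version reflects the danger to patients.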
This stuff can all make sense because absolute value has meaning in their minds.
Anyways, if you have any suggested updates for the case or other thoughts on how I teach absolute value, please reply. :-)
• $\begingroup$ I like your example. For me, I consider distance as one fairly simple way to think about absolute value. Along a number line, the points at both $-2$ and $4$ are the same distance of $3$ from the point at $1$, since you need to move $3$ units in both cases to get to $1$, with $|-2 - 1| = |4 - 1| = 3$. In particular, distance doesn't care whether the other point is to the left or right of the point being compared to, just how many units it needs to move, which is given by the absolute value of the difference in values. $\endgroup$ Aug 12 '20 at 0:22
• $\begingroup$ Thanks! Do you accompany your distance approach with some kind of context? I've never been able to get students interested in this without one. $\endgroup$ Aug 12 '20 at 6:51
• $\begingroup$ You're welcome. When I tutored a $2$'nd year university engineering calculus course for a couple of years, I was asked several times about absolute value. This was over $30$ years ago, so I don't recall many details now of what I was specifically asked and how I responded, but I vaguely recall mentioning about its relation to distance at least once or twice, although I'm not sure of what sort of context I used then. Regardless, I agree giving appropriate context generally helps students to better understand mathematical concepts. $\endgroup$ Aug 12 '20 at 7:15
These are great answers to a great question.
Although my answer might only be a rephrasing of what has already been said, I think a main problem with absolute value is that it can be ambiguous (or have similar wording but different answers).
For example, if I say the difference between $x$ and $4$ is $3$ then what is the answer?
1. Is it $x-4=3 \to x=7$, using the "difference"?
2. Is it $|x-4|=3$ and then it becomes $x-4=3 \to x= 7$ and $-(x-4)=3 \to x = 1$ using the absolute difference?
To me, the second option seems more likely if you think geometrically about placement on the number line, but it's probably the first answer, since the question didn't state "absolute difference".
Another thing I thought of for absolute value is that it can be unnecessary, as in $|x^2|=x^2$, but then necessary elsewhere, since $|x| \neq x$ in general.
Dear Good Shepherd Parishioners,
You’ve probably noticed that I’ve been wearing a peculiar looking hat at different parts of the Mass on Sundays and weekdays, and you may have wondered what it is. I thought I would use my column this week to shed some light on this seldom-used liturgical garment called the biretta.
A biretta is a square cap with three or four pointed ridges, often adorned with a pom or tassel at the top center. It is worn as a ceremonial hat by Catholic clerics of many ranks, from cardinal down to seminarian. Cardinals wear red birettas, bishops wear purple, and priests, deacons and seminarians wear black.
The word biretta is Italian, although it likely evolved from the Medieval Latin word “birrettum.” This word literally means hooded cloak. Centuries ago, the biretta was simply a cap similar to the “pileus,” a skullcap worn by the Catholic clergy. The cap was worn under larger hats for a simple reason — protection against the cold. Given its practical benefit, church clerics and secular officials began to wear the early biretta in the 14th and 15th centuries.
Priests traditionally wore the biretta during High Masses, more elaborate ceremonies that included singing, the use of incense, and the participation of deacons and sub-deacons. The Catholic Church no longer classifies Masses as either High or Low, but Catholics sometimes still use the term “High Mass” to describe special, or more solemn, occasions.
In the past 50 years or so, the biretta has fallen out of use. Honestly, I am not sure if anyone can give an answer as to why, but it is something I personally like to use at Mass.
Just like anything we do repeatedly, the Mass can become something we take for granted, or view as mundane and ordinary. When this happens, we can go into “autopilot” at Mass and not really pay much attention to what is going on or think about what we are doing. This is even true for the priest at Mass!
One way we can fight against these tendencies at Mass is to take intentional actions against them by doing things that increase our sense of reverence. Ways of doing this include taking time for silence before Mass, looking over the readings before Mass, or wearing more formal clothing, among other things. I feel that the biretta is something small that, when combined with other things, adds a level of solemnity and formality to the celebration of Mass which, even in Ordinary Time, is something we should strive never to view as ordinary, or to approach out of a spirit of routine.
Just ask Fr. Weber
Do you have questions, comments or thoughts about what Fr. Weber wrote? Maybe you have a different question, or just want to ask something that has been on your mind? Fr. Weber welcomes your thoughts, questions and comments. Simply fill out the form below and your message will be submitted directly to him, and he’ll get back to you.
Ask Fr. Weber
No Such Thing As A Native American
Again, deposits over a certain time are assumed, and this does not fit what we know about deposits today, e.g., polystrate fossils and the lack of erosion between layers.
It is not possible to know what the ratio of C14 to N14 was at its creation. And after approx. 50k years, C14 is undetectable, so how can long ages be determined using this method?
Do an experiment, take a Mason jar, fill 3/4 with water, add a handful of soil, shake and let settle. Sediment layers will form.
No kidding, and seasonal flows create seasonal layers that can be dated layer after layer.
Polystrate fossils are not that complicated and pose no problem with traditional geologic dating.
At streams and rivers. What about the sediments on top of Mount Everest, for example? What about the inverted layering where what should be a 100 MYA layer is beneath a 10 MYA layer? Polystrate fossils? Did the tree not rot for 5 million years while the sediment built up?
What about them? Mountains are pushed up by massive geological upwellings.
Trees don’t rot when they are fossilized, the cells are replaced by silica and minerals.
Blood cells and tissue and C14 in dinosaur fossils?
Trees standing inverted with “millions of years of sediment” surrounding them? Sediment layers bend and curved but not broken?
Clam fossils on mountain tops. in the closed position?
Lack of the trillions of human fossils if Man is 50,000 y.o.?
Scientific evidence negates almost all of evolution's hypotheses?
How long does it take to replace live tree cells with silica?
Completely false.
How long fossilization takes depends on many conditions; it's now believed it can take anywhere from less than 10,000 years to over a million years, again depending on the conditions.
Clams sometimes die with their shells in the closed position.
Trees can be submerged in floods due to landslides creating lakes and if the water and temp conditions are right won’t rot for centuries. If they are buried in sediments due to continued flows they fossilize in the standing position without breakage.
Visit southern NM sometime if you’d like to see direct evidence of dramatic upwellings that pushed up mountains and ridges overnight, some hundreds or thousands of feet of rise in a single event.
Plate tectonics shows us that plates both rise and fall, and one may be forced up constantly or dramatically as another plate is forced under it.
No mystery, no magic, no problem or conflict with what we know from all the various disciplines.
As for human fossils, they’ve certainly been found all over the world. Only a very small portion of any species present at any time on earth though will become fossilized because it takes very specific conditions to create a fossil.
Depends on the conditions. It can vary dramatically.
Fossils of sea animals have been found everywhere on earth from very low to extremely high elevations again for the reason I said, upwelling due to plate tectonics.
Even if you believe it takes 10,000 years for a tree to fossilize, wouldn’t it rot in less time than that?
Of course the mountains rose and the sea floors receded. It is still occurring.
For fossils to form, they must have been buried rapidly, as evidenced by fish fossils frozen while eating another fish and mammoths buried in the standing position. Then there is the elastic tissue from dinosaur fossils. All explained by the Noahic Flood.
No, all of that predates the biblical flood considerably if you buy into the 5,000 year old creationist model.
No, they don’t have to be buried quickly, submerged animals and plants particularly in cold water won’t rot for a very long time and can be slowly buried in sediments and then fossilized.
Rapid melt off of glaciers at the end of the last few ice ages more than explains most fossilization as it creates tremendous flooding and erosion and so do things like asteroids, and massive quakes creating huge tsunamis.
The highest tsunami wave ever recorded was 1700’ high and was created by a landslide in Alaska, due to a fault shift, forcing a huge amount of water rushing up into a narrow valley.
An event like that moves millions of tons of debris in a flash and buries it at the bottom of the flow, in this case in deep, cold water.
The upright mammoths were apparently flash frozen due to rapid cooling and buried in glaciers and huge snowpacks which is why they still have soft tissue and were not fossilized.
“It seemed these fish actually had turned to stone in the blink of an eye.”
I can’t find the right youtube.
Fossilization can occur very fast.
Geological dating is unreliable. Even the Grand Canyon could have formed in a matter of a few years, not millions of years.
Petrified Forest in Arizona is probably the result of a flash flood. It didn’t take millions of years either.
ha , I was just thinking that
1 Like
It sure as hell isn’t “history”.
Nothing in that article actually suggests that actual “flash fossilization” occurs in nature.
They couldn’t even demonstrate, under incredibly well-controlled conditions in the lab, that they could “flash fossilize” a whole fish in anything less than centuries or millennia.
There’s zero evidence to support the contention that the petrified forest was “petrified” in anything less than thousands of years.
That it was likely buried in a massive flood under mountains of sediment has long been accepted.
Back on topic. Artifacts similar to ancient Hebrew relics have been discovered in N. America, along with pyramids in S. America, Nordic artifacts from early visitors to N. America, and Asian relics on the Left Coast.
DNA mapping shows the spread of human habitation from the NE Africa region through Turkey or Iran. I watched an interesting lecture on The Table of Nations by Paul Griffiths. He talked about the migration of people from that region, and how the names of the tribes are still evident in the regions of the world where they settled. It is worth watching if you have an interest in the subject.
I’m sure Vikings settled in North America (they also settled in Greenland, which was green then, and lived there for a few centuries).
Prior to these Scandinavians came the Celts, and most likely the Phoenicians as well.
Saturday, 6 October 2018
Pedal power
Originally published in 2013.
In perhaps one of the great ironies of human civilisation, mechanical devices to truly magnify human power came along as soon as we didn’t need them. Pedal-powered devices like bicycles appeared only after coal had already begun to transform the landscape – mass production was necessary for the standardised metal parts – and around the same time that gasoline was first being introduced as a fuel for automobiles.
We tend to forget, then, three important things about the bicycle. First, it remains the most efficient method of using our bodies, allowing us to attain higher machine speeds for longer than we would on muscle power alone – and without using any more fuel or causing any more weather to go haywire.
Bicycles have been used for so long as children’s toys and exercise equipment that we forget what useful technology they represent. They multiply our bodies’ speed and efficiency many times over, allowing us to travel miles without strain. Their widespread adoption in the late 19th century created a ripple of under-appreciated effects in society; for example, they allowed women to commute to jobs away from home and paved the way for the universal suffrage movement.
Second, bicycles have seen many improvements in the last hundred years, most of which have escaped the notice of anyone but enthusiasts. Many of the bicycles we use today function mainly as toys, and racing bikes are built for speed; sturdier bicycles – often going under the name of “military bicycles” – can still be ordered.
Most importantly, though, bicycles are only one of many possible pedal-powered machines, most of which were not used for transportation. Beginning in the 19th century, factories began to make, and stores to market, treadle machines for manufacturing everything from cigars to brooms to hats. Farms saw foot-powered harvesters, tractors, threshers, milking machines and vegetable bundlers. Machinists saw pedal-powered drills.
“…no matter how simple it seems to us today, pedal power could not have appeared earlier in history,” wrote Kris DeDecker in LowTech Magazine. “Pedals and cranks are products of the industrial revolution, made possible by the combination of cheap steel (itself a product of fossil fuels) and mass production techniques, resulting in strong yet compact sprockets, chains, ball bearings and other metal parts.”
Today, we have built a world that runs on fossil fuels, which will not last forever. Eventually we will not be able to depend on familiar machines like cars and electronics – either because we won’t be able to afford them, or to afford continually fixing them, or because fuel prices will be out of reach.
One way or another, we will have to go back to muscle power, and the best way to do that is to revive the lost technologies of pedal-powered tools. Most of these devices exist today only as a few rare museum specimens, but we should easily be able to build more. The irony, though, is that we need to build them while we still have fossil fuels.
“It is important to realise that pedal powered machines (and bicycles) require fossil fuels,” DeDecker writes. “If we burn up all fossil fuels driving cars, we won't be able to revert to bicycles, we will have to walk. If we burn up all fossil fuels making electricity to drive our appliances, we won't be able to revert to pedal powered machines, but to the drudgery that went before them.”
Perhaps more people around here will take to bicycles again, as I will now that I have a headlamp to light my way during the winter nights. Older people here remember when the bicycle was the most popular method for getting from one village to another, and the roads were safer then with so few cars. It’s possible that the schoolchildren of today will see those days again.
Leadership at home: this is about you and your children, not your politicians.
There are tough times in our family due to tension between us, the parents, and our two teenaged kids. They obey the house rules less and less. How can we maintain discipline?
We can very quickly miss the point once we concentrate on the issue of obedience and discipline. Pinpointing and zooming in on the children's side, and dealing with THEIR defiant behavior, is many times just a remedy for a failure.
Let us turn the question away from the issue of controlling the kids to the challenge of educating and guiding them.
For that purpose, I recommend that you take a look at YOUR end and re-assess the nature of your parental authority.
Forget the ‘I’m your parent, and therefore you do so-and-so because I say so’ stuff. It belongs to old days that are long gone. No parent should trust that method to keep working well for long, as it once did. It is your problem if you tend to stick to it; if you do, parents, you had better wake up. We are in the 21st century.
The term LEADERSHIP is called for.
The essence of leadership, in short, is the ability to build trust that creates sustained loyalty to you. It is also the process of influencing others to adopt and follow your directions and ideas. You, the parents, can become and remain such leaders through modeling and relationship building.
No, your home is not the political arena, but yes, parents should develop and maintain their leadership position if they want their children to follow their guidelines and their house rules. And please note: leading is not ruling! Parents who are their kids’ leaders tend to worry less about the intensity of the obedience they have established and more about the charismatic bonds they should create. They do not base their expectations on their children’s fear of punishment but on their kids’ decision to maintain their loyalty to their parents.
Since leadership is built through modeling and relationship building, I’ll first explain the concept of modeling:
‘Leading by example’ or ‘walk your talk’ is modeling. And you can check yourselves and your parenting style:
- Do you, the parents, involve your kids in family budget planning and spending? This is an opportunity to model financial awareness and responsibility.
- Do you expose them to the various ways you choose to refrain from substances while socializing? This models a decision-making process regarding values and cultural norms.
Smart parents, therefore, choose to act wherever they are as if they were with their kids, who are constantly watching them. This awareness of your role as a model strengthens your ability, as parents, to use the powerful method of modeling. In short: show them how you do it before you expect them to do it.
Finally, a few words about Relationship building: an on-going process that requires RECOGNITION and REWARDING.
The ‘recognition’ term:
Suppose your kids want to go to a certain activity that does not seem appropriate to you. Recognition means, in such a case, that you acknowledge their needs before you ban their wish. It also means that you appreciate what they would gain if they were able to attend, and that you are aware of their feeling of loss if, in the end, they have to give it up.
‘Rewarding’ does not necessarily lead parents to their pockets… Rewarding may be a warm word, a comforting gesture, a thank-you note, or just an eye-to-eye look that reveals your wish to pay attention. And by the way: when was the last time you offered one of those goodies to your kids?
So now, dear parents, I can finally conclude my answer:
restructure your parenting style by choosing the proper activities and behaviors that will bring your children to perceive you as their leading figures.
Another way to phrase it: depend on their acceptance of, and loyalty to, the guidelines, not on their obedience to you.
Author's Bio:
Psychologist (MA) and Behavioral Expert (PhD); Director of The
Center for Human Growth and Business Insights, Mechanicsburg, PA. Please visit dr-joeph.com.
SSC Economics Sample Paper NCERT Sample Paper-3
• question_answer
Consider the following statements:
1. Indian sugar Industry is suffering from cheap imports, which destabilize the domestic industry.
2. The Indian sugar industry is suffering from outdated machinery and inefficiency.
3. The crushing season of sugarcane in southern India is of greater duration than in northern India.
Which of the above statements are correct?
A) 1, 2, 3
B) 2, 3
C) 1, 2
D) 1, 3
Correct Answer: A
Solution :
[a] The sugar industry is clamouring for a sharp increase in import duty to curb cheap imports into the country. Indian Sugar Mills Association (ISMA) has proposed that the import duty be raised from 10% to 60%. India is highly vulnerable to sugar coming from Pakistan through the Wagah border which is at least Rs.5-6 cheaper than the Indian variety.
As they say, a picture is worth a thousand words. With that in mind, I refer you to the maps and photos above. They pretty much say it all. Directly or indirectly, Europeans were responsible for slaughtering perhaps 99,000 grizzly bears over a 150 year period, resulting in the extirpation of grizzlies in roughly 98% of their western range within the contiguous United States. The four maps above offer snapshots of grizzly bear distributions at four different times. The distribution of extant bears is shown in green and the distribution of extirpated bears is denoted by yellow. The time periods illustrated in these maps correspond with the time of first sustained contact with Europeans (1800), fifty years later in the aftermath of the first wave of westward European migrants (1850), a time when numerous populations were winking out (1910), and at the culmination of extirpations, just prior to when grizzly bears were declared Threatened by the US government (1970). The map of distributions circa 1910 is thanks largely to the record-keeping of C. Hart Merriam, who was the head of the US Division of Biological Survey between 1886-1910, during which time he oversaw a number of surveys that documented the last grizzlies remaining in the West. In addition to documenting the demise of grizzlies, he also gifted us by naming 83 "species" of grizzlies in North America, none of which have stood the test of time (see Evolution). Note that grizzlies were extirpated first in the southern and central Great Plains, a topic that I return to below.
There is no mystery about why Europeans were so lethal (see The lethality factor). Killing grizzlies was both informal and formal policy. Essentially no bear survived contact with an armed European. Moreover, the federal government mounted a taxpayer-supported campaign to exterminate grizzly bears (and other carnivores) in the West starting in 1914 with the formation of the Predatory Animal and Rodent Control (PARC) Branch of the Biological Survey. In addition to hounds, set-guns, and traps, PARC agents used lots of poison. The basic ethos was informed by Manifest Destiny and the derived imperative to cleanse the earth of any obstruction to the settlement and "civilization" of an ever-diminishing wild West. Grizzlies weren't the only victims. The rogues' gallery above (below the maps) gives testimony to the widespread and enthusiastic slaughter. A couple of exemplars include George A. Custer (far left in the middle of the group) and Ben Lilly (second from right), who was a notorious (famous?) killer of bears in Arizona and New Mexico. David Brown and Tracy Storer and Lloyd Tevis provide detailed histories of the demise of grizzlies in their respective books covering the Southwest ("The Grizzly in the Southwest") and California ("California Grizzly").
The map at right provides a bit more detail on the timing of grizzly bear extinctions, with a focus on the period between 1850 and 1970. The locations of regional extirpations--or at least the last known grizzlies--are given along with the year each occurred. I've summarized this information in the inset graph to the lower right, which shows a smoothed frequency distribution (in red) for extirpations. You can see that there were two major peaks centered on roughly 1890 and 1920, and a late minor peak centered on 1950-1970, representing the last holdouts in particularly rugged and remote areas such as central Idaho. Most of the early extirpations were along the western edge of the Great Plains, immediately post-dating the demise of the bison.
You will also notice that I've shown trends in the size of European (i.e., "censused") populations in three regions of the West. All three trend lines tell much the same story of rapid increases between 1880 and 1900 (the Homestead era) coinciding with the first peak of grizzly bear extirpations. This was followed by a plateau in growth on the Plains (the circles with dashed line) and sustained population growth elsewhere.
Drivers of extirpation
Clearly, much of the dynamic that drove grizzlies to near extinction in the West had to do with what was going on between people's ears. Which had a lot to do with culture and up-bringing (i.e., The lethality factor). But aside from this perhaps dominant consideration, there were factors that either accelerated or slowed local extirpations. Several of these factors pertained either directly or indirectly to features that either attracted grizzlies to people or kept them (more-or-less) safely sequestered away.
The figures at left show relations between the likelihood that grizzlies would have persisted in a given area (the vertical or y-axis) and features of that area that either accelerated or slowed extirpation (the horizontal or x-axis). Of all the factors that I considered, the extent of whitebark pine habitat, the extent of grizzly bear range in the surrounding area at the beginning of a transition, and densities of (European) people were most closely related to persistence of grizzly bears. These relationships are shown left to right, for two different time periods: 1850-1920 in the top panels, and 1920-1970 in the bottom panels.
The strong negative relationship between persistence of grizzlies and densities of people (the red dots) doesn't require explanation. The positive relationship of persistence to extent of local distributions at the beginning of a transition period (gray dots) relates in a straightforward way to the idea that, if you start with more, you're more likely to end up with something. But the positive relationship between persistence and whitebark pine not only requires a bit of explanation, but also broaches an important topic when it comes to understanding the survival of bears anywhere in the world. Put simply, grizzlies were more likely to survive in areas where key food resources kept them out of harm's way, that is, away from armed people with bad attitudes. Conversely, bears drawn to foods that concentrated them near people, in ways that made them vulnerable, tended to die out. So, back to whitebark pine. The tree is an important source of bear food. It also lives at high elevations in rugged mountains. Hence, bears that made substantial use of pine seeds found a safe haven and tended to survive. By contrast, grizzlies that fed on spawning salmon in the Pacific Northwest, or berries and bison carcasses on the Great Plains, all tended to concentrate along rivers and streams, which were precisely the areas that were settled earliest and used most heavily for transport by newly arrived Europeans. Hence, bears in these areas disappeared relatively quickly, and sometimes despite the prevalence of wilderness conditions in the surrounding uplands.
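For readers curious how relationships like these are typically estimated, the sketch below fits a logistic regression to synthetic data. It is purely illustrative: the covariates, coefficients, and data are invented, and this is not the author's analysis.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 200
whitebark = rng.uniform(0, 1, n)      # extent of whitebark pine habitat (invented)
range_extent = rng.uniform(0, 1, n)   # occupied range at start of transition (invented)
people = rng.uniform(0, 1, n)         # density of European settlers (invented)

# Assume persistence is more likely with more pine and range, less with more people.
logit = 2.0 * whitebark + 2.0 * range_extent - 4.0 * people
persisted = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

X = np.column_stack([whitebark, range_extent, people])
model = LogisticRegression().fit(X, persisted)
print(model.coef_)  # expect signs +, +, -, mirroring the relationships described above
```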
I conclude this section on the direct effects of Europeans by examining proximal, or fine-scale, drivers of risk for grizzly bears. The bar graph immediately above summarizes the results of combing through a large number of journals and other texts attributed to various European explorers or settlers. The horizontal length of each bar corresponds to the number of instances where one of these Europeans observed a grizzly engaged in a particular kind of feeding activity on or near the Great Plains. The most common was scavenging on a bison carcass, which is a reflection of the extent to which bison were a key food of grizzlies on the Great Plains (see The bison factor). Second to this was consumption of plum and chokecherry fruits in riparian areas. But notice the number of instances where the observed activity involved meat or other food associated with people--livestock, dirty camps, or the remains of animals killed by hunters. Here, we see a foreshadowing of current dynamics. Clearly, a non-trivial number of encounters between grizzly bears and Europeans were organized around human-associated foods. No doubt these foods lured grizzlies near Europeans, almost invariably ending in the death of the bear. As is currently the case, human-related attractants were a huge problem for grizzly bears.
The First Peoples factor
In thinking through all of the factors that might have contributed to driving the 1800s-1900s extirpations of grizzlies, I ended up speculating on a possible role played by First Peoples. Before getting into the details of this possible effect, it is worth reiterating that the ultimate driver of all destructive dynamics was self-evidently the arrival and spread of Europeans and their markets and technologies. So, with that in mind, the maps immediately above show the spread and adoption of horses and guns by First Peoples. Horses were introduced by the Spanish conquerors, perhaps as some minimal compensation for the extent of their atrocities. Guns largely emanated from trade with the French, English, and successor American entrepreneurs--all greedy for profits and indifferent to harm. The delimiting lines mark the extent of the area within which First Peoples had adopted horses and guns as part of their lifeways. The map above left intersects the diffusion of horses with that of guns, and identifies areas where both had been integrated by 1750 and 1790. The timing and delineation of diffusions is thanks to Francis Haines, Frank Secoy, J.C. Ewers, and D.E. Worcester. The basic point is that guns were (more-or-less) diffusing from the east, and horses from the southwest. Both were adopted earliest by First Peoples in the central Great Plains, and by essentially all Tribes on the plains by 1790.
In the figure above right I overlay the areas within which guns and horses had been adopted by First Peoples with the distribution of grizzly bears around 1850, which is roughly when the earliest extirpations occurred on the Great Plains. I would argue that the less productive conditions of the southern plains predisposed grizzly bears to over-exploitation or harvest. It is easy (at least for me) to imagine First Peoples on newly acquired horses and, following that, with newly acquired guns, being more lethal to grizzly bears than they had ever been before. I don't show it here, but Comancheria, the domain of the dominant Comanche and their Kiowa allies, spatially coincides with the earliest demise of grizzlies--well before Europeans were around in any numbers. This is in the southern plains of New Mexico, Oklahoma, and Texas--also the least productive environment of the Great Plains for bison and grizzly bears (see The bison factor). Parenthetically, the first half of the 1800s predated major effects linked to the bison hide market, which I address immediately below.
As I elaborate in The bison factor, availability of bison was almost certainly a major factor driving population dynamics of grizzly bears on the Great Plains. Which is to say that the decline and extirpation of bison populations during the mid- to late-1800s undoubtedly also helped drive grizzly bears to local extinction.
The graphs at left summarize factors that were probable drivers of bison extirpations: drought, the adoption of horses and guns by First Peoples, the arrival of emigrant Europeans, and the emergence of the European-driven bison hide market. The latter three factors are denoted by horizontal bars in shades of burgundy at the top of each timeline at left. The blue trend lines show estimated levels of drought (or, the opposite) in three regions of the Great Plains, from north (top) to south (bottom), derived from the work of Zhihua Zhang and his colleagues. I've also demarked periods of more-or-less sustained drought in each region by vertical orange bars. Bison population fluctuations on the Great Plains during the Holocene have been conclusively tied to grassland productivity, which has been, in turn, linked to rainfall and the prevalence of less nutritious C4 versus C3 grasses.
There are some lines of evidence suggesting that major declines in regional bison herds began as early as the 1830s and 1840s. Andrew Isenberg has perhaps the most informative and compelling account of the demise of bison in his book "The Destruction of the Bison," which contains themes that had been previously articulated by Dan Flores in his seminal paper "Bison ecology and bison diplomacy." There is solid evidence that a number of plains Tribes fully participated in the bison hide market established by Europeans several decades before European hide hunters arrived in any numbers to deliver the death blow to bison. There are suggestions of not only surplus killing of bison by First Peoples for hides, but also local declines of bison in areas where this surplus killing was most intense. One notable example is along the Missouri River in present-day South Dakota. Early declines of bison herds here have been implicated as one of several drivers that displaced some Siouan tribes west in the mid-1800s (see below).
There are a couple of noteworthy synergies evident in these graphs. Perhaps most intriguing is the coincidence of the heaviest bison hide-hunting with sustained periods of drought on the northern and southern plains--but most notably in the north. Given the link between bison population dynamics and moisture, it is likely that declines in productivity of female bison exacerbated the already catastrophic effects of hide-hunting during the 1860s and 1870s.
The map above highlights a final factor of potential relevance to understanding the fates of grizzly bears during the 1700s and 1800s--the long-range movements and considerable flux of Tribal boundaries. The European invasion of North America set in motion a myriad of dynamics that affected every living thing on the continent. As I've described above, these dynamics included the decimation and displacement of First Peoples. But there was an interesting dynamic behind this phenomenon: that of Tribes who benefitted from the early acquisition of either guns or horses being displaced by Europeans or other Tribes, moving, and then displacing other Tribes in turn. The end result of this progressive falling of dominos was an almost complete replacement of the original Tribes living within the eastern half of grizzly bear range by the 1700s and early 1800s.
So, getting back to the map. I've tried to summarize a lot of information, including: the European names and estimated boundaries of Tribes around 1850; whether the Tribes on the Great Plains were primarily sedentary (in green) or nomadic (in brown); the estimated migration routes taken by these Tribes primarily during the 1700s (by gray arrows); and settlements of the Apache and Padouca that predated the arrival of Tribes who had recently been mounted on horses. Just a note: the Tribal boundaries are probably well-accepted by most who are interested in such things. The migration routes are another matter. The lore of most Tribes represents each as living in a particular area since the creation of time--or since many generations before. On the European side, various scholars have various interpretations of where different Tribes originated and how they moved. I will not vouch for the final verdict of any account, but the evidence assembled by European scholars convinces me that a highly dynamic situation occurred during the 1700s and early 1800s. Whatever the migration route, Tribes were moving around a lot and displacing and even decimating other Tribes. And all of this was accompanied by heightened inter-Tribal conflict, which probably began with population increases that predated the arrival of Europeans (see The human factor), but with the dynamics between 1700 and 1900 triggered by the onslaught of Europeans.
What does this mean for grizzly bears? The feature of most relevance to bears is probably the wide brown dashed lines in the map above. They denote what some scholars have called "war zones," which were presumably so contested that the only people venturing there were war parties. Andrea Laliberte and Bill Ripple popularized this notion based on their analysis of the spatial distribution of First Americans and wildlife documented by the Lewis & Clark expedition along the Missouri River. I've extracted the portion of their map (included above) that shows the First Peoples villages (dark brown dots) and wildlife sightings (proportional to the width of the red line) summarized from Lewis & Clark's journals. The basic point is: wildlife were more abundant in areas more sparsely occupied by First Peoples, especially those who were nomadic, as well as in areas that were violently contested by neighboring Tribes (i.e., War Zones). As it turns out, most of Lewis & Clark's sightings of grizzly bears also occurred in these relatively human-free zones. Riffing off of the Laliberte and Ripple thesis, it may have been the case that grizzlies benefitted, both directly and indirectly, by the heightened warfare among Tribes that occurred during the 1700s and early 1800s. The zones of particular relevance to grizzlies would have been those around the Blackfeet, Crow, Pawnee, and Arapaho Tribes. How much confidence do I place in this hypothesis? Well...it is largely speculative and supported only by circumstantial evidence. As the Eighteenth-Century English would say, I wouldn't stake my wig on it. But the hypothesis seems plausible.
Everything to Know About Record Label Agreements
Co-Written and Researched by Tyler Anthony
Most record labels make their money by selling sound recordings of song performances, which can be a tough way to earn a profit.
This article discusses some of the most common agreements record labels use to generate income from prospective artists, distributors, and film producers.
Licensing and Distribution Agreements
Licensing and distribution agreements are popular in the independent scene, where labels have limited resources. With these deals, the independent label that owns the rights to a master recording licenses it out to a major record label or distribution company with better distribution capabilities.
Under a licensing agreement, the owner of the master recording – usually a record label – grants rights to third parties to manufacture copies, sell, distribute, advertise, and promote the music from the master recording. In exchange, the record label gets a piece of royalties from every sale.
Under distribution agreements, a distributor agrees to issue copies of an independently-produced record to retail outlets. These deals sometimes include promotional activities.
While licensing and distribution agreements are declining in popularity, there’s another category of deals called pressing and distribution where the distributor also manufactures and distributes copies of the master record.
360 Deals
As their name suggests, 360 deals encompass virtually everything an artist does to make money. Record companies take a cut from all of the artist's music- and non-record-related business activities, such as film, sponsorships, and licensing agreements.
We’ll take a closer look at 360 deals in a separate article.
Master Use Agreements
These are the equivalent of synchronization licenses in music publishing. Essentially, if a film producer wants to use a song in their production, they need permission from the organization or record label that owns the master recording of the song.
The record company will often grant the producer either an exclusive or a non-exclusive license to use the sound recording.
Master Purchase and Sale Agreements
When a record label buys or sells master recordings from other artists or organizations, it conducts the transaction through a master purchase and sale agreement.
The payment for the master can be a one-time flat fee, a staggered purchase, or a fee accompanied by royalties.
Loan-out Agreements
Some artists create new corporations to act as their signing party in a record label agreement. These so-called "loan-out" corporations offer many tax and liability benefits.
Nevertheless, record companies often require the artist to execute a guarantee stating that if the loan-out corporation defaults, the artist becomes personally bound by the terms of the recording agreement.
We cover the ins and outs of common record label contracts in our next piece.
Written by:
Claudius is an experienced commercial lawyer who specializes in acquisitions, financing, and securities law within corporate commercial practice.
|
Wednesday, 26 January 2022
Michael Kuhn
Through the eyes of the poor
For its Plenary Assembly from 26 to 28 October 2016, COMECE has chosen to examine the complex issue of poverty in Europe.
United States
US elections and the Catholic vote
European Union
Possibilities for combating poverty in Europe
When developing effective policies to fight poverty, you need to open up the debate and long-term dialogue to include people who are living in poverty and exclusion, says Bert Luyts, representative of ATD Fourth World to the European Union.
European Union
Growth and the euro after Brexit
A report drawn up by a group of international experts under the aegis of the Bertelsmann Stiftung and the Jacques Delors Institute promotes a set of recommendations, which Enrico Letta, President of the Jacques Delors Institute and former Italian Prime Minister, sets out below.
European Union
Brexit and its impacts on environmental policy
Brexit’s impact on environmental policy will be felt not just in the UK but also in the European Union. While the UK risks sliding towards a lowering of its standards, the EU will be losing a Member State that has been extremely active in this field.
European Union
An electoral year without an election in the Democratic Republic of Congo
Sliding into insecurity is a scenario being taken very seriously by the international community. What latitude does the European Union have now?
Published in English, French, German
COMECE, 19 square de Meeûs, B-1050 Brussels
Tel: +32/2/235 05 10
Editor-in-Chief: Martin Maier SJ
|
Evaporative cooler
An Egyptian qullah, set in drafts to cool interiors. Porous pottery and coarse cloth maximize the area for evaporation.
An evaporative cooler (also evaporative air conditioner, swamp cooler, swamp box, desert cooler and wet air cooler) is a device that cools air through the evaporation of water. Evaporative cooling differs from other air conditioning systems, which use vapor-compression or absorption refrigeration cycles. Evaporative cooling uses the fact that water will absorb a relatively large amount of heat in order to evaporate (that is, it has a large enthalpy of vaporization). The temperature of dry air can be dropped significantly through the phase transition of liquid water to water vapor (evaporation). This can cool air using much less energy than refrigeration. In extremely dry climates, evaporative cooling of air has the added benefit of conditioning the air with more moisture for the comfort of building occupants.
The cooling potential for evaporative cooling is dependent on the wet-bulb depression, the difference between dry-bulb temperature and wet-bulb temperature (see relative humidity). In arid climates, evaporative cooling can reduce energy consumption and total equipment for conditioning as an alternative to compressor-based cooling. In climates not considered arid, indirect evaporative cooling can still take advantage of the evaporative cooling process without increasing humidity. Passive evaporative cooling strategies can offer the same benefits of mechanical evaporative cooling systems without the complexity of equipment and ductwork.
Schematic diagram of an ancient Iranian windcatcher and qanat, used for evaporative cooling of buildings
An earlier form of evaporative cooling, the windcatcher, was first used in ancient Egypt and Persia thousands of years ago in the form of wind shafts on the roof. They caught the wind, passed it over subterranean water in a qanat and discharged the cooled air into the building. Modern Iranians have widely adopted powered evaporative coolers (coolere âbi).[1]
A traditional air cooler in Mirzapur, Uttar Pradesh, India
The evaporative cooler was the subject of numerous US patents in the 20th century; many of these, starting in 1906,[2] suggested or assumed the use of excelsior (wood wool) pads as the elements to bring a large volume of water in contact with moving air to allow evaporation to occur. A typical design, as shown in a 1945 patent, includes a water reservoir (usually with level controlled by a float valve), a pump to circulate water over the excelsior pads and a centrifugal fan to draw air through the pads and into the house.[3] This design and this material remain dominant in evaporative coolers in the American Southwest, where they are also used to increase humidity.[4] In the United States, the use of the term swamp cooler may be due to the odor of algae produced by early units.[5]
Externally mounted evaporative cooling devices (car coolers) were used in some automobiles to cool interior air—often as aftermarket accessories[6]—until modern vapor-compression air conditioning became widely available.
Passive evaporative cooling techniques in buildings have been a feature of desert architecture for centuries, but Western acceptance, study, innovation, and commercial application are all relatively recent. In 1974, William H. Goettl noticed how evaporative cooling technology works in arid climates, speculated that a combination unit could be more effective, and invented the "High Efficiency Astro Air Piggyback System", a combination refrigeration and evaporative cooling air conditioner. In 1986, University of Arizona researchers W. Cunningham and T. Thompson built a passive evaporative cooling tower, and performance data from this experimental facility in Tucson, Arizona became the foundation of evaporative cooling tower design guidelines developed by Baruch Givoni.[7]
Physical principles
Evaporative coolers lower the temperature of air using the principle of evaporative cooling, unlike typical air conditioning systems which use vapor-compression refrigeration or absorption refrigeration. Evaporative cooling is the conversion of liquid water into vapor using the thermal energy in the air, resulting in a lower air temperature. The energy needed to evaporate the water is taken from the air in the form of sensible heat, which affects the temperature of the air, and converted into latent heat, the energy present in the water vapor component of the air, whilst the air remains at a constant enthalpy value. This conversion of sensible heat to latent heat is known as an isenthalpic process because it occurs at a constant enthalpy value. Evaporative cooling therefore causes a drop in the temperature of air proportional to the sensible heat drop and an increase in humidity proportional to the latent heat gain. Evaporative cooling can be visualized using a psychrometric chart by finding the initial air condition and moving along a line of constant enthalpy toward a state of higher humidity.[8]
A simple example of natural evaporative cooling is perspiration, or sweat, secreted by the body, evaporation of which cools the body. The amount of heat transferred depends on the evaporation rate; however, for each kilogram of water vaporized, roughly 2,257 kJ of energy (about 970 BTU per pound of pure water; slightly more at skin temperature) are transferred. The evaporation rate depends on the temperature and humidity of the air, which is why sweat accumulates more on humid days: it does not evaporate fast enough.
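To put a number on that conversion, here is a minimal sketch (assuming the round values above: an enthalpy of vaporization of roughly 2,257 kJ/kg and a dry-air specific heat of about 1.006 kJ/(kg·K)) of how much evaporating a little water cools a parcel of air:

```python
# Minimal sketch: sensible temperature drop when moisture evaporates into air.
# Assumes h_fg ~ 2,257 kJ/kg and cp ~ 1.006 kJ/(kg*K); real values drift
# slightly with temperature, so treat the result as an estimate.
H_FG_KJ_PER_KG = 2257.0      # latent heat of vaporization of water
CP_AIR_KJ_PER_KG_K = 1.006   # specific heat of dry air

def temperature_drop_k(grams_water_per_kg_air: float) -> float:
    """Cooling (in K) of 1 kg of dry air that evaporates the given moisture."""
    kg_water = grams_water_per_kg_air / 1000.0
    return kg_water * H_FG_KJ_PER_KG / CP_AIR_KJ_PER_KG_K

print(f"{temperature_drop_k(2.0):.1f} K")  # evaporating 2 g/kg cools air ~4.5 K
```

A few grams of water per kilogram of air is enough for several degrees of cooling, which is why the process needs so little energy beyond moving the air.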
Vapor-compression refrigeration uses evaporative cooling, but the evaporated vapor is within a sealed system, and is then compressed ready to evaporate again, using energy to do so. A simple evaporative cooler's water is evaporated into the environment, and not recovered. In an interior space cooling unit, the evaporated water is introduced into the space along with the now-cooled air; in an evaporative tower the evaporated water is carried off in the airflow exhaust.
Other types of phase-change cooling
A closely related process, sublimation cooling, differs from evaporative cooling in that a phase transition from solid to vapor, rather than liquid to vapor, occurs.
Sublimation cooling has been observed to operate on a planetary scale on the dwarf planet Pluto, where it has been called an anti-greenhouse effect.
Another application of a phase change to cooling is the "self-refrigerating" beverage can. A separate compartment inside the can contains a desiccant and a liquid. Just before drinking, a tab is pulled so that the desiccant comes into contact with the liquid and dissolves. As it does so, it absorbs an amount of heat energy called the latent heat of fusion. Evaporative cooling works with the phase change of liquid into vapor and the latent heat of vaporization, but the self-cooling can uses a change from solid to liquid, and the latent heat of fusion, to achieve the same result.
Before the advent of modern refrigeration, evaporative cooling was used for millennia, for instance in qanats, windcatchers, and mashrabiyas. A porous earthenware vessel would cool water by evaporation through its walls; frescoes from about 2500 BCE show slaves fanning jars of water to cool rooms. Alternatively, a bowl filled with milk or butter could be placed in another bowl filled with water, all being covered with a wet cloth resting in the water, to keep the milk or butter as fresh as possible (see zeer, botijo and Coolgardie safe).[9]
California ranch house with evaporative cooler box on roof ridgeline on right
Evaporative cooling is a common form of cooling buildings for thermal comfort since it is relatively cheap and requires less energy than other forms of cooling.
Psychrometric chart example of Salt Lake City
The figure showing the Salt Lake City weather data represents the typical summer climate (June to September). The colored lines illustrate the potential of direct and indirect evaporative cooling strategies to expand the comfort range in summer. This expansion comes from the combination of higher air speed on the one hand and, where the climate permits direct evaporative cooling, elevated indoor humidity on the other. Evaporative cooling strategies that humidify the air should be implemented in dry conditions, where the increase in moisture content stays below recommendations for occupant comfort and indoor air quality. Passive cooling towers lack the control that traditional HVAC systems offer to occupants. However, the additional air movement provided into the space can improve occupant comfort.
Evaporative cooling is most effective when the relative humidity is on the low side, limiting its popularity to dry climates. Evaporative cooling raises the internal humidity level significantly, which desert inhabitants may appreciate, as the moist air re-hydrates dry skin and sinuses. Assessing typical climate data is therefore an essential step in determining the potential of evaporative cooling strategies for a building. The three most important climate considerations are dry-bulb temperature, wet-bulb temperature, and wet-bulb depression during a typical summer day. It is important to determine whether the wet-bulb depression can provide sufficient cooling during the summer day. By subtracting the wet-bulb depression from the outside dry-bulb temperature, one can estimate the approximate air temperature leaving the evaporative cooler. Note that how closely the leaving air temperature approaches the wet-bulb temperature depends on the saturation efficiency. A general recommendation for applying direct evaporative cooling is to implement it in places where the wet-bulb temperature of the outdoor air does not exceed 22 °C (72 °F).[7] However, in the example of Salt Lake City, the upper limit for direct evaporative cooling on the psychrometric chart is 20 °C (68 °F). Despite the lower temperature, evaporative cooling is suitable for climates similar to Salt Lake City's.
Evaporative cooling is especially well suited for climates where the air is hot and humidity is low. In the United States, the western and mountain states are good locations, with evaporative coolers prevalent in cities like Albuquerque, Denver, El Paso, Fresno, Salt Lake City, and Tucson. Evaporative air conditioning is also popular and well-suited to the southern (temperate) part of Australia. In dry, arid climates, the installation and operating cost of an evaporative cooler can be much lower than that of refrigerative air conditioning, often by 80% or so. However, evaporative cooling and vapor-compression air conditioning are sometimes used in combination to yield optimal cooling results. Some evaporative coolers may also serve as humidifiers in the heating season. In regions that are mostly arid, short periods of high humidity may prevent evaporative cooling from being an effective cooling strategy. An example of this event is the monsoon season in New Mexico and central and southern Arizona in July and August.
In locations with moderate humidity there are many cost-effective uses for evaporative cooling, in addition to their widespread use in dry climates. For example, industrial plants, commercial kitchens, laundries, dry cleaners, greenhouses, spot cooling (loading docks, warehouses, factories, construction sites, athletic events, workshops, garages, and kennels) and confinement farming (poultry ranches, hog, and dairy) often employ evaporative cooling. In highly humid climates, evaporative cooling may have little thermal comfort benefit beyond the increased ventilation and air movement it provides.
Other examples
Trees transpire large amounts of water through pores in their leaves called stomata, and through this process of evaporative cooling, forests interact with climate at local and global scales.[10] Simple evaporative cooling devices such as evaporative cooling chambers (ECCs) and clay pot coolers, or pot-in-pot refrigerators, are simple and inexpensive ways to keep vegetables fresh without the use of electricity. Several hot and dry regions throughout the world could potentially benefit from evaporative cooling, including North Africa, the Sahel region of Africa, the Horn of Africa, southern Africa, the Middle East, arid regions of South Asia, and Australia. Benefits of evaporative cooling chambers for many rural communities in these regions include reduced post-harvest loss, less time spent traveling to the market, monetary savings, and increased availability of vegetables for consumption.[11][12]
Evaporative cooling is commonly used in cryogenic applications. The vapor above a reservoir of cryogenic liquid is pumped away, and the liquid continuously evaporates as long as the liquid's vapor pressure is significant. Evaporative cooling of ordinary helium forms a 1-K pot, which can cool to at least 1.2 K. Evaporative cooling of helium-3 can provide temperatures below 300 mK. These techniques can be used to make cryocoolers, or as components of lower-temperature cryostats such as dilution refrigerators. As the temperature decreases, the vapor pressure of the liquid also falls, and cooling becomes less effective. This sets a lower limit to the temperature attainable with a given liquid.
Evaporative cooling is also the last cooling step in order to reach the ultra-low temperatures required for Bose–Einstein condensation (BEC). Here, so-called forced evaporative cooling is used to selectively remove high-energetic ("hot") atoms from an atom cloud until the remaining cloud is cooled below the BEC transition temperature. For a cloud of 1 million alkali atoms, this temperature is about 1μK.
Although robotic spacecraft use thermal radiation almost exclusively, many manned spacecraft have short missions that permit open-cycle evaporative cooling. Examples include the Space Shuttle, the Apollo command and service module (CSM), lunar module and portable life support system. The Apollo CSM and the Space Shuttle also had radiators, and the Shuttle could evaporate ammonia as well as water. The Apollo spacecraft used sublimators, compact and largely passive devices that dump waste heat in water vapor (steam) that is vented to space.[citation needed] When liquid water is exposed to vacuum it boils vigorously, carrying away enough heat to freeze the remainder to ice that covers the sublimator and automatically regulates the feedwater flow depending on the heat load. The water expended is often available in surplus from the fuel cells used by many manned spacecraft to produce electricity.
Evaporative cooler illustration
Most designs take advantage of the fact that water has one of the highest known enthalpy of vaporization (latent heat of vaporization) values of any common substance. Because of this, evaporative coolers use only a fraction of the energy of vapor-compression or absorption air conditioning systems. Unfortunately, except in very dry climates, the single-stage (direct) cooler can increase relative humidity (RH) to a level that makes occupants uncomfortable. Indirect and two-stage evaporative coolers keep the RH lower.
Direct evaporative cooling
Direct evaporative cooling (open circuit) is used to lower the temperature and increase the humidity of air by using the latent heat of evaporation, changing liquid water to water vapor. In this process, the total energy in the air does not change. Warm dry air is changed to cool moist air. The heat of the outside air is used to evaporate water. The RH increases to 70 to 90%, which reduces the cooling effect of human perspiration. The moist air has to be continually released to the outside, or else the air becomes saturated and evaporation stops.
A mechanical direct evaporative cooler unit uses a fan to draw air through a wetted membrane, or pad, which provides a large surface area for the evaporation of water into the air. Water is sprayed at the top of the pad so it can drip down into the membrane and continually keep the membrane saturated. Any excess water that drips out from the bottom of the membrane is collected in a pan and recirculated to the top. Single-stage direct evaporative coolers are typically small, as they consist only of the membrane, water pump, and centrifugal fan. The mineral content of the municipal water supply will cause scaling on the membrane, which will lead to clogging over the life of the membrane. Depending on this mineral content and the evaporation rate, regular cleaning and maintenance are required to ensure optimal performance. Generally, supply air from a single-stage evaporative cooler needs to be exhausted directly (once-through flow) because of the high humidity of the supply air. A few design solutions have been conceived to utilize the energy in the exhaust air, like directing it through the two sheets of double-glazed windows, thus reducing the solar energy absorbed through the glazing.[14] Compared to the energy required to achieve the equivalent cooling load with a compressor, single-stage evaporative coolers consume less energy.[7]
Passive direct evaporative cooling can occur anywhere that the evaporatively cooled water can cool a space without the assistance of a fan. This can be achieved through the use of fountains or more architectural designs such as the evaporative downdraft cooling tower, also called a "passive cooling tower". The passive cooling tower design allows outside air to flow in through the top of a tower that is constructed within or next to the building. The outside air comes in contact with water inside the tower, either through a wetted membrane or a mister. As water evaporates in the outside air, the air becomes cooler and less buoyant and creates a downward flow in the tower. At the bottom of the tower, an outlet allows the cooler air into the interior. Similar to mechanical evaporative coolers, towers can be an attractive low-energy solution for hot and dry climates, as they require only a water pump to raise water to the top of the tower.[15] Energy savings from using a passive direct evaporative cooling strategy depend on the climate and heat load. For arid climates with a great wet-bulb depression, cooling towers can provide enough cooling during summer design conditions to be net zero. For example, a 371 m2 (4,000 ft2) retail store in Tucson, Arizona with a sensible heat gain of 29.3 kW (100,000 Btu/h) can be cooled entirely by two passive cooling towers providing 11,890 m3/h (7,000 cfm) each.[16]
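As a rough cross-check of that Tucson sizing, the sketch below uses the common standard-air shortcut Q ≈ 1.08 × cfm × ΔT (°F); the 1.08 factor assumes sea-level air density, so the result is an estimate, not a figure from the cited study:

```python
# Sanity check: what supply-air temperature difference must two 7,000 cfm
# towers sustain to absorb a 100,000 Btu/h sensible gain?
# Standard-air approximation: Q[Btu/h] = 1.08 * cfm * dT[F].
Q_BTUH = 100_000        # sensible heat gain, Btu/h
CFM_TOTAL = 2 * 7_000   # two towers at 7,000 cfm each

delta_t_f = Q_BTUH / (1.08 * CFM_TOTAL)
print(f"required supply-air delta-T: {delta_t_f:.1f} F")  # ~6.6 F
```

A supply-air temperature roughly 6.6 °F below the space temperature is a modest target given the large wet-bulb depressions of a dry Tucson design day, which is consistent with the sizing claim above.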
For the Zion National Park visitors' center, which uses two passive cooling towers, the cooling energy intensity was 14.5 MJ/m2 (1.28 kBtu/ft2), which was 77% less than a typical building in the western United States that uses 62.5 MJ/m2 (5.5 kBtu/ft2).[17] A study of field performance results in Kuwait revealed that power requirements for an evaporative cooler are approximately 75% less than the power requirements for a conventional packaged unit air-conditioner.[18]
Indirect evaporative cooling
The process of indirect evaporative cooling
Indirect evaporative cooling (closed circuit) is a cooling process that uses direct evaporative cooling together with a heat exchanger to transfer the cooling to the supply air. The cooled moist air from the direct evaporative cooling process never comes in direct contact with the conditioned supply air. The moist air stream is released outside or used to cool other external devices, such as solar cells, which are more efficient if kept cool. This avoids adding humidity to enclosed spaces, which is inappropriate for residential systems.
Maisotsenko cycle
The Maisotsenko cycle (M-Cycle), named after its inventor, Professor Valeriy Maisotsenko, employs an iterative (multi-step) heat exchanger made of a thin recyclable membrane that can reduce the temperature of product air to below the wet-bulb temperature, approaching the dew point.[19] Testing by the US Department of Energy found that a hybrid M-Cycle combined with a standard compression refrigeration system improved efficiency significantly, by between 150 and 400%, but only in the dry western half of the US; the evaluation did not recommend its use in the much more humid eastern half. The evaluation also found that the system's water consumption of 2–3 gallons per ton of cooling (12,000 Btu/h) was roughly comparable to the water consumed by new high-efficiency power plants to generate the electricity a conventional system would use. This means the higher efficiency can be used to reduce load on the grid without requiring any additional water, and may actually reduce water usage if the power source does not have a high-efficiency cooling system.[20]
An M-Cycle based system built by Coolerado is currently being used to cool the data center for NASA's National Snow and Ice Data Center (NSIDC). The facility is cooled with outside air when temperatures are below 70 degrees Fahrenheit and uses the Coolerado system above that temperature. This is possible because the air handler for the system uses fresh outside air, which allows it to automatically use cool outside ambient air when conditions allow. This avoids running the refrigeration system when unnecessary. It is powered by a solar panel array, which also serves as secondary power in case of main power loss.[21]
The system has very high efficiency but, like other evaporative cooling systems, is constrained by the ambient humidity levels, which has limited its adoption for residential use. It may be used as supplementary cooling during times of extreme heat without placing significant additional burden on electrical infrastructure. If a location has excess water supplies or excess desalination capacity it can be used to reduce excessive electrical demand by utilizing water in affordable M-Cycle units. Due to high costs of conventional air conditioning units and extreme limitations of many electrical utility systems, M-Cycle units may be the only appropriate cooling systems suitable for impoverished areas during times of extremely high temperature and high electrical demand. In developed areas, they may serve as supplemental backup systems in case of electrical overload, and can be used to boost efficiency of existing conventional systems.
The M-Cycle is not limited to cooling systems and can be applied to various technologies from Stirling engines to Atmospheric water generators. For cooling applications it can be used in both cross flow and counterflow configurations. Counterflow was found to obtain lower temperatures more suitable for home cooling, but cross flow was found to have a higher coefficient of performance (COP), and is therefore better for large industrial installations.
Unlike traditional refrigeration techniques, the COP of small systems remains high, as they do not require lift pumps or other equipment required for cooling towers. A 1.5-ton (5.3 kW) cooling system requires just 200 watts for operation of the fan, giving a COP of 26.4 and an EER rating of 90. This does not take into account the energy required to purify or deliver the water; it is strictly the power required to run the device once water is supplied. Though desalination of water also presents a cost, the latent heat of vaporization of water is nearly 100 times higher than the energy required to purify the water itself. Furthermore, the device has a maximum efficiency of 55%, so its actual COP is much lower than this calculated value. However, regardless of these losses, the effective COP is still significantly higher than that of a conventional cooling system, even if water must first be purified by desalination. In areas where water is not available in any form, it can be used with a desiccant to recover water using available heat sources, such as solar thermal energy.[22][23]
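The COP and EER arithmetic above can be reproduced in a few lines, assuming only the usual unit definitions (1 ton of cooling = 3.517 kW = 12,000 Btu/h; EER is Btu/h of cooling per watt of electrical input):

```python
# COP = cooling output (W) / electrical input (W); EER = Btu/h per input watt.
TON_KW = 3.517      # 1 ton of cooling in kW
TON_BTUH = 12_000   # 1 ton of cooling in Btu/h

def cop(tons: float, input_watts: float) -> float:
    return tons * TON_KW * 1000.0 / input_watts

def eer(tons: float, input_watts: float) -> float:
    return tons * TON_BTUH / input_watts

# The 1.5-ton, 200 W fan example from the text:
print(f"COP = {cop(1.5, 200):.1f}")  # ~26.4
print(f"EER = {eer(1.5, 200):.0f}")  # ~90
```

Both quoted figures fall out of a 1.5-ton load and a 200 W fan, which is why the capacity in kilowatts is corrected to about 5.3 kW above.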
Theoretical designs
In the newer but yet-to-be-commercialized "cold-SNAP" design from Harvard's Wyss Institute, a 3D-printed ceramic conducts heat but is half-coated with a hydrophobic material that serves as a moisture barrier.[24] While no moisture is added to the incoming air, its relative humidity (RH) does rise a little, since RH increases as air cools at constant moisture content. Still, the relatively dry air resulting from indirect evaporative cooling allows inhabitants' perspiration to evaporate more easily, increasing the relative effectiveness of this technique. Indirect cooling is an effective strategy for hot-humid climates that cannot afford to increase the moisture content of the supply air due to indoor air quality and human thermal comfort concerns.
Passive indirect evaporative cooling strategies are rare because this strategy involves an architectural element to act as a heat exchanger (for example a roof). This element can be sprayed with water and cooled through the evaporation of the water on this element. These strategies are rare due to the high use of water, which also introduces the risk of water intrusion and compromising building structure.
Hybrid designs
Two-stage evaporative cooling, or indirect-direct
In the first stage of a two-stage cooler, warm air is pre-cooled indirectly without adding humidity (by passing inside a heat exchanger that is cooled by evaporation on the outside). In the direct stage, the pre-cooled air passes through a water-soaked pad and picks up humidity as it cools. Since the air supply is pre-cooled in the first stage, less humidity is transferred in the direct stage to reach the desired cooling temperatures. The result, according to manufacturers, is cooler air with an RH between 50 and 70%, depending on the climate, compared to a traditional system that produces about 70–80% relative humidity in the conditioned air.
Evaporative + conventional backup
In another hybrid design, direct or indirect cooling has been combined with vapor-compression or absorption air conditioning to increase the overall efficiency and/or to reduce the temperature below the wet-bulb limit.
Traditionally, evaporative cooler pads consist of excelsior (aspen wood fiber) inside a containment net, but more modern materials, such as some plastics and melamine paper, are entering use as cooler-pad media. Modern rigid media, commonly 8" or 12" thick, adds more moisture, and thus cools air more, than the typically much thinner aspen media.[25] Another material which is sometimes used is corrugated cardboard.[26][27]
Design considerations
Water use
In arid and semi-arid climates, the scarcity of water makes water consumption a concern in cooling system design. From the installed water meters, 420,938 L (111,200 gal) of water were consumed during 2002 for the two passive cooling towers at the Zion National Park visitors' center.[28] However, such concerns are addressed by experts who note that electricity generation usually requires a large amount of water, and evaporative coolers use far less electricity; overall, they use a comparable amount of water and cost less than chillers.[29]
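For a sense of scale, here is a sketch of the direct water draw implied by the physics, assuming all cooling comes from evaporation (latent heat of roughly 2,257 kJ/kg) and ignoring the bleed-off water real coolers use for scale control:

```python
# Litres of water evaporated per hour for a given sensible cooling load.
# Assumes ~1 kg of water per litre and h_fg ~ 2,257 kJ/kg; bleed-off excluded.
H_FG_KJ_PER_KG = 2257.0

def water_litres_per_hour(cooling_kw: float) -> float:
    return cooling_kw * 3600.0 / H_FG_KJ_PER_KG

print(f"{water_litres_per_hour(5.0):.1f} L/h")  # a 5 kW load evaporates ~8 L/h
```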
Allowing direct solar exposure to any surface that can transfer heat to the air flowing through the unit will raise the temperature of the supply air. If sunlight heats the incoming air or the pads themselves, evaporation will increase, but the additional energy driving it is supplied by the sun rather than drawn from the ambient air; the result is both warmer and more humid supply air, just as pre-heating the inlet air or the water distributed over the pads would produce. In addition, sunlight may degrade some media and other components of the cooler. Shading is therefore advisable in all circumstances, though the vertical orientation of the pads, together with insulation between the exterior and interior upward-facing surfaces to minimise heat transfer, will often suffice.
Mechanical systems
Apart from fans used in mechanical evaporative cooling, pumps are the only other piece of mechanical equipment required for the evaporative cooling process in both mechanical and passive applications. Pumps can be used for either recirculating the water to the wet media pad or providing water at very high pressure to a mister system for a passive cooling tower. Pump specifications will vary depending on evaporation rates and media pad area. The Zion National Park visitors' center uses a 250 W (1/3 HP) pump.[30]
Exhaust ducts and/or open windows must be used at all times to allow air to continually escape the air-conditioned area. Otherwise, pressure develops and the fan or blower in the system is unable to push much air through the media and into the air-conditioned area. The evaporative system cannot function without exhausting the continuous supply of air from the air-conditioned area to the outside. By optimizing the placement of the cooled-air inlet, along with the layout of the house passages, related doors, and room windows, the system can be used most effectively to direct the cooled air to the required areas. A well-designed layout can effectively scavenge and expel the hot air from desired areas without the need for an above-ceiling ducted venting system. Continuous airflow is essential, so the exhaust windows or vents must not restrict the volume and passage of air being introduced by the evaporative cooling machine. One must also be mindful of the outside wind direction, as, for example, a strong hot southerly wind will slow or restrict the exhausted air from a south-facing window. It is always best to have the downwind windows open, while the upwind windows are closed.
Different types of installations
Typical installations
Typically, residential and industrial evaporative coolers use direct evaporation, and can be described as an enclosed metal or plastic box with vented sides. Air is moved by a centrifugal fan or blower (usually driven by an electric motor with pulleys known as "sheaves" in HVAC terminology, or a direct-driven axial fan), and a water pump is used to wet the evaporative cooling pads. The cooling units can be mounted on the roof (down draft, or downflow) or exterior walls or windows (side draft, or horizontal flow) of buildings. To cool, the fan draws ambient air through vents on the unit's sides and through the damp pads. Heat in the air evaporates water from the pads which are constantly re-dampened to continue the cooling process. Then cooled, moist air is delivered into the building via a vent in the roof or wall.
Because the cooling air originates outside the building, one or more large vents must exist to allow air to move from inside to outside. Air should only be allowed to pass once through the system, or the cooling effect will decrease. This is due to the air reaching the saturation point. Often 15 or so air changes per hour (ACHs) occur in spaces served by evaporative coolers, a relatively high rate of air exchange.
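As a sketch of what such an air-change rate implies for airflow (the room dimensions below are purely illustrative):

```python
# Airflow needed to hit a target air-change rate (ACH) for a given space.
def required_cfm(floor_area_ft2: float, ceiling_height_ft: float,
                 ach: float) -> float:
    volume_ft3 = floor_area_ft2 * ceiling_height_ft
    return volume_ft3 * ach / 60.0  # changes per hour -> cubic feet per minute

# A hypothetical 1,500 ft2 house with 8 ft ceilings at 15 ACH:
print(f"{required_cfm(1500.0, 8.0, 15.0):.0f} cfm")  # -> 3,000 cfm
```

The exhaust vents or open windows must be sized to pass this same volume back out of the building.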
Evaporative (wet) cooling towers
Large hyperboloid cooling towers made of structural steel for a power plant in Kharkiv, Ukraine
Cooling towers are structures for cooling water or other heat transfer media to near-ambient wet-bulb temperature. Wet cooling towers operate on the evaporative cooling principle, but are optimized to cool the water rather than the air. Cooling towers can often be found on large buildings or on industrial sites. They transfer heat to the environment from chillers, industrial processes, or the Rankine power cycle, for example.
Misting systems
Mist spraying system with water pump beneath
Misting systems work by forcing water via a high pressure pump and tubing through a brass and stainless steel mist nozzle that has an orifice of about 5 micrometres, thereby producing a micro-fine mist. The water droplets that create the mist are so small that they instantly flash-evaporate. Flash evaporation can reduce the surrounding air temperature by as much as 35 °F (20 °C) in just seconds.[31] For patio systems, it is ideal to mount the mist line approximately 8 to 10 feet (2.4 to 3.0 m) above the ground for optimum cooling. Misting is used for applications such as flowerbeds, pets, livestock, kennels, insect control, odor control, zoos, veterinary clinics, cooling of produce, and greenhouses.
Misting fans
A misting fan is similar to a humidifier. A fan blows a fine mist of water into the air. If the air is not too humid, the water evaporates, absorbing heat from the air, allowing the misting fan to also work as an air cooler. A misting fan may be used outdoors, especially in a dry climate. It may also be used indoors.
Small portable battery-powered misting fans, consisting of an electric fan and a hand-operated water spray pump, are sold as novelty items. Their effectiveness in everyday use is unclear.[citation needed]
Understanding evaporative cooling performance requires an understanding of psychrometrics. Evaporative cooling performance is variable due to changes in external temperature and humidity level. A residential cooler should be able to decrease the temperature of air to within 3 to 4 °C (5 to 7 °F) of the wet bulb temperature.
It is simple to predict cooler performance from standard weather report information. Because weather reports usually contain the dewpoint and relative humidity, but not the wet-bulb temperature, a psychrometric chart or a simple computer program must be used to compute the wet bulb temperature. Once the wet bulb temperature and the dry bulb temperature are identified, the cooling performance or leaving air temperature of the cooler may be determined.
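One such simple program is sketched below. It uses Stull's 2011 empirical fit for wet-bulb temperature from dry-bulb temperature and relative humidity, which is accurate to a fraction of a degree over ordinary surface conditions (roughly RH above 5%); a psychrometric chart remains the more exact reference:

```python
import math

# Stull (2011) approximation: wet-bulb temperature (C) from dry-bulb
# temperature (C) and relative humidity (%). Empirical fit; use a
# psychrometric chart when precision matters.
def wet_bulb_c(t_dry_c: float, rh_percent: float) -> float:
    return (t_dry_c * math.atan(0.151977 * math.sqrt(rh_percent + 8.313659))
            + math.atan(t_dry_c + rh_percent)
            - math.atan(rh_percent - 1.676331)
            + 0.00391838 * rh_percent ** 1.5 * math.atan(0.023101 * rh_percent)
            - 4.686035)

# The Las Vegas design day used in the example below: 42 C dry bulb, ~8% RH.
print(f"{wet_bulb_c(42.0, 8.0):.1f} C")  # ~18.6 C, close to the 19 C quoted
```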
For direct evaporative cooling, the direct saturation efficiency, ε, measures the extent to which the temperature of the air leaving the direct evaporative cooler approaches the wet-bulb temperature of the entering air. The direct saturation efficiency can be determined as follows:[32]

ε = 100 × (T_edb − T_ldb) / (T_edb − T_ewb)

where:
ε = direct evaporative cooling saturation efficiency (%)
T_edb = entering air dry-bulb temperature (°C)
T_ldb = leaving air dry-bulb temperature (°C)
T_ewb = entering air wet-bulb temperature (°C)
Evaporative media efficiency usually runs between 80% and 90%. The most efficient systems can bring the dry-bulb temperature 95% of the way down to the wet-bulb temperature; the least efficient achieve only 50%.[32] The evaporation efficiency drops very little over time.
Typical aspen pads used in residential evaporative coolers offer around 85% efficiency, while CELdek-type evaporative media offer efficiencies of over 90%, depending on air velocity. CELdek media is more often used in large commercial and industrial installations.
As an example, in Las Vegas, with a typical summer design day of 42 °C (108 °F) dry bulb and 19 °C (66 °F) wet bulb temperature or about 8% relative humidity, the leaving air temperature of a residential cooler with 85% efficiency would be:
T_ldb = 42 °C − [(42 °C − 19 °C) × 85%] = 22.45 °C or 72.41 °F
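The same arithmetic as a small function; the 85% default mirrors the aspen-pad efficiency quoted above, and any saturation efficiency from the formula can be passed in:

```python
# Leaving dry-bulb temperature from the saturation-efficiency relation above.
def leaving_temp_c(t_dry_c: float, t_wet_c: float,
                   efficiency: float = 0.85) -> float:
    return t_dry_c - efficiency * (t_dry_c - t_wet_c)

# Las Vegas design day: 42 C dry bulb, 19 C wet bulb, 85%-efficient media.
print(f"{leaving_temp_c(42.0, 19.0):.2f} C")  # -> 22.45 C (about 72.4 F)
```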
However, either of two methods can be used to estimate performance:
• Use a psychrometric chart to calculate wet bulb temperature, and then add 5–7 °F as described above.
• Use a rule of thumb which estimates that the wet bulb temperature is approximately equal to the ambient temperature, minus one third of the difference between the ambient temperature and the dew point. As before, add 5–7 °F as described above.
Some examples clarify this relationship:
• At 32 °C (90 °F) and 15% relative humidity, air may be cooled to nearly 16 °C (61 °F). The dew point for these conditions is 2 °C (36 °F).
• At 32 °C and 50% relative humidity, air may be cooled to about 24 °C (75 °F). The dew point for these conditions is 20 °C (68 °F).
• At 40 °C (104 °F) and 15% relative humidity, air may be cooled to nearly 21 °C (70 °F). The dew point for these conditions is 8 °C (46 °F).
(Cooling examples extracted from the June 25, 2000 University of Idaho publication, "Homewise").
Because evaporative coolers perform best in dry conditions, they are widely used and most effective in arid, desert regions such as the southwestern USA, northern Mexico, and Rajasthan.
The same equation indicates why evaporative coolers are of limited use in highly humid environments: for example, a hot August day in Tokyo may be 30 °C (86 °F) with 85% relative humidity, 1,005 hPa pressure. This gives a dew point of 27.2 °C (81.0 °F) and a wet-bulb temperature of 27.88 °C (82.18 °F). According to the formula above, at 85% efficiency air may be cooled only down to 28.2 °C (82.8 °F) which makes it quite impractical.
Comparison to other types of air conditioning
A misting fan
Comparison of evaporative cooling to refrigeration-based air conditioning:
Less expensive to install and operate
• Estimated cost for professional installation is about half or less that of central refrigerated air conditioning.[33]
• Estimated cost of operation is 1/8 that of refrigerated air conditioning.[34]
• No power spike when turned on due to lack of a compressor
• Power consumption is limited to the fan and water pump, which have a relatively low current draw at start-up.
• The working fluid is water. No special refrigerants, such as ammonia or CFCs, are used that could be toxic, expensive to replace, contribute to ozone depletion and/or be subject to stringent licensing and environmental regulations.
• Can be operated on home power inverter during power cuts. This is particularly useful in areas that experience frequent power outages.[35]
• Newly launched air coolers can be operated through remote control.[36]
Ease of installation and maintenance
• Equipment can be installed by mechanically-inclined users at drastically lower cost than refrigeration equipment which requires specialized skills and professional installation.
• The only two mechanical parts in most basic evaporative coolers are the fan motor and the water pump, both of which can be repaired or replaced at low cost and often by a mechanically inclined user, eliminating costly service calls to HVAC contractors.
Ventilation air
• The frequent and high volumetric flow rate of air traveling through the building reduces the "age-of-air" in the building dramatically.
• Evaporative cooling increases humidity. In dry climates, this may improve comfort and decrease static electricity problems.
• The pad itself acts as a rather effective air filter when properly maintained; it is capable of removing a variety of contaminants in air, including urban ozone caused by pollution,[citation needed] even in very dry weather. Refrigeration-based cooling systems lose this ability whenever there is not enough humidity in the air to keep the evaporator wet while providing a frequent trickle of condensation that washes out dissolved impurities removed from the air.
• Most evaporative coolers are unable to lower the air temperature as much as refrigerated air conditioning can.
• High dewpoint (humidity) conditions decrease the cooling capability of the evaporative cooler.
• No dehumidification. Traditional air conditioners remove moisture from the air, except in very dry locations where recirculation can lead to a buildup of humidity. Evaporative cooling adds moisture, and in humid climates, dryness may improve thermal comfort at higher temperatures.
• The air supplied by the evaporative cooler is generally 80–90% relative humidity and can cause interior humidity levels as high as 65%; very humid air reduces the evaporation rate of moisture from the skin, nose, lungs, and eyes.
• High humidity in air accelerates corrosion, particularly in the presence of dust. This can considerably reduce the life of electronics and other equipment.
• High humidity in air may cause condensation of water. This can be a problem for some situations (e.g., electrical equipment, computers, paper, books, old wood).
• Odors and other outdoor contaminants may be blown into the building unless sufficient filtering is in place.
Water use
• Evaporative coolers require a constant supply of water.
• Water high in mineral content (hard water) will leave mineral deposits on the pads and interior of the cooler. Depending on the type and concentration of minerals, the pads may pose safety hazards during replacement and waste disposal. Bleed-off and refill (purge pump) systems can reduce but not eliminate this problem. Installation of an inline water filter (refrigerator drinking water/ice maker type) will drastically reduce the mineral deposits.
Maintenance frequency
• Any mechanical components that can rust or corrode need regular cleaning or replacement due to the environment of high moisture and potentially heavy mineral deposits in areas with hard water.
• Evaporative media must be replaced on a regular basis to maintain cooling performance. Wood wool pads are inexpensive but require replacement every few months. Higher-efficiency rigid media is much more expensive but can last for a number of years, depending on water hardness; in areas with very hard water, rigid media may only last for two years before mineral scale build-up unacceptably degrades performance.
• In areas with cold winters, evaporative coolers must be drained and winterized to protect the water line and cooler from freeze damage and then de-winterized prior to the cooling season.
Health hazards
• An evaporative cooler is a common place for mosquito breeding. Numerous authorities consider an improperly maintained cooler to be a threat to public health.[37]
• Mold and bacteria may be dispersed into interior air from improperly maintained or defective systems, causing sick building syndrome and adverse effects for asthma and allergy sufferers.
• Wood wool of dry cooler pads can catch fire even from small sparks.
References
1. ^ Kheirabadi, Masoud (1991). Iranian cities: formation and development. Austin, TX: University of Texas Press. p. 36. ISBN 978-0-292-72468-6.
2. ^ Zellweger, John (1906). "Air filter and cooler". U.S. patent 838602.
3. ^ Essick, Bryant (1945). "Pad for evaporative coolers". U.S. patent 2391558.
4. ^ Landis, Scott (1998). The Workshop Book. Taunton Press. p. 120. ISBN 978-1-56158-271-6. evaporative cooler squirrel cage southwest popular.
6. ^ Such units were mounted on the passenger-side window of the vehicle; the window was rolled nearly all the way up, leaving only enough space for the vent which carried the cool air into the vehicle.
7. ^ a b c Givoni, Baruch (1994). Passive and Low-Energy Cooling of Buildings. Van Nostrand Reinhold.
8. ^ McDowall, R. (2006). Fundamentals of HVAC Systems, Elsevier, San Diego, page 16.
9. ^ Cryer, Pat. "Food storage in a working class London household in the 1900s". 1900s.org.uk. Retrieved 22 November 2013.
10. ^ Bonan, Gordon B. (13 June 2008). "Forests and Climate Change: Forcings, Feedbacks, and the Climate Benefits of Forests". Science. 320 (5882): 1444–9. Bibcode:2008Sci...320.1444B. doi:10.1126/science.1155121. PMID 18556546. S2CID 45466312.
11. ^ Verploegen, Eric; Rinker, Peter; Ognakossan, Kukom Edoh. "Evaporative Cooling Best Practices Guide" (PDF).
12. ^ Verploegen, Eric; Sanogo, Ousmane; Chagomoka, Takemore. "Evaporative Cooling Technologies for Improved Vegetable Storage in Mali - Evaluation" (PDF).
14. ^ Peck, John F.; Kessler, Helen J.; Lewis, Thompson L. (1979). "Monitoring, Evaluating, & Optimizing Two Stage Evaporative Cooling Techniques". Environmental Research Laboratory, University of Arizona.
15. ^ Kwok, Alison G.; Grondzik, Walter T. (2007). The green studio handbook: environmental strategies for schematic design. Architectural Press. ISBN 978-0-08-089052-4.
16. ^ Grondzik, Walter T.; Kwok, Alison G.; Stein, Benjamin; Reynolds, John S. (2010). Mechanical and Electrical Equipment. John Wiley & Sons.
17. ^ Energy Information Administration. "Annual Energy Review 2004". EIA. U.S. Department of Energy. Retrieved 12 December 2014.
18. ^ Maheshwari, G.P.; Al-Ragom, F.; Suri, R.K. (2001). "Energy-saving potential of an indirect evaporative cooler". Applied Energy. 69 (1): 69–76. doi:10.1016/S0306-2619(00)00066-0.
19. ^ See the "Independent Testing" tab, "Thermodynamic performance assessment of a novel air cooling cycle" and other papers. http://www.coolerado.com/products/material-resource-center/
20. ^ Coolerado Cooler Helps to Save Cooling Energy and Dollars: New Cooling Technology Targets Peak Load Reduction; Robichaud, R; 2007; https://www.osti.gov/biblio/908968-coolerado-cooler-helps-save-cooling-energy-dollars-new-cooling-technology-targets-peak-load-reduction
21. ^ "Coolerado Chills Snow & Ice Data Center". Data Center Knowledge. August 12, 2011.
22. ^ Maisotsenko cycle based counter and cross flow heat and mass exchanger: A computational study. 2017 International Conference on Energy Conservation and Efficiency (ICECE). Rasikh Tariq ; Nadeem Ahmed Sheikh. Publication Year: 2017, Page(s): 44 - 49
23. ^ Maisotsenko cycle: technology overview and energy-saving potential in cooling systems. Journal of Energy and Emission Control Technologies. 6 March 2015 Volume 2015:3 Pages 15—22. Emmanuel D Rogdakis, Dimitrios Nik Tertipis. Faculty of Mechanical Engineering, National Technical University of Athens, Athens, Greece
24. ^ "cold-SNAP: Eco-friendly air conditioning". Wyss Institute. 27 September 2019.
25. ^ (PDF) http://www.pge.com/includes/docs/pdfs/myhome/saveenergymoney/savingstips/evap/eedectechsheet.pdf
26. ^ Margolis, Jonathan. "Corrugated cardboard swamp cooler by Sundrop Farm". Theguardian.com. Retrieved 2018-09-25.
27. ^ "Sundrop Farm's system". Sundropfarms.com. 2014-06-20. Retrieved 2018-09-25.
28. ^ Torcellini, P; Long, N; Pless, S; Judkoff, R (February 2005). Evaluation of the Low-Energy Design and Energy Performance of the Zion National Park Visitors Center - Technical Report NREL/TP-550-34607 (PDF). p. 88. Retrieved 9 June 2020.
29. ^ "Evaporative Cooling Design Guidelines Manual for New Mexico Schools and Commercial Buildings" (PDF). December 2002. pp. 25–27. Retrieved 12 September 2015.
30. ^ Torcellini, P.; Pless, S.; Deru, M.; Long, N.; Judkoff, R. (2006). Lessons Learned from Case Studies of Six High-Performance Buildings - Technical Report NREL/TP-550-37542 (PDF).
31. ^ [1] Archived May 18, 2007, at the Wayback Machine
32. ^ a b HVAC Systems and Equipment (SI ed.). Atlanta, GA: American Society of Heating Refrigeration and Air-conditioning Engineers (ASHRAE). 2012. p. 41.1.
33. ^ Krigger, John; Dorsi, Chris (2004). Residential Energy: Cost Savings and Comfort for Existing Buildings (4th ed.). Saturn Resource Management. p. 207. ISBN 978-1-880120-12-5.
34. ^ "Evaporative cooler/ Evaporative cooler". Waterlinecooling.com. Retrieved 2013-11-22.
35. ^ "Air Cooler or Air Conditioner - Which is Better & Why?". energyconversiondevices.com. Retrieved 2021-07-03.
36. ^ "Latest technology stylish looks and compact build". bonafideresearch.com. Retrieved 2021-08-04.
37. ^ "A brief note on the NID Cooler" (PDF). Government of India - National Centre for Disease Control. Archived from the original (PDF) on 10 October 2017. Retrieved 22 November 2013.
External links
• Holladay, April (2001). "A swamp cooler cools air by evaporation". WonderQuest Weekly Q&A science column. USAToday.com. Retrieved 2006-07-14.
|
Why do certain drugs make the skin more sensitive to sun?
Physicians and pharmacists often advise patients to avoid prolonged exposure to sunlight while taking certain medications, without telling them why.
Patients who do not heed this warning may later find a red, itchy rash or sunburn in areas left unprotected from sunlight or the light emitted by tanning beds.
Medications that react with the skin in this manner are termed photosensitizers.
Examples include tetracycline and its derivatives, fluoroquinolone antibiotics (such as Cipro), sulfa-containing drugs (such as Bactrim) and the cardiac medication amiodarone (which is sold under the brand name Cordarone).
These photosensitizers, or chromophores, possess a unique ability to absorb ultraviolet light at the particular wavelength spectrum found in sunlight or artificial sunlight (UVA and UVB). This ability, however, is not the problem. Instead, the unique structural characteristics of these medications, such as halogenated aromatic rings or alternating single and double bonds, lead to the destabilization of their chemical structure and a transfer of energy that induces a buildup of damaging compounds in the skin.
What is photosensitivity?
Photosensitivity (or sun sensitivity) is inflammation of the skin induced by the combination of sunlight and certain medications or substances. This causes redness (erythema) of the skin and may look similar to sunburn. Both the photosensitizing medication or chemical and light source have to be present in order for a photosensitivity reaction to occur.
Generally, these reactions can be divided into two mechanisms:
Phototoxic drugs are much more common than photoallergic drugs.
1. Phototoxic reactions.
In phototoxic reactions, the drug may become activated by exposure to sunlight and cause damage to the skin. The skin’s appearance resembles sunburn, and the process is generally acute (has a fast onset). Ultraviolet A (UVA) radiation is most commonly associated with phototoxicity, but ultraviolet B (UVB) and visible light may also contribute to this reaction.
Rash from a phototoxic reaction is mainly confined to the sun-exposed area of the skin. A phototoxic reaction typically clears up once the drug is discontinued and has been cleared from the body, even after re-exposure to light.
Symptoms of phototoxic reaction:
Individuals with phototoxic reactions may initially complain of a burning and stinging sensation. Redness then typically occurs within 24 hours of sun exposure in the exposed areas of the body, such as the forehead, nose, hands, arms, and lips. In severe cases, sun-protected areas of skin may also be involved.
The range of skin damage may vary from mild redness to swelling to blister formation (bullae) in more severe cases. The rash from this photosensitivity reaction usually resolves with peeling and sloughing off (desquamation) of the affected skin within several days.
2. Photoallergic reactions.
In photoallergic reactions, ultraviolet exposure changes the structure of the drug so that it is seen by the body's immune system as an invader (antigen). The immune system initiates an allergic response and causes inflammation of the skin in the sun-exposed areas. These reactions usually resemble eczema and are generally chronic (long-lasting). Many drugs in this family are topical drugs.
This type of photosensitivity may recur after sun exposure even after the drug has cleared from the system and can sometimes spread to areas of the skin unexposed to the sun.
Symptoms of photoallergic reactions:
Individuals with photoallergic reactions may initially complain of itching (pruritus). This is then followed by redness and possibly swelling and eruption of the involved area. Because this is considered an allergic reaction, there may be no symptoms for many days when the drug is taken for the first time. Subsequent exposure to the drug and the sun may cause a more rapid response in 1-2 days.
It is important to note that not all people will develop a reaction to a photosensitizer.
Fair-skinned people may be more susceptible, much as they are to sun damage in general.
What is ultraviolet light?
Ultraviolet (UV) light is radiation energy in the form of invisible light waves. UV light is emitted by the sun and by tanning lamps.
The sun discharges three types of ultraviolet radiation:
1. Ultraviolet A (UVA)
2. Ultraviolet B (UVB)
3. Ultraviolet C (UVC)
Only UVA and UVB rays reach earth. (UVC does not penetrate the earth’s upper atmosphere.)
Tanning lamps also produce UVA and/or UVB. These artificial rays affect the skin in the same way as do UVA and UVB from the sun.
|
SSC General Science & Technology Sample Paper NCERT Sample Paper-1
During elections, a permanent chemical mark is put at the base of the nail of the index finger while exercising your franchise. This mark is not seen after two months or so because:
A) It is worn out by constant washing with soap and water.
B) Constant contact of the hands with hard water corrodes the mark
C) The formation of new nail, forcibly removes the old one.
D) It is gradually oxidised by tannin and caffeine that we ingest in tea and coffee respectively.
Correct Answer: C
Solution :
[c] Election ink, electoral stain or phosphoric ink is a semi-permanent ink or dye that is applied to the forefinger (usually) of voters during elections in order to prevent electoral fraud such as double voting. It is an effective method for countries where identification documents for citizens are not always standardised or institutionalised. Electoral stain typically contains a pigment for instant recognition and silver nitrate, which stains the skin on exposure to ultraviolet light, leaving a mark that is impossible to wash off and is only removed as external skin cells are replaced. Industry-standard electoral inks contain a 10%, 14% or 18% silver nitrate solution, depending on the length of time the mark is required to be visible. Although normally water-based, electoral stains occasionally contain a solvent such as alcohol to allow for faster drying, especially when used with dipping bottles, which may also contain a biocide to ensure bacteria aren't transferred from voter to voter.
|
Here's Why You Should Be Worried About Blue Light Exposure (and What to Do About It)
We spend a lot of money (and time) protecting ourselves from the sun's damaging UV rays. From sunscreen to sun-protective clothing, we're well aware by now of how important daily protection is, even when it's cloudy out. But did you know there are harmful rays lurking in your home or office that can be just as damaging as UV rays? We're talking about blue light, and you're exposed to it every time you use your smartphone, laptop or television. Before delving into why you should be concerned about exposure to blue light, let's find out exactly what it is.
What is blue light?
"Blue light is emitted from all handheld electronic devices as well as desktop and laptop computers and televisions," says leading holistic eye physician Dr. Marc Grossman. "It's the shortest wavelength light in the visible spectrum and causes significant damage to many parts of the eye, seriously impacting present and future vision capacity. Damage from blue light from smartphones is particularly important because smartphones are often used in dim light and are used close to the eyes. Unlike ordinary computer vision fatigue, damage from blue light is serious, cumulative and irreversible."
RealSelf contributor Dr. Joel Schlessinger adds that the sun also emits a big chunk of the blue light we're exposed to. That's right: the sun can do damage beyond skin cancer and wrinkles, and with our ever-increasing reliance on technology, blue light exposure is only increasing.
How does blue light affect us?
"Blue light exposure is cumulative and is one of the major risk factors contributing to frequency and severity of macular degeneration, the nation's leading cause of severe vision loss and legal blindness in adults," explains ophthalmologist Dr. Alan Mendelsohn. "More commonly than macular degeneration, blue light is the instigator of digital eye strain, which is characterized by eye fatigue, blurred vision, red and/or dry eyes, eye discomfort and decreased productivity. Sometimes headaches can also be present."
Schlessinger also notes that studies have found blue light to affect circadian rhythm, which can affect how well you're sleeping. "Sleep deprivation can cause premature aging, which is never a good thing, but it can also affect eyes because it's one of the types of light that human eyes are sensitive to. Blue light has some of the highest energy wavelengths and the shortest lengths, which causes flickering. This can cause eye strain, headaches and fatigue since our eyes don't come with a built-in blue light filter."
It's even worse for children
Blue light is especially problematic for children. "Anyone under the age of 9 is particularly vulnerable to eye damage from blue light emitted by computers, smartphones and other handheld devices," says Grossman. "Young children's eyes have not fully developed the protective pigment to help filter harmful blue light. Babies have little, if any, melanin pigment in their eyes, which is essential for filtering out blue light."
Most parents will tell you smart devices are imperative for keeping their kids entertained, but it's clearly important that they restrict the use of handheld devices in young children. At the very least, Grossman advises making sure your children consume plenty of fresh fruits and vegetables rich in carotenoids to help protect their eyes from blue light damage.
Reduce the risks from blue light exposure: Blue light filter glasses
Wear UV-resistant sunglasses when you're outside. "Favor sunglasses that are amber-tinted, reducing blue light rather than gray, green or bluish lenses," says Grossman. He also suggests investing in glare screens that reduce blue light for your desktop, laptop and handheld displays; limiting exposure to LED/fluorescent-based light in favor of lighting options that simulate daylight with broad-spectrum color; and following the 20-20-20 rule. "Every 20 minutes that you use the computer or any electronic device, look at least 20 feet away for at least 20 seconds. This gives the tiny muscles in your eye a chance to relax."
He also recommends blue light glasses like Baxter Blue or Healthy Office. "Blue light filter glasses are the best way to mitigate the harmful and uncomfortable symptoms of blue light," says Grossman. "These lenses are designed to effectively filter blue light without changing the clarity of the lens. You can also add anti-glare to the computer glasses for optimal comfort on the computer."
Baxter Blue's lenses have been specifically developed to filter out short-wave harmful blue light without changing the color perception of a digital screen. The blue light filtering component is part of the actual lens, not a coating on the outside; the lenses also have an anti-reflective coating that reduces the glare of digital screens to enhance the viewing experience, as well as offering 100 percent UV protection. Similarly, Healthy Office's versions are designed to block harsh blue light, reduce eye strain and improve sleep.
|
Use "shade" in a Sentence: Example Sentences
English words and Examples of Usage
Example Sentences for "shade"
This sun is too hot; let's go sit over there where it's shady.
We stood in the shade of a large tree and watched the children play in the hot sun.
We painted our kitchen and living room in differing shades of yellow.
The child shaded his eyes from the sun with his hand.
Yasmin sat in the shade of a palm tree, reading magazines, and drinking a cold beer.
Plant these bushes in a shady area with lots of drainage.
We put a bench in a shady area of the garden, so we could get out of the sun when we wanted to.
In Kashmir, there is a proverb which observes that where there is sunshine, there is also shade.
There is an Osage proverb which states that if you want a place in the sun, you must leave the shade of the family tree.
Thomas Hood wrote, "No shade, no shine, no butterflies, no bees, no fruits, no flowers, no leaves, no birds, November!"
The average human eye can distinguish about 500 different shades of gray.
Camels seem to prefer to rest in sunlight even if shade is available.
The use of shading in the painting conveys a sense of mass and volume which enhances the naturalistic effect of the scene.
Trees in the city of Bangkok are not tall enough to shade the elephants which are tourist attractions there, so they often get sunburnt.
Dogs only see shades of gray.
Find someone who likes to sit in the shade on hot, sunny days.
|
The Fruits of the Fourth Year
What are the laws related to fruit growing on a tree in its fourth year?
Rabbi Yirmiyohu Kaganoff
Question #1:
Rabbi Lamdan, a local talmid chacham, asks his Rav: "I have carefully studied this week’s parsha, which contains the Torah’s only mention of the mitzvah of neta reva’ie (fruit that grows during the fourth year of a tree’s existence). Yet, I cannot find a single allusion in the Torah to the laws of neta reva’ie as recorded by the halachic authorities! What information am I missing?"
Question #2:
Tikvah, always known for her intellectual honesty, inquires: "I feel like a hypocrite. Every day I pray for Moshiach to come and our return to the land of our fathers, and yet, I know little about the agricultural mitzvos of the Torah. If I truly hope for his imminent appearance, should I not be familiarizing myself with the laws that will apply when he arrives?"
Question #3:
When the Levy family moved into their spacious Waterbury home, they planted several fruit trees and grapevines, which are now producing luscious looking pears, apples and grapes. May they begin enjoying the fruit? Must they perform any special procedures before eating them?
What do these three questions have in common?
Understanding the basic laws of neta reva’ie and their source will enable us to answer both Rabbi Lamdan’s and the Levys’ questions, and at the same time will assist Tikvah in her search for truth.
First, the basics:
This week’s parsha proclaims:
"When you arrive in the Land, and you plant any tree for its fruit, you shall restrict its fruit; what is produced the first three years is restricted from you and may not be eaten. And in the fourth year, all its fruit shall be holy for praises to Hashem. Only in the fifth year may you eat its fruit – therefore, it will increase its produce for you, for I am Hashem, your G-d" (Vayikra 19:23-25).
The fruit produced in the first three years of a tree’s life is called orlah and is forbidden. The Torah refers to planting an eitz maachal, which I translated as a tree for its fruit, rather than a fruit tree. This is because Chazal understand that the prohibition of orlah applies only to a fruit tree planted for its fruit, and not to a fruit tree planted for a non-food purpose, such as for lumber or as a hedge (Orlah 1:1). This rule may affect the Levys, as I will later explain.
Although the Torah states only that orlah may not be eaten, the Torah shebe’al peh teaches that one may not benefit from it either. For this reason, one may not dye one’s skirt with orlah pomegranate peels, heat a house with orlah nutshells, or even feed orlah fruits and peels to animals. (In a different article, I discussed how one determines the end of the three prohibited crop years.) Although the mitzvah of orlah is obviously agricultural, it nevertheless applies to trees growing outside Eretz Yisrael.
Although the fourth year’s fruit is no longer orlah, it still has a special status. When the Torah discusses this produce, it states, "And in the fourth year, all its fruit shall be holy for praises (in Hebrew, kodesh hillulim) to Hashem." As Rabbi Lamdan correctly noted, the Torah’s entire description of the status of these fruits is these two words. What does this obscure phrase kodesh hillulim mean? What type of sanctity does the fruit manifest, and how does this result in praise?
The Gemara explains that the sanctity of the neta reva’ie fruit prohibits one from eating it until it has been redeemed (Berachos 35a). This act of redemption is itself praise to Hashem (Rashba ad loc.).
However, Rabbi Lamdan is not entirely satisfied with this answer. He knows that one redeems neta reva’ie only if one cannot eat the fruit in Yerushalayim, an aspect that the verse does not mention. Furthermore, the verse says nothing about the method of redemption, which, in fact, has many detailed halachos, as we will see.
We must research further.
We find another reference that might shed some light on the nature of neta reva’ie. Concerning the individuals exempted from going to war, the Torah states: "Who is the man who planted a vineyard, but he did not yet redeem it? He shall return to his house" (Devarim 20:6). Here the Torah alludes to the redeeming of a vineyard, although it mentions no details about when and how this happens (see Rashba, Berachos 35a). Although this verse does not answer any of Rabbi Lamdan’s questions, it does imply a new factor, heretofore unmentioned: that the mitzvah of neta reva’ie applies only to grapes. (In reality, the Gemara [Berachos 35a] cites a dispute whether neta reva’ie indeed applies only to grapes or to all fruits, a matter that we will soon discuss.)
Thus, our search for the sources for this mitzvah is still unresolved.
In fact, much of the law concerning neta reva’ie originates elsewhere. A mesorah, an oral tradition from Sinai, compares its sanctity to that of a different mitzvah, maaser sheni (Kiddushin 54a). There the Torah states:
"And you shall eat the maaser of your grain, your wine, and your olive oil …before Hashem your G-d, in the place where He will choose to rest His name -- so that you will thereby learn always to be in awe of Hashem. However, when you will be blessed by Hashem your G-d such that you will be unable to carry [the maaser sheni] as far as the place that Hashem chose, then you may exchange it for money that you subsequently take with you when you go to the place that Hashem chose. You may then exchange the money for cattle, sheep, wine or anything else you desire, and you shall eat there before Hashem your G-d, and in this way, you and your family will celebrate" (Devarim 14:23-26).
The Torah shebe’al peh teaches that "the place where He will choose to rest His name" refers to the city of Yerushalayim. Thus, we are to transport maaser sheni to Yerushalayim. However, if this is difficult, one may redeem the produce for coins instead, and the special sanctity of the maaser sheni transfers to the money. One adds an additional 25% to the money and brings it to Yerushalayim, where he purchases with it food to be eaten within the confines of the city. This acquisition transfers the maaser sheni sanctity from the money onto the food.
Whether one transports one’s maaser sheni produce itself to Yerushalayim or exchanges it for money, the farmer remains with a large value that may be consumed only in Yerushalayim, a city bursting with sanctity and special, holy people. The beauty of this mitzvah is that it entices the farmer to ascend to the Holy City and be part of the spiritual growth attainable only there.
One can even look at the maaser sheni as "vacation fund" money that the Torah provides. Although the farmer may not be wealthy, when he arrives in Yerushalayim, he can eat and drink like a king!
The Torah specifies that once in Yerushalayim, one may exchange the maaser sheni money for cattle, sheep, wine or anything else you desire, which seems both wordy and unusual. The Torah shebe'al peh interprets this to mean that one may not purchase just any food with maaser sheni money, but only those that grow either from the ground or on it. Therefore, one may use maaser sheni money to purchase fruit, vegetables, breads, pastry, meat or poultry, but not fish, which do not grow on the ground; not salt or water, which do not grow; and not mushrooms, which are fungi and also do not grow from or on the ground.
Both the original maaser sheni and food purchased with its redemption money are holy and may be eaten only within the walls of the old Yerushalayim and only when both the food and the individual eating it are tahor, ritually pure.
By the way, the area of today's Old City of Jerusalem is encompassed by walls constructed by the Ottoman Turks. The Turkish walls surround areas that probably were not part of the city at the times of Tanach and Chazal, and therefore those areas do not have the halachic sanctity of the Holy City; at the same time, without any question, large sections that do have the sanctity of the Holy City are outside these walls.
The fact that one must be tahor to consume maaser sheni changes the way one observes this mitzvah today, when achieving this status is virtually unattainable. Since we have no ashes of a parah adumah with which to purify ourselves of certain types of tumah, we cannot eat the produce of maaser sheni, nor the food purchased with the redeeming coins, since they have the same sanctity. Because of this problem, it is pointless to purchase food with these coins, and instead, they remain unused and are eventually destroyed. To avoid excessive loss, one may redeem large quantities of maaser sheni onto a very small value within a coin: this is the way we redeem maaser sheni today. Of course, we are missing the main spiritual gain of consuming the foods in Yerushalayim, but this is one of the many reasons for which we mourn the destruction of the Beis HaMikdash and pray daily for its restoration.
We now return to the laws of neta reva'ie. Although the Torah alludes only to the redemption of neta reva'ie fruits, the Torah shebe'al peh teaches us to apply the laws of maaser sheni to neta reva'ie, where the redemption serves the grower who is unable to transport his produce to Yerushalayim. Similarly, one may eat neta reva'ie itself only in Yerushalayim when tahor. Someone who cannot transport it there may redeem it by transferring its kedusha, holiness, to coins. When doing this, he adds 25% to the value, brings the money to Yerushalayim instead of the fruit, and there purchases food to eat in the Holy City. Just as redeeming maaser sheni still allows the grower to reap the spiritual benefits of his produce, so, too, redeeming reva'ie enables the grower to benefit from the Yerushalayim experience.
At this point, we can answer Rabbi Lamdan’s original inquiry. The extensive literature of the Mishnah, Gemara and halachic authorities concerning neta reva’ie assumes that the laws of neta reva’ie derive from those of maaser sheni, and that the purpose of the redemption of neta reva’ie produce is to allow someone with a bountiful reva’ie crop to benefit from the spiritual gains of his produce.
And just as we cannot make ourselves tahor today, and therefore cannot eat the produce of maaser sheni, we also cannot consume the neta reva'ie or the food purchased with its redemption coins, since they have the same sanctity. Because of this problem, and to avoid the loss that would result, we may transfer the kedusha of large quantities of neta reva'ie to a coin of small value. Again, we are missing the main spiritual gain of consuming the foods in Yerushalayim, and for this, too, we mourn the destruction of the Beis HaMikdash.
Having answered Rabbi Lamdan’s questions and also having addressed Tikvah’s concern, we will now tackle the questions raised by the Levys’ trees and vines. Does someone living outside Eretz Yisrael also merit fulfilling the mitzvah of neta reva’ie on his fruit? The Rishonim debate whether this mitzvah applies in chutz la’aretz, just as the mitzvah of orlah does, or if it is treated the same as most agricultural mitzvos that are exempt in chutz la’aretz. There are three basic approaches to this issue:
1. Some authorities contend that, since neta reva'ie is an agricultural mitzvah, it does not apply outside Eretz Yisrael, which is the usual, but not absolute, rule regarding these mitzvos (see Rambam, Hilchos Maachalos Asuros 10:16). Although orlah is an exception and applies even in chutz la'aretz because of a special halacha leMoshe miSinai, an oral tradition that Moshe received at Mount Sinai, reva'ie applies only in Eretz Yisrael, since it was not specifically included in the halacha leMoshe miSinai. Those who rule this way conclude that the Torah did not extend the spiritual benefits of these mitzvos to include produce grown outside Hashem's palace. Therefore, the Levys' trees are exempt from the mitzvah of neta reva'ie, and all fruit produced after the orlah years is available for consumption, without any redemption procedure.
2. On the opposite side, there are authorities who contend that the halacha leMoshe miSinai that requires that we observe orlah in chutz la’aretz also requires observing the mitzvah of reva’ie; Hashem wanted us to benefit from the mitzvah of neta reva’ie, even outside the Holy Land. Therefore, the fruit that grows on the Levys’ trees and vines in Waterbury during the fourth year have the sanctity of neta reva’ie (see Rabbeinu Yonah, Berachos, Chapter 6). This is the opinion that the Shulchan Aruch follows (Yoreh Deah 294:7). (For reasons beyond the scope of this article, reva’ie applies only when we are certain that the fruit grew in the fourth year, but not when we are uncertain whether it grew in the fourth year or the fifth.)
3. There is a third opinion that contends that reva’ie applies to grapes that grow in chutz la’aretz but not to other fruits (Tosafos, Kiddushin 2b s.v. esrog and Berachos 35a s.v. ulemaan). This is based on a dispute as to whether the mitzvah of reva’ie in Eretz Yisrael applies to all fruit trees, or only to grapes (Berachos 35a). Many authorities conclude that we rule leniently regarding produce grown in chutz la’aretz and therefore absolve all fruits from neta reva’ie, except for grapes (Rama and Gra to Yoreh Deah 294:7).
Thus, according to the Sefardic practice of following the Shulchan Aruch, the pears, apples and grapes of the fourth year growing in Waterbury have the status of reva'ie and require redemption. According to the Ashkenazic practice, the grapes require redemption, but not the pears or apples.
Note that the Torah states: "And in the fourth year, all its fruit shall be holy for praises to Hashem. Only in the fifth year may you eat its fruit – therefore, it will increase its produce for you, for I am Hashem your G-d" (Vayikra 19:23-25). We see that Hashem Himself promises that He will reward those who observe the laws of the first four years with a tremendous increase in the tree's produce in future years. May we soon see the day when we can bring our reva'ie and eat it while tahor within the rebuilt walls of Yerushalayim!
This shiur is also published on Rabbi Kaganoff's site.
|
Teenage Goal Setting Strategies to Help
There are significant benefits for teenagers who set goals. Goals can teach the difference between wants and needs, motivate teens to challenge themselves, and teach them to ask for assistance when necessary.
Focus on Quick Wins to Get Started
Teens need to get some "quick wins" when they start setting goals. For adults and teenagers alike, a fear of failure can sometimes prevent us from working on a goal.
They Want Freedom to Set Their Own Goals; Provide Suggestions & Structure
Encourage goals that are inside a teenager's control, rather than somebody else's. For example, replace the goal "get the lead in the school play" with "have my audition monologue completely memorized."
Help Them Understand Costs & Benefits
Fully understanding the costs and benefits of goals will help teens determine if a goal is worth it, and if so, how to prepare for it.
Teen Money Goals
Many people start their first jobs as teenagers. It's fun to use disposable income as soon as it hits a bank account, but it's better to put some of the money towards both short-term and long-term goals.
|
What is a pain disorder?
Pain disorder: you need to know more about pain issues if you want to treat them.
What is a pain disorder?
Pain is the primary symptom of a pain disorder, which is a physical ailment. Depending on the region, other symptoms may include swelling, tightness, and more. Most people suffer from myofascial pain, which occurs when trigger points form in the skeletal muscles. Arthritis, disc disease, and arthralgia are the next most common causes of joint discomfort. Neuropathic and neuralgic pain are also common and manageable. Being seen by a doctor right away often helps to avoid or reduce health issues.
Defining what a pain disorder is can be difficult. Chronic pain that has lasted more than six months and disrupts your daily life does not always indicate a chronic pain disorder. Chronic pain in one or more parts of the body is a symptom of a painful illness. In many cases, the pain is so intense that it prevents the sufferer from carrying out their daily routines. It could last for a few hours, or it could go on for years.
What are the different sorts of pain?
There is more than one way to classify pain; a given pain can fall into more than one of the following categories, and that is where the difficulty arises.
• Acute pain
• Chronic pain
• Neuropathic pain
Acute pain
When we talk about acute pain, we mean pain that lasts anywhere from a few minutes to three months at the most (sometimes up to six months). Several factors can cause acute pain, but among the most common are soft-tissue injuries and short illnesses. If a wound doesn't heal correctly or if the pain signals misfire, acute pain can turn into chronic pain.
Chronic pain
Chronic pain lasts a long time. It may be either intermittent or constant. Headaches are an example of an ailment that can be deemed chronic even though the discomfort isn't always present. Arthritis, fibromyalgia, and spinal conditions, including degenerative disc disease, are all common causes of chronic pain.
Neuropathic pain
Pain caused by injury to the nervous system is called neuropathic. It is commonly described as shooting, stabbing, or searing pain, or as a feeling of pins and needles. It can also reduce a person's sensitivity to touch and impair their ability to detect heat or cold. Chronic neuropathic pain is a frequent ailment. Symptoms can range from intermittent (meaning they come and go) to severe (meaning they interfere with day-to-day activities). Mobility difficulties can arise when the discomfort interferes with regular movement.
There are several hypotheses as to why people develop a pain problem.
• Theories of psychoanalysis hold that the body’s physical symptoms can be used to mask an individual’s inner conflicts or desires.
• Distressing symptoms may be the only way youngsters may express themselves when they cannot communicate or convey their thoughts verbally.
• Distress can manifest physically when mental illness is stigmatized by society, whether in families or communities.
• It is hypothesized that children learn to emulate family members or pick up on the benefits of being “ill.”
• As part of a family’s dynamics, a child’s position may be that of the ill one. Entanglement, over-protection, rigidity, or a lack of conflict resolution are a few possible explanations.
• Trauma and abuse, whether physical, psychological, or both, are frequently involved. This disorder is more common in victims of abuse, whether physical or sexual in nature, though people with a pain disorder may or may not have been abused in the past. The best chance of preventing a pain disorder is to get help as soon as the pain starts or begins to become chronic.
Many pain descriptions, including phantom limb, dyspareunia, orofacial pain, fibromyalgia, pelvic pain, and many others, have been linked to psychological dysfunction. This includes stomach pain, chest pain, and headache. No anatomical distribution of pain, discomfort in non-injured regions, pain that appears out of proportion to injury severity, and pain without damage have all been labeled as symptoms by medical professionals. Psychological abnormalities that underlie each of the pain symptoms must be taken into account by the assessors.
It’s critical to get medical attention if you want to alleviate your symptoms and resume normal daily activities. Some of the treatment options available are cognitive-behavioral therapy (CBT) or family therapy, proper antidepressant medication, and proper pain management. Using cognitive-behavioral therapy (CBT) can help alleviate the stress of dealing with chronic pain while also changing one’s outlook on one’s own health, which can help alleviate the related health anxiety and melancholy. In some instances, antidepressants can ease the pain while also reducing symptoms of depression and anxiety.
|
How to Build Networks Using a Hardware Router (Cont.)
By: Walter Metcalf
Date: 10/05/00
This week we continue our short series on the hardware router by further examining its advantages over using one of the PCs on the LAN to do the job. While the latter may be more flexible in certain situations, a dedicated hardware router has many advantages. In this article we shall continue to look at some of them:
First, however, here is a screen shot of the LinkSys BEFSR41. Its cool design puts the connectors at the back, out of the way, and multi-coloured LEDs at the front that give you a very complete picture of the activity taking place within both the LAN and the WAN.
B. Advantages of Using a Router (Cont.)
1. Any (client) workstation can ALWAYS access the Internet or any other LAN resource that is online.
   1. My family doesn't lose their connection when I reboot, turn off my computer, or perform maintenance on my main computer.
2. The router's Dynamic IP support successfully tracks IP changes made by the ISP's DHCP server.
   1. So now we are up virtually all the time. (Cable service interruptions in our area are very rare.)
3. DHCP Server
   1. Allows me to disconnect a computer from one cable and connect it to another without reconfiguring computers.
4. Supports Dynamic Routing.
   1. Allows multiple routers on the LAN.
      1. e.g., LinkSys & InJoy Firewall on my desktop.
      2. This might be useful in a situation where two users wanted direct exposure to the Internet. (The LinkSys router only supports one such user.)
   2. I don't currently have a need for this feature.
Diagram of Network with Router
If you compare this diagram with that of the Basic Network at the beginning of this series, you'll see that the crucial difference is that instead of one of the PCs standing between the (rest of the) LAN and the cable modem, here the router performs that function. This puts all the PCs on an equal footing, giving them all the same access to the Internet.
C. Router Installation
1. Collect Required Information from Cable Provider
   1. For now, assume a Static Connection (i.e., the IP is fixed).
      1. ISP-assigned IP and Subnet addresses
      2. ISP-assigned Default Route (aka Gateway)
      3. ISP-assigned DNS IP addresses
      4. ISP-assigned Computer Name & Workgroup Name
• Point browser at URL
   1. When the router is first connected to your computer, it takes its default IP address.
   2. Remember to assign your computer a different IP later on, when configuring the TCP/IP notebook.
• Initial Logon (a scripted sketch follows these steps)
1. Leave UserName blank
2. Enter Password = "admin"
3. Click on OK
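As an aside, the logon can also be scripted. The sketch below is hypothetical: it assumes the router's admin pages accept HTTP basic authentication with a blank username and the default password described above, and the address 192.168.1.1 is an assumption based on common LinkSys defaults rather than a value from this article; substitute your router's actual address.

```python
# Hypothetical sketch: fetch the router's admin page over HTTP basic auth.
# Requires the third-party "requests" package (pip install requests).
# ROUTER_IP is an assumption (a common LinkSys default), not a value from
# this article; substitute the address your router actually uses.
import requests

ROUTER_IP = "192.168.1.1"
response = requests.get(
    f"http://{ROUTER_IP}/",
    auth=("", "admin"),  # blank username, default password "admin"
    timeout=5,
)
print(response.status_code)  # 200 indicates the logon was accepted
```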
D. (Basic) Router Setup
1. Router Name
   1. the name of your computer
   2. supplied by provider
   3. not required by all providers
2. Domain Name
   1. supplied by provider
   2. not required by all providers (Note)
3. LAN IP Address
   1. Set to netmask
4. WAN IP Address
   1. If your provider uses Dynamic IP, select "Select IP address automatically".
   2. Otherwise, specify the address assigned by your provider.
5. PPPoE
   1. If your provider uses PPP over Ethernet, select Enable, and fill in the rest of the parameters with the correct values.
6. If all the values are OK, click on "Apply".
7. Click on "Continue".
• Next week we shall complete this mini-series appendix to the Peer Networking Series by showing how to modify the OS/2 networking objects to accommodate the router in both static and dynamic modes. Please join us then.
For Further Reading:
Digital Subscriber Line
A list of sites giving substantive information on DSL and alternative high-speed links to the Internet. Written by About Guide Bradley Mitchell.
Internet on Cable
Good discussion of Internet on Cable and how it differs from DSL. Especially relevant to Canadians. Written by AboutCanada Guide Marco den Ouden.
xDSL Technology
Practical information on xDSL technology including how to share a modem/cable/xDSL connection between multiple computers. Written by Randy Day.
|
Technician disassembles an electric vehicle battery with an insulated drill and gloves
Under the Hood: Lithium Power in Electric Vehicles
by Katelyn Tomaszewski
The need for lithium batteries is growing every day. Lithium batteries are used in a wide range of things, including cell phones, laptops, kid’s toys, energy storage solutions, and electric vehicles. Lithium used in today’s batteries is primarily mined, while only a small portion is reused. The biggest lithium producers are Australia, Chile, Argentina, and China. However, our lithium supply isn’t infinite.
What is lithium? How is it used in batteries?
Lithium has been a part of battery innovation since the 1970s. Lithium is an ideal chemical component for batteries because it is highly reactive and is much lighter than other metals used in batteries, such as lead.
Common lithium batteries are made up of 4 parts:
• Cathode
• Anode
• Electrolyte
• Separator
These batteries have several important advantages, such as being light in weight, having a low self-discharge rate, and having a high energy density per kilogram (kg). Lithium-ion batteries have grown in popularity due to being low-maintenance with high energy density (150 Wh of energy per kg). Lithium batteries also charge quickly and have a lower environmental impact after production, as compared to energy sources based on fossil fuels. They are ideal for EV batteries because they pack a lot of energy into a small battery, which allows for more energy extraction than other battery types.
EV Lithium Batteries
With the growth of lithium-based electric vehicles comes the need to recycle them. By 2040, it is estimated, more than half of new car sales will be electric vehicles, making it imperative that we set up a recycling stream for them now (Henze, 2018). EV batteries can be broken down, organized, and smelted into raw materials.
Currently, EV batteries that do not get recycled go through a high-temperature melting-and-extraction, or smelting, process. However, this is a very energy-intensive process, and the plants that run it are very costly both to build and to operate. The worst part of this process is that it doesn't recover all of the valuable battery materials. By recycling used EV batteries, we can reduce the need for new mineral extraction and provide the battery market with materials to reuse. The recycling process creates a steady stream of materials from a secondary commodity market, in turn supplying the manufacture of new lithium batteries, supporting corporate sustainability and reducing environmental impact.
Doing Our Part
With the increase in battery use in our everyday lives, it is imperative that we recycle our used batteries, whether EV or regular batteries. To find the sustainable path to recycling your EV batteries, visit our Disassembly Services page or call 800.852.8127.
published 02/26/21
|
Python is the most popular programming language in the present IT market, and it sets itself apart from other programming languages by serving multiple purposes. Being a Python developer has become one of the most preferred career choices among professionals, and the reason is simple: Python is emerging as one of the most powerful programming languages.
Looking at the features offered by Python, more and more companies now rely on this programming language to develop their projects across various sectors, including Artificial Intelligence, Data Science, Machine Learning, etc.
With an aim to help our readers learn some of the coolest features that have made this language popular, we present this guide, which discusses the following topics:
• What is Python?
• 11 Coolest Features Of Python Programming Language.
Now let us start discussing the first part of our blog.
What is Python?
Python is a general-purpose, object-oriented, easy-to-learn programming language. Compared to other programming languages, Python is easy to learn because its functions can be carried out with simpler commands and less text. This explains why Python has become the first choice among IT professionals for building their projects.
Python is used by startups as well as tech giants. Noted companies including Yahoo!, CERN, Google and NASA rely on this programming language. Famous applications such as YouTube, Instagram and Quora also use it.
Python is universal. Another advantage of using Python is that it provides various frameworks that make the development process much easier. Frameworks are collections of modules or packages that help in writing web applications. By automating the implementation of redundant tasks, frameworks cut development time and enable developers to focus on application logic rather than routine elements.
Did You Know? Python relies on an interpreter. Unlike other programming languages, it does not need a compiler.
Some of the top frameworks of Python are as follows (a minimal example using one of them appears after the list):
• Django
• Web2Py
• Flask
• Bottle
• CherryPy.
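As a minimal sketch of one of these frameworks, here is a tiny Flask application; the route and message are placeholders invented for illustration, not part of any particular project.

```python
# Minimal Flask sketch: one route that returns a plain-text response.
from flask import Flask

app = Flask(__name__)

@app.route("/")
def home():
    # Flask maps the URL path "/" to this function and sends its
    # return value back as the HTTP response body.
    return "Hello from Flask!"

if __name__ == "__main__":
    # Starts Flask's built-in development server (not for production use).
    app.run(debug=True)
```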
Python is now being used across different industries. Some of them are as follows:
• Web development
• Game development
• Machine Learning & Artificial Intelligence
• Desktop GUI
• Audio and Video applications
• Business applications.
Learning the Python programming language opens the door to many job opportunities. Some of the job profiles a Python developer may hold are as follows:
• Python Web Developer
• Software Engineer
• Automation Testing Engineer
• Data Analyst
• Data Scientist
• Machine Learning Engineer.
Some of the skills required to become a Python developer are core Python, object-relational mappers, web frameworks, design skills, file-handling concepts, exception handling, deep learning, machine learning, AI and many more.
11 Coolest Features Of Python Programming Language
Every programming language has its own unique features, and features matter a lot when choosing a particular programming language for a project. Below is a list of the coolest features of the Python programming language.
1. Supports OOP Concept:
Python is an object-oriented programming language, so the concepts of classes and objects come into play. It supports key OOP concepts such as inheritance, polymorphism and encapsulation. This feature helps programmers write reusable code and develop software applications with less code.
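To make these concepts concrete, here is a minimal sketch of inheritance, encapsulation and polymorphism; the class names are invented for illustration.

```python
# OOP sketch: a base class, two subclasses, and a polymorphic call.
class Animal:
    def __init__(self, name):
        self._name = name  # leading underscore marks it conventionally private

    def speak(self):
        raise NotImplementedError("subclasses supply their own speak()")

class Dog(Animal):  # Dog inherits __init__ from Animal
    def speak(self):
        return f"{self._name} says woof"

class Cat(Animal):
    def speak(self):
        return f"{self._name} says meow"

# Polymorphism: the same call works on any Animal subclass.
for pet in (Dog("Rex"), Cat("Misty")):
    print(pet.speak())
```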
2. Simple and Easy to Learn:
Python is considered one of the simplest and easiest programming languages to learn. Its easy syntax, simple setup and many practical applications in web development make it a very developer-friendly language. Any aspiring learner can learn to code easily and develop a project in less time.
3. Support for GUI:
GUI stands for Graphical User Interface. A GUI lets the user interact easily with the software. Python offers a large number of libraries, such as Tkinter, wxPython or JPython, for building graphical user interfaces for your applications.
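For instance, a few lines of Tkinter (which ships with the standard library) are enough to put a window with a working button on screen; the window title and button text are placeholders.

```python
# Minimal Tkinter sketch: a window containing one clickable button.
import tkinter as tk

root = tk.Tk()
root.title("Demo")  # placeholder window title

# command= is the callback Tkinter runs when the button is clicked.
button = tk.Button(root, text="Click me", command=lambda: print("clicked"))
button.pack(padx=20, pady=20)

root.mainloop()  # hand control to Tkinter's event loop
```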
4. Interpreted Language:
Python is an interpreted programming language, which means the Python interpreter executes code one line at a time. Developers do not need to compile Python code, which makes the debugging process easier and more efficient.
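A tiny demonstration of line-by-line execution: the first statement runs and produces output before the interpreter ever reaches the broken second statement, whose name is deliberately left undefined for this illustration.

```python
# Interpreted-execution sketch: statements run as they are reached.
print("this line runs first")

# The next line raises NameError only when the interpreter reaches it at
# runtime; there is no separate ahead-of-time compile step to reject it.
print(undefined_name)
```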
Did You Know? Python is a case-sensitive programming language. In Python myname and Myname are not the same.
5. Large Standard Library:
Python provides huge library support. These libraries can be imported at any time and used in a specific program. With these libraries, developers need not write all the code on their own; they can import what already exists in the libraries.
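For example, the imports below come entirely from the standard library; no third-party installation is needed, and the dictionary contents are placeholder values.

```python
# Standard-library sketch: JSON, dates and filesystem paths out of the box.
import json
from datetime import date
from pathlib import Path

data = {"today": date.today().isoformat(), "visits": 3}  # placeholder values
print(json.dumps(data))  # serialize the dict to a JSON string

# Path objects compose filesystem paths portably across operating systems.
print(Path.home() / "notes.txt")
```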
6. High-level Language:
Python is a high-level programming language. This means that when programmers develop a project using Python, they need not concern themselves with low-level details such as the underlying coding structures, machine architecture or memory management.
7. Highly portable:
One advantage of using Python for your projects is that it is highly portable: a Python program can be written on one machine and run on any other machine without changes. For example, if you have written a Python program on Windows and need to move it to another operating system such as Mac or Linux, you can do so without worrying much about changing the code. This helps programmers a lot.
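A minimal illustration of this point: the script below runs unchanged on Windows, Mac or Linux, and the standard library can report which platform it is running on.

```python
# Portability sketch: identical code on every major operating system.
import platform

# platform.system() returns e.g. 'Windows', 'Darwin' (Mac) or 'Linux'.
print(f"Running unchanged on: {platform.system()}")
```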
8. Supports Exception Handling:
The Python programming language supports exception handling. An exception is an event that occurs during program execution and can disrupt the normal flow of the program. Because Python supports exception handling, programmers can write less error-prone code and test the various scenarios that could cause an exception later on.
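Here is a small try/except/finally sketch; the function name is invented for illustration.

```python
# Exception-handling sketch: trap a predictable failure instead of crashing.
def safe_divide(a, b):
    try:
        return a / b
    except ZeroDivisionError:
        # Runs only when b == 0; any other error still propagates.
        return None
    finally:
        # Runs whether or not an exception occurred.
        print("division attempted")

print(safe_divide(10, 2))  # 5.0
print(safe_divide(10, 0))  # None
```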
9. Dynamically Typed:
Python is a dynamically typed programming language, which means the type of a variable is decided at runtime, not in advance. This saves developers time and increases efficiency, as there is no need to specify the type of a variable while coding.
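A two-line demonstration: the same name can be bound to values of different types, and the type is checked only at runtime.

```python
# Dynamic-typing sketch: a name's type follows the value bound to it.
x = 42
print(type(x))   # <class 'int'>

x = "forty-two"  # same name, now bound to a str
print(type(x))   # <class 'str'>
```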
10. Extendable and Scalability:
Python can be extended with other programming languages, which makes it an extensible language. It is made extensible by adding low-level modules to its interpreter. These modules enable developers to add to or modify their tools to be more efficient.
Did You Know? Python is the preferred programming language for working on Artificial Intelligence and Machine Learning projects.
11. Free and Open source:
Python is a freely available programming language. It is open source, which means the public has access to its source code. Any developer can download it, change it, use it and distribute it.
So these are some of the top features of Python. Apart from the features mentioned above, many others exist that help developers work efficiently on their projects and build them easily.
This has been a guide to some of the coolest features of the Python programming language. Undoubtedly, Python is one of the most powerful programming languages in the present IT market, and anyone learning this language can grab one of the most attractive jobs in the IT arena.
So if you are a professional willing to learn Python, we recommend visiting these online courses, which offer a bundle of Python courses designed by industry experts. These courses help you learn Python from scratch and guide you on the right path to becoming a Python professional.
If you have any questions pertaining to the topics discussed above, please share your feedback in the comment section. Do visit this blog again for similar posts with valuable information.
Thanks for reading!
|
Climate Change Will Make Beer Taste Different (Yes, Really)
Editor’s Note: This is a guest post by Colleen Doherty, an associate professor of molecular and structural biochemistry at NC State whose work focuses on the connection between time and stress in plants. This post is part of a series highlighting ways that NC State is helping us understand, mitigate and prepare for the impacts of climate change.
Although centuries old, beer is continually changing. New trends in everything from ingredients to brewing styles alter how a beer tastes. But not all changes are under our control. Beer is almost certainly going to taste different in the future, because the ingredients themselves are changing.
Specifically, beer won’t taste the same due to the effects of changing global temperatures on hops and other components of beer.
Climate and Cost
A 2018 report in Nature pointed out that beer prices may rise due to climate change. Increasing temperatures and more frequent droughts will drive up the costs of beer ingredients such as hops and barley. However, the impacts they identified are just the foam on the surface – the taste of beer will change too. That’s because changes in temperature and rainfall affect the biochemistry of beer ingredients like hops – and that makes them taste different.
But rising temperatures aren't the only problem. My research focuses on the fact that the timing of our current temperature increase is different from anything we've seen since agriculture started thousands of years ago. These changes in daily and seasonal temperature patterns – warmer nights, earlier springs – disrupt how plants function, hurt yields, affect the cost of the ingredients, and affect how beer tastes.
Taste and Terroir
A beer’s flavor comes from the different plants and yeast that go into making the beer: the chemical compounds in the barley (or other grains used) and hop cones; the starch from the grains; the yeast that turns sugar into alcohol. But, if we’re being honest, most of the flavor comes from the hops added to the beer.
Alpha acids from hops provide the bitterness we associate with many beers, while essential oils from hops give a brew its ‘hoppiness’ or flavor. These essential oils are compounds the plant produces, and breeders have cultivated different types of hops for many years to provide a range of flavor profiles.
Hops. Photo credit: Markus Spiske.
We are not 100% sure why hop plants originally produced these compounds. Based on other plant research, many scientists speculate that they are involved in defending the plant from bacterial and fungal pathogens. The essential oils are composed of many different chemicals, but some are more prominent than others when it comes to flavor. Myrcene is one of these compounds, and has the green, ‘fresh hop’ smell that many people associate with hops. Other components of the essential oils can give off a variety of woody, citrusy, or fruity flavors, to name just a few.
Each hop variety produces a unique flavor profile. These flavors depend on the combination of different chemicals in the hop cone’s essential oils. For example, if a hop cone produces more linalool or geraniol, it will have a flowery taste. In contrast, the presence of farnesene produces a woody, herbal aroma. The genetics of the plant contribute to this profile. If the hop variety has the genes present that enable it to create a compound like farnesene, it has the ‘genetic potential’ to make a woody flavor. The final flavor profile depends on the combination of compounds the plant produces in the cone at the time of harvest.
But genes are only one factor in determining that flavor profile; the environment the plant grows in also makes a difference. Terroir, a well-known term in wine-making circles, is the contribution the local environment has on a plant’s flavor profile. Terroir is the result of the soil, climate, weather patterns, even insect attacks that the plant has experienced before producing the edible product. Just as the environment’s effect on wine grapes affects the taste of the resulting bottle of wine, the environment hops grow in can affect the taste of the beer in your glass. The full effects of terroir on hops are unknown, but temperature appears to be a significant factor.
And hops are not the only beer ingredients influenced by where they are grown. Terroir also affects the starch component of beer: barley, oats, wheat, or rice. For these ingredients, the ratio of starches to proteins to lipids is essential. For example, barley varieties with high protein content in their seeds are less desirable for beer production. But even grain varieties that are optimal for beer production have altered protein and starch ratios when they are grown under stress conditions such as heat and drought. In other words, these stresses change the quality of the malt extract.
Hotter Nights
Changing temperatures impact how much hop, barley, and wheat we can grow. Temperature also affects the flavor profile of these ingredients. Therefore a change in temperature can alter both the cost and tastes of the final beer product. But the effects of heat are complicated. It isn’t just how much the temperature changes, but when that change happens. A warmer day can severely damage a young, tender plant or a plant at the reproductive stage when the heat-sensitive flower tissues are developing. However, the same temperature increase may have little impact on the same plant during a different stage of its development. Just as the plant moves through different developmental stages throughout the year, plants also move through different activity stages throughout the day. Some processes, such as flower development, are heat-sensitive. So, in many plants, these processes occur during the nighttime hours to protect them from the heat of the day. However, one majorly scary aspect of climate change, at least from the beer production perspective, is that nights are warming faster than days. (This is actually scary for lots of crops).
Grain. Photo credit: Michael Janek.
Suffering through a warm summer day in North Carolina, you can predict that the night will be cooler than the day, but not too cold. Likewise, in winter, even though it’s cold outside, there’s a good chance it will be even colder at night. The difference between the day and night temperature difference is relatively consistent all year round here in North Carolina. There’s always a few exceptions where a cold front moves through, and the night will be warmer than the day, but these are rare, happening no more than a few times per year. However, this consistent degree-difference between day and night is shrinking, mostly because nights are getting warmer.
Nights are warming faster than days due to something called the "boundary layer effect," which basically means that subtle changes in daytime temperature are amplified at night. One effect of this, for example, is that the number of nights where the temperature dips below freezing (32°F) has decreased over the last 50 years. And warmer nights also affect the compounds produced by hops.
Plants Do Things at the Right Time
Why does nighttime temperature affect the taste of plant products? Although plants don’t move around, they are incredibly attuned to when environmental changes happen. Plants get all the energy they need by just sitting still and collecting it. But this stationary lifestyle means they need to respond rapidly to environmental changes, since they can’t move away from them. One way plants fight environmental stresses is by essentially scheduling around them.
Many environmental stresses happen in a recurring pattern at a specific time of day. For example, I know traffic is going to be terrible going out of Raleigh at 5 PM on a Friday, so I can reduce my stress by leaving town at another time. Similarly, plants can coordinate the timing of their internal activities to avoid recurring stresses in the environment.
Keeping Time
I can use my watch to know when I need to leave to avoid traffic. But how do plants keep track of time? Like animals, plants have an internal circadian clock that controls their biological activities relative to the time of day and the season of the year. For example, before the first rays of sunlight hit a plant, the plant has already begun the processes necessary for photosynthesis – it will have its system ready to go by the time the sun is up.
This circadian clock also controls when plants respond to other repetitive environmental changes. For example, if a particular pest is only active during the day, the plant can fight that attacker off in the daytime, then save its resources by letting its guard down at night. We know timing is essential for these defense responses.
What does this have to do with the taste of beer in North Carolina? Plants fight bacteria, fungi, and bugs by producing chemical compounds. Interestingly, these compounds are what we associate with flavor. Many of the compounds that give hop plants their distinctive tastes are also defense compounds. Derivatives of myrcene and humulene, which give that 'hoppy smell,' are antibacterial and antimicrobial compounds. The distinctive hoppy smell of beers comes from the byproduct chemicals that are released as these compounds break down in the brewing process.
Beer and hops. Photo credit: Missy Fant.
The internal circadian clock also allows plants to coordinate their biological processes with recurring weather patterns, such as daily temperature cycles. The circadian clock can control what time of day and season different events, like flower development, occur. For example, the circadian clock controls the timing of heat-sensitive processes, restricting them to the coldest time of day, which is generally at night. A small temperature increase during this period is particularly damaging since the most heat-sensitive activities are consolidated to this period that the plant expects to be the coldest part of the day. These changes to timing are why, for plants, at least, the effects of climate change are a bigger problem than many people realize.
Use Temperature to Set Your Clock
You know that temperature difference we talked about, with nights being cooler than days? Plants use that temperature difference to set their circadian clocks. The circadian clock in any species must be adaptable to the environment, so that it can adjust to things like seasonal changes in daylight.
We know plants respond to changes in daily light (much as humans do), but we know very little about how plants use the difference between warm days and cool nights to set their clocks. However, we do know that some plants are sensitive to small changes in the daily temperature range.
In other words, warmer nights mean that the temperature difference between night and day is shrinking. And that can cause an effect in plants similar to jet-lag – they find it harder to set their circadian clock. In consequence, their clock may be less sensitive to changing conditions and can become out of sync with the environment.
What Does NC State Have to Do With Warmer Nighttime Temperatures and the Taste of Beer?
We are working to understand the effects of warmer nights on the chemicals that plants produce.
While nights are getting warmer, we still don’t know the full impact warmer nights will have on crops – including hops. In collaboration with Xu Li at NC State’s Plants for Human Health Institute, my lab is examining the effects of warmer nighttime temperatures on the compounds hop plants produce. Many of these compounds are antioxidants, antimicrobials, or other health-promoting nutraceuticals. Understanding how warmer nights affect the plant’s ability to allocate resources to making these compounds informs us how hop flavors will change. Importantly, these investigations also provide information on how these classes of compounds will be affected in other crops we rely on for food, as well as plant-derived medicines.
We are also researching how warmer nights affect different parts of the plant. Hops are a unique crop in that many parts of the plant are edible. The tender new shoots are a delicacy known as Poor Man’s Asparagus. (This is a total misnomer since hop shoots are among the most expensive vegetables to buy in Europe.) The hop leaves are edible, can be used to make delicious pasta, and contain many of the health-promoting compounds found in the hop cones. Even the cones themselves can be used for cooking, making tea, and contain health-promoting compounds. Identifying how warmer nights affect different plant tissues provides clues we can use to advance our understanding of how warmer nighttime temperatures disrupt the plant’s underlying biochemistry. This knowledge will be used to help us understand the effects on other crops.
Warmer Nights and Pathogens
Warmer nights also affect the behavior of pest species, which can have a significant impact on plants. Specifically, higher nighttime temperatures give pests more time to be active and increase the prevalence of diseases. For example, if the nights aren’t cooling off, fungal and bacterial pests get extra time to grow and attack plants.
In collaboration with fellow researchers Michael Kudenov in Engineering and Dahlia Nielsen in Bioinformatics, my lab is examining not just how the plants are impacted by warmer nights, but also how the interaction between plants and pathogens is affected. We’re also looking at how these changes in the balance between pests and plants affect the defense compounds the plant produces.
And changes in these defense compounds affect more than just the taste of beer. These antibacterial, antifungal, and antiviral components provide alternative sources of antibiotics and antimicrobials. For example, adding hop extracts to chicken feed may help reduce diseases in chickens.
How warmer nights and altered pest activity change the chemical composition of hops will help us understand the effects in other crops: the vitamins in our vegetables, the nutraceutical compounds in medicinal plants, and the overall nutritional quality of everything we grow.
This post was originally published in NC State News.
|
White Rainbows Frequent Only 2 Places in the World
[Science ★★★]
(P1) Waterfalls are among the most reliable places to catch a rainbow, but only two on the planet offer up a regular display of its close cousin, the moonbow: Cumberland Falls in Kentucky and Victoria Falls on the Zambia-Zimbabwe border. Also called a white rainbow, a moonbow appears when moonlight (which is sunlight reflected off the moon) in the days just before, during, and after a full moon hits the mist generated by the falls. Because we can't see colors well in low light, a moonbow appears white, reports BBC Travel, though photographers can use long exposures to capture its actual colors. Moonbows are occasionally, but not regularly, seen elsewhere in the world, including at Yosemite Falls in California.
(P2) What makes Cumberland Falls and Victoria Falls so unique is that they boast just the right amount of splash along a wide enough (rather than deep and narrow) gorge so that moonlight can reach down and across the mist. CNN notes that because sunlight is much stronger than moonlight, moonbows are rainbow’s fainter cousin. They’re temperamental in other ways: Cloudy nights can prevent the bow from forming, and Niagara Falls on the US-Canada border has lost its moonbow thanks to light pollution. Bustle reports on one photographer who in November caught a similar fogbow, which forms in the fog, arching over a solitary tree in Scotland; it went viral on Instagram and Twitter.
WORDS: 231
SOURCE: http://www.newser.com/story/236138/white-rainbows-frequent-only-2-places-in-the-world.html
VOCABULARY: mist, boast, splash, gorge, fainter, temperamental, arching
2. Have you ever seen a moonbow? If yes, describe it. If not, would you like to see one? Why or why not?
3. Are there any beautiful waterfalls in your country? If so, where?
4. Is light pollution a problem in the city or town you live in? If yes, how can the light pollution be reduced?
1. Where is one of the most dependable places to see a rainbow?
2. The moonbow is paler than a rainbow and appears _________ in color.
3. Who can catch the true colors of a moonbow?
4. Which is more powerful, sunlight or moonlight?
5. What was captured in Scotland that went viral on social media?
What do the following expressions or phrases mean?
• catch a rainbow (P1)
• offer up (P1)
• which is sunlight reflected off the moon (P1)
• across the mist (P2)
• caught a similar fogbow (P2)
Image source: by Ian Glendinning via AP http://www.newser.com/story/236138/white-rainbows-frequent-only-2-places-in-the-world.html
|
Computer Networking Concepts - Computers quiz 9
By CareerCadets
Computer Networking Concepts - Computers Online Quiz 9
Computer Networking Concepts - Computers quiz 9 is a free online quiz challenge under the Computer Networking Concepts - Computers category. There are 589 free online quiz challenges available in the Computers category.
_________ is a computer networking device that builds the connection with the other bridge networks which use the same protocol.
With reference to 'Near Field Communication Technology', which of the following statements is/are correct?
1. It is a contactless communication technology that uses electromagnetic radio fields.
2. NFC is designed for use by devices which can be at a distance of even a meter from each other.
3. NFC can use encryption when sending sensitive information.
Select the correct answer using the codes given below.
To open the disc, the mouse pointer is placed on the disc icon and then ________
You must install a ___ on a network if you want to share a broadband Internet connection.
What is the default subnet mask for a class C network?
In the ________ topology, each workstation is connected directly to each of the others.
What is the key element of a Protocol?
If you want to locate the hardware address of a local device, which protocol would you use
When internet data leaves your campus, it normally goes to a(n) ________ before moving toward its destination.
__________ is a telecommunications network or computer network that extends over a large geographical distance.
|
Five New Vitamin D and COVID-19 Studies
August 14, 2020
Since I last wrote about vitamin D at the end of May, five new studies have been released on the topic.
Can Genetics Shed Light on Whether the Association Is Causal?
Yesterday, a Mendelian randomization study was released as a preprint* that, in the words of the authors, “did not show any evidence of a causal association of 25OHD concentration with COVID-19 susceptibility and severity.”
Mendelian randomization studies look at genetic evidence that one factor causes another. Although they are observational, they are a sort of “natural experiment” that can mimic some of the benefits of a randomized controlled trial, which makes them well suited to assess cause-and-effect relationships. The idea is that we inherit our genes and they remain unchanged through our life, so we know that our behaviors, our social environments, and our health conditions aren't changing them. Within a population, genes are randomly distributed with respect to the topic we care about, so associations between a gene and a health outcome shouldn't be tangled up with confounding factors.
When we observe a correlation between vitamin D status and COVID-19 outcomes, most of the time we don't know whether this is because vitamin D reduces COVID-19 risk, because COVID-19 reduces vitamin D levels, or because some third factor drives the association between the two, such as preexisting conditions that increase COVID-19 risk while also independently driving down vitamin D levels, or time spent outdoors that decreases COVID-19 risk while also driving up vitamin D levels. Indeed, it could be a combination of any or all of these.
In principle, a Mendelian randomization study that looks for whether genes that alter vitamin D levels also alter COVID-19 risk can help address the question of whether the vitamin D-COVID-19 connection is causal. Since this study found no evidence for that, at first glance it falls on the side of the “not causal” interpretation.
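For readers who want to see the mechanics, here is a minimal sketch of the fixed-effect inverse-variance-weighted estimator that Mendelian randomization studies commonly use. This is my own illustration, not code from the paper; the function and variable names are hypothetical, and the inputs are assumed to be per-variant effect estimates from genome-wide association data.

```python
import numpy as np

def ivw_mr_estimate(beta_exposure, beta_outcome, se_outcome):
    """Fixed-effect inverse-variance-weighted Mendelian randomization estimate.

    beta_exposure: per-variant effects on the exposure (e.g., 25(OH)D levels)
    beta_outcome:  per-variant effects on the outcome (e.g., COVID-19 risk)
    se_outcome:    standard errors of the outcome effects
    Returns the pooled causal-effect estimate and its standard error.
    """
    beta_exposure = np.asarray(beta_exposure, dtype=float)
    beta_outcome = np.asarray(beta_outcome, dtype=float)
    se_outcome = np.asarray(se_outcome, dtype=float)

    # Each variant gives a Wald ratio: outcome effect per unit of exposure.
    wald_ratios = beta_outcome / beta_exposure
    # Weight each ratio by the precision it contributes.
    weights = (beta_exposure / se_outcome) ** 2

    estimate = np.sum(weights * wald_ratios) / np.sum(weights)
    standard_error = np.sqrt(1.0 / np.sum(weights))
    return estimate, standard_error
```

If the pooled estimate's confidence interval comfortably includes zero, as it did here, the genetic evidence fails to support a causal effect of the exposure on the outcome.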
In reality, Mendelian randomizations can be useful, but are subject to a number of limitations. Among them:
• Genes may affect multiple processes in the body. Just because a gene alters 25(OH)D levels does not mean that it doesn't do anything else that could impact COVID-19 risk.
• Genes often explain only a small fraction of a trait. This can make it hard to find an association without very large sample sizes.
In the case of this study, they used data from the UK Biobank, with 3523 COVID-19 patients, 536 severe patients, and a much larger group of controls. They based their analysis on 143 genes from the same dataset that were shown to impact 25(OH)D levels. There was no association between these genes and either COVID-19 infection or COVID-19 severity.
There are two major limitations I see with this paper:
• All together, the 143 genes identified in the UK Biobank only explain 1.7% of the variation in 25(OH)D levels. They may have even less influence over whether someone falls below a certain 25(OH)D threshold, such as 20 or 30 ng/mL, where most of the disease risk appears to lie.
• While some of these genes have fairly specific impacts on 25(OH)D levels, many of them have very indirect impacts on 25(OH)D and much broader impacts on many other things that happen in the body. For example, some of them control cholesterol levels or components involved in transporting things within cells. As such, this isn't a very specific test of how changing 25(OH)D impacts disease risk.
As a result of these limitations, I don't think this paper is very valuable for assessing the cause-and-effect relationship between vitamin D status and COVID-19 outcomes.
Is There a U-Shaped Curve?
Since the first study on vitamin D and COVID-19 came out, I have been concerned that there could be a U-shaped curve, where low vitamin D and high vitamin D both increase risk, with the lowest risk lying in the middle, at the bottom of the “U.”
The first study with a meaningful amount of data above 35 ng/mL was a Swiss study that provided hints there may be a U-shaped curve for those under the age of 70, but not for those over the age of 70.
Then a study from Chicago came out that found infection risk bottomed out at 11% in the 30-40 ng/mL range, and that the infection risk remained nearly identical, 12%, in the 76 people with 25(OH)D above 40 ng/mL. I didn't conclude much from that, because they didn't tell us anything about how high the 25(OH)D went above 40 or whether the infection risk might have changed at some point above 40.
The latest paper with the potential to shed light on the possible existence of a U-shaped curve has come from Israel.
The data came from the medical records of Leumit Health Services, an Israeli HMO. Just over 14,000 subjects of all ages were tested for COVID-19, and just over 7,800 within that group were also tested for vitamin D status. Among those who had both tests, just over 10% were positive for COVID-19 and just under 90% were negative.
The risk of infection was 50% greater for those with 25(OH)D less than 30 ng/mL, and the risk of hospitalization was more than double.
Adjusting for other factors such as age, sex, socioeconomic status, smoking, BMI, and various health conditions made the association between vitamin D status and hospitalization lose its statistical significance. But even in the adjusted model, poor vitamin D status still increased the risk 1.95-fold, with a 95% confidence interval of 0.99-4.78, so it only barely lost significance, and in light of so many other studies all finding the same thing, we can be quite confident the association wasn't driven by random chance.
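As a rough check on "barely lost significance," we can back out the approximate p-value implied by the reported confidence interval using the standard log-scale normal approximation. This is my own back-of-the-envelope sketch, not a calculation from the paper; because the reported 1.95 point estimate doesn't exactly match the geometric mean of the CI bounds (likely a rounding artifact), the sketch infers everything from the CI itself.

```python
import math

def p_from_ci(lower, upper):
    """Approximate two-sided p-value implied by a 95% CI for a ratio."""
    # Implied point estimate: geometric mean of the CI bounds.
    log_estimate = (math.log(lower) + math.log(upper)) / 2
    # Standard error on the log scale, recovered from the CI width.
    se = (math.log(upper) - math.log(lower)) / (2 * 1.96)
    z = abs(log_estimate) / se
    # Two-sided p-value from the normal distribution.
    return 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))

# The adjusted Israeli hospitalization model reported a 95% CI of 0.99-4.78:
print(p_from_ci(0.99, 4.78))  # ~0.053, a hair above the 0.05 cutoff
```

A p-value sitting just above the conventional 0.05 threshold is exactly what "barely lost significance" means here.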
The potential evidence about the U-shaped curve comes from Figure 2B:
You can see from the green area to the right of somewhere around 53 ng/mL that there is a small tail of people with 25(OH)D in the 50-75 ng/mL range, and there might have been a very few in the 75-100 ng/mL range, none of whom had COVID-19. Unfortunately, the graph does not make the number of people in this range clear.
Although they had no COVID-19 infections, everyone with vitamin D status that high was also under the age of 50. Being over 50 was a major independent risk factor for infection, but there was also a peak around age 25 that the authors attributed to large social gatherings.
So, this could be evidence against a U-shaped curve above 50 ng/mL in those under the age of 50, and might even be evidence that levels this high completely protect against infection.
However, that conflicts with both the Swiss paper and the Chicago paper. We now have three relevant papers:
• The Swiss paper found hints of a U-shaped curve under the age of 70, but not over the age of 70.
• The Israeli paper found hints of an abolition of infection risk above ~53 ng/mL in those under the age of 50, but did not have any evidence about these levels at all above the age of 50.
• The Chicago paper found that high levels of vitamin D in the 40-100 ng/mL range had a 12% infection risk, almost identical to the 11% risk in the 30-40 ng/mL range.
One possibility is that there is a U-shaped curve in some populations, but not others, and its existence depends on an interaction between age and either geographical location, ancestry, or something else that correlates with the country in which the study was conducted.
However, it is also entirely possible that all of this is just noise, resulting from so little data in the >50 ng/mL range. I strongly suspect it is noise.
While this paper adds somewhat to the evidence against a U-shaped curve, due to the overwhelming paucity of evidence, it still doesn't eliminate my concern.
Vitamin D Impacts Mortality, But Not ICU Admission or Hospitalization Duration in Iran
From a July 14 preprint: of 611 COVID-19 inpatients at the Sina Hospital in Tehran, Iran, 235 had data on vitamin D status. Those with 25(OH)D above 30 ng/mL had a lower risk of hypoxia, lower CRP, and higher lymphocytes, all of which bode well for survival.
Although vitamin D status had no relation to the duration of hospitalization or the likelihood of admission into the ICU, it was associated with a lower risk of severe infection and a much lower risk of mortality.
Severe infections were found in 63.6% of those with 25(OH)D above 30 ng/mL and 77.2% of those with 25(OH)D lower than that, representing a 21% increased risk of severe disease for those with poor vitamin D status.
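For the record, the 21% figure is simply the ratio of the two severity rates reported above; here is the one-line check:

```python
# Severe-disease rates from the Tehran cohort, as reported above.
relative_risk = 0.772 / 0.636  # low vitamin D vs. 25(OH)D above 30 ng/mL
print(f"{relative_risk:.2f}")  # prints 1.21, i.e., a 21% increased risk
```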
No one under the age of 40 died. Over the age of 40, 16.3% died. 20% of the over-40 patients overall had vitamin D status above 30 ng/mL, but only 9.7% of those who died did, and only 6.3% of those who died had 25(OH)D over 40 ng/mL.
Vitamin D Impacts ICU Admission, But Not Mortality, in England
From a June 25 preprint: of 134 COVID-19 patients admitted to the NHS Foundation Trust hospital in Newcastle, England, 66.4% had vitamin D <20 ng/mL, 37.3% had <12.5 ng/mL, and 21.6% had <6 ng/mL. Only 19% of those admitted to the ICU had >20 ng/mL, while almost double that proportion (39.1%) of non-ICU patients had 25(OH)D that high.
On the other hand, vitamin D status had no association with CRP or mortality.
This study basically found the inverse of the study from Iran: in Iran, vitamin D status was associated with CRP and mortality but not ICU admission; in England, vitamin D status was associated with ICU admission but not CRP or mortality.
Given that these are derived from hospital records rather than a large randomized controlled trial with standardized criteria, and that treatment protocols and ICU admissions criteria may be very different between the two countries, I don't think we should read that much into these differences. They share in common the finding that vitamin D status is strongly related to the severity of COVID-19 outcomes.
European Countries With the Worst Vitamin D Status Have the Worst COVID-19 Mortality Rates
From a July 1 preprint, the population data about 25(OH)D status in European countries can explain 58% of the variation in COVID-19 mortality rates.
The data on vitamin D status had to be from the last 10 years, had to cover adults aged 40-65 or a wider range, and had to include the prevalence of vitamin D deficiency, not just the average vitamin D status of the population.
Vitamin D deficiency defined as <10 ng/mL 25(OH)D explained 58% of the COVID-19 mortality rate counted in deaths per million people. This stayed the same after adjusting for the age structure of the population, which is important because vitamin D status tends to decline with age.
Vitamin D deficiency using a cutoff of <20 ng/mL only explained 27% of the mortality rate, and was not statistically significant.
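To make "explained X% of the mortality rate" concrete: these figures are the R² of a simple regression of national COVID-19 mortality on the national prevalence of vitamin D deficiency. Here is a minimal sketch of that computation; the numbers below are invented purely for illustration and are NOT the study's data.

```python
import numpy as np

def variance_explained(x, y):
    """R-squared from an ordinary least-squares fit of y on x."""
    slope, intercept = np.polyfit(x, y, 1)
    residuals = y - (slope * x + intercept)
    ss_res = np.sum(residuals ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return 1 - ss_res / ss_tot

# Hypothetical inputs (NOT from the preprint): prevalence of 25(OH)D
# < 10 ng/mL (%) and COVID-19 deaths per million in five made-up countries.
prevalence = np.array([5.0, 8.0, 12.0, 15.4, 20.0])
deaths_per_million = np.array([120.0, 210.0, 380.0, 635.0, 700.0])
print(variance_explained(prevalence, deaths_per_million))
```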
Bizarre Policies Around Sunshine and Vitamin D in the UK
Two of these papers highlight some bizarre approaches to sunshine and vitamin D in the UK.
The greatest COVID-19 mortality rates in Europe are in the UK where, at the time the manuscript was prepared, the COVID-19 death toll stood at 635 deaths per million people, the prevalence of 25(OH)D less than 20 ng/mL was 56.4%, and the prevalence of 25(OH)D less than 10 ng/mL was 15.4%.
The Newcastle paper noted that there are now several thousand cases of childhood rickets per year in the UK. We have known for nearly a century how to eradicate childhood rickets with vitamin D, at little cost to society or the individual. How is this possible?
From the Newcastle paper:
The UK's Scientific Advisory Committee on Nutrition (SACN) previously recommended universal supplementation with 400 IU (10 ug) daily of vitamin D3, which was endorsed by PHE (Public Health England) in 2016. Nevertheless, current NHS-England guidance to primary care explicitly discourages prescribing maintenance therapy with vitamin D3 due to concerns about cost-effectiveness.
There has been no meaningful promotion to the general public (e.g., via lay media) of over-counter supplementation with the onset of winter (as is routine in Scandinavia) nor any commensurate diminution of sun-avoidance messages to the public by UK medical & lay media in spring and summer. Indeed, both UK government ministers and senior police officers repeatedly stated that sunbathing — as opposed to exercising — in public places during lock-down was unacceptable behaviour.
In a country with such a high prevalence of severe vitamin D deficiency, how is a cheap, low-dose supplement not “cost-effective”? We don't know yet whether keeping vitamin D levels universally above 30 ng/mL or even 20 ng/mL would reduce COVID-19 incidence, severity, and mortality, but it would be cheap to do so and it might save hundreds of thousands of lives. We at least know that it would save thousands of children from rickets every year. How is that not cost-effective?
We also know that the outdoor air is extremely safe during the COVID-19 pandemic. Most transmission occurs indoors. Outdoors, the main risk is getting sneezed on, coughed on, or having a close-contact face-to-face conversation with someone over a long period of time. Lying motionless alone in the sun carries very little risk.
In a country with such a high prevalence of D deficiency, with the possibility that vitamin D may be key to reducing COVID-19 severity and mortality, the policy should encourage people to go outside and sunbathe, and should just focus on avoiding large crowds, encouraging masks when in close contact with strangers, and staying six feet away from strangers when possible. Sunbathing and all outdoor activities that can avoid crowds and close face-to-face contact should be strongly encouraged.
The Best News: Clinical Trials Coming!
The best news is that there are 24 clinical trials registered that will test the effect of vitamin D on COVID-19, 16 of which are testing vitamin D by itself and 6 of which will compare it to a placebo. These are all at the recruiting stage or not yet recruiting. It will be quite a while before we have any results, but these will finally settle the question of whether the vitamin D association represents cause-and-effect and whether vitamin D supplementation is effective as prevention or treatment.
The Overall State of the Vitamin D Evidence
Altogether, here is what we have so far:
• Vitamin D status below 20-30 ng/mL is associated with more severe disease or with mortality in COVID-19 patients from South Asian, Indonesian, Iranian, and European (Belgian and English) hospitals.
• For mortality in Indonesia, the association persists after adjusting for age, sex, and preexisting conditions. For severity as judged by CT scan in Belgium, the association exists only in males.
• Vitamin D status below 20-35 ng/mL is more weakly associated with infection risk in Switzerland, the United States, England, and Israel.
• In Switzerland, 30-35 ng/mL was associated with reduced risk mainly in those over 70 years of age. In England, infection risk correlated with vitamin D status lower than 20 ng/mL, but the association disappeared when adjusted for confounders (although the vitamin D data was from 10-14 years before the pandemic). In the US, infection risk correlated with vitamin D status lower than 30 ng/mL, but it only became statistically significant after very complicated statistical maneuvering. In Israel, infection risk and hospitalization were associated with vitamin D status below 30 ng/mL, and this largely persisted after adjusting for confounding variables.
• Three studies provide very limited insight into the possibility of a U-shaped curve: in Switzerland, there are hints of one under the age of 70, but not over the age of 70; in Israel, there are hints that there is not one and infection risk might be abolished over 53 ng/mL, but data is only from those under the age of 50; in Chicago, there are hints that 40-100 ng/mL gives the same infection risk as 30-40 ng/mL, but the distribution of the data was poorly described. This may indicate a U-shaped curve conditional on an interaction between age and country, but may just represent noise due to the paucity of data. I currently favor noise and am waiting for more data.
• The only study that looked at vitamin D supplement use was the Chicago study, which found no association with whether the most recent dose of vitamin D someone was taking was up to 1000 IU, 2000 IU, or equal to or greater than 3000 IU.
• Genetics that impact 25(OH)D do not, in aggregate, predict COVID-19 infection risk or severity, but genetics only explain 1.7% of the variation in 25(OH)D, and many of the genes have effects on many other things.
• That neither supplement use nor genetics are associated with risk is bearish for a cause-and-effect relationship, but neither of them provide strong evidence.
• There still are no studies using current or past 25(OH)D to predict future risk of COVID-19 incidence, severity, or mortality.
• There remains no evidence that can clearly distinguish between causal and non-causal interpretations. However, there are 24 clinical trials registered that will eventually provide us some answers.
My Other Vitamin D Posts
Here are my other posts on vitamin D and COVID-19, which cover anything listed in the section above that wasn't newly covered in this post:
The Bottom Line
Maintaining 25(OH)D in the 30-40 ng/mL range is adequate to reap any of the potential benefits identified in any of the studies done to date. While it is not clear that doing so will reduce the incidence and severity of COVID-19, it is clear that it might, and at little to no risk.
The studies do not all agree with one another on every point, but they universally point in the same direction: 25(OH)D above 30 ng/mL is associated with a lower risk of COVID-19 infection risk and even more strongly associated with lower severity and mortality. In some studies the association is stronger; in others it is weaker. In some, adjustment for confounders makes it disappear; in others, adjusting brings it out. In some, the cutoff is lower; in others, it is higher. But all of them suggest that having 25(OH)D above 30 ng/mL is, in some way, shape, or form, associated with lower COVID-19 risk.
The outdoor air is very safe and getting sunshine will provide many benefits, including vitamin D. Supplementing with vitamin D to keep levels of 25(OH)D above 30 ng/mL when needed is a no-brainer.
With respect to COVID-19, there is no rationale for keeping 25(OH)D levels any higher than 40 ng/mL. There is very little data on how this impacts risk, but it is unlikely to provide much benefit for COVID-19 and it is not yet entirely clear whether it is risk-free.
Therefore, I believe the best course of action is to maintain levels at 30-35 ng/mL, and to only go above 40 ng/mL if it is necessary to do so for other reasons.
Stay safe and healthy,
Please Support This Service
Support the service by purchasing a copy of The Food and Supplement Guide for the Coronavirus.
Get the guide for free when pre-ordering my Vitamins and Minerals 101 book.
Or, you can offer support by buying Testing Nutritional Status: The Ultimate Cheat Sheet
Get the guide for free, get 30-50% off the pre-orders of my book, and get the Cheat Sheet for 50% off when you sign up for the CMJ Masterpass. Masterpass membership also includes a storefront where 305 brands of supplements are all 35% off, permanently. These include over 500 different vitamin D supplements. Use the code COVID19 to save 10% off the membership price.
Here's a list of other ways to support my work: How You Can Support My Work
Here are three ways to discuss this topic, including asking me questions and getting a response:
• The Masterpass FREE Forum. This forum is free and open to anyone to participate. Anything related to health and nutrition, including all aspects of the coronavirus, is welcome. I will do my best to participate several times a week, though I expect this to eventually be very large and may at some point have to participate on a weekly basis if it starts to take on a life of its own.
• The Coronavirus Forum. This is for anyone who purchases The Food and Supplement Guide for the Coronavirus, pre-orders my upcoming Vitamins and Minerals 101 book, or joins the CMJ Masterpass (if you join, use the coupon code NEWSLETTER for 10% off the membership price). This forum is dedicated specifically to the coronavirus, has subsections based on topics (nutrition, medicine, lifestyle, mechanisms of disease), and has a section where the archive version of this newsletter is directly linked and each newsletter can be discussed as an individual thread. I consistently participate in this forum 3-5 times a week.
• The Masterpass Discussion Group. Reserved for those who join the CMJ Masterpass, it's the best place to ask me questions in a fairly intimate environment and get a rapid response. All topics I cover are fair game, and I consistently participate approximately five times per week. The Masterpass also has monthly live Zoom Q&As that are even more intimate.
I am not a medical doctor and this is not medical advice. I have a PhD in Nutritional Sciences and my expertise is in conducting and interpreting research related to my field. Please consult your physician before doing anything for prevention or treatment of COVID-19, and please seek the help of a physician immediately if you believe you may have COVID-19.
If you aren't subscribed to the research updates, you can sign up here.
You can access an archive of these updates here.
* The term “preprint” is often used in these updates. Preprints are studies destined for peer-reviewed journals that have yet to be peer-reviewed. Because COVID-19 is such a rapidly evolving disease and peer-review takes so long, most of the information circulating about the disease comes from preprints.
|
hamlet etymology
English word hamlet comes from Old French ham (Village.)
Detailed word origin of hamlet
Dictionary entry | Language | Definition
ham | Old French (fro) | Village.
hamel | Old French (fro) |
hamelet | Old French (fro) | Hamlet (small settlement).
hamelet | Middle English (enm) |
hamlet | English (eng) | (British) A village that does not have its own church; a small village or a group of houses; any of the fish of the genus Hypoplectrus in the family Serranidae.
|
keep (an amount of) balls in the air
To have a number of different activities in progress; to deal with or oversee several different things at once. Rather than focusing on a single project, Tara prefers to keep a number of balls in the air at once. I'm not surprised he's so burnt out—he was keeping way too many balls in the air at the same time. You can't keep all these balls in the air and expect to stay successful for long—you need to delegate some of these tasks to lower management.
See also: air, amount, ball, keep
keep balls in the air
juggle balls in the air
If you keep a lot of balls in the air, you deal with many different things at the same time. They had trouble keeping all their balls in the air. In management terms, they were trying to do too much and things were starting to break down. I really am juggling a hundred balls in the air at the same time and it isn't easy. Note: This expression uses the image of juggling, where someone has to keep throwing and catching a number of balls at the same time.
See also: air, ball, keep
Collins COBUILD Idioms Dictionary, 3rd ed. © HarperCollins Publishers 2012
|
Interesting Facts about Korea You Might not Know
Want to learn more about Korea?
Check out these interesting facts!
What are some basic facts about South Korea?
• Official Name: Republic of Korea (ROK)
• Government: Presidential Republic
• Population: 51.3 million people live in South Korea (2019).
Over half the population lives in the northwest area surrounding Seoul.
South Korea makes up 0.67% of the total world population.
• Capital: Seoul has around 10 million inhabitants with a density of roughly 17,000 people per square kilometer (45,000/square mile).
There are 25.6 million people living in the surrounding metropolitan area.
It is by far the largest city in Korea.
Other notable cities:
• Busan: population 3,678,555
• Incheon: population 2,628,000
• Daegu: population 2,566,540
• Daejeon: population 1,475,221
There has been an effort to move the capital to a new city called Sejong City, starting with government offices.
There are numerous planned cities around Seoul, known as “bed towns”, or places you simply sleep after work.
These include Bundang, Ilsan and Pangyo.
An interesting case of a planned city gone awry is Songdo, which was an ambitious project to create a place where cars aren’t necessary.
It was largely empty for a few years and would have been a great place to film a post-apocalyptic movie.
What is the origin of the Korean language?
Official Language: Korean
Korean is spoken by nearly 80 million people around the world.
This includes North and South Korea as well as expatriate communities in numerous countries.
84.5% of overseas Koreans actually live in only five countries (China, U.S.A, Japan, Canada and Australia).
The language family that Korean belongs to is disputed.
The Northern theory places Korean in the Altaic language family, making it related to Japanese, Mongolian, Finnish and Turkish.
The Southern theory claims it is a member of the Austronesian language family.
Some linguists believe it exists in a family of its own.
There is no definite answer to this question, given Korea’s long history of contact with China and Japan.
Korean has been heavily influenced by Chinese.
A large proportion of Korean words were either coined using Chinese characters or adopted directly.
In addition, there are many loanwords borrowed from English, Japanese and even German.
Koreans tend to be very gracious when it comes to non-Koreans speaking their language and are generally impressed if you can say “안녕하세요” (hello).
This can seem patronizing to some, especially after hearing “한국말 잘해요” (you speak Korean well) for the hundredth time.
What are some fun facts about Korea?
Literacy: More than 97.9% of Koreans can read and write, which is one of the highest literacy rates in the world.
The median age in South Korea is 41.8 years.
Currency: 1,119 South Korean Won equals 1 USD (Feb 2019)
Rough Childhood: All Korean males are required to perform a minimum of 18 months of military service.
They pay an entry-level Private 306,100 KRW (less than 300 USD) per month.
This can be avoided by winning a gold medal in the Olympics or having a preexisting health condition allowing one to perform public service in a government office.
Tremendous Growth: Within 50 years, Korea went from receiving the highest international aid per capita to being one of the richest countries in the world.
It’s not uncommon for the average Korean person to have a passcode lock on their front door with a security camera and a 100 Mbit/s internet connection.
Vast Resources: Korea does not have many valuable natural resources.
It has achieved growth through intensive education, long work hours and rapid industrialization.
Korea’s most valuable resource is its human resources.
Tough Job: For such a modernized country, Korea is relatively new to the whole democracy thing.
There have been a total of 12 presidents.
5 out of the last 7 former presidents have either been to prison or have committed suicide.
Big Business: Samsung comprises roughly 20% of Korea’s GDP.
They are known as a “문어발식 기업” or “Octopus Company” since they operate in numerous industries from hospitals to electronics to funeral homes.
They have a “cradle to grave” philosophy where they can deliver a baby and handle its every need until death.
Royal Family: Members of the last Korean royal family died out a few decades ago.
Jaebeol are the closest thing to Korea’s nobility.
They arrange marriages between notable families such as Hyundai and LG to strengthen alliances just like in feudal times.
They are both hated by the populace for “갑질” or “master actions” and envied for their wealth and power.
This is reflected in many K-dramas where a pretty, yet common girl will meet a 3rd generation Jaebeol man and fall in love.
The potential mother in law will promptly appear to slap the girl and/or offer her an envelope full of cash to get rid of her.
This is because the matriarch intended to marry her son off to a typically ill-mannered Jaebeol girl.
Spectator Sport: South Korea has been dominating eSports for the last two decades.
It is not uncommon for a star “athlete” to earn a six-figure income and have numerous fans.
The games are quite popular to watch and sell out whole stadiums.
What are some Korean inventions?
Jikji simgyeong (1377)
Printing with metal movable type – predates the Gutenberg Bible (1455) by 78 years.
Hwacha (1409)
Multiple rocket launcher – developed in the Joseon Dynasty and used during the Imjin War
Source: Ancient Origins
Learning Korean
Guided conversation is the fastest way to get fluent in Korean. Pimsleur takes you from 0 to conversational in 90 days. You can try Pimsleur here for free!
|
What is Rewilding for Humans?
Moon Lotus Crystals
Posted on March 29 2020
What is Rewilding for Humans?
Say you were dropped off into the wilderness for 30 days without food, water, shelter. There's no phone signal, so you can't call for help. No Wi-Fi so you can't Google, "How to survive in the wilderness". You just have you and whatever wilderness skills and knowledge you have accumulated in your lifetime. How would you do?
If you're like most of the modern population, you would be petrified without much of a clue of what to do. Most of us just don't have those skills anymore.
With few exceptions, most people move from a building to their car and back into a building. Even exercising is mostly done indoors. We are ignoring the glaring fact that we are animals that evolved to be in the sunshine.
How Did the Rewilding Movement Start?
One of the biggest misconceptions we have as humans in modern western society is that we are separate from nature, rather than an intricate part of the whole. We talk about nature as though it is something outside of us. This couldn't be further from the truth.
There is no doubt that the industrial age and the technology age have improved our lives, but one of the drawbacks is that we seem to think we are above nature. While the invention of the internet gives us access to so much information at our fingertips, it simply is not a substitute for the wisdom that comes from within.
Photo by Dominik Jirovský on Unsplash
In simple terms, we've become domesticated animals. This isn't all bad; there are undoubtedly some good things that have come along with this. However, there is no denying that our modern lifestyles are also taking a toll on our mental and physical health.
That's where rewilding for humans comes in.
What began as a movement to restore balance to the ecosystems by cultivating wilderness areas and reintroducing native plants and animals has created an offshoot which promotes ditching your digital gadgets and embracing the natural rhythms of life. It operates under the notion that we are wild animals, a part of nature and embracing this aspect of ourselves improves our overall health and well being.
How Do You Rewild?
As with most movements, there is a spectrum to rewilding. It can be as full or part-time as you want to make it. This isn't exactly a new movement either, although it is experiencing a boom in interest as society is realizing there is no substitute for nature.
Full time looks like leaving your 9 - 5 for greener pastures, literally. This is your off-gridders and survivalists. Those who go to live off the land and never look back.
What part-time rewilding looks like can vary from person to person. It may be as drastic as living outside for a few months of the year, or as simple as going for a walk on your lunch break.
Photo by Ruthie Martin on Unsplash
How Can I Rewild Myself?
This post is mainly for the latter. If you're looking to get a little wilder in your everyday life here are some tips to get you going:
Join a foraging class. Foraging has received a surge in popularity over recent years. It is a way to connect to the nature around you by being able to identify what plants are edible.
Take a camping trip or attend a rewilding retreat. Whether solo, with family, or friends, taking even just a few days to live in the great outdoors can refresh your mind, body, and spirit.
Unplug. Make a commitment to yourself to turn off your phone or any other device at least an hour before bed. Even better, put your phone in a different room of the house while you sleep. Many people report having more restful sleep when their phone is in a different room.
Rise with the sun. This simple lifestyle shift has been shown to increase serotonin levels, which allows you to start your day off in a better mood. Artificial lights have allowed us the opportunity to stay up as late as we want. But just because you can, does that mean that you should?
Ground yourself. Remember when you were a child and you wouldn't give a second thought to running around barefoot? Whether you knew it or not, you were practicing grounding. Take 30 minutes of your day to walk barefoot on the earth. There are many benefits to grounding, including improving sleep, reducing inflammation, lessening stress and much more.
Take your exercise routine outdoors. Our ancestors didn't intentionally work out. It was just their way of life. Spend less time trying to drag yourself to the gym, and get yourself moving around outdoors. Not only will you experience the physical benefits of working out, but you will also be breathing in cleaner air and soaking in vitamin D.
Plant a garden. Feeling your hands in the soil, planting seeds and nurturing them into plants can be quite rewarding as well as boost your mood while getting more sunshine.
Whatever you choose to do is not really important. The main point is to get outside and reestablish your connection to nature.
|
Archaeological Buildings of Homs Looted For Unique Stones
Homs' famous black stones are stolen from historical buildings and sold to foreigners by Lebanese emigrants
Activists of Homs confirmed that dozens of archaeological buildings in the war-torn city have been looted since the Syrian revolution erupted.
Activists explained that regime militias robbed the sites in two ways: either by removing the unique stones from buildings to re-build in a different place, or by collecting the remaining stones of demolished houses as a result of shelling and bombing.
Abu Walid, an activist, reported that many people who returned to their houses to restore them found them empty of historical relics, with only ordinary stones of no archaeological value remaining.
“Sadly, the distinctive features of the Homsi houses – like pools, arches, fountains and colored marble mosaics – have completely disappeared,” the activist told Zaman al-Wasl.
Most black stones, a distinctive feature of Homs’ buildings, were transferred to Lebanon for use in constructing houses there, Abu Ra’afat, a builder from Homs, claimed.
Abu Ra’afat confirmed to Zaman al-Wasl that he had built three houses within the last five months in Zahle, Jounieh and Baabda, using stones transported from Homs, as many affluent people prefer the old ornate stones.
Abu Ra’afat also mentioned that Homs’ stones are transported to Europe and Australia for people interested in old Arabic architecture.
The builder mentioned that antique traders in Hezbollah’s southern suburbs of Beirut facilitate the transportation of Homs’ distinctive stones, especially for Lebanese emigrants.
|
Renewable Energy Sources That Aren’t Solar and Wind
Renewable energy sources are all the buzz, especially as they are increasingly a necessary part of the growth and future of the United States and the world in general. While most of the attention goes to solar and wind energy, there are actually many more renewable energy options that people should be aware of. Here are a few renewable energy options that you should keep an eye on as we move toward a more renewable and sustainable future.
Hydropower
One of the most exciting possibilities for renewable energy is harnessing the power of water to produce electricity. According to the National Academies, water has been an essential part of producing energy for centuries, but now may be the time to harness its power on a larger scale. Much as wind passes through wind turbines, water passes through turbines of its own to power a generator. Water can move quickly and produce a large amount of energy without negative impacts or actually being used up, making it highly renewable.
Geothermal
According to Rabe Hardware, geothermal is among the cleanest and most efficient sources of energy. It is energy the earth already has stored beneath the surface, and it can be used in a variety of ways to produce usable energy. Geothermal energy is especially useful for heating and cooling purposes and can be a great way to harness renewable energy to power normal operations. Geothermal energy can be harnessed anywhere, but it is more viable in particular areas depending on the geological traits of the physical space.
Biomass and Biofuels
Another option is a move from using fossil fuels to focusing on biomass and biofuels. According to Energy Sage, these kinds of fuels are produced by plant and animal life, and are like traditional energy sources but much more renewable. They also cause less environmental damage. Biomass and biofuels do come with their own problems and requirements, but they are a great option when it comes to transitioning to renewable energy sources and attempting to make human life on earth more environmentally compatible.
There are many kinds of renewable energy that may play a part in making the future more sustainable. The more people understand and implement these energy sources the better it will be for the future of human life on earth. Each energy source has its own role to play and can be an important part of making a difference for the future of sustainable life.
Read this next: How 3D Printing Can Benefit Manufacturers
|
What Do We Know About Moses’s Burial Place?
Print Friendly, PDF & Email
Ezra W. Zuckerman Sivan
If a reader of the first four books of the Torah still harbors any doubts as to the paramount importance of Moses to Israel’s history, the book of Deuteronomy makes the matter crystal clear. The book consists almost entirely of Moses giving speeches, and no other individual human takes any action, not even his successor Joshua. And if the way Deuteronomy centers Moses points indirectly to his singular greatness, the closing verses of the book– and of the Torah itself– explicitly declare that Moses is the greatest prophet and leader Israel has ever had or will ever have. To be sure, the steps that God takes at the end of Deuteronomy ultimately curtail Moses’s ambitions and indeed his life. But they also reinforce the impression of his unique status. No previous patriarch was given a virtual aerial tour of the Promised Land; no one else is escorted to his death by God himself (“and he died… at God’s word” 34:5); and no one else has his burial place obscured such that “no man knew of his [place of] burial until this very day” (34:6).
Yet while this last aspect of Moses’s death may seem a straightforward indicator of Moses’s greatness, closer inspection uncovers a puzzle. After all, the Torah had previously ascribed great importance to recognizing the burial place of Israel’s leaders, beginning with Abraham’s purchase of the Cave of Makhpelah to bury his wife Sarah and future family members (Genesis 23). Indeed, Joseph had risked his standing in Pharaoh’s court by honoring his father Jacob’s request to bury him there (50:1-12) and Moses had made sure to take Joseph’s bones with Israel when they left Egypt (Exodus 13:19). Why then the sudden break from precedent? Why did God want to hide Moses’s body?
Ralbag (R. Levi ben Gershon, France, 1288-1344) offers an intriguing answer, one that is both more compelling and more flawed than it might seem at first glance. His reasoning is as follows:
If the place of his burial would be known, future generations would err by making a god out of him considering what had become publicized about the wonders he had performed. After all, some members of Israel erred in the case of the copper serpent that Moses made, considering the greatness of the man who made it… (Deuteronomy 34:6, ad loc.)
Ralbag is referring here to the mysterious incident of Numbers 21:4-9. God had let loose “fiery serpents” on the complaining people, which killed “a vast multitude of Israel” by biting them. Moses responded by devising a copper serpent that protected Israel when they gazed upon it. We learn from II Kings 18:4 that over the next several generations this copper serpent apparently became an idol, and Ralbag interprets this idolatry as a form of Moses-worship.
Ralbag’s worries about Moses-worship may seem far-fetched until we do a close reading of Deuteronomy 4:15-24. This is the second half of a short speech in which Moses exhorts Israel on proper worship of God and teaches what they should learn from their experience of revelation. It is here that Moses provides explicit instruction in what has come to be known (in reference to Maimonides’ Guide to the Perplexed) as “negative theology”– i.e., what God is not and what Israel should therefore not worship. It is also here that Moses first mentions that God has decreed that he will soon die.[1]
Moses begins the second half of his speech by stressing that the people saw no image at Sinai but had instead only heard the sounds and seen the fire of God. Moses then proceeds to run through a list of six examples of images the people must not worship upon arrival in Canaan. In particular, Moses tells them that they must not worship the form of a) a male or a female [person] (4:16); b) a land animal or c) a winged bird in the sky (4:17); d) anything that creeps on the ground or e) any fish that swims in the sea (4:18); or f) the sun or the moon or stars (4:19).[2] Moses concludes this speech a few verses later on the same theme: by reemphasizing the importance of not forgetting the covenant and coming to make graven images. He also reminds them that God is a consuming fire, a jealous God (4:23-24).
Ralbag’s Moses-worship theory receives some support from the examples that top the list: “male or female [person].” But there is also strong subtextual evidence that Moses was specifically worried that he himself would come to be deified. Consider the three verses that separate Moses’s rundown of the six types of “non-gods” (4:15-19) and the concluding verses (4:23-24):
And you, God took you. He brought you out of that iron blast furnace, out of Egypt, to be for Him a people of legacy, as is now the case. (4:20)
And/But God became enraged with me, And he swore that I would be barred from crossing the Jordan and barred from coming to the good land that God is giving you as a legacy (4:21).
For I am indeed dying in this land; I am not crossing the Jordan. And/but you will cross and inherit this good land (4:22).
These verses seem like a strange digression. It seems especially odd that Moses selected this moment to tell the people that he was about to die (Moses had mentioned earlier [Deuteronomy 1:37; 3:23-36] that he wouldn’t be allowed to cross the Jordan; but that didn’t necessarily mean his death was imminent). But the juxtaposition makes sense once we recognize that they had very good reason to see Moses as a godlike figure.
After all, he and two acolytes who were utterly deferential to him, Joshua and Caleb, were the sole survivors of the previous generation, and this new generation had grown up on stories in which the key agents of their salvation were God (whom they could not see or experience directly) and Moses (whom they could). Furthermore, while Moses was nominally an old man, his “eyes were undimmed and his vigor was undiminished” at the time of his death (34:7). It was also well known that he had lived for forty days and nights without food or drink (9:9,18) and that he had acquired a divine aura at that time, such that the people could not stand before him unless he were veiled (Exodus 34:33-35). If anyone should ever have seemed immortal to any group of human beings, it would have been Moses in the eyes of Israel.
Finally, Moses seemed to have godlike powers at a time when God’s presence was hard to discern. Of the three miraculous salvation events that had occurred to this generation– a) water flowing from a struck rock to meet their needs (“Waters of Strife”; Numbers 20:1-13); b) healing from snake bites provided by viewing a copper serpent (“Copper Snake”; 21:4-9); and c) a plague arrested by the spearing of sinners (“Sin of Ba’al Pe’or.” 25:1-15)– Moses was the key agent in two of them, taking a great deal of initiative (as Ralbag notes) in each. To be sure, the Torah credits God with being the force behind these events, and informs us that He had privately chastised Moses for errors in the first case. But all the people could see with their own eyes was what Moses (and in the third case, Phineas) had done: taken actions responsible for miraculously saving them.
And so by interjecting his own impending death due to God’s wrath after the list of non-gods, Moses seems to be hinting very loudly here that the people should banish any thoughts from their minds that he is a god. Indeed, by discussing his death in the context of a narrative about the formation of Israel and instructions to Israel about its future over many generations, Moses is implicitly contrasting his own mortality and weakness with the people’s immortality and greatness: yes he will die but they will live on to fulfill their legacy.[3] Just a few verses earlier, Moses hints that this contrast is not coincidental but causal (3:26): “God was angered with me for your sake, and didn’t listen to [my request to go in the land].”
Consistent with Ralbag’s theory then, the climax of Deuteronomy seems to complete a logical circle begun at the beginning of the book. On the one hand, this climax affirms Moses as a uniquely great leader. But it also reinforces the message that Moses is fully mortal and an inappropriate target of worship. Hiding his body seems key to accomplishing the latter goal.
The problem: The body isn’t missing
Consider the following riddle: If you were God and you wanted to obscure the place of a person’s burial, how would you do it? When I have posed this question to friends, they invariably paraphrase Moses in his famous characterization of the unattainable as above the heavens or “beyond the sea” (30:13). In fact, God does the former in the case of Elijah, whisking his body into the heavens with only one eyewitness as to the spot of ascent (II Kings 2). And Jonah famously gets a reprieve from the latter type of death. When God wants to hide a body, He knows very well how to do it. But did He actually hide Moses’s body?
To the contrary, the text tells us precisely where Moses is buried. The beginning of the very verse (34:6) that informs us that “and no man knew his burial (place) to this very day” begins with “And He/he buried him in the valley, in the land of Moab, opposite the House of Pe’or.”
One may be tempted to object that this is in fact imprecise. Since Moses was apparently alone when he was interred (no other human is described as ascending with him to Mt. Nebo, and the ambiguously-described agent of burial “he” is variously interpreted as God or Moses himself), perhaps his body was effectively hidden. Moreover, “the valley” is rather vague and it is unclear who was informed that Moses’s body was buried there rather than on top of the mountain.
But these objections quickly dissolve when we consider two straightforward points. First, there actually should be no ambiguity for those seeking Moses’s grave about which valley is intended. After all, it was mentioned towards the beginning of Deuteronomy as the very place where they had been encamped for awhile: “And (since God denied my final request to go into the Land), we have been living in the valley, opposite the House of Pe’or” (3:29).
Second, one does not need to identify a holy man’s body in order to build a shrine to him. Consider how many shrines we have today– Jewish and non-Jewish– where it is doubtful that the revered saint is actually buried there. Certainly the amount of information provided here is more than sufficient for Moses’s admirers to construct a shrine to him. Moreover, even if they were not informed that the body was buried in the valley, the shrine could have been built on the nearby Mt. Nebo.
Our original question has thus been sharpened. On the one hand, Ralbag seems to be correct that there is a serious threat of Moses-worship hanging over Israel, and it seems plausible that obscuring knowledge of Moses’s burial place could be helpful in countering this threat. Moreover, the text goes out of its way to tell us that they in fact did not know Moses’s burial place. But whereas we might be tempted to infer that God had put the body out of reach, the text had already told us that he was buried in a well-known location. It is almost as if the people were being encouraged to engage in Moses-worship.
The Most Familiar Place in the World
To begin developing a resolution to this puzzle, let us take an even closer look at what we know about the location of Moses’s burial. As discussed, the Torah is quite specific about this spot: “in the valley, in the land of Moab, opposite the House of Pe’or” (34:6). We have also noted that Moses had earlier (3:29) named “in the valley, opposite the House of Pe’or” as the place where they had dwelled for an extended period of time. And yet as we look further into the identity of this place, we begin to see that it has a wide array of names and associations. For instance, the third of the three biblical mentions of “in the valley, opposite the House of Pe’or” indicates that it is not in the land of Moab but rather “in the land of Sihon, King of the Amorites, who reigns from Heshbon, whom Moses and Israel smote as part of the exodus from Egypt” (4:46). This attribution is not inconsistent with the others once we recall that Sihon had taken over this territory from Moab (Numbers 21:27-30). Furthermore, given the use of the mountain overlook by King Balak of Moab (22:41), it was apparently a border region (21:13). Thus while each of these references points to exactly the same geographic location (“the valley, opposite the House of Pe’or”), they evoke a multiplicity of historical and political associations (Balak/Moabites, Sihon/Amorites).
Indeed, now observe how this same specific geographic location apparently goes by multiple names, each with a different resonance. In particular, the overview of Israel’s encampments in Numbers 33 (the beginning of Parashat Mas’ei) ends as follows:
And they traveled from ‘Almon-Divlataimah and they encamped on the mountains of the ‘Avarim (“Crossings”), before Nebo.
And they traveled from the mountains of ‘Avarim and they encamped at the Plains of Moab, on the Jordan near Jericho.
And they encamped on the Jordan, from Beit Ha-Yeshimot (“House of the Wastelands”) and until Avel Ha-Shitim (“Mourning-Place of the Acacias”) on the Plains of Moab (33:47-49).
Given that Nebo is one name of the mountain that Moses ascended to die (Deuteronomy 34:1) and that ‘Avarim is another name for that mountain (32:49), it appears that all three of these verses describe places in close proximity to one another. Note further that the last two verses do not describe two different camps (no traveling between them is mentioned) but provide first a more general and then a more specific way of describing the very same encampment.[4] And note further that that neither of these names overlaps linguistically with the way this encampment is described in Deuteronomy (3:29): “in the valley, opposite the House of Pe’or.” And yet, these two names do overlap thematically. In particular, just as “House of Pe’or” recalls the Sin of Ba’al Pe’or (where Israel’s licentiousness and idolatry elicited a divine plague) so does Avel Ha-Shittim (“Mourning-Place of the Acacias”) given that this sin occurred at Shittim (“Acacias”; Numbers 25:1) and that it led to mass death.[5]
From a review of Numbers 21-22:1 and 33:38-49 and Deuteronomy 1:3, it appears that by the beginning of Deuteronomy, it had been six months since they had buried Aaron on Hor Ha-Har (at the end of the fifth month of the 40th year) and that they had spent the majority of the intervening time based at this particular encampment and its close environs. The people were also to spend one more month at this site mourning Moses, and would not depart for Jericho for a few more days. While this seven-month period may be longer than the time they had spent at many other campsites, it is less time than the year the previous generation had spent at Sinai; and it is clearly less than the “many days” they had spent at Kadesh (1:46) or at Mt. Seir (2:1). And yet, it is remarkable how much memorable history had transpired over this short amount of time– accounting for many more chapters than had taken place at any place since Sinai and certainly the most in this generation.
Accordingly, now consider the following table, which includes fifteen names (including those already discussed) of places within the vicinity encompassed by these three verses, as well as the stories (recounted in Numbers, Deuteronomy, and Joshua) with which they are associated:
Name | Translation | Source | Relevant Story/ies
1. ‘Iyei Ha-Avarim | Wastelands[6] of the Crossings | Numbers 21:11 | After Copper Serpent Incident
2. Nahal Zered | Wadi Zered | Numbers 21:12 | After Copper Serpent Incident
3. Be’erah | Towards the Well | Numbers 21:16 | Digging/Song of the Well; Perhaps Waters of Strife
4. ((Ha-)Nishkaf/ah al pnei) Ha-Yeshimon | (Overlook over) The Wasteland | Numbers 21:20, 23:28 | Song of the Well; Balaam’s Failed Curses
5. Bamot Ha-Gai | Altars of the Valley | Numbers 21:20 | Digging/Song of the Well
6. Rosh Ha-Pisgah | Top of the Cliff/Summit | Numbers 21:20, 23:14; Deuteronomy 3:27, 34:1 | Digging/Song of the Well; Balaam’s Failed Curses; Moses’s Death
7. Bamot Ba’al | Altars of Ba’al | Numbers 21:28, 22:41; Joshua 13:17 | Sihon’s victory over Moab; Balaam’s Failed Curses; Victory over Sihon
8. Shittim | Acacias | Numbers 25:1; Joshua 2:1, 3:1 | Sin of Ba’al Pe’or; Crossing of the Jordan
9. Har Ha-Avarim | Mountain of the Crossings | Numbers 27:12; Deuteronomy 32:49 | Moses’s Death
10. (Har) Nevo | (Mt.) Nebo | Numbers 32:3, 32:38, 33:47; Deuteronomy 34:1 | Victory over Sihon; Land Allocation to Reuben; Moses’s Death
11. Avel Ha-Shittim | Mourning of/at the Acacias | Numbers 33:49 | Edge of final encampment; Sin of Ba’al Pe’or
12. Beit Ha-Yeshimot | House of the Wastelands | Numbers 33:49 | Edge of final encampment
13. (Mul) Beit Pe’or | (Opposite the) House of Pe’or | Deuteronomy 3:29, 4:46, 34:6; Joshua 13:20 | War against Sihon; Moses’s Burial
14. Ha-Gai | The Valley | Deuteronomy 3:29, 4:46, 34:6 | War against Sihon; Moses’s Burial
15. (Ashdot/Ashdat) Ha-Pisgah | Watershed of the Cliff/Peak | Deuteronomy 3:17, 4:49, 33:2 | Land allocation to Reuben, Gad & Manasseh; War against Sihon
These names hardly exhaust the vast array of Transjordanian place names that lend meaning to this location. As noted, the larger context is that of the Plains of Moab, on the eastern banks of the Jordan, situated opposite Jericho (see, e.g., Numbers 26:3, 26:63, 35:1, 36:13). Additional context is provided by Gilead, which is both Israel’s name for the Transjordan generally (e.g., 32:1), going back to Genesis 31, and the name for the particular clan of Manasseh that was given a share in this territory (see Numbers 26:21, 32:40). In addition to these regional names, various additional names seem to refer to smaller locations/camps whose significance is more obscure to us but which had apparent significance for Israel.[7] And finally, some names in the vicinity are associated with Sihon’s Amorite kingdom, for which this camp is the southwestern border,[8] and which were subsequently taken for settlement by the tribes of Reuben, Gad, and (half of) Manasseh. A reference to this area of conquest in turn reminds the reader of the conquest of King ‘Og’s northern Bashan area (up to Mt. Hermon, in today’s Golan), which occurred after the conquest of Sihon, and which forms the other main region of the conquered Transjordanian territory.
This larger context helps reinforce the first important implication of our table of names that are associated with the specific vicinity of Israel’s encampment. In particular, we see that Moses was buried in a location that is saturated with meaning. Think of the various objects, persons, or places in our own lives to which we affix multiple names. This generally reflects the fact that we have extensive interaction with that someone or something and that we experience them in multiple, distinct ways. It seems that Israel had such a relationship with this encampment. It is remarkable that this place had come to mean so much to Israel over a mere half a year. But it is perhaps no surprise when we realize how many momentous events had occurred here and how they relate to one another.
Saturated with Historical Context and Communal Knowledge
Beyond the general observation that Moses’s burial place was saturated with meaning, we can also begin to break down the meanings of these various names in three distinct ways. First, a high proportion are “left over” from the conquered peoples, and many are even associated with pagan worship. While we might have expected Israel to eliminate such names, in fact this pattern is quite familiar from recent historical episodes involving conquering peoples. For example, Israelis continue to use the Arab names for conquered Arab neighborhoods in Jerusalem (e.g., Talbiyeh, Katamon, Baq’a) and Americans use the Mexican names for conquered Mexican mission towns (e.g., San Diego, Los Angeles) as well as many names that honor the pagan forebears supplanted by Christian settlers (e.g., Massachusetts, Kentucky).
Relatedly, the Torah is quite interested in relating the history of the predecessor peoples of this region (see also Genesis 36; Judges 11:12-28). In particular, it situates Israel’s conquest of the mighty Transjordanian Amorite kingdoms as justified and appropriate in the context of the political history of the region and of Israel’s family loyalty to Edom, Moab, and ‘Amon, who turn out to enjoy divine land grants similar to Israel’s (see Numbers 21:27-30; Deuteronomy 2:9-12, 2:19-23, 3:9-11). The implication is that while Israel’s recent experience in this location is only six months old, it is in fact deeply embedded in the history of the region. In fact, whereas we might otherwise be tempted to view Israel’s conquest of this area as a major historical turning point and perhaps to credit Moses with effecting radical change, this historical context makes the conquest seem like part of a much larger historical process guided by God from afar.
A second theme in these place names is also discernible. In particular, it is notable how many of the new Hebrew place names are merely descriptions of features of the natural landscape with a definite article attached: “the valley;” “the cliff/summit”; “(to) the well;” and “the acacias.” As discussed above in the case of “the valley,” these names are puzzling because they seem at first glance to have an ambiguous referent: Which valley? Which grove of acacias?[9]
Why would a community have place names that are so vague? Political scientist James Scott’s influential argument about how states/outsiders differ from insiders/local communities in their characteristic approaches to naming helps illuminate these vague names. The key to his theory is that the outsider/state is trying to make the local as “legible” as possible so they can understand and control something unfamiliar and elusive, whereas the local already understands it in their own terms and takes a great deal of contextual information for granted. Scott and his colleagues’ treatment of state vs. local naming practices is especially relevant here. Here is the key motivating example and the heart of the insight:
A contrast between local names for roads and state names for roads will help illustrate the two variants of legibility. There is, for example, a small road joining the towns of Durham and Guilford in the state of Connecticut (USA). Those who live in Durham call this road (among themselves) “Guilford Road,” presumably because it informs the inhabitants of Durham exactly where they’ll get to if they travel it. The same road, at its Guilford terminus, is called the “Durham Road” because it tells the inhabitants of Guilford where the road will lead them. One imagines that at some liminal midpoint, the road hovers between these two identities. Such names work perfectly well; they each encode valuable local knowledge, i.e., perhaps the most important fact one might want to know about a road. That the same road has two names, depending on one’s location, demonstrates the situational, contingent nature of local naming practices. Informal, ‘folk’ naming practices not only produce the anomaly of a road with two or more names; they also produce many different roads with the same name. Thus, the nearby towns of Killingworth, Haddam, Madison, and Meriden each have roads leading to Durham which the inhabitants locally call the “Durham Road.” Now imagine the insuperable problems that this locally-effective folk system would pose to an outsider requiring unambiguous identifications for each road. A state road repair crew, sent to fix potholes on the “Durham Road” would have to ask, “Which Durham Road?” Thus it is no surprise that the road between Durham and Guilford is re-incarnated on all state maps and designations as “Route 77.” Each micro-segment of that route, moreover, is identified by means of telephone pole serial numbers, milestones, and township boundaries. The naming practices of the state require a synoptic view, a standardized scheme of identification generating mutually exclusive and exhaustive designations. And, this system can work to the benefit of state residents: if you have to be rescued on Route 77 by a state-dispatched ambulance team, you will be reassured to know that there is no ambiguity about which road it is that you are bleeding on. (Scott et al., 2002, pp. 4-5)
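Scott’s contrast maps neatly onto a familiar data-structure distinction: folk names behave like ambiguous, context-dependent keys, while the state’s synoptic scheme behaves like unique identifiers. Here is a toy sketch in Python; the town and road names come from the passage above, but the mapping itself is invented purely for illustration:

    # Local ("folk") naming: names depend on where you stand, and collide.
    local_names = {
        ("Killingworth", "Durham Road"): "road K-D",
        ("Haddam", "Durham Road"): "road H-D",
        ("Madison", "Durham Road"): "road M-D",
        ("Durham", "Guilford Road"): "road D-G",  # same physical road...
        ("Guilford", "Durham Road"): "road D-G",  # ...seen from the other end
    }

    # Synoptic ("state") naming: one road, one unambiguous identifier.
    state_names = {"Route 77": "road D-G"}

    # An outsider asking for "the Durham Road" gets four candidate roads...
    candidates = [road for (town, name), road in local_names.items()
                  if name == "Durham Road"]
    print(candidates)  # ['road K-D', 'road H-D', 'road M-D', 'road D-G']

    # ...while "Route 77" resolves to exactly one road.
    print(state_names["Route 77"])  # 'road D-G'

The folk system works perfectly for insiders, who supply the missing context (the town they are standing in); it fails only for the outsider who lacks that context, which is precisely Scott’s point.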
Scott and colleagues go on to generalize this insight in various ways, including the modern state’s drive to impose “synoptic” markers of individual identity on its citizens and residents, providing the basis for public health campaigns that are challenged by local communities to this very day. Scott et al. also discuss an example of this from the history of Ashkenazi Jews: the imposition of surnames on Ashkenazi Jews by 19th century states, an imposition that did no small amount of violence to us and with which we have never been fully comfortable. And of course, Ashkenazi Jews made sure to keep our existing naming system for internal, communal purposes, though it is difficult for outsiders to parse. This is no small act of resistance, as an oft-quoted midrash reminds us.[10]
What are the implications for the vague place-names in our table? In short, it seems that whereas Israel had only been in this location for six months, they had already assumed a way of discussing the location as if they were long-term insiders resistant to outside “legibility.” Everyone knew which valley was “the valley, opposite the House of Pe’or,” even if we as outsiders to that time and place struggle to make sense of their nomenclature. The same goes for the acacia grove, and for the well, and for the cliff/summit. We may be unsure of these locations, but their very obscurity to us reflects how well-known they were to them.
In addition, and crucially, the insider knowledge behind these names was communal knowledge. The Song of the Well marks the first time that such a name (Be’erah) is mentioned and it comes from a song that is produced by “Israel” rather than by “Moses and Israel,” as was the case for the Song of the Sea (compare Numbers 21:17 with Exodus 15:1). Similarly, whereas the patriarchs in Genesis often changed place names based on the experiences (of God) they had there, Moses does not do that here. To be sure, some of the recent conquests (Havot Ya’ir, Novah; Numbers 32:41-2; Deuteronomy 3:14) are named for their Israelite conquerors. But that is not the case for the set of common-noun names used in their main encampment. These are as fully communal as Guilford Road is to the people of Durham, Connecticut. They transcend Moses as they do any individual in Israel.
Association with Leadership Failure
We have noted to this point how prior to it becoming the location of Moses’s burial, the location of “the valley, opposite the House of Pe’or” was already saturated with a multiplicity of associations that would have cast a large shadow over any attempt to associate the location with one person. That these associations were so different– some deriving from the larger political history of the region and some from Israel’s idiosyncratic communal experience– would have made that shadow particularly hard to cast off.
And now let us turn to a final implication: insofar as this location is associated with Moses’s leadership, it is associated with a significant amount of failure. To be clear, this is the site of Moses’s valedictory addresses, as compiled in the book of Deuteronomy. And it is the site of the staging area for “Moses and Israel’s” victory over Sihon and ‘Og (4:46), not to mention Israel’s military expedition to vanquish the five Midianite kings. But as the various stories listed in the final column of the table remind us, it is also associated with significant failures– the Sin of Ba’al Pe’or in particular.
A midrash[11] makes this point quite clearly and forcefully in explaining why Moses’s burial place was unknown:
At the end of forty years … “they had camped by the Jordan from Beit Ha-Yeshimot as far as Avel Ha-Shittim on the Plains of Moab” (Numbers 33:49), and there they became lawless through unchastity. And they weakened Moses and the righteous who were with him, and they were crying. See that [Moses] had [previously] stood up to six hundred thousand [men with the golden calf, as stated] (Exodus 32:20), “And he took the calf that they had made.” And [now] he weakened? It was simply so that Phinehas would come and receive his due. Moreover, because [Moses] had been indolent [in the execution of justice], “no one knows his burial place” (Deuteronomy 34:6). [This fact serves] to teach you that one must be bold as a leopard, light as an eagle, swift like a gazelle, and strong as a lion to do the will of his Creator. From here you learn that [the Creator] is as meticulous with the righteous as a thread of hair.
Quite reasonably, this midrash sees the Sin of Ba’al Pe’or as Moses’s great failure (if only because he is held to unusually high standards). After all, God had explicitly instructed Moses to take action against the ringleaders (Numbers 25:4) and he apparently could not bring himself to do so, thereby failing to prevent a plague that killed 24,000 souls. Moreover, since Moses was the leader, it is hard to hold him blameless for Israel’s descent in the first place.
The inclination to see a larger failure of leadership gains support when we review the set of events that are referenced in the table and how they are interconnected.[12] Consider for example Reuben and Gad’s request to settle the Transjordan. On the one hand, Moses seems to overreact to this request (saying they are free-loaders who are adopting the approach of “evil men,” even though they apparently had planned to be the military vanguard in conquering Canaan; Numbers 32:14). But on the other hand, Moses may have good reason to be suspicious and insecure (as Israel was even after the tribes did what they promised; Joshua 22). In particular, it is notable that a) in making their request, Reuben and Gad seem to have abandoned Simeon, their erstwhile partner in the tribal encampments, perhaps due to Simeon’s strong association with the Sin of Ba’al Pe’or; b) Reuben had previously been associated with rebellion against Moses’s leadership (in the affair of Korah); and c) Reuben’s and (especially) Simeon’s populations had dropped precipitously over the course of the years in the wilderness (presumably due to the plague of Ba’al Pe’or and the incident leading up to the plague stopped by the Copper Serpent). Moses’s reaction seems to reflect major concerns with a significant part of Israel’s body politic. But however much these issues can be attributed to the leaders of Reuben and Simeon, they also reflect poorly on Moses himself as the preeminent leader.
With this context in mind, it is all the more striking that the location of Moses’s final resting place references the House of Pe’or.[13] Yet given how recent this calamity was and how inseparable it was from so much of Israel’s recent and momentous history, there could be no escaping its legacy. As the leaders of the ten other tribes (minus Levi) assert even after completing their conquest of Canaan, “the Sin of Pe’or– we are yet to be purified of it” (Joshua 22:17). Moreover, the legacy would have been physical as well. Moses, after all, was not the only person buried at this spot: so were the 24,000 freshly-buried victims of a divine plague that was the largest stain on Moses’s great record of leadership. Indeed, the place had already been named as a mourning site for those dead, and they would necessarily loom large over any additional body that was then added. How then could one build a shrine there that celebrated Moses– let alone one that deified him?
Conclusion: Great Leader but not a God
Following the lead of the midrash just quoted, let us be clear that the Torah in no way means to present Moses as a failed leader. To the contrary. As the climax of Deuteronomy states, Moses was and will ever remain Israel’s greatest prophet and leader. And as the thirty-day mourning period described for Moses implies (intriguingly, he is the only leader whose mourning period is given a name: “the Weeping Days of Moses’s Mourning”; Deuteronomy 34:8), the people must have felt a tremendous love and gratitude for him. Moreover, that the leadership transition to Joshua was as smooth as it was speaks eloquently of how well Moses had led Israel and prepared them for their future.[14] Key to that successful transition was how God and Moses apparently worked together to neutralize the serious threat that Moses would be deified: As discussed at the outset of this essay, Moses understood the risk that he might be treated as a god; and as we have seen, God addressed this risk by burying Moses in a spot that was especially ill-suited to becoming a shrine to Moses.
Following Ralbag, we have seen that a key mechanism for countering that threat was by preventing Moses’s burial place from becoming a shrine. But we have also gained a deeper appreciation for what it means that this place “would never be known as Moses’s burial place.” Paradoxically, this strategy was not about hiding the body so that it was physically beyond reach. To the contrary, they had more than enough information such that if all else were equal, the location would surely have become a shrine.
The actual strategy was effectively the opposite of hiding the body. It involved embedding the location of burial deep in the webs of meaning associated with Israel’s experience, larger divinely-guided historical forces, and Moses’s leadership challenges. These webs of meaning were so complex and dense that no human being’s legacy could have transcended them. The general logic here should be familiar once we consider our general practice of keeping burial sites away from locations that are either full of ongoing life or have broader cultural significance. The reason for this practice is that the various associations thereby evoked would interfere with our ability to appreciate and revere the deceased, whose life would otherwise seem short and insignificant. And if interference is generally a problem, how much more is it true when the location is associated with notable failures of the deceased?
Yet if we are left without the ability to build a shrine for Moses, we are also left with the most fitting tribute he could ever want: we are his people, guided by his teachings. This will necessarily be true as long as the “Torah of Moses” is taught and lived. And core to this Torah is the idea that Moses was a human being just like the rest of us and that his greatness came from being chosen by God because of his great and actionable empathy for fellow human beings subject to unjust treatment, and from how at age 80, he began a personal metamorphosis from being a retiring herdsman who resisted the mantle of leadership by insisting he wasn’t an ish devarim (Exodus 4:10; “man of words”) to becoming the quintessential ish Devarim (man of Deuteronomy).
[1] The following interpretation of Deuteronomy 4 is not one I’d seen elsewhere but I recently was pleased to learn that R. Menachem Leibtag has developed a closely related approach. Thanks to R. Leibtag for input on this essay.
After the original publication of this essay, R. Elchanan Adler helpfully noted that the Meshech Chochma (ad loc) offers a similar approach.
[2] Notably, this rundown is a reverse-order recital of the movable bodies created by God on days four through six of creation (Genesis 1:14-27). This follows the first three days, which are devoted to creating the immobile conditions for such bodies to thrive.
[3] Fascinatingly, he also seems to compare the people’s immortality to that of an idol, one that is forged in a blast furnace and will always retain its original form. Here he seems to acknowledge that this is the source of an idol’s appeal, and that this is at the core of Israel’s appeal to God: that it would be immortal like Him (and unlike Moses). But another key aspect of the relationship is distinct from the relationship between a human being and their idol: Israel was capable of– and likely to– fail to keep up its side of the covenant. Thus while the relationship would be eternal, it would be dynamic, with ups and downs.
[4] The Talmud (Yoma 75b; Eruvin 55b) notes that the encampment was three parasangs across, which thus provides the basis for defining the size of the tehum shabbat (the halakhic area within which it is permissible to walk on Shabbat).
[5] Avel Ha-Shittim is often translated as “Meadow of the Acacias” rather than “Mourning-place of the Acacias,” as I am proposing. The basis for my proposed translation is not only that eivel means mourning (whereas the link between avel and “meadow” is unclear), but also that the only other place name in the five books of the Torah that similarly contains the descriptive Avel is Avel Mitzrayim (Genesis 50:11), where the location is associated with mourning. See also Kli Yakar (on Numbers 33:49), who associates Avel Ha-Shittim with the mourning for Aaron, and Bamidbar Rabbah (20:24), which sees Avel Ha-Shittim as the same place as Shittim.
[6] This translation is based on Hirsch, ad loc.
[7] E.g., Matanah and Nahaliel (21:19) as well as Divon-Gad and ‘Almon-Divlataimah (33:45-46)
[8] E.g., Yahtzah (Numbers 21:23) and Ba’al Meon (32:38)
[9] Accordingly, they have long posed a problem for translators, some of whom translate them as if they are proper nouns (“Pisgah”) and some of whom translate them as common nouns (“the cliff”). The latter approach seems appropriate insofar as these place names seem informal and do not last beyond Deuteronomy (but see Micah 6:5). The former approach seems appropriate insofar as they seem to function as place names. Thanks to Dr. Rivka Press Schwartz for alerting me to this issue and for prompting the analysis that follows.
[10] See
[11] Bamidbar Rabbah 20:24.
[12] Space constraints prevent a full consideration of these interlinkages. One intriguing thread mentioned by various commentators is the link between the well of Be’erah with the Waters of Strife and thus with the transgression of Moses that is the basis for God’s decision to bar him from entering Canaan (Numbers 27:14; Deuteronomy 32:15). In particular, Rashi notes (Numbers 21:20, ad loc.) that Moses is not credited in the Song of the Well as a punishment. A key basis for this connection between Be’erah and the Waters of Strife is that in both cases God is described as commanding Moses to gather the people so God can give them water (compare Numbers 21:16 with 20:8). Thanks to R. Alex Maged for emphasizing this connection to me.
[13] R. Menachem Leibtag helpfully points out that God never uses this (or any other name with problematic associations) in referring to Moses’s final resting place, thereby seeming to spare Moses the pain associated with Ba’al Pe’or. It is instead referred to as Rosh Ha-Pisgah, Nevo, or Har Ha-Avarim.
[14] Thanks to R. Menachem Leibtag for this point.
Ezra Zuckerman Sivan, an economic sociologist, is the Alvin J. Siteman Professor of Entrepreneurship and Strategy at the MIT Sloan School of Management, where he currently serves as associate dean for teaching and learning. Among his current research projects is a book on the emergence of the seven-day week. Ezra welcomes feedback at and he tweets at @ewzucker.
|
Foot Heel Pain Treatment
What Is Plantar Fasciitis or Heel Spur Syndrome?
Heel pain is usually present during the first few steps in the morning and tends to ease until the next time the patient rests, sits, or drives and then tries to stand. Many patients describe this pain as "a hot knife being pushed into the bottom of my heel." If left untreated, the pain can become constant, even hurting at rest. For some patients, the pain is so severe that they can no longer participate in certain normal activities such as work and leisure activities. Achilles tendonitis hurts at the back of the heel; it can hurt during the first steps in the morning, worsen as the day progresses, and flare during walking or playing sports. Quite often, Achilles tendonitis and plantar fasciitis occur together.
What causes it?
Plantar fasciitis (heel pain) is caused by a mechanical imbalance in the foot called over-pronation, which causes the foot to roll in towards the arch and big toe joint. There is a very strong, fibrous band on the bottom of the foot called the plantar fascia. The plantar fascia inserts into the heel bone and then spreads out and joins the toes. When the foot rolls in (over-pronates), the band is forced to stretch, but it cannot; the fascia therefore pulls at its insertion at the heel bone, causing a small tear and swelling, hence pain. Over time, as the fascia continually pulls, it pulls away at the bone, causing a heel spur. The size or presence of a heel spur does not always correlate with the amount of pain. One can have a heel spur with the absence of pain and vice-versa!
Radial Soundwave Therapy (RST)
Radial Soundwave Therapy (RST) is one of the best solutions for those who suffer from heel pain. Often called heel spur syndrome, this painful condition can now be treated quickly and effectively. Radial Soundwave Therapy was introduced to Canada in 2002 by Podiatrist Hartley Miltchin. Previously, he had introduced a surgical procedure to Canada in 1993 called Endoscopic Plantar Fasciotomy (EPF), using a micro camera to treat plantar fasciitis. The RST procedure is utilized to correct chronic heel pain, which afflicts men and women equally. This treatment is non-surgical, non-invasive, involves no time off your feet, and best of all, no anesthetic is required.
Radial Soundwave Therapy (RST) combines air pressure and electronics to produce acoustic (sound) energy with a computerized, controlled frequency. This newer treatment requires no anesthetic and involves three 3-minute treatments over a 3- to 4-week period. Some patients feel relief after the very first treatment! This treatment is non-surgical and does not produce any shockwaves.
If you are looking for heel pain treatment in Toronto, our team is committed to providing innovative foot pain care. Please contact us today.
Call us, let's talk!
Set up your consultation today.
4430 Bathurst Street, Suite 503, Toronto, Ontario, M3H 3S3
(416) 635-8637 | 1-866-535-8637 (TOLL FREE)
|
Surgical Ablation for Atrial Fibrillation
The heart maintains a normal rhythm through its own electrical system; atrial fibrillation is the most common type of abnormal heart rhythm. With atrial fibrillation, electrical impulses don’t follow a normal pathway through the heart. As a result, the heart beats erratically and hence doesn’t pump blood properly. When associated with heart valve problems, this becomes a serious condition that can lead to blood clots, stroke, heart attack, heart failure, chronic fatigue, and even death.
Atrial fibrillation is common in patients who have undergone open heart surgery and heart valve surgery, and it is difficult to cure. People with valve disease often go on to develop atrial fibrillation, or even a weakened heart. Traditional heart valve surgery can help prevent further heart damage, but valve repair combined with surgical ablation can also correct the atrial fibrillation. Presently, electrophysiology is being used to treat atrial fibrillation, but repeat treatment is usually necessary. Attempts at curing atrial fibrillation with drugs are often disappointing.
Surgical ablation involves the use of radiofrequency waves (modified electrical energy) to create precise scar lines on the heart’s surface. These scars redirect the erratic electrical impulses of atrial fibrillation to follow a normal electrical pathway through the heart. Surgical ablation can be performed in conjunction with other heart surgeries, such as mitral valve repair or coronary artery bypass, but is also sometimes performed as a stand-alone procedure for patients with atrial fibrillation.
Surgical ablation for atrial fibrillation offers the following benefits to patients:
• 75 to 90 percent cure rate of atrial fibrillation.
• Reduction in risk of blood clots and stroke.
• Fewer or no symptoms related to abnormal heart rhythms.
• Reduction or discontinuation of blood thinners.
• Reduction or discontinuation of anti-arrhythmic drugs.
• Most patients who have had the procedure report an ability to exercise more frequently and for longer periods of time.
• In some cases, the procedure will reduce the size of the atria, thereby lessening the risk of other complications, such as heart failure.
|
Furthest Right
Deciphering The Egyptian DNA Puzzle
It is great when science confirms traditional wisdom. The net buzzes with discussion of a genetic study of ancient Egyptian mummies, following up on the knowledge that parallel human evolution occurred in Europe (see contrarianism). But most have missed the point.
From the abstract:
The researchers discovered that ancient Egyptians closely resembled ancient and modern Near Eastern populations, especially those in the Levant, and had almost no DNA from sub-Saharan Africa. What’s more, the genetics of the mummies remained remarkably consistent even as different powers—including Nubians, Greeks, and Romans—conquered the empire.
Here we have an exercise in “hide the ball.” What is not being mentioned?
1. Ancient Egyptians resembled modern Levantines. But who do modern Egyptians resemble?
2. The mummies remained consistent despite occupations, but what about Egyptians as a whole?
We are — cleverly, so very cleverly — ducking the question of population change in Egypt. As a child, you too may have wondered why the Egyptians once built great monuments but now seem barely able to build a two-story house, and are known mainly for hopeless invasions of Israel and cuisine. What happened?
Luckily, National Geographic is willing to tell us something about modern Egyptian heritage:
This reference population is based on native Egyptians. As ancient populations first migrated from Africa, they passed first through northeast Africa to southwest Asia. The Northern Africa and Arabian components in Egypt are representative of that ancient migratory route, as well as later migrations from the Fertile Crescent back into Africa with the spread of agriculture over the past 10,000 years, and migrations in the seventh century with the spread of Islam from the Arabian Peninsula. The East African component likely reflects localized movement up the navigable Nile River, while the Southern Europe and Asia Minor components reflect the geographic and historical role of Egypt as a historical player in the economic and cultural growth across the Mediterranean region.
Modern Egyptians are 68% North African, 17% Arab, 4% Jewish, and 3% each from Asia Minor, East Africa and Southern Europe. In other words, while the mummies remained consistent, the population has not, which may explain why modern Egyptians do not do the things the ancient ones did.
Each population has a genetic profile based on centuries of adaptation, and these genes convey abilities and inclinations known as traits. By themselves, traits are rarely complete in the form we think of them, but when a profile is complete, different traits complement each other and produce the abilities, preferences and intuitive knowledge that we see in each population. Just as there is no single gene for intelligence, it takes many genes — like a net — to produce the effects we recognize as distinct to a population. When the genetic profile is disturbed by admixture, even trace admixture, then those abilities are lost.
In Egypt, we see a warning. Traditional wisdom was that as Egypt rose in power and wealth, people came from all over to be part of this civilization, and gradually replaced the original Egyptians with a group whose genetic net was disrupted and so lacked the abilities of the original. Originally it was thought that gradual absorption of Nubians shattered the Egyptian bloodline.
It turns out that the picture is more complex and delivers a more dire warning for us. The question is not what you mix with, but that you mix at all. Even trace admixed groups like Southern and Eastern Europeans, when mixed into another European group, can erase its genetic net and replace it with generic people lacking the original abilities.
|
It’s important for dental professionals to remain educated on the latest technology and best practices. As new discoveries and advancements are made, there is new information that you will need to learn and implement in your dental practice. This is one of the best ways to continue providing high-quality care to your patients.
In order to begin using certain tools and methods, you may need to become certified in using them. It’s in the best interest of both you and your patients that you are well-trained and knowledgeable on the subject.
The American Society of Cosmetic Dentistry is offering a comprehensive course on the Rubber Dam. This course will properly equip you with the knowledge you need to properly use the Rubber Dam.
What is the Rubber Dam?
The Rubber Dam is a thin square sheet used to isolate the procedure site from the rest of the mouth. This helps to prevent the spread of bacteria throughout the mouth. If you are working on an infected tooth, you don’t want the infection to spread to other areas of the mouth.
What’s Included in the Course?
After the course, you will be able to answer the following:
• What are the benefits of using a Rubber Dam?
• How can a dental professional use a Rubber Dam?
• How can a Rubber Dam be used in different positions in the mouth?
In the course, you will also learn about both posterior and anterior recipes and protocol. This will help you to be prepared for any situation that calls for the use of a Rubber Dam.
After the course, you will walk away understanding how to correctly use the Rubber Dam in your practice. You will also be able to educate your patients on the use of a Rubber Dam and why it’s important in preventing the spread of bacteria throughout the mouth.
If you are interested in learning more about the Rubber Dam so you can safely and effectively use it in your practice, you can sign up here. Feel free to contact us if you have additional questions about the course or if you experience any technical errors while signing up.
|
Biography of John Forbes Nash Jr.
John Forbes Nash Jr. – American mathematician.
Name: John Forbes Nash Jr.
Date of Birth: June 13, 1928
Place of Birth: Bluefield, West Virginia, U.S.
Date of Death: May 23, 2015 (aged 86)
Place of Death: Monroe Township, Middlesex County, New Jersey, U.S.
Occupation: Mathematician
Father: John Forbes Nash
Mother: Margaret Virginia Martin
Spouse/Ex: Alicia Lardé Lopez-Harrison (m. 1957)
Children: John Charles Martin Nash, John Stier
Early Life
An American mathematician who was awarded the 1994 Nobel Prize for Economics for his landmark work, first begun in the 1950s, on the mathematics of game theory, John Forbes Nash, Jr. was born on June 13, 1928, in Bluefield, West Virginia, U.S. He did significant work in many branches of mathematics, and his theories have proven to be very useful in numerous fields such as market economics, computing, biology, artificial intelligence, politics, and accounting. He shared the prize with John C. Harsanyi and Reinhard Selten. In 2015 Nash won (with Louis Nirenberg) the Abel Prize for his contributions to the study of partial differential equations.
Having graduated from esteemed educational establishments like the ‘Carnegie Institute of Technology’ and ‘Princeton University’, he revolutionized the field of equilibrium theory. He is famous for his works on ‘Game Theory’, partial differential equations, and algebraic geometry. Not only is this mathematician’s work important in his field of study, but it is also used in a wide range of subjects like artificial intelligence, politics, economics, accounting, and even biology. Application of his ‘Game Theory’ is essential for arriving at decisions that benefit an organization and its people. Since the validity of this field of study was established, eleven game theorists have been awarded the ‘Nobel Prize’. Though glorified by his biographer, Sylvia Nasar, and by Hollywood, his life was controversial: he was charged with indecent conduct and was allegedly not a very able husband and father. However, it is this talented mathematician’s fight against schizophrenia, and the stigma associated with the condition, that has made him the epitome of brilliance according to many across the world.
John Nash is the only person to be awarded both the Nobel Memorial Prize in Economic Sciences and the Abel Prize. In 1959, Nash began showing clear signs of mental illness and spent several years at psychiatric hospitals being treated for paranoid schizophrenia. After 1970, his condition slowly improved, allowing him to return to academic work by the mid-1980s. His struggles with his illness and his recovery became the basis for Sylvia Nasar’s biography, A Beautiful Mind, as well as a film of the same name starring Russell Crowe as Nash.
Childhood, Family and Educational Life
John Nash, in full John Forbes Nash, Jr., was born in the town of Bluefield, West Virginia, the U.S. on June 13, 1928. His father, John Forbes Nash, was an electrical engineer for the Appalachian Electric Power Company. His mother, Margaret Virginia (née Martin) Nash, had been a schoolteacher before she was married. He was baptized in the Episcopal Church. He had a younger sister, Martha (born November 16, 1930), who was born approximately two and a half years later.
John Nash attended kindergarten and public school, and he learned from books provided by his parents and grandparents. Nash’s parents pursued opportunities to supplement their son’s education and arranged for him to take advanced mathematics courses at a local community college during his final year of high school. He attended the Carnegie Institute of Technology (which later became Carnegie Mellon University) on a full George Westinghouse Scholarship, initially majoring in chemical engineering. Nash switched to a chemistry major and eventually, on the advice of his teacher John Lighton Synge, to mathematics. After graduating in 1948 (at age 19) with both a B.S. and M.S. in mathematics, Nash accepted a scholarship to Princeton University, where he pursued further graduate studies in mathematics.
John Nash earned a ‘John S. Kennedy scholarship’ to the prestigious ‘Princeton University’, where he specialized in the equilibrium theory in mathematics. In 1950, the young man was able to graduate with a doctorate degree for his research on ‘non-cooperative games’. During this time, he published dissertation papers like ‘Equilibrium Points in N-person Games’, ‘The Bargaining Problem’, and ‘Non-cooperative Games’.
In 1951 he joined the faculty of the Massachusetts Institute of Technology (MIT), where he pursued research into partial differential equations. He resigned in the late 1950s after bouts of mental illness. He then began an informal association with Princeton, where he became a senior research mathematician in 1995.
This American mathematician had a choice between ‘Harvard University’ and ‘Princeton University’ to pursue his higher education, but he chose the latter because they offered him a scholarship. This convinced him that ‘Princeton University’ thought he had potential and valued him more.
Personal Life
In 1952, John Forbes Nash, Jr. began a relationship in Massachusetts with Eleanor Stier, a nurse he met while admitted as a patient. They had a son, John David Stier, but Nash left Stier when she told him of her pregnancy. The film based on Nash’s life, A Beautiful Mind, was criticized during the run-up to the 2002 Oscars for omitting this aspect of his life. He was said to have abandoned her based on her social status, which he considered beneath him. Two years later, in 1954, he was arrested in California for homosexual encounters in a public toilet. He was released from prison soon after, but the exceptional mathematician lost his job at the ‘RAND Corporation’.
In February 1957, John Nash got married to a physics graduate from ‘MIT’, Alicia Lopez-Harrison de Lardé, according to Roman Catholic customs and the couple had a son, John Charles Martin. Soon John started developing symptoms of mental illness, and authorities at the ‘McLean Hospital’ diagnosed him with schizophrenia. Nash was later institutionalized at the ‘New Jersey State Hospital’, and from then on was treated regularly for the disease. Due to the stress of dealing with his illness, Nash and Lardé divorced in 1963.
John Nash’s schizophrenia started sometime during 1957. His wife described his behavior as unpredictable. He would talk about characters that didn’t exist and feared that all men with red ties were part of a conspiracy against him. He went as far as writing letters to embassies in Washington, D.C., claiming that those men were forming a government. Nash was diagnosed with paranoid schizophrenia by the doctors of McLean Hospital in May 1959. Consequently, Nash resigned from his post at MIT, withdrew his pension, and went to Europe. After a disturbing chain of events, including being arrested by the police in Paris, he was sent back to the United States. He was treated at many hospitals, where he received antipsychotic medication.
After his final hospital discharge in 1970, John Nash lived in Lardé’s house as a boarder. This stability seemed to help him, and he learned how to consciously discard his paranoid delusions. He stopped taking psychiatric medication and was allowed by Princeton to audit classes. He continued to work on mathematics and eventually was allowed to teach again. In the 1990s, Lardé and Nash resumed their relationship, remarrying in 2001.
The 1998 biography of this great mathematician, ‘A Beautiful Mind’ was penned by Sylvia Nasar. Three years later, the book became the basis for filmmaker Ron Howard’s movie, having the same title. The movie, ‘A Beautiful Mind’ starred American actor Russell Crowe as Nash, and won several accolades, including the ‘Academy Award for Best Picture’.
Career and Works
While he was still in graduate school, John Forbes Nash, Jr. published (April 1950) his first paper, “The Bargaining Problem,” in the journal Econometrica. He expanded on his mathematical model for bargaining in his influential doctoral thesis, “Non-Cooperative Games,” which appeared in September 1951 in the journal Annals of Mathematics. Nash thus established the mathematical principles of game theory, a branch of mathematics that examines the rivalries between competitors with mixed interests. Known as the Nash solution or the Nash equilibrium, his theory attempted to explain the dynamics of threat and action between competitors. Despite its practical limitations, the Nash solution was widely applied by business strategists.
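To make the equilibrium idea concrete, here is a minimal sketch (illustrative only, not drawn from Nash’s papers) that finds the pure-strategy Nash equilibrium of a classic two-player Prisoner’s Dilemma: a cell of the payoff matrix is an equilibrium exactly when neither player can gain by deviating unilaterally. The payoff numbers are standard textbook values, not figures from Nash’s own work:

    # Payoffs (row player, column player); strategies: 0 = cooperate, 1 = defect.
    payoffs = {
        (0, 0): (3, 3),
        (0, 1): (0, 5),
        (1, 0): (5, 0),
        (1, 1): (1, 1),
    }

    def is_nash(r, c):
        # A Nash equilibrium: no player gains by changing strategy alone.
        row_ok = all(payoffs[(r, c)][0] >= payoffs[(alt, c)][0] for alt in (0, 1))
        col_ok = all(payoffs[(r, c)][1] >= payoffs[(r, alt)][1] for alt in (0, 1))
        return row_ok and col_ok

    equilibria = [cell for cell in payoffs if is_nash(*cell)]
    print(equilibria)  # [(1, 1)] -- mutual defection is the unique equilibrium

Note that mutual cooperation pays both players more, yet it is not an equilibrium; this gap between the collectively best outcome and the stable one is exactly the dynamic of threat and action that the Nash solution captures.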
In 1952, John Nash published his work on other areas of mathematics in the paper ‘Real algebraic manifolds’. The following year, the thesis paper ‘Two-person Cooperative Games’, based on his research conducted at ‘Princeton University’, was also published. While working on a problem related to German mathematician David Hilbert’s ‘elliptic partial differential equations’, John became acquainted with the Italian mathematician Ennio de Giorgi in 1956. Nash and de Giorgi each formulated a proof of the problem within a few months of the other, and thus both missed out on the ‘Fields Medal’.
Although John Nash’s mental illness first began to manifest in the form of paranoia, his wife later described his behavior as erratic. Nash seemed to believe that all men who wore red ties were part of a communist conspiracy against him. He mailed letters to embassies in Washington, D.C., declaring that they were establishing a government. Nash’s psychological issues crossed into his professional life when he gave an American Mathematical Society lecture at Columbia University in 1959. Originally intended to present proof of the Riemann hypothesis, the lecture was incomprehensible. Colleagues in the audience immediately realized that something was wrong. He was admitted to McLean Hospital in April 1959, staying through May of the same year. There, he was diagnosed with paranoid schizophrenia.
In 1958, John Nash began working as a lecturer on a probationary term, at the ‘MIT’. By the following year, his work started getting hampered owing to symptoms of mental illness, which became quite evident after his incoherent speech at the ‘American Mathematical Society’ of ‘Columbia University’.
In 1961, Nash was admitted to the New Jersey State Hospital at Trenton. Over the next nine years, he spent periods in psychiatric hospitals, where he received both antipsychotic medications and insulin shock therapy.
After a long period of hospitalization, John Nash was able to resume his work in 1970, the year he refused any further treatment for his schizophrenia. Within the next ten years, he overcame his regular hallucinations and was able to concentrate completely on academic research. Towards the end of his career, he worked at ‘Princeton University’ as a Senior Research Mathematician.
Nash’s research into differential equations at MIT led to his seminal paper “Real Algebraic Manifolds,” which was published in Annals of Mathematics in November 1952. His other influential work in mathematics included the Nash-Moser inverse function theorem, the Nash–De Giorgi theorem (a solution to David Hilbert’s 19th problem, which Nash undertook at the suggestion of Nirenberg), and the Nash embedding (or imbedding) theorems, which the Norwegian Academy of Science and Letters described as “among the most original results in geometric analysis of the twentieth century”; the academy awarded Nash the Abel Prize. His other honors included the John von Neumann Theory Prize (1978) and the American Mathematical Society’s Leroy P. Steele Prize for a Seminal Contribution to Research (1999).
In 2005, John Nash delivered a speech at the ‘Warwick Economics Summit’, hosted by the ‘University of Warwick’. In 2006, Nash also attended a conference at Cologne, one of Germany’s largest cities, where he spoke about strategic decision making using his ‘Game Theory’. In his later years, Nash conducted extensive studies in the field of game theory and partial differential equations.
Awards and Honor
This ingenious mathematician was awarded the 1978 ‘John von Neumann Theory Prize’ for pioneering the ‘non-cooperative equilibrium’, which has since been named after him as the ‘Nash equilibrium’.
In 1994, this accomplished mathematician received the ‘Nobel Prize’, in the field of ‘Economic Sciences’, for his work on ‘Game Theory’. John Nash shared the award with German Economist, Reinhard Selten, and Hungarian-American scholar, John Harsanyi.
The ‘Leroy P. Steele Prize’ was awarded to John Nash in the year 1999, for his invaluable contribution to the field of mathematics.
In 2010, John Nash received the ‘Double Helix Medal’ by the ‘Cold Spring Harbor Laboratory’, for his fight against schizophrenia.
In 2012, John Forbes Nash Jr. was elected as a fellow of the American Mathematical Society. On May 19, 2015, a few days before his death, Nash, along with Louis Nirenberg, was awarded the 2015 Abel Prize by King Harald V of Norway at a ceremony in Oslo.
Death and Legacy
On May 23, 2015, John Forbes Nash Jr. and his wife died when a taxi they were riding in lost control and crashed on the New Jersey Turnpike. He was 86. He is survived by his son John Charles Martin Nash who lived with his parents at the time of their death.
Following his death, obituaries appeared in scientific and popular media throughout the world. In addition to their obituary for Nash, The New York Times published an article containing quotes from Nash that had been assembled from media and other published sources. The quotes consisted of Nash’s reflections on his life and achievements.
Among all the mathematical research conducted by this genius, the work that brought him fame, and the ‘Nobel Prize’, is his work on ‘Game Theory’. The ‘Game Theory’ has become a significant area of study in the field of economics, and it describes how participants in a game make decisions, individually or collectively, to arrive at outcomes that are stable given one another’s choices.
Nash’s research into game theory and his long struggle with paranoid schizophrenia became well known to the general public because of the Academy Award-winning motion picture A Beautiful Mind (2001), which was based on Sylvia Nasar’s 1998 biography of the same name. A more factually accurate exploration of Nash’s struggle with mental illness was offered by the public television documentary A Brilliant Madness (2002).
John Nash also received an honorary degree, Doctor of Science and Technology, from Carnegie Mellon University in 1999, an honorary degree in economics from the University of Naples Federico II on March 19, 2003, an honorary doctorate in economics from the University of Antwerp in April 2007, an honorary doctorate of science from the City University of Hong Kong on November 8, 2011, and was keynote speaker at a conference on game theory.
Information Source: Wikipedia
|
Those underground ‘lakes’ on Mars just keep getting stranger
In 2018, researchers made a discovery that could change our understanding of the dusty, dry red ball that is Mars.
Radar signals bounced off layers just beneath the planet’s surface revealed a bright patch, consistent with nothing less than an underground pool of liquid water. Subsequent searches turned up even more bright patches, suggesting a whole network of underground lakes.
Remarkable stuff, right? Although Mars has water in the form of ice, to date not a single drop of the liquid stuff has ever been found on our red neighbor.
There’s just one problem. According to a new study, which has found dozens more of these gleaming patches, some of them are in regions that are simply too cold for liquid water, even a brine, which can have a lower freezing temperature than freshwater.
“We’re not certain whether these signals are liquid water or not, but they appear to be much more widespread than what the original paper found,” said planetary researcher Jeffrey Plaut of NASA’s Jet Propulsion Laboratory.
“Either liquid water is common beneath Mars’ south pole, or these signals are indicative of something else.”
The first feature was found at the Martian south pole, under the ice cap, using the Mars Advanced Radar for Subsurface and Ionosphere Sounding (MARSIS) instrument on the Mars Express orbiter.
A subsequent search of archived data revealed three more of these lake-like features. MARSIS uses radar signals to probe beneath the Martian ice cap, which consists of alternating layers of carbon dioxide and water ice.
We know, from using such technology on Earth, which signals are indicative of particular materials.
“Some types of material reflect radar signals better than others, and liquid water is one of those ‘materials’,” planetary scientist Graziella Caparelli of the University of Southern Queensland in Australia told ScienceAlert last year.
“Therefore, when the signals coming from the subsurface are stronger than those reflected by the surface, we can affirm that we are in the presence of liquid water.”
The signals coming from these subsurface patches were, indeed, stronger than the signal coming from the surface itself, but the area in which they were found was relatively small.
So Plaut and planetary scientist Aditya Khuller of Arizona State University expanded the search. They mapped 44,000 measurements across 15 years of MARSIS data to cover the entire Martian south pole.
They found dozens more of the highly reflective patches, spread over a far greater range than previously identified. But some of the new patches lay barely a kilometer or so (less than a mile) beneath the surface, where temperatures are estimated to sit at around 210 Kelvin (-63 degrees Celsius, or -81 degrees Fahrenheit).
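As a quick sanity check on those figures (my own arithmetic, not part of the study), the conversions from Kelvin follow directly from the standard formulas:

    # Convert a temperature from Kelvin to Celsius and Fahrenheit.
    def kelvin_to_celsius(k):
        return k - 273.15

    def celsius_to_fahrenheit(c):
        return c * 9 / 5 + 32

    k = 210.0
    c = kelvin_to_celsius(k)         # -63.15 degrees Celsius
    f = celsius_to_fahrenheit(c)     # -81.67 degrees Fahrenheit
    print(round(c, 2), round(f, 2))  # matches the rounded figures quoted above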
Past research has found that water saturated with salts of calcium and magnesium can stay liquid at temperatures as low as 150 Kelvin for very long stretches of time. We also know that Mars is rich in salts of calcium and magnesium, as well as sodium. However, a 2019 paper found that no amount of salt is sufficient to melt the ice at the base of the Martian south polar layered deposits.
The authors concluded that there would need to be some form of basal heating, perhaps in the form of geothermal activity: volcanism. However, while there is recent evidence of volcanic activity on Mars, it was located at the lower latitudes, not the poles.
“They found that it would take double the estimated Martian geothermal heat flow to keep this water liquid,” Khuller explained.
“One possible way to get this amount of heat is through volcanism. However, we haven’t really seen any strong evidence for recent volcanism at the south pole, so it seems unlikely that volcanic activity would allow subsurface liquid water to be present throughout this region.”
So what exactly are these bright patches? Well, we don’t know. The team believes they are unlikely to be liquid water, but their mapping may help figure it out. We now know, for instance, that whatever is causing them is widespread across the Martian south pole.
What’s more, if the patches do turn out to be liquid water, the work will, the scientists said, help us better understand how it came to be there.
|
The Lyceum Address
The Lyceum Address is Lincoln’s speech to Springfield, Illinois about freedom and its significance. According to him, the relation between citizens and their country is reciprocal. People feel a natural emotional attachment to their country, and this relationship grows stronger and stronger when they receive their full rights, which include freedom of speech, equality, protection by the law, and self-defense. He says it is the duty of the government to take care of its people, and he stresses political and religious freedom. He says that it is the duty of the people in the present and future generations to fulfill all the promises made in the Declaration of Independence, and that by respecting each other’s freedom we will have a good relationship between the country and the people. Lincoln also talks about mob rule. He does not want it taking over, which would have people disregard the laws that keep order in America. He feels that faith in the law will keep America safe from mobs and their control. This would preserve self-rule, with people living according to laws that they themselves choose as a limitation on their freedom. Lincoln insists that a “mobocratic spirit” must not be the ruin of our country. Lincoln feels that faith in the government is like faith in the Lord, calling it the “political religion” (Donald 167), which represents the passion and trust needed to sustain the American government. With the political religion comes political faith, which would have you obeying the good laws and also the bad laws.
Lincoln promised to obey “bad laws” when he was running for president. He promised not to interfere with slavery in the South and promised to uphold the Fugitive Slave Law, even though he knew that they were bad laws. Only when the South seceded did he start to get rid of such laws and change them. Lincoln felt that if you follow the laws set upon you, whether good or bad, the country will work for the better of the people. He felt that faith in law and order would keep America together and organized, without violence such as mob rule.
The Lyceum Address was given in 1838, during the time of the Panic of 1837. With the Panic of 1837, the economy failed; banks failed with it, leading to a high unemployment rate. This panic was made worse by a number of factors: “large debts incurred by states due to over-expansion of canals and the construction of railroads; an unfavorable balance of trade as imports exceeded exports, resulting in a loss of gold and silver to paper currency” (Whitley); and several crop failures in 1835 and 1837. The major cause of the panic, however, was the economic impact of land speculation. It was a period of mania. After the fall of the Bank of the United States, state and wildcat banks grew rapidly during the 1830s. Funds were more easily available, and investors borrowed money at a high pace. Not only the small Western farmer, but merchants, manufacturers, and traders also borrowed heavily. The business community, rather than paying off their debts and refinancing new ventures, anticipated greater returns if they invested their borrowed money in enterprises and investments that, they hoped, would greatly increase in value. The big investments were in the vast amounts of readily available cheap land. Between 1834 and 1836, sales totaled 37 million acres. By 1836, sales were ten times greater than they were in 1830. “Land office business” was the order of the day. President Jackson issued the Specie Circular. This order mandated all land offices to accept only gold and silver, rather than “rag” money, in payment for public lands. Since state banks did not have specie backing, land sales dropped. The mania continued to spread, despite the Federal Government’s attempts to halt it, or at least to curb the holding of large pieces of land. During this time, President Van Buren was a Democrat, while Lincoln was a Whig, and Lincoln was against his policies. Whigs such as Lincoln “wanted an activist government that would promote banks, tariffs, and internal improvements” (Wilson 96). Lincoln was not pleased with Van Buren because Van Buren wanted a limited government and laissez-faire economics. Van Buren felt that the people should self-govern, whereas Lincoln felt that the country should follow what the founding fathers had set down in the Constitution, and that the founders, unlike Van Buren, were not in it for fame but to better the country. Lincoln still felt there were dangers coming in the future that could threaten the American system. He felt that the passion in the people and in the country had helped us but could do no more. With these ideas Lincoln wanted to adopt the “political religion.” Lincoln ran for the Illinois legislature in 1832, though he was unsuccessful. Two years later he was elected to the lower house for the first of four successive terms (until 1841) as a Whig. His membership in the Whig Party was natural. Lincoln’s father was a Whig, and the party’s ambitious program of national economic development was the perfect solution to the problems Lincoln had seen. His first platform announced that the poorest and most thinly populated counties would be greatly benefited by the opening of good roads and the clearing of navigable streams. “There cannot justly be any objection to having rail roads and canals” (Donald 67). As a Whig, Lincoln supported the Second Bank of the United States, the Illinois State Bank, government-sponsored internal improvements (roads, canals, railroads, harbors), and protective tariffs.
His Whig vision of the West came from Henry Clay. Unlike most successful American politicians, Lincoln was unsentimental about agriculture, calling farmers in 1859 “neither better nor worse than any other people.” He remained conscious of his origins and was therefore sympathetic to labor. He admired the American system of economic opportunity in which the “man who labored for another last year, this year labors for himself, and next year he will hire others to labor for him” (Warfel 58). Slavery was the opposite of such opportunity and mobility, and Lincoln stated his political opposition to it as early as 1837.
At this time in American history most people worked on farms, and slavery was becoming a central issue. Lincoln “was losing interest in politics” (Miller) when the Kansas-Nebraska Act was passed by Congress in 1854. This legislation opened lands previously closed to slavery to the possibility of its spread by local option (popular sovereignty); Lincoln viewed the provisions of the act as immoral. Although he was not an abolitionist and thought slavery unassailably protected by the Constitution in states where it already existed, Lincoln also believed that America’s founders had put slavery on the road to “ultimate extinction” by preventing its spread. He saw this act, sponsored by Democratic Senator Stephen A. Douglas, as a new and alarming development. The Whig and Democratic parties were in conflict over many things. The Democrats favored states’ rights, meaning less federal involvement in the development of the states, whereas the Whigs favored a strong federal government and faith in the law, and most of them were anti-slavery. The Whigs celebrated Clay’s vision of the “American System” that promoted rapid economic and industrial growth in the United States. They demanded government support for a more modern, market-oriented economy, in which skill, expertise, and bank credit would count for more than physical strength or land ownership, and they sought to promote faster industrialization through high tariffs, a business-oriented money supply based on a national bank, and a vigorous program of government-funded “internal improvements,” especially expansion of the road and canal systems. To modernize American life from within, the Whigs helped create public schools, private colleges, charities, and cultural institutions. The Democrats, by contrast, held to the Jeffersonian ideal of an egalitarian agricultural society and defended traditional farm life. In general the Democrats enacted their policies at the national level, while the Whigs succeeded in passing modernization projects in most states. To Lincoln, the U.S. Constitution is the political religion of the nation. Lincoln’s political religion is not religious in the conventional sense; though he speaks in religious terms, it is a political faith. It represents the passion and trust necessary to sustain American life, and to hold this political faith is to understand and obey the laws set forth by the government. This idea of the political religion would shape Lincoln’s later political goals and policies.
|
What Is Genetic Counseling?
Genetic counseling helps families understand inherited medical conditions.
Advances in medical genetics are leading to greater knowledge of many complex conditions and disabilities. Genetic counselors can help you understand:
• The risk of passing genetic disorders to your child.
• Which genetic tests to consider.
• How to make informed choices about complex conditions.
Conditions Genetic Counselors Support
Prenatal counseling provides diagnostic support for a wide variety of conditions, including:
Integrated Care
The genetic counseling program at Gillette Children’s Specialty Healthcare specializes in the latest diagnostic procedures. You’ll benefit from a team that collaborates with leading laboratories and universities across the globe, and has extensive experience working with children, teens and adults who have disabilities and complex conditions.
Gillette’s geneticists and genetic counselors collaborate with a wide range of specialists to plan comprehensive care for each patient.
As part of a comprehensive care plan, geneticists and genetic counselors work with providers in a variety of specialties to ensure your family has the services it needs, which might include:
|
Republic of Serbian Krajina
30 Years Since the Serbian Massacre in Vukovar
Today marks the 30th anniversary of the fall of the Croatian city of Vukovar to the former Yugoslav army. The city was captured after a three-month siege and virtually razed to the ground by round-the-clock bombardment. The first war crimes in Europe after the end of the Second World War were committed here.
The Fall of Vukovar: Oral History of a Croatian Town’s Destruction
The Yugoslav People's Army, aided by Serb Territorial Defence forces and paramilitaries from Serbia, launched a full-blown attack on Vukovar in eastern Croatia on August 25, 1991, beginning a siege that would last for 86 days and leave around 3,000 soldiers and civilians dead before the town's defenders had to surrender.
Serbian President Denies Threatening to Kill Croatian War Prisoner
Serbia's President Aleksandar Vucic on Friday rejected claims that he participated in a war crime in Croatia in 1991, after a Croatian newspaper reported that a trial witness testified that Vucic threatened him with death.
Vucic told media in Belgrade that he was in Croatia several times in the 1970s and 1980s as a child and a teenager, but not in 1991.
Croatian Serbs Commemorate Victims of 1995 Operation Storm
Croatian Serb advocacy organisations and other human rights organisations on Wednesday started a six-day campaign to commemorate the Serbian civilian victims of the Croatian army's 1995 Operation "Oluja" ("Storm").
The operation terminated an ethnic Serb rebellion but also resulted in some 200,000 Serbs being expelled or fleeing the Knin region in southwest Croatia.
‘Nationalists Want to Convince Croats and Serbs They Can’t Coexist’
Marijana Stojcic, a sociologist and researcher from the Belgrade-based Centre for Public History, believes that the rival official narratives about Operation Storm in the two countries, created and supported by nationalists, are being used to achieve political goals.
|
New type of COVID vaccine potentially easier to produce and does not need refrigeration
Currently available COVID vaccines require cold storage and sophisticated manufacturing capacity, which makes it difficult to produce and distribute them widely, especially in less developed countries. A new type of vaccine would potentially be much easier to produce and would not need refrigeration, report researchers at Boston Children's Hospital in the November 2 issue of PNAS.
The researchers, led by Hidde Ploegh, PhD, and first authors Novalia Pishesha, PhD, and Thibault Harmand, PhD, believe their technology could help fill global vaccination gaps and that the same technology could be applied to vaccines against other diseases.
In mice, the vaccine elicited strong immune responses against SARS-CoV-2 and its variants. It was successfully freeze-dried and later reconstituted without loss of efficacy. In tests, it remained stable and potent for at least seven days at room temperature.
Unlike current COVID-19 vaccines, the new design is completely protein-based, making it easy for many facilities to manufacture. It has two components: antibodies derived from alpacas, known as nanobodies, and the portion of the virus's spike protein that binds to receptors on human cells.
"We could also attach the whole spike protein or other parts of the virus. And we can change the vaccine for SARS-CoV-2 variants quickly and easily."
Novalia Pishesha, PhD, First Author
Targeting antigen-presenting cells
The nanobodies are the key part of the vaccine technology. They are specially designed to target antigen-presenting cells, critical cells in the immune system, by homing to class II major histocompatibility complex (MHC) antigens on the cells' surface. This brings the business end of the vaccine (in this case, the segment of the spike protein) directly to the very cells that will "show" it to other immune cells, sparking a broader immune response.
Current COVID-19 vaccines stimulate production of the spike protein at the site in the body where they're injected, and are presumed to stimulate antigen-presenting cells indirectly, says Ploegh.
"But taking out the middleman and talking directly to antigen presenting cells is much more efficient," he says. "The secret sauce is the targeting."
In experiments in mice, the vaccine elicited robust humoral immunity against SARS-CoV-2, stimulating high amounts of neutralizing antibodies against the spike protein fragment. It also elicited strong cellular immunity, stimulating the T helper cells that rally other immune defenses.
A manufacturing advantage
Because the vaccine is a protein, rather than a messenger RNA like the Pfizer/BioNTech and Moderna vaccines, it lends itself much more to large-scale manufacturing.
"We don't need a lot of the fancy technology and expertise that you need to make an mRNA vaccine," says Harmand. "Skilled workers are currently a bottleneck for production of the COVID vaccine, whereas biopharma has a lot of experience producing protein-based therapeutics at scale."
This could potentially enable production of the vaccine at many sites around the world, close to where it would be used. The team has filed a patent on their technology and now hopes to engage biotech or pharmaceutical companies to take their work into further testing and, eventually, a clinical trial.
"It may be that initial application is something else other than COVID-19," says Ploegh. "This study was the proof of concept that our protein-based approach works well."
Journal reference:
Pishesha, N., et al. (2021) A class II MHC-targeted vaccine elicits immunity against SARS-CoV-2 and its variants. PNAS.
|
Most doses of a COVID-19 treatment are going unused
Between 5 and 20 percent of the antibody drugs sent to states are used
[Photo: an IV medication drip]
The majority of the doses of COVID-19 antibody drugs sent to states have not been used, Moncef Slaoui, head of Operation Warp Speed, the US government’s coronavirus vaccine effort, told CNBC. Around 65,000 doses of the drugs, which can help protect people at high risk of severe COVID-19 from developing serious cases of the disease, go out each week. Only 5 to 20 percent end up going to patients.
It’s disappointing, Slaoui told CNBC, because the drugs could help keep COVID-19 patients out of the hospital.
Doses are going unused because administering them is complicated. Ongoing surges in COVID-19 cases across the country mean states don’t have the resources to sort through those logistics. The Utah Department of Health told The Verge in November that the state had to focus on keeping hospitals afloat, and couldn’t devote the time to organizing distribution of the antibody drugs.
Despite the thousands of unused doses, the antibody drugs, made by the pharmaceutical companies Regeneron and Eli Lilly, are in limited supply. Each state gets a set amount each week based on the number of COVID-19 cases it reports, and then has to decide how to divvy that supply up among hospitals. It's not a readily available resource, so doctors aren't relying on it as a standard treatment.
Another challenge is that the antibody drugs have to be given to patients soon after they contract COVID-19. Timing is everything. If patients aren’t getting tested or don’t get test results back within a short window after they fall ill, they can’t benefit from the drug. Even if they do get diagnosed with COVID-19 quickly at a testing site, they may not start to feel seriously ill or call a doctor until they’re outside that window. Without that contact, they might not know about or be offered the drug.
In addition, the drug has to be given intravenously — so patients who are in the early, most contagious stages of their disease have to go to a hospital or outpatient facility where they will interact with nurses and doctors. States and health care organizations have to set up safe places for patients to receive the treatment.
Slaoui told CNBC that Operation Warp Speed may be able to help states work through those logistics. But for now, they’re still a barrier stopping thousands from receiving treatment for COVID-19.
|
Welcome to Virtual Learning
Covid-19 brought conventional teaching to a halt, but it also provided the education sector with a once-in-a-lifetime opportunity to transform the way teaching and learning are delivered. The pandemic's immediate effect can be seen in the way the education sector now works.
Over the past two years, the world has undergone a sea change in the field of virtual learning, which is now a common approach visible in every aspect of teaching and learning.
Digital Education
A virtual classroom is a computerized version of a physical classroom or training space. Teachers teach and students learn in real time, face-to-face, but through internet-connected devices. Real-time brainstorming, ideation, and debates take place, and tests are given and taken before and after each session. Virtual learning offers myriad benefits, including:
• Access from any web browser, without the need to download any large programs or plugins such as Flash or Java.
• Using developer APIs and plugins, you can integrate online classroom features with your current website, CMS, or LMS.
• Allow students to attend live classes on the go using Android and iOS apps on their smartphones and tablets.
• Schedule and track live sessions, and generate automated reports on teacher, material, and learner performance.
• Record live classes and, once they end, share them with internal and external audiences via email or social media.
• Real-time audio-video and textual communication, an interactive whiteboard, surveys, and quizzes can all help to increase learner participation.
Artificial Intelligence in Digital Education
In the field of education, digitisation and AI have brought a significant improvement in productivity and efficacy, as well as an increase in graduation rates in digital higher education. AI aids the creation of high-quality digitised learning material that is contextualised to make learning more meaningful and engaging. The emerging sciences of artificial intelligence also aid the development of personalised learning plans and methods: a specialised learning science that combines learning psychology, behavioural analytics, content delivery, and progress evaluation. Artificial intelligence tools help make global classrooms accessible to all types of students, including those with special needs.
These digital platforms that use AI to provide content, testing, and feedback recognise knowledge gaps. Machines can easily grade multiple-choice exams, and AI has a lot of potential for making enrollment and admission processes more successful.
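One concrete point here is easy to demonstrate: grading a multiple-choice exam is a mechanical comparison against an answer key. A minimal sketch (my own illustration, not any particular platform's code):

```javascript
// Compare each response against an answer key and tally the score.
function gradeMultipleChoice(answerKey, responses) {
  let correct = 0;
  for (let i = 0; i < answerKey.length; i++) {
    if (responses[i] === answerKey[i]) correct++;
  }
  return {
    correct,
    total: answerKey.length,
    percent: Math.round((100 * correct) / answerKey.length),
  };
}

// Example: gradeMultipleChoice(['b', 'c', 'a'], ['b', 'a', 'a'])
//   -> { correct: 2, total: 3, percent: 67 }
```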
AI adaptive programming’s advanced algorithms can provide students with a one-on-one curriculum that helps to avoid educational bias. The complex AI algorithm can determine how well students understand subjects.
There are a variety of AI applications being developed for education, including learner mentors, smart content creation, and virtual global conferences.
Learning Management Systems (LMS)
An LMS can communicate learning goals and organise timelines. It helps promote online learning, monitor progress, offer digital learning resources, manage communication, possibly sell content, and provide a variety of communication features.
ERP Systems
Students, teachers, timetables, exams, admissions, fees, monitoring, and other information are all easily accessible via an education ERP system. This allows management to think about and assess different facets of the institution more quickly, resulting in increased planning capabilities.
Virtual Webinars
Webinars have the potential to become your main customer acquisition and outreach platform. If prepared and executed strategically, they can increase sales revenue by speeding up lead generation and moving prospects through the sales funnel.
Examples of such technological interventions in edutech include virtual classrooms, AI-driven learning tools, LMS and ERP systems, and webinars.
Let’s Work Together
We look forward to starting a successful journey with you. Please write to us about how we can help.
|
How to write a success story: examples and proven tips
A success story (or case study) enables readers to take the first step toward solving a complex, and potentially expensive, business challenge. A good success story uses evidence from evaluation to show value: more than a list of events or activities, it describes a positive change over time and shows how that change benefits its audience, and it serves as a vehicle for engaging potential participants, partners, and funders. The proven steps below come up again and again.
1. Know your audience. Success stories address a specific audience; most enterprises that adopt success stories as a key marketing vehicle tailor at least one story to each main audience they target.
2. Open with a hook. A hook is what draws you into a story and makes you keen to keep reading: a question, a statement, or an interesting fact that grabs attention. A short story like the one in a campaign description has a very limited time frame in which to draw in readers.
3. Tie the story to a news peg or time peg. A news peg is a trending story or topic in the news that relates to what you're pitching, such as a presidential debate or a newly released medical study; a time peg is an upcoming date or event. Both let you hook the reader with a relevant and widespread story.
4. Write a compelling headline and insert it in the header. If your headline is not compelling, hardly anyone will be impelled to read the content; if you spent 15 minutes writing the case study, spend another 15 minutes thinking of a silver-bullet headline.
5. Follow a simple structure. Whether told through text, audio, or video, the best case studies follow the same simple but effective template: set the scene to give context, present the challenge, show how the customer came to discover, buy, and use your product (the solution and buying process form the middle act), and lead with the most important and compelling information. Be clear about the four or five main points that have to be covered; each sentence, each word even, should carry meaning and is an opportunity to wow your audience.
6. Back claims with evidence. Include statistics that tie real success to the benefits you mention, and use visuals: we process visuals 60,000 times faster than text, so people icons showing 38 out of 100 versus 89 out of 100 land harder than the raw numbers.
7. Write the first draft in one sitting. Writing the first draft in as short a time as possible keeps the story coherent; a short story can be written in one sitting, a novel in one season (three months). Long-form writing (1,000 to 1,500 words) that is data-backed and does the leg work for readers always pays off.
8. Keep the story consistent everywhere. On social media, in your ad campaigns, on your website, and in your product briefs, your story must sound and feel the same; when it comes to your story, consistency is the key to success.
9. Let the customer review the story before you publish, so they are happy with what you have written about them. This respect will go a long way in securing your relationship; in many enterprises, the customer is the number one factor that limits development of success stories. Just reading your story should make prospects feel better already, so they start imagining how good it would be to work with you.
The same word covers other crafts as well. In agile software development, a "user story" is a short requirement written from a user's perspective and grouped under larger epics: an epic "Managing profiles" might contain the stories "As an app user, I want to add profile photos so that more people write to me" and "As an admin, I want to delete or block photos from users' profiles so that they don't violate community rules." Pass/fail acceptance criteria allow such stories to form the basis of automated tests, and writing good epics and user stories is one of the most basic and important tasks of product management. In fiction, decide first what emotion drives the story: a sad story about a man losing his wedding ring is very different from a sad story about a family losing a child; give the plot a conflict with something at stake (for example, two men fighting over the same person), make characters relatable, and be willing to revise your outline when the story pulls in a new direction, as when two characters develop more romantic chemistry than you planned. And if you are writing your own life story, don't hog all the spotlight: remember other family members, especially brothers and sisters. People will notice.
|
Creating basic JavaScript encryption between Frontend and Backend.
One big problem with JavaScript is that it is very hard for a developer to hide JavaScript code and to create secure data transfer between the browser and the server. It is always possible for someone to inspect XHR transfers, and this makes data transfer very insecure.
I had to deal with this problem while developing a sweepstakes application that awarded prizes to users live. To make this happen, I needed a reasonably secure session exchange between browser and server to keep the frontend and backend synchronized.
I had the following options:
1. To create JSON transfer and use “cryptic” variable names
2. Create my own encryption algorithm
3. Use some kind of JavaScript AES/DES library
I decided to take the second option. My data was in text format and looked like this:
The next step was to find a way to encode this data on the backend and decode it on the frontend. To do this, I decided to use the JavaScript bitwise XOR operator.
According to the documentation, the bitwise XOR operation works like this:
The ^ operator looks at the binary representation of the values of two expressions and does a bitwise exclusive OR operation on them. Each result bit is 1 if the corresponding bits of the two operands differ, and 0 if they are the same (0^0 = 0, 0^1 = 1, 1^0 = 1, 1^1 = 0).
In other words, a bitwise-encoded string is created in two steps:
1. Partition the message into n-bit blocks
2. Bitwise XOR on the blocks
So the function had to split the string into blocks and apply the bitwise operation to each block, as the sketch below shows.
To decode the string, you can use exactly the same function again, because XOR is its own inverse: (x ^ k) ^ k = x.
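The original code block has not survived here, so the following is a minimal sketch of what such a function looks like (the function and variable names are my own, not the author's):

```javascript
// XOR each character of the message with a repeating key.
// Because XOR is its own inverse, this one function both
// encodes and decodes.
function xorCipher(message, key) {
  let out = '';
  for (let i = 0; i < message.length; i++) {
    out += String.fromCharCode(
      message.charCodeAt(i) ^ key.charCodeAt(i % key.length)
    );
  }
  return out;
}

// Round trip: decoding is just encoding again with the same key.
const secret = xorCipher('score=1337', 'myKey');
console.log(xorCipher(secret, 'myKey')); // "score=1337"
```

Note that this is obfuscation rather than real cryptography, which matches the author's closing advice to use a proper crypto library for anything that needs genuine security.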
Next I had to Base64-encode and Base64-decode the string, because the encoded string is binary and cannot be read reliably when doing an XHR request. Luckily there are a lot of Base64 encode/decode solutions available for JavaScript on the internet.
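Assuming the xorCipher sketch above, the browser-side wrapping might look like this. btoa and atob are standard browser built-ins; they only accept strings whose character codes fit in one byte, which holds as long as the plaintext and key are ASCII:

```javascript
// Wrap the XOR output in Base64 so it survives an XHR round trip.
function encodePayload(message, key) {
  return btoa(xorCipher(message, key));
}

function decodePayload(payload, key) {
  return xorCipher(atob(payload), key);
}
```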
It was then time to create a PHP solution for the backend encoding. This task was much harder than I thought, because PHP's ord function does not support multi-byte characters.
I came up with the following class:
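As a rough illustration of what such a server-side helper can look like, here is a hypothetical Node.js stand-in (my own sketch, not the author's PHP class); operating on raw bytes via Buffer sidesteps the multi-byte character problem described above:

```javascript
// Hypothetical Node.js stand-in for the original PHP class.
// Working on raw bytes avoids multi-byte character pitfalls.
class JsEncode {
  constructor(key) {
    this.key = Buffer.from(key, 'utf8');
  }

  encode(message) {
    const data = Buffer.from(message, 'utf8');
    const out = Buffer.alloc(data.length);
    for (let i = 0; i < data.length; i++) {
      out[i] = data[i] ^ this.key[i % this.key.length];
    }
    return out.toString('base64'); // safe to ship over XHR
  }

  decode(payload) {
    const data = Buffer.from(payload, 'base64');
    const out = Buffer.alloc(data.length);
    for (let i = 0; i < data.length; i++) {
      out[i] = data[i] ^ this.key[i % this.key.length];
    }
    return out.toString('utf8');
  }
}
```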
Using such a class is a one-liner on the server side: construct it with the shared key and call its encode method before answering the XHR request.
Now it was possible for me to encode data on the server side and decrypt it in the browser (or vice versa). To make the application even harder to pick apart, I also ran the JavaScript through a minifier/obfuscator; my favorite here is the Google Closure Compiler. Even though it is very easy to deobfuscate code with services like jsbeautify, the result is still much harder to read and understand.
Another, more secure way to encrypt and decrypt data is to use the excellent CryptoJS, but I found it too heavy for a small sweepstakes application. If you have a bigger project that needs more security, you should definitely check out CryptoJS.
JsEncode source code is also available on my GitHub.
|
The idiom "Cross the Sea in Secret" comes from the story of how Xue Ren Gui (薛仁贵 xuē rén guì) skillfully helped Emperor Tai Zong of Tang (唐太宗 táng tài zōng) cross the sea.
In the early years of the Tang Dynasty (唐朝 táng cháo) (618–907 A.D.), Tai Zong led an army to defeat Goryeo¹. He defeated Gai Su Wen² (盖苏文 gài sū wén) in Liao Dong (辽东 liáo dōng), who then fled across the sea to the Korean Peninsula in retreat. Tai Zong was preparing to cross the sea to continue his attack; however, as he reached the sea and witnessed its boundlessness, he felt sick to his stomach and nearly fell off his horse.
² Gai Su Wen was a military dictator (603–666 A.D.) in Goryeo.
When his army was prepared and it was time for Tai Zong to board his boat, he refused. One after another his generals tried to persuade him, but to no avail. His Chief Marching Commander, Zhang Shi Gui (张士贵 zhāng shì guì), returned to the Great Horde³ frustrated and at his wits' end. His deputy, Xue Ren Gui, came to him and whispered in his ear, and Zhang Shi Gui nodded in agreement.
³ The Great Horde was a Tatar-Mongol khanate that existed from about 1466 to 1502.
Chapter 1-1: Cross the Sea in Secret
Category: Winning Stratagem
Chinese: 瞒天过海 (mán tiān guò hǎi)
Metaphor: To Move in Secret
Literal Translation: Hide [from the] Sky [to] Cross [the] Sea
A few days later, Tai Zong was led into a luxurious hall via a dark passageway, where he and his men drank, played, and sang until they all collapsed at the table.
The next day, Tai Zong woke up, still dizzy from the night's festivities, and looked around the room. It was beautifully decorated, even the windows were covered with silk and satin. But before Tai Zong could investigate further, Zhang Shi Gui brought a new group of men in, placed some wine and delicacies on the table, and signalled for them to continue the festivities. Tai Zong joined in until they all collapsed at the table again.
On the third day, Tai Zong woke up and finally had a chance to look around. He stumbled toward the door, and when he opened it he was stunned to find himself standing on the deck of a boat in the middle of the sea, his army sailing close around him. He glanced at Zhang Shi Gui and smiled miserably: "You got me."
As the name implies, "Cross the Sea in Secret" means to deliberately create an illusion that allows people to cross the sea unknowingly. This stratagem is used in the military all the time. It is a metaphor for a tactic that uses some form of illusion to cover up one's true intentions. The objective is to blindside the enemy, strike where or when they are unprepared, and catch them at a disadvantage.
The character 天 (tiān) in the Chinese idiom 瞒天过海 can mean "sky," "Heaven," "the Emperor," or "the entire world" (everything within the Earth Realm (人间 rén jiān)). In this instance, 天 refers to the Emperor, so one could also translate the idiom as "Cross [the] Sea [by] Deceiving [the] Emperor" or "Cross [the] Sea Without Letting the Emperor Know."
|
Tag Archives: light pollution
Lit up
The Sewanee sky is often dark enough to see the silver smoke of the Milky Way drifting above the treetops. On other nights:
[Photo: athletic field lights in clouds] Clouds are ignited by light from dozens of pole-mounted bulbs around athletic fields. The pulse and swirl of light from the wind-driven clouds obscures all else. We vault the sky with our own glow.
Twenty percent of the world’s electricity is used for artificial lighting. In most countries, night lights are getting brighter and more abundant.
At the athletic field, no one was on the turf. The pole-bulb clusters blazed on, regardless.
Come spring, robins will gather under the lights, singing their day-songs.
|
UNESCO's World Heritage sites are famous and well organized, highlighting what travellers seek in a destination. The list makes clear that these sites reach far beyond any single country and that their legacy can be shared and appreciated by other nations.
Armenians, among other peoples, have a background worth mentioning. Unlike Iran, whose official religion was first Zoroastrianism and later Islam, this community held Christianity as its faith, and so it formed a population that was at once Iranian and non-Iranian.
Distance from the tightly controlled central parts of the Persian Empire opened the way for the Armenian state to be influenced by underground missionaries coming from Christian chapels. After nearly 400 years, the Roman Empire finally accepted Christianity and became a religious threat to Iran, and Armenia was torn in the middle of this fight.
The population nevertheless spread into the surrounding lands, and when the battles subsided they began trading with Persian merchants; some migrated to Iran and stayed there.
When the Sassanid dynasty (224–651 A.D.) collapsed, Byzantine Rome took over Armenia. History's terrors did not stop there: Turks and Turkmens began their invasions and overran Anatolia, where Armenians had finally begun to feel at home. The Byzantine Christian Empire was in turn defeated by the Ottomans (1299–1922).
Armenian Monastic Ensembles of Iran
Wars for power, greed, and legitimacy broke out between the young Safavid Empire (1501–1736) and the experienced Ottomans. The Armenian population lived exactly in the middle, so many Armenians migrated to Iran and began new lives. Because Armenians were craftsmen and well-known traders, some were settled in Isfahan, the beautiful capital of Iran, which European politicians, painters, artists, and tourists visited many times. These Armenians named their district New Julfa, after the Julfa in eastern Azerbaijan from which they had come.
Their community kept to itself, but its members were free to visit their sacred chapels and monasteries and to hold their own ceremonies. The Armenians of Iran came to resemble other Iranians while keeping their unique features. The histories of the two nations became entangled, each a part of the other: the Armenians helped Shah Abbas defeat the Ottomans and boosted trade by moving to Isfahan, and though they lost many of their own, they took part in political life and counted themselves as Iranians.
Armenians of Iran fought side by side with other Iranians in the Persian Constitutional Revolution (August 1906) and made their way into the Parliament, where they were granted citizenship for the first time. The Armenian community has kept deep attachments to its motherland; nonetheless, its members gave flesh and blood to defend Iran from foreign invaders, especially during the war with Iraq.
Armenians have many legacies and a rich culture, and, as this article shows, their works have won them a kind of immortality. UNESCO's World Heritage List includes three Armenian monastic ensembles in the northwest of Iran.
Monastery of Saint Thaddeus
The mountains of West Azerbaijan Province in Iran are home to a remarkably well-preserved medieval monastery named after Saint Thaddeus. This ancient Armenian monastery, also known as Kara Kilisa (the "Black Church"), is located about 20 kilometers from the town of Chaldoran. Its conical roofs are probably its most distinctive architectural elements, visible from long distances.
Saint Thaddeus (also recorded as Saint Jude) evangelized in Armenia and Iran in the first years of Christianity and gave his life for the faith he preached. He is still honored as an apostle of the Armenian Church. According to legend, beneath the present church lies the base of an original monastery erected in 68 A.D. and dedicated to Thaddeus.
The Qajar prince Abbas Mirza aided in its renovation and repair in 1811, and Simeon, Father Superior of the monastery, added a large narthex similar to those of western churches. The annual ceremony and pilgrimage were held on 14–16 July 2016 at the St. Thaddeus Monastery, administered by the Armenian Diocese of Azerbaijan.
Chapel of Dzordzor
The geographical region of Azerbaijan has a profound connection with the Armenian community, which is why several great chapels were built in this land. Since the mother of Shah Ismail, the founder of the Safavid dynasty and its first king, was a Christian lady, the Armenians' relationship with the royal family granted them certain privileges. The Chapel of Dzordzor is located in Maku, in West Azerbaijan, near the Zangmar River.
As a war strategy, Shah Abbas I ordered all the lands between Qazvin, then the capital, and the Ottoman army to be evacuated. The Dzordzor Chapel was abandoned and later destroyed in the early seventeenth century.
Today, the only standing section of this historic monastery is the Chapel of the Holy Mother of God. Alongside the St. Thaddeus and St. Stepanos monasteries, the Dzordzor Chapel has been on UNESCO's World Heritage List since July 6, 2008.
St. Stepanos Monastery
Three kilometers south of the Aras River and 17 kilometers from Julfa itself, hidden in a maze of beautiful countryside, stands the second most important Armenian church in Iran, commemorating the first martyr of Christianity, who was stoned to death in 36 A.D. St. Stepanos Monastery is also known as Ghezel Vank, or "the Red Monastery."
In terms of architecture, the church and the adjoining monastery are very impressive. The church has no easy access road, and time has not been kind to it. Shah Abbas of the Safavid dynasty won the hearts of the Armenians of Julfa by reconstructing it. Every year, on a specific day, Armenians make a pilgrimage there and perform their rituals.
The decorations and architectural features of this building, in what later became known as "the Armenian style," are remarkable. The church has three main spaces: the bell tower, the Daniel stove, and the main chapel. The prayer house, currently under renovation, once held images of Christian saints and inscriptions of some of the Qajar kings who tried to preserve the building. What instantly strikes a visitor is the quiet, peaceful atmosphere of the church.
|
What are the characteristics of auditory learners?
Characteristics of Auditory Learners
• Like to talk.
• Talk to self.
• Lose concentration easily.
• Prefer spoken directions over written directions.
• Enjoy music.
• Read with whispering lip movements.
• Remember names.
• Sing.
How does a student learn best?
Students learn best when they're challenged with novelty, a variety of materials, and a range of instructional strategies. Law of feedback: effective learning takes place when students receive immediate and specific feedback on their performance. Law of recency: students retain best what they have learned most recently.
What are the 3 basic learner types?
The three basic types are visual, auditory, and kinesthetic (hands-on) learners.
What do kinesthetic learners struggle with?
People with a kinesthetic learning style often struggle to learn through traditional means and sedentary activities, such as lectures and conferences. Kinesthetic learners love to experiment, so give them hands-on tasks and stimulate their learning that way.
Which study habits can you improve?
11 Techniques to Improve Your Study Habits
• Find a good studying spot. This is important.
• Stay Away From Your Phone. Distractions also include avoiding your phone.
• No Willpower?
• Take a break and take care of yourself.
• Organize lectures notes.
• Join or create a study group.
• Aromatherapy, plants and music.
• Leave time for the last-minute review.
What are the benefits of auditory learning?
An auditory learning style enables auditory learners to learn best by hearing or through verbal communication. Auditory learners are good at remembering what they hear, as they take in information through auditory representation. Auditory components such as tone, pitch, and loudness are all important to these learners.
How do I know what type of learner I am?
If, for example, you are a mixture of auditory and visual learning styles, the following strategies can help:
1. Use index cards to learn new words; read them out loud.
2. Record yourself and then listen to the recording.
3. Have test questions read to you by a friend or family member.
4. Study new material by reading it out loud.
5. Write down key words, ideas, or instructions.
How do I learn best?
Another one of the best ways to learn is to focus on learning in more than one way. Instead of just listening to a podcast, which involves auditory learning, find a way to rehearse the information both verbally and visually. By learning in more than one way, you’re further cementing the knowledge in your mind.
What are the types of learners?
Visual learners, auditory (or aural) learners, kinesthetic (or hands-on) learners, and reading and writing learners.
What are the 7 different learning styles?
What are the 7 different learning styles and do they work?
• Visual.
• Kinaesthetic.
• Aural.
• Social.
• Solitary.
• Verbal.
• Logical.
What are the 6 learning styles?
The Six Perceptual Modalities (Preferred Learning Styles) Of Adults Are:
• 1) Visual. Visual learners need to see simple, easy-to-process diagrams or the written word.
• 2) Aural. Aural learners need to hear something so that it can be processed.
• 3) Print.
• 4) Tactile.
• 5) Interactive.
• 6) Kinesthetic.
How do you teach different types of learners?
Strategies for teaching social learners:
1. Be inquisitive and ask them what they think about a concept/topic/idea.
2. Ask them to bounce ideas off of each other and compare their ideas with others’.
3. Allow them to discuss and share stories.
4. Include group work.
5. Engage in a role-play.
How many types of learners are there?
What are the 8 different types of learners?
The 8 Learning Styles
• Visual (spatial) Learners.
• Aural (audio) Learners.
• Physical (tactile) Learners.
• Verbal Learners (aka Linguistic Learners)
• Logical (analytical) Learners.
• Social Learners (aka Linguistic Learners)
• Solo Learners.
• Natural/ Nature Learners.
What are auditory learners good at?
Auditory learners are good at writing responses to lectures they've heard, and they do well in oral exams because they learn effectively from information delivered orally in lectures, speeches, and oral sessions. Auditory learners are good at storytelling, and they solve problems by talking them through.
What is an example of auditory learning?
Auditory learning style – this means you learn by hearing and listening. Auditory learners:
• Acquire knowledge by reading aloud
• Hum and/or talk to themselves
• Make comments like: "I hear you clearly." "I'm wanting you to listen." "This sounds good."
How can I be an effective student?
10 Habits of Successful Students
3. Divide it up.
4. Sleep.
5. Set a schedule.
6. Take notes.
7. Study.
8. Manage your study space.
|
Birth spacing and limiting using modern family planning (FP) methods have the potential to avert 1.5 million maternal deaths and about two million infant deaths annually and contribute to the overall economic growth and development (1, 2). Unfulfilled demand for FP, on the other hand, contributes to unplanned pregnancies and unsafe abortions (3, 4). Despite these findings, the availability and access to modern FP among women of reproductive age (15–49 years) remains a challenge globally and particularly in low- and middle-income countries (LMICs) (1). For instance, while 63% of women of reproductive age globally use some form of contraception, 11.5% have an unmet need for FP (5), with only about one in three women using contraception and 23.4% having an unmet need for FP in sub-Saharan Africa (1, 6).
In Kenya, remarkable progress towards universal access to FP has been made, with an increase in the modern contraceptive prevalence rate (CPR) from 53.2% in 2014 (7) to 58% in 2020 (8); still, about one in ten women has an unmet need for FP (9). However, successive national demographic and health surveys have continued to show an urban-rural disparity in contraceptive use (7, 10). For example, in 2014, about 40% of urban women were not using modern contraceptives and a further 13% had an unmet need for FP (7, 10). Recent studies have also found persistently low modern contraceptive use and high unmet need in urban areas (11, 12), despite a general expectation that access to FP services is better in urban than in rural areas, hence the need to explore this phenomenon.
Furthermore, Kenya, like most sub-Saharan African countries, is experiencing rapid urbanisation, with the urban population growing from 7.8% in 1962 (13) to 31.1% in 2019 (14) and projected to reach 45.7% by 2050 (15). This rapid urban population growth is attributable partly to rural-to-urban migration and mainly to natural population growth (16), reinforcing the need for improved FP services in urban areas. Unfortunately, inadequate infrastructure and housing in urban areas have resulted in 56% of the urban population living in slums and other informal settlements (17), 36% of whom live in Nairobi County (18). These informal urban settlements are often characterised by overcrowding, poor access to healthcare, including reproductive health services (19), and poor sexual and reproductive health outcomes (20). Besides, only 73% of health facilities in urban areas in Kenya offer FP services, compared to 91% in rural areas; even fewer facilities provide specific FP methods, and only 44% offer FP services for adolescents (21).
Although the urban fertility rate has been declining with an overall increase in modern contraceptive use, the unmet need for contraception is still high and greatest among the urban poor (9). Therefore, urbanisation, urban population dynamics, and the rise of urban informal settlements cannot be ignored in the bid to achieve health goals, including FP. Consistent FP programmes that build on the gains of previous urban FP programs such as Tupange (22) are needed, both to address the increased demand and need for FP in urban areas and to address other confounders of FP use, including poverty, low education, exposure to FP messages in the media, and FP commodity stockouts (11, 12). Also, while previous studies have consistently shown the prevalence of unmet need for FP to be lower in urban than in rural areas (11, 12), these averages may mask a rich-poor gap in unmet need for FP among urban women. Hence, to better inform urban FP programming and policies in Kenya, this study aimed to determine the factors associated with modern contraceptive use and unmet need for FP among urban women in 11 counties in Kenya.
Study Setting
Kenya is a lower-middle-income country with a fertility rate of 3.6 births per woman (23) and a population of about 47.6 million, 30.1% of whom live in urban areas (14). It has a gross domestic product of US$1,816.6 per capita, a human development index of 0.6, and a gender inequality index of 0.5 (24). Health care services are provided through six levels of care, from community services to national referral services. The 47 subnational (county) governments oversee health service delivery at the county level, while the national government oversees health policy formulation, training, and the management of national referral hospitals. Family planning services are provided at all levels of health care, but only about 85% of health facilities currently provide FP services: 97% of government and 79% of private facilities do so, as do 89–90% of dispensaries and health centres and 99% of public primary hospitals (21).
Data Source
The study used pooled data from seven rounds of the Performance Monitoring for Accountability (PMA) surveys (25–31). PMA is a multi-country survey on the sexual and reproductive health of women of reproductive age (32). In Kenya, the survey has been conducted since 2014 in 11 counties (Kericho, Kiambu, Nandi, Siaya, West Pokot, Bungoma, Kakamega, Kitui, Kilifi, Nyamira, and Nairobi) using a multistage cluster design, with the counties as the strata. The included households were systematically selected from enumeration areas that were randomly selected from the Kenya National Bureau of Statistics (KNBS) master frame. In the sampled households, all females aged 15–49 years who consented to the study were interviewed. The survey also included randomly selected service delivery points (SDPs) offering FP services in the selected communities. The management staff in the sampled health facilities were interviewed on behalf of the facility. The survey methodology for the PMA surveys has been described further elsewhere (25–31).
This study included data from women aged 15–49 years and from urban health facilities. A total of 35,792 women of reproductive age were interviewed, 13,154 of whom were from urban areas. Of these 13,154 urban women, 2,680 observations with missing data were excluded, leaving a final sample of 10,474 women for the analysis of modern contraceptive use. The analysis of unmet need for FP included 8,722 women, after excluding 4,432 (3,512 who were not sexually active, 577 who were infecund, and 343 with missing observations).
Study Variables
Modern contraceptive use was defined as “the use of a product or medical procedure that interferes with reproduction from acts of sexual intercourse” (33) while unmet need for FP was defined as “women who were sexually active, fecund, not using any form of contraception but did not wish to become pregnant at all or within the next two years” (34, 35).
Unmet need for FP was assessed using the questions: “Would you like to have a child/another child, or would you prefer not to have any / any more children?” and “Are you or your partner currently doing something or using any method to delay or avoid getting pregnant?” (25–31). The second question was also used to assess modern contraceptive use. Table 1 describes the explanatory variables included in the study, which were selected based on their availability in the datasets and their policy importance in the literature.
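Read mechanically, the two definitions amount to a classification rule applied to each respondent. The sketch below is purely illustrative (my own JavaScript, with hypothetical field names, not actual PMA variables):

```javascript
// Illustrative only: operationalizing the paper's two definitions
// for a single survey record. Field names are hypothetical.
function usesModernMethod(w) {
  // Modern method: a product or medical procedure that interferes
  // with reproduction from acts of sexual intercourse.
  return w.usingContraception && w.methodIsModern;
}

function hasUnmetNeed(w) {
  // Unmet need: sexually active, fecund, using no method, and
  // wanting no child at all or none within the next two years.
  return (
    w.sexuallyActive &&
    w.fecund &&
    !w.usingContraception &&
    (w.wantsNoMoreChildren || w.wantsToDelayAtLeastTwoYears)
  );
}
```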
Table 1. Operational definitions of variables.
Statistical Analysis
Sample characteristics were described using frequencies and percentages, while factors associated with modern contraceptive use and unmet need for FP were assessed using bivariate and multivariable logistic regression. For the selection of variables for the multivariable analyses, Hosmer and Lemeshow recommend including (i) clinically important variables and (ii) variables with a p-value < 0.25 in the univariable analyses (48). This less stringent threshold of p < 0.25 helps address stochastic variability: univariable analysis ignores the fact that individual variables weakly associated with the outcome can contribute significantly when they are combined. In this study, a forward stepwise method was used to enter variables into the multivariable model and identify the model of best fit, which included all the variables meeting the criteria above (48). Stata 13.0 was used for the analyses, which were adjusted for the sampling design and stratification using the survey weights provided in the datasets. Statistical significance was set at p ≤ 0.05.
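To make the forward-selection procedure concrete, here is an illustrative sketch (my own, in JavaScript rather than the Stata the authors used); fitModel and its pValueOf accessor are hypothetical stand-ins for the regression machinery, and the entry threshold shown is the Hosmer-Lemeshow screening value from the text:

```javascript
// Illustrative forward stepwise selection: repeatedly add the most
// significant remaining candidate until none clears the threshold.
function forwardStepwise(candidates, fitModel, pEnter = 0.25) {
  const selected = [];
  let remaining = [...candidates];
  while (remaining.length > 0) {
    // Try adding each remaining variable to the current model.
    const trials = remaining.map(v => ({
      variable: v,
      pValue: fitModel([...selected, v]).pValueOf(v),
    }));
    // Pick the most significant addition.
    trials.sort((a, b) => a.pValue - b.pValue);
    const best = trials[0];
    if (best.pValue >= pEnter) break; // nothing left worth adding
    selected.push(best.variable);
    remaining = remaining.filter(v => v !== best.variable);
  }
  return selected;
}
```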
Sample Characteristics
The study included 10,474 urban women of reproductive age; a majority of who resided in Nairobi County (25.0%), were aged 20–34 years (64.5%) and had two or more children (55.4%). The mean age at sexual debut was 18 years (standard deviation: 3.2). Most women accessed FP services from health centres and dispensaries (92%) and 16.1% of health facilities reported stockouts of contraceptives within 3 months preceding the survey (Table 2).
Table 2. Sample characteristics of urban women of reproductive in Kenya 2014–2018.
Prevalence and Factors Associated With Modern Contraceptive Use Among Urban Women
The overall prevalence of modern contraceptive use was 53.7% (95% CI 52.1–55.3%); highest among middle-aged (57.2%) and secondary educated (54.5%) women and those from richer households (56.3%). Also, a high prevalence of modern contraceptive use was seen among women with two to three children (65.9%), who resided in Kakamega county (62.8%), were exposed to FP messages in the media (54.3%) and had access to hospitals (63.4%). The prevalence of modern contraceptive use increased from 50.3% (95% CI 46.9–53.7%) in 2014 to 55.1% (95% CI 51.8–58.3%) in 2018 (Table 3).
Table 3. Prevalence and factors associated with modern contraceptive use among urban women in Kenya, 2014–2018.
Modern contraceptive use was associated with county of residence, age, marital status, parity, education, household wealth quintile, exposure to media, and survey year. The odds of modern contraceptive use were higher among teenagers (aOR 1.39, 95% CI 1.04–1.86) and middle-aged women (aOR 2.02, 95% CI 1.76–2.31) compared to women above 35 years. Also, women exposed to FP messages in the media had higher odds of modern contraceptive use (aOR 1.30, 95% CI 1.03–1.64) than those with no exposure, and women with children had 2- to 4-fold higher odds than childless women. Unmarried women, women with no, primary, or secondary education, and those from the poorest households had 15–73% lower odds of modern contraceptive use. Women residing in Kitui, Nandi, and Kakamega counties had 53–69% higher odds of modern contraceptive use than those in Nairobi (Table 3).
Prevalence and Factors Associated With the Unmet Need of FP Among Urban Women
The overall prevalence of unmet need for FP was 16.9% (95% CI 15.8–18.1%). It was highest among teenagers (34.6%), uneducated women (30.0%), and the poorest women (29.5%), as well as among those with four or more children (24.0%), those with access to health centres (18.7%), and those from Kilifi (22.7%), Kericho (22.6%), and Siaya (21.8%) counties. The prevalence of unmet need for FP significantly decreased from 21.0% (95% CI 18.7–23.6%) in 2014 to 14.3% (95% CI 12.2–16.6%) in 2018 (Table 4).
Table 4. Prevalence and factors associated with unmet needs for FP among urban women in Kenya, 2014–2018.
Unmet need for FP was associated with the county of residence, age, marital status, education, household wealth, parity, facility type, and year of study. The odds of unmet needs for FP among women aged 15–19 years were 2.5 times (aOR 2.46, 95% CI 1.68–3.62) higher than those of women aged 35–49 years. Women with no formal education (aOR 2.77, 95% CI 1.57–4.87) and primary education (aOR 1.51, 95% CI 1.12–2.04) had increased odds of unmet need compared to women with tertiary education. Unmarried women had 43% (aOR 1.43, 95% CI 1.19–1.70) higher odds compared to the married/cohabiting while those from the poorest households had 59% (aOR 1.59, 95% CI 1.16–2.17) higher odds of unmet need compared to from the richest households. The odds of unmet need of FP were 64% (aOR 1.64, 95% CI 1.12–2.40) and 86% (aOR 1.86, 95% CI 1.24–2.79) higher among women with access to health centres and dispensaries, respectively, compared to hospitals offering FP services. The odds of unmet need of FP were 33 and 39% lower in 2015 and 2017/2018 compared to 2014 (Table 4).
This study found that slightly more than half of urban women of reproductive age used modern contraceptives, while about 17% had an unmet need for FP. Our results show an increase in the prevalence of modern contraceptive use and a decrease in unmet need for FP in urban areas between 2014 and 2018, a pattern observed in other studies in Kenya (11, 12). These improved FP indicators could be attributed to the overall increase in investment in government- and partner-supported programs in urban areas (22) and the government's commitment to improving access to FP services in order to raise the modern CPR to 66% by 2030 (8). However, despite the increase in modern CPR and the decrease in unmet need for FP, these rates still lag behind the national figures of 62% and 12% in 2020, respectively (9), which could indicate a possible urban-rural disparity in FP indicators and underscores the need to strengthen health systems to promote equal and affordable access to FP services, especially in urban areas. Furthermore, our findings highlight a possible subnational disparity in modern contraceptive use and unmet need for FP, with West Pokot, Kilifi, and Kericho counties having the lowest modern CPR and Kilifi, Kericho, and Siaya counties having the highest unmet need for FP; these findings reinforce the resolve to promote equitable access to FP services countrywide through advocacy and strengthening of devolved health service delivery at the county level.
Similar to previous studies (36, 37), we found an association between modern contraceptive use and women's age, with teenagers and middle-aged women having higher odds of both utilisation and unmet need of FP compared to women aged above 35 years. This could be explained by early sexual debut (18.1 years on average), which may generate demand for FP that is not yet sufficiently addressed. This observation is further supported by a survey that found that less than half (44%) of health facilities in urban areas in Kenya offer FP services to adolescents (21), and by a study in five Kenyan cities that found that 58% of health service providers imposed minimum age restrictions on some FP methods, such as injectables, thereby locking out young women (49). These findings highlight the need for continued advocacy and investment in FP in urban areas in Kenya, including the establishment of youth-friendly sexual and reproductive health centres, to address the unmet need for FP and increase urban modern contraceptive use.
Married urban women had a higher prevalence of modern contraceptive use than unmarried women, which could be attributed to their greater contact with health facilities during prenatal, antenatal, and postnatal care (50) and their likelihood of discussing birth spacing and limiting with their partners (51–53). However, our study also found that unmarried women had a high unmet need for FP, which confirms findings from previous studies in urban Kenya (38, 39). While some unmarried women are sexually active and desire to have children, they are likely to be teenagers or young women not in stable, supportive relationships, which could contribute to their unmet need. Also, some health providers are known to discourage unmarried women from using contraceptives, citing purported risks of infertility or difficulty conceiving (49).
Urban women from the poorest households and with lower education levels had higher odds of unmet need for FP and lower odds of modern contraceptive use, a finding consistent with a previous study in urban Kenya (42). With more than half of the urban population in Kenya living in informal settlements (17), poor women in these settlements are more likely to pay for contraception than their richer counterparts (43) and to have poor access to healthcare, including reproductive services (19), partly because the lower-level health facilities they mainly rely on are few (54). This argument is supported by our finding that women who accessed FP services from health centres and dispensaries had higher odds of unmet need for FP than those with access to hospitals. In Kenya, only 73% of health facilities in urban areas offer FP services, with better availability of contraceptives in rural facilities than in urban ones (21). Also, fewer health centres offer FP services compared to higher-level facilities (21), and they are often understaffed, understocked, and less equipped to provide some FP services (55), contributing to missed opportunities for FP services (56). On the other hand, poor women are also likely to have low levels of education, even though higher education is positively associated with modern contraceptive use (40, 41). Educated women tend to have greater knowledge of reproductive health and FP (57) and increased autonomy and decision-making power over their health (58), and hence the ability to decide on FP use. The persistent socioeconomic disparity in contraceptive use and unmet need for FP may erode the gains made by FP programs in Kenya; there is therefore a need to promote equity in contraceptive use in urban Kenya by ensuring equal and affordable access to contraceptives for all urban women through strengthening FP provision at all healthcare levels.
Consistent with previous studies in urban areas (44, 45), parity was associated with higher odds of modern contraceptive use. Women with children are more likely to use FP, given the need to space or limit births, compared to nulliparous women. However, women with four or more children also had a high prevalence of unmet need for FP, a pattern observed in LMICs (47). Multiparous women have been noted to be less educated and poorer (59) and to have limited access to health services (60), characteristics associated with poor use of contraceptives (42) and hence with unmet need. They are also more likely to have experienced method failure, which may discourage them from using FP methods. We also found that women exposed to FP messages in the media had 30% higher odds of contraceptive use, similar to other studies (61). Exposure to FP messages is likely to dispel rumours, myths, and misconceptions related to contraceptives and their side effects (46, 47). This finding underscores the key role the media plays in health education and in social and behavioural change communication, resulting in increased utilisation of health services, including FP.
This study sought to explore factors associated with modern contraceptive use and the unmet need for FP using nationally representative data, making the results potentially generalizable to urban areas in Kenya. However, it should be noted that the study focused only on the prevalence of modern contraceptive use and not the use of any contraceptive method, which may underestimate the proportion of urban women using FP. The study also cannot infer causation due to the cross-sectional nature of the data, and some important variables, such as religion and user fees for FP, were not included due to a high number of missing observations. Moreover, the study defined urban areas based on the KNBS classification, as used in the dataset (25–31). While the dynamics of urban settings in Kenya are varied, the study used the official classification of urban areas, which also guides government planning, development, and policies, and allows comparison with previous studies (25–31).
Overall, slightly more than half of urban women in Kenya used contraceptives, while about 17% had an unmet need for FP. Over time, there was an overall increase in FP use and a simultaneous decrease in the unmet need for FP, both associated with sociodemographic, socioeconomic, and contextual factors. Urban areas are unlikely to reach the government's FP targets by 2030 unless interventions to strengthen the provision of FP services are put in place to address the high prevalence of unmet need for FP. These interventions should be targeted towards teenagers and young, unmarried, poor, and uneducated urban women to improve uptake of FP services. Future studies on contraceptive use and unmet need for FP should disaggregate data by area of residence to unmask possible urban-rural disparities, which need to be explored further in Kenya.
Data Availability Statement
Data used in this study are publicly available. This data can be found here: https://www.pma2020.org/request-access-to-datasets.
Ethics Statement
The studies involving human participants were reviewed and approved by Kenya Medical Research Institute. Written informed consent to participate in this study was provided by the participants’ legal guardian/next of kin.
Acknowledgments
We wish to acknowledge Performance Monitoring for Accountability (PMA) for granting access to the PMA survey datasets used in the study.
Abbreviations: KDHS, Kenya Demographic Health Survey; CI, Confidence Interval; LMICs, Low- and middle-income countries; aOR, Adjusted odds ratio; cOR, Crude odds ratio; PMA, Performance Monitoring for Accountability; SD, Standard deviation.
|
Ridge of Peter the Great.
Pamir Sights Tour.
“The Peter the Great Ridge - discovered and named in such a way in 1878 by the expedition of V. F. Oshanin, stretches for more than 200 miles along the left bank of the Surkhaba River on the southern border of Karategin (bekstvo in mountain Bukhara, see), between 40 ° and 42 ° East longitude from Pulkovo. In the western, relatively low snow-free part, the ridge of P. the Great is broken by the left tributary of the Surkh Khullas, or Ob-Hingou; on the meridian of Garma (see) snow appears on it, and somewhat east of the mountain rise no lower than 18,000 feet (Sary-kaudal); further east, peaks covered with eternal snow reach 20 thousand feet in places, at the eastern end of the ridge, where it seems to be connected with the Darvaz Range and where the huge Sandal massif covered with glaciers rises, the ridge rises to 24 - 25 thousand feet above sea level. Through the ridge of P. the Great there are 3 passes open for movement only in summer and leading from Karategin to Darvaz, namely to the Hullas valley. Of these, the westernmost Kamchirak is accessible for pack movement; Luli Harvey Pass to the east is covered with eternal snow and very difficult; almost at 41 ° east longitude there is a third, also very difficult Gardani-kaftar pass”
V.M. ESBE. Russia, St. Petersburg, 1890 – 1907.
The Peter I Ridge runs latitudinally for more than 200 kilometers, branching off the Academy of Sciences Range near Communism Peak, where the famous Pamir firn plateau is located.
The Peter the Great Ridge is located in the Western Pamirs between the Surkhob and Obihingou rivers. The average height of the ridge rises from about 4,300 meters above sea level in the west to 6,000 meters in the east.
The ridge is characterized by sawtooth crests, deep gorges, and high seismicity. Maple forests grow on the slopes, giving way to juniper woodlands and bushes and, higher up, alpine meadows. There are 487 glaciers on the ridge, with a total area of about 480 square kilometers.
The ridge is composed mainly of sandstones and conglomerates. A characteristic feature of its relief is the remains of ancient flat surfaces raised to great heights, along whose edges lie blockages of trees torn out by the roots.
The highest point of the ridge is Moscow Peak, 6,785 meters above sea level. The ridge lies mostly in the Sangvor district and partially (its eastern spurs) in the Murgab district of the Gorno-Badakhshan Autonomous Region of Tajikistan.
Owing to its great heights, and because the Academy of Sciences Range closes off the Western Pamir valleys from the east, the ridge carries the largest area of modern glaciation in the region. The present name has been fixed since 1932; before that the ridge bore the name Peter the Great Ridge, assigned by V.F. Oshanin upon its discovery in 1878.
The following peaks belong to the ridge and its spurs in the vicinity of the Fortambek glacier system: in the axial part (from east to west), Communism, Kuibyshev, Kroshka, Leningrad, Abalakova, Borodino, and Moscow.
In the northern spur extending from Moscow Peak are the peaks of the 30th Anniversary of the Soviet State, Oshanin, Rodionov, Krupskaya, Shapak, Shataeva, Suloeva, and a number of other, lower peaks. In addition, Kirov Peak, located in the northern part of the Pamir firn plateau, and peaks 5203 and 4962 in the northern spur extending from Kirov Peak, belong to the Peter the Great Ridge.
An attempt to designate the western shoulder of Communism Peak, 6,950 m high, as an independent summit (Dushanbe Peak, 1974) was unsuccessful. The same fate befell the upper end of the Petrel rib, named Parachutists Peak in honor of the parachute landing made on the Pamir firn plateau in 1967.
There are two passes over the ridge: the Shini-Bini pass, between the Rodionov and Krupskaya peaks, connecting the glaciers of the Sugran and Fortambek systems; and the Kurai-Shapak pass, north of Shapak Peak, connecting the Khodyrsha and Shapak glaciers.
In the spur of the ridge north-east of Krupskaya Peak, between the Shataeva and Suloeva peaks, lies the Suloev pass.
"Fortambek and its peaks." G. Kalinin. Uzbekistan, Tashkent. 1983. "Glaciers." L.D. Dolgushin, G.B. Osipova. Series "Nature of the world." Moscow, the publishing house "Thought". 1989.
Photos by Alexander Petrov.
|
To do anything effectively in life, you need to set goals for yourself based on your present capabilities. You need to keep certain targets in mind and consider the steps necessary to achieve them. Goal setting motivates people to achieve what they want, but you need to set SMART goals to give yourself the best chance of success. SMART goal setting gives meaning to your wants and provides a simple, clear framework for defining and managing them.
Smart Goals Templates
Setting SMART goals gives people clear direction for accomplishing the objectives they have set. By reducing the risk of unclear goals, SMART goal setting is helpful for everyone. Moreover, it is easy to use and can be applied anywhere, by anyone, without any training. Writing SMART goals is easy, but sticking to them is harder; once you do, you can make rapid progress and achieve success.
Let’s understand what SMART goals are, the reasons for SMART goal setting, how to write a SMART goal, and what its template includes.
What are SMART Goals?
SMART goal setting is the secret to alleviating the common challenges faced during a project or anything else you want to accomplish. These goals provide guidance and structure throughout the project by identifying exactly what you want to accomplish.
In 1981, George T. Doran introduced this idea in a research paper. He had worked with the Washington Water Power Company as director of corporate planning and later as a consultant. His paper, titled “There’s a S.M.A.R.T. Way to Write Management’s Goals and Objectives,” introduced the SMART framework as a tool to improve the chances of success. Below is an explanation of the acronym.
S – Specific
SMART goal setting includes an “S”, which means specific. You need to be specific about your approach; in practice, this means being able to answer all the “w” questions. First, know what exactly you want to accomplish, whether your answer is detailed or not. Next comes why: consider the reason for the goal, and who will be involved in achieving it; it could be more than one person.
Then you should consider when: setting a time frame under the time-bound element will encourage you to act promptly. Moving on to where, identify the location where the relevant work will be performed. Finally, which determines the requirements for the goal, including whether or not it is realistic.
M – Measurable
The “M” in SMART goals stands for measurable. As the name suggests, this element helps you measure progress and performance. You need to choose metrics to track whether you are moving toward the goal. Identifying a measure makes a goal more tangible by providing a way to gauge progress. Depending on the length of the project, you should also set milestones to make completing it easier.
A – Attainable or Achievable
The “A” stands for achievable or attainable. This element highlights that the goal is important to you and asks what you will do to attain it. Even if it requires new skills or a different attitude, clearly spelling out how you can accomplish the goal, and the skills you need for it, makes it far more achievable.
R – Realistic or Relevant
The “R” in SMART goals means realistic or relevant. Your goals need to be realistic, because goals that cannot be achieved, or that are not relevant at all, make no sense. If a goal is not realistic, you or your team simply will not be able to achieve it.
T – Time Bound
SMART goal setting also considers timing, since most goals are time bound. If a goal lacks realistic timing, the chances of success drop sharply. Setting a deadline is imperative for accomplishing the goal; time constraints create a sense of urgency and the determination to complete the work.
When setting SMART goals, be prepared to ask yourself questions. The answers will fine-tune your strategy and help ensure that your goals are attainable. In addition, you should write SMART goals with a positive attitude.
Smart Goals Examples
What are the Reasons for SMART Goals Setting?
SMART goal setting actually drives you to achieve what you want. Ordinary goal setting may help, but a SMART goal achieves much more. Here are some reasons for SMART goal setting.
They Let You Focus
SMART goal setting lets you focus better than any other form of goal setting. You can concentrate on one thing at a time instead of looking at many things at once. There are so many things demanding your attention every day that setting goals becomes necessary to keep your focus.
Gives You a Clear Direction
When you do not set goals, you are driving on a road without knowing your destination. When you set goals, by contrast, you have a straight path and a clear direction for where you need to go. Knowing what you really want leads you toward success. However, make sure your goal is not ambiguous or vague, or it will confuse you. Explain your goal using the right words to excite and motivate you to work for it.
Identifies Priorities
SMART goals help you identify your priorities and keep reminding you of them. You are far less likely to stray when you set a goal, because goal setting regularly brings your priorities back into view. You can reach your goals by working through your priorities, but first you need to identify them. Writing them down and posting them where you can see them ensures you are reminded of them every day.
Time Management
You can manage your time successfully if you set SMART goals. The “T” in SMART trains you to manage time, especially on major projects. When you set goals, you also set priorities, and by completing the important ones in a timely manner before moving on to the rest, you train yourself to manage time.
Gives You a Feeling of Fulfillment
Nothing is better or more satisfying than finishing something on time. Recall your school days: were you ever the first to finish your class work while the other students were still busy? How did you feel? Fulfilled and happy for being the fastest in the class, didn’t you? Similarly, when you set goals and achieve them on time, you feel happy, and it fuels you to do even better in the future, setting more goals and achieving them all successfully.
Smart Goals Worksheets
SMART + Goals
The main purpose of setting goals is to be realistic and positive about achieving them. Using the SMART criteria ensures that goals are attainable and reasonable. The framework pushes you to address key questions about your goals, stimulating motivation and laying out the steps needed to achieve them. So, always use SMART goal setting to help you reach your goals.
People often think that the SMART goals criteria are only for professionals. They are mistaken: the framework is for everyone. From business owners to employees, and from teachers to students, SMART goal setting can be used by all. Teachers can use it to shape teaching practices, while business owners can use it to formulate business strategies. Everyone can apply it to their own tasks and work.
How to Write a SMART Goal?
Now that we have discussed the SMART criteria at length, we will help you understand how to write a SMART goal effectively. Check out the steps below.
Step # 1: Provide the Summary of the Goal
The very first step to writing a SMART goal is to provide a summary to your goal. Often, goals are quite unclear for people, which makes it harder or difficult for you to put in words; however, you must always force yourself to write them down clearly. Clarity plays an immense role in helping drive you towards your goal. Keeping the SMART criterion in mind, you need to formulate your goal. Specifying them in percentages or numerical terms can give you a clearer idea.
Step # 2: Classify Your Goal
People often do not understand how to classify goals. Classifying simply means categorizing them: deciding which category of action will best help you achieve your goal. For example, a marketer might need more visitors on their site to generate leads and sales. Categorizing in this way also helps clarify the goal.
Step # 3: Set a Numeric Goal
Now that you have narrowed down your goal in the previous steps, do not wait to attach a numeric value to it and get to work. This can seem the hardest part to achieve, but it motivates you. For instance, as a marketer, you can say that you are aiming for 50% more sales in the next month. Setting a numeric goal adds value and clarity to the goal.
Step # 4: Select a Completion Date or Time
A goal needs to have a deadline, or else it is just a dream. Selecting a length of time will help you reach your goal, and the date will help you figure out how committed you really need to be to the project.
Step # 5: Beware of the Challenges
Nothing in business, or anything else, goes entirely smoothly; there will always be something standing in your way. As a marketer, you should always identify the potential threats and challenges that can arise.
Smart Goals Samples
What Does a SMART Goals Template or SMART Goals Worksheet Include?
A SMART goals template and a SMART goals worksheet are one and the same thing. Here is what they include in their content (a minimal code sketch of these fields follows the list):
• Initial Goal: State the goal you are starting from.
• Specific: State what you want to accomplish, why you want to do it, who will be on the team, and when you want to start.
• Measurable: Determine how you will measure progress.
• Achievable: Identify whether you have the skills to achieve the goal and what motivates you to achieve it.
• Relevant: State why you are setting the goal and whether it aligns with your overall objectives.
• Time bound: Identify the deadline and whether it is realistic.
• Actions for Obstacles: List the potential challenges that stand in your way and the actions you would take to overcome them.
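As a rough illustration, the fields above can be captured in a small data structure. This is a hypothetical sketch, not an official template: the field names simply mirror the worksheet items, and the example values borrow the marketer scenario used earlier in this article.

```python
from dataclasses import dataclass, field

@dataclass
class SmartGoal:
    initial_goal: str            # the goal you start from
    specific: str                # what, why, who, when
    measurable: str              # how progress will be measured
    achievable: str              # skills available and motivation
    relevant: str                # alignment with overall objectives
    time_bound: str              # the deadline, and whether it is realistic
    obstacle_actions: list = field(default_factory=list)

goal = SmartGoal(
    initial_goal="Grow website traffic",
    specific="Increase monthly visitors to generate more leads and sales",
    measurable="50% more sales within one month",
    achievable="Existing SEO skills; a team of two marketers",
    relevant="Supports the overall revenue objective",
    time_bound="End of next month",
    obstacle_actions=["Competitor campaigns: review ad spend weekly"],
)
print(f"{goal.initial_goal} -> due {goal.time_bound}")
```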
Setting a goal is essential, and it is best done following the SMART criteria. Make sure you are concrete and certain about it. Everyone must understand the value of setting goals in order to be successful. To help you set the right goals for your task and cut out the hard work, we have put a free SMART goals template and worksheet on our website. Download it now and get to setting your goals!
Tagged: Business, SMART goals
TemplateLab April 29th, 2021
|
The Tail-End Of The Rumored “Brightest Meteor Shower” Is Almost Over—Watch The Sky!
Are you a fan of watching the night sky? If so, there’s one month you don’t want to miss: August, when astronomers the world over are saying the Perseid meteor shower may be the brightest shower in recorded human history.
The Perseid meteor shower is an annual event, generally occurring between July 17 and August 24 and peaking between August 9 and 13.
This year, though, scientists are saying it could be the best show ever, and the next time it could even come close won’t be for another 96 years.
Greenwich’s Astronomy Photographer of the Year 2013 David Kingham combined 23 individual stills over several hours to depict a Perseid meteor shower. DAVID KINGHAM (THE NATIONAL MARITIME MUSEUM ROYAL OBSERVATORY) / PNG
Consider the following tips for when to best view the Perseids (or any other meteor shower):
1.) Schedule your viewing for when the sky is darkest. This will require checking the Moon’s phase, as well as when it rises and sets (a rough way to approximate the phase is sketched after this list), but as a general rule, most astronomers suggest just before dawn for the best and deepest darkness for meteor viewing.
2.) In the Northern Hemisphere, look between the zenith (the point directly above you in the sky) and the radiant (which will be in the northeast corner of the sky). The Perseids cannot be seen from the Southern Hemisphere, unfortunately.
3.) Lastly, the table below can help you locate the Perseids without simply looking up in the night sky and hoping to catch the movement of a shooting star.
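As flagged in tip 1, here is a rough way to estimate the Moon’s phase without an ephemeris. This is a back-of-the-envelope sketch under stated assumptions: it uses the mean synodic month of about 29.53 days and counts from the widely used reference new moon of January 6, 2000; expect errors of up to a day or so, and use a proper astronomy app or almanac for serious planning.

```python
from datetime import datetime, timezone

SYNODIC_MONTH = 29.530588853  # mean length of a lunar cycle, in days
# A commonly used reference new moon (2000-01-06 18:14 UTC).
REFERENCE_NEW_MOON = datetime(2000, 1, 6, 18, 14, tzinfo=timezone.utc)

def moon_age_days(when: datetime) -> float:
    """Approximate days elapsed since the most recent new moon."""
    days = (when - REFERENCE_NEW_MOON).total_seconds() / 86_400
    return days % SYNODIC_MONTH

age = moon_age_days(datetime.now(timezone.utc))
# Within ~5 days of new moon (age near 0 or 29.5), skies are darkest.
hint = "darker skies" if age < 5 or age > SYNODIC_MONTH - 5 else "more moonlight"
print(f"Moon is ~{age:.1f} days into its cycle -> expect {hint}")
```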
The Mind Unleashed
|
Sir Francis Drake claims California for England
During his circumnavigation of the world, English seaman Francis Drake anchors in a harbor just north of present-day San Francisco, California, and claims the territory for Queen Elizabeth I. Calling the land “Nova Albion,” Drake remained on the California coast for a month to make repairs to his ship, the Golden Hind, and prepare for his westward crossing of the Pacific Ocean.
Drake then continued up the western coast of North America, searching for a possible northwest passage back to the Atlantic. Reaching as far north as present-day Washington before turning back, Drake paused near San Francisco Bay in June 1579 to repair his ship and prepare for a journey across the Pacific. In July, the expedition set off across the Pacific, visiting several islands before rounding Africa’s Cape of Good Hope and returning to the Atlantic Ocean. On September 26, 1580, the Golden Hind returned to Plymouth, England, bearing its rich captured treasure and valuable information about the world’s great oceans. In 1581, Queen Elizabeth I knighted Drake during a visit to his ship.
Sir Francis Drake claims California for England
For years, California schoolchildren were taught that a brass marker discovered in 1936 was Sir Francis Drake's "Plate of Brasse," recording the California coastal landing in 1579 of the English explorer and his ship, the Golden Hinde.
A "Plate of Brasse" announcing England's claim to California, supposedly engraved by Sir Francis Drake when he dropped by in 1579, became the state's greatest historical treasure when it was found and authenticated in the late 1930s. After testing by Berkeley Lab scientists 40 years later, it became California's greatest hoax. (photo: Bancroft Library)
That was the case until 1977, when Berkeley Lab's Helen Michel and Frank Asaro used neutron activation analysis on the brass plate and found that it was most probably manufactured between the last half of the nineteenth century and the early part of the twentieth. They found that what had been one of California history's greatest archaeological finds was not authentic.
At the time, Michel and Asaro were in Berkeley Lab's Nuclear Science Division; Asaro is now in the Atmospheric Sciences Department of the Environmental Energy Technologies Division. Although he and Michel confirmed that the brass in the "Plate of Brasse" was modern, no one knew who had actually made the plate.
Now the final chapter in the plate's history seems to have been written. At a press conference held February 18, 2003, at the University of California at Berkeley's Bancroft Library, historian-researchers claimed that the plate was devised as a practical joke by several friends of Herbert E. Bolton, who was director of the Bancroft Library from 1920 to 1940.
Fascinated by stories about Drake having posted the plate to mark his California landing, Bolton often told his students to look for it in Marin County. The researchers' evidence suggests that the fake plate was meant to be a practical joke among friends, but the hoaxers lost control of their prank when Bolton authenticated the find publicly before they could tell him the truth.
Although it was historical evidence that completed the story, it was science performed over 25 years ago by Michel and Asaro that confirmed what a few historians had suspected all along about the plate's dubious origins. Their neutron activation analysis showed that its chemical impurity levels were too low for sixteenth century English manufacturing techniques. They estimated that the artifact had been made no earlier than the eighteenth century and probably much later -- even as late as 1936, shortly before the forgery was perpetrated.
Beginning in 1967, nuclear chemists Helen Michel and Frank Asaro employed neutron activation analysis for discoveries including the tracing of ancient trade routes, the revelation that it was an asteroid that brought the Cretaceous Period to an end, and proof that Drake's "Plate of Brasse" wasn't made by Drake.
In the mid-1970s the then-director of the Bancroft Library, James Hart, commissioned a new study of the plate in anticipation of the quadricentennial of Drake's landing. As part of this study, he asked the Research Laboratory for Archaeology and the History of Art at Oxford University in England to chemically analyze small fragments of the plate. He asked Glenn T. Seaborg, then Berkeley Lab's associate director at large, if someone on his staff would drill small samples from the plate to send to England; Seaborg asked Asaro if he wanted to do this.
"I discussed this with my colleague Helen Michel, and we agreed to drill the plate," Asaro says. "But we also said we'd like to make some measurements too. This was acceptable to Professor Hart."
Although they started with the expectation that the plate was authentic, Michel and Asaro soon began seeing things that made them suspicious. When they drilled into the plate, they expected to see corroded material; instead they saw fine strips of metal. The thickness of the plate was too uniform for something that had been hammered out. Most important, neutron activation analysis revealed not only higher levels of zinc than expected for an alloy made in Drake's time (zinc hadn't yet been identified in the sixteenth century) but much lower levels of other metals such as nickel, cobalt, silver, gold, lead, and iron. This suggested that the brass was a mix of high-purity copper and zinc, which would not have been available at the time.
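The comparison logic described above can be sketched in a few lines: measure each trace element and flag any that fall below what period brass should contain. The thresholds and sample values here are invented placeholders purely for illustration; they are not the actual measurements or published reference ranges from the Michel and Asaro study.

```python
# Hypothetical lower bounds (ppm) for trace elements in 16th-century brass.
# These numbers are placeholders for illustration, not real reference data.
EXPECTED_MIN_PPM = {
    "nickel": 200, "cobalt": 20, "silver": 100,
    "gold": 1, "lead": 5_000, "iron": 1_000,
}

def flag_anomalies(sample_ppm: dict) -> list:
    """Return the elements whose concentration looks suspiciously low."""
    return [element for element, floor in EXPECTED_MIN_PPM.items()
            if sample_ppm.get(element, 0.0) < floor]

# A hypothetical sample resembling modern high-purity brass.
sample = {"nickel": 10, "cobalt": 1, "silver": 5,
          "gold": 0.1, "lead": 300, "iron": 150}

too_pure = flag_anomalies(sample)
if too_pure:
    print("Implausibly low for period brass:", ", ".join(too_pure))
```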
These and other clues led Michel and Asaro to conclude that the plate was recent. They wrote up their results and sent them to Hart. "He had wanted a four-page letter," says Asaro. "We sent him a 45-page paper." Hart published 16 pages of the report, and Michel and Asaro later published the complete report in the journal Archaeometry.
Thus the plate was proved a forgery. Only the mystery of who had perpetrated the hoax remained -- until UC Berkeley's press conference in February 2003.
Francis Drake claims California for England
Drake's journey to California began in December 1577 when, on the orders of Elizabeth I of England, he was sent to plunder Spanish ships along the Pacific coast of the Americas.
By Reginald Frontispiece Ltd
On this day, June 17, 1579, Francis Drake claimed California for England.
He named it ‘Nova Albion’ or ‘New Britain’.
His claim of sovereignty was made while moored at what is now San Francisco, as he carried out repairs on his ship.
Francis Drake (c. 1540 to 1596) was the eldest of 12 sons, born to Mary Elizabeth Drake (nee Mylwaye) and her husband Edmund Drake, in Tavistock, Devon.
The family moved to Kent when he was a child where they lived on an old ship.
It is perhaps here that he gained his love of the sea for, before he was thirteen, he had been apprenticed aboard a barque trading across the English Channel.
At twenty, he was master of the barque, the ship being willed to him by his apprenticeship master.
The voyage he began in 1577 would lead to Drake becoming only the second captain to circumnavigate the world.
On his return to Plymouth, England, in 1580, he presented to Queen Elizabeth a large cargo of spices and captured Spanish treasures.
The Queen's half-share of the cargo surpassed the rest of her income for that entire year.
Elizabeth is said to have dined aboard the Golden Hind at Deptford in 1581, where Drake was knighted as Sir Francis Drake.
Drake Passage, between South America and Antarctica, and Drake’s Bay, California, are named in recognition of his voyage.
Today, there is a Sir Francis Drake Hotel in Union Square, San Francisco, commemorating Drake's achievement.
Our antique map, published in the 1903 edition of the Encyclopaedia Britannica, is from the collection of Frontispiece Ltd of Canary Wharf, Tower Hamlets.
Did Francis Drake Really Land in California?
Few sea voyages are as famous as that of the Golden Hind, privateer Francis Drake’s around-the-world voyage that ended with his arrival into England’s Plymouth harbor in 1580. Along with being a remarkable feat of seamanship, the world’s second circumnavigation, among other achievements, was the first to map large portions of North America’s western coast. Filling the Hind’s hold as it berthed in Plymouth were a half-ton of gold, more than two-dozen tons of silver, and thousands of coins and pieces of jewelry looted from Spanish ports and ships along the western shore of South and Central America. Drake’s lucrative journey helped spark England’s ambitions for global empire.
After their Spanish raids, as described in written reports by Drake and other crew members, the Golden Hind landed along the west coast of North America for several weeks to caulk his leaky ship and claim the land for Elizabeth I, the first formal claim by an Englishman to a piece of the Americas. To commemorate that act, Drake posted a “a Plate of Brasse” as a “monument of our being there,” according to an account by one of the crew.
But just where Drake, about 80 crewmen, and one pregnant African woman named Maria stepped ashore has been a matter of acrimonious dispute for nearly a century-and-a-half. Most of the expedition’s details were immediately classified by the queen, who worried that the news of Drake’s claim would instigate open war with Spain. What was published in subsequent decades was often incomplete and ambiguous. As a result, professional and amateur scholars poring over contemporary maps, letters and other documents have proposed candidate harbors from Mexico to Alaska.
In 1875, an English-born geographer named George Davidson, tasked with conducting a federal survey of the U.S. West Coast, pinpointed a bay about 30 miles northwest of San Francisco, a site that seemed to match the geography and latitude described by Drake and his crew. He had the bay renamed in honor of the privateer. Influential Californians quickly embraced the treasure-hungry captain as the natural native son of a state that prided itself on the Gold Rush. Drake also gave the state an English “founder” who arrived long before the settlement of Jamestown and Plymouth, an alternate origin story that could replace those of Spanish missionaries and indigenous populations.
Californians in the early 20th century celebrated the man knighted for his piratical exploits with memorials, parades and pageants. His name was bestowed upon a boulevard in Marin County and San Francisco’s premier hotel at Union Square. In 1916, the California legislature passed a resolution commemorating the man who “landed on our shores and raised the English flag at Drakes Bay.”
In 1937, a leading historian at the University of California, Berkeley, Herbert Bolton, announced the discovery of Drake’s “Plate of Brasse” at a site not far from Drakes Bay. The sensational find, etched with words claiming Nova Albion—New England—for Elizabeth, included Drake’s name. Dated June 17, 1579, the plate reads in part, “BY THE GRACE OF GOD AND IN THE NAME OF HERR MAIESTY QVEEN ELIZABETH OF ENGLAND AND HERR SVCCESSORS FOREVER, I TAKE POSSESSION OF THIS KINGDOME ….”
The discovery made headlines across the country, and turned Bolton into a national figure. The Berkeley professor, however, authenticated the rectangular plate and heralded it as physical proof of Drake’s landing north of San Francisco before conducting detailed historical and metallurgical tests. Though some historians expressed doubts about the plate’s legitimacy at the time, the university raised $3,500 to buy it, and the piece of tarnished metal became a cherished artifact still displayed at Berkeley’s Bancroft Library. For California’s elites, “the plate was not just a metal document or a valuable antique. It was the holy grail—a venerable Anglo-American, Protestant, religious relic,” writes Bolton’s biographer, Albert Hurtado.
Four decades later, however, researchers from Lawrence Berkeley National Lab subjected the plate to rigorous testing and concluded that California’s most famous artifact was made using modern material and techniques. It was, without question, a forgery, as many historians had long suspected. But other evidence, including the 1940s discovery of a cache of 16th-century Chinese pottery—thought by some archaeologists to have been purloined by the Hind—still pointed to Drake’s presence in northern California.
In a new scholarly book, Thunder Go North, to be published next week, Melissa Darby, an archaeologist from Portland State University, argues that Drake likely never made it to California at all—and that he wasn’t simply a privateer. Instead, she points to official English documents that show he was on a secret government mission of exploration and trade. She also cites Drake’s own writings that say that after raiding the Spanish to the south, he went far out to sea before heading back to the coast. Darby analyzes wind currents in that time of year—late spring—and contends that this would have put the Hind far to the north, likely in present-day Oregon.
Thunder Go North: The Hunt for Sir Francis Drake's Fair and Good Bay
Thunder Go North unravels the mysteries surrounding Drake’s famous voyage and summer sojourn in this bay.
She also highlights an overlooked contemporary document in the British Library that says Drake was seeking the Northwest Passage as a way to return to England—that would naturally have led to a more northerly course—and mentions a latitude consistent with central Oregon. As for the Chinese porcelain, she notes that a 2011 study concluded it all came from a 1595 Spanish shipwreck. In addition, Darby contends that anthropological evidence, such as plank houses and certain indigenous vocabulary, points to Drake meeting Native Americans living in the Northwest rather than on the California coast.
“Because the vexed question [of where Drake landed] has largely been in the domain of rancorous proponents of one bay or the other, the question has become a quagmire that professional historians and archaeologists have largely avoided,” writes Darby of her book. “This study is a necessary reckoning.”
Her most explosive assertion, however, implicates Bolton, one of California’s most distinguished historians and a man heralded as a pioneer in the study of colonial Spanish America, in the hoax of Drake’s brass plate, one of the country’s most infamous cases of forgery.
“He was a flim-flam man,” Darby tells Smithsonian magazine. “It is almost certain that Bolton himself initiated the ‘Plate of Brasse’ hoax.”
Drake's Landing in New Albion, 1579, engraving published by Theodor De Bry, 1590 (Wikicommons)
Though the laboratory analysis revealed the plate as fake in 1977, who was behind the deception and their motive remained a mystery until 2003, when a team of archeologists and amateur historians published a paper in the journal California History concluding that the plate was a private prank gone awry. They told reporters that the episode “was an elaborate joke that got terribly out of hand.”
A highly respected academic, Bolton also served as Grand Royal Historian of the Clampers, a men’s satirical club that sought to keep the ribald pioneer life of California alive and was “dedicated to protecting lonely widows and orphans but especially the widows.” The team failed to find a smoking gun but drew on published material and personal recollections. They concluded that the object was fabricated by a group of prominent San Franciscans, including one Clamper, and was “found” north of San Francisco as a prank to amuse Bolton, who had previously asked the public to keep an eye out for what Drake had left behind. By the time the news went viral, the prank had spun out of control and the hoaxers remained silent. Bolton, according to the researchers, was the butt of the joke.
But in her book, Darby contends that Bolton was far more likely to be a perpetrator than a victim of the hoax. She tracks how Bolton and other prominent California men sought for decades to ignore and discredit scholars who opposed the story of Drake as a rogue pirate landing on the shores of Drakes Bay. For example, he blocked Zelia Nuttall, a respected anthropologist, from publishing a paper suggesting Drake landed north of California. Darby also describes a pattern of deception going back to his early years as an academic.
“A thief does not begin his career with a bank heist,” she writes. “The plate was not Bolton’s first attempt at pulling the wool over the eyes of the public.”
Darby details how Bolton was often associated with a host of scams and schemes relating to Spanish or pirate treasure. In 1920, he publicly authenticated a 16th-century Spanish map pointing to a rich cache of silver and gold in New Mexico that set off a media frenzy. It proved a fake, but gave Bolton his first taste of national renown.
The next year Bolton claimed to have translated an old document that gave clues to an ancient trove of nearly 9,000 gold bars hidden near Monterrey, Mexico. When he declined a spot in the expedition organized to find it, and a share in the profits, he again made headlines for turning down the offer because of his pressing academic duties (one headline read “Million Spurned by U.C. Teacher”; another said “Bolton Loses Share in Buried Treasure”). No treasure ever surfaced.
In other instances of old documents and lost treasure, he brushed off accusations of fudging the truth.
“This was Bolton’s method,” writes Darby. “Create a good story for the gullible public, and if it was exposed, call it a joke.” In participating in the Drake plate hoax, she adds, he could reap not just media attention but draw new students to his program, which suffered during the depths of the Depression.
She suspects another motive as well. “The plate enabled Bolton to trump up the find and turn his sights to the largely white and Protestant California elites, who embraced Drake,” says Darby, because it “served to promote an English hero and stressed a white national identity of America.” Leading Californians of the day included members of men’s clubs like the Native Sons of the Golden West, which fought for legislation to halt most Asian immigration and to restrict land rights to many of those already in the state. “Bolton orated in front of the Native Sons, and they provided scholarships for his students,” Darby adds.
Bolton’s biographer, Hurtado, an emeritus historian with the University of Oklahoma, acknowledges that Bolton was “careless” in giving his stamp of approval to the plate without conducting adequate analysis. “There’s no question he was a publicity hound,” he adds. But he is skeptical that Bolton would actively risk scandal in the sunset of his career, when he was nearly 70 and highly esteemed. “He had no need to create a fraud to gain an international reputation. This risked his reputation.”
Members of the Drake Navigators Guild, a nonprofit group championing the Drakes Bay theory, soundly reject Darby’s assertion about Bolton. “The idea of a conspiracy doesn’t work,” says Michael Von der Porten, a financial planner and second-generation member of the guild whose father was part of the 2003 team that studied the hoax. He also dismisses her conclusions about a landing north of Drakes Bay. “This is yet another fringe theory, a total farce.”
Michael Moratto, an archaeologist who has been digging around Drakes Bay for decades, agrees. “I’ve spent 50 years listening to all sides of the debate, and for me it is settled.” Darby favors an Oregon landing site for parochial reasons, he adds, and “is twisting all of this to suit her own purposes.” He still maintains that some of the Chinese porcelain found at the bay came from Drake’s cargo.
Others find Darby’s arguments persuasive. “[Darby] did a superb job of mustering evidence and deciphering it,” says R. Lee Lyman, an anthropologist at the University of Missouri in Columbia. “And it is highly likely Bolton was perpetuating a subterfuge.” Nevertheless, he says that it will be an uphill struggle to alter the prevailing narrative, given the deep emotional resonance that Drake continues to have for many in the Golden State.
Darby says she expects pushback, particularly from the guild, which she characterizes as “an advocacy organization not an academic organization.” She adds that her conclusions about Bolton “will be a deep shock, and their denial is understandable.” But Darby is also confident that they will be swayed by careful study of her evidence. Lyman is not so sure. “The historical inertia placing Drake in California is so great,” says Lyman. “You get wedded to an idea, and it is hard to question it.”
The Plate of Brass
Once the repairs to the Golden Hind were completed, Drake claimed the land for England by setting up a stout post to which he nailed a metal plate engraved with his declaration, together with a sixpence, and he named the land Nova Albion. In 1936, a brass plate was found at Limantour Beach in Marin County. Historians believed that this plate, which carried text similar to that recorded in Chaplain Fletcher's diary, was final proof that Drake landed in Marin.
However, the plate did not withstand the tests of time or science. Modern testing confirmed that the plate could not have been manufactured with the technology available to sailors in the late 1500s and must have been created much later. It wasn't until 2002 that the origin of the fake plate was finally uncovered. Today, the plate can be found in the Bancroft Library at the University of California, Berkeley as an object lesson in how hoaxes can be accepted as fact.
Sir Francis Drake claims California for England
California has been inhabited for thousands of years. When Europeans first arrived, there were a number of Native American tribes in the area, including the Chumash, Mohave, Yuma, Pomo, and Maidu. These tribes spoke a number of different languages and were often separated by geography such as mountain ranges and deserts. As a result, they had different cultures and languages from the Native Americans to the east. They were mostly peaceful people who hunted, fished, and gathered nuts and fruit for food.
Golden Gate Bridge by John Sullivan
A Spanish ship captained by Portuguese explorer Juan Rodriguez Cabrillo was the first to visit California in 1542. Several years later, in 1579, English Explorer Sir Francis Drake landed on the coast near San Francisco and claimed the land for England. However, the land was far away from Europe and European settlement didn't really begin for another 200 years.
In 1769, the Spanish began to build missions in California. They built 21 missions along the coast in an effort to convert the Native Americans to Catholicism. They also built forts called presidios and small towns called pueblos. One of the presidios in the south became the city of San Diego, while a pueblo founded to the north would later grow into the city of Los Angeles.
When Mexico gained its independence from Spain in 1821, California became a province of the country of Mexico. Under Mexican rule, large cattle ranches and farms called ranchos were settled in the region. Also, people began to move into the area to trap and trade in beaver furs.
Yosemite Valley by John Sullivan
By the 1840s, many settlers were moving to California from the east. They arrived using the Oregon Trail and the California Trail. Soon these settlers began to rebel against Mexican rule. In 1846, settlers led by John Fremont revolted against the Mexican government and declared their own independent country called the Bear Flag Republic.
The Bear Republic didn't last long. That same year, in 1846, the United States and Mexico went to war in the Mexican-American War. When the war ended in 1848, California became a territory of the United States. Two years later, on September 9, 1850, California was admitted into the Union as the 31st state.
In 1848, gold was discovered at Sutter's Mill in California. This started one of the largest gold rushes in history. Tens of thousands of treasure hunters moved to California to strike it rich. Between 1848 and 1855, over 300,000 people moved to California. The state would never be the same.
Even after the gold rush ended, people continued to migrate west to California. In 1869, the First Transcontinental Railroad made traveling west much easier. California became a major farming state with plenty of land in the Central Valley for growing all sorts of crops including apricots, almonds, tomatoes, and grapes.
In the early 1900s, many major motion picture companies set up shop in Hollywood, a small town just outside of Los Angeles. Hollywood was a great location for filming because it was close to several settings including the beach, the mountains, and the desert. Also, the weather was generally good, allowing for outdoor filming year round. Soon Hollywood became the center of the filmmaking industry in the United States.
Los Angeles by John Sullivan
Sir Francis Drake claims California for England
Detail - 1579
June 17, 1579 - Francis Drake claims the lands of California for Great Britain and Queen Elizabeth I, landing in Drake's Bay and naming it New Albion. Drake is on his voyage around the world in the ship, the Golden Hind.
Francis Drake was not Magellan, the first to circumnavigate the globe, but he was the second, and during his voyage from England and back, his discoveries in the Americas and the Orient are sometimes overlooked. That includes his short foray along the coast of what is today California and his friendly meeting with the Indians of northern California, in an area he called Drake's Bay or Nova (New) Albion. Yes, he claimed the area for Great Britain, but he did not try to subjugate the natives. There was no slavery or altercation during the six weeks near what would become San Francisco. The expedition gave gifts to the Indians; they gave some back. Unfortunately, such friendly relations with natives did not occur on many of Drake's other adventures.
Drake was thirty-seven years old when he left Plymouth, England, on November 15, 1577, on his circumnavigation voyage. He had been to the Americas before, had battled the Spanish on the Isthmus of Panama, and was considered a hero in England. The Spanish did not think the same; they thought pirate. His official charge from the Queen was an expedition against the Spanish on the Pacific coast of the Americas. After a month of trouble at sea around the British Isles, Drake's expedition plied the Atlantic Ocean with six ships, including his flagship Pelican (later renamed the Golden Hind), and one hundred and sixty-four men. He reached the coast of Argentina at San Julian and stayed the winter; by winter's end he was down to three ships upon entering the Straits of Magellan.
It was September of 1578 when he reached the Pacific. Now down to one ship (one destroyed in a storm, the other returned to England), Drake and the crew of the Pelican, now the Golden Hind, attacked Spanish ports, towns, and ships. One captured ship contained 25,000 pesos of Peruvian gold and £7 million of Spanish money. He pursued other treasure ships thought to be returning from Manila to Acapulco but did not find them, instead landing in northern California on June 17, 1579.
The New Albion (Nova Albion) Landing
Reaching the 38th parallel, Drake put in at what would be termed Drake's Bay, claiming the land for Great Britain and calling it Nova Albion (New Britain). He met the Coast Miwok Indians, established friendly relations, and reportedly stayed six weeks, leaving on July 23. Just where was New Albion? In 2012, the United States Department of the Interior recognized the landing location in Drakes Bay with the establishment of a National Historic Landmark at Point Reyes, part of Point Reyes National Seashore.
Excerpt from original account by Drake's Chaplain Francis Fletcher.
"The next day, after our comming to anchor in the aforesaid harbour, the people of the countrey shewed themselues, sending off a man with great expedition to vs in a canow. Who being yet but a little from the shoare, and a great way from our ship, spake to vs continually as he came rowing on. And at last at a reasonable distance staying himselfe, he began more solemnely a long and tedious oration, after his manner : vsing in the deliuerie thereof many gestures and signes, mouing his hands, turning his head and body many wayes and after his oration ended, with great shew of reuerence and submission returned backe to shoare againe. He shortly came againe the second time in like manner, and so the third time, when he brought with him (as a present from the rest) a bunch of feathers, much like the feathers of a blacke crow, very neatly and artificially gathered vpon a string, and drawne together into a round bundle being verie cleane and finely cut, and bearing in length an equall proportion one with another a speciall cognizance (as wee afterwards obserued) which they that guard their kings person weare on their heads. With this also he brought a little basket made of rushes, and filled with an herbe which they called Tahdh. Both which being tyed to a short rodde, he cast into our boate. Our Generall intended to haue recompenced him immediatly with many good things he would haue bestowed on him but entring into the boate to delitier the same, he could not be drawne to receiue them by any meanes, saue one hat, which being cast into the water out of the ship, he tooke vp (refusing vtterly to meddle with any other thing, though it were vpon a board put off vnto him) and so presently made his returne. After which time our boate could row no Avay, but wondring at vs as at gods, they would follow the same with admiration."
By the 21st of June, the Golden Hind was brought closer to shore for repairs, forts and tents were constructed for protection, and the relationship with the Miwok Indians continued. It was suggested that the natives thought of the explorers as gods. In another excerpt, the General (Francis Drake) claimed the land for Great Britain.
"This country our Generall named Albion, and that for two causes the one in respect of the white bancks and cliffes, which lie toward the sea the other, that it might haue some afiinity, euen in name also, with our own country, which was sometime so called.
Before we went from thence, our Generall caused to be set vp a monument of our being there, as also of her maiesties and successors right and title to that kingdome namely, a plate of brasse, fast nailed to a great and firme post whereon is engrauen her graces name, and the day and yeare of our arriuall there, and of the free giuing vp of the prouince and kingdome, both by the king and people, into her maiesties hands : together with her highnesse picture and armes, in a piece of sixpence currant English monie, shewing itselfe by a hole made of purpose through the plate; vnderneath was likewise engrauen the name of our Generall, etc."
New Albion and Its Later Import
Drake and the crew of the Golden Hind would head west across the Pacific Ocean after leaving New Albion, traverse that sea, explore the Indonesian archipelago, then return to England in September of 1580. The establishment and claim at New Albion would be used to support English colonial charters for the next two centuries on a sea-to-sea American continent, first at Roanoke in 1584 and Jamestown in 1607. Later it would be used by English explorers and colonists, including George Vancouver, in their claims of territory in Oregon and Canada.
Image above: Engraving of Sir Francis Drake, date unknown, W. Hall. Courtesy Library of Congress. Below: Indians greeting Francis Drake in California, 1599, Theodor de Bry's Historia Americae. Courtesy Library of Congress. Source info: Library of Congress; drake.mcn.org, "Francis Drake in Nova Albion" by Oliver Seeler; "The World Encompassed," based on the notes of Francis Drake's chaplain, Francis Fletcher, 1628; Archive.org; Wikipedia.
Sir Francis Drake
Born in 1540, Sir Francis Drake went to sea at about age 12. After Spaniards attacked an English trading fleet, including his ship, at San Juan de Ulúa in Mexico, Drake began raiding Spanish ships and coasts. In a quarter-century career, Drake attacked Panama, the west coast of South America, the Caribbean, and the coast of Spain. He was second in command of the English fleet during the Spanish Armada campaign. Drake was knighted in 1581 for his voyage of circumnavigation and died at sea on January 27, 1596.
Queen Elizabeth challenged the world dominance of Spain's King Philip II in the late 16th century. Excluded from New World trade by the Spaniards, the Queen's mariners enriched their country and themselves in state-sanctioned raiding efforts. Drake was an instrumental figure in this campaign.
In 1578, Drake sailed into the Pacific to raid the west coast of South America. The flagship was the Golden Hind, a small war galleon owned by Drake. He captured a treasure ship off Ecuador and then sought the Northwest Passage.
Finding no strait in Oregon, he turned south and landed at Drakes Bay to prepare to cross the Pacific and to claim the land for England. In September 1580, Drake and his remaining crew members returned to England with the riches they had gathered during their expedition.
Topics. This historical marker is listed in these topic lists: Exploration • Wars, Non-US. A significant historical date for this entry is January 27, 1596.
Location. 37° 56.683′ N, 122° 30.527′ W. Marker is in Larkspur, California, in Marin County. Marker can be reached from Sir Francis Drake Boulevard. Marker is at or near this postal address: 101 Sir Francis Drake Boulevard, Larkspur CA 94939, United States of America.
Other nearby markers. At least 8 other markers are within 3 miles of this marker, measured as the crow flies: Golden Gate Ferry (here, next to this marker); The Golden Hind (here, next to this marker); Green Brae Brick Kiln (approx. 0.2 miles away); Greenbrae Brickyard Superintendent's Cottage (approx. 0.2 miles away); Marin (approx. 2.3 miles away); Mission San Rafael Arcangel (approx. 2.3 miles away); The Gate House (approx. 2.4 miles away); The Belrose Theater (approx. 2.4 miles away).
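The distances above are great-circle figures, and the marker's position is given in degrees and decimal minutes. As a minimal sketch of how such "as the crow flies" distances can be computed, the Python snippet below converts the listed coordinates to decimal degrees and applies the haversine formula; the second point is a hypothetical coordinate, chosen only to illustrate a result of roughly 2.3 miles, and is not taken from the marker listing.

import math

def dm_to_decimal(degrees: float, minutes: float) -> float:
    # Convert degrees and decimal minutes (e.g. 37° 56.683') to decimal degrees.
    return degrees + minutes / 60.0

def haversine_miles(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    # Great-circle ("as the crow flies") distance between two points, in miles.
    r = 3958.8  # mean Earth radius in miles
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Marker position from the listing: 37° 56.683' N, 122° 30.527' W (west longitude is negative).
marker_lat = dm_to_decimal(37, 56.683)
marker_lon = -dm_to_decimal(122, 30.527)

# Hypothetical second point, for illustration only.
other_lat, other_lon = 37.9735, -122.5311

print(f"{haversine_miles(marker_lat, marker_lon, other_lat, other_lon):.1f} miles")  # about 2.3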
More about this marker. The marker is mounted to the outside wall of the Larkspur Ferry Terminal.
States Stake Claim On Sir Francis Drake's Landing
Image: Sir Francis Drake, portrait attributed to Marcus Gheeraerts the Younger.
Drake was the prototypical swashbuckling British ship captain. It took him three years to circumnavigate the world. In 1579, he spent five weeks repairing his ship and interacting with West Coast tribes. Amateur historian Garry Gitzen believes that happened near his house overlooking Nehalem Bay on the northern Oregon coast.
The shelves of Gitzen's basement library are lined not only with books about Sir Francis Drake, but also with what he says is evidence the British explorer dropped anchor near his home.
Gitzen points to a photo of an old survey marker chiseled into a rock.
"This is what he signed," Gitzen says. "You know, the only person who could do something like that was Francis Drake."
Gitzen is writing a book called Oregon's Stolen History. In it, he refutes the generally accepted claim that Drake landed in California, just north of what is now San Francisco. He says it matters because it's the truth.
"Otherwise, we're living a bunch of lies," he says. "Is that really what we want to do? I don't think so. If that's the case, why don't we just keep saying the sun is revolving around the Earth and the Earth is still flat?"
Ed Von der Porten heads a society of history buffs in California's Bay Area. As far as he's concerned, scholars settled the question long ago of where Drake first encountered West Coast tribes — in California's Drake's Bay. Most recently, he says, the National Park Service put that claim through not one but two scholarly commissions to see if there were any other alternative.
"The answer came back, as it always has, a resounding no," Von der Porten says.
There is still another possibility, however, for Drake's landing: Whale Cove in southern Oregon. That's the site archaeologist Melissa Darby is studying. Darby says that as a scientist, she doesn't trust anyone who's 100 percent sure of something that happened more than four centuries ago.
"We don't know where he landed," Darby says. "Just the evidence that there are so many arguments about it tells me that it's not a done deal."
The Hunt Continues
For now at least, the National Park Service has accepted the petition to officially designate 17 locations around Drake's Bay in California as a national landmark. But that's not quite the end of the story.
National Park Service archaeologist Erika Martin Seibert says the point of this landmark is to recognize the first contact between the British and Native Americans.
"At this time, current scholarship supports this area as the landing sight of Drake's Bay," Seibert says. "But that doesn't mean we can't continue to look at other places."
Seibert says the reason people will keep studying Drake's circumnavigation of the world is because it was the "moonshot" of its time.
"He was a rock star. He did something that many people thought was impossible," she says.
Israel
Geography and Landscape
Israel (Ivrit: Medinat Yisrael; Arabic: Dawlat Isra'il = State of Israel) is a republic in the Middle East on the continent of Asia. The total area of Israel is officially 20,770 km2. This is according to the 1949 boundaries, excluding the Occupied Territories and the Palestinian Autonomous Territories - the West Bank, Gaza Strip and Golan Heights - which together measure 7,375 km2.
From north to south the country stretches for about 420 kilometers, from east to west the distance varies from 20 to 160 kilometers. The Mediterranean coast is 230 kilometers long.
Israel is bordered to the north by Lebanon (79 km), to the northeast by Syria (76 km), to the east by Jordan (238 km) and the West Bank (307 km), to the south by Egypt (266 km) and in the southwest on the Gaza Strip (51 km).
Israel satellite photo. Photo: public domain.
Israel is a narrow, elongated country and has three landscapes from west to east: the coastal plain, the western mountainous region and the rift of el Ghor. The southern part of Israel is formed by the Negev desert, which covers half of the country.
The coastal plain is contiguous, interrupted only by the Carmel Mountains, and is divided from south to north into the Shephelah, the plain of Sharon and the valley of Zevulun (Zebulon). The coastal strip is partly fertile lowland, which becomes wider, drier and more sterile toward the south. Eventually this area turns into the Sinai desert. Mount Carmel (550 m) is located near the coastal town of Haifa.
The western highlands, 700 to 1000 m high, can be divided from south to north into the mountain regions of the Negev and of Judea, Samaria and Galilee. The last three areas consist mainly of limestone and show karst phenomena. The surface of the Negev consists of granite. In the far north, the hills of Lebanon merge into the highlands of Galilee to heights of about 1,200 meters, descending to the Jordan Valley in the east, in the west to the coastal plain, and in the south to the valley of Esdraelon. To the south of Esdraelon, a plateau extends for approximately 150 kilometers.
Northern slope of Har Meron, Israel. Photo: Lior Golgher, CC BY-SA 3.0, no changes made.
The highest mountains are Har Meron near Zefad (1208 m) and Ramon in the southwest of the Negev (1035 m). The western highlands are interrupted southeast of Haifa by the Jezreel Plain, which connects the Jordan Valley to the Mediterranean Sea.
The deep rift of el Ghor encompasses the Jordan Valley in the north and the Valley of Arabah in the south. In the north, the surface of Yam Kinneret (the Sea of Galilee) lies 209 meters below sea level; in the south, that of the Dead Sea lies 394 meters below sea level.
The only major river is the Jordan, in part a border river with Jordan. The remaining rivers in Israel all flow to the Mediterranean Sea, are short and have irregular flow.
Climate and Weather
Israel's location between the Mediterranean and the desert largely determines the weather. Israel has no fewer than four climate zones, from Mediterranean to desert climate. Most of Israel can be divided into two seasons, the hot, dry summer and the cold, wet winter.
Snow in Jerusalem, Israel. Photo: Pikiwiki Israel, CC 2.5 Generic, no changes made.
For example, temperatures in Jerusalem can drop to 5°C (lowest temperature ever recorded in Israel: -7°C) with sometimes even snow, while in the area around the Dead Sea the temperature can reach 45°C. There is also regular snow in the mountainous areas of Northern Galilee.
Western Israel has a Mediterranean climate with dry, hot summers and mild, damp winters. The el Ghor rift in the east receives less than 200 mm of rain per year and therefore has a desert climate.
In summer, temperatures often reach over 40°C locally, especially when the "hamsin" (Hebrew: sharav), a desert wind, blows from the east.
Hamsin, Israel. Photo: Zeeveev, CC 2.0 Generic, no changes made.
The summer heat is tempered during the day by the daily sea breeze from the west; the cooling at night is often so strong that dense mists and heavy dew occur. The hottest areas of Israel are Eilat, the Arava Valley in the Negev desert, the lowest areas of the Jordan Valley (highest temperature ever recorded: 54°C), the valley of Bet Shean, the shores of Lake Tiberias in summer, and the Dead Sea.
The winter rain usually falls in three periods: the early rain after the dry summer in October and November, the winter rain, in heavy showers, alternating with periods of sunshine and the late rain in March and April. The average annual rainfall is about 600 mm and decreases from north to south and from west to east.
Most rain falls between November and April; in the area around Mount Hermon up to 1000 mm per year falls, while in Eilat, in the south, there is usually less than 50 mm of rainfall per year. Northern Galilee has about 70 days of rain per year, in Jerusalem and in the mountains of Judea it rains for about 50 days and in the Negev desert it rains for about 10 days.
Climate tables (monthly values, January through December):

Tel Aviv
Month                       Jan Feb Mar Apr May Jun Jul Aug Sep Oct Nov Dec
Daytime temperature (°C)     18  19  21  23  26  28  30  31  31  28  24  20
Night temperature (°C)        8   9  11  13  16  19  21  22  21  17  14  11
Water temperature (°C)       18  17  18  19  22  24  27  28  27  25  22  20
Rainy days per month         15  13  12   3   1   0   0   0   1   5   9  15
Hours of sunshine per day     6   7   7   9  11  12  12  12  10   9   8   6

Month                       Jan Feb Mar Apr May Jun Jul Aug Sep Oct Nov Dec
Daytime temperature (°C)     21  23  26  31  36  38  39  40  37  33  28  23
Night temperature (°C)       10  11  14  18  17  24  26  26  25  21  16  12
Water temperature (°C)       22  21  21  23  25  26  27  28  28  27  25  23
Rainy days per month          1   1   2   1   0   0   0   0   0   1   1   1
Hours of sunshine per day     7   8   8   9  10  12  12  11  10   9   8   7
Plants and Animals
The west of Israel still belongs to the Mediterranean region, with a lot of evergreen forests. One of the largest forests in the country is the 2,000 hectare area around Bar'am, Ein Zeitim and Biriya, which was planted by the Jewish National Fund in the 1950s on the treeless ridges around the city of Tsefat. The carob tree grows in the low plains and on the dunes.
Pistacia atlantica, Elah Valley, Israel. Photo: Davidbena at en.wikipedia, CC 3.0 Unported, no changes made.
In the coastal mountains and as far as Jerusalem and Galilee, Aleppo pine forest grows on chalk soil. Oak forests grow on sandy soils in these regions, mixed with Styrax officinalis and Pistacia atlantica as far as Galilee. Pistacia atlantica is one of the oldest trees to be found in the Negev or Galilee, and it features prominently in several biblical stories.
Inland, the mountains mainly contain scrub that is called "garrigue" in calcareous areas, and "maquis" on more acidic soil. This environment has a large number of plant communities, some of which are found nowhere else in the Mediterranean. Some special varieties are butterfly orchid, Calycotome villosa, Italian gladiolus, Cistus salviaefolius, Asiatic ranunculus, wild wheat, Galilee orchid and Helichrysum sanguineum.
Sabra, Israel. Photo: Qasrawi 2000, CC 3.0 Unported, no changes made.
Everywhere in the Mediterranean area, planted and wild, one finds the Barbary fig, originally from the Americas (also called prickly pear cactus, and known in Israel as 'sabra').
To the east the vegetation changes into that of the steppe area; here grow the steppe thorn thickets of Zizyphus lotus, Zizyphus spina-christi (the jujube tree) and Acacia arabica, all very tough plants that need little water.
The bank of the Jordan is virtually without vegetation. The Judean desert is barren and arid in summer, green in winter, and one large wild flower garden with sufficient rainfall in spring, including red anemones, yellow mustards, pink cyclamen, blue orchids and brown and purple irises. The approximately 160 nature reserves in Israel cover a total of 400,000 hectares and are home to more than 3000 plant species, of which approximately 150 are only found in Israel.
Acacia raddiana, Israel. Photo: Floratrek, CC 3.0 Unported, no changes made.
Some special biotopes:
The Arava Valley is part of the great Syrian-African Rift, which runs from East Africa to southern Turkey. The temperature in this area can reach over 40°C and the originally African flora feels at home in these circumstances. In some wadis the Acacia raddiana grows, a species of tree that also grows on the African savannas. Loranthus acaciae grows as a parasite on acacia trees and its honey is an important food source for some bird species. The Mesembryanthemum from South Africa has succulent leaves that can absorb a lot of water.
In the northwest, the Negev desert turns into a semi-desert with a remarkable flora. Mainly annual plants grow between Beersheba and Sede Boker; the most common are grasses, such as Stipa capensis. The turpentine tree, or terebinth, has been growing in the Negev for thousands of years. The black iris can be seen between March and May in the calcareous areas of the Negev and the Judean desert. The asphodel flowers from January to April.
Raven, Israel. Photo: Greg Schechter, CC 2.0 Generic, no changes made.
Over time, the species richness in Israel has decreased sharply, partly due to the large-scale clearing of forests, resulting in erosion. Only recently have many forests been replanted. Israel currently has about eighty reptile species and two hundred mammal species.
The panther is still found in very small numbers, as is the Syrian brown bear. Other recent species include the striped hyena, the common jackal, the common ichneumon or pharaoh rat, the hedgehog, the rock badger and a blind mole rat (Spalax microphthalmus).
The approximately 400 bird species include the raven, the barn owl, the griffon vulture, the white wagtail, a sparrow (Passer biblicus), partridges and quail, the black-headed gull and the common pelican.
Insects and desert snails are particularly striking among invertebrates.
In the Negev desert, the Hai-Bar reserve has been set up for biblical fauna, partly through the import of onagers, Arabian oryxes and ostriches. Somali donkeys, caracals, jackals and wolves also live in this reserve.
Some special biotopes:
Desert lark in Israel. Photo: Greg Schechter, CC 2.0 Generic, no changes made.
The Arabian gazelle, in danger of extinction on the Arabian peninsula, survives in the Arava Valley; south of Yotvata is a reserve where this animal is protected. Furthermore, the striped hyena, the wild cat, the sand rat, the jumping mouse and the gerbil live here. The green bee-eater is a bird found only in this valley; other special birds are the red desert lark, the desert finch, the Palestine sunbird and the common desert lark.
Dorcas gazelle, Israel. Photo: MinoZig, CC 4.0 International, no changes made.
In the northwest of the Negev, the desert gives way to semi-desert, home to animal species that have become rare in Israel. The most striking animal is the wolf, which is also found in the desert of Judea and the Arava valley. Because the animal is protected, the Dorcas gazelle is still common in Israel, as well as the brown hare and the fox.
The following bird species inhabit the Negev: hooded crow, stone curlew, spotted sandgrouse, the protected crested bustard and the crested lark.
The wadis have a dry and desert-like environment and are a habitat for many desert animals. After downpours, water remains in hollows and pools, resulting in varied flora and fauna. The Nubian ibex lives in the highest parts of the rocky mountains; lower down you can see the rock hyrax (cliff badger), which is related to the elephant!
As far as birds are concerned, the following stand out: desert wheatear, fan-tailed raven, Sinai rosefinch and Tristram starling.
Israel's geographic location, where three continents meet, means incredible numbers of migratory birds stop over twice a year. Thanks to rising air currents, these birds can quickly travel thousands of kilometers.
Among the many bird species are also many birds of prey, such as steppe eagle, black kite, Balkan sparrow, steppe hawk, honey buzzard and screaming eagle. The stork and the black stork also cross the Holy Land twice a year.
Coral reef, Red Sea, Israel. Photo: Israeltourism, CC 2.0 Generic, no changes made.
The Red Sea is famous for its colorful fish, corals and invertebrates, which are part of the Indo-Pacific fauna. In the immediate vicinity of coral reefs you will find the greatest diversity of fish, which find food and shelter there. The so-called fringe reef has the greatest diversity of species. This is somewhat less at the higher plate reef and the lower deep reef.
The following list is just a small selection of the many species: four-eyed magician, blue parrotfish, masked puffer fish, wrasse, gray moray eel, gold-tail magician, surgeon fish, blue surgeon fish, clownfish, rock fish, boxfish and grouper.
Special life forms are: soft coral, red sponge, sea urchin, sea anemone, tube sponge, slug, sand tube worm, fire coral and barber shrimp.
The Mediterranean zone has been deforested by humans since the Stone Age, but there are still plenty of special animals, such as the spur-thighed tortoise and the mongoose. Common birds include the lesser kestrel, red-rumped swallow, gray bulbul, goldfinch, black-headed bunting and turtle dove.
Es Skhul cave, Israel, where bones have been found. Photo: R Yusherun, CC 3.0 Unported, no changes made.
Palestine is the cradle of the small Jewish people. Bone finds date back to about 10,000 BC, and the history of Israel begins around 3000 BC. The land lay between powerful empires such as Babylonia and Egypt, and important caravan roads ran through the area. In addition, it was fertile arable land. Towards the end of the third millennium BC the first political units arose; the Canaanites founded cities like Jericho, Megiddo, and Jerusalem.
Hyksos spearhead, Israel. Photo: public domain.
Even then, the history of Palestine was strongly linked to developments in Egypt, which was conquered by the Hyksos around 1700 BC. Only after 1550 BC were the Hyksos driven from Egypt, and the country soon became the largest power in the Middle East. It was not long before Palestine became subject to Egypt, and the city-kings appointed by the Egyptians were responsible for paying taxes to the Egyptian pharaohs. The vast majority of the population suffered greatly from the collection of taxes, often carried out with the help of soldiers.
Philistines and Hebrews
Israel Five-City League. Photo: public domain.
In the early 13th century BC the Philistines, a so-called Sea People, invaded Palestine and succeeded the Egyptians despite fierce opposition. The Philistines ruled through the so-called "League of Five Cities," which consisted of the cities of Gaza, Ashkelon, Ashdod, Ekron, and Gath. In the absence of central authority, the Philistines could not properly defend Palestine against attacks from tribes such as the Edomites, the Ammonites, the Moabites, and especially the Hebrews, a nomadic pastoralist people. The Hebrews emerged as the strongest from the infighting and founded several settlements, for the time being only in mountainous areas.
The native population of Palestine initially had little to fear from the Hebrews and was left alone. After permanent settlement in the mountains, the Hebrews moved into the valleys where the cities of the Canaanites were. That the militarily weaker Hebrews were able to conquer these cities fairly easily was partly due to the mutual struggle between the different cities, which weakened them. Furthermore, the Hebrews waged war cleverly and made use of spies, saboteurs and traitors; in short, they were well organized and made good use of the weaknesses of their opponents.
It was now a matter for the Hebrews of consolidating the situation, which in their view required strong central authority. It was considered time to establish a royal house. Throughout his reign Saul fought against the Philistines, as well as the Edomites, Moabites, and Amalekites. During that time, Saul was able to unite the Israelite tribes and implement significant social changes. One of these innovations was the imposition of a form of taxation, which, however, provoked a widespread backlash. The last years of Saul's reign were marked by major conflicts with the traditional elite.
David with Goliath's Head for Saul, painting by Rembrandt. Photo: public domain.
After Saul was overthrown by David with the help of the Philistines, David took over the leadership of the Israelite people. First the southern tribes in Judah appointed him king; in 1004 BC the northern tribes followed suit. The Philistines tried to break this covenant, but were defeated and thereafter played no role in Israel's history. After this David conquered Jerusalem, which became the capital and the religious center of the kingdom. Domestically, David had the same problems as Saul. Protests and uprisings, led by his son Absalom among others, were crushed by David. In 965 BC David was succeeded by his son Solomon, who immediately eliminated all his competitors but otherwise ensured that the kingdom remained relatively quiet. After Solomon's death, his eldest son Rehoboam succeeded him.
The northern tribes of Israel realized that under the new ruler they would have an even more difficult time than under his father. They called Jeroboam back from Egypt and crowned him king of the northern tribes, creating a tense situation. Jeroboam managed to keep his country out of war, but three of his successors were murdered, including his son Nadab. About this time the southern land of Judah and northern Israel came under threat from the Assyrians. Judah and Israel made peace to resist the Assyrians, who were devastatingly defeated in 853 BC. Not until 841 BC did the Assyrian king Shalmaneser manage to subdue Israel. A hundred years later, the entire upper class of the Israelites was taken into slavery by the Assyrian king Sargon, and Israel temporarily disappeared from the map. Southern Judah was conquered in 734 BC by the Assyrian Tiglath-Pileser. Judah accepted Assyrian rule and faithfully paid its taxes, so the Assyrians left the people alone for a long time. In the early eighth century BC Palestine became a vassal state of Egypt, and later the Egyptians were in turn expelled by the Babylonian prince Nebuchadnezzar. When Zedekiah (597-587 BC) declared independence, Nebuchadnezzar responded harshly, plundering Jerusalem in 587 BC and destroying Solomon's temple. After Nebuchadnezzar II's death in 562 BC, the Persians, led by Cyrus, conquered Judea in 539 BC. Many wealthy Jews then returned from Persia to Judea.
Empire of the Seleucids, 200 BC. Photo: Thomas Lessman, CC 3.0 Unported, no changes made.
After Alexander the Great's death in 323 BC, his vast empire was divided among his successors, the so-called Diadochi. Ptolemy was assigned Egypt and in 320 BC conquered Palestine as well. A hundred years later, the Seleucids invaded Palestine under the leadership of Antiochus III, and from 200 BC the Jews were part of the Seleucid empire and the Hellenization of the country accelerated.
The Hellenes oppressed the Jews and of course a revolt was bound to happen.
The high priest Mattatias, who had fled to the desert, gathered a large number of militant supporters, and this group named itself after one of Mattatias's forefathers, Hasmon. After the death of Mattatias, his sons Judas, Jonathan, and Simeon took over the leadership of the Hasmonean revolt. Judas in particular, nicknamed the Maccabee, proved an excellent soldier, and in 164 BC he recaptured Jerusalem from the Seleucids. The Seleucids then raised a large army and tried to regain the lost ground, offering the Hasmoneans peace and freedom of religion. Judas fought on, but was killed in 160 BC. His brother Jonathan succeeded him, but for political motives he was murdered in 143 BC.
After this, the third brother, Simeon, took over and managed to conclude a truce with the Seleucids. In return he was appointed high priest and became leader of the Jews with a reasonable degree of independence. In 140 BC the heredity of this office was officially confirmed; the Hasmonean dynasty was finally established and the land was renamed Israel. In 134 BC Simeon was killed by a relative, but his son, John Hyrcanus I, managed to put down the rebellion and ascend the throne himself. The Seleucids started another war, but it came to nothing; on the contrary, Israel slowly but surely expanded its sphere of influence.
After the death of John, a bloody family struggle for the succession ensued and Alexander Jannai, a son of John, eventually came to power. Under his rule, the coastal cities of Galilee were conquered, as well as areas east of the Jordan.
Entry into Jerusalem by Herod the Great. Photo: public domain.
After Alexander's death, another succession battle ensued, from which the Romans took advantage. After the collapse of the Seleucid empire, they had become the great power in the region, making Syria and Palestine the Roman province of Syria. After the death of the mighty Julius Caesar in 44 BC, the area fell into civil war and was also attacked from the east by the Parthians.
Herod, son of Caesar's ally Antipater, was proclaimed king of Palestine, and in 37 BC he managed to take possession of his kingdom and Jerusalem. Most of the members of the supreme council of the Israelites, the Sanhedrin, were executed by him. Herod ensured a long period of peace with foreign powers, but was a very harsh ruler to his subjects, who hated him. This made him increasingly suspicious, and madness struck when he even had members of his own family murdered. When Herod finally died in 4 BC at the age of 69, a sigh of relief went through Israel.
Three of his sons reigned over his empire until AD 44, after which the country was ruled by Roman procurators, who, however, were more interested in enriching themselves, and corruption became rampant. In May 66 a revolt broke out and the Jews managed to drive the Romans out of several cities. In the summer of 67, the Romans re-entered the country, Flavius Vespasianus from the north and his son Titus from the south. Just before Vespasian took Jerusalem, word reached him that Emperor Nero had been overthrown, and Vespasian was proclaimed emperor.
In 70 Titus finally managed to conquer Jerusalem. In 132, a revolt against the Romans led by Simeon Bar Kokhba ensued, and the Jews rapidly conquered the entire country. Only the governor of Britain, Julius Severus, managed to stop the advance of the Jews by paying them back in kind. The decisive battle was won by him in 135; 600,000 dead were counted among the Jews, as well as thousands of Roman soldiers.
Byzantine and Arab Empire
In 324 the Christian Constantine became the sole ruler of the Roman Empire and built churches wherever Jesus had been. One of his successors, Emperor Justinian (527-565), also followed this policy and many pilgrims brought prosperity to the country. In 529 the Samaritans revolted and in 614 the Persians marched through Palestine.
Caliphate around 654. Photo: Mohammad adil at the English-language Wikipedia, CC 3.0, no changes made.
Between 634 and 644, the entire Middle East, including Palestine, was conquered by Caliph Omar I. The Palestinians did not suffer much, however, because Islam was a tolerant religion. From 750 onwards, the Abbasids ruled Palestine from Baghdad for five hundred years. Jerusalem grew into the second most important city for Muslims at that time. From 905 the Abbasids were threatened by the Fatimids and the Byzantines. Churches and monasteries were burned to the ground by the Fatimid ruler Hakim. Hakim was murdered in 1021, followed by a short period of rest. Around 1070, Palestine was conquered by the Turks.
Conquest of Jerusalem in 1099. Photo: public domain.
On November 27, 1095, Pope Urban II called for a crusade to liberate the holy sites in Palestine from the "unbelieving" Muslims. Ultimately, the period of the Crusades would last more than two centuries and cost millions of lives. In July 1099, Jerusalem was conquered amid unprecedented killing of Muslims as well as Jews, men and women, children and the elderly. Big names associated with the Crusades were Robert Curthose, Raymond of Toulouse, Bohemond of Taranto and Godfrey of Bouillon. The latter died in 1100 and his brother Baldwin was crowned king of Jerusalem. Baldwin died in 1118 and was succeeded by a relative, Baldwin II, under whose reign the military orders of the Knights Templar and the Knights of St John were established.
The Muslims intensified the fight against the Christians, and even Baldwin was imprisoned. After the ransom was paid, they released him, but in 1131 he died and was succeeded by his son-in-law Fulk of Anjou. In 1144 Edessa was conquered by the Saracens and the Pope again called for a crusade against the Muslims. This crusade, led by King Louis VII of France and the German king Conrad III, failed completely, and the Muslim states in the Middle East grew stronger and stronger. Saladin, sultan of Egypt at the time, conquered practically all the Crusader fortresses and cities in 1187, and Jerusalem was taken on October 2, 1187.
Richard Lionheart on a crusade. Photo: public domain.
Another crusade was launched, this time led by Richard the Lionheart of England, Philip Augustus of France and Frederick Barbarossa of Germany. Despite the death of Frederick Barbarossa, the other two marched to the Holy Land and had some initial success. Richard the Lionheart even managed to defeat Saladin's army and then wanted to reconquer Jerusalem. Before it came to that, Saladin proposed a peace treaty with free access to all the sacred sites. Richard agreed in 1192 and returned to England. Four more crusades followed, but in 1244 the kingdom of Jerusalem was finally conquered by the Muslims. In 1271 the last Christians left Palestine; only the city of Acre held out until 1291.
Turkish rule and the British Mandate over Palestine
After the Crusades, Palestine belonged to the empire of the Mamluks, who ruled from Cairo. The Mamluks were defeated at Aleppo by the Ottoman sultan Selim in 1516, beginning 400 years of Turkish rule in the Middle East. Palestine no longer played any role on the international stage for a long time, and only came back into the picture at the time of the French emperor Napoleon Bonaparte. With the support of the British, however, Napoleon could be kept out of Palestine. In 1865, the Palestine Exploration Fund was established, followed in 1878 by the foundation of the first Jewish agricultural settlement. Four years later, the first wave of immigration from Eastern Europe began. In 1896, Theodor Herzl wrote the book "The Jewish State," which advocated the establishment of a Jewish state in Palestine. Herzl thus became the founder of Zionism, the Jewish national movement whose goal is the return of the Jewish people to the Holy Land (literally, the hill of Zion).
In 1901, Chaim Weizmann founded the Jewish National Fund, which spent money on land purchases. Between 1904 and 1914, many immigrants returned to Palestine, and the Palestinians gradually grew suspicious as more and more land fell into the hands of the Jews and the establishment of a Jewish state seemed to be drawing ever closer. In 1908, Arabs attacked Jewish villages for the first time. In November 1917, the Balfour Declaration followed, in which Britain declared that it supported the formation of a Jewish state in Palestine. Some time earlier France had indicated that it was sympathetic to these developments.
Map of Palestine and Transjordan during the British Mandate. Photo: public domain.
In April 1920 the British were given the mandate over Palestine, and the country was again inundated with immigrants. The Grand Mufti of Jerusalem then called for a holy war against the Jews, and disturbances were the order of the day. The British became much more cautious, fearful of jeopardizing their alliance with the Arabs. This put an end to the Jews' dream of a state of their own for the time being, because achieving it alone was of course an illusion. Nevertheless, the Jews kept working internally toward a Jewish state, while the Arabs also increasingly acquired a national consciousness. This deepened the gap between the Jews and the Arabs, and the number of violent clashes between the two peoples increased. The British, who still held the area under mandate, increasingly sided with the Arabs and turned the thumbscrews on the Jews.
Jerusalem, May 8, 1945. Photo: Matson Photo Service, public domain.
In 1933, power in Germany was taken over by the Nazis, and that was the signal for tens of thousands of Jews to immigrate to Palestine. This again caused a lot of friction with the Arabs, and just before the start of World War II the British announced a halt to immigration, despite knowing that the Jews were having a very difficult time in Germany. Yet many Jews still entered the country in secret, and resistance grew against both the British Mandate troops and the Arabs. In the meantime, World War II was raging in Europe and practically all of European Jewry was being massacred by Adolf Hitler's Nazis. Approximately 6 million Jews were systematically murdered in concentration camps, most of them in gas chambers. A relatively small group managed to escape the clutches of the Nazis, especially in countries such as Finland, Denmark, Italy and Bulgaria.
During the war, the British came under increasing attack in Palestine. Secret organizations attacked British targets and murdered British policemen and soldiers. On February 14, 1947, the British declared that they were no longer in control of the Arab-Jewish problem and enlisted the help of the United Nations. On November 29, 1947, the United Nations General Assembly approved the division of Palestine into a Jewish and an Arab state. Many Arabs were against this partition plan, and the Mufti of Jerusalem even called for all-out war on the Jewish state.
The State of Israel
Immediately a civil war broke out between Arabs and Jews, with the Jews gaining the upper hand. Impressed by the bloody conflict and opposition from Britain, the United Nations wanted to undo the partition decision, but the now-formed provisional administration of the Jewish community, which numbered 600,000 souls, proclaimed the Jewish State of Israel on May 14, 1948; with that, the 26-year-old British Mandate over Palestine came to an end.
In response, barely hours later, tanks from Egypt, Transjordan, Syria, Lebanon, and Iraq rolled toward Israel; the War of Independence had begun. Although another US mediation plan was launched, Israel went to war against its enemies. With a one-month hiatus, fighting continued until early 1949, when ceasefire treaties with Egypt, Lebanon, Jordan and Syria were negotiated by the UN on the island of Rhodes. Through extensive arms supplies, however, Israel had built up such a preponderance that it even conquered areas that are still Israeli territory today. Arab Palestinians fled in their thousands to neighboring countries, and by early 1949 80% had left the country or been expelled by Israeli forces. They were forced to settle in refugee camps in Jordan (including the West Bank before 1967), Lebanon and the Gaza Strip, which came under Egyptian control. Jews from all over the world made the opposite journey; especially from the Soviet Union and Arab countries, hundreds of thousands of Jews emigrated to Israel to help build up the country. Terrorists ("fedayeen") were deployed from neighboring Arab countries to disrupt life in Israel. This cost the lives of approximately 1,300 Israelis, and Israel responded each time with retaliation. This pattern would be the fate of the Israeli and Palestinian peoples to this day.
Israel's first prime minister and for many years the dominant figure was David Ben-Gurion (1948-1953; 1955-1963). He was the leader of the largest party, the socialist Mapai. State formation began under Ben-Gurion. Industrialization and mechanization of agriculture resulted in a welfare state following a Western example.
David Ben-Gurion, Israel's first prime minister. Photo: Israeli GPO photographer, CC 4.0 International, no changes made.
The main problem for Israel remained its relationship with the Arab states. Especially after the revolution in Egypt (1952) the situation became threatening, as Egyptian President Nasser strove to undo the defeat of 1948. In 1955, tensions increased further due to, among other things, arms supplies to Egypt from communist countries, the military alliances between Egypt and other Arab countries, and the closing of the Suez Canal in 1956. Israel was incited by France and Great Britain to start a war against Egypt. Sinai was taken in six days, but pressure from the United States forced Israel not to keep it permanently. In March 1957, Israel withdrew its troops. The situation in the region now became very complicated and also a theater of the Cold War, in which the Arab states were supported by the Soviet Union and Israel by the United States and Western European countries.
In 1960 Prime Minister Ben-Gurion came into conflict with many party members, which led to his resignation in 1963. He was succeeded by the finance minister, Levi Eshkol (1963-1969).
The Palestine Liberation Organization (PLO) was founded in 1964. It reminded the world community of the great Palestinian refugee problem, but Israel could not be persuaded to allow the refugees to return to their old homeland. Meanwhile, the many refugees became an increasing source of problems in the countries where they stayed.
Six-Day War and Yom Kippur War
Moshe Dayan, Israel. Photo: IDF Spokesperson's Unit, CC 3.0 Unported, no changes made.
In the summer of 1967, Israel waged a pre-emptive war against neighboring Arab countries and, during the so-called Six-Day War (June 5-10, also known as the June War), occupied the Syrian Golan Heights, the Jordanian West Bank, East Jerusalem, and Egypt's Sinai Peninsula together with the Gaza Strip. Led by the legendary Moshe Dayan, the Israelis won a resounding victory over their Arab neighbors. On June 10, 1967, a ceasefire arranged through the Security Council ended the Six-Day War.
On November 22, 1967, the Security Council passed Resolution 242, calling for Israel's withdrawal from the occupied territories, but Israel refused to withdraw and installed a military administration. The Arab states refused to recognize Israel, and after 1967 Israel was ravaged by Palestinian terrorists operating from Jordan and Lebanon. Retaliatory acts were also carried out on Egyptian territory, after which Egypt made proposals for peace negotiations, which, however, were rejected by Israel. In October 1973 Egypt and Syria went on the attack and initially achieved success in this so-called Yom Kippur War. Israel, however, struck back, and the United States and the Soviet Union arranged an armistice. Diplomatic negotiations between Egyptian President Anwar as-Sadat, US mediator Henry Kissinger and Israeli Prime Minister Golda Meir (who had succeeded the late Eshkol in February 1969) were conducted in such a way that Egypt appeared to have emerged victorious. In March 1974, Golda Meir formed a new coalition government; in April, however, she announced her resignation. General Rabin became prime minister of a new coalition cabinet that included Shimon Peres and Yigal Allon.
Peace Treaty between Israel and the Palestinians!
In the course of 1974, separation agreements were concluded with Egypt and Syria, whereby Israel withdrew from the territories it had occupied in the October war and also gave up part of Sinai.
In the meantime, Israel became increasingly isolated, especially through the use of the 'oil weapon' by the Arab countries, and was also involved in the Lebanese civil war because of the retaliatory and preventive actions on Lebanese soil against the Palestinians residing there.
Camp David agreement. Photo: public domain.
In 1977, parliamentary elections were won by the conservative Likud party under Menachem Begin. The war had meanwhile resulted in an economic crisis, which even led to emigration. In municipal elections in 1976, the Palestinian population had voted en masse for the PLO, while the Israeli Palestinians increasingly declared their solidarity with the Palestinians in the occupied territories. In November 1977, President Sadat of Egypt visited Begin and proposed a peace settlement. In 1978, under the mediation of US President Carter at Camp David, the prospect of a peace treaty between Israel and Egypt emerged. This peace treaty was concluded in March 1979, but the continued establishment of settlements in the occupied areas prevented further rapprochement. In August 1980, the Israeli parliament passed a law declaring Jerusalem the one and indivisible capital. The elections of June 30, 1981 were won by the Likud bloc, and its second cabinet could begin. In 1981, settlement policy was also intensified, and on December 14 the Golan Heights were annexed, despite much international criticism.
Despite a tacit truce with the PLO in Lebanon, after an attack on the Israeli ambassador in London on June 6, 1982, Israeli forces entered southern Lebanon with a great display of power and even besieged the capital, Beirut. Despite the retreat of the PLO fighters, Israel also faced much domestic criticism, especially after the massacres by Lebanese allies in the Palestinian camps of Sabra and Shatila in September 1982.
In August 1983, Prime Minister Begin resigned and his foreign minister, Yitzchak Shamir, took over the leadership of the cabinet. Early elections in March 1984 resulted in a government of "national unity," in which first the socialist Shimon Peres (1984-1986) and then Likud leader Shamir (1986-1988) served as prime minister. In June 1985, this government decided on a complete withdrawal from Lebanon, apart from the security zone. The cabinet also managed to improve the miserable economic situation with a profound restructuring policy.
In 1984, 10,000 Jews or Falashas from Ethiopia came to Israel via a secret airlift.
First Intifada
Barricade during the First Intifada, Israel. Photo: Abarrategi, CC 4.0 International, no changes made.
Growing unrest in the occupied territories was answered by Israel with harsh punishments, deportations, curfews and school closures. In addition to PLO supporters, more and more Islamic fundamentalists, including the Hamas movement, manifested themselves. On December 8, 1987, the Palestinian uprising, or Intifada, broke out in the Gaza Strip and the West Bank. Despite tough measures, the military proved unable to cope with it, and growing disunity within Israel itself was reflected in the November 1, 1988 elections, in which both the Likud bloc and the Workers' Party lost seats to radical right and left parties.
Meanwhile close strategic, political and economic cooperation with the United States continued, but ties were also gradually re-established with the Soviet Union and other communist countries in Eastern Europe in the 1980s, which was reflected in, among other things, an increasing immigration of Russian Jews.
Second Gulf War
Israel fires Patriot missiles to take out Iraqi Scud missiles. Photo: Alpert Nathan, CC 3.0 Unported, no changes made.
At the end of 1989, Egypt under Mubarak and the United States made unsuccessful attempts to break the deadlock in talks over the occupied territories. On March 15, 1990, the Shamir-Peres cabinet fell, and it was only after a difficult cabinet formation that Shamir finally managed, in June 1990, to form a coalition of his Likud bloc with a number of religious and nationalist parties.
After the start of the Second Gulf War on January 17, 1991, Iraq attempted to draw Israel into the battle by bombarding Israeli cities with Scud missiles, causing some deaths but mostly material damage. Under pressure from the US government, Israel decided not to respond to the attacks, so as not to endanger the anti-Iraqi coalition.
After the war, in February 1991, the Intifada flared up again. Partly under pressure from the Americans, Israel took part in a peace conference on the Middle East in Madrid at the end of 1991. The Palestinians who were part of the Palestinian-Jordanian delegation were given a hero's reception upon their return.
Period Rabin
Rabin, Israel. Photo: Yaakov Saar, CC 3.0 Unported, no changes made.
On July 13, 1992, Shamir was replaced by Yitzchak Rabin. The Rabin administration did not shy away from contacts with the PLO, which resulted in an agreement in Washington on September 13, 1993 on limited Palestinian self-government in Gaza and Jericho. This Oslo Accord opened up opportunities for improving relations with Syria, Jordan and Lebanon. The Oslo-2 accord followed in 1995, which provided for a phased Israeli withdrawal from the main cities of the West Bank.
In March 1993, the Knesset elected the Workers' Party's Ezer Weizman president, succeeding Chaim Herzog. In mid-1994, Israeli Prime Minister Rabin and King Hussein of Jordan signed the "Washington Declaration," formally ending the state of war between the two countries. Negotiations with Syria, on the other hand, remained difficult, the main stumbling blocks being the security arrangements for an Israeli withdrawal from the Golan Heights and the "depth" of the peace to be made. In 1995, clashes between the Israeli army and its ally, the South Lebanese Army (SLA), on the one hand, and Shia Hezbollah fighters and Palestinians on the other, again killed dozens.
Period Netanyahu
Netanyahu meets Arafat at the World Economic Forum in Davos in 1997. Photo: World Economic Forum, CC 2.0 Generic, no changes made.
In November 1995, Prime Minister Rabin was murdered in Tel Aviv by a young Israeli nationalist. He was succeeded by Shimon Peres, who continued the peace process. Peres suffered a narrow defeat to Likud leader Benjamin Netanyahu in the parliamentary elections of late May 1996 and in the first direct election of a prime minister. Netanyahu formed a right-wing coalition government and pledged to continue the peace process with the PLO and the Arab countries. In the 1996 elections for a Palestinian Council and a Palestinian president, Arafat was elected president by a large majority.
During 1996, great political division arose in Israel over the peace process. The reasons were Hamas's suicide bombings and Netanyahu's policies, which abandoned Rabin and Peres's land-for-peace philosophy and, under great foreign pressure, reluctantly continued negotiations with the PLO on the basis of a peace-for-security strategy.
Netanyahu announced the construction of new Jewish settlements and initially refused to agree to the Israeli military's withdrawal from Hebron, which was agreed upon in early 1997 following US pressure. Tensions between Israel and the PLO quickly escalated and the fragile relationship with the Arab countries was also tested by Israel's tough stance.
The peace process was further compromised when, in February 1997, Netanyahu announced the construction of the Har Homa Jewish residential area in East Jerusalem. In addition, in September of that year, the construction of new Jewish settlements began in Efrat, in the West Bank. Even the United States openly disapproved of the Netanyahu government's policies in October 1997, and dissatisfaction with Israeli settlement policy grew within the Arab world and the European Union. In November, talks resumed between Israeli and Palestinian delegations on the further elaboration of the territorial transfers. In Israel, Netanyahu was hit hard by, among other things, an accusation of corruption and a failed assassination attempt on a Hamas leader by Israeli intelligence. Within Israel itself, resistance also grew against the Israeli presence in Lebanon, where the army carried out several attacks on the pro-Iranian Hezbollah. In June 1997, the Workers' Party elected Ehud Barak as party leader, succeeding Shimon Peres. In the Palestinian Autonomous Territories (the West Bank and Gaza Strip), living conditions deteriorated significantly as a result of Israel's punitive measures following the Hamas bombings; 70,000 Palestinians could not go to work because of the border closures. Palestinian leader Arafat also lost prestige due to the deadlock in the peace process and increasing corruption in Palestinian circles. In April, British Prime Minister Blair, as holder of the European Union presidency, made an attempt to get the peace process back on track.
US pressure on Netanyahu eventually led to the Wye Plantation deal, brokered by US President Clinton with the help of the ailing Jordanian King Hussein and signed in October 1998 by Arafat and Netanyahu. The deal called for Israel to withdraw from 13.1 percent of the West Bank; Arafat, in turn, pledged to take tougher action against Hamas terrorist attacks and declared himself ready to revise the Palestinian Charter. Israeli settlement policy was again a stumbling block and proved to be an obstacle to the implementation of Wye Plantation.
Period Barak
Barak, Israel. Photo: Adi Cohen Zedek, CC 3.0 Unported, no changes made.
In late 1998, Netanyahu's cabinet fell, but the Likud again chose Netanyahu as its candidate for prime minister and party leader. In response, several Likud leaders turned their backs on the party. Tensions between ultra-Orthodox and secular Jews in Israel ran high in early 1999.
The big loser of the parliamentary elections in mid-May 1999 was the Likud party; a big winner was the ultra-Orthodox Shas party, which won 10 seats. The Workers' Party remained the largest party, despite a significant loss of seats. Netanyahu immediately resigned as prime minister after the result, and the new prime minister became Ehud Barak of the Workers' Party. Netanyahu also stepped down as party leader and was succeeded by Ariel Sharon.
During his campaign, Barak had pledged to revive the peace process with Syria and the Palestinians, and made a concrete commitment that under his rule, the Israeli army would leave Lebanon within one year. Barak further promised that a referendum would be decisive on the return of the Golan to Syria and the withdrawal of the Israeli army from southern Lebanon.
Immediately after his cabinet was sworn in, Barak began negotiations with the Palestinians. After interventions by US Secretary of State Albright and Egyptian President Mubarak, Barak and Arafat concluded a new agreement on September 4, 1999. In this "Wye-2" accord, Israel committed that 18.1% of the occupied land in the West Bank would come under Palestinian control in three stages and that at least 350 Palestinian prisoners would be released. The most important addition in Wye-2 was a blueprint for a comprehensive peace between Israel and the Palestinians, which was to be finalized on February 13, 2000 and form the basis of a final peace settlement by September 2000. Israel then began implementing the agreement: 350 Palestinian prisoners were released in two phases, and on October 4, 1999 protocols for the Gaza-Hebron link were signed. In January 2000, an agreement was reached between Israel and the Palestinians on the transfer of land in the West Bank.
However, the peace negotiations went badly, with Jerusalem's status a particularly sensitive issue. In protest against the ongoing construction of Jewish settlements, the Palestinians stopped negotiating in early December, but a secret summit between Barak and Arafat restarted the stalled peace process.
In December 1999, Israel and Syria agreed to peace negotiations on the basis of returning the Golan in exchange for peace, and an Israeli withdrawal from southern Lebanon in return for Syrian efforts to curb Hezbollah. In mid-April 2000, Israel completed the withdrawal of its troops from Lebanon.
Israel received the Pope in early 2000 and the Chinese President also visited the country.
Period Sharon
Sharon, Israel. Photo: Helene C. Stikkel, public domain.
Barak's coalition fell apart in mid-2000 as a result of disagreements between government parties over domestic and foreign policy. New elections took place in February 2001 and resulted in a major victory for Ariel Sharon's Likud party.
As a result of disagreements between Likud and the Workers' Party, elections again took place in January 2003. The Workers' Party lost these elections while the center-right Shinui party grew strongly. By March 2003, Sharon had formed a new cabinet consisting of Likud, Shinui, the National Religious Party, and the National Union, accounting for 68 of the 120 seats in the Knesset.
After the election, Sharon seemed to steer a somewhat milder course. In early February he even held talks with moderate Palestinians. Meanwhile, the United States, the United Nations, the European Union and Russia put pressure on both sides, and a "roadmap" was drawn up for a comprehensive peace in the Middle East. In the course of 2003 and early 2004, however, many bloody attacks ensured that little of the good intentions came to fruition.
After 38 years of occupation, Israel completed the evacuation of its 22 settlements in the Palestinian Gaza Strip on August 22, 2005. Two weeks ahead of schedule and more peacefully than expected, all of the approximately 8,500 settlers left the area.
However, Sharon's government fell over disagreements with the religious parties, and Sharon founded his own political party, Kadima. In 2006, Sharon fell into a deep coma, and after a transition period Ehud Olmert became the new prime minister.
Period 2006-present
The 2006 Palestinian Territory elections were won by the fundamentalist-Islamist Hamas. This led to an economic and political boycott of the Palestinian Authority by Israel, the US and the EU, which classify Hamas as a terrorist organization. After an attack by the Lebanese movement Hezbollah on an Israeli border post, in which three Israeli soldiers were killed and two captured, and rockets were fired at Israeli targets, the Israeli army launched a massive retaliatory attack on Lebanon that left over 1,100 Lebanese dead, most of them civilians. Some 1,500 katyusha rockets fell on northern Israel; Israel found no way to stop this rocket fire.
Olmert, Israel (Photo: Government Press Office (Israel), CC 3.0, no changes made)
The 2006 Israeli-Lebanese war caused major internal problems for Olmert. The population of southern Israel also faced continuous rocket fire, this time from the Gaza Strip controlled by Hamas; Sderot bore the brunt of these attacks.
In November 2007, a conference was held in the American city of Annapolis between Israel, the Palestinian Authority and several Arab countries, which also sent representatives. President Bush convened this conference with the aim of creating an independent Palestinian state by the end of 2008.
In May 2008 it was announced that Olmert was suspected of corruption: an American businessman of Jewish descent named Morris Talansky was said to have given him a total of $150,000 over a period of 15 years. Olmert claimed the money consisted of contributions to his election campaigns, but Israeli prosecutors believed he had pocketed it. Olmert stepped down in July 2008 and was succeeded by Tzipi Livni, who failed to form a majority government. Meanwhile, Israel campaigned in the Gaza Strip in response to continued missile attacks on Israeli territory, sparking international protests over its use of disproportionate force.
Early elections were held on February 10, 2009, in which no party achieved a clear majority. Benjamin Netanyahu was asked to form a new cabinet and succeeded in March 2009 with the support of Ehud Barak's Labor Party.
On May 31, 2010, Israeli military forces stopped a convoy of ships in the waters around Cyprus that was en route to Gaza with the intention of breaking the Gaza blockade and delivering relief supplies and medicines. Nine Turkish activists were killed and relations with Turkey were seriously disrupted. Direct talks with the Palestinians failed in the fall of 2010 due to disagreements over the settlements. In November 2012, Israel fought a seven-day conflict with Hamas over the Gaza Strip.
Netanyahu in 2018, Israel (Photo: U.S. Embassy Tel Aviv, CC 2.0 Generic, no changes made)
Parliamentary elections were held in January 2013, and Netanyahu took over as leader of a coalition in March 2013. Direct negotiations with the Palestinians resumed in July 2013 and continued into 2014. In July 2014 there was reciprocal rocket fire between Hamas in the Gaza Strip and Israel, and Israel entered Gaza with ground troops. In May 2015, after early elections, Netanyahu formed his fourth right-wing government with religious input; in May 2016 the coalition was expanded with Yisrael Beiteinu. In June 2016, an agreement was reached in response to the Gaza shipping incident, and Turkey and Israel normalized their relations. In February 2017, parliament passed a law legalizing dozens of Israeli settlements on Palestinian territory. In December 2017, President Trump recognized Jerusalem as the capital of Israel, causing great unrest in the Arab world and among Western allies. Between March 2019 and April 2020, three elections took place in which Netanyahu faced a centrist coalition led by Benny Gantz, the former army chief of staff. None produced a clear majority; in April 2020 the two formed a government of national unity to face the Covid-19 pandemic. In early 2021, Netanyahu appeared in court on corruption charges.
Israel signed normalization agreements – brokered by the US – with Bahrain, the United Arab Emirates, and Morocco in late 2020 and reached an agreement with Sudan in early 2021.
In June 2021 Naftali Bennett of the Jewish nationalist Yamina party formed a broad coalition to oust Benjamin Netanyahu. It was the most ideologically diverse coalition in Israel's history, including the participation of an Arab-Israeli party. Under the terms of the coalition agreement, Bennett would remain prime minister until August 2023, after which Alternate Prime Minister and Foreign Minister Yair Lapid would succeed him.
Jews: Emigrants and Immigrants
Emigration from Israel (Photo: public domain)
Already in the centuries around the beginning of our era, many Jews left Palestine. They settled in all parts of the then known world: Asia Minor, Greece, Italy, Egypt and North Africa, mostly living in flourishing and extensive trading colonies. It is believed that at the time more Jews lived outside Palestine than within its borders. In the course of the Middle Ages, Jews settled in most European countries, but also in the Middle East and in Asia, as far as China.
Immigrants to Israel just after the Second World War (Photo: public domain)
The first groups of immigrants, in the 15th and 16th centuries, came from Spain and Portugal, from where they had been expelled for their faith. This link between persecution and migration has continued to this day.
Some 15 million Jews currently live worldwide, about half of whom live in Israel and a third in the United States.
Israel had 8.3 million inhabitants in 2017 (including East Jerusalem and the annexed Golan Heights), 75% of whom were Jews and almost 25% Arabs. By place of birth, the following distinction can be made among the Jews: Israeli by birth 76.3%, European / American by birth 16.2%, African by birth 4.8% and Asian by birth 2.7%. About 187,000 Jews live in the West Bank, about 5,000 in the Gaza Strip, about 20,000 in the Golan Heights and about 175,000 in East Jerusalem. Today more than 1.5 million Arabs live in Israel, 80% of whom are Muslims. The remaining 20% consists of Christians and Druze. About 1.7 million Palestinians live in the West Bank and East Jerusalem.
Population density, Israel (Photo: Ynhockey, CC 4.0 International, no changes made)
The population is spread very unevenly across the country: the North district, north of Haifa, and the South district, south of Jerusalem, together make up 85% of the country's total area but hold only about 27% of the total population. Israel is thus one of the most urbanized countries in the world: more than 90% of the population lives in cities, especially Jerusalem, Tel Aviv and Haifa. The population density for Israel as a whole is approximately 376 inhabitants per km2; in the region around Tel Aviv it is approximately 7,000 inhabitants per km2. By 2020, Israel was projected to be one of the most densely populated countries in the world, with a density of about 750 people per km2.
Most of the Palestinians live in the countryside and in the cities of the territories occupied by Israel. The number of residents of the collective kibbutzim has decreased significantly since 1948 in favor of the cooperative moshavim. The moshav is an agricultural settlement form in which about 4% of the Israeli population lives; the land is jointly owned and jointly managed.
The growth of the Jewish population in Israel has been very significant in a number of periods. In the period from 1948 to 1980 growth even exceeded 300%, until the early 1970s mainly as a result of the enormous immigration. Since then, immigration has fallen sharply (in 1987 the number of emigrants exceeded that of immigrants). From the early 1990s, many Jews immigrated from the Soviet Union. Since 1948, more than 2 million people have immigrated to Israel, and it is therefore sometimes said that Israel is one nation made up of a hundred nationalities. A broad dichotomy distinguishes Ashkenazi from Sephardi Jews: the Ashkenazim mainly come from the countries north of the Mediterranean, the Sephardim from the countries south and east of it.
The composition of the Jewish population is very diverse. In general eight groups can be distinguished according to origin:
-A rather heterogeneous category of Jewish residents of Palestine predating the modern immigration (aliyah). In 1882, the year in which modern Jewish emigration to Palestine actually began, approximately 24,000 Jews lived in Eretz Yisrael.
-The approximately 150,000 Jews who entered the country as settlers in four waves between 1882 and 1931, mainly coming from Eastern European countries.
-More than 300,000 Jews from Central Europe, especially Germany and the territories it occupied, who fled their country of birth.
-The Jews who entered the country legally or illegally in the period 1945-1948, predominantly Jews who had escaped Nazi persecution or survived the concentration camps.
-The hundreds of thousands of Jews who poured into the country after 1948 from Iraq, Yemen and North Africa, and in the eighties also from Iran and Ethiopia (the Falashas). The emigration of the Ethiopian Jews took place through Operation Moses, in which approximately 30,000 people were brought to Israel in two stages in the eighties and nineties of the last century.
-The Sabras, the Jews born in Israel (now by far the majority of the population).
-East European immigrants from after 1957, especially many from the Soviet Union after 1989.
-Zionists from the European countries, North and South America, South Africa and Australia.
Population growth was 1.51% in 2017 (birth rate 18.1 per 1000 inhabitants, death rate 5.2). The birth rate of the Palestinian population in Israel is greater than that of the Jewish population, while the death rates are almost the same.
In 2017, 27.5% of the population was under 15 years old; only 11.3% were over 65 years old. The average life expectancy in 2017 was 80.6 years for men and 84.5 years for women.
Israel's three largest cities:
Tel Aviv / Jaffa 3,608,000
Haifa 1,097,000
Jerusalem 839,000
Hebrew written in Hebrew (Photo: Tenth Plague, CC 4.0 International, no changes made)
There are two official languages in Israel: (modern) Hebrew and Arabic. Modern Hebrew, also called Ivrit, is spoken by almost the entire Jewish population. A number of immigrant groups have also retained their language of origin.
The main languages are briefly described below:
Biblical Hebrew
Biblical Hebrew was a dead language from the 2nd century AD until the end of the 19th century. The Old Testament, based on the Jewish Tanakh, the part of the Bible that dates from before the beginning of our era, was originally composed almost entirely in Hebrew.
In Hebrew we can distinguish:
1. Classical Hebrew (of which Biblical Hebrew is the bulk)
2. Post-Biblical Hebrew (from the Talmud and the so-called "midrashim", texts that explain the biblical texts)
3. Medieval Hebrew (when people first started writing about non-religious matters, so that all kinds of new words had to be added)
4. Modern Hebrew
Hebrew alphabet (Photo: Maltin75, CC 3.0 Unported, no changes made)
Modern Hebrew
The revival of Hebrew as a living language is due to the Lithuanian Jew Eliezer ben-Yehuda, who settled in Jerusalem in 1881 and distributed glossaries for modern society among the new generation of Hebrew speakers. From the introduction of the British Mandate, Hebrew was an official language alongside English and Arabic. New immigrants were required to take a course in modern Hebrew or Ivrit, and the language is now spoken by some five million people in Israel. A Hebrew university was opened as early as 1924, and in 1966 the Hebrew author Shmuel Yosef Agnon won the Nobel Prize in Literature. Hebrew is a Semitic language which, like Arabic, is written from right to left. It is difficult to transliterate into Latin letters, not least because it has no upper and lower case letters. In fact, Hebrew has two alphabets, one for printed and one for handwritten text. The Hebrew script has 22 consonants; vowels are indicated by sound marks. Each letter also represents a number, although numbers are usually written with the familiar Western numerals.
Arabic
After Muhammad's death, Palestine was one of the first areas to be conquered by the Muslims, and it rapidly became Islamized. Alongside spoken Arabic there is also a liturgical language, classical Arabic, which is increasingly distinct from the spoken language. Various Arabic dialects are spoken in Israel, brought by Jews from North Africa and the Middle East. Arabic is written from right to left, except for numbers, which are written from left to right. Just as in Hebrew, vowels are replaced by sound marks.
Palestinian Arabic
The Palestinian Arabic dialect is spoken by Palestine-born Arabs and by Jews who settled in the Holy Land before 1948.
This dialect is very similar to the Syrian and Lebanese dialect, but it has also been influenced by Hebrew and English. There are also regional differences and the language sounds different in cities and in the countryside.
English, French, Russian, Spanish, Yiddish
In Israel many other languages are spoken by immigrants who grew up in a different culture. For example, the so-called Jekkes, German immigrants from the 1930s, often still speak only German. Other languages still spoken include Amharic and other Ethiopian languages, Dutch, Hungarian, Polish, Chinese, and even Ladino, which is still spoken by descendants of Jews who were expelled from Spain in 1492.
English was the official language during the British Mandate and is still very important today. For pupils, English is compulsory at school from the age of 10.
As a counterweight to British colonial rule, French was the third official language until the Six Day War in 1967, but it is increasingly being supplanted by English. Many North African immigrants in particular still speak French.
The number of Russian speakers has grown enormously since the massive immigration from the Soviet Union and Russia. Many of them live in newer towns such as Carmiel, Natzrat Ilit and Arad.
Spanish is still mainly spoken by South American immigrants, who often settled in kibbutzim, but now also live in cities such as Carmiel, Ashkelon and Tel Aviv.
Yiddish originated in medieval Central Europe and is a mixture of Hebrew, German and a number of Slavic languages. After the extermination of the Jews in Eastern Europe, Yiddish seemed to be disappearing, but it is still spoken by immigrants from Russia, Poland, the Baltic States and Romania, and in ultra-Orthodox communities. Yiddish language and literature are even still taught at universities.
A few words:
English - Hebrew - Arabic
good morning - boker tov - sabah al cheir
good night - lajla tov - tisbah ala cheir
please - bevàkasha - min fadlak
Torah writer, Israel (Photo: Willem van der Poll, public domain)
Religion
82% of the population adheres to the Jewish faith, about 14% to Islam, 2.7% to Christianity, and 1.7% of the population are Druze.
The Jewish religion is based on the belief in one god and the covenant that God made with the Jewish people, the history of which began about 4,000 years ago with the era of Abraham. All aspects of Judaism originate from the Torah, the Mosaic Law. Together with the rabbinic commentaries, the Torah is the source of inspiration for the ideas and practices of all Jewish communities.
The Talmud is a collection of teachings that remains foundational to Judaism to this day. It consists of the "Mishnah", which records commandments and prohibitions supplementing the Torah, and the "Gemara", which contains rules of life and interpretations of the Torah.
Public life in Israel is dominated by the Jewish religion. The Sabbath rest has even been legislated, supplemented by local arrangements. The traditionally orthodox view of the Torah with its "248 Commandments and 365 Prohibitions" is a source of difficulty for public life in modern times.
The Chief Rabbinate, the highest religious institution of the Jews in Israel, consists of an Ashkenazi and a Sephardic Chief Rabbi. Rabbis are not priests, but teachers who, through their knowledge of the Talmud, come to different interpretations of the scriptures. The liberal Jews enjoy, at least in theory, the same facilities as other religious groups in building synagogues, but their rabbis are still not recognized as such.
The Hebrew calendar has a number of holidays that commemorate important events in the history of the Jewish people.
Feast of Tabernacles or Sukkot
This festival is held in memory of the return of the Jewish people to the Promised Land, when the Israelites still lived in tents and huts. The festival also marks the end of the agricultural season and the beginning of the annual cycle of the Torah. The Feast of Tabernacles begins five days after Yom Kippur.
Purim
The Feast of Lots commemorates the deliverance of the Jews in the Persian Empire from a planned massacre, as told in the Book of Esther.
Yom Ha-Atsmaut
This day celebrates the day when the independent state of Israel was proclaimed in 1948.
Rosh Hashanah and Yom Kippur
The Jewish New Year ushers in ten days of penitence, concluded by Yom Kippur, the Day of Atonement, the holiest day of the Jewish year, on which the fate of man is determined by the balance between good and evil deeds.
Hanukkah
The festival of lights celebrates the rededication of the Temple in Jerusalem after the Maccabees' victory over the Syrian (Seleucid) rulers.
Pesach
This festival celebrates the Exodus from Egypt and commemorates the night on which the angel of death killed the Egyptian firstborn but passed over the houses of the Israelites.
Shavuot or the Feast of Weeks falls seven weeks after Pesach. On this harvest festival the first fruits were offered in the Temple, and it commemorates the giving of the Torah to the Jewish people.
Yom Hashoa
Holocaust Remembrance.
Orthodox Jews in Jerusalem, Israel (Photo: Borja García de Sola Fernández, CC 2.0 Generic, no changes made)
Some Jewish Communities are:
Practicing Orthodox Jews
These Jews can be recognized by their yarmulke or "kipa", a small cap worn on the back of the head. They are convinced that man is responsible for history, and therefore participate fully in the social, economic and cultural life in Israel.
Reform Judaism
This liberal movement emerged in the 19th century and aimed to ease strict religious practices and rules. Its adherents remain committed to engaging with the achievements of Western society.
Secular Zionists
These pioneers of the idea of a Jewish state were strongly anti-religious. This does not alter the fact that they simply observe all Jewish holidays and eat kosher foods.
Hasidim
These Orthodox anti-Zionists emerged in Central Europe in the 18th century. They refuse to recognize the political authority of the state and do not view Zionism in the perspective of an explicit relationship with God. They value prayer more than study.
Christianity
The Holy Scriptures and the prophets foretold the coming of a Messiah. In the Christian faith, the living God of Israel took the form of the man Jesus Christ; after his death it became the task of man to build up the Kingdom of God.
Jesus was born between 7 and 5 BC in Bethlehem and lived in Nazareth. Between the ages of 28 and 30 he went with his disciples to Jerusalem via Galilee. At Passover of AD 30 he was sentenced to death and crucified by the Roman occupiers.
After the crucifixion and resurrection, the apostles began to spread his teachings; the converted Jews were first called Christians in Antioch. The teachings of Christ were recorded in the four Gospels, attributed to the apostles Matthew, Mark, Luke and John. The first three were written between the years 70 and 80, the fourth around the year 100.
Bethlehem, Church of the Nativity (Photo: Abraham, CC 3.0 Unported, no changes made)
Christian holidays:
Christmas
On Christmas Eve, Catholics celebrate the birth of Christ.
Easter
All Christians celebrate the crucifixion and resurrection of Jesus Christ on this day. Easter concludes Holy Week and the 40-day Lent. The Orthodox Easter celebration is accompanied by many festivities, and the faithful light candles for a month.
Assumption of Mary
This feast has been celebrated since the 7th century, when the transition of Mary, the mother of Jesus, to life with God is commemorated.
Ascension Day
Forty days after Jesus' crucifixion and resurrection, he gathered his followers and took them to a mountain on the east side of Jerusalem. There he told them that he had finished his work on earth and that he was going to leave for heaven.
Pentecost
Fifty days after Easter, the descent of the Holy Spirit on the congregation of the faithful is celebrated and the foundation of the Church in Jerusalem is commemorated.
A feast celebrated in the Eastern Churches honours the union of the human and the divine nature of Christ, as well as the unity of the Old and New Testaments.
The Christian minority in Israel is represented by seventeen denominations. The many divisions in Christianity have resulted from theological differences and the growing divide between East and West. Following is a description of some of the important Christian denominations present in Israel:
Armenians
In the mid-18th century part of the Armenian people founded a Catholic church. The Armenian Patriarchate of Jerusalem is under the supreme authority of the Armenian Church and numbers about 2,500 persons.
Copts
A Coptic Patriarchate has existed in Jerusalem since 1899. The Christian Copts come from Egypt and are closely associated with the hermits of the desert, Antony and Pachomius, and the monastic fathers Athanasius and Cyril of Alexandria.
Greek Orthodox
Jerusalem's Greek Orthodox community is headed by a patriarch, assisted by a synod of bishops, archimandrites and priests; all of them still hold Greek nationality.
Ethiopians
The Ethiopians were Christianized in the 6th century. An archbishop heads the Jerusalem Church.
Protestants
Lutherans, Anglicans, Baptists, and various national churches, such as the Scottish and Danish Churches, are all represented in Jerusalem.
Maronites
The Maronite community, of Syrian Christian origin, was founded in 410 by Saint Maron and makes up the majority of Christians in Lebanon. About 6,000 Maronites live in Israel.
Islam
Islam, a monotheistic religion, draws on the Holy Scriptures (Old and New Testaments) and calls for submission to Allah and His divine word, the Quran, as revealed to the Prophet Muhammad by the angel Gabriel.
Five Pillars of Islam (Photo: Xxedcxx, CC 3.0 Unported, no changes made)
Islam is based on the Quran and the Sunna (tradition), the account of the actions and words of the Prophet Muhammad. The five pillars of Islam are: the profession of faith, prayer, almsgiving, fasting during Ramadan and the pilgrimage to Mecca.
Allah speaks through the mouth of Muhammad, who stands in the line of prophets that begins with Adam and continues with Noah, Abraham, Moses, Solomon, Joseph and Jesus, among others.
The Quran was written down in the time of the caliph Uthman and consists of 114 chapters or suras, divided into 6,243 verses or ayat.
Muslims pray five times a day, facing the holy city of Mecca. The creed consists of uttering the words: "There is no God but Allah and Muhammad is his prophet." When this so-called "shahada" is publicly repeated three times, it marks conversion to Islam.
Another pillar of Islam is the pilgrimage or "hajj". At least once in his life a Muslim must have made a pilgrimage to Mecca. Dressed in white, he must walk around the Kaaba and walk seven times between the hills of Safa and Marwa, the "omra" ritual.
The Dome of the Rock and the Al Aqsa Mosque in Jerusalem have, since the 7th century, been Islam's most sacred sites after Mecca and Medina.
Al Aqsa Mosque in Jerusalem (Photo: Boubakar, public domain)
In Israel there are two main schools of Islam: the Sunnis and the Shias. The Sunnis strongly adhere to the legal aspect of Islam, that is, the law or "sharia" and Islamic jurisprudence or "fiqh". They have four schools of law: the Malikites, the Hanafites, the Shafiites and the Hanbalites. Palestinian Muslims adhere to the Shafiite and Hanafite schools.
The Shias are a minority in Israel. They are followers of Ali, who in their eyes, along with his sons Hassan and Hussein, was the only one to receive the Prophet's "will"; Ali is therefore, in Shia eyes, the only legitimate successor to Muhammad.
A small but emphatically present group are the Druze, who in turn split off from the Shias. The Shias recognize twelve Imams as successors to Ali; the Druze limit that number to seven. They settled in Palestine around the year 1000 and adhere to a mixture of Islam and Greek and Indian philosophies. Their texts and ceremonies are secret. The Druze have had the status of an autonomous religious community since 1957.
Five Major Islamic Festivals:
Mouloud
This festival celebrates the birth of the Prophet Muhammad.
Ramadan
On the first day of the ninth month, in which the Divine Word was revealed to the Prophet, Ramadan begins; throughout this month believers are supposed to fast from sunrise to sunset.
Aïd el-Fitr
The first day of the month of Shawwal marks the end of the Ramadan fast. On that evening a paste of dried apricots (amar ed-din) is traditionally eaten.
Laylat al-Miraj
The 27th day of the month of Rajab commemorates Muhammad's nocturnal journey from Mecca to Jerusalem, from where he ascended to heaven.
Aïd el-Adha
On the tenth day of dhu-el-Hijja they celebrate the sacrifice of Abraham, and each family slaughters a sheep.
Various faith communities:
Bahá'í gardens in Haifa, Israel (Photo: Israeltourism, CC 2.0 Generic, no changes made)
Bahá’í Faith
The Bahá'ís are officially recognized as the fourth faith community. The Bahá'í Faith is the youngest world religion (1844) with independent revelation and has about 6 million adherents throughout the world. It teaches that through a series of prophets (including Moses, Buddha, Jesus, Mohammed) God conveys to the people the basic teachings that are needed at that time. According to the Bahá'í Faith, the world is one country and all people are brothers and sisters. The center of the Bahá'í Faith is in Haifa, amid beautifully landscaped gardens and terraces that take the visitor all the way up from the base of Mount Carmel. There are a number of buildings on the site, including a tomb with a golden dome. It was built in 1909 by Abdu'l-Bahá, son of the founder of the Bahá'í Faith, Bahá'u'lláh.
Samaritans
The Samaritans already appear in the Bible as a persecuted religious community. The community now consists of only about 100 families, living near the sacred Mount Gerizim near Nablus. They consider the Torah, the first five books of the Old Testament, and the Book of Joshua to be holy scriptures; their first language is Arabic.
Karaites
The roughly 15,000 Karaites are a religious group that accepts the Torah as God's Word but rejects all later writings. They are neither Arab nor Islamic in origin and even have their own language.
State structure
Israel does not have a written constitution, because the orthodox and liberal currents could not agree on points of principle. In 1950 the Knesset (literally "meeting" or "assembly") decided to pass so-called basic laws from time to time, which together are to take the place of a constitution.
Knesset, Israel (Photo: James Emery, CC 2.0 Generic, no changes made)
The State of Israel is a parliamentary democracy with a president as head of state. Executive power rests with the prime minister and the ministers, who together form the government; they need the confidence of the parliament, the Knesset. The president asks the leader of the party with the most seats to form the government. The 120-member Knesset holds legislative power and is elected by the people once every four years, under a system of proportional, direct and secret suffrage; all citizens aged 18 and over have the right to vote. For a period each voter had two votes, one for the prime minister and one for a party; there is no division into electoral districts. The low electoral threshold of 1% means that the political landscape is highly fragmented, and a clear drawback is that the need for coalition governments gives small parties disproportionate power. No Israeli party has ever achieved an absolute majority. The electoral law was changed in March 2001, restoring the old system from the subsequent elections onward: one vote is cast, for one party.
The president appoints the cabinet formateur after consulting the representatives of the party factions in the Knesset. The president is elected by the Knesset in a secret ballot for a term of five years, with the possibility of re-election for another five years. Ministers may be members of the Knesset, but need not be. Although the president is the head of state, the office is largely ceremonial.
The judiciary consists of the judges and the courts. Israeli criminal law has three sources: laws from the period when the country was a province of the Ottoman Empire (until 1917), laws from the time of the British Mandate, and laws of the State of Israel itself. A number of family cases are heard by religious courts. The Supreme Court makes the final decision. For the current political situation, see the History chapter.
Administrative division
Districts of Israel (Photo: Golbez, CC 2.0 Generic, no changes made)
The country is divided into six districts and 17 subdistricts. The municipal councils are elected for four years and the occupied areas are under military administration, advised by civilian officials.
Six districts (district - capital - area - population):
Hadarom - Be'ér Sheva' - 14,231 km2 - 950,000
Hamerkaz - Ramla - 1,276 km2 - 1,550,000
Hazafon - Nazerat 'Illit - 3,324 km2 - 1,130,000
Hefa - Hefa - 863 km2 - 840,000
Tel Aviv - Tel Aviv-Yafo - 171 km2 - 1,165,000
Yerushalayim - Yerushalayim - 652 km2 - 795,000
University of Haifa (Photo: Zvi Roger, CC 3.0 Unported, no changes made)
Education
The education system in Israel is aligned partly with Europe and partly with the United States; instruction is given in Hebrew, or in Arabic for the Arab population. Israel has more than 3,000 schools and more than 110,000 teachers. On average, Israelis have more than 12 years of education, and illiteracy is below 3%.
There are four types of education in Israel: state education, religious education, Arab education, and private education. 56% of people aged 20-24 go to university and more than half of university students are women.
Israeli children attend kindergarten for a year from the age of five, before attending primary school for six years. This is followed by a three-year middle school, after which the ten-year compulsory education comes to an end.
After middle school, the majority of the students go to a vocational school or to a higher school. About a quarter of Jewish families send their children to one of the many religious schools.
Israel has six universities: the Hebrew University in Jerusalem, the religious Bar-Ilan University near Tel Aviv, Tel Aviv University, the University of Haifa, the Ben Gurion University in Beersheva, and the Israel Institute of Technology (Technion) in Haifa. In addition, the Weizmann Institute is also considered a university.
Almost a quarter of the working population has completed a university education, placing Israel in third place in the world ranking after the United States and the Netherlands.
Typically Israeli
Kibbutz Gat from the air (Photo: Amos Meron, CC 3.0 Unported, no changes made)
The kibbutzim (Ivrit: singular kibbutz, plural kibbutzim) are a typically Israeli experiment. They were built not only to achieve the socialist ideal of equality, but also for security reasons. The first kibbutz was founded in 1909 in an area south of Lake Genesareth. At present Israel has about 270 kibbutzim, and new ones are still being set up. On average about 400 people live and work in a kibbutz, but a few villages have more than 1,000 inhabitants.
These agricultural settlements are based on the principle of joint production and consumption and equality of all members. Today only three percent of Israelis live in kibbutzim. Almost half of all agricultural products come from these companies and many of them also make industrial products. About fifty kibbutzim in holiday areas have embraced tourism. Several dozen of them are members of the Kibbutz Hotels Chain, which has its own headquarters in Tel Aviv. The kibbutz movement has its own institutes for higher education and scientific research, in addition to its own chamber orchestra, theater groups, dance groups, galleries and publishers. Education, health care, childcare, laundries and other services are free.
In the past, all the buildings of the kibbutz were within a walled space with only one entrance. From the 1930s onwards, the houses and relaxation rooms were no longer around the courtyard, but separated from the other buildings. During the British Mandate the first kibbutzim consisted of dilapidated wooden barracks, with palisades and guarded by a watchtower.
From the kibbutzim emerged the moshavim, a type of cooperative in which families combine the individual exploitation of their farm with joint ownership of the means of production and services. The moshav has no fixed design, but each family receives an approximately equal plot of land of equal quality. The first moshav was founded in 1920, and today about 4% of the Israeli population lives in one of the 450 moshavim.
In the "mosjavim shitufim" the land is also common property and there is joint management.
Kosher McDonald's in Israel (Photo: Andrés Monroy-Hernández, CC 2.0 Generic, no changes made)
The indication "kosher" at a restaurant means that the kitchen is under the supervision of the chief rabbinate. Kosher guarantees that the choice of ingredients and preparation are under strict supervision. Non-kosher foods are unclean, or "tefa".
In principle, a distinction is made between natural products that require no special preparation (such as fruit, vegetables and coffee), meat that may be eaten under certain conditions, and unclean products that may never be eaten.
Only the flesh of animals slaughtered with a razor-sharp knife, so that no blood remains in it, is kosher, because blood is considered part of the soul.
Kosher mammals are cloven-hoofed ruminants such as cattle, sheep and goats, but not the pig, rabbit or camel, which either lack cloven hooves or do not chew the cud. Among poultry, duck, goose, chicken, pigeon, pheasant and turkey are kosher; game birds are not. Of the aquatic animals, only those with fins and scales are kosher, so mussels and crustaceans are not.
Dairy products must come from kosher animals, and meat and milk must be kept strictly separate. Dairy products may only be consumed five hours after a meat meal; after a milk product, one must wait two hours before eating meat.
Weizmann Institute of Science, Israel (Photo: Niv, CC 2.0 Generic, no changes made)
Economy
The period from the proclamation of the State of Israel in 1948 until 1973 was characterized by rapid economic growth: the gross national product (GNP) increased by approximately 9% annually. This development was made possible by large capital imports in the form of foreign aid and loans, large donations from Jews outside Israel, payments and supplies in the context of German reparations, and increased productivity.
After 1973 the economic situation deteriorated rapidly: in 1977 economic growth was only 0.5%. By 1982 growth had almost come to a standstill, after which it gradually recovered to 5.2% in 1987, fell to just 1% in 1989, and was even negative in 2002 (-0.8%). In recent years the economy has again grown considerably, at around three percent (3.8% in 2013).
The most important economic problems include high inflation, the balance of payments, high defense costs, a sharply increased foreign debt, and unemployment. Inflation reached 58% in 1974 and 440% in 1984; a very tight austerity policy then reduced it drastically, to 16% in 1987, although it rose again to almost 21% in 1989. Over the period 1985-1994 it averaged 18%, over 1995-1996 8.3%, and in 2002 5.7%; in recent years it has improved further, to 1.7% in both 2012 and 2013. Unemployment fell from 11% in 1992 to 6.8% in 2013, but remains high, especially in peripheral areas and among minorities. The biggest blows have fallen on traditional industries, which face increasing competition from low-wage countries.
GDP has grown by 6 to 7% in some years and stood at $36,400 per capita in 2017. The composition of GDP in 2017 was as follows (the distribution of the labor force in brackets): agriculture 2.4% (1.1%); industry, mining and construction 25.6% (17.3%); government, services and transport 69.5% (81.6%). Women make up 40% of the working population.
Agriculture, livestock, forestry and fishing
Orange cultivation, Israel (Photo: Lehava Activity 2013, Pikiwiki Israel, CC 2.5, no changes made)
About 20% of Israel, some 4,100 km2, is cultivated for agriculture. Domestic production provides the vast majority of Israel's food needs, and that has been agricultural policy since 1948. The significance of the agricultural sector for the Israeli economy has nevertheless declined dramatically, from over 60% in 1948 to just over 1% in 2017. The moshavim and the kibbutzim are the most important forms of agricultural enterprise, although many of these cooperatives have been in financial difficulty. Given the low annual rainfall, irrigation is essential; advanced irrigation systems are used to make ever more land in the desert-like south suitable for agriculture. Citrus fruits are the principal agricultural produce, and horticultural crops such as vegetables and flowers, as well as cotton, dates, olives, almonds, grapes, avocados and bananas, are also important. In recent years there has been growing investment in new products, mainly floriculture, advanced technology and know-how. About 90% of the flower production, intended for export, goes to the Netherlands, where it is auctioned and distributed to the rest of Europe. Bulk production is increasingly being abandoned. Grain is mainly grown in the valleys of Jezreel and Harod.
Livestock farming mainly includes sheep, goats, cattle and poultry. Israel provides for its own needs for milk, eggs, chicken and turkey. Beef and mutton often have to be imported.
Forestry is of exceptional importance, given the great hydrological value of the forests. More than 600 km2 is occupied by forest.
Fishing is practiced in the Mediterranean and the Atlantic, but is not very profitable. Freshwater fish come from Lake Kinneret and from fish-farming ponds (carp) in the former Hula Lake area. Most fish, about 90,000 tons, is currently still imported.
Mining and energy supply
The Dead Sea contains billions of tons of various salts. In modern Sodom, the Dead Sea Works extract potash and bromine from it, using natural gas extracted at Arad; Israel is the world's largest exporter of bromine. The Negev desert in particular is also rich in minerals such as copper, phosphate, marble, gypsum and glass sand. The exploitation of minerals is predominantly in the hands of the state. The raw materials for the chemical industry are extracted from Israel's own natural resources, mainly minerals such as potash, magnesium and bromide from the Dead Sea and phosphate from the Negev.
Refinery in Haifa, Israel (Photo: Pikiwikisrael, CC 2.5 Generic, no changes made)
By far the most important energy source is petroleum, which, however, must be imported almost entirely. After the return of the Abu Rodeis fields in 1975 and the Alma fields in 1979 to Egypt, the United States has guaranteed Israel's oil supply. Ashdod and Haifa have refineries, and small oil fields have been found near Ashdod. The Haifa refinery has a capacity of more than 6 million tons per year, more than enough to meet the country's own needs. Natural gas is extracted near the Dead Sea, and exploitable gas fields have now also been discovered in the part of the Mediterranean Sea under Israeli authority.
In 1979, a coal-fired power plant in Hadera was commissioned. Much value is attached to the development of nuclear energy: in 1976 an agreement was signed with the United States for the construction of two nuclear power plants, the first of which was commissioned in 1986. A small hydroelectric power station is located on the Jarmuk, a tributary of the Jordan. Many houses are equipped with solar collectors, and Israel, committed to the development of alternative and clean energy sources, is already a world leader in solar energy.
Because of the current water crisis, Israel will invest approximately 4 billion euros in water management and water supplies over the next ten years, mainly in desalination and other purification facilities. This is desperately needed, because population growth is expected to double current water consumption. Dry winters have highlighted the need for structural solutions, not just for Israel but for the entire region. The water supply is also closely linked to the security issue: two thirds of Israel's water sources lie outside the country, on the Golan Heights and the West Bank. For the time being, a lot of water will still have to be imported.
Aircraft industry, Israel (Photo: Tiraden, CC 4.0 International, no changes made)
The lack of raw materials and energy and the small domestic market limit industrial development. The fastest growing industrial sectors are the capital-intensive electronics and metal industries, including aircraft construction and the arms industry. The chemical industry is one of the driving forces of the Israeli economy: in 2017 it constituted 25% of total industry and was one of Israel's main export sectors (70% goes to the United States). The food and textile industries have declined in importance. Diamond, cement and wood processing are also important.
Major industrial centers are Tel Aviv, Haifa, Jerusalem, Ramle, Ashkelon, Ashdod, Hadera and Petach Tikwa.
The software industry is very important to the Israeli high-tech sector, with sales of billions of dollars and approximately 13,000 highly educated employees. Initially this sector relied mainly on the military; nowadays the emphasis has shifted to civil applications.
Exports of Israel (Photo: R. Haussmann, Cesar Hidalgo, CC 3.0, no changes made)
Important imports are petroleum, machinery, means of transport, rough diamonds, weapons, grain, and edible oils and fats.
In 1975 a trade agreement was concluded with the EU, which stipulated that from 1980 there would be no mutual import restrictions on each other's industrial products. A free trade agreement with the United States was concluded in 1986 and new agreements with the EU followed in 1988.
In 2017, 30% of exports went to the EU, 28.8% to the United States and 7% to Hong Kong. In the same year, 40% of Israeli imports came from the EU and 11.7% from the United States. The main import partners in 2017 were the United States, China, Germany, Belgium and Switzerland; the main export partners were the United States, Belgium, China and Hong Kong.
Ben Gurion Airport, Israel (Photo: Chris Hoare, CC 2.0 Generic, no changes made)
The highways connecting Eilat to Haifa form a significant road link between the Red Sea and the Mediterranean. The road network covers more than 20,000 kilometers, of which 500 km consists of four-lane roads.
There are a number of railway lines (total length 1,275 km), but their density is low. The line running south from Beersheva to Eilat is important for opening up the Negev. However, as the railways are very vulnerable to attacks, few goods and passengers are transported by train. There are plans at central and local level to expand and improve the national railway network and to build light-rail connections in the major cities.
Public passenger transport takes place mainly by means of approximately 6,000 passenger buses. The Egged Transportation Cooperative Society is the world's second-largest public transport company, after London Transport.
Much use is made of the sherut taxis, shared taxis in which people pay only for their own seat. A striking feature of the infrastructure in the Occupied Territories is the network of Israeli roads: these are not accessible to Palestinian residents and form a kind of corridor between the various Jewish settlements and Israeli territory.
For a long time, Israel had only one modern seaport, Haifa. The outdated port of Jaffa-Tel Aviv has closed; in Ashdod, 30 km south of Tel Aviv, a second Mediterranean seaport was built in 1965, and the new seaport of Eilat was established in the same year. Israel Shipyards in Haifa is one of the largest shipyards in the Eastern Mediterranean.
There are passenger ferry services from Haifa to Cyprus, Greece and Italy. Eilat is mainly used for import from and export to Asia and Australia. The main shipping company is Zim Israel Navigation, one of the ten largest container shipping companies in the world.
The international airport is Ben Gurion at Lydda (Lod) near Tel Aviv, from where the Israeli airline El Al maintains connections with four continents. Foreign airlines also fly to Ben Gurion regularly. The other Israeli airports are significant only for domestic traffic, which is provided by the airline Arkia.
Israel is one of the world leaders in aircraft technology, including Israel Aircraft Industries.
Economy in the Occupied Territories
Micro-industry, Gaza (Photo: Ehabich at English Wikipedia, CC 3.0, no changes made)
The Palestinian economy in the West Bank and Gaza Strip is highly dependent on Israel. In the 1990s, more than 80% of Palestinian imports came from Israel, and more than 80% of Palestinian exports went to Israel. Palestinian workers in Israel contribute more than a quarter of the gross national product by transferring part of their wages to family members in the Palestinian territories.
A characteristic of the Palestinian economy is small-scale agriculture and production (olives, citrus fruits). The economy is highly dependent on imports. The Palestinian economy is especially vulnerable because of Israel's security policy. Israel's closing of the borders hampers the free movement of people and goods, which has disastrous consequences for the Palestinian economy. The Palestinian economy also has to contend with the barriers Israel imposes on foreign investment.
In many ways, the Palestinian economy has characteristics of a developing country: malfunctioning government services, poorly developed infrastructure, unclear tax laws and a lack of a well-functioning banking system. On the other hand, the Palestinian population is highly educated and there are many skilled workers.
Holidays and Sightseeing
Dead Sea, Israel (Photo: Yair Haklai, CC 3.0 Unported, no changes made)
Tourism has been of great importance to the Israeli economy since the 1990s: it brings in a great deal of foreign money, and many people have found jobs in the sector. The problem for Israel is the political instability of the region, which means that the number of tourists can vary widely from year to year. Most tourists come from the United States, Germany, France and the Netherlands. In the first half of 2008, more than 1.5 million international travelers visited the country, and Israel expected to attract about 2.8 million visitors over the whole of 2008. Well-known attractions are the Dead Sea, the city of Tel Aviv, Jerusalem and Eilat; the latter two are discussed in more detail below.
Jerusalem, Dome of the Rock and Western Wall, Israel (Photo: public domain)
A visit to the city of Jerusalem, which is sacred to Christians, Muslims and Jews alike, is one of those things you must do once in your life. A must-see in the city is the Wailing Wall, or Kotel, at the Temple Mount. The wall is a remnant of the western side of the Jewish Temple complex that once stood there, which is why it is also called the Western Wall. Jews have been gathering here to pray since AD 70; traditionally, notes with wishes are placed between the stones of the wall, and the Western Wall is the stage for many festive events such as bar mitzvahs. The Church of the Holy Sepulchre is Jerusalem's main Christian pilgrimage site: it is believed that Jesus was crucified and buried here. The present church was dedicated by the Crusaders in 1149, and this holy place is well worth a visit. The Dome of the Rock, with its impressive golden dome, should not be missing from this list. Like the Wailing Wall and the Church of the Holy Sepulchre, it is a place of great religious importance: Muhammad is said to have begun his journey to heaven here, and Abraham is said to have offered his son in sacrifice here. Whether you believe this or not, the Dome of the Rock is a particularly beautiful building and well worth seeing. The dome was built in 691, Quranic verses adorn the walls, and the ceiling decoration is also very worthwhile.
Eilat at night, Israel (Photo: Gal Gubesi, CC 4.0 International, no changes made)
Eilat has become Israel's vacation paradise in recent years. The Israelis themselves discovered it long ago, but the resort is now also gaining fame among foreign holidaymakers. Its attraction lies mainly in the rich underwater world and the pleasant climate: the reef off the coast of Eilat holds over 230 species of coral and shelters the most wonderful fish, and even turtles and dolphins. A vacation in Eilat simply cannot pass without a look at the coral reef; diving and snorkelling are of course possible, but a trip in a glass-bottom boat is also a fun way to admire life below the surface. Interested in more natural beauty? Not far from the seaside resort lies the Timna Valley, where copper ore was once mined and which is now a nature park, with the striking Pillars of Solomon, the Mushroom and the famous arches carved out of the rock.
Item 6: Understand type conversions
In general, Rust does not perform automatic conversion between types. This includes integral types, even when the transformation is "safe":
let x: i32 = 42;
let y: i16 = x;
error[E0308]: mismatched types
--> use-types/src/
14 | let y: i16 = x;
| --- ^ expected `i16`, found `i32`
| |
| expected due to this
help: you can convert an `i32` to an `i16` and panic if the converted value doesn't fit
14 | let y: i16 = x.try_into().unwrap();
Rust type conversions fall into three categories:
• manual: user-defined type conversions provided by implementing the From and Into traits
• semi-automatic: explicit casts between values using the as keyword
• automatic: implicit coercion into a new type
The latter two don't apply to conversions of user-defined types (with a couple of exceptions), so the majority of this Item will focus on manual conversions. However, sections at the end will discuss casting and coercion – including the exceptions where they can apply to a user-defined type.
User-Defined Type Conversions
As with other features of the language (Item 5), the ability to perform conversions between values of different user-defined types is encapsulated as a trait – or rather, as a set of related generic traits.
The four relevant traits that express the ability to convert values of a type are:
• From<T>: Items of this type can be built from items of type T.
• TryFrom<T>: Items of this type can sometimes be built from items of type T.
• Into<T>: Items of this type can be converted into items of type T.
• TryInto<T>: Items of this type can sometimes be converted into items of type T.
Given the discussion in Item 1 about expressing things in the type system, it's no surprise to discover that the difference with the Try... variants is that the sole trait method returns a Result rather than a guaranteed new item; the trait definition also requires an associated type that provides the type of the error E for failure situations. You can choose to ignore the possibility of error (e.g. with .unwrap()), but as usual it needs to be a deliberate choice.
There's also some symmetry here: if a type T can be transformed into a type U, isn't that the same as it being possible to create an item of type U from an item of type T?
This is indeed the case, and it leads to the first piece of advice: implement the From trait for conversions. The Rust standard library had to pick just one of the two possibilities (to prevent the system from spiralling around in dizzy circles), and came down on the side of automatically providing Into from a From implementation.
If you're consuming one of these two traits, as a trait bound on a new trait of your own, then the advice is reversed: use the Into trait for trait bounds. That way, the bound will be satisfied both by things that directly implement Into, and by things that only directly implement From.
This automatic conversion is highlighted by the documentation for From and Into, but it's worth reading the code too:
impl<T, U> Into<U> for T
where
    U: From<T>,
{
    fn into(self) -> U {
        U::from(self)
    }
}
Translating a trait specification into words can help with understanding more complex trait bounds; in this case, it's fairly simple: "I can implement Into<U> for a type T whenever U already implements From<T>".
It's also useful in general to look over the trait implementations for a standard library type. As you'd expect, there are From implementations for safe integral conversions (From<u32> for u64) and TryFrom implementations when the conversion isn't safe (TryFrom<u64> for u32).
There are also various blanket trait implementations. Into just has the one shown above, but the From trait has many impl<T> From<T> for ... clauses. These are almost all for smart pointer types, allowing the smart pointer to be automatically constructed from an instance of the type that it holds, so that methods that accept smart pointer parameters can also be called with plain old items; more on this below and in Item 8.
The TryFrom trait also has a blanket implementation for any type that already implements the Into trait in the opposite direction – which automatically includes (as above) any type that implements From in the same direction. This conversion will always succeed, so the associated error type is the helpfully named Infallible.
There's also one very specific generic implementation of From that sticks out, the reflexive implementation:
impl<T> From<T> for T {
    fn from(t: T) -> T {
        t
    }
}
Translating into words, this just says that "given a T I can get a T". That's such an obvious "well, doh" that it's worth stopping to understand why this is useful.
Consider a simple struct and a function that operates on it (ignoring that this function would be better expressed as a method):
/// Integer value from an IANA-controlled range.
#[derive(Clone, Copy, Debug)]
pub struct IanaAllocated(pub u64);

/// Indicate whether value is reserved.
pub fn is_iana_reserved(s: IanaAllocated) -> bool {
    s.0 == 0 || s.0 == 65535
}
This function can be invoked with instances of the struct
let s = IanaAllocated(1);
println!("{:?} reserved? {}", s, is_iana_reserved(s));
but even if From<u64> is implemented
impl From<u64> for IanaAllocated {
    fn from(v: u64) -> Self {
        Self(v)
    }
}
it can't be directly invoked for u64 values
error[E0308]: mismatched types
--> casts/src/
75 | if is_iana_reserved(42) {
| ^^ expected struct `IanaAllocated`, found integer
However, a generic version of the function that accepts (and explicitly converts) anything satisfying Into<IanaAllocated>
pub fn is_iana_reserved_anything<T>(s: T) -> bool
where
    T: Into<IanaAllocated>,
{
    let s = s.into();
    s.0 == 0 || s.0 == 65535 // same check as is_iana_reserved
}
allows this use:
if is_iana_reserved_anything(42) {
    // ...
}
The reflexive trait implementation of From<T> means that this generic function copes with items which are already IanaAllocated instances, no conversion needed.
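A quick illustrative check that both call forms work:

let s = IanaAllocated(1);
assert!(!is_iana_reserved_anything(s)); // reflexive From<T> for T: no conversion
assert!(is_iana_reserved_anything(0u64)); // From<u64> conversion applied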
This pattern also explains why (and how) Rust code sometimes appears to be doing implicit casts between types: the combination of From<T> implementations and Into<T> trait bounds leads to code that appears to magically convert at the call site (but which is still doing safe, explicit conversions under the covers). This pattern becomes even more powerful when combined with reference types and their related conversion traits; more in Item 8.
Rust includes the as keyword to perform explicit casts between some pairs of types.
The set of type pairs that can be converted this way is fairly limited, and the only user-defined types it includes are "C-like" enums (those that have an associated integer value). General integral conversions are included though, giving an alternative to into():
let x: u32 = 9;
let y = x as u64;
let z: u64 = x.into();
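The C-like enum case mentioned above looks like this (a small sketch with an illustrative enum):

enum HttpResultCode {
    Ok = 200,
    NotFound = 404,
}

let code = HttpResultCode::NotFound as u16;
assert_eq!(code, 404);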
The as version also allows lossy conversions:
let x: u32 = 9;
let y = x as u16;
which would be rejected by the from / into versions:
error[E0277]: the trait bound `u16: From<u32>` is not satisfied
--> casts/src/
113 | let y: u16 = x.into();
| ^^^^ the trait `From<u32>` is not implemented for `u16`
= help: the following implementations were found:
<u16 as From<NonZeroU16>>
<u16 as From<bool>>
<u16 as From<u8>>
= note: required because of the requirements on the impl of `Into<u16>` for `u32`
For consistency and safety you should prefer from / into conversions to as casts, unless you understand and need the precise casting semantics (e.g. for C interoperability).
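A small sketch contrasting the two for a narrowing conversion (values illustrative): as silently keeps only the low-order bits, while try_from surfaces the failure:

use std::convert::TryFrom;

let big: u32 = 0x1_0005;

// `as` truncates to the low 16 bits.
let lossy = big as u16;
assert_eq!(lossy, 5);

// try_from reports the out-of-range value instead.
assert!(u16::try_from(big).is_err());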
The explicit as casts described in the previous section are a superset of the implicit coercions that the compiler will silently perform: any coercion can be forced with an explicit as, but the converse is not true. (In particular, the integral conversions performed in the previous section are not coercions, and so will always require as.)
Most of the coercions involve silent conversions of pointer and reference types in ways that are sensible and convenient for the programmer, such as:
• converting a mutable reference to a non-mutable reference (so you can use a &mut T as the argument to a function that takes a &T)
• converting a reference to a raw pointer (this isn't unsafe – the unsafety happens at the point where you're foolish enough to use a raw pointer)
• converting a closure that happens not to capture any variables into a bare function pointer (Item 2)
• converting an array to a slice
• converting a concrete item to a trait object, for a trait that the concrete item implements
• converting3 an item's lifetime to a "shorter" one (Item 14).
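A few of these coercions in action (a minimal sketch; the function names are illustrative):

fn takes_shared(_x: &u32) {}
fn takes_slice(_v: &[i32]) {}
fn takes_fn_ptr(_f: fn() -> i32) {}

let mut m = 42u32;
takes_shared(&mut m);     // &mut u32 coerces to &u32
let _p: *const u32 = &m;  // &u32 coerces to *const u32
let arr = [1, 2, 3];
takes_slice(&arr);        // &[i32; 3] coerces to &[i32]
takes_fn_ptr(|| 7);       // non-capturing closure coerces to a fn pointer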
There are only two coercions whose behaviour can be affected by user-defined types. The first of these is when a user-defined type implements the Deref or the DerefMut trait. These traits indicate that the user-defined type is acting as a smart pointer of some sort (Item 8), and in this case the compiler will coerce a reference to the smart pointer item into being a reference to an item of the type that the smart pointer contains (indicated by its Target).
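For instance (a sketch using the standard Box smart pointer, which implements Deref):

fn inspect(s: &IanaAllocated) {
    println!("{:?}", s);
}

let boxed = Box::new(IanaAllocated(1));
inspect(&boxed); // &Box<IanaAllocated> coerces to &IanaAllocated via Deref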
The second coercion of a user-defined type happens when a concrete item is converted to a trait object. This operation builds a fat pointer to the item; this pointer is fat because it also includes a pointer to the vtable for the concrete type's implementation of the trait – see Item 8.
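A one-line illustration, reusing the IanaAllocated type (which derives Debug):

use std::fmt::Debug;

let concrete = IanaAllocated(1);
// The coercion builds a fat pointer: a data pointer plus a pointer to the
// vtable for IanaAllocated's implementation of Debug.
let obj: &dyn Debug = &concrete;
println!("{:?}", obj);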
1: More properly known as the trait coherence rules.
2: For now – this is likely to be replaced with the ! "never" type in a future version of Rust.
3: Rust refers to these conversions as "subtyping", but it's quite different from the definition of "subtyping" used in object-oriented languages.
|
Don't let Energy Bills add to debt problems
With the summer months behind us, it's time to start looking at saving for the winter. With colder weather we need an extra bit of heat and our gas and electricity bill will rise accordingly. Money Dashboard has compiled our recommendations to stop those bills getting out of control:
1. Heat is expensive
It's useful to bear in mind that the devices that create heat tend to be the ones that need the most fuel. Only fill the kettle with the amount of water you need. Don't leave the iron or hair-straighteners on longer than you need to. If your heating is on, turn down the thermostat and put on another layer instead.
2. Be eco-friendly
A lot of environmental advice has budgetary benefits too. Switch to energy saving light bulbs and turn them off when you're not in the room. Also, don't leave electronics on standby. They still use electricity until you turn them off. The Energy Saving Trust reports that £900 million of electricity is wasted every year due to appliances being left on standby.
3. Do your own meter readings
Energy providers' estimates are just that: estimates. If you are billed too little, you will end up with a big bill at the end of the year that you were not expecting. If they bill you too much, you'll be out of pocket in the short term. You can usually submit your own meter reading online or over the phone, and an accurate reading means an accurate bill; if you know how much you are spending you can budget more effectively.
Money Dashboard automatically identifies and categorises transactions like Utility payments allowing you to track your spending and budget accordingly. Put simply, if you know how much you're going to spend on bills, you can work out how much you have left to spend on other things, or to pay off debts.
4. Pay by monthly Direct Debit
Electricity and gas suppliers prefer payment by Direct Debit and are sometimes willing to offer incentives in the form of 5-10% discounts to customers who pay by this method.
5. Internet billing
Some providers will offer discounted rates to internet customers, as the reduced need for personal interaction and paperless billing saves them money. Often you can save up to 10% relative to the standard tariff.
6. Try to avoid pre-payment meters
Pre-payment meters tend to be more costly than billed meters paid by direct debit. It might cost you to get a billed meter installed, but it's worth it for the savings as well as the convenience.
7. Don't be fooled by ‘dual fuel'
‘Dual fuel' is when you get gas and electricity from the same supplier. A lot of the time, it's cheaper overall to go with dual fuel as suppliers offer deals. But that's not always the case; sometimes getting fuel from two different suppliers works out cheaper overall. Be sure to compare prices for dual fuel and separated electric and gas suppliers.
8. Find the cheapest supplier
You may have been with the same gas and electricity supplier for years, but there might be a cheaper deal out there. It's surprisingly easy to switch your supplier: you keep the same gas, electricity, pipes and meter, and the same safety standards. The only difference is the price and the customer service.
The quickest way to find the cheapest gas and electricity supplier for you is to use a price comparison website such as Energy Helpline. You input your address and details and they show you what deals are available.
9. Reduced rates for financial hardship
Some utility suppliers will offer discounted rates for those who can prove they are receiving state benefits or otherwise experiencing financial hardship. There are also grants available from the Energy Saving Trust and various charities that will pay for improved home heating and insulation or cover part or all of your utility bills. Criteria for application and the extent of the discount or grant will vary, and in some cases it may not be cheaper than the best online or Direct Debit deals, but if you believe you may be entitled it is worth asking your provider, or read more at the Consumer Wiki.
|
WVU researchers explore how a benign virus can be used to treat eye diseases
Viruses have a bad reputation for a good reason. But some viruses don't harm people. Some can even help them.
Researchers with the West Virginia University School of Medicine are studying how a benign virus can make new treatments for eye diseases possible. They're exploring how to use engineered adeno-associated virus, or AAV, to compensate for missing protein or swap out genetic mutations that cause vision problems and replace them with DNA that works as it should.
"Eighty-five percent of Americans are seropositive for AAV. However, the virus has never been associated with any pathological effect," said Wen Tao Deng- an assistant professor in the Department of Ophthalmology and Visual Sciences- who is leading the effort. "We engineered the virus to use it as a vehicle to deliver the genes we are interested in. We use it as a tool to actually benefit us. So, this is a good virus."
The National Eye Institute has awarded the five-year project $1.9 million.
The project focuses on genetic mutations that affect specific photoreceptors in the eye, called L- and M-cones.
"When you lose your L- and M-cones, basically you lose visual acuity; you lose your ability to read; you lose your color vision. It severely, severely affects your daily function."
Wen Tao Deng, Assistant Professor, Department of Ophthalmology and Visual Sciences, West Virginia University School of Medicine
Deng and her colleagues will use mouse models they have genetically modified to lose their L- and M-cones in a way that mimics the experience of humans who inherit this mutation.
They will analyze, on a molecular level, the unique mechanisms that underlie the disease.
They'll take advantage of AAV's "Trojan horse" ability to sneak into the nucleus of a photoreceptor and either replace its missing protein or rout a troublesome mutation while installing healthy DNA in its place.
Deng is also interested in developing new treatments that could delay the onset, or slow the progression, of a range of eye diseases, from red-green color vision defects (the most common form of color deficiency) to blue cone monochromacy (a much rarer condition) and other forms of cone dystrophy.
"We are also interested in delaying the degeneration," Deng said. "Some patients gradually lose vision. So, if you could delay their photoreceptor cell degeneration for 5 to 10 years, this also could give them an expanded window of treatment. This is especially important for children to buy time until we identify a treatment to reverse it."
|
February: Jean Sibelius
Jean Sibelius (8 December 1865 – 20 September 1957)
He was a Finnish composer of the later Romantic period. He is one of the most famous people from Finland and one of the greatest composers of symphonies of all time. He was born at a time when Russia had a lot of power in Finland and the Finnish people were trying hard to keep their own culture and their independence. This nationalism can be heard in a lot of his music, especially some of the choral music. After 1928 he composed very little. He lived in retirement in his home in the Finnish countryside.
|
Linux Features: Modify Everything, Even the Windows
Customization: Linux can be changed at will. You can modify absolutely everything, from the way the system starts up to the appearance of the windows, the way the mouse behaves, or the operation of the program that manages the internet connection. You can also replace parts of the system. Windows is not very editable: apart from the appearance of the windows, you cannot change much.
Automation:
Under Windows, it is difficult to automate certain tasks with scripts (because you have to click on buttons). Windows scripting is limited, so additional programs must be used (batch files, WSH, VBScript, KiXtart, AutoIt…).
Under Linux, absolutely everything is scriptable. This lets you automate any task you want (some examples: rename a set of files; automatically turn off the computer at a given time or after a task completes; reconfigure the firewall at a specific time or when a specific event is triggered…).
Availability of sources:
Sources for Linux and its tools are available. This allows you to see the internal workings of the system and even to modify it. Anyone can monitor what’s being done, and quickly find bugs. Linux and its software thus evolve thanks to contributions coming from all over the planet.
Windows is a black box. We don’t know how it works internally, and no one other than Microsoft can modify and correct it. You have to trust Microsoft.
The availability of sources has security implications (see below).
Independence:
Windows XP can only be installed after validation by internet with Microsoft servers. You are dependent on Microsoft to be able to install Windows XP. If Microsoft decides to shut down Windows XP, you will no longer be able to install or reinstall it.
More and more software companies are using this kind of mechanism. Your computer, software, and your own personal files are becoming increasingly dependent on outside private companies, which have more and more control over them.
With the technologies being developed, you won’t even be able to start your computer without outside permission.
With Linux, you are the master of your computer and the system is completely autonomous and independent.
Peripheral devices:
Linux supports many more devices than Windows as standard. That said, device manufacturers almost always provide drivers for Windows, but rarely for Linux. If you have a very recent device, it is possible that there is no driver for Linux, and you may end up with a device that you cannot use.
That said, devices older than 6 months are generally usable on Linux without any problems. Moreover, Linux drivers are maintained almost for life: you will not end up one day with a device that you can no longer use (as happened to users moving from Windows XP to Vista).
System requirements:
Linux requires less powerful machines than Windows. Even with an old 386 with 64 MB of RAM, you can surf the internet, draw and type your mail. And with a powerful machine, it’s a real pleasure.
In addition, Linux tends to swap less than Windows (better management of virtual memory).
The latest versions of Windows (Vista for example) require a powerful computer for even the simplest tasks (working with files or typing mail). Vista requires at least 1 GB of RAM and 15 GB of disk space. On the Linux side, Ubuntu is content with 256 MB of RAM and 4 GB of disk. And there are Linux distributions that work with 64 MB of RAM … and without a hard drive. Linux therefore allows access to computers for the greatest number at the lowest cost, in particular with old computers.
Openness and compliance with standards:
Linux is more open to standards than Windows. This makes Linux easier to interconnect with other systems than Windows. For example, Linux is supplied as standard with HTTP, FTP, telnet, SMTP, POP3, ssh, SMB, NFS clients and servers … This makes Linux a system of choice for everything related to networks and communications.
Under Windows, you most often have to buy or install additional software, sometimes quite expensive. Standards are often poorly respected, which makes interconnecting systems complicated; in addition, Microsoft often tries to impose its own standards, which are redundant with existing ones. This makes communication with other systems more difficult.
Decoupling of the graphical interface:
Under Linux, the graphical interface is software like any other. Advantage: you can choose your graphical interface from all those available (KDE, GNOME, Xfce, IceWM, Fluxbox, Window Maker), and you can simply not launch it when you don't need it. Very practical for not wasting resources unnecessarily on servers.
Under Windows, you have no choice of graphical interface, and you are obliged to run it even when you do not need it. Consequence: under Windows, if the graphical interface crashes, you can no longer access your system to repair it. On Linux, all you have to do is start in text mode: you can still access your system.
|
Measles Outbreaks
By United Family Healthcare 2019-05-30 14:00:56
United Family Healthcare (UFH) give you what you need to know
Measles Outbreaks
The number of measles cases this year so far represents a 300% increase from the number of cases seen in the previous year, with over 110,000 measles cases in the first three months of 2019. An outbreak in Hong Kong in May followed large increases in the Philippines (25,000 cases and 355 deaths) and the United States. Outbreaks in 2019 have also occurred in Ukraine, Madagascar, India, Australia, Cambodia, Japan, Laos, New Zealand, the Republic of Korea, Singapore, and Vietnam. Measles remains a global issue.
What do we need to know about measles and how can we prevent it?
Measles is a highly contagious virus:
• The infection is characterized by fever, malaise, cough, runny nose and conjunctivitis, followed by a body rash.
• Tragically, most of the cases are among children under 5 years old.
• Measles can cause debilitating complications, including encephalitis, severe diarrhea and dehydration, pneumonia, ear infections, and permanent vision loss.
• The period of contagiousness is from 5 days before appearance of rash to 4 days afterward.
• The illness may be transmitted in public spaces, even in the absence of person-to-person contact.
People who are at risk for measles include:
• Children too young to get a measles shot.
• People who have never had a measles shot.
• People who did not get a second measles shot.
• People who got a shot that did not work well.
MMR vaccination recommendations:
• Children from 6 through 11 months should receive one dose of MMR. Children who receive the first dose of MMR before age 12 months should receive two additional doses, separated by at least 28 days, beginning at age 12 to 15 months.
• Children ≥12 months of age should receive two doses of MMR separated by at least 28 days, with the first dose administered on or after the first birthday.
What are the symptoms of measles?
The first symptoms can include: a high fever – up to 40ºC; feeling sick, cold symptoms; loss of appetite; spots in the mouth (Koplik spots).
After the first symptoms, many people have: pink, light-sensitive eyes; sneezing and coughing; and a red rash that starts on the face and spreads to the body. The rash should resolve after 3 to 4 days, with the skin peeling; the other symptoms then subside, though the cough may continue for 1 to 2 weeks.
How is measles treated?
For most people, there is no specific treatment, but important steps to take include supportive care, rest, drinking plenty of fluids and taking acetaminophen to help with fever and aches. Doctors also sometimes give vitamin A to children with severe measles.
Can measles be prevented?
Yes. The MMR vaccine prevents infection.
All children should get the MMR (Measles Mumps Rubella) vaccine when they are 12 to 18 months old. Then they need a second shot when they are 4 to 6 years old; a child should have the second shot before he or she starts school. The MR (Measles Rubella) vaccine is also an effective tool against measles; according to the Chinese Center for Disease Control and Prevention (China CDC), babies living in China need to get the MR vaccine at 8 months old.
Tong Wei Chng MD
Chief of Pediatrics
Shanghai United Family Pudong Hospital
|
16S rDNA Sequencing
16S rDNA sequencing can be used to determine the classification, abundance, population structure, phylogeny, and community composition of bacteria in environmental samples. Synbio Technologies' next-generation sequencing technology can easily sequence the variable V3 and V4 regions of the 16S rDNA gene. The goal is to detect sequence variation and abundance in the 16S target region of environmental samples. The technology can sequence multiple samples in parallel, making it suitable for identifying most bacteria.
Competitive Advantages
• Accurate Identification: Defines the bacterial flora down to species level; even low-abundance species are included.
• Advanced Analysis and Algorithms: Combines the latest Greengenes database with our self-built database to provide accurate analysis of between-group variance, community composition, and community evolution.
• Higher Efficiency and Lower Cost: Faster than traditional methods of bacterial colony identification, at reduced cost.
16S rDNA Sequencing Work Flow
Service Specifications
Sample Type: Bacteria or DNA; total DNA amount recommended > 20 ng (no host, impurities, or contamination).
Sequencing Model: MiSeq, PE250/300; > 0.04 M clean data.
Sampling Requirements: No obvious degradation or protein contamination; OD260/280 ≥ 1.5; OD260/230 ≥ 1.0; concentration ≥ 30 ng/μL. DNA samples: dissolved in nuclease-free ddH2O in a 1.5 mL tube, sealed with sealing film and shipped frozen.
Turnaround Time: Standard delivery in 60 business days; for specific projects, please consult.
Data Analysis
Standard Analysis
1. Original data processing and statistics
2. Variable region validation
3. Operational taxonomic unit (OTU) clustering and analysis
4. Classification and abundance analysis of single species
5. Analysis of species richness in diverse species
6. Alpha diversity analysis
7. Beta diversity analysis
Advanced Analysis
1. Main factor analysis of sample differences
2. Significant difference analysis of components
3. Phylogenetic tree construction
4. Personalized analysis
|
How We Test USB Wi-Fi Adapters
Quantitative Testing
Throughput is one of a Wi-Fi adapter's key specifications. Since these are all AC1200-capable models, you might not expect much variation between them.
The router and the overall network topography remained constant throughout testing. Throughput was measured at five, 25, 50 and 75 feet, with the distances determined by measuring tape. Fifty-foot testing included two intervening walls due to the physical setup of the test lab; line of sight was lost at 50 feet as we turned a corner to get far enough away from the router. Tests at 75 feet go out into an open atrium just outside the suite's entrance. The setup is on the first floor of a commercial building, and the walls are constructed of plaster.
Above is a figure of the office test setup that shows the various distances, and the intervening wall structures at 50 and 75 feet. If 75-foot testing is done, the procedure puts the tested adapter outside of the office. The diagram is not to scale.
A labeled image shows the physical setup of the testing location. Note that the router on the left-hand side is standing at an upright angle.
When a router and adapter are too close together, the Wi-Fi signals can cause interference (as represented in the image above). This is why a router and an access point (AP) should not be set up in close proximity, and not on the same Wi-Fi channel.
In most cases, the speeds close to the router at the five-foot distance were slower than at 25 or 50 feet. This phenomenon can happen when the signals are too strong between the router and the adapter, creating interference as a result of being too close to each other. Simply put, this is one of those cases when a wired connection is preferred over wireless.
A screenshot of the type of data that the popular Web-based Speedtest tool generates. While it provides an indication of download and upload speeds, it measures the Internet connection, not the wireless network's speed.
Measuring the speed of a network connection is a vital part of each adapter's quantitative evaluation. Simply running an Internet speed test such as Speedtest will give you the speed of the online connection, but it's not the right test for comparing Wi-Fi speeds. Unless you have a gigabit connection (such as Google Fiber), the throughput of a home's internal network will be faster than the Internet connection, invalidating the use of a Web-based tool for these tests.
A screen capture from Windows 7 showing the network speed estimate over a 5GHz Wi-Fi network; in this case it was 325.0 Mb/s.
There are better ways to approximate wireless network speed. The first is Windows' Network Center, which can provide a rough estimate of the network adapter's connection. It's not an indicator of actual performance, though, since it is an estimate rather than a direct measurement.
For home users with multiple devices on their network, speed can be measured by transferring a file between two computers within a Windows Home Group: knowing the size of the file and timing the transfer, Mb/s can be derived. One of the devices should be wireless and the other wired, preferably on a 10/100/1000 Mb/s Ethernet port, to best isolate the wireless connection's speed. The limitation of this method is that the transfer is manually timed, introducing an element of inaccuracy (the last time I did this I used a stopwatch app on my smartphone, practicing starting the transfer and the stopwatch simultaneously, which is difficult to perfect).
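The arithmetic behind that manual method is simple; here is a minimal sketch in Rust (the file size and timing are illustrative):

// Derive megabits per second from a timed file transfer.
fn throughput_mbps(file_bytes: u64, seconds: f64) -> f64 {
    (file_bytes as f64 * 8.0) / (seconds * 1_000_000.0)
}

fn main() {
    // e.g. a 700 MB file that took 50 seconds: ~112 Mb/s
    println!("{:.1} Mb/s", throughput_mbps(700_000_000, 50.0));
}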
This is the raw data that IxChariot generates. Note the throughput at the 50-foot distance on the 5GHz band fluctuated significantly from a minimum of 18.5 to a maximum of 231.8 Mb/s, which, while it averaged out to 116.8 Mb/s, hardly tells the whole story!
Given the limitations of the previously mentioned techniques, a dedicated software solution was chosen: IxChariot.
This software can measure network performance in a reliable and consistent fashion, including TCP throughput. It'll report minimum and maximum speeds, as well as calculate the average. TightVNCViewer, a remote desktop software solution, is used to remotely access our ASRock server in order to log into a desktop session and retrieve the IP address. Then, TightVNCViewer is terminated. When working with IxChariot, we designate the server's IP address as Endpoint 1 and the laptop's IP address as Endpoint 2. We use the High Performance Throughput test to determine speeds that are reported in Mb/s. Throughput tests are run to completion, which means, specifically, the test is run until 100 timing records are finished and the results recorded with screenshots.
A hypothetical bar graph of the speeds obtained (x-axis, expressed in Mb/s) from four wireless AC USB adapters (y-axis). Note that the maximum speeds are in black, the minimum speeds in red and the average speeds in blue. While one product may have lower minimum values, it may not necessarily have lower averages. This same observation also applies to peaks values.
PassMark's WirelessMon is a software package we use to measure signal strength in five-foot increments on the 2.4 and 5GHz bands. The WirelessMon software provides a signal strength reading between the device under test and the router for a given distance. The notebook is held in each spot for 20 to 30 seconds to get this reading. This data set looks at how good the antenna is on the adapter, or if a manufacturer's implementation of beamforming (a Wi-Fi technology designed for directional signal transmission) is working or not, as the distance increases compared to the competition. All of this data gets put into an Excel spreadsheet with relevant notations on how the test went specifically for that USB AC1200 Wi-Fi adapter. With the data collected on each product, all data points are put into a separate Excel spreadsheet for comparative analysis.
A screenshot of the software PassMark's WirelessMon.
A hypothetical plot of four wireless AC USB adapters with the distance from the router expressed in feet (x-axis), and the signal strength in decibels (dB, y-axis). While the red and black lines follow a linear progression, the blue and green plots illustrate how nonlinear signal strength may be.
|
DR Congo: Panic in Goma after warnings of second volcanic eruption
Over 400,000 people have fled or been evacuated from the battered city of Goma in eastern Democratic Republic of Congo (DRC) over fears of another volcanic eruption.
This followed warnings from local authorities that Mount Nyiragongo could erupt again at any time, after it killed 32 people and left thousands homeless when the volcano spewed lava over the surrounding areas last week. Mount Nyiragongo, Africa's most active volcano, is located just 10 km from Goma.
Before the second warning, those who had initially fled the city had started returning from neighbouring Rwanda, where they had gone across the border to seek safety.
More than 200 aftershocks have rattled the area since the first eruption, destroying several buildings and leaving two long cracks in the ground.
According to UN estimates, Goma’s population is about 670,000 people though several nongovernment organizations in the region put this number close to 1 million.
Following last week’s eruption, series of earthquakes and tremors have been experienced in the area with some felt as far as Kigali, Rwandan capital, 65 miles away from the Volcano which is part of the Virunga National Park.
Damage from last week’s eruption
In addition to the 32 people killed, several buildings, including homes, as well as roads and farms have been destroyed by volcanic activity, while close to 400,000 people have fled or been evacuated from Goma to safer areas. Many of these have crossed into Rwanda to the border city of Gisenyi, where authorities have directed them to a former refugee camp and a secondary school.
Residents of Gisenyi themselves have also fled the town, heading eastwards deeper into Rwanda as earthquakes and tremors continue to shake the city. According to Rwandan authorities, an earthquake on Thursday reached magnitude 4.9.
Hundreds of houses and forests on the edges of Goma were set alight by rivers of molten rock which streamed from Mount Nyiragongo as it erupted last week. One stream of flowing lava stopped close to Goma airport, causing more panic at this hub of humanitarian aid operations in the region.
According to Unicef, the UN's children's agency, over 530 missing children were rescued after being separated from their parents during the eruption.
History of Mount Nyiragongo eruptions
Prior to last week’s eruption, Mount Nyiragongo had last erupted in 2002 killing 250 people while 120,000 were left homeless. The volcano’s deadliest eruption happened in 1977 when more than 600 people were killed.
Goma at the centre of natural and humanitarian calamities
The city of Goma, located in the North Kivu region of eastern DRC near the border with Rwanda, has for decades been troubled by both natural and civil disasters. The city has long been destabilised by civil conflicts between government troops and several rebel groups, including the ADF and M23, which have left thousands of people dead and others displaced.
The conflicts have affected both humans and wildlife, with some of the militia groups operating in forests that are part of the protected area of Virunga National Park. Rebels have on several occasions killed park rangers, and sometimes tourists have been killed or abducted, paralysing tourism in the park. The park has been closed to tourism several times for security reasons, most recently in early 2020.
Such conflicts, coupled with natural disasters like volcanic eruptions, make Goma and the nearby areas among the riskiest places to live in Africa and the world at large.
Virunga National Park protects part of the population of the endangered mountain gorillas which together with Mount Nyiragongo are are the leading tourist attractions of the park. Currently, there are 8 groups of habituated gorillas that are visited by tourists in Virunga National Park. The gorilla permit at Virunga National Park is sold at $400 per person and it is the cheapest compared to $1,500 in Rwanda and $700 in Uganda.
Why is Goma frequently associated with volcanic activity?
Mount Nyiragongo, one of the world's most active volcanoes, is located just 10 km from the city of Goma, which puts the city at very high risk whenever it erupts. Lava from the volcano usually flows towards Goma, destroying parts of the town and sometimes threatening to cover the airport.
|
Is feline obesity a problem?
YES – Obesity, generally defined as an excess of body weight of 30% or more, is the most common nutritional disease of domestic cats. Although the frequency varies from one country to the next, on average up to 40% to 45% of all adult cats are obese! Despite these alarming figures, very little is known about the detrimental effects of obesity on feline health. Obesity in cats is a known risk factor for type 2 diabetes mellitus, heart disease, osteoarthritis, certain forms of cancer and lower urinary tract disease. In humans, obesity causes an increase in illness and mortality at all ages and is associated with diabetes mellitus, certain types of cancer, impaired mobility and arthritis, high blood pressure, heart disease, and other illnesses. Recent studies suggest that heart disease also develops in obese cats! More research is needed to evaluate this and to determine what other detrimental effects obesity has on cats.
“Obesity in cats is frequently associated with hepatic lipidosis.”
Obesity in cats is frequently associated with hepatic lipidosis, a severe form of acute liver failure. It typically occurs in cats that are obese and have undergone a brief period of "stress" which causes anorexia. The "stress" may be as simple as a change of house or a change in diet. When it was first recognized, hepatic lipidosis was an almost universally fatal disease in cats. Fortunately, with improved, aggressive and prolonged therapy, about 70% to 80% of affected cats can now be successfully treated. However, because of the risk of this potentially fatal disease, weight loss programs for obese cats need to be undertaken cautiously and always under the care of a veterinarian.
What causes obesity in cats and how should it be treated?
Many factors contribute to obesity in cats, and not all of them are clearly understood. Some are probably genetic, while others are related to diet and environment. It is important for the cat owner and veterinarian to keep these factors in mind when treating the obese feline patient. Prevention is better than treatment, but this is not always easy. Indoor cats are more prone to obesity, perhaps because they eat more out of boredom, but also because they have less opportunity to stay trim through exercise. Remember that everybody should run and play, including cats!
Once a cat becomes obese, the challenge for owner and veterinarian alike is to promote safe weight loss. It is better to set realistic goals for weight reduction rather than attempting to force the cat down to a “normal” weight. Usually a 15-20% reduction in weight is a reasonable and achievable target.
“Weight that is lost slowly is more likely to stay lost!”
Avoid rapid weight loss since it puts the cat at risk for development of hepatic lipidosis. Weight that is lost slowly is more likely to stay lost! No drugs or magic pills can be used safely or effectively. Commercial weight loss diets are available from veterinarians and provide the basis for a successful weight loss program. However, they are more effective when combined with additional exercise. This also has the advantage of providing more time for interaction between the cat and the family, which we know provides enjoyment and is beneficial for the health of both. With some patience and extra care, obese cats can lose weight safely and effectively, with the ultimate goal of prolonging a healthy happy life!
Author: Ernest Ward, DVM. © Copyright 2009 Lifelearn Inc. Used and/or modified with permission under license.
|
Scorching salt
The earth is cracked and the horizon bare. The deathly silence is broken by the occasional whirring of crude-oil pumps. Women, going about their daily life in bright mirror-work lehangas, add a dash of colour to an otherwise arid background. This tough terrain has dominated 50-year-old Shantabhai Maganbhai Bamania's life since he was 10. Shantabhai is an Agaria, a salt worker. The Rann of Kutch in Gujarat is home to him and his family for eight months a year, from September to April; the remaining four months they spend in Kharagoda. Not just Shantabhai: the Rann of Kutch is home to more than 100,000 workers like him for eight months a year, who come from villages 30 to 40 km away.
A 2006 report of a Union ministry of environment and forests-World Bank project, Biodiversity Conservation and Rural Livelihood Improvement, notes that nearly 60 per cent of Agarias live below the poverty line. Their livelihood has been under threat ever since the Little Rann of Kutch (the Rann is divided into the Little Rann and the Great Rann) was notified as a wildlife sanctuary in 1973 to protect the wild ass. In 2006, the salt workers were served eviction notices.
The saltmaking Agarias do not understand why they are being asked to go, leaving behind an occupation they have practised for centuries. Where is the conflict, they ask. Even forest officials are unable to show any evidence of conflict. According to the forest department's own census, the population of wild asses has gone up beyond what is called "the safe level to achieve the objective of conservation". Despite this success story, forest officials are rigid when it comes to the marginalised Agarias: since the area has been declared a sanctuary, there cannot be any human population there, say officials.
The Agarias' vulnerability stems from the fact that they have no land deeds. No survey has ever taken place in the Little Rann of Kutch since independence; it does not figure in government revenue records. Revenue department records in fact refer to the area as Survey Number Zero.
In Survey Number Zero
During the monsoon, water from the Arabian Sea floods the Rann, converting it into a lake. In September, when the waters recede, it's time for Agarias from the 107 villages around the Little Rann to move in. Mud huts come up in Survey Number Zero, where the Agarias stay till spring, making the Vadagara variety of salt.
|
Question: Is Mongolia a republic?
Government and politics: Mongolia is a semi-presidential representative democratic republic with a directly elected president. The People's Party – known as the People's Revolutionary Party between 1924 and 2010 – formed the government from 1921 to 1996 (in a one-party system until 1990) and from 2000 to 2004.
When did Mongolia became Republic?
July 11, 1921, then became celebrated as the anniversary of the revolution. The Mongolian People's Republic was proclaimed in November 1924, and the Mongolian capital, centred on the main monastery of the Bogd Gegeen, was renamed Ulaanbaatar ("Red Hero").
What type of government is Mongolia?
Mongolia is a republic: a unitary state with a semi-presidential system, i.e. a parliamentary republic.
Is Mongolia controlled by China?
Mongolia is an independent country, sometimes referred to as Outer Mongolia, sandwiched between China and Russia. Inner Mongolia is an autonomous region of China equivalent to a province.
Is Mongolia communist country?
Communist dictatorship in Mongolia (1921–1990): Mongolia was the first Asian country, and the second in the world (after Russia), to adopt communism. The Mongolian People's Republic was proclaimed in November 1924 and, modelled on the USSR, remained a Soviet satellite state until 1990.
Is Genghis Khan Chinese?
“We define him as a great man of the Chinese people, a hero of the Mongolian nationality, and a giant in world history,” said Guo Wurong, the manager of the new Genghis Khan “mausoleum” in Chinas Inner Mongolia province. Genghis Khan was certainly Chinese,” he added.
Why is Mongolia not part of China?
Naturally, Chinese 1911 revolutionary leaders insisted they would retain all the territory occupied under the Qing Dynasty, including Outer Mongolia. So, in brief, a series of internal and external rises and falls in Mongolia caused its southern part (a.k.a. Inner Mongolia) to remain part of China.
Has Genghis Khan been found?
For eight centuries, the Mongols called this place the Great Prohibition and forbade plowing in the area. From here, the Sayan Mountain Range spreads out as if on the palm of your hand: a very suitable place to bury the conqueror of the world. However, no one has ever found the body of Genghis Khan.
Who defeated the Mongols in the Middle East?