Brondesbury and Mapesbury are small areas between Kilburn and Willesden
The Early Years
Together with the Willesden part of Kilburn, Brondesbury and Mapesbury belonged to St Paul's Cathedral in medieval times. Mapesbury (named after Walter Map, an early medieval priest) lies north of Mapes (later Willesden) Lane, and Brondesbury ('Brand's manor') to the south.
Their history is closely bound up with that of Kilburn.
The area remained rural for much of its history; the first houses on Willesden Lane in Brondesbury were built only in 1847. Brondesbury stood on a hill, which made it suitable for better-quality housing. New suburban development of Willesden began in 1860, and larger villas were built in Brondesbury. Several of them served as hostels for Belgian refugees during the First World War.
A mill stood in Mapesbury until it was destroyed by fire in 1863, an incident that led to the creation of a volunteer fire service in Kilburn.
In 1866 the parish of Christchurch, Brondesbury, was formed, the first new parish within the original parish of Willesden.
The decline in the housing market at the turn of the 20th century meant that the western part of Brondesbury was not built over until 1920, and Brondesbury Manor House remained standing until 1934.
Mapesbury was developed later than Brondesbury. Shortly after 1901 houses were built north of the Metropolitan Railway. Mapesbury House, south of the Metropolitan, survived until 1924.
The Jewish Community
In the 1870s a wave of Jewish immigrants came to Brondesbury, both from the East End and directly from Eastern Europe. Initially the Jews of Brondesbury walked to synagogues in St John's Wood or Hampstead. The first temporary synagogue was built in 1902 and a permanent one in 1905. By 1914 the synagogue had 413 male seatholders.
Later the Jewish population moved to Willesden, Cricklewood, Dollis Hill and beyond. The synagogue closed in 1974 and the building is now part of a Muslim school.
550.630 Public Health Biology
This micrograph depicts the histopathologic changes found in a biopsied lymph node indicative of a Kaposi's sarcomatous lesion from a patient with AIDS. Courtesy of the CDC/ Dr. Edwin P. Ewing, Jr.
Sharon S. Krag, Gary Ketner, Gregory Glass, Barry Zirkin and James Yager
Offers an integrative molecular and biological perspective on public health problems. Explores population biology and ecological principles underlying public health and reviews molecular biology in relation to public health biology. Modules focus on specific diseases of viral, bacterial, and environmental origin. Uses specific examples of each type to develop the general principles that govern interactions among susceptible organisms and etiologic agents. Devotes special attention to factors that act in reproduction and development. Places emphasis on common elements encountered in these modules. These may include origin and dissemination of drug resistance, organization and transmission of virulence determinants, modulation of immune responses, disruption of signal transduction pathways, and perturbation of gene expression. Also considers the role of the genetic constitution of the host.
Plot is a Verb
on May 10, 2013 in On Writing
Discussion and handouts from Muse and the Marketplace lectures, May 2013
There's an ongoing argument among writers: how many different stories/plots are there? The span goes from an infinite number to 212 to 36 to just one. You can buy a book by George Polti that outlines his 36 stories, but I wouldn't bother. One is: kinsman kills unrecognized kinsman. Not all that useful if you're not Shakespeare. My contention is that there's only one story, and if you understand how this plot works you'll have the major tool you need to write a successful novel.
Here's the story:
There once was a woman who had a terrible problem enter her life (inciting incident). She decided that she was going to solve/get rid of her problem so she devised a plan (goal). But whenever she put this plan into action, everything around her worked against her (conflicts) until the problem had grown even worse and she seemed even further than ever from reaching her goal. At this darkest moment (crisis), the woman made a decision (with both gains and sacrifices) based on who she was and what she had learned in the story. Through this decision and the resulting action (climax) her problem was resolved (resolution) in either a positive (happy ending) or negative way (unhappy ending).
And here's your skeleton:
- the inciting incident
- the goal
- the conflicts:
- the crisis
- the climax
- the sacrifice: the price of the choice
- the gain
- the unconscious need filled from the back story
- the resolution
So how do you make this work for you? First, watch the movie BIG with Tom Hanks. Yes, that's right. It's a wonderful example of story structure. Then download the attachment. It contains three sheets: Josh's Story (with nothing filled in), Your Story (with nothing filled in) and Josh's Story (with all the elements filled in). Without looking at the completed Josh's Story, fill in the blanks on the empty one. Then look at the completed sheet: how well did you do? Now fill in the sheet for Your Story. Whether you haven't written a word or you're on your tenth draft, I'm sure it will open your eyes and make your novel better.
Download Worksheet [PDF]
The force on an object that resists its motion through a fluid is called drag. When the fluid is a gas like air, it is called aerodynamic drag (or air resistance). When the fluid is a liquid like water it is called hydrodynamic drag (but never "water resistance").
Fluids are characterized by their ability to flow. In somewhat technical language, a fluid is any material that can't resist a shear force for any appreciable length of time. This makes them hard to hold but easy to pour, stir, and spread. Fluids have no definite shape but take on the shape of their container. (We'll ignore surface tension for the time being. It's really only significant on the small scale — small like the size of a droplet.) Fluids are polite in a sense. They yield their space relatively easily to other material things; at least when compared to solids. A fluid will get out of your way if you ask it. A solid has to be told with destructive force.
Fluids may not be solid, but they are most certainly material. The essential property of being material (in the classical sense) is to have both mass and volume. Material things resist changes in their velocity (this is what it means to have mass) and no two material things may occupy the same space at the same time (this is what it means to have volume). The portion of the drag force that is due to the inertia of the fluid — the resistance that it has to being pushed aside — is called the pressure drag (or form drag or profile drag). This is usually what someone is referring to when they talk about drag.
Recall Bernoulli's equation for the pressure in a fluid…
P₁ + ρgy₁ + ½ρv₁² = P₂ + ρgy₂ + ½ρv₂²
The first term on each side of the equation is the part of the pressure that comes from outside the fluid. Typically, this refers to atmospheric pressure weighing down on the surface of a liquid (not relevant right now). The second term is the gravitational contribution to pressure. This is what causes buoyancy (also not relevant right now). The third term is the kinetic or dynamic contribution to pressure — the part related to flow (very relevant). This will help us understand the origin of pressure drag.
Start with the definition of pressure as force per area. Solve it for force.
P = F/A  ⇒  F = PA
Replace the generic force symbol F with the more specific symbol R for drag. (You can also use D if you wish.) Drop in Bernoulli's equation for the pressure in a moving fluid…
F = PA = (½ρv²)A
Rearrange things a bit and here you go…
R = ½ρCAv²
Wait a minute. Where'd that extra symbol come from? Who put that C in there and why?
Let's run through all the symbols one at a time, explain their meaning and how they relate to pressure drag. In essence, let's take the equation apart and put it back together again.
- Drag increases with the density of the fluid (ρ). More density means more mass, which means more inertia, which means more resistance to getting out of the way. The two quantities are directly proportional.
R ∝ ρ
- Drag increases with area (A). Exactly what we mean by this is subject to debate. To me, and in the context of this model, area is the cross sectional area projected in the direction of motion. (I would further simplify this by calling it the projected area.) Take the cross section of the object in the direction of its motion. This is the area of the tube of fluid that must be cast aside to let the object pass. This is the most logical thing to call the area, but not everyone agrees with me. To some, the word "area" refers to the area of contact between the object and the fluid. This also makes sense, but not in the context I've described above. Surface area is not important when one is dealing with pressure drag, but it is important when dealing with viscous drag — drag caused by layers of the fluid sticking to the object and to one another. More surface area means more of the object is in contact with the fluid, which means more drag. Viscous drag is just as real as pressure drag, but I don't want to deal with it right now.
R ∝ A
- Drag increases with speed (v). I hope that this is self-evident. An object that is stationary with respect to the fluid will certainly not experience any drag force. Start moving and a resistive force will arise. Get moving faster and surely the resistive force will be greater. The hard part of this relationship lies in the detailed way speed affects drag. According to our very sensible model derived from Bernoulli's very sensible equation, drag should be proportional to the square of speed.
R ∝ v²
Which brings us to our last factor…
- Drag is influenced by other factors including shape, texture, viscosity (which results in viscous drag or skin friction), compressibility, lift (which causes induced drag), boundary layer separation, and so on. These factors can be dealt with separately in a more complete theory of drag (how tedious in one sense, but how necessary in another) or they can be piled into one monolithic fudge factor (oh yes, please) called the coefficient of drag (C).
R ∝ C
Combining all these factors together yields a theoretically limited (but empirically very reasonable) equation. Here it is again…
R = ½ρCAv²
Simple, compact, wonderful. A nice equation to work with — or is it?
Well, yes and no.
- Yes, but it works only as long as the range of conditions examined is "small". That is, no large variations in speed, viscosity, or crazy angles of attack. The way around this is to reduce the coefficient of drag to a variable rather than a constant. (I can live with this.) Say that C depends on some yet to be specified set of factors. It is totally acceptable to say that it varies with this that or the other quantity according to any set of rules determined by experiment.
- No, since speed is squared. [Gasp!] Recall that speed is the derivative of distance with respect to time. Have you ever tried to solve a nonlinear differential equation? No? Well, welcome to hell. Wait, let me rephrase that — Welcome to Hell! [Ca-rack! Boom!] Ah ha ha ha ha haaaa! [Rumble] You fool! Just wait till you see what's in store for you when you try to solve the differential equations. The mathematics will consume you. [Ca-rack! Boom!] Ah ha ha ha ha haaaa! [Rumble].
Whew. What the hell was that all about? I might not know how to solve every kind of differential equation off the top of my head, but so what. I can always look for the solution in a book of standard mathematical tables or an on-line equivalent. You don't scare me, demonic voice in my head.
| Cd | object or shape |
|---|---|
| 2.1 | ideal rectangular box |
| 1.3~1.5 | Empire State Building |
| 0.7~1.1 | Formula One race car |
| 0.6 | bicycle with fairing |
| 0.7~0.9 | tractor-trailer, heavy truck |
| 0.6~0.7 | tractor-trailer with fairing |
| 0.35~0.45 | SUV, light truck |
| 0.15 | Aptera high-efficiency electric car |
| 0.15 | airplane wing, at stall |
| 0.05 | airplane wing, normal operation |
| 0.020~0.025 | airship, blimp, dirigible, zeppelin |
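To see how the pieces of R = ½ρCAv² combine, here is a minimal Python sketch. The density, drag coefficient, and frontal area for an upright cyclist are illustrative assumptions, not values taken from the table above.

```python
# Minimal sketch of pressure drag, R = ½ρCAv².
# rho, C, and A are assumed values for an upright cyclist.

def drag_force(rho, C, A, v):
    """Pressure drag in newtons."""
    return 0.5 * rho * C * A * v ** 2

rho_air = 1.2  # kg/m^3, air near sea level (assumed)
C = 0.9        # drag coefficient, upright cyclist (assumed)
A = 0.5        # m^2, projected frontal area (assumed)

for v in (5, 10, 20):  # speeds in m/s
    print(f"v = {v:2d} m/s -> R = {drag_force(rho_air, C, A, v):6.1f} N")
# Doubling the speed quadruples the drag, as R ∝ v² requires.
```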
other mathematical models
The pressure drag equation derived above is to me the most reasonable mathematical model of drag — especially aerodynamic drag. But as the demonic voice in my head said, it isn't always the easiest one to work with — especially for those just learning calculus (differential equations to be more precise). Those who know a lot of calculus just deal with it. Those who don't know any calculus just ignore it.
R = ½ρCAv²
A simplified model of drag is one that assumes drag is directly proportional to speed. This is sometimes good enough. (Maybe we should call it the "good enough" model of drag.) It is especially useful when teaching calculus students how to solve differential equations for the first time. I haven't found it to be all that applicable to real-world situations, however. (We'll use b as the generic constant of proportionality from now on.)
R = −bv
A more general model of drag is one that is agnostic about higher powers (pun intended). This is a good attitude to have when you are exploring drag experimentally. Don't assume you know anything about how drag varies with speed; just measure the two quantities and see what values work best for the power n and the constant of proportionality b, as in the fitting sketch after the last model below.
R = −bvⁿ
Possibly the most general model is one that assumes a polynomial relationship. Drag might be related to speed in a way that is partially linear, partially quadratic, partially cubic, and partially described by higher-order terms.
R = −∑ bₙvⁿ
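As a concrete version of the experimental attitude described above, here is a sketch that fits the one-term model R = bvⁿ to measured (speed, drag) pairs by linear regression on logarithms. The data points are invented for illustration.

```python
# Fit R = b * v**n to hypothetical measurements via
# log R = log b + n log v (a straight line in log-log space).
import numpy as np

v = np.array([2.0, 4.0, 6.0, 8.0, 10.0])   # m/s (hypothetical data)
R = np.array([1.1, 4.3, 9.8, 17.5, 27.0])  # N   (hypothetical data)

n, log_b = np.polyfit(np.log(v), np.log(R), 1)  # slope n, intercept log b
b = np.exp(log_b)
print(f"best fit: R ≈ {b:.2f} * v^{n:.2f}")  # close to n = 2 for these data
```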
drag and power
If you want to go fast, you've got to work hard. That should be a statement of the obvious. But why? Well, for one thing, it takes energy to get going — kinetic energy. This equation says that if you want to go twice as fast you've got to work four times harder (K ∝ v²).
K = ½mv²
While that's certainly true, it isn't of much use to us here on earth. If we lived in the vacuum of space, all we'd ever have to worry about was the energy needed to change our state from one speed to another. Here on earth, the atmosphere has another opinion. Whatever energy we add to a system to get it going, the atmosphere drags it away — all of it eventually. In order for a moving body to stay in motion on the Earth it not only has to get going, it has to actively work to keep going. This undeniable fact of life is why Newton's first law (the law of inertia) wasn't discovered until 1666 (approximately).
To keep an object in motion in the presence of drag (aerodynamic or otherwise) requires an ongoing input of energy. Work must be done over some time. Power must be used. Recall the following chain of reasoning that starts from the definition of power as the rate at which work is done…
P = W/Δt = (F · Δs)/Δt = F · v
Replace the generic force variable with a generic power equation for drag…
P = (bvⁿ)v
Thus in general…
P = bvⁿ⁺¹
or more specifically, in the case of pressure drag…
P = (½ρCAv²)v
P = ½ρCAv³
Thus, if drag is proportional to the square of speed, then the power needed to overcome that drag is proportional to the cube of speed (P ∝ v³). If you want to ride your bicycle twice as fast, you'll have to be eight times more powerful. This is why motorcycles are so much faster than bicycles.
Power expended against drag is the biggest impediment to moving freely for both bicycles and motorcycles. Humans can do sustained physical work like cycling at the rate of about a tenth of a horsepower. Motorcycles have engines that are on the order of 100 horsepower. (Sorry for the American units.) That makes a motorcycle about one thousand times more powerful than a human on a bicycle. As a result, motorcycles can go about ten times faster, since speed scales as the cube root of power and 1,000 = 10³. I've found through personal experience on all-day bicycle rides that I typically cover ⅙ the distance that I would if I sat behind the wheel of a car all day.
Yes I realize that cars aren't motorcycles, but what we're really comparing here are wheeled vehicles powered by human muscle with those powered by internal combustion engines. Yes I realize that a 6 to 1 ratio is not exactly the same as 10 to 1, but what I'm doing here is a quick order of magnitude comparison. Your individual results may vary — but not significantly.
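Here is a sketch of the cubic power law P = ½ρCAv³ using the same assumed cyclist figures as the earlier drag sketch; the point is only the scaling, not the particular numbers.

```python
# Power needed to overcome pressure drag, P = ½ρCAv³.
# rho, C, A are the same assumed cyclist values as before.

def drag_power(rho, C, A, v):
    """Watts required to hold speed v against pressure drag."""
    return 0.5 * rho * C * A * v ** 3

rho, C, A = 1.2, 0.9, 0.5
for v in (5, 10, 20):  # m/s
    print(f"v = {v:2d} m/s -> P = {drag_power(rho, C, A, v):7.0f} W")
# Each doubling of speed costs 8 times the power (P ∝ v³), so a
# thousandfold power advantage buys only about tenfold speed.
```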
terminal velocity
It's much more than the name of a bad movie. It's something every student of aerodynamic drag should understand.
Imagine yourself as a parachute jumper; or better yet, imagine yourself as a BASE jumper. BASE is an acronym for building, antenna, span, earth (a cliff or escarpment). Since none of these platforms is moving horizontally, none of these jumpers has any initial horizontal velocity. Not that it really matters, but this reduces some of the complexity. Step off the platform and draw your free body diagram as you fall.
You start with no initial velocity, so there is initially no aerodynamic drag, and you are effectively in free fall with an acceleration of 9.8 m/s².
Now it gets complicated. There is an initial acceleration, therefore there is an increase in speed. With an increase in speed comes an increase in drag and a decrease in net force. This decrease in net force reduces acceleration. Speed is still increasing, just not quite as fast as it was initially.
Speed continues to increase, but so too does drag. As drag increases, acceleration decreases. Eventually one can imagine a state when the drag and weight forces are equal. You are in equilibrium. You continue moving, but you cease accelerating. You have reached your terminal velocity. Given the usual posture of skydivers, the type of clothes they normally wear, and the conditions of the air near the surface of the Earth, your typical skydiver has a terminal velocity of 55 m/s (200 km/h or 125 mph). The speed that you have in this state is the one you will always acquire if you are given enough time.
That is until the parachute opens. Opening the chute significantly increases your projected area, which cranks up the aerodynamic drag proportionally. The upward drag force now exceeds the downward pull of gravity. The net force and acceleration are directed upward. Note: this does not mean the skydiver is moving upward. Acceleration does not determine the direction of motion of an object, it determines the direction of the change in motion. When a parachute is just opened, the velocity is down and the acceleration is up. Your speed decreases as a result, which is the whole point behind the parachute.
Speed decreases, so drag decreases. Drag decreases, so the net force decreases. Eventually the net force is zero, you stop accelerating, and you reach a new terminal velocity — one that makes landing more comfortable, something like 6 m/s (22 km/h or 13 mph) or less.
Note that a terminal velocity is not necessarily a maximum value. It's a limit that can be approached from either direction. An object could start off slow and speed up to a terminal velocity that's a maximum (like a skydiver stepping off a BASE) or it could start off fast and slow down to a terminal velocity that's a minimum (like a skydiver who's just opened her parachute). "Terminal" is a fancy way to say "end". A terminal velocity is one that you end with. For falling objects, this occurs when drag equals weight.
vₜ = √(2mg / ρCA)
Terminal velocity applies to situations besides skydiving. Drive your car with the accelerator in a constant position and you'll eventually reach a terminal velocity. The forward driving force of the tires on the road will eventually equal the backward drag force of the air (and the rolling resistance of the tires, which is discussed somewhere else in this book). Note how I said "eventually". Terminal velocity is a speed things approach but never quite reach. Proof of this statement requires calculus and will be discussed in the practice problems of this section.
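The following sketch puts numbers to both claims: it evaluates vₜ = √(2mg/ρCA) for assumed skydiver parameters (chosen to land near the 55 m/s figure quoted above), then integrates m dv/dt = mg − ½ρCAv² with Euler steps to show the speed creeping toward, but never reaching, vₜ.

```python
# Terminal velocity and the asymptotic approach to it.
# m, rho, C, A are assumed skydiver parameters, not values from the text.
import math

m, g = 80.0, 9.8           # kg, m/s^2 (assumed jumper mass)
rho, C, A = 1.2, 0.7, 0.6  # air density, drag coefficient, projected area

vt = math.sqrt(2 * m * g / (rho * C * A))
print(f"terminal velocity: {vt:.1f} m/s")  # about 56 m/s

v, dt = 0.0, 0.01            # start at rest, small time step
for step in range(1, 3001):  # 30 seconds of fall
    a = g - 0.5 * rho * C * A * v ** 2 / m  # net acceleration
    v += a * dt
    if step % 500 == 0:
        print(f"t = {step * dt:4.0f} s  v = {v:5.1f} m/s")
# v gets ever closer to vt but never equals it.
```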
Terminal velocity can have any value — including zero. What happens to a ship in the ocean when the propeller stops turning? The forward thrust goes away and all that's left is the backward drag. The ship goes slower and slower and slower until it stops (stops relative to any current, that is). The ship will reach a terminal velocity of zero. For large container ships this may take minutes of time and kilometers of distance, but it will happen eventually. If you don't have the time or the space and you really want to stop a large seagoing vessel, you run the engines in reverse. In this case it's thrust that stops the ship, not drag.
| vₜ (m/s) | falling object |
|---|---|
| 373 | skydiver, 39 km (Felix Baumgartner, 2012) |
| 367 | skydiver, 41 km (Alan Eustace, 2014) |
| 274 | skydiver, 31 km (Joseph Kittinger, 1960) |
| 146 | skydiver, 4 km (Christian Labhart, 2010) |
| 6 | skydiver, parachute open |
Scientists from IBM's Zürich Research Laboratory and Chalmers University of Technology in Sweden have found a way to alter a single atom.

The researchers used a low-temperature scanning tunneling microscope and a voltage pulse to place an electron on an individual gold atom, then remove the electron. Regular atoms are neutral, while ions -- atoms with more or fewer electrons -- carry a charge.

The gold atom, positioned on an ultrathin film of sodium chloride, remained stable during the operation, despite the change in charge. The gold atom was kept stable by small changes in the positions of nearby atoms in the film.

Ions have different chemical and physical properties than corresponding neutral atoms. Being able to switch an individual atom to an ion and back promises a new way to control attributes like chemical reactivity, optical properties, and magnetic properties, according to the researchers.

This control could eventually lead to devices that work at the atomic scale, like a nonvolatile memory cell that stores information in a single atom. Practical atomic-scale memory would increase the amount of data that can be stored in a given area by 10,000 times, according to the researchers.

Charged atoms can be used to influence nearby molecules as well, according to the researchers.

The work appeared in the July 23, 2004 issue of Science.
Sons of Union Veterans of the Civil War
Department of New York
Colonel Augustus van Horne Ellis Camp 124
Remembering The Pathfinder
General John Fremont
Rockland County, NY
John Charles Fremont
Born in Savannah, Georgia on January 21, 1813, Fremont was one of four major generals appointed by President Lincoln, and easily the most celebrated. As a Union general, Fremont's major Civil War contribution was more political than military: he focused Union attention on the role emancipation should play in the North's war policy.

The magnetic and legendary "Pathfinder" became a national hero early in life for his trailblazing exploits in the Far West. A leader in wresting California from Mexico, he served as one of the state's first senators and got rich in the Gold Rush. Fremont's popularity and his antislavery position were equally instrumental in his being chosen the Republican Party's first presidential nominee in 1856, the youngest man yet to run for the office. With Southern states threatening secession if he were elected, Fremont's loss to James Buchanan forestalled disunion for another four years.
In Europe at the outbreak of the Civil War, he purchased a cache of arms in England for the North on his own initiative and returned to America. Abraham Lincoln, mostly for political reasons, appointed him major general in May 1861, placing him in command of the precarious Department of the West. Based in St. Louis, Fremont spent more energy fortifying the city and developing flashy guard units than equipping the troops in the field. His forces suffered several losses, particularly a major defeat at Wilson's Creek that August.

Attempting to gain a political advantage in the absence of a military one, Fremont, in an unprecedented and unauthorized move, issued a startling proclamation at the end of the month declaring martial law in Missouri and ordering that secessionists' property be confiscated and their slaves emancipated. The action was cheered by antislavery Republicans, but Lincoln, concerned that linking abolition to the war effort would destroy Union support throughout the slave-holding border states, asked Fremont at the very least to modify the order. Fremont refused, sending his wife, the influential daughter of former Senate leader Thomas Hart Benton, to Washington to talk to the president. Displeased with Fremont's effrontery, Lincoln revoked the proclamation altogether and removed him from command. Pressure from his fellow Republicans forced Lincoln to give the popular Fremont another appointment, and in March 1862 he was named head of the army's new Mountain Department, serving in western Virginia.

Over the following two months, he endured several crushing losses against Thomas "Stonewall" Jackson during the Confederate general's brilliantly successful Shenandoah Valley Campaign. After a military reorganization placed him under the command of former subordinate John Pope, Fremont angrily resigned his post, never to receive a new Civil War appointment. In 1864, however, he began another presidential bid with the backing of a cadre of Radical Republicans, but withdrew from the race in September and threw his support to Lincoln after a rapprochement in the party. Having lost most of his fortune by the end of the war, Fremont tried the railroad industry. His reputation damaged by an 1873 conviction for his role in a swindle, he nevertheless resumed his political career, and later in the decade began serving as territorial governor of Arizona, but he depended on his wife's income from writing during most of his later years. He died in New York City on July 13, 1890 and is buried in Rockland County, overlooking the Hudson River.
General John Fremont's Gravestone
Rockland County, NY
Ceremonies are held at the site periodically by the Ellis Camp, SUVCW, to remember John Fremont,
the Union General, political leader, and Pathfinder.
After the Ironman Lake Tahoe announcement, we have received many questions about how to race at altitude and what effects it can have on performance. The general consensus among the sports science community is that the effects from altitude are seen at altitudes above 3,500 feet. IM Lake Tahoe will be raced above 6,200 feet and definitely fits the definition of a race at altitude. Here locally, we have events in Flagstaff and Show Low that are raced at altitudes of 7,000 feet and 6,200 feet respectively.
Due to a decrease in atmospheric pressure at altitude, we take in less O2 per breath than we do at sea level. The body tries to make up for this O2 deficit by increasing the respiratory rate and the heart rate. This is an attempt by the body to increase the amount of O2 that is delivered to the muscles. The end result is that during exercise, we can expect to hit lactate threshold at slower paces and typically see higher HRs than we would at sea level.
So keep the following things in mind when racing at altitude. These are written with Olympic-distance racing in mind, but the principles can be applied across all distances. The intensities are obviously lower the longer the race distance.
Swim – Start very, very slow. Your breathing rate is fixed while swimming but there is less O2 available per breath than what you are used to. By fixed, this refers to the fact that you can only breathe every so many strokes because your head is otherwise underwater. If you are two-count breathing, you are only able to breathe once every two strokes. You cannot increase respiratory rate without increasing stroke rate. If you start too fast you will go into O2 debt (exceed LT) in about 2-4 min. We see this every year where people who start in a sprint end up breast stroking at the first buoy as they try and increase their respiratory rate (get the head out of the water) to get some more O2 into their system. Start slowly and build into the swim – stay in control. Do not sprint the first 200 meters! This applies to all racers – fast or not so fast. Be smart. Swimming at altitude is the biggest challenge you face on race day. Start slow and you will give yourself a chance.
Bike/Run – Trust your perceived exertion when racing at altitude. While HR is usually high when resting at altitude due to the body trying to get more oxygen where it needs to go, the HR is usually suppressed during exercise because we hit threshold at much lower outputs than we do at sea level. Expect to hurt the same as you would at threshold at sea level but you will be moving at a somewhat slower pace due to the decreased O2. The same goes for the run. You will hurt the same but will be moving slower than at sea level. Don't get discouraged if you see slower mph or pace/mile than what you are used to. Just push yourself at what feels like a threshold effort and you will be right on course. Realistically, racing a 7,000-foot course is probably not the likeliest place to set a PR. You can expect a decrease in performance of about 7% at 7,000 feet. This means a 45-min 10K runner will be doing well to run under 49 min at 7,000 feet. Adjust your time goals accordingly. Try to ride at what feels like threshold to just below threshold on the bike and threshold to just above threshold on the run.
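As a rough planning aid, here is a small Python sketch of the time adjustment implied by the 7%-at-7,000-feet figure quoted above. It simply scales a sea-level time; the decrement value is this article's estimate, not a physiological model.

```python
# Scale a sea-level race time by an expected altitude decrement.
# The 7% default reflects the article's estimate for 7,000 feet.

def altitude_time(sea_level_min, decrement=0.07):
    """Expected race time (minutes) at altitude."""
    return sea_level_min * (1 + decrement)

print(f"45-min 10K at 7,000 ft: about {altitude_time(45):.0f} min")  # ~48 min
```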
Be sure to drink plenty of fluids when at altitude as the increased respiratory rates can cause you to get dehydrated faster even when just walking around.
Have fun, race hard and enjoy the mountains!
1923: The first estrogen bioassay is developed. The test detects estrogenic activity in biological extracts and determines relative potencies of compounds and mixtures.

1929: Commercial production of PCBs begins in the United States in response to the electrical industry's need for a safer cooling and insulating fluid for industrial transformers and capacitors.

1938: British scientist and physician Edward Charles Dodds announces the synthesis of a chemical that acts in the body like a natural estrogen. Called DES, it is hailed by leading researchers and gynecologists as a wonder drug with a host of potential uses. (Dodds was later knighted for his scientific achievement.) Soon after Dodds invents DES, researchers in the United States begin giving the synthetic hormone to women with problem pregnancies. The massive experiment would eventually involve an estimated 4.8 million pregnant women.
1948: Paul Muller is awarded a Nobel Prize in medicine for discovering the insect-killing properties of DDT.

1950: DDT is shown to disrupt sexual development in roosters -- possibly by acting as a hormone. Scientists V.F. Lindeman and Howard Burlington find that young roosters treated with DDT fail to develop normal male sex characteristics, such as combs and wattles. The pesticide also stunts the growth of the animals' testes. These scientists note a similarity between DDT and DES, a synthetic estrogen given to women for problem pregnancies. DDT, they observe, "may exert an estrogen-like action" on the animal in question.

1952: By this date, four separate scientific studies show women treated with DES to prevent miscarriage did no better than those treated with alternatives such as bed rest or sedatives. Further analysis will show that DES actually increases the number of miscarriages, premature births, and deaths among newborns.

1962: "Silent Spring" is published. Rachel Carson's book describes health problems observed in wildlife such as eggshell thinning, deformities and population declines. Carson links these adverse effects to exposure to pesticides and other synthetic chemicals.
1963: A study shows that newborn mice receiving estrogen injections develop tissue pathologies such as cysts, cancers, and lesions. Results indicate that exposure to naturally occurring hormones early in life can produce harmful health effects, and point to possible early-life causes of cancer in adult humans.

1968: DDT is shown to be estrogenic in mammals and birds.

1971: DES is linked to vaginal cancer in daughters whose mothers had taken the drug during the first three months of pregnancy. By this date, millions of pregnant women had received prescriptions from physicians for DES. The U.S. Food and Drug Administration directs doctors not to prescribe DES to pregnant women and bans the drug for animal use.

1972: DDT use in agriculture is restricted by the U.S. Environmental Protection Agency.

1973: The International Joint Commission (IJC) for the U.S. and Canada singles out the first "Areas of Concern" in the Great Lakes region, noting extensive pollution and threats to wildlife.

1975 & 1976: DES is shown to cause developmental abnormalities in male mice and reproductive problems in humans.

1977: Use and manufacture of PCBs are restricted by the U.S. Environmental Protection Agency. PCBs continue to be manufactured and sold overseas.

1978: The Great Lakes Water Quality Agreement between the U.S. and Canada calls for virtual elimination of persistent toxic substances from the Great Lakes basin.

1979: The National Institute of Environmental Health Sciences holds a conference entitled Estrogens in the Environment I. Presented papers identify and evaluate both advertent and inadvertent hormone mimics. Manufacture of PCBs is banned in the U.S., but not their use or storage.
1982: DES is shown to cause developmental abnormalities and vaginal cancer in mice.

1983: Responding to public concern over dioxin contamination at Times Beach, Love Canal, Jacksonville and other sites, the U.S. Congress directs the EPA to conduct a National Dioxin Study to determine the extent of contamination.

1985: The National Institute of Environmental Health Sciences holds a conference called Estrogens in the Environment II: Influences on Development. Presentations address the effects of environmental estrogens on puberty in young children. Also noted are the ubiquitous nature of the contaminants, their potency and their potential impact on public and environmental health. EPA's Dioxin Risk Assessment classifies dioxin as a known animal and probable human carcinogen, setting the lowest "safe exposure level" on record. Eight Great Lakes states develop remedial action plans to address environmental damage seen in IJC-targeted "Areas of Concern."

1986: Documents are leaked to Greenpeace showing the EPA agreed to demands from the paper industry to keep results of the National Dioxin Survey secret. Under threat of lawsuit, the EPA releases the National Dioxin Survey. The study finds dioxin is present in discharge from paper mills and in finished paper products (due to chlorine bleaching of white paper). The paper industry pressures the EPA to reconsider its 1985 Dioxin Risk Assessment in hopes of obtaining a less damaging judgment on dioxin's health effects.

1988: The EPA begins its first reassessment of dioxin.
1990: The EPA and the Chlorine Institute (an industry group) co-sponsor the Banbury Conference on Dioxin, which takes place on Long Island, New York. Conference attendees reach a consensus on dioxin's probable mechanism of action. Theo Colborn co-authors "Great Lakes, Great Legacy?," detailing developmental, reproductive, metabolic and behavioral damage to wildlife from persistent chemical pollutants. The Fifth Biennial Report of the IJC puts the threat in plain language, saying that the principal danger of persistent organochlorine chemicals is to the fetus. Environmental groups around the Great Lakes form the Zero Discharge Alliance to oppose production of bioaccumulative toxic substances.

1991: Theo Colborn helps organize a conference called "Chemically Induced Alterations in Sexual Development: The Wildlife-Human Connection," held at Wingspread in Racine, Wisconsin. For the first time, scientists from many disciplines are brought together to discuss concerns about endocrine-disrupting chemicals in the environment. Participants present evidence that compounds may have deleterious effects on sexual development in a variety of wildlife species. Possible impacts include reproductive system abnormalities, reduced fertility, behavioral abnormalities, and population declines -- particularly in birds. Researchers Ana Soto and Carlos Sonnenschein report that some plastic compounds widely used in a variety of consumer products are estrogenic in laboratory tests. The Chlorine Institute (an industry group) prematurely issues a press release stating that below a certain threshold of exposure, dioxin has no adverse effects; the group falsely claims that this was the consensus of the Banbury Conference. EPA administrator Bill Reilly states publicly that dioxin seems less dangerous than previously thought. He initiates a second EPA reassessment of dioxin. Greenpeace tours 40 Great Lakes cities by boat in preparation for the upcoming IJC meeting in Traverse City, Michigan. The publicity campaign focuses on the goal of zero dioxin discharge by the paper industry. Greenpeace distributes a report entitled "The Product is the Poison: The Case for a Chlorine Phase-Out."

1992: The Sixth Biennial Report of the IJC calls for a phase-out of chlorine as an industrial feedstock. Drinking water and pharmaceutical uses are exempted. Environmental groups and industry are surprised by this wide-reaching recommendation. Physician Niels Skakkebaek publishes a paper demonstrating that human sperm counts may have declined 50 percent over the last 50 years.

1993: Referring to the perceived decrease in human sperm counts, scientist Lou Guillette tells the U.S. Congress, "Every man sitting in this room today is half the man his grandfather was, and the question is, are our children going to be half the men we are?" A link between environmental estrogens and male reproductive problems is hypothesized in scientific papers. The Chemical Manufacturers Association forms the Chlorine Chemistry Council (CCC) to promote the industry's agenda in the debate over chlorine chemistry. CCC launches a public relations campaign, including television advertisements asserting the need for chlorine.
1994: The EPA releases a Public Review Draft of its Dioxin Reassessment, covering dioxin, dioxin-like PCBs and furans. The report concludes that these chemicals cause harm at exposure levels similar to those seen in the general public. In addition to cancer, potential damage is seen to the immune, nervous and reproductive systems.

1995: The National Academy of Sciences and National Research Council sponsor a panel study called "Hormone Related Toxicants in the Environment." The EPA's Science Advisory Board reviews the draft of the Dioxin Reassessment.

1996: The topic of endocrine disrupters is popularized with the publication of "Our Stolen Future," co-authored by Theo Colborn with an introduction by U.S. Vice President Al Gore. President Clinton signs the Food Quality Protection Act and amendments to the Safe Drinking Water Act, establishing the EPA's Endocrine Disruptor Screening and Testing Advisory Committee (EDSTAC). EDSTAC is a unique advisory committee of 40 members from industry, academia, government and environmental groups. It is charged by Congress to develop a chemical screening program for endocrine disruptors by 1998, and to implement the program by August 1999. Scientist Lou Guillette publishes his finding that male alligators in Florida's Lake Apopka have strikingly low levels of testosterone and abnormally small phallus size; pesticide residues in this contaminated lake appear to have "feminized" the alligators there. Psychologists Sandra and Joe Jacobson report that children exposed to high levels of PCBs before birth have as much as a 6.2-point IQ deficit later in childhood. Dr. Harry Fisch publishes a study refuting any decline in U.S. sperm counts. He finds, instead, striking geographical variation in sperm counts across the U.S. While sperm counts remained constant in a given region between 1970 and 1994, New York had higher counts than Minnesota, which had higher counts than California. Fisch thinks that the geographical variation may have confused other research that, in 1992, showed a worldwide decline in human sperm counts.
1997: Work by researcher Fred vom Saal shows that bisphenol-A, a component of polycarbonate plastic, can alter the reproductive development of lab mice at extremely low doses. Bisphenol-A mimics the natural sex hormone estrogen. Male mice exposed to this plastic during fetal development have permanently enlarged prostates and lower sperm counts. The effects occur at doses near those that humans are exposed to each day from sources like food packaging and dental sealants. A study by the Centers for Disease Control and Prevention shows that hypospadias, a hormone-dependent genital defect, is on the rise in baby boys. The National Institute of Environmental Health Sciences (HHS) holds its fourth major conference on estrogens in the environment in Arlington, VA. Numerous scientific papers and reports are presented on toxicology, risk assessment and research for this emerging health concern. Tulane University scientists retract an environmental estrogen study published in a June 1996 issue of Science. The report had claimed that combinations of pesticides were as much as 1,600 times more potent as environmental estrogens than the individual pesticides; the results couldn't be replicated and the study was withdrawn.
1998: The National Academy of Sciences Institute of Medicine is expected to issue its report on hormone-related toxicants in the environment. The NAS panel will critically review the literature, identify known and suspected impacts on fish, wildlife and humans, and recommend research, monitoring and testing priorities, among other activities. By August, the EPA committee EDSTAC is mandated to develop recommendations on how to screen and test chemicals for their potential to disrupt hormone function in humans and wildlife; EDSTAC's final plenary session is set for June 17-18 in Washington, D.C. A research paper published in the Journal of the American Medical Association reports that the proportion of males to females born has been declining in the U.S. and Canada since the 1970s, and in Denmark and the Netherlands between 1950 and 1994. The study's authors suggest that endocrine disruptors may play a role, pointing to increased numbers of male reproductive disorders. When the study is reported in the popular press, some scientists downplay the significance of the reported trend. Vice President Al Gore urges the chemical industry to voluntarily release vital health information about thousands of commonly used chemicals, saying such a move would "empower citizens with new knowledge" to safeguard their neighborhoods against potential chemical hazards. The United Nations Environment Programme plans to hold a meeting in late June in Montreal to expand throughout the world an agreement to ban, phase out or limit the production of Persistent Organic Pollutants (POPs). POPs are chemical substances that persist in the environment, bioaccumulate through the food web, and pose a risk of causing adverse effects to human health and the environment. They include aldrin, dieldrin, endrin, chlordane, DDT, heptachlor, hexachlorobenzene, mirex, toxaphene, PCBs, dioxins, and furans. On Earth Day, the Chemical Manufacturers Association announces it will urge its members to voluntarily increase their health effects testing program of industrial chemicals to 100 chemicals a year by 2003.
Scripps Research Institute Scientists Convert Human Skin Cells into Sensory Neurons
LA JOLLA, CA—November 24, 2014—A team led by scientists from The Scripps Research Institute (TSRI) has found a simple method to convert human skin cells into the specialized neurons that detect pain, itch, touch and other bodily sensations. These neurons are also affected by spinal cord injury and involved in Friedreich’s ataxia, a devastating and currently incurable neurodegenerative disease that largely strikes children.
The discovery allows this broad class of human neurons and their sensory mechanisms to be studied relatively easily in the laboratory. The “induced sensory neurons” generated by this method should also be useful in the testing of potential new therapies for pain, itch and related conditions.
“Following on the work of TSRI Professor Ardem Patapoutian, who has identified many of the genes that endow these neurons with selective responses to temperature, pain and pressure, we have found a way to produce induced sensory neurons from humans where these genes can be expressed in their ‘normal’ cellular environment,” said Associate Professor Kristin K. Baldwin, an investigator in TSRI’s Dorris Neuroscience Center. “This method is rapid, robust and scalable. Therefore we hope that these induced sensory neurons will allow our group and others to identify new compounds that block pain and itch and to better understand and treat neurodegenerative disease and spinal cord injury.”
The report by Baldwin’s team appears as an advance online publication in Nature Neuroscience on November 24, 2014.
In Search of a Better Model
The neurons that can be made with the new technique normally reside in clusters called dorsal root ganglia (DRG) along the outer spine. DRG sensory neurons extend their nerve fibers into the skin, muscle and joints all over the body, where they variously detect gentle touch, painful touch, heat, cold, wounds and inflammation, itch-inducing substances, chemical irritants, vibrations, the fullness of the bladder and colon, and even information about how the body and its limbs are positioned. Recently these neurons have also been linked to aging and to autoimmune disease.
Because of the difficulties involved in harvesting and culturing adult human neurons, most research on DRG neurons has been done in mice. But mice are of limited use in understanding the human version of this broad “somatosensory” system.
“Mouse models don’t represent the full diversity of the human response,” said Joel W. Blanchard, a PhD candidate in the Baldwin laboratory who was co-lead author of the study with Research Associate Kevin T. Eade.
A New Identity
For the new study, the team used a cell-reprogramming technique (similar to those used to reprogram skin cells into stem cells) to generate human DRG-type sensory neurons from ordinary skin cells called fibroblasts.
To start, the scientists examined previous experiments and identified several transcription factors—managerial proteins that switch on the activity of large sets of genes—that seemed crucial to the ability of immature neurons to develop into adult sensory neurons. They found that the combination of the transcription factors Brn3a plus Ngn1, or Brn3a plus Ngn2, reprogrammed a significant percentage of the embryonic mouse fibroblasts into what looked—and acted—like mature DRG-type sensory neurons.
“We added compounds including capsaicin, which activates pain receptors on DRG neurons, and menthol, which activates cold receptors, and saw subsets of our induced neurons light up with activity just as real DRG neurons would,” said Eade.
Remarkably, although mouse studies had indicated that different transcription factors were differently important for generating pain and itch sensing neurons versus pressure and limb position neurons, in the dish these factors produced equal numbers of each of the three main subtypes.
A Step Toward ‘Personalized Medicine’
Using the same recipes of transcription factors, the team was able to convert adult human fibroblasts, which are harder to reprogram, into DRG neurons. The conversion rate was lower, but the induced neurons seemed just as much like their natural counterparts as those produced from embryonic mouse fibroblasts.
“We can definitely scale up the numbers of these induced neurons as needed,” Blanchard said.
The feat means that scientists now can relatively easily study DRG sensory neurons derived from many different people, to better understand the diversity of human sensory responses and sensory disorders and advance a “personalized medicine” approach. “We can start to understand how individuals respond uniquely to pain, cold, itch and so on,” said Blanchard.
Other co-authors of the study, “Selective conversion of fibroblasts into peripheral sensory neurons,” were Valentina Lo Sardo, Rachel K. Tsunemoto, Daniel Williams, and Pietro Paolo Sanna, all of TSRI; and Attila Szűcs of the University of California San Diego, who performed many of the electrical tests on the induced neurons.
Support for the research came from the Dorris Neuroscience Center, the California Institute of Regenerative Medicine, the Baxter Family Foundation, the Del Webb Foundation, The Norris Foundation, Las Patronas, the National Institutes of Health (National Institute on Drug Abuse [DA031566], National Institute on Deafness and other Communication Disorders [DC012592] and National Institute of Mental Health [MH102698]), the National Science Foundation and the Andrea Elizabeth Vogt Memorial Award.
About The Scripps Research Institute
The Scripps Research Institute (TSRI) is one of the world's largest independent, not-for-profit organizations focusing on research in the biomedical sciences. TSRI is internationally recognized for its contributions to science and health, including its role in laying the foundation for new treatments for cancer, rheumatoid arthritis, hemophilia, and other diseases. An institution that evolved from the Scripps Metabolic Clinic founded by philanthropist Ellen Browning Scripps in 1924, the institute now employs more than 2,500 people on its campuses in La Jolla, CA, and Jupiter, FL, where its renowned scientists—including two Nobel laureates and 20 members of the National Academy of Science, Engineering or Medicine—work toward their next discoveries. The institute's graduate program, which awards PhD degrees in biology and chemistry, ranks among the top ten of its kind in the nation. For more information, see www.scripps.edu.
GRAVITY BALLS Tell EXACT TIME (Nov, 1931)
GRAVITY BALLS Tell EXACT TIME
A CLOCK without hands, which tells the exact time by the rolling down hill of steel balls, has been perfected by a Philadelphia inventor. It required twenty years to discover the secret of accuracy in rolling the balls mile after mile, but on a recent three months’ test run the clock showed a gain of only a few minutes.
The steel balls are automatically released at the top and travel in relays to the bottom on a track made of two fine music wires. The mechanism starts a new ball every 2-1/2 minutes. When a ball reaches the 60 mark it enters a trip cage which lowers it onto the hour beam. When 60 balls have descended to the beam, it tilts and turns the indicator to the next hour. The device is especially adaptable to window advertising displays.
Frequently Asked Questions
18. Which surgeries and vascular interventions are used to treat stroke?
Surgery can be used to prevent stroke, to treat stroke, or to repair damage to the blood vessels or malformations in and around the brain.
- Carotid endarterectomy is a surgical procedure in which a surgeon removes fatty deposits, or plaque, from the inside of one of the carotid arteries. The procedure is performed to prevent stroke. The carotid arteries are located in the neck and are the main suppliers of blood to the brain.
In addition to surgery, a variety of techniques have been developed to allow certain vascular problems to be treated from inside the artery using specialized catheters with the goal of improving blood flow. (Vascular is a word that refers to blood vessels, arteries, and veins that carry blood throughout the body.)
A catheter is a very thin, flexible tube that can be inserted into one of the major arteries of the leg or arm and then directed through the blood vessels to the diseased artery. Physicians trained in this technique called angiography undergo additional training to treat problems in the arteries of the brain or spinal cord. These physicians are called neurointerventionalists.
- Angioplasty is widely used by angiographers to open blocked heart arteries, and is also used to prevent stroke. Angioplasty is a procedure in which a special catheter is inserted into the narrowed artery and then a balloon at the tip of the catheter is inflated to open the blocked artery. The procedure improves blood flow to the brain.
- Stenting is another procedure used to prevent stroke. In this procedure an angiographer inserts a catheter into the artery in the groin and then positions the tip of the catheter inside the narrowed artery. A stent is a tube-like device made of a mesh-like material that can be slipped into position over the catheter. When positioned inside the narrowed segment the stent is expanded to widen the artery and the catheter is removed. Angioplasty or stenting of the carotid artery can cause pieces of the diseased plaque to loosen. An umbrella-like device is often temporarily expanded beyond the narrowed segment to catch these pieces and prevent them from traveling to the brain.
- Angiographers also sometimes use clot removal devices to treat stroke patients in the very early stage. One device involves threading a catheter through the artery to the site of the blockage and then vacuuming out the clot. Another corkscrew-like device can be extended from the tip of a catheter and used to grab the clot and pull it out. Drugs can also be injected through the catheter directly into the clot to help dissolve the clot.
Carbohydrates: Sugar, Starch, and Fiber
This article will cover the basics of carbohydrates (“carbs”) — what they are, why we need them, and which types are better for our health.
What Are Carbohydrates?
Carbohydrates are organic compounds that contain single, double, or multiple sugar units. Simple sugars are only one or two sugar units long and are typically sweet tasting whereas complex carbohydrates are thousands of sugar units long and have a starchy taste. For more information about the chemistry of carbs, see Biology Online.
All digestible simple sugars and starches eventually get converted to glucose in our body. Most types of cells use glucose as their main fuel source. After we eat sugars or starches, our blood glucose level rises. This signals our body to produce insulin, a hormone, so that cells can take the glucose out of the bloodstream and use it for energy. Excess glucose will be stored as glycogen in our liver and muscle. If there is still excess glucose after maxing out glycogen storage, it will be converted and stored as body fat. Eating too much sugar or starch of any type can cause you to gain weight.
Sometimes people get confused as to how simple sugars and starches affect blood glucose. Please read “Tips for Managing Diabetes” if you would like more information about carbohydrates and diabetes.
Fiber is a non-digestible complex carbohydrate. Our gut does not possess the enzymes needed to break apart the links between sugar units. Undigested fiber travels through our gut and while doing so, provides health benefits. Fiber also encourages growth of healthful bacteria in our lower gut. Benefits come from two different types of plant fibers that are classified based upon whether or not they dissolve in water (soluble) or not (insoluble). It is important to consume both types of fiber for maximum health benefits.
| Fiber type | Benefit | Food sources |
| --- | --- | --- |
| Insoluble | Regularity (relieves constipation); lower risk of diverticulosis (gut pouches that get inflamed) | Bran from grains/cereals, skins and seeds from fruits and vegetables, dried beans/peas, brown rice |
| Soluble | Helps reduce straining with excretion; binds cholesterol in the gut; helps blunt the rise of blood glucose after a meal | Fleshy part of fruits and vegetables, oats, dried beans/peas |
How Much Fiber Is Enough?
The Dietary Reference Intake (DRI) for total fiber intake for adults is 14 grams per 1,000 calories of intake. If you do not want to do the math, then the DRI for standard intakes is:
25 grams for women
38 grams for men
The DRI is for total fiber; there is no breakdown by type of fiber. However, to help reduce blood cholesterol levels, the National Institutes of Health (publication No. 06-5235) recommends 5–10 grams per day of soluble fiber.
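For those who do want to do the math, the 14 g per 1,000 calories rule is a one-liner. A minimal Python sketch (the function name and the 2,000-calorie example are ours, for illustration only):

```python
def fiber_goal_grams(daily_calories: float) -> float:
    """DRI rule of thumb: 14 g of total fiber per 1,000 calories eaten."""
    return 14 * daily_calories / 1000

# A 2,000-calorie diet works out to 28 g/day, which sits between the
# standard figures of 25 g (women) and 38 g (men).
print(fiber_goal_grams(2000))  # 28.0
```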
One simple way to meet fiber goals is to eat three or more servings of whole grains and five or more servings of fruits and vegetables daily. Vegetables include both non-starchy and starchy types, as well as dried beans and peas. With this strategy, you consume a variety of healthful foods that provide both types of fiber.
You can select fiber as one of your nutrients to track in the PLAN section on the web. Once you select it, you can enter your goal and it will show up on reports. As well, fiber will be displayed in the “Day Nutrition Summary” at the bottom of the meals screen within your iPhone app.
Current Sugar Guidelines
Last year, the American Heart Association published recommended limits for intake of added sugars as a means to help reduce risk for heart disease. The current recommended limits for added sugars are:
100 calories for women (25 grams or 6 teaspoons)
150 calories for men (38 grams or 9 teaspoons)
Added vs. Natural Sugar?
The guidelines specifically refer to added sugars: table sugar, honey, natural syrups (e.g. agave, maple, and molasses), commercial syrups (e.g. high fructose corn syrup), and concentrated fruit sugars added to foods to sweeten or preserve. The naturally occurring sugars in milk, fresh fruit, dried and frozen fruit without added sugar and 100% fruit juice are not considered added sugars. Artificial sweeteners and sugar alcohols are also not considered added sugars.
Sugar grams listed on the Nutrition Facts panel include both naturally occurring sugars and added sugars. This might change in the future with a new labeling law, but for now, you have to look at the ingredient list to find added sugars. Names for added sugars are numerous; MyPyramid maintains a list of commonly used names.
If you choose to select “Sugars” as a nutrient to track, just be aware that the value will include both naturally occurring and added sugars (total sugars). The added-sugar limit of 25–38 grams should not be entered as a MyNetDiary goal, as it would create a falsely low limit. Remember, there are naturally occurring sugars in almost all foods. A more appropriate goal for total sugars is 25% of total calories (e.g., 125 grams for a 2,000-calorie diet).
Hidden Sources of Added Sugars in “Healthy Foods”
It is easy to identify regular soda pop and energy drinks as examples of empty calories, but what about sugary foods and drinks that also have nutrients? Choose brands that have less added sugar than their rivals. Or, select unsweetened versions. See the list below for nutritious foods that often have too much added sugar:
- Yogurt – regularly sweetened and frozen
- Chocolate milk, sweetened soy milk or rice milk
- Breakfast cereals – especially granola
- Oatmeal – sweetened instant flavors
Sweet Tooth Tip
One teaspoon of added sugar is about 4 grams. If you add your own sweetener to unsweetened foods and drinks, you can control added sugar more easily than buying presweetened foods.
Healthful carbohydrates are those that provide nutrients while limiting fat, sodium, and added sugar. The simplest way to consume healthier carbohydrates is to choose unprocessed whole grains, cereals, and starchy vegetables, fresh fruit, dried beans and peas, and skim milk or low-sugar soy/rice milk. If you choose these instead of refined versions, you should be able to meet your fiber goals while also limiting added sugars, sodium, and fat. Carbs are not a dieter’s enemy – but you do have to choose wisely.
If you have questions about this topic then ask them in MyNetDiary Community Forum!
Medline Plus. “Carbohydrates.” Access online at: http://www.nlm.nih.gov/medlineplus/carbohydrates.html
MyNetDiary. “About Low Carb Diets.” Access online at: http://www.mynetdiary.com/diet-and-weight-loss-resources.html
National Heart Lung Blood Institute. “Food sources of soluble fiber.” Access online at: http://www.nhlbi.nih.gov/chd/Tipsheets/solfiber.htm
Harvard School of Public Health. “Fiber: start roughing it.” Access online at: http://www.hsph.harvard.edu/nutritionsource/what-should-you-eat/fiber-full-story/index.html
American Heart Association. “Whole grains and fiber.” Access online at: http://www.americanheart.org/presenter.jhtml?identifier=4574
Harvard School of Public Health. “Time to focus on healthier drinks.” Access online at: http://www.hsph.harvard.edu/nutritionsource/healthy-drinks/focus/index.html
This article can be found at http://www.mynetdiary.com/carbs-in-weight-loss.html
Evidence of Ceremonial Bridge Found in Japan
Thursday, January 28, 2016
SAKAI, JAPAN—Five new boreholes for piers thought to have supported a massive bridge have been found at the Nisanzai Kofun burial mound in Japan’s Osaka Prefecture. The bridge is estimated to have been nearly 40 feet wide, 150 feet long, and aligned with the center of the keyhole-shaped mound. “It seems likely that people stood by on both sides of the bridge while a temporary casket for the body was taken into the tomb. It gives us clues as to how ancient burial rites were performed at giant burial mounds,” Taichiro Shiraishi of the Chikatsu Asuka Museum told The Asahi Shimbun. The bridge, thought to have been used in the late fifth century, would have been torn down after the ceremony. “It is unlikely that a structure of this kind was unique to this burial mound. If they get the chance, we hope researchers will investigate other large tombs as well,” Shiraishi said. To read more about archaeology and Japanese history, go to "Khubilai Khan Fleet."
Report by Sudeshna Basu, Georgia State University, Fall 1997
Over the past decade or so, businesses have accumulated huge amounts of data in large databases. These stockpiles mainly contain customer data, but the data's hidden value--the potential to predict business trends and customer behavior--has largely gone untapped.
To convert this potential value into strategic business information, many companies are turning to data mining, a growing technology based on a new generation of hardware and software. Data mining combines techniques including statistical analysis, visualization, decision trees, and neural networks to explore large amounts of data and discover relationships and patterns that shed light on business problems. In turn, companies can use these findings for more profitable, proactive decision making and competitive advantage. Although data mining tools have been around for many years, data mining became feasible in business only after advances in hardware and software technology came about.
Hardware advances--reduced storage costs and increased processor speed--paved the way for data mining's large-scale, intensive analyses. Inexpensive storage also encouraged businesses to collect data at a high level of detail, consolidated into records at the customer level.
Software advances continued data mining's evolution. With the advent of the data warehouse, companies could successfully analyze their massive databases as a coherent, standardized whole. To exploit these vast stores of data in the data warehouse, new exploratory and modeling tools--including data visualization and neural networks--were developed. Finally, data mining incorporated these tools into a systematic, iterative process.
SAS Institute understands the key issues and challenges facing businesses today--including the need to control costs, build up customer relationships, and create and sustain a competitive advantage.
SAS Institute defines data mining as the process of selecting, exploring, and modeling large amounts of data to uncover previously unknown patterns for a business advantage. As a sophisticated decision support tool, data mining is a natural outgrowth of a business' investment in data warehousing. The data warehouse provides a stable, easily accessible repository of information to support dynamic business intelligence applications.
As the next step, organizations employ data mining to explore and model relationships in the large amounts of data in the data warehouse. Without the pool of validated and "scrubbed" data that a data warehouse provides, the data mining process requires considerable additional effort to pre-process data.
Although the data warehouse is an ideal source of data for data mining activities, the Internet can also serve as a data source. Companies can take data from the Internet, mine the data, and distribute the findings and models throughout the company via an Intranet.
There's gold in your data, but you can't see it. It may be as simple (and wealth-producing) as the realization that baby-food buyers are probably also diaper purchasers. It may be as profound as a new law of nature. But no human who's looked at your data has seen this hidden gold. How can you find it?
Data mining lets the power of computers do the work of sifting through your vast data stores. Tireless and relentless searching can find the tiny nugget of gold in a mountain of data slag.
In "The Data Gold Rush," Sara Reese Hedberg shows the already wide variety of uses for the relatively young practice of data mining. From analyzing customer purchases to analyzing Supreme Court decisions, from discovering patterns in health care to discovering galaxies, data mining has an enormous breadth of applications. Large corporations are rushing to realize the potential payoffs of data mining, both in the data itself and in marketing their proprietary tools.
In "A Data Miner's Tools," Karen Watterson explains the three categories of software to perform data mining. Query-and-reporting tools, in vastly simplified and easier-to-use forms, require close human direction and data laid out in databases or other special formats. Multidimensional analysis (MDA) tools demand less human guidance but still need data in special forms. Intelligent agents are virtually autonomous, are capable of making their own observations and conclusions, and can handle data as free-form as paragraphs of text.
"Data Mining Dynamite" by Cheryl D. Krivda shows how to facilitate the data-mining process. Data is handled far faster after it has been cleansed of unnecessary fields and stored in more convenient forms. Housing data in data warehouses reduces the load on production mainframes and supports client/server analysis. Parallel computing speeds the search process with multiple simultaneous queries. And any activity handling this volume of data requires consideration of physical storage options.
In the short term, the results of data mining will be profitable, if mundane, business-related consequences. Micro-marketing campaigns will explore new niches. Advertising will target potential customers with new precision.
In the not-too-long term, data mining may become as common and easy to use as E-mail. We may direct our tools to find the best airfare to the Grand Canyon, root out a phone number for a long-lost classmate, or find the best prices on lawn mowers. The software will figure out where to look, how to evaluate what it finds, and when to quit. Our knowledge helpers may become as indispensable as the telephone.
But it's the long-term prospects of data mining that are truly breathtaking. Imagine intelligent agents being turned loose on medical-research data or on subatomic-particle information. Computers may reveal new treatments for diseases or new insights into the nature of the universe. We may well see the day when the Nobel prize for a great discovery is awarded to a search algorithm.
The amount of information stored in databases is exploding. From zillions of point-of-sale transactions and credit card purchases to pixel-by-pixel images of galaxies, databases are now measured in gigabytes and terabytes. In today's fiercely competitive business environment, companies need to rapidly turn those terabytes of raw data into significant insights to guide their marketing, investment, and management strategies.
It would take many lifetimes for an analyst to pore over 2 million books -- the equivalent of a terabyte -- to glean important trends. But analysts have to. For instance, Wal-Mart, the chain of over 2000 retail stores, uploads 20 million point-of-sale transactions every day to an AT&T massively parallel system with 483 processors running a centralized database. At corporate headquarters, they want to know trends down to the last Q-Tip.
Luckily, computer techniques are now being developed to assist analysts in their work. Data mining (DM), or knowledge discovery, is the computer-assisted process of digging through and analyzing enormous sets of data and then extracting the meaning of the data nuggets. DM is being used both to describe past trends and to predict future trends.
Mining and Refining Data
Experts involved in significant DM efforts agree that the DM process must begin with the business problem. Since DM is really providing a platform or workbench for the analyst, understanding the job of the analyst logically comes first. Once the DM system developer understands the analyst's job, the next step is to understand those data sources that the analyst uses and the experience and knowledge the analyst brings to the evaluation.
The DM process generally starts with collecting and cleaning information, then storing it, typically in some type of data warehouse or datamart (see figure below). But in some of the more advanced DM work, such as that at AT&T Bell Labs, advanced knowledge-representation tools can logically describe the contents of databases themselves, then use this mapping as a meta-layer to the data. Data sources are typically flat files of point-of-sale transactions and databases of all flavors. There are experiments underway in mining other data sources, such as IBM's project in Paris to analyze text straight off the newswires.
THE DATA MINING PROCESS
DM tools search for patterns in data. This search can be performed automatically by the system (a bottom-up dredging of raw facts to discover connections) or interactively with the analyst asking questions (a top-down search to test hypotheses). A range of computer tools -- such as neural networks, rule-based systems, case-based reasoning, machine learning, and statistical programs -- either alone or in combination can be applied to a problem.
Typically with DM, the search process is iterative, so that as analysts review the output, they form a new set of questions to refine the search or elaborate on some aspect of the findings. Once the iterative search process is complete, the data-mining system generates report findings. It is then the job of humans to interpret the results of the mining process and to take action based on those findings.
AT&T, A.C. Nielsen, and American Express are among the growing ranks of companies implementing DM techniques for sales and marketing. These systems are crunching through terabytes of point-of-sale data to aid analysts in understanding consumer behavior and promotional strategies. Why? To increase profitability, of course.
Similarly, financial analysts are plowing through vast sets of financial records, data feeds, and other information sources in order to make investment decisions. Health-care organizations are examining medical records in order to understand trends of the past; they hope this information can help reduce their costs in the future. Major corporations such as General Motors, GTE, Lockheed, Microsoft, and IBM all have R&D groups working on proprietary advanced DM techniques and applications.
Hardware and software vendors are extolling the DM capabilities of their products -- whether they have true DM capabilities or not. This hype cloud is creating much confusion about data mining. In reality, data mining is the process of sifting through vast amounts of information in order to extract meaning and discover new knowledge.
It sounds simple, but the task of data mining has quickly overwhelmed traditional query-and-report methods of data analysis, creating the need for new tools to analyze databases and data warehouses intelligently. The products now offered for DM range from on-line analytical processing (OLAP) tools, such as Essbase (Arbor Software) and DSS Agent (MicroStrategy), to DM tools that include some AI techniques, such as IDIS (Information Discovery System, from IntelligenceWare) and the Database Mining Workstation (HNC Software), to the new vertically targeted advanced DM tools, such as those from AT&T Global Information Solutions.
Many people argue that the OLAP tools are not "true" mining tools; they're fancy query tools, they say. Since these programs perform sophisticated data access and analysis by rolling up numbers along multiple dimensions, some analysts still include them in the category of top-down mining tools. The market has yet to see much in the way of more-advanced mining tools, although the spigot is being turned on by application-specific DM tools from AT&T, Lockheed, and GTE.
One major DM trend is the move toward powerful application-specific mining tools. "There is a trade-off in the generality of data-mining tools and ease of use," observes Gregory Piatetsky-Shapiro, principal investigator of the Knowledge Discovery in Databases Project at GTE Laboratories. "General tools are good for those who know how to use them, but they really require lots of knowledge to use them."
AT&T, for example, recently introduced Sales & Marketing Solution Packs to mine data warehouses. They're tailored to vertical markets in retail, financial, communications, consumer-goods manufacturing, transportation, and government. These programs provide about 70 percent of the solution, with final tailoring required to fit the individual client's needs, AT&T says. Complete with AT&T parallel hardware, software, and some services, Solution Packs start at around $250,000.
Both GTE and Lockheed Martin may shortly follow suit. GTE is already entertaining proposals to turn its Health-KEFIR (KEy FIndings Reporter) into a commercial product. The Artificial Intelligence Research group at Lockheed Martin has been investigating and developing DM tools for the past 10 years. Recently, the Lockheed group built an internal application-development tool, called Recon, that generalizes their DM techniques, then applied it to application-specific problems. A beta version of the first vertical packages -- for finance and marketing -- will be available in 1996. The system has an open architecture, running on Unix platforms and massively parallel supercomputers. It interfaces with existing relational database management systems, financial databases, proprietary databases, data feeds, spreadsheets, and ASCII files.
In a similar vein, several neural network tools have been customized. Customer Insight Co., for instance, has built an interface to link its Analytix marketing software with HNC Software's neural network-based Database Mining Workstation, creating a marketing DM hybrid. HNC Software's Falcon detects credit-card fraud; according to HNC, the program is watching millions of charge accounts.
Invasion of the Data Snatchers
The need for DM tools is growing as fast as data stores swell. More-sophisticated DM products are beginning to appear that perform bottom-up as well as top-down mining. The day is probably not too far off when intelligent agent technology will be harnessed for the mining of vast public on-line sources, traversing the Internet, searching for information, and presenting it to the human user. Microelectronics and Computer Technology Corp. (MCC, Austin, TX) has been pioneering work in this area, developing a platform, called Carnot, for its consortium members. Carnot-based agents have been successfully applied to both top-down and bottom-up DM of distributed heterogeneous databases at Eastman Chemical.
"Data mining is evolving from answering questions about what has happened and why it happened," observes Mark Ahrens, director of custom software sales at A.C. Nielsen. "The next generation of DM is focusing on answering the question `How can I fix it?' and making very specific recommendations. That's our focus now -- our Holy Grail." Meanwhile, the gold rush is on.
Data mining is the search for relationships and global patterns that exist in large databases but are `hidden' among the vast amounts of data, such as a relationship between patient data and their medical diagnosis. These relationships represent valuable knowledge about the database and the objects in the database and, if the database is a faithful mirror, about the real world registered by the database. One of the main problems for data mining is that the number of possible relationships is very large, thus prohibiting the search for the correct ones by simply validating each of them. Hence, we need intelligent search strategies, as taken from the area of machine learning. Another important problem is that information in data objects is often corrupted or missing. Hence, statistical techniques should be applied to estimate the reliability of the discovered relationships.
TOOLS AND TECHNIQUES
Data visualization software is one of the most versatile tools for data mining exploration. It enables you to visually interpret complex patterns in multidimensional data. By viewing data summarized in multiple graphical forms and dimensions, you can uncover trends and spot outliers intuitively and immediately.
In the data mining process, visualization tools help you explore data before modeling--and verify the results of other data mining techniques. Visualization tools are particularly useful for detecting patterns found in only small areas of the overall data.
Neural networks (also known as artificial neural networks, or ANNs) extend data mining's predictive power with their ability to learn from your data. Neural networks are software applications that mimic the neurophysiology of the human brain, in the sense that they learn from examples to find patterns in data.
Often presented as a "black box" approach to data mining, neural networks are useful when parametric models are difficult to construct. Neural networks are also key data mining tools when the emphasis is on predicting rather than explaining complex patterns.
How Neural Networks Learn
Neural networks consist of a number of neurons that are interconnected--often in complex ways--and then organized into layers. Neurons are very simple processing units that compute a linear combination of a number of inputs and then perform a simple mathematical process on the result to produce an output.
Neural networks learn to predict outputs by training themselves on sample data. The network reads the sample data and iteratively adjusts the network's weights to produce optimum predictions. Then the neural network applies its "knowledge" to data being mined. Most neural networks learn by applying some sort of training rule that adjusts the synaptic connections between neurons in various layers of the network on the basis of presented patterns.
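To make that training loop concrete, here is a toy single neuron fitted by iterative weight adjustment in plain Python. This is a generic sketch of the idea, not the algorithm of any product named in this report:

```python
import random

def train_neuron(samples, epochs=200, lr=0.05):
    """Fit one linear neuron (output = w.x + b) by repeatedly nudging the
    weights against the prediction error on each sample."""
    n_inputs = len(samples[0][0])
    weights = [random.uniform(-0.5, 0.5) for _ in range(n_inputs)]
    bias = 0.0
    for _ in range(epochs):
        for inputs, target in samples:
            output = sum(w * x for w, x in zip(weights, inputs)) + bias
            error = output - target
            # Gradient step: move each weight against the error it caused.
            weights = [w - lr * error * x for w, x in zip(weights, inputs)]
            bias -= lr * error
    return weights, bias

# Learn y = 2x + 1 from four examples; weights converge near [2.0], 1.0.
data = [([0.0], 1.0), ([1.0], 3.0), ([2.0], 5.0), ([3.0], 7.0)]
print(train_neuron(data))
```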
Tree-based models--which include classification and regression trees--are the most common induction tools used in data mining. Tree-based models automatically construct decision trees from data, yielding a sequence of rules, such as "If income is greater than $60,000, assign the customer to this segment."
Like neural networks, tree-based models can detect nonlinear relationships automatically, giving these tools an advantage over linear models. Tree-based models are also good at selecting important variables, and therefore work well when many of the predictors are partially irrelevant.
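As an illustration of the rule output described above, this sketch grows a small tree with scikit-learn (a modern library, not one of the tools surveyed in this report) on invented customer data and prints its income-style rules:

```python
from sklearn.tree import DecisionTreeClassifier, export_text

# Invented customer table: [income ($000s), age] -> responded to the offer?
X = [[25, 30], [40, 45], [65, 35], [80, 50], [95, 28], [30, 60]]
y = [0, 0, 1, 1, 1, 0]

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# export_text prints rules of exactly the kind described above,
# e.g. "|--- income <= 52.50" leading to the non-responder segment.
print(export_text(tree, feature_names=["income", "age"]))
```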
Data mining employs a variety of traditional statistical methods such as cluster analysis, discriminant analysis, logistic regression, and time series forecasting. For model fitting and validation, data mining also uses more general statistical methods that conduct automated searches for complex relationships and apply fresh data to tentative relationships.
Traditional statistics give users the additional tools to examine data more closely and comprehend underlying patterns and relationships. Unlike tools such as neural networks, which are opaque processes that do not provide interpretations of findings, statistical exploration clarifies data mining's output. Statistics are particularly useful when data resist interpretation by other methods.
Data Surveyor is a data mining tool for the discovery of strategically relevant information from large databases. It searches for relationships, trends, and patterns, using highly efficient techniques to test many potential relationships for their statistical significance and allowing many hundreds of variables to be taken into account. The most interesting relationships are presented to the user.
The data is loaded from the company's DBMS into Data Surveyor's own data server. This server uses advanced parallel database techniques that enable databases of hundreds of MBytes to be searched efficiently for interesting relationships. Moreover, use of a separate data server spares the company's DBMS.
The intended user of Data Surveyor is an expert in the application domain, e.g. an actuary, data analyst, or database marketer. Their domain knowledge is considered vital during the mining process. In the design of Data Surveyor, emphasis is on a user-friendly interface that allows the expert to specify his or her interest or target and input his or her knowledge of the application domain.
The result of the mining process consists of simple, yet powerful rules, that can easily be interpreted by the domain expert. Data Surveyor automatically produces a graphical representation of the mining process, and an overview of all actions and results during the mining process. Moreover, it generates a description of the database and a management summary containing a compact overview of the most important results. The user can easily add his or her own comments to this documentation.
Data Surveyor is a CWI product (http://www.cwi.nl/inedx.html).
Applications of DATA MINING
Insurance companies and banks use data mining for risk analysis. An insurance company searches in its own insurants and claims databases for relationships between personal characteristics and claim behavior. The company is especially interested in the characteristics of insurants with a highly deviating claim behavior. With data mining, these so-called risk-profiles can be discovered. The company can use this information to adapt its premium policy.
Data mining can also be used to discover the relationship between one's personal characteristics, e.g. age, gender, hometown, and the probability that one will respond to a mailing. Such relationships can be used to select those customers from the mailing database that have the highest probability of responding to a mailing. This allows the company to mail its prospects selectively, thus maximizing the response. In more detail, this works as follows:
Company X sends a mailing (1) to a number of prospects. The response (2) is e.g. 2%. The response is analyzed using data mining techniques (3), discovering differences between the customers that did respond, and those that did not respond. The result consists of database subgroups that have a significantly higher response probability (4), e.g. of all young couples with double incomes, 24% replied to the last mailing. The groups with the highest response-probability are selected as targets for the next mailing (5). Data mining thus increases the response considerably.
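A sketch of steps (3)-(5) on a toy table, using pandas; the column names, segments, and response threshold are invented for illustration:

```python
import pandas as pd

# Invented mailing log: one row per prospect from the last campaign.
log = pd.DataFrame({
    "segment":   ["young couple", "young couple", "young couple",
                  "retiree", "retiree", "student", "student", "student"],
    "responded": [1, 1, 0, 0, 0, 0, 1, 0],
})

# Steps (3)-(4): response rate per subgroup, highest first.
rates = log.groupby("segment")["responded"].mean().sort_values(ascending=False)
print(rates)

# Step (5): target the next mailing at subgroups above a chosen threshold.
targets = rates[rates > 0.10].index.tolist()
print(targets)  # ['young couple', 'student']
```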
Production Quality Control
Data Mining can also be used to determine those combinations of production factors that influence the quality of the end-product. This information allows the process engineers to explain why certain products fail the final test and to increase the quality of the production process.
The above production process consists of steps A, B and C. During each step, machine settings and environmental factors influence the quality of the product. A quality test takes place at the end of the process. With data mining, the company can discover which combinations of machine adjustments and other factors result in faulty products. Using this information, the number of failing products can be decreased.
Step 1: Set Goals
Determine your business objectives. Define your data warehousing strategy. Calculate the investment you can make and the return on investment you need.
Step 2: Evaluate Expertise
Assess your quantitative expertise. Define your relationship with IT and the degree of in-house support you can count on. Determine the level of user skills.
Step 3: Address Technical Issues
Answer technical questions about the data you want to mine, such as file size and data quality. Define any technical constraints.
Step 4: Make It Happen
Know the buy-in and approval process in your organization. Determine your timeframe for implementation. Select an implementation option.
Data mining is often seen as an unstructured collection of methods, or as one or two specific analytic tools, such as neural networks. However, data mining is not a single technique, but an iterative process in which many methods and techniques may be appropriate. And--like data warehousing--data mining requires a systematic approach.
Beginning with a statistically representative sample of the data, you can apply exploratory statistical and visualization techniques, select and transform the most significant predictive variables, model the variables to predict outcomes, and affirm the model's accuracy.
To clarify the data mining process, SAS Institute has mapped out an overall plan for data mining. This step-by-step process is referred to by the acronym SEMMA: sample, explore, modify, model, and assess.
Step 1: Sample
Extract a portion of a large data set big enough to contain the significant information yet small enough to manipulate quickly.
For optimal cost and performance, SAS Institute advocates a sampling strategy, which applies a reliable, statistically representative sample of the full detail data. Mining a representative sample instead of the whole volume drastically reduces the processing time required to get crucial business information.
If general patterns appear in the data as a whole, these will be traceable in a representative sample. If a niche is so tiny that it's not represented in a sample and yet so important that it influences the big picture, it can be discovered using summary methods.
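A minimal pandas sketch of drawing such a sample; the table here is synthetic stand-in data, and the 5% fraction is our own choice for illustration:

```python
import numpy as np
import pandas as pd

# Synthetic stand-in for a large point-of-sale detail table.
rng = np.random.default_rng(42)
full = pd.DataFrame({
    "region": rng.choice(["east", "west", "south"], size=100_000),
    "amount": rng.gamma(2.0, 20.0, size=100_000),
})

# A reproducible 5% random sample to mine in place of the full volume.
sample = full.sample(frac=0.05, random_state=42)

# Stratified variant: preserve each region's share in the sample.
stratified = full.groupby("region", group_keys=False).apply(
    lambda g: g.sample(frac=0.05, random_state=42)
)
print(len(sample), len(stratified))
```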
Step 2: Explore
Search speculatively for unanticipated trends and anomalies so as to gain understanding and ideas.
After sampling your data, the next step is to explore them visually or numerically for inherent trends or groupings. Exploration helps refine the discovery process.
If visual exploration doesn't reveal clear trends, you can explore the data through statistical techniques including factor analysis, correspondence analysis, and clustering. For example, in data mining for a direct mail campaign, clustering might reveal groups of customers with distinct ordering patterns. Knowing these patterns creates opportunities for personalized mailings or promotions.
Step 3: Modify
Create, select, and transform the variables to focus the model construction process.
Based on your discoveries in the exploration phase, you may need to manipulate your data to include information such as the grouping of customers and significant subgroups, or to introduce new variables. You may also need to look for outliers and reduce the number of variables, to narrow them down to the most significant ones.
You may also need to modify data when the "mined" data change. Because data mining is a dynamic, iterative process, you can update data mining methods or models when new information is available.
Step 4: Model
Search automatically for a variable combination that reliably predicts a desired outcome.
Once you prepare your data, you are ready to construct models that explain patterns in the data. Modeling techniques in data mining include neural networks, tree-based models, logistic models, and other statistical models--such as time series analysis and survival analysis.
Each type of model has particular strengths, and is appropriate within specific data mining situations depending on the data. For example, neural networks are good at combining information from many predictors without over-fitting, and therefore work well when many of the predictors are partially redundant.
Step 5: Assess
Evaluate the usefulness and reliability of findings from the data mining process.
The final step in data mining is to assess the model to estimate how well it performs. A common means of assessing a model is to apply it to a portion of data set aside during the sampling stage. If the model is valid, it should work for this reserved sample as well as for the sample used to construct the model.
Similarly, you can test the model against known data. For example, if you know which customers in a file had high retention rates and your model predicts retention, you can check to see whether the model selects these customers accurately. In addition, practical applications of the model, such as partial mailings in a direct mail campaign, help prove its validity.
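A sketch of that holdout check with scikit-learn, using a synthetic stand-in for the customer file; the 30% holdout fraction is our own choice:

```python
from sklearn.datasets import make_classification
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for a customer table (features X, retention label y).
X, y = make_classification(n_samples=500, n_features=8, random_state=0)

# Reserve 30% of the sample up front...
X_fit, X_hold, y_fit, y_hold = train_test_split(
    X, y, test_size=0.3, random_state=0)

model = DecisionTreeClassifier(random_state=0).fit(X_fit, y_fit)

# ...and judge the model only on data it never saw during fitting.
print("holdout accuracy:", accuracy_score(y_hold, model.predict(X_hold)))
```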
Saturday, March 30, 2013
Easter Bunnies: Make Mine Chocolate!
In the months following Easter, local humane societies and rabbit rescues are flooded with rabbits, former Easter gifts whose owners no longer want them. The unlucky ones are dumped outside where predators, cars, illness and injury virtually guarantee an early death.
In 2002, in an attempt to address the problem, the Columbus House Rabbit Society began a campaign to educate the public on the realities of living with a rabbit, and to discourage giving live rabbits as Easter gifts. Using ceramic pins in the form of chocolate bunnies as the symbol, the campaign's goal is to spread the message that rabbits should not be casually acquired and to educate the public about the special needs of these often-fragile creatures.
The pins serve as conversation starters. Comments about the pin provide the wearer the opportunity to share the message with the general public. These informal conversations are supported by a card that is distributed with each pin, and by business cards that can be handed out to interested parties.
Both the pin card and the business card list important facts that should be considered before bringing a rabbit into the home. The goal is to educate the public of the challenges of properly caring for rabbits and to encourage them to purchase chocolate Easter bunnies (or stuffed toy animals) rather than live rabbits.
In 2012, the "Make Mine Chocolate" campaign celebrated its first decade of existence. What began as the dream of three women in Columbus, Ohio, has become an international reality. Each year brings more partners committed to sharing their dream of educating the public about the realities of living with rabbits before a purchase is made.
For more information on Make Mine Chocolate, visit their website or "Like" their Facebook page!
August 7, 2014
What Can Grizzly Bears Tell Us About Diabetes And Obesity?
April Flowers for redOrbit.com - Your Universe Online
Diabetes rates are continuing to rise globally, affecting the lives of millions of people. With such high rates, it seems reasonable to ask if the condition serves any biological purpose. A new study, published in Cell Metabolism, reveals that grizzly bears have a natural state of diabetes, which not only serves a biological purpose but is also reversible. The researchers note that grizzly bears are obese, but not diabetic, in the fall. During hibernation, however, they become diabetic. The condition seemingly cures itself when they wake up in the springtime. The findings demonstrate how much we still have to learn, from observing natural biology and evolutionary experimentation, about how animals cope with conditions that would cause disease in humans.
According to the Wall Street Journal, grizzly bears can take in as much as 58,000 calories a day and weigh over 1,000 pounds. They can lift a heavy tree trunk with one paw, and take on wolves and mountain lions -- and win. But these aren't the qualities that the researchers were concerned with. Ursus arctos horribilis, the grizzly bear, spends its entire life obese, dealing with weight gain in a way that rats could never imitate.
A person has Type 2 diabetes when their cells lose the ability to respond to insulin. The research team, led by Dr. Kevin Corbit of Amgen, Inc., found that in grizzly bears, unlike in humans, insulin levels in the animals’ blood do not change. Instead, the bears' cells that communicate with insulin are able to turn off and on their ability to respond to the hormone. More surprising, when the grizzlies are at their most obese, they are also the most insulin sensitive, or least diabetic. They are able to do this by deactivating a protein called PTEN in fat cells.
"This is in contrast to the common notion that obesity leads to diabetes in humans," Corbit said in a Cell Press statement. Liver and muscle tissue are common places for fat to accumulate in other animals with obesity, but grizzlies store all of the fuel that they need during hibernation in fat tissue, instead.
The researchers say that their findings highlight how complex the relationship between obesity and diabetes really is. "Our results clearly and convincingly add to an emerging paradigm where diabetes and obesity—in contrast to the prevailing notion that the two always go hand-in-hand—may exist naturally on opposite ends of the metabolic spectrum," explained Dr. Corbit. "While care must be taken in extrapolating preclinical findings to the care of particular patients, we believe that these and other data do support a more comprehensive and perhaps holistic approach to caring for patients with diabetes and/or obesity," he added.
According to Corbit, the cellular mechanisms that lead to obesity in certain people could also be protecting them from diabetes, and the mechanisms that lead to diabetes in other people might protect them from becoming obese. For example, humans with low PTEN levels are exquisitely insulin sensitive, even if they are obese.
"Moving forward, this more sophisticated understanding of the relationship between diabetes and obesity should enable researchers not only to develop therapies targeting these mechanisms, but also to identify the appropriate patients to whom these therapies should be targeted."
Corbit intends to continue his research over the next two years, exploring how the bears are able to maintain this balance between obesity and diabetes by sequencing the bears' genome.
"That would really accelerate the discovery research for bears," he concluded.
Ground and Balloon-Borne Observations of Sprites and Jets
First Color Image of a Sprite Ever Recorded (Image Copyright University of Alaska, Reproduced with Permission).
What is a Sprite?
There are so many good sprites pages out there, it would be a shame to reproduce them. Here are some good links.
(a.) The possibility that visible light emissions can occur in the mesosphere and ionosphere above thunderstorms has been the subject of speculation and rumor for many years. The development of fast, low-light-level video recording technology has revolutionized this field of study. Exciting new observations have confirmed the existence of such emissions, which have become known as red sprites, blue jets, elves, and anomalous optical events (AOE's). These observations have generated an enormous amount of interest and have raised several important scientific and technical questions, including the following: (1) How are the emissions excited? In particular, what is the nature of the electromagnetic fields involved in the excitation process? (2) How much dc electric current, if any, do sprites, jets, elves, and AOE's deliver to the ionosphere? (3) What produces the differences between the three types of event? (4) Are the reported atmospheric gamma ray bursts associated with sprites? And if so, are the causative energetic electrons locally accelerated, or are they precipitated trapped radiation belt particles? (5) What is the effect of electric fields radiated by lightning strokes and sprites on the ionosphere? A range of suggested effects includes optical emissions distinctly different from sprites, heating, density enhancements, electron acceleration, and gamma ray emission. (6) What is the role of sprites, jets, elves, and AOE's in the excitation of the global circuit? (7) Which of the rapidly proliferating set of models of these events are correct? This proposal requests support for a set of balloon and airborne experiments that can contribute to answering these questions. The proposed balloon payload will carry three-axis, broadband electric field and magnetic field detectors with sufficient dynamic range and bandwidth to resolve the expected sprite excitation field and to distinguish between ac and dc excitation mechanisms. A gamma ray spectrometer consisting of a scintillation counter and a pulse height analyzer will also be flown. These payloads will be flown during the summer of 1998, hopefully in conjunction with the continuing activities of other investigators.
(b.) New Task
(c.) In year 1, we will design and construct three balloon payloads with electric field, magnetic field, X-ray counter, and photometer instruments. In year 2, we will complete the payloads and fly them four times apiece from sites in or near western Iowa and Missouri, in conjunction with ground-based observations. In year 3, we will analyze and publish the results.
(d.) NASA is presently supporting an active program of airborne observation of sprites and related phenomena. This program will extend and complement those investigations. It will contribute to understanding of the gamma ray emissions of terrestrial origin observed by the BATSE instrument on the Compton Observatory. By contributing to our understanding of troposphere-ionosphere coupling, it will complement and extend aspects of the UARS and TIMED missions.
Copyright 1999, 2004, University of Houston
Dubspot instructor and course designer Matt Shadetek returns with another episode of Secret Knowledge, a Logic Pro video tutorial series full of production tips, techniques, and advice for Logic users. In this installment, Shadetek explores the concept of polyrhythms using Logic’s ES2 virtual analog synthesizer.
In this tutorial we’re going to do something a little more experimental than we usually do and explore the concept of polyrhythms. I thought this would be a good topic since a lot of people throw this term around without completely understanding the meaning.
A polyrhythm is not just any rhythm with a lot going on in it, for example a busy percussion part. A polyrhythm is when we have multiple rhythms existing simultaneously. An example of this would be if a DJ took two songs, one in 6/8 time signature and one in 4/4, and layered them over each other. Overtly polyrhythmic music is not something we find much of in electronic music, or in Western music in general, so to the untrained ear it can seem quite complex. One interesting way to introduce polyrhythms is to use them on ‘soft’ rhythmic elements, like changes in timbre or pitch. In this example I set up a one-note acid-like synth line over a four-on-the-floor beat and then create two patterns. One pattern contains a pitch bend and is five beats long; the other contains a filter tweak and is three beats long. These rhythms loop on their own cycles and create interesting shifting patterns. The 4-beat drum pattern and the 5-beat pitch bend coincide every 20 beats (or 5 bars in our drum/acid time signature of 4/4), and the drum pattern and the 3-beat filter tweak coincide every 12 beats (or 3 bars). The entire piece repeats every 15 bars, with the first beat of each pattern lining up and restarting. To find the length of the repetition cycle in a polyrhythmic system, take the least common multiple of the cycle lengths; because 3 and 5 share no common factor, that is simply their product here, so the 3 and 5 bar cycles create a 15 bar pattern.
Using these kinds of simple looping pieces we can create what we call emergent complexity, where simple rules give rise to complex results. What I like about this approach is that although at times the changes may seem unexpected or even random, they are being created by a system which can be understood and controlled. – Matt Shadetek
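As a quick check on that cycle-length rule before you commit to a pattern, Python 3.9's math.lcm computes the repetition cycle directly; the helper name below is ours:

```python
from math import lcm  # Python 3.9+

def cycle_length(*pattern_lengths: int) -> int:
    """Bars (or beats) until all looping patterns line up on beat one again."""
    return lcm(*pattern_lengths)

print(cycle_length(3, 5))  # 15 -- the ES2 example above
print(cycle_length(4, 6))  # 12, not 24: the cycles share a factor of 2
```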
Matt Shadetek is a DJ, producer and teacher based in Brooklyn, New York. He runs the Dutty Artz label with DJ /Rupture and will be releasing his second solo album The Empire Never Ended on March 26th 2013. Hear his music at mattshadetek.com
For further exploration of Logic check out Dubspot’s six-level Logic Pro Producer program, designed by Matt Shadetek. In this course (whether at our online school or our physical school in New York City), students will learn to create a four-track EP, starting with a set of musical sketches and developing them over the course of six levels, refining their craft as they advance.
Master Logic with our complete program of courses culminating in a four-track EP ready for release. In addition to achieving a complete overview of the composition process in Logic you’ll also earn the Dubspot Producer’s Certificate in Logic Pro. After completing this program, you will leave with a new EP, a remix entered in an active remix contest, and a scored commercial to widen your scope.
- Logic Pro Level 1: Shake Hands with Logic
- Logic Pro Level 2: Completing Your First Track
- Logic Pro Level 3: Mixing Essentials
- Logic Pro Level 4: Sound Design & Instrumentation
- Logic Pro Level 5: Advanced Composition & Production
- Logic Pro Level 6: Taking Your EP Global
Music Production with Logic Pro classes just started; sign-up today!:
January 30th in NYC – Wednesdays & Fridays, 6:15pm–9:00pm
The week of April 21st – DUBSPOT ONLINE
This is a background article. See Psychological aspects of trafficking in children.
Trafficking of children is a form of trafficking in human beings and is one of the practices associated with the worst forms of child labour by the International Programme on the Elimination of Child Labour (IPEC). Child trafficking is a crime under international law and under the national legislation of many countries. It typically involves:
- the separation of children from their family - "by force, coercion, trickery – including the administration of drugs – family and other complicity, or by much gentler persuasion, misinformation, or through ignorance about what really awaits them at their destination" - and
- their relocation within the same country or across borders for purposes such as forced labour, prostitution, pornography, organ removal or use as child soldiers.
- Convention concerning the Prohibition and Immediate Action for the Elimination of the Worst Forms of Child Labour or Worst Forms of Child Labour Convention (ILO, no. 182, 1999)
- Protocol to Prevent, Suppress and Punish Trafficking in Persons, Especially Women and Children, supplementing the United Nations Convention against Transnational Organized Crime (UN General Assembly, 2000)
Under both of the above-mentioned instruments, any person of less than eighteen years of age is considered to be a child.
United States Law
United States Federal law criminalizes sex trafficking of children under Title 18 U.S.C. 1591 and Title 18 U.S.C. 2421-2423. Section 1591, a civil rights statute, makes it illegal to "recruit, entice, harbor, transport, provide or obtain by any means a person" knowing that either the person will be compelled through "force, fraud or coercion" to submit to a sex act, or that the person is under 18 years of age and will likewise be forced to commit a sex act. Sections 2421-2423, part of the 2003 PROTECT Act, criminalize transport of minors for sex acts. They also criminalize travelling to engage in illicit sex in another country. This provision of the law empowered federal prosecutors to address Americans' exploitation of minors in foreign countries.
- ↑ International Labour Organisation, Unbearable to the Human Heart: Child Trafficking and Action to Eliminate It, Geneva: ILO/IPEC, 2002, 97 pages. Quote from p. 20. N.B. The word "trafficking" is spelled "traficking" on the title page. Retrieved 2006-12-03.
- Asia's sex trade is 'slavery' - BBC
- Europe warned over trafficking of children - BBC
- Fears of rising child sex trade – The Guardian
- Tracking Africa's child trafficking - BBC
- 5,000 child sex slaves in UK - The Independent
- Streets of despair - The Observer
- Trafficking in Minors - United Nations Interregional Crime and Justice Research Institute
- Child Laundering How the Intercountry Adoption System Legitimizes and Incentivizes the Practices of Buying, Trafficking, Kidnapping, and Stealing Children, David M. Smolin.
This page uses Creative Commons Licensed content from Wikipedia.
Tuesday, January 6, 2009
These chapters show how to fulfill an entire project life cycle, illustrated with a case study. We describe each task with the roles, responsibilities, and the techniques to apply it. We group tasks by activity and group activities by iterative phases. Within each phase, we explain the depth of modeling required before moving on to the next. We separate each of the five phases of the approach into its own chapter. The general structure of each task begins with the roles and their participation, followed by relevant technique-oriented discussion. Additionally, we provide a case study component to illustrate each task.
Chapter 3: Chart Solution
The Chart Solution chapter describes how to assess the business problem, set the project scope using object modeling, identify requirements, and plan the project strategy. We present how to frame the business situation with modeling to lay groundwork for the solution.
Chapter 4: Structure Solution
The Structure Solution chapter describes how to refine object models to understand the work required for building a solution. We suggest ways to segment the solution to produce subsystems in parallel and build from the core outward. We describe how to detail the project plan for iterative and incremental development.
Chapter 5: Build Solution
The Build Solution chapter describes how to finalize object models for application creation through iteration. We discuss steps for building the various solution deliverables, including the application, the database, the documentation, and the training materials.
Chapter 6: Integrate Solution
The Integrate Solution chapter describes how to merge multiple subsystems developed in parallel with the current released version of the solution.
Chapter 7: Implement Solution
The Implement Solution chapter describes how to deploy the solution for version release and obtain confirmation of a completed solution.
The Conclusion summarizes the topics presented in the book and suggests ways to adopt the approach in your enterprise.
The Deliverables Glossary provides descriptions of all the deliverables identified in the approach.
Monday, September 29, 2008
The quiz tool from the beginning of this chapter is actually an entire system of programs designed to work together, in this case five different programs. Each quiz is stored in two separate files, which are automatically generated by the programs. Figure 6.11 is a flow diagram that illustrates how the various programs fit together.
The QuizMachine.php program is the entry point into the system for both the test administrator and the person who will be taking the quiz. It essentially consists of three forms that allow access to the other parts of the program. To ensure a minimal level of security, all other programs in the system require some sort of password access. The QuizMachine program primarily serves as a gateway to the other parts of the system. If the user has administrative access (determined by a password), he or she can select an exam and call the editQuiz.php page. This page loads up the actual master file of the quiz (if it already exists) or sets up a prototype quiz, and places the quiz data in a Web page as a simple editor. The editQuiz program calls the writeQuiz.php program, which takes the results of the editQuiz form, and writes it to a master test file as well as an HTML page.
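The chapter does not reproduce writeQuiz.php's source at this point, but its two-file output step can be sketched in outline. The following is a minimal sketch under stated assumptions: the field names (quizName, question[], answer[]), the alternating question/answer layout of the master file, and the .quiz and .html extensions are all illustrative choices, not the book's actual code.

<?php
// Hypothetical sketch of writeQuiz.php's output step (see note above).
// Assumes the editQuiz form posts parallel arrays question[] and answer[].
$quizName  = $_POST["quizName"];
$questions = $_POST["question"];
$answers   = $_POST["answer"];

// 1. Write the master file: question and answer on alternating lines.
$master = "";
foreach ($questions as $i => $q) {
    $master .= $q . "\n" . $answers[$i] . "\n";
}
file_put_contents($quizName . ".quiz", $master);

// 2. Generate a static HTML page that presents the same questions and
//    submits the responses to the grading program.
$html  = "<html><body><h1>$quizName</h1>\n";
$html .= "<form action='gradeQuiz.php' method='post'>\n";
$html .= "<input type='hidden' name='quizName' value='$quizName' />\n";
$html .= "<p>Name: <input type='text' name='userName' /></p>\n";
foreach ($questions as $i => $q) {
    $html .= "<p>$q<br /><input type='text' name='answer$i' /></p>\n";
}
$html .= "<input type='submit' value='Submit' />\n</form></body></html>\n";
file_put_contents($quizName . ".html", $html);
?>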
If the user wants to take a quiz, the system moves to the takeQuiz.php page, which checks the user's password and presents the quiz if authorized. When the user indicates he or she is finished, the gradeQuiz.php program grades the quiz and stores the result in a text file.
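The grading step can be sketched the same way. This sketch assumes the master-file layout used above (question and answer on alternating lines), form fields named answer0, answer1, and so on, plus a userName field, and a tab-separated log format; none of these details are given in the text.

<?php
// Hypothetical sketch of the grading logic in gradeQuiz.php.
$quizName = $_POST["quizName"];

// Read the master file; answers sit on the odd-numbered lines.
$lines = file($quizName . ".quiz", FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES);
$total = intdiv(count($lines), 2);

$correct = 0;
for ($i = 0; $i < $total; $i++) {
    $expected = $lines[2 * $i + 1];
    $given = trim($_POST["answer$i"] ?? "");
    if ($given === $expected) {
        $correct++;
    }
}

// Append one tab-separated line per attempt to the quiz's log file.
$entry = sprintf("%s\t%s\t%d/%d\n",
    date("Y-m-d H:i"), $_POST["userName"] ?? "anonymous", $correct, $total);
file_put_contents($quizName . ".log", $entry, FILE_APPEND);

print "<p>You scored $correct out of $total.</p>\n";
?>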
Finally, the administrator can examine the log files resulting from any of the quizzes by indicating a quiz from the QuizMachine page. The showLog.php program will display the appropriate log file.
The heart of the quiz system is the QuizMachine.php page. This is the only page that the user will enter directly. All the other parts of the system will be called from this page or from one of the pages it calls. The purpose of this page is to act as a control panel. It consists of three parts, corresponding to the three primary jobs that can be done with this system: writing or editing quizzes, taking quizzes, and analyzing the results of quizzes. In each of these cases, the user will have a particular quiz in mind, so the control panel automatically provides a list of appropriate files in each segment. Also, each of these tasks requires a password, to provide at least some level of security.
The main part of the QuizMachine.php program simply sets up the opening HTML and calls a series of functions, which will actually do all the real work.
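Since the text says only that the main code prints the opening HTML and delegates to a series of functions, a hypothetical outline might look like the following. The helper function showForm, the .quiz extension, and the exact form markup are assumptions made for illustration, not the book's actual code.

<?php
// Hypothetical outline of QuizMachine.php's control panel.
print "<html><head><title>Quiz Machine</title></head><body>\n";
print "<h1>Quiz Machine</h1>\n";

// One form per task; each posts a quiz name and a password to the
// page that actually does the work.
showForm("Edit a quiz", "editQuiz.php");
showForm("Take a quiz", "takeQuiz.php");
showForm("View quiz results", "showLog.php");

print "</body></html>\n";

function showForm($title, $target) {
    print "<form action='$target' method='post'>\n<h2>$title</h2>\n";
    print "<select name='quizName'>\n";
    // List whichever master quiz files exist in the current directory.
    foreach (glob("*.quiz") as $file) {
        $name = basename($file, ".quiz");
        print "<option value='$name'>$name</option>\n";
    }
    print "</select>\nPassword: <input type='password' name='pwd' />\n";
    print "<input type='submit' value='Go' />\n</form>\n";
}
?>

Factoring the three nearly identical forms into one parameterized helper keeps the control panel short: each form simply posts a quiz name and a password to the page that does the real work.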
On her MySpace page, actress Robin Bain describes herself as a budding writer and director of short films, including Paper Doll and Run Fatboy Run, that have screened at film festivals around the world. Her latest project, Robot Chicken, aired last night on the Cartoon Network and featured the actress as a live-action character.
Saturday, September 27, 2008
The main river of Pakistan, the Indus, together with its tributaries, has been intertwined with the region's economy, geography, and identity since time immemorial. The Indus has shaped life directly for over 4,000 years by way of annual flooding, and since the second half of the 19th century it has fed the largest contiguous irrigation system in the world.
SEHR, volume 5, issue 1: Contested Polities
Updated 27 February 1996
Writing on the contested histories of Ayodhya and Somnath, Peter Van der Veer refers to the inimical relations between Hindus and Muslims as "one of the most important master-narratives of colonial orientalism in India." Van der Veer argues that the manner in which Hindu-Muslim relations were construed in British historiography was crucial to legitimating the validity of British rule as one of enlightened disinterestedness. It is also clear, however, that the "facts" marshalled by British commentators are central to the description of Hindu-Muslim relations. The destruction of Hindu temples by Muslim rulers and the forcible conversion of Hindus at sword-point represent one type of fact that underwrites a master-narrative of violent change, as recorded, for example, in James Mill's work, The History of British India (1817). In opposition is another kind of fact, also documented in British historiography in works such as Thomas W. Arnold's The Preaching of Islam (1896): the patronage of Hindu shrines by Muslim saints and Muslim tomb-worship by Hindus; the sharing of titles and names, as well as certain social practices and customs, by both Hindus and Muslims, and so forth. If both "facts" are equally accurate descriptions of Hindu-Muslim relations, then tolerance and intolerance can be defined as the respective absence or presence of violence and forcible change from outside.
But by the last quarter of the nineteenth century, when the first British census reports were commissioned to establish the causes of Muslim expansion in India, the official sociology of India no longer depicted religious change through outside intervention as the central issue: Islamic conversion was represented as having less to do with either the coercive or the charismatic character of Islam than with economic necessity or social ostracism from Hinduism. What models of tolerance or intolerance are then suggested by this reading of Muslim expansion, in which conversion to Islam is the result neither of a gradual mystical insight that incorporates aspects of Hindu worship nor a violent rupture of existing beliefs, but rather of the exclusionary character of caste Hinduism? Indeed, in this description the agency of intolerant action would seem to have shifted from Islam to Hinduism, which, by casting out its members, enables Islam to offer the possibilities for social betterment to these excluded groups. But the meanings of exclusion and incorporation are as volatile as that of tolerance and intolerance, for the conversion to Islam precipitated by lower-caste status in Hinduism also gives rise to movements of reconversion to Hinduism. These attempt to reverse the principle of exclusion and challenge the appeal of rival religious systems by re-absorbing those who had earlier been cast off or had not been fully assimilated. Yet, belying their incorporative philosophy and reformist tendencies, reconversion movements such as the Arya Samaj exhibit a morbid defensiveness that finds expression in group solidarity and an enforced collective identity. Indeed, the point of the reformist discourse of these movements is that, in redefining the boundaries of the Hindu community, reconversion is incorporated into the discourse of Hindu nationalism.
Posing similar questions about incorporation and exclusion (albeit in a different context) in a provocative essay on Tamil art, Vidya Dehejia points to the appropriation of Vaishnavite features in Shaivite art as possible evidence of sectarian tension. Dehejia contests the assumption that wherever there is rivalry or contention between two religious communities, one should expect to find not appropriation or borrowing of features of the rival religious system, but its total destruction, and that the establishment of a separate identity normally requires negation of the competing system. Dehejia's argument raises fundamental questions about whether tolerance can be assumed to be equivalent to syncretism, and intolerance to absolutism and exclusivity. How, for example, do we respond to reconversion movements which attempt to include, in a broadly reformist way, groups that had been formerly excluded? Does the gesture of reclamation and incorporation efface the earlier one of marginalization? Suppose the groups being courted do not want to return to the fold -- are they denying their true origins and willfully affiliating themselves with a community with which they are united not by belief but by social circumstance?
With this discussion of the complex meanings of appropriation and exclusion in mind, my intent in this essay is to locate the point in British discourse where the "facts" of forcible and violent change, as presented in works like Mill's History of British India (1817), and the "facts" of peaceful assimilation to Islam, as presented in Arnold's The Preaching of Islam (1896), are no longer clearly demarcated as self-evident examples of Islamic intolerance in the first instance and tolerance in the second -- where, in short, incorporation and exclusion resist being unproblematically located in ideas of religious syncretism and religious absolutism, respectively.
The point in British discourse at which narratives of tolerance and intolerance acquire a shifting center of reference -- at times it is Islam and at other times Hinduism -- is determined, as I suggested above, by the writing of India's official sociology in the third quarter of the nineteenth century, when the first census reports, settlement reports, and district gazetteers were commissioned. The enumeration of India's populations marks the period when the boundaries of religious communities are redrawn in relation to empirically derived explanations about the expansion of the Muslim population in nineteenth-century India. While Muslim groups are identified as separate from the Hindu community and therefore also a separate political entity, a large majority of Muslims are also recognized for the first time in British discourse, as Peter Hardy notes, as originally having been Hindus who had converted for reasons other than direct force or spiritual illumination. For the first time, the discourse of nationalism is processed through a discourse of origins. For instance, questions confronting British administrators included deciding how the numbers of Muslims in India were to be categorized -- as descendants of those who originally came from Arab lands and were subsequently indigenized (i.e., "hereditary" Muslims), or as descendants of native Hindus who had converted. Were Hindus who had converted to Islam to be considered less Muslim (i.e., more Hindu) than other Muslims of Arab-descent groups?
I want to approach these questions of origins through that institutional instrument most directly focused on the determination of identity -- census-taking -- and, in particular, census-taking as an established feature of British colonial administration. The census reports on India issued between 1872 and 1901 made the first systematic attempt to categorize the religious identities of Indian peoples (including converts) according to criteria of racial origin, customs, and laws. In the course of such categorization, various oppositions were constructed out of the material of enumeration -- oppositions such as foreign/indigenous, national/local, pure/hybrid, lineal descent (or hereditary)/convert. In assessing the strength of the Muslim population in India between 1872 and 1901, the census threw the bulk of its weight on the side of the second term in each of these oppositions to draw a picture of the Indian Muslim, not as an autonomous "other" but as a version of the Hindu at an earlier historical moment before the advent of Arab, Afghan, and Turkish groups -- and before possibly forcible conversion of Hindus to Islam. The "contrived" assimilation of Muslim Indians to Hindu India is not simply a nineteenth-century Indian nationalist strategy of fighting colonial oppression, as it is portrayed by recent Hindu revivalists, but a feature of late nineteenth-century British discourse itself. Indeed, the British assertion of the local origins of Indian Muslims challenged the separatist impulse among Muslims as a claim that was belied by the "facts" accumulated by British ethnographic data, census reports, and commissioned surveys -- facts which placed the Muslims closer in racial features, behavior, habits, and customs to other native inhabitants of India, including Hindus. The "Muslim" represented in the British census reports is marked by an ambivalent identity -- neither truly Muslim nor truly Hindu, riven by social class differences that, in their turn, displace the possibilities of a unity of religious belief or identity. The critical issue in the historiography of Hindu-Muslim relations is not so much that British policy conceived of Hindus and Muslims as separate communities, but that the theory of common origins -- from which other social and religious identities were willingly or forcibly adopted -- produced a crippling situation that disallowed either total unity of Hindus and Muslims or total division between them.
I should point out that I am not reading these reports to suggest a continuity between British colonial discourse and the rhetoric of modern communalism, or to argue that the roots of contemporary communal problems lie in late nineteenth-century techniques of information gathering. Rather, what I would like to argue is that acts of classification, such as the census, establish the categories of knowledge about the racial and religious composition of the people it enumerates that enter the domain of memory for the colonized -- a fluid and shadowy realm of meaning that acquires a suggestive power and resonance in the construction of future relationships between India's ethnic and religious groups. The shift from elite to mass politics in Indian nationalism gave a new importance to the masses of Muslim converts who were denied an origin outside India. As a descriptive catalog of India's ethnic composition, the British census establishes fixities of racial and religious categories, even as it insinuates the possibilities of overlapping and common origins rather than real historical difference. The function of the census to introduce categories of difference and then deny them must be seen to have a complex effect on the structure of perceptions in Hindu-Muslim relations, if not on those relations themselves. My interest lies in examining the mediating role of British ethnography in the production of a field of remembered identities, both Hindu and Muslim, that feeds into the discourses of religious nationalism.
The first systematic assessment of India's Muslim population was made in the British census reports of 1872. H. Beverley, the superintendent of the census, made the potentially explosive assertion that the large presence of Muslims in Bengal was due not so much to the introduction of foreign blood into the country as to the conversion of the former inhabitants, for whom a rigid system of caste discipline made Hinduism intolerable. Many Bengali Muslims took exception to this conclusion, and Khondkar Fazli Rabi wrote The Origins of the Musalmans of Bengal (1895) to prove that the truth was indeed quite the opposite: he was at pains to point out, for example, that many leading Muslim families could trace their origins to foreign roots -- families such as the Saiads, who refrained from intermarriage with families of more "dubious" ancestry. Piqued by what he took to be Beverley's social condescension, Rabi wrote,
It can safely, and without any fear of contradiction, be asserted that the ancestors of the present Musalmans of this country were certainly those Musalmans who came here from foreign parts during the rule of the former sovereigns, and that the present generation of Musalmans are the offspring of that dominant race who remained masters of the land for 562 years.
Other Muslim historians, however, were less extreme in their claims, and though committed to the theory of the foreign origin of Indian Muslims, they reluctantly admitted that local converts dominated the total. At the same time the figures quoted were generally conservative. Abu A. Ghaznavi, who was asked by the British to respond to Fazli Rabi's claim, calculated that roughly twenty percent of the Muslims living in Bengal were lineal descendants of foreign settlers, fifty percent had a mixture of foreign blood, and the remaining thirty percent, he claimed, were probably descended from Hindus and other converts.
The 1901 Census, however, dismissed these figures as too disproportionate and placed the percentage of converts from Hinduism much higher. The idea of the original "Hindu-ness" of Muslim inhabitants extended to the argument that the early Muslim invaders in Bengal were not even Arabs but Pathans. Yet the fact recorded in the census is that the Muslims who called themselves "Shekh" outnumbered those who professed to be Pathans in a ratio of fifty to one, and furthermore, that many of these "Shekhs" had only recently begun to claim this name and were formerly known as Ashraf in south Bengal and as Nasya in north Bengal. Two different commentaries are thus juxtaposed in a contained narrative of conflicting memories: the descriptive record of Muslim self-definitions as Arab-descended is framed by a commentary that negates those self-perceptions and posits an alternative explanation of Muslim origins in the fractured space of Hindu communities.
Explanation became even more racialized through the ethnographic contributions of Herbert Risley, who was brought into the census-taking operations at a crucial stage of description. The ethnographic scale of measurement, or "Cephalic index," that he devised conclusively "proved" the Hindu origins of Indian Muslims, despite the latter's claims to foreign ancestry that their names and titles presumably asserted. By taking measurements of the proportion of the breadth of the head to its length, as well as of the breadth of the nose to its length, Risley placed Muslims closer in racial features to the lower castes of Chandals and Pods than to Semitic peoples. Here is a clear instance of how the discourse of class, blending indistinguishably with the discourse of race, appropriated the category of religion as uniting both discourses; it became possible to state that "although the followers of the Koran form the largest proportion of the inhabitants [of Rangpur district], there is little reason to suppose that many of them are intruders. They seem in general, from their countenances, to be descendants of the original inhabitants." The split between "original" Muslims, defined as those who comprised the higher classes, and local Muslim converts from Hinduism, who were consistently identified with the lower classes, did two things: first, it accentuated differences not so much between Hindus and Muslims but between Muslims and Muslims on the point of foreign or native descent, with Muslims converted from Hinduism being regarded more ambiguously as Muslim and more relationally placed vis-a-vis Hindus; secondly, the dichotomy of foreign versus locally descended Muslims replaced a unity of Muslim identity -- which the profession of Islam presumably implied -- with categories of differences based on social class. Both factors figure importantly in the reconversion movements led by Hindu groups as early as the nineteenth century, and which continue to function today in certain regions of India (especially in those areas where mass conversions have taken place, such as in Meenakshipuram).
The reconversion movements (which often include rituals of purification, or shuddhi) are a relatively unique phenomenon in that they seek to reverse the total excommunication from Hinduism that apostasy to any other religion generally demanded. Furthermore, reconversion is premised on the activation of remembered identities long since lost or abandoned. The readmission into Hinduism of converts to Islam required, often as a test, that they display types of behavior that no Muslim would ever be identified with, such as eating pork. (Many Muslims who had converted from Hinduism had still not adopted the taboo against pork-eating. For such converts, it was thus possible to exhibit those behaviors that made them acceptable to caste Hindus.) Most important, the emphasis in reconversion rituals on practices, habits, and usages as markers of religious identity bears a strong resemblance to a similar emphasis in the British census reports that gave greater weight to customs and practices over the self-declarations of religious identity as a means of classifying religious groups in India. If apostates could be reclaimed back into the fold despite their earlier rejection of Hinduism in favor of another religion, such reclamation was made possible by a political discourse of religious identity whereby a Hindu remained a Hindu by virtue of retaining certain social customs. What the census report does, in other words, is establish a set of scientifically derived representations that enables the Hindu community to claim Muslims among its own by virtue of criteria drawn from racial categories suggesting cultural continuity.
In its preoccupation with the question of Muslim origins, the census revealed its own bias toward downplaying the foreign element in the composition of Indian Muslims, only one sixth of whom were placed as Arab- or Pathan-descended Muslims. The rest were listed as local converts from Hinduism who still preserved habits and usages from the religion they had supposedly repudiated. The census report consistently accentuates the "Hindu-ness" of Muslim converts in proportion to minimizing the self-definitions of those whom it sought to enumerate, the census-taker often assuming the prerogative of listing them under the group to which he thought they belonged, even though extensive inquiry was adopted as a means of eliciting more detailed information from Muslims and other religious groups on where they placed themselves. In almost every category -- age, religion, caste, marital status, and so forth -- questions generated a bewildering range of responses that often led the census-taker to make the determination himself. An infuriating inexactitude of response emerges as one of the central frustrations of census-taking (for age, many inhabitants of Indian villages were known to respond with a blithe "bis-chalis" [twenty or forty]), and such vagueness encouraged the census-takers to transfer the authority for self-classification from the subject to themselves. On the matter of religious classification, the census-takers had clear instructions that they were to accept each person's statement about their religious affiliation, no matter how vague or imprecise it might be, but in practice this rule was systematically overlooked. Instead, the census-takers took it upon themselves to decide whether an individual was Hindu or Muslim, which in practical terms often meant determining whether a Muslim could trace his roots to foreign ancestors or whether he was descended from local converts from Hinduism. Often such determinations were based on a combination of the customs, usages, and practices followed by the individual and his racial features, rather than personal declaration of religious identity or religious belief, the latter being routinely effaced in the final classification. (This was also true, incidentally, with regard to Christian converts who were often judged as Hindus, not as Christians, in cases they filed in the British Indian courts for restitution of rights they had forfeited under Hindu law, the basis for the decision being the degree to which their behavior, habits, and manners conformed to those of Hindus, such as preserving the joint family system.)
Classification was made even more problematic in the case of Muslims who appeared to follow Hindu customs to some extent and had half-Hindu names, yet called themselves by an upper-class Muslim title. Some were formerly high-caste Hindus who, on conversion to Islam, were allowed to assume upper-class Muslim titles such as "Shekh" even though they continued to adhere in part to Hindu customs and, in a few rare cases, even to intermarry with those Muslims who were of foreign descent. By contrast, the lower castes, who were often converts, had to be content with the title "Nau-Muslim," or "New Muslim." It was only in the case of converts who came from functional groups that Hindu names and titles were still retained, such as Kali Shekh, Kalachand Shekh, etc. As a Muslim convert of low social position rose in station, he was likely to assume more high-sounding designations that combined both Hindu and Muslim names. For instance, almost as a crude sort of parody, the gradual upgrading of a low-caste convert like Meher Chand is seen in the progressive combination of names and titles that he acquired through conversion to Islam: Meher Ullah, Meheruddin, Meheruddin Muhammad, Munshi Muhammed Meheruddin, Munshi Muhammed Meheruddin Ahmad, and finally Maulavi Munshi Muhammed Meheruddin Ahmad.
Perhaps the most damaging assessment made by the census report, at least in terms of the repercussions that it had on future constructions of Muslims as "outsiders," was that while the majority of Indian Muslims were identified as local converts or descendants of converts from Hinduism, the conclusion established by the British census-takers was that Indian Muslims saw themselves as "other"-defined, their point of reference for personal identity lying outside India in a quasi pan-Islamic unity:
All Mohammedans look on Arabic as their sacred language and they interlard their conversation with any Persian or Arabic words they can pick up from their Mullahs or from their religious books. The grammar remains Bengali and it is only some of the vocables which are changed. The better educated converts often deliberately abandon their native language. The Garpeda Bhunjas of Balasore furnish an illustration of this. They are descended from a Brahman and the females are still so far imbued with Hindu prejudices that they abstain from beef. But they have completely given up the use of Oriya and now speak Hindustani even in the family circle.
In this accent on the practice of "difference" by those who were indeed drawn away from Hinduism at one point in their history, the British census gave Hindu nationalists of a later generation a language of foreignness and otherness to describe Muslims who had no proper claims to a unique foreign identity, and who yet were said to have made such claims in a gesture of denial of their (in many cases) Hindu origins.
In other words, the British census produced a complex construction of Muslim religious and ethnic identity playing on both the assertion and the denial of difference. The very function of the census was to show, through enumeration, that the assertion of "difference" -- the idea of Muslims as outsiders -- was propagated by Indian Muslims themselves. Once this was established as a specifically Muslim claim, that declaration of difference was promptly denied by the categories adopted in British census-taking, which sought to demonstrate that the bulk of the Muslim population came from local converts. While Indian Muslims were anxious to link their ancestry to Arab roots, British commentators seemed bent on proving their mixed heritage -- that the majority of Muslims in India at the time of the first major census in 1872 were indeed converts and not descendants of Arab settlers and conquerors.
One of the most fascinating sections of the 1901 census is an appendix that lists individual cases of conversion in various districts of east and north Bengal. In this listing, the cause of the vast majority of conversions is established as neither proselytism nor doctrinal conviction, but romance. The elaborate narrative sketch that follows each case reads like a romantic novel in its stress on Hindus converting to Islam primarily to marry men or women with whom they had fallen in love. As in any romance novel worth its name, each story is carefully annotated with the names of principal characters, central episodes, conflict, and climactic resolution.
The comparison with the romantic novel stops at the point of narrative structure, however. Though romance is presented as the main motive for conversion, the play of human desires and feelings has no place here. The enumerated instances of mixed marriages, in which marital union between Hindus and Muslims is achieved only with the conversion of one partner to the religion of the other, become examples of exile, excommunication, and existential isolation. The potential function of interracial love and marriage to offer a model of cultural syncretism -- as a counterpoint to the homogeneity of a politically determined religious culture -- is less emphatic in the British account than the irreversible loss of community, which results from romantic attachments that impel individuals to convert in a final act of desperation. Though twenty-one of the forty reported cases list romance as the motive for conversion, the inner details of each case suggest that caste is the main player, not love: it is not the impulse of love that drives Hindus to embrace the religion of their Muslim spouses, but the fact that they lose membership in their former community as a result of their romantic attachments. In other words, the real cause for conversion still continues to be a condition that is built into Hinduism -- its ability to render caste members as outcastes through mere association with non-Hindus, mainly by having romantic intrigues with them, but also by sharing food with them or coming under their care during illness (the two other primary causes of conversion to Islam listed in the appendix). Only six of the forty cases listed suggest that doctrinal inclination -- the inherent appeal of Islam -- had anything to do with conversion. These instances add up to show the extent to which a vast number of Hindus became Muslims not because they chose to or in order to attain the objects of their desire, or even for reasons of practical expediency, but because the door had been permanently shut on them by Hinduism.
The presentation of case histories is itself marked by an unusual reflexivity and self-consciousness. One of its most conspicuous features is the scrupulous annotation of the religious and racial authorship of each section of the returns as Hindu, Muslim, or British. It is to be expected that the sections written by Muslim informants would stress that Hindus converted to Islam voluntarily and as a result of the deep impression made by Koranic teachings, communicated not just by preachers but by enthusiastic lovers as well. It is also to be expected that the descriptions written by Hindus would minimize the role of individual conviction and attribute conversion to force; by extension they would emphasize seduction rather than love. The persistent divisions between Hindus and Muslims on the interpretation of conversion -- Muslims claiming that the bulk of Islamic conversions were voluntary and Hindus claiming that they were forced -- are reproduced in the interpretations of love and marriage between members of the two communities. Whereas the Muslim authors stress the conventionally romantic aspects of Hindu-Muslim liaisons and the attractions of Islamic faith as intrinsic to romantic love, the Hindu authors of the report's various sections dismiss love as a motive existing independently of proselytizing zeal and signifying emotional and spiritual needs.
The sections attributed to British authorship articulate a position that appears to work in a space between the two positions on romance and conversion taken by Muslim and Hindu authors, representing the two extremes of volition and coercion. But the British position turns strategically on a stance of measured uncertainty and ambivalence. Is "falling in love" the result of free will or manipulated desire? That central question is raised but never answered, because the event of "falling in love" is shifted from the category of effect (i.e., the result of conviction, sexual passion, emotional or spiritual needs, desire to establish autonomy in human relationships, etc.) to that of cause (i.e., of excommunication, civil death, and eventually conversion). The focus again returns to the caste features of Hinduism and to the notion of an undivided community before the disruptions wrought by mixed marriages and the threats to a stable religious identity that such marriages pose. This movement parallels the shift in emphasis, so apparent from the very mode of the census-taking operations, from the individual to the community, the latter increasingly subsuming the subjectivity of converts and reducing their actions to the helpless reactions of those who have been shut out from a secure and proper place in Hindu society. The conversion of Hindus to Islam in the context of romance is represented in these reports as a result of excommunication from Hinduism, not as willed change or the exercise of unfettered choice whose unfortunate outcome happens to be exile and excommunication.
What are we to make of the pattern discernible in the census reports that suggests a keen British interest in proving the Hindu origins of the Muslim population of India? In considering several hypotheses, we may find it profitable to examine Peter Hardy's provocative argument that deciding whether Muslims were either foreign settlers or local converts was vital to resolving the British debate about whether to confer the right of self-governance on Indians. If, as Hardy speculates, Muslims in British India were descendants of foreign settlers with a culture foreign to India, the British could claim justification for not treating the population they ruled as a united people capable of sustaining self-governing institutions of a kind that required, for their successful functioning, a modicum of shared moral values. I have shown, however, that the census reports seemed not to favor the theory that Muslims were foreigners but rather that they were converts from Hinduism. This suggests that a different type of reasoning may be behind the conclusions put forth by the census reports. Hardy does indeed consider the possibility that the presence of a large number of voluntary converts among Muslims could have suggested to the British political establishment an inherent instability about the Hindu community and innate fissures within its membership, giving grounds for the British suspicion that India could not be dealt with as a homogeneous political community.
I would like to advance two slightly different arguments, however. First, the emphasis on class differences between so-called hereditary Muslims and Muslim converts is part of a well-documented tendency in British commentaries to explain Indian society in terms of the caste system. Hindus converting to Islam presumably repudiated not merely a religion or world-view but caste itself. The British, however, interpreted the converts' imperfect assimilation into Muslim society and the pursuit of titles and rank by low-caste converts not as a genuine desire for upward social mobility, as more recent Muslim historians suggest, but as evidence of the continuing influence of Hindu social ideas and the perpetuation of an invisible caste system even in the new Islamic order. Indeed, the dislike of educated Muslims for the theory that most of the local converts in east and north Bengal were from the lower castes is doggedly read by the British as reflecting a persistent caste mentality. That is to say, if Indian Muslims wanted to be recognized as descendants of foreign settlers, they must have been motivated largely by a desire to conceal their low-class Hindu origins. Edward Gait, the census commissioner of the 1901 report, even remarked:
The Moghals are converts, just as much as are the Chandals [a low-caste tribe of Bengal]. It is only a question of time and place. The Christian religion prides itself as much on converts from one race as on those from another, and except for the influence of Hindu ideas, it is not clear why Muslims should not do so too.
We can infer from this sort of analysis that the British determination to prove the local origins of Indian Muslims assumes that any future political scheme for India would have to consider -- even to the point of reproducing to some extent -- the systems of social stratification by which Indian society had come to be defined in colonial discourse.
The second argument I would make, however, might seem to contradict the first. At the same time that the census reports showed a tacit recognition (and even acceptance) of social stratification, there was also an eagerness to show the forces of change that had been set in motion in "traditional" Hindu society, to the point that outcastes like the Kochs of Bengal were converting to Islam because they had a "disposition to change." That the conversions revealed the existence of a volatile and dynamic society, constantly in flux, appeared to confirm some of the positive consequences of outside intervention, be it by the Mughals or the British. Thus the almost inordinate British preoccupation with proving that the large majority of Muslims in Bengal were originally Hindus ran parallel to the British zeal in demonstrating the validity of catalyzing India into social and cultural change.
But the effect of this dual, contradictory move was to further complicate the ambivalent identity of Indian Muslims, who are represented as having both rejected (either voluntarily or involuntarily) the religion they once professed (even several generations removed) and retained aspects of it in their social orientation. There are, therefore, two kinds of often conflicting memories that inhabit the Hindu past: one, the memory of having once been an undivided community that had been violently torn asunder by foreign invasions, depredations, and cultural violence, of which forcible conversion is the most radical and divisive; and two, the memory of betrayal, repudiation, and willful reaffiliation to another community that the Muslim self-definition as "foreign-descended" appeared to suggest to Hindus. In both cases the Indian Muslim could not readily be identified as either outsider or insider. The sense of betrayal is further accentuated by the mythologies surrounding forcible conversion which were often propagated -- or so claimed the British census reports -- by Muslims themselves. One typical story recorded in the census report tells of a time when the Muslim population in Bengal was still scattered and it was customary for each Muslim dweller to hang an earthen-pot [badana] from his thatched roof as a sign of his religious affiliation. The census report recounts a story about a learned maulvi who, after a few years' absence, went to a Hindu village to visit a disciple dwelling there. Unable to locate the latter's earthen-pot, he was told on inquiry that his Muslim disciple had renounced Islam and joined a tribal group. The maulvi on his return to the city reported this incident to the nawab, who in a fit of rage ordered his troops to surround the village and compel every person there to become Muslim. As part of Muslim folklore, this extravagant story is narrated at one level as an example of Muslim assimilationist zeal and dogmatic pride. But when retold in the context of the census report, it has the effect of mythologizing the increase of the Muslim population in Bengal and removing history and ideology from the construction of a hybrid Indian identity.
In increasingly alarming ways, Hindu revivalists have sought to reinscribe that history and ideology through the reconfigurative instrument of memory and through rituals like reconversion, which in many respects functions as the handmaiden of memory. The whipped-up hysteria surrounding forcible conversion as one of Muslim India's most bitter legacies is not just expressive of Hindu antagonism to Muslims and to the history of violent rule in the past that the Muslim presence connotes. The hysteria is also part of a well-developed, concerted effort to remind Muslims of their original identity as Hindus, inasmuch as it is, at the same time, a sinister reminder to them that Muslim claims to "difference" and "otherness" are falsely founded and therefore untenable. (The BJP's insistence on eliminating separate personal laws for Muslims, concerning such things as marriage, divorce, and inheritance rights, and on developing a common civil code by which Hindus and Muslims would be governed alike derives, I think, from this reclamation of Muslims as Hindus.) Within this extreme logic, if there is anything worse than marauding Arab and Afghan invaders plundering Hindu temples and destroying Hindu religious life and culture, it is the fact that those who were once Hindus and subsequently converted (even if only because they occupied a low or outcaste position in Hindu society) now dare to deny their "true" heritage and make claims to a separate religious (and also political) identity.
What makes the intolerance of Hindus invisible, especially to Hindus themselves, is a rhetorical strategy that can be seen in the British census reports and which Hindu nationalists have subsequently adapted for their own purposes: contrasting the fluid, mercurial status of Muslims -- they are either foreigners or converts, but never presented as having a direct, unmediated relationship to India -- with the fixed, essentialized status of Hindus as the original, real inhabitants. Though the census introduces the category of "Animists" to suggest a pre-Aryan presence in India, the incorporation of the culture of animists to that of the early Aryans is presented as a process outside conversion and religious expansion. While Aryan incorporation of animist features in such things as Aryan stone-images is unchallenged as an example of syncretic adaptation, a similar process of incorporation-as-assimilation (for example, Hindu stone pillars used as steps in mosques) is considered a defilement of Hindu culture by Muslim conquerors. The fact that the former is considered an example of tolerance and the latter of intolerance has a great deal to do with the distinctions drawn between an originary culture and a culture described as derivative and foreign. While, as I suggested above, the census account of the conversion of lower-caste Hindus to Islam would seem to have shifted the agency of intolerance from Islam to Hinduism, the simultaneous representation of Hinduism as the "original" religion of India removes Hinduism from a history of expansion and religious conversion as active as that of, say, Islam. To conceptualize Hindu-Muslim relations as a relationship of native to convert (as the British census does) or native to foreigner is to introduce notions of incorporation and exclusion that become ideologically charged in the struggle to affirm origins. Sadly, history vanishes, leaving only distorted memory in its place.
The BJP/VHP forces claim that the destruction of the Babri Masjid on December 6, 1992, has now made the temple-mosque controversy a non-issue. Nothing could be further from the truth. But that is not because the demolition of the mosque had been preceded by viable alternative ways of resolving the crisis. If there was a strange commingling of modernist and mythic elements in the political symbolism of the Ram movement, its very oddness would have seemed to make it open to swift dismantling. But the reaction against Hindu revivalism has not been particularly successful in enabling a higher level of discourse to develop, for the counterreply either asserts an equally positivist claim (i.e., the Babri Masjid was built on an empty plot of land, Islamic settlements pre-date Hindu presence in Ayodhya, Islam forbids the construction of a mosque over a "pagan" temple, and so forth), or it takes the form of postmodern skepticism against all truth claims (i.e., there was no temple, there was no Ram, there was no Mughal "invasion" or forcible conversions or iconoclasm, for the history of Ayodhya, like all given histories, is subject to doubt and can never be known).
In either case religious belief, Hindu or Islamic, remains unplaced and unaccounted for. No matter what the evidence might be for or against the existence of a Ram temple before the Babri Masjid came to be built, the weight of the evidence has not seemed to affect the authoritativeness with which belief in Ram is accepted by Hindu devotees. Letters to the editor in various national dailies during the period of the Ayodhya crisis suggest that the immediate priority of building the temple had receded into the background ("Ram lives in the heart, not in temples" is the line one encounters most often), but not so the firmness and solidity of belief in Ram. One writer told the editor of the Madras-based The Hindu, "I am prepared to accept the declaration of historians that Ram probably never existed, but that will not stop my believing in him nonetheless." If this can be taken to be a typical response, then the very debate over Ayodhya -- the historicity of Ram, the presence of a mosque on the site of a Hindu temple, and the instances of iconoclasm that accompanied Islamic conversions -- is narrowly concentrated on the verification of facts that in reality have little or nothing to do with the actual problem. That problem is how modern Indian secularism can accommodate and absorb the reality of religion and the power of religious conviction experienced by believers, while at the same time protect the rights of those who believe differently.
Ashis Nandy's contention that Indian secularism has exhausted itself and failed to offer a potent alternative to the rising tide of violence in Indian politics and religion has been construed by some of his critics as a reactionary, anti-secularist argument. What I take Nandy to mean, however, is that Indian secularism has taken the form of the very thing it opposes in principle -- religious intolerance -- and allowed for further divisions between religious ideology and everyday practices of religious belief. If, as Nandy contends, one of the trends in recent South Asian history is the splitting of religion into faith and ideology -- faith defined as a way of life which is "operationally plural and nonmonolithic" and ideology as organized religion which is identifiable with a set body of texts -- the modern Indian state has chosen to define its secular character more in reaction against religious ideology than in relation to religious belief. As an example of how this tendency translates into policy, the regulation of excesses of religious ideology that might threaten the national interest has become an acquired function of the modern state, which is authorized to act as the ultimate arbiter for religious disputes. The reality of individual belief cannot be dealt with as an autonomous reality by the machinery of state, because the secular nation recognizes only the social component of religion -- its hierarchic structures and organizational features. Hence, the state cannot engage with the individual; it can respond only to the material and symbolic orderings of religion as a social institution.
Are the possibilities of religious communication thus foreclosed in the modern secular state, especially if part of the imported baggage of the state is an ingrained skepticism toward personal conviction? The urgent challenge to Hindu revivalism made by secular historians reveals the latter's own difficulties in dealing with religion as a heterogeneous belief system irreducible to mere ideology. If nationalism can be defined as the total set of representational practices that establish the grounds of nationality, then terms like "cultural nationalism" or "religious nationalism" already assume a seamless unity of aspirations, goals, and agendas, a selection and filtering that irons out the contradictions embedded in the processual construction of national identity from the fragments of religious, racial, cultural, and other forms of self-identification. Peter Van der Veer cautions us against this totalizing approach and urges that "we should take religious discourse and practice as constitutive of changing social identities, rather than treating them as ideological smoke screens that hide the real clash of material interests and social classes." However forcefully allegories of the nation, constituting the history of modern secularism, might draw attention to the teleology of its own formation (and by this I refer specifically to the triumphalist rhetoric of rights and citizenship on the model of liberal principle), the narratives produced in the crucial space of negotiation between national and religious identity yield the most visible light on the strains and stresses in community self-identification, especially when community or individual self-perceptions conflict with the definitions accorded them by the nation-state.
Perhaps, as David Krieger suggests in a recent essay, if ideology and faith as polarized terms are replaced by a notion of "cultural metanarratives" at work in non-monolithic, pluralistic societies, it would be easier to conceptualize -- and revitalize -- possibilities for the attainment of a pragmatics of discourse where the meaning rather than the validity of truth claims is foregrounded -- in other words, a discourse that presses to the very limits the contestations of religious ideology, to the point where it can be broken down to illuminate areas of personal belief. In Krieger's conceptualization of the problem of communication, every form of knowing can be construed at a level of discourse higher than argumentation, or what he calls the level of a discourse of limits. Such a discourse accepts the possibility of unknowability and preserves an agonistic concept of "truth" in a pluralistic context, "where discontinuity upon the level of limit-discourse is an inescapable fact." An instance of such discontinuity is the shading of ideology into faith, which is effected by what Krieger calls a "methodological conversion." Krieger draws heavily on religious conversion as a metaphor for a theory of knowledge to suggest the conceptual means by which the gaps between different cultural metanarratives might be bridged. The cognitive rejection of one narrative through the conative acceptance of another is a conceptual analogue to the displacement of religious ideology by faith. "Such a conception is necessary," Krieger writes, "to deal with the problem of how global thinking -- the general validity of knowledge, the universality of norms and a more than merely local solidarity with fellow humans and with nature -- is possible in a radically pluralistic world and a postmodern context."
Strictly at the level of argumentation, an impasse can be overcome only when the ideological premises of the parties to a dispute happen to be similar. If they are not, even the same sets of facts, including those which are entirely non-controversial, can yield totally different conclusions, as is all too apparent in the Ayodhya debate where Hindu and Muslim activists derived completely contrary conclusions from virtually the same evidence. That these facts have a different meaning within the different paradigms involved in the dispute -- mythological and historical -- is further complicated by the absence of a metalanguage to negotiate the conflicting paradigms. The only form of negotiation that is possible is one that entails a transition, or a conversion, from one paradigm to another. Indeed, the process of transition between worldviews emerges, in contexts of pluralism, as the only credible form of negotiation. If discourse beyond the level of argumentation is to materialize, it cannot be grounded in a unitary worldview or religion, but rather in the ability to move between worldviews.
If a higher level of discourse is to be made possible, certain pragmatic conditions for communication would necessarily have to be in place. For one thing, the clash of metanarratives cannot be resolved in terms of the pursuit of knowledge, which is how it had been approached in the Ayodhya debate right up until the time of the mosque's destruction. "True" knowledge is conferred an authority that is belied by the resulting intransigence of both parties in the dispute. The search for knowledge is unavoidably bound up with a struggle for social and political power. For Muslim believers and secular Indians alike, the general fear of losing power to Hindu revivalists had certainly raised the stakes for "proving" the nonexistence of a Hindu temple at the mosque site, to the point that unraveling the truth about Ayodhya had become tantamount to a struggle for hegemonic dominance.
At the end of a provocative and learned essay on perceptions of Islamic conversions by vastly different groups in South Asia -- medieval historians (both Hindu and Muslim), European commentators, and modern scholars -- Peter Hardy speculates:
whether the hypotheses of modern commentators and scholars are themselves essays in conversion, albeit not wholly conscious or deliberate ones: the conversion of agents of the East India Company or of the Crown to particular conceptions of their interests and their duties in India; the conversion of South Asian Muslims to particular conceptions of their future relationships with each other and with non-Muslims; or, to look into an area of inquiry not here entered, namely that of Hindus' interpretations of conversion to Islam, the conversion of Hindus to particular conceptions of their future relationships with Muslims.
The power of conversion as an epistemological concept is that it reclaims religious belief from the realm of intuitive (non-rational) action to the realm of conscious knowing and relational activity. What I hope will emerge in future discussions, even if tentatively, is an examination of that unexplored area pointed to by Hardy -- not just Hindu interpretations of Islamic conversion, but more importantly, the reorientation (or conversion) of Hindus to ways of relating with the Muslim community in India. Conceived in these relational terms, conversion is defined not as a renunciation of an aspect of oneself (as it is in the personal or confessional narrative form), but as an intersubjective, transitional, and transactional mode of negotiation between two otherwise irreconcilable worldviews.
1. Peter Van der Veer, "Ayodhya and Somnath: Eternal Shrines, Contested Histories," Social Research 59.1 (Spring 1992): 96.
2. Vidya Dehejia, "Shaivite and Vaishnavite Art: Pointers to Sectarian Tensions?" Unpublished paper.
3. On this point, Peter Van der Veer's anthropological fieldwork in Surat offers illuminating insights: he argues, for instance, that the discourse of tolerance and communal harmony is related to the eclipse of the themes of Hindu participation and the influence of Hinduism from the debate about Sufi ritual. See Van der Veer, Religious Nationalism: Hindus and Muslims in India (Berkeley: U of California P, 1994) 33-43.
4. Peter Hardy, "Modern European and Muslim Explanations of Conversion to Islam in South Asia: A Preliminary Survey of the Literature," Conversion to Islam, ed. Nehemia Levtzion (New York: Holmes, 1979).
5. The question of what produced changes in the strength of any religion was settled by reference to three causes: the reproductive power of a religion's adherents, migration, and conversion. By the 1890s Muslims had grown twice as rapidly as Hindus, and the census asks the question: "How far is this due to the conversion of Hindus and how far to the greater fecundity of Muslims?" (E. Gait, The Lower Provinces of Bengal and Their Feudatories, Census of India 1901, 6.1, Report [Calcutta, 1902] 156. Henceforth abbr. Census of India, 1901.)
6. H. Beverley, Report of the Census of Bengal, 1872 (Calcutta, 1872), pars. 348-354. E.g.: "The real explanation of the immense preponderance of the Musalman religious element in this portion of the delta is to be found in the conversion to Islam of the numerous low castes which occupied it. . . . If further proof were wanted of the position that the Musalmans of the Bengal delta owe their origin to conversion rather than to the introduction of foreign blood, it seems to be afforded in the close resemblance between them and their fellow-countrymen who were still from the low castes of Hindus. That both are originally of the same race seems sufficiently clear, not merely from their possessing identically the same physique, but from the similarity of the manners and customs which characterise them."
7. Khondkar Fazli Rabi, The Origins of the Musalmans of Bengal (1895; Dacca: Soc. for Pakistan Studies, 1970) 43. See also Rafiuddin Ahmed, The Bengal Muslims 1871-1906: A Quest for Identity (Delhi: Oxford UP, 1981) for a complex exploration of the construction of Muslim identity. Ahmed argues that a dominant feature of the nineteenth-century campaigns of Islamization in Bengal was the attempted rejection of virtually all that was Bengali in the life of a Muslim as something "incompatible with the ideas and principles of Islam"(106).
8. Census of India, 1901 166.
9. Census of India, 1901 166. Rafiuddin Ahmed maintains that the Muslim community's claims to family names and alien origins, by way of removing the stigma of their local descent, "were helped by certain government measures like census classification"(Ahmed 184). While it is true that the census did elicit the names by which Muslims called themselves, this should not be taken to mean that it accepted the foreign origins that those names connoted. On the contrary, it often contested their authenticity, incredulously dismissing, for instance, the number of self-proclaimed "Shekhs" as being more than twenty times the estimated population of "Arabia" at that time.
10. H. H. Risley, The Tribes and Castes of Bengal, 2 vols. (Calcutta: Bengal Secretariat P, 1891) 1: xxii-xxxvii.
11. Census of India, 1901 167. Emphasis added.
12. The ritual of shuddhi contributed greatly to the increase of Hindu-Muslim antagonism. For many Muslims, the infamous pork-test of the Shuddhi Sabha was taken as the ultimate insult to their religious adherence. But there were many communities and individuals who manifested dual types of behavior, and they were targeted as ripe candidates for shuddhi. For instance, the religious status of the Malkanas, in the western part of what was then called the United Provinces, was a confused one. Their culture showed the influence of Islam, even to the point of using Muslim functionaries in some of their ceremonies. At the same time they retained many Hindu practices. However, in the census they tended to declare themselves Muslims. Several unsuccessful attempts to reconvert them had been made between 1907 and 1910, but as J.T.F. Jordens points out, "the decisive break-through came in 1922 when the Hindu Rajputs in their Kshatriya Upkarini Sabha passed a resolution in support of receiving the Malkanas, and permitting them to be reunited with the Rajput Hindu brotherhood after purification"(158). J.T.F. Jordens, "Reconversion to Hinduism, the Shuddhi of the Arya Samaj," Religion in South Asia, ed. Geoffrey Oddie (Delhi: Manohar, 1982).
13. This argument is elaborated elsewhere in my "Coping with (Civil) Death: The Christian Convert's Rights of Passage in Colonial India," After Colonialism: Imperial Histories and Postcolonial Displacements, ed. Gyan Prakash (Princeton: Princeton UP, 1994).
14. Census of India, 1901 166.
15. App. II, "Extracts from District Reports regarding Causes of Conversion to Muhammadism," Census of India, 1901, x-xix.
16. Ahmed 184.
17. Cf. Census of India, 1901, which describes Hinduism as "not so much a form of religious belief as a social organization. . . A man's faith does not greatly matter so long as he recognizes the supremacy of the Brahmans and observes the restrictions of the Hindu caste system"(152). Bernard S. Cohn has powerfully shown how the British system of objectification through census-taking hinged on caste and religion as crucial sociological keys to understanding Indian society and Indian people. Cohn maintains that "ideas about caste -- its origins and functions -- played much the same role in shaping policy in the latter half of the nineteenth century that ideas about the village community and the nature of property played in the first half of the nineteenth century"(243). In the hands of an ethnographer like Herbert Risley, who wielded anthropometric instruments as if they were weapons of war, the caste system fed into theories of racial purity and social hierarchy. See Bernard Cohn, "The Census, Social Structure and Objectification in South Asia," An Anthropologist among the Historians and Other Essays (Delhi: Oxford UP, 1987).
18. Census of India, 1901 172.
19. Census of India, 1901 166.
20. While S.A.A. Rizvi states that Muslim commentators usually give an "altogether exaggerated account of proselytisation," claiming pride in Islam for winning scores of Hindu followers (17), Peter Hardy suggests a more ambivalent reading. Hardy proposes that while it is true that there was a certain amount of exaggerated self-glorification among recorders of Muslim history, Muslim historians were less interested in showing how Islam expanded through force and chance than through the missionary zeal of sufis and pirs. See S.A.A. Rizvi, "Islamic Proselytisation, Seventh to Sixteenth Centuries," Religion in South Asia, and Hardy, "Modern European."
21. "Ayodhya Temple," letter, The Hindu, 10 Dec. 1990.
22. Ashis Nandy, "The Politics of Secularism and the Recovery of Religious Tolerance," Mirrors of Violence, ed. Veena Das (Delhi: Oxford UP, 1990).
23. Simon During, for instance, provides a useful working definition of nationalism as "the battery of discursive and representational practices which define, legitimate, or valorize a specific nation-state or individuals as members of a nation-state." See Simon During, "Literature -- Nationalism's Other? The Case for Revision," Nation and Narration, ed. Homi Bhabha (London: Routledge, 1990) 138.
24. Van der Veer, Religious Nationalism ix.
25. David J. Krieger, "Conversion: On the Possibility of Global Thinking in an Age of Particularism," Journal of the American Academy of Religion 58.2 (1990): 223-243. See also Alan M. Olson, "Postmodernity and Faith," Journal of the American Academy of Religion 58.1 (1990): 37-53. See also Michael C. Banner, The Justification of Science and the Rationality of Religious Belief (Oxford: Oxford UP, 1990) for an illuminating analysis of the problematic opposition between the "rationality" of science and the "irrationality" of religious belief and the circularity of paradigm conflicts to which this gives rise.
26. Krieger 227.
27. Krieger 223.
28. Hardy 99.
29. In their challenge to conceptions of conversion as forcible and radical change, recent advances in the scholarship stress the relational features of conversion, though not always critically or with a view to examining the grounds of relationality. An historiography based on notions of transition rather than change is interested in recovering a pragmatics of intersubjective communication. The work of scholars like Robin Horton, Susan Bayly, Deryck Schreuder, and Geoffrey Oddie has aimed to supplant the contestational features of conversion with a version that stresses its adaptive tendencies. For instance, in her work on Muslims and Christians in South Indian society, Saints, Goddesses and Kings (Cambridge: Cambridge UP, 1989), Susan Bayly challenges as misleading the view that conversion is a radical movement from one religion to another, resulting in the total repudiation of one for the other. Rather, she emphasizes the fluidity of the original religion which allowed for the "conversion" of its individuals to other religions. Bayly, echoing Robin Horton, contends that conversion is not as transgressive or disruptive of the norms of a society as generally maintained. Applying Horton's theories of African conversion to the Indian context, Bayly argues that conversion is not simply a shift of individual conviction or communal affiliation: if Indians have embraced another religion, it is less because they believed in some egalitarian message that the new religion had to offer than because they invested an alien religious being with new divine power associated with the old. See Robin Horton, "African Conversion," Africa: Journal of the International African Institute 41.2 (1971): 85-108; and Deryck Schreuder and Geoffrey Oddie, "What is 'Conversion'? History, Christianity and Religious Change in Colonial Africa and South Asia," Journal of Religious History 15.4 (1989): 496-518.
This is a stick horse competition relay idea
P.E. & Health
Title – Pretend Horse Competition
By – Debbie Haren
Primary Subject – Health / Physical Education
Secondary Subjects – Language Arts
Grade Level – PreK – K
-A broomstick or slender stick for each child. If possible, a stick horse bought from a store.
-cones or boxes
Set up an outside area or the gym as an obstacle course for the “horses”. Have everyone get into two lines and explain to the children that this is called a relay race. The first team to get every one of its members through the course is the winning team! The obstacle course should be boxes or cones set apart, and the children have to go around them in a zigzag pattern and then do it again on the way back!
Large muscle skill. Following directions. Working as a team. Supporting each other through words of encouragement!
E-Mail Debbie Haren !
When choosing a place to put their money, people consider how safe their money will be, how easy it is to access, and whether it will earn more money. Students explore how well different savings places achieve these objectives. Students learn that people who don’t want to carry money with them or keep it at home often choose to put their money in a savings account at a bank or credit union. These financial institutions protect money from theft and other losses. They also pay interest on money deposited. This lesson works well as a follow-up to the ABCs of Saving.
Students encounter the concept of scarcity in their daily tasks but have little comprehension of its meaning or how to deal with it. Scarcity is really about knowing that life is often 'This OR That', not 'This AND That'. This lesson plan for students in grades K-2 and 3-5 introduces the concept of scarcity by illustrating how time is finite and how life involves a series of choices. Specifically, this lesson teaches students about scarcity and choice: scarcity means we all have to make choices, and all choices involve "costs." Not only do you have to make a choice every minute of the day because of scarcity, but, when making a choice, you have to give up something of value. This cost is called opportunity cost. Opportunity cost is defined as the value of the next best thing you would have chosen. It is not the value of all things you could have chosen. Choice gives us 'benefits' and choice gives us 'costs'. A goal in life for each of us is to look at our wants, determine our opportunities, and try to make the best choices by weighing the benefits and costs.
When individuals produce goods or services, they normally trade (exchange) most of them to obtain other more desired goods or services. In doing so, individuals are immediately confronted with the problem of scarcity - as consumers they have many different goods or services to choose from, but limited income (from their own production) available to obtain the goods and services. Scarcity dictates that consumers must choose which goods and services they wish to purchase. When consumers purchase one good or service, they are giving up the chance to purchase another. The best single alternative not chosen is their opportunity cost. Since a consumer choice always involves alternatives, every consumer choice has an opportunity cost.
The following lessons come from the Council for Economic Education's library of publications. Clicking the publication title or image will take you to the Council for Economic Education Store for more detailed information.
Designed primarily for elementary and middle school students, each of the 15 lessons in this guide introduces an economics concept through activities with modeling clay.
9 out of 17 lessons from this publication relate to this EconEdLink lesson.
This publication contains 16 stories that complement the K-2 Student Storybook. Specific to grades K-2 are a variety of activities, including making coins out of salt dough or cookie dough; a song that teaches students about opportunity cost and decisions; and a game in which students learn the importance of savings.
6 out of 18 lessons from this publication relate to this EconEdLink lesson.
This interdisciplinary curriculum guide helps teachers introduce their students to economics using popular children's stories.
2 out of 29 lessons from this publication relate to this EconEdLink lesson.
Predictive model shows shots would also save $7 billion in related health-care costs
WEDNESDAY, Oct. 29 (HealthDay News) -- Vaccinating infants with what's known as the "7 valent pneumococcal conjugate vaccine" (PCV7) could save more than 357,000 lives and $7 billion in costs by preventing bacterial infections during a flu pandemic, according to a predictive model developed by U.S. researchers.
Pneumococcal disease (such as meningitis) and other bacterial infections can follow flu and cause secondary infections that worsen flu symptoms and increase the risk of flu-related death. For example, it's believed that bacterial infections caused almost half of the deaths of young soldiers during the 1918 worldwide flu pandemic, according to background information in an Emory University news release.
"We've known for years that bacterial infections can develop after influenza. Unlike the 1918 flu pandemic, which preceded the antibiotic era, we now have vaccines that can prevent these types of pneumococcal infections. This model shows what a dramatically different outcome we could expect with standard PCV vaccination," Keith P. Klugman, professor of global health at Emory's Rollins School of Public Health, said in the news release.
He and colleagues at Harvard University, i3 Innovus in Medford, Mass., and Wyeth Research created a model to estimate the public health and economic effect current pneumococcal vaccination practices would have on children younger than two years old during a flu pandemic. Since 2000, the Centers for Disease Control and Prevention has recommended PCV vaccinations for infants and children.
The model showed that current PCV vaccination practices lower costs in a typical flu season by $1.4 billion and would cut costs by $7 billion in a pandemic. It also predicted that PCV vaccination would prevent 1.24 million cases of pneumonia and 357,000 pneumococcal-related deaths in a pandemic.
The findings were presented this week at an infectious diseases conference in Washington, D.C.
"Our research shows that routine pneumococcal vaccination is a proactive approach that can greatly reduce the effects of a future flu pandemic," Klugman said. "Countries that have not yet implemented a pneumococcal vaccination program may want to consider this as part of their pandemic flu preparedness."
Klugman is a paid consultant for Wyeth Pharmaceuticals, which funded the study.
The Immunization Action Coalition has more about pneumococcal diseases.
-- Robert Preidt
SOURCE: Emory University, news release, Oct. 28, 2008
The Nazi Expedition to Tibet
Ernst Schäfer, a German hunter and biologist, participated in two expeditions to Tibet, in 1931–1932 and 1934–1936, for sport and zoological research. The Ahnenerbe sponsored him to lead a third expedition (1938–1939) at the official invitation of the Tibetan Government. The visit coincided with renewed Tibetan contacts with Japan. A possible explanation for the invitation is that the Tibetan Government wished to maintain cordial relations with the Japanese and their German allies as a balance against the British and Chinese. Thus, the Tibetan Government welcomed the German expedition at the 1939 New Year (Losar) celebration in Lhasa.
In Fest der weissen Schleier: Eine Forscherfahrt durch Tibet nach Lhasa, der heiligen Stadt des Gottkönigtums (Festival of the White Gauze Scarves: A Research Expedition through Tibet to Lhasa, the Holy City of the God Realm) (1950), Ernst Schäfer described his experiences during the expedition. During the festivities, he reported, the Nechung Oracle warned that although the Germans brought sweet presents and words, Tibet must be careful: Germany’s leader is like a dragon. Tsarong, the pro-Japanese former head of the Tibetan military, tried to soften the prediction. He said that the Regent had heard much more from the Oracle, but he himself was unauthorized to divulge the details. The Regent prays daily for no war between the British and the Germans, since this would have terrible consequences for Tibet as well. Both countries must understand that all good people must pray the same. During the rest of his stay in Lhasa, Schäfer met often with the Regent and had a good rapport.
The Germans were highly interested in establishing friendly relations with Tibet. Their agenda, however, was slightly different from that of the Tibetans. One of the members of the Schäfer expedition was the anthropologist Bruno Beger, who was responsible for racial research. Having worked with H. F. K. Günther on Die nordische Rasse bei den Indogermanen Asiens (The Northern Race among the Indo-Germans of Asia), Beger subscribed to Günther’s theory of a “northern race” in Central Asia and Tibet. In 1937, he had proposed a research project for Eastern Tibet and, with the Schäfer expedition, planned to investigate scientifically the racial characteristics of the Tibetan people. While in Tibet and Sikkim on the way, Beger measured the skulls of three hundred Tibetans and Sikkimese and examined some of their other physical features and bodily marks. He concluded that the Tibetans occupied an intermediary position between the Mongol and European races, with the European racial element showing itself most pronouncedly among the aristocracy.
According to Richard Greve, “Tibetforschung in SS-Ahnenerbe (Tibetan Research in the SS- Ahnenerbe)” published in T. Hauschild (ed.) “Lebenslust und Fremdenfurcht” – Ethnologie im Dritten Reich (“Passion for Life and Xenophobia” – Ethnology in the Third Reich) (1995), Beger recommended that the Tibetans could play an important role after the final victory of the Third Reich. They could serve as an allied race in a pan-Mongol confederation under the aegis of Germany and Japan. Although Beger also recommended further studies to measure all the Tibetans, no further expeditions to Tibet were undertaken.
About the Holodomor National Awareness Tour
Awareness for a better world
The Holodomor National Awareness Tour is a project conceived by the Canada-Ukraine Foundation, and supported by the Government of Canada and the Governments of Ontario and Manitoba, to raise awareness of the Holodomor.
The Holodomor was a genocidal famine carried out in 1932-33 by the Soviet Union led by Joseph Stalin, which resulted in the deaths of millions of Ukrainians.
The Holodomor National Awareness Tour features the Holodomor Mobile Classroom (HMC), a state-of-the-art mobile learning space, to educate and engage students and the public across Canada about the Ukrainian Holodomor. Visitors will:
- Learn about the Holodomor through digital media
- Appreciate how history shapes our world today
- Be inspired by personal stories of Holodomor survivors
- Leave empowered to protect Canadian values of freedom and democracy
the richest soil
the best harvests
the biggest lie
the best kept secret
the Ukrainian genocide
At a school or public community event
If you are interested in having the Holodomor Mobile Classroom (HMC) at your school or next community public event, such as a fair or festival, please contact the Holodomor National Awareness Tour Office. The HMC is a great tool for diversity or tolerance training and can also be booked for your workplace.
Watch Holodomor Mobile Classroom (HMC) – Fly-by video
Please click video above to take a virtual tour of this state of the art mobile classroom.
Take students beyond the textbook
The Holodomor National Awareness Tour is a project of the Canada-Ukraine Foundation, a not-for-profit organization. The cost to have the HMC visit a school is $500 for the day, which covers some of our costs and supports the continuation of this project in the future. We encourage schools to arrange for four classes to visit the HMC on the day of the booking (two in the morning and two in the afternoon, each 60 minutes in length).
Visit the teaching section to learn more about the educational programming offered on the HMC.
Can’t find what you need? Contact the Holodomor National Awareness Tour office.
For Schools & Teachers
Go beyond the textbook
The Holodomor Mobile Classroom (HMC) provides high school students with a powerful look at the Holodomor, the famine genocide committed by the Soviet Union under Joseph Stalin’s leadership in 1932-33. The HMC, a 40 ft. RV, features state-of-the-art learning technology, providing an experiential learning opportunity that brings history to life.
In the Holodomor Mobile Classroom (HMC), a 60-minute lesson is guided by a facilitator who leads students to an understanding of the concept of genocide and its consequences. The lesson emphasizes the role of an autocratic government in its capability to create conditions for a massive man-made Famine.
The Five Ws (Who? What? When? Where? Why?)
Students participate in thought-provoking discussions and in a digital learning activity to answer the Five Ws (Who? What? When? Where? Why?). They will also analyze the Holodomor as a case study of genocide and how it can serve to inform our present-day responses to such crimes against humanity in our local and global communities.
Meeting Curriculum Expectations
The lesson, developed for the intermediate and senior grade levels deals with basic information on the Holodomor. It addresses concepts and themes linked to the new Ontario Ministry of Education’s curriculum expectations in the Canadian and World Studies and Social Sciences and Humanities courses under:
- Continuity and Change
- Historical Significance/Historical Perspective
- Cause and Consequence/Objectives and Results
- Social Justice
The Holodomor Mobile Classroom (HMC) programming complements the following courses:
- Canadian History Since World War I, Grade 10, University Preparation (CHC2D)
- Canadian History Since World War I, Grade 10, College Preparation (CHC2P)
- Civics and Citizenship, Grade 10, Open (CHV2O)
- Politics in Action: Making Change, Grade 11, Open (CPC3O)
- Equity, Diversity, and Social Justice, Grade 11, Workplace Preparation (HSG3M)
- World History Since 1900: Global and Regional Interactions, Open, Grade 11 (CHT3O)
- Adventures in World History, Grade 12, Workplace Preparation (CHM4E)
- World History Since the Fifteenth Century, University Preparation, Grade 12 (CHY4U)
- World History Since the Fifteenth Century, College Preparation, Grade 12 (CHY4C)
- Canadian and International Politics, Grade 12, University Preparation (CPW4U)
- World Issues: A Geographic Analysis, Grade 12, College Preparation (CGW4C)
- Equity and Social Justice: From Theory to Practice, Grade 12, University/College Preparation (HSE4M)
Once the HMC has been booked, the classroom teacher will receive a Pre-Visit Package that includes:
- An introductory letter
- The historical background with a timeline of events leading to the Holodomor
- Behaviour expectations and safety measures on the bus, which need to be reviewed by the classroom teacher prior to the visit
- Parking requirements on school property
- Pre-visit activities and Post-visit suggestions for short-term and long-term assignments for students
On the completion of the lesson, the classroom teacher will be requested to fill out a survey. We encourage teachers to send the HMC five student reflections on their visit to the Bus.
This website offers educators and students information about the Holodomor for teacher and classroom use, such as background information, primary documents, articles, excerpts from literature, eye witness accounts. In addition, lesson plans and suggested assignments for students in grades 6-12 are included. A comprehensive list of resources is available for educators and specific curriculum applications for courses in Provincial curricula.
Book the Bus Tour
Book the Holodomor Mobile Classroom (HMC) to learn more about the Holodomor. To book the HMC at a school or public event, please click the link below; for more information about the Holodomor National Awareness Tour, please contact the Holodomor National Awareness Tour office. Please note that in order to accommodate the HMC, we require a designated parking area of 20ft x 50ft (approximately 1000 square feet), which includes a safety perimeter.
Watch Holodomor Mobile Classroom (HMC) – Fly-by video
Sponsors and Partners
Holodomor National Awareness Tour Project Office
620 Spadina Avenue, 2nd Floor
Tel: (416) 966-9800
How we behave in different groups
At one moment or another in our lives, we have all been members of a group. Some of these groups we don't choose freely, like our families. But there are also other groups we choose to be a part of, like sports clubs, religious clubs or political parties. A family, a village, a political party and a trade union are therefore all social groups.
Peer groups are also a type of social group, one which can be very important for adolescents and young people. Belonging to a group is very significant, as it makes us feel good about ourselves and improves self-esteem. Sometimes, however, peer groups can have a negative impact on us as young people.
In this session we will learn more about social groups, types of social groups, peer pressure, how peer pressure can positively or negatively affect our health and development and how to resist negative peer pressure.
Charcot-Marie-Tooth disease, type 4C (CMT4C) is a demyelinating CMT peripheral sensorimotor polyneuropathy with early-onset scoliosis or kyphoscoliosis.
CMT4C is a relatively frequent form of CMT4: it was first described in Algeria but families have since been reported from Morocco, Mediterranean countries (Italy, Turkey and Greece) and from Germany, the Netherlands and France.
Scoliosis may be the inaugural feature of the disease, with onset usually occurring in childhood. However, delayed walking has been observed as an early sign in some cases. Neuropathy usually manifests during childhood or adolescence and is slowly progressive, leading to severe motor impairment in some cases. Foot deformities are frequent and additional features may include respiratory insufficiency, hypoacusis and deafness.
CMT4C is caused by mutations in the SH3TC2 gene (5q32). In the Gypsy population, the disease has been suggested to arise as a result of a founder mutation (p.R1109X), but at least one other mutation (p.C737_P738delinsX), has been found to underlie CMT4C in this population.
CMT4C is transmitted in an autosomal recessive manner.
Last update: January 2009
- Dr Carmen ESPINÓS
- Pr Francesc PALAU
Illuminating Hidden Pathways Using Projected Light [Future of Light]
Projection lighting technologies can deliver a visual guide and assurance where and when people need it.
Whether we’re aware of them or not, we use lighting cues to help us navigate our lives on a daily basis. An inviting neon sign indicates that a business is open, while lights turned off suggest nobody is home. But what if these signals were to become more specific, assisting us through individual decisions and situations as they arise?
In a trend we are calling Guiding Light, PSFK Labs has noticed that lighting innovations leverage advances in projection technology by mapping directional cues and heads-up warnings onto surfaces in real time. Oftentimes cued to react to different circumstances or deployed on demand, these technologies deliver visual guidance where and when people need it. “The bigger idea of embedded or contextual lighting and projection for wayfinding and placemaking is something that is an area exploding with growth,” explains Brett Renfer, senior technologist at Rockwell Group during his conversation with the PSFK Labs’ team.
One example leveraging projection technology as a directional cue is the PoolLiveAid, a prototype for an augmented reality system that projects lines onto a pool table to help players aim their shots. Developed by researchers at the University of the Algarve in Portugal, the augmented laser guidance system is capable of detecting the position of balls, cue stick and table, creating a tool for teaching aim and shot selection. Using a projector mounted above the table that has been hooked up to a computer, the system is able to continually show the ever-changing path the cue ball is expected to travel based on where the player is aiming, while taking into account the positions of the other balls.
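For readers curious about the geometry underlying such a projection, the core of the path prediction is simple ray tracing with reflections off the table rails. The sketch below is purely illustrative and is not the PoolLiveAid implementation; the table dimensions, aim direction and bounce count are made-up example values, and ball-ball collisions are ignored.

```python
import math

def predict_path(pos, direction, table=(2.54, 1.27), bounces=2):
    """Trace straight segments from `pos` along `direction`, reflecting
    off the table rails up to `bounces` times. Returns the polyline a
    projector would draw. A real system would also test for collisions
    with the other balls it has detected."""
    x, y = pos
    dx, dy = direction
    norm = math.hypot(dx, dy)
    dx, dy = dx / norm, dy / norm
    w, h = table
    points = [(x, y)]
    for _ in range(bounces + 1):
        # Distance along the ray to the vertical and horizontal rails.
        tx = (w - x) / dx if dx > 0 else (x / -dx if dx < 0 else math.inf)
        ty = (h - y) / dy if dy > 0 else (y / -dy if dy < 0 else math.inf)
        t = min(tx, ty)
        x, y = x + t * dx, y + t * dy
        points.append((x, y))
        # Reflect the direction off whichever rail was hit (both at a corner).
        if tx <= ty:
            dx = -dx
        if ty <= tx:
            dy = -dy
    return points

# Cue ball at mid-table, aimed up-table at a shallow angle.
for px, py in predict_path(pos=(0.5, 0.5), direction=(1.0, 0.4)):
    print(f"({px:.2f}, {py:.2f})")
```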
Jaewoo Chung’s Guiding Light is a project from MIT Media Lab that projects wayfinding arrows from a smartphone onto the floor to create an illuminated GPS system indoors. Using a smartphone with a mini-projector and magnetic positioning, Guiding Light projects an arrow on the ground that directs a user to their desired destination. The technology’s Bluetooth badge is equipped with four magnetic sensor arrays, uncovering a user’s location within the magnetic fields of any given building.
While the technology requires no special infrastructure, each building needs to be “magnetically mapped” first. In contrast to existing heads-up displays that push information into the user’s field of view, Guiding Light works on a pull principle, relying entirely on users’ requests and control of information. During our conversation, Usman Haque, director of Haque Design + Research noted, “The logical step beyond using mobile phones as augmented reality interfaces is actually projecting information on to the urban fabric. Floors and steps are obvious, but underutilized, informational projection surfaces as well.”
Guiding Light and PoolAidLive point to the ways that relevant information can be layered onto physical environments through light to guide us in new ways. We may see more mapping of directional cues onto the physical environment at outdoor events like concerts or plays, indicating the location of things like restrooms and food and drink vendors. Or projection mapping could be used during emergency situations which assist with helping people find a way out of their building.
These examples also fall into a larger theme we are calling Enlightened Communication, where designers are exploring the use of light as a substitute for physical boundaries, helping to change the way people perceive their surroundings. These solutions work to demarcate new areas on demand, creating flexible environments which can accommodate different use cases and be redefined according to needs.
The Future of Light series explores light’s potential to improve lives, build communities, and connect people in new and meaningful ways. Brought to you in partnership with Philips Lighting, a full report is available as an iOS and Android app or as a downloadable PDF.
United States District Court for the Western District of Washington
The Judiciary Act of 1789 created the U.S. district courts as trial courts with jurisdiction over admiralty, minor crimes, and suits involving the federal government. Over the course of the nineteenth century, Congress expanded the district courts’ jurisdiction. With the abolition of the circuit trial courts in 1911, the district courts became the sole trial courts of the federal system. They heard all matters arising in their district under the laws of the United States. During national Prohibition, district court dockets came to be dominated by criminal law cases. Originally the courts’ districts mirrored state boundaries. As the population and federal jurisdiction expanded in the nineteenth and early twentieth centuries, Congress authorized multiple districts for some states.
The state of Washington was organized by Congress as a single judicial district in 1890, with one district court judge, and the district was assigned to the Ninth Judicial Circuit. In 1905, Congress divided the state into the Eastern and Western Districts. As the population and judicial business expanded dramatically with the boom of the Pacific Northwest, brought on by the Alaska and Yukon gold rushes, an additional judgeship was created for the Western District in 1909.
United States Circuit Court of Appeals for the Ninth Circuit
Established in 1891, the U.S. circuit courts of appeals were the first federal courts designed exclusively to hear cases on appeal from trial courts. These were courts that settled issues of law; they did not try original cases. Congress established a court of appeals in each of the existing nine regional circuits. The existing circuit judges and one newly authorized judge in each circuit served as the judges of the appellate courts along with district court judges or Supreme Court justices, who could make up the required three-judge panels. The same 1891 act gave the U.S. circuit courts of appeals jurisdiction over the great majority of appeals from the U.S. district courts and the U.S. circuit courts. This appellate function was intended by Congress to reduce the number of cases that could be routinely appealed to the Supreme Court. The 1925 Judiciary Act, also known as the Judges’ Bill, further restricted appeals to the Supreme Court, and this Act, combined with the explosion of litigation brought about by the expansion of federal activity, resulted in great growth of business before the courts of appeals. By the 1920s, each U.S. court of appeals had at least three assigned judges, ending the need for regular service by district judges on court of appeals panels.
The U.S. Circuit Court of Appeals for the Ninth Circuit originally heard appeals from trials in federal courts in California, Oregon, Nevada, Washington, Idaho, and Montana. In 1900, the territories of Alaska and Hawaii were added to the circuit; in 1912, Arizona was added. The Ninth Circuit Court of Appeals had the broadest geographical jurisdiction as it also heard appeals from American possessions across the Pacific and a special extraterritorial court in China.
Supreme Court of the United States
The Supreme Court was the only court named in the Constitution. The Judiciary Act of 1789 first set out the details of the Court’s organization and jurisdiction. Subsequent acts and its practices over time altered the original plans. By the time of the Olmstead case, the Supreme Court was a court of nine justices, including the Chief Justice. The Supreme Court exercised limited original jurisdiction as set out in the Constitution, but it was primarily an appeals court. Moreover, the Court largely controlled what cases it would hear on appeal. The 1891 act establishing the U.S. circuit courts of appeals authorized the justices of the Supreme Court to accept or reject cases brought to them through petitions for writs of certiorari, and the 1925 Judges’ Bill further increased the justices’ discretion in determining which cases to hear by eliminating certain automatic appeals that had previously existed.
In this section, you can find...
For all that the Internet can offer us, it sometimes serves as a platform for promoting hatred and violence. In this section, we cover what online hate means, what Canadian law says about it, and how young people and adults can respond to it while keeping in mind Canada’s position on freedom of expression.
Diversity in Media Toolbox
The Diversity and Media Toolbox is a comprehensive suite of resources that explores issues relating to stereotyping, bias and hate in mainstream media and on the Internet. The program includes professional development tutorials, lesson plans, interactive student modules and background articles.
The primary purpose of the NARMS retail meat surveillance program is to monitor the prevalence of antimicrobial resistance among foodborne bacteria. In 2007, surveillance for antimicrobial resistance included Salmonella and Campylobacter from 9 states that comprise the Foodborne Diseases Active Surveillance Network (FoodNet). In addition to Salmonella and Campylobacter, 3 FoodNet states also collected Enterococcus and Escherichia coli. The results generated by the NARMS retail meat program serve as a reference point for identifying and analyzing trends in antimicrobial resistance among these organisms.
Highlights of the 2007 Report
Due to the low recovery rate of Salmonella from ground beef and pork chops, statistical analyses of resistance trends from these sources should be interpreted with caution.
The percentage of Salmonella isolates susceptible to all antimicrobials by meat type (Table 8) was as follows:
47.5% Chicken Breast
15.3% Ground Turkey
92.3% Ground Beef
44.4% Pork Chops
First-line antimicrobial agents recommended for treating salmonellosis are ciprofloxacin, ceftriaxone and trimethoprim-sulfamethoxazole (IDSA, Practice Guidelines for the Management of Infectious Diarrhea. Clinical Infectious Diseases 2001; 32:331–50). Macrolides and fluoroquinolones are used in the treatment of Campylobacter infections.
- 2.6% (5/190) of Salmonella from ground turkey were resistant to nalidixic acid compared with 8.1% (6/74) in 2002. Nalidixic acid resistance was not present in Salmonella from chicken breast in 2007. Resistance to nalidixic acid corresponds to decreased fluoroquinolone susceptibility; however, fluoroquinolone resistance has never been detected in Salmonella recovered from any retail meat since the program began in 2002. One isolate (0.5%) from ground turkey was resistant to both nalidixic acid and ceftiofur.
- 5.3% (10/190) of Salmonella isolated from ground turkey showed resistance to ceftiofur in 2007, compared with 8.1% (6/74) in 2002.
- There is a highly statistically significant increasing trend in ampicillin-resistant Salmonella isolated from ground turkey, rising from 16.2% (12/74) in 2002 to 42.6% (81/190) in 2007 (see the sketch after this list).
- 16.2% (16/99) of Salmonella isolates from chicken breast were resistant to the third-generation cephalosporin ceftiofur compared with 10% (6/60) in 2002. Resistance to this agent corresponds to decreased susceptibility to ceftriaxone.
- More than 95% of Campylobacter are recovered from chicken breast each year (Table 10).
- There is a consistent and marked difference in resistance patterns among C. jejuni and C. coli isolated from chicken breast. Macrolide resistance is much higher in C. coli than C. jejuni for all years from 2002 to 2007, with 6.3% (9/143) resistance to azithromycin for 2007 compared to 0.6% (2/332) in C. jejuni. Similarly, resistance to ciprofloxacin in C. coli (25.9% [37/143]) is significantly higher than in C. jejuni (17.2% [57/332]).
- Another notable trend is that Campylobacter coli from chicken meat has shown a highly statistically significant (p < 0.0001) increasing trend in resistance to ciprofloxacin from 10% (9/90) in 2002 to 25.9% (37/143) in 2007.
- 48.6% (161/332) of C. jejuni isolates were resistant to tetracycline, up from 38.4% (76/198) in 2002 (p = 0.02).
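As a rough illustration of how such trend claims can be checked, the sketch below runs a chi-square test on the ampicillin counts quoted above (12/74 in 2002 vs. 81/190 in 2007). This is only a simplification of the report's methodology, which tests the trend across all survey years rather than comparing two endpoints.

```python
# Two-proportion check of one reported trend: ampicillin-resistant
# Salmonella from ground turkey, 12/74 (2002) vs 81/190 (2007).
from scipy.stats import chi2_contingency

resistant_2002, total_2002 = 12, 74
resistant_2007, total_2007 = 81, 190

table = [
    [resistant_2002, total_2002 - resistant_2002],
    [resistant_2007, total_2007 - resistant_2007],
]
chi2, p, dof, expected = chi2_contingency(table)
print(f"2002: {resistant_2002/total_2002:.1%}  2007: {resistant_2007/total_2007:.1%}")
print(f"chi2 = {chi2:.2f}, p = {p:.1e}")   # p is well below 0.001
```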
Life on Gliese 667Cc?
The extra-solar planet GJ667Cc (or Gliese 667Cc) has been declared the most Earth-like object known outside of our solar system. It orbits a type of star which is studied at the Institute of Theoretical Astrophysics.
Figure 1. Comparison of the planets that are currently the most promising candidates for potentially habitable worlds. The Earth Similarity Index (ESI) describes how similar an object is to Earth and ranges from zero (no similarity) to one (identical to Earth). The surface temperature, which is the most important factor in the ESI, depends much on the distance to the host star. The zone around a star where the surface temperature allows for the existence of liquid water on the planetary surface is called the habitable zone. Gliese 667Cc is located in the habitable zone of its host star and is currently the most Earth-like planet known (ESI=0.82). Credit: Planetary Habitability Laboratory @UPR Arecibo.
The discovery was announced last November and recently confirmed (see references below). The planet GJ667Cc is even more similar to our Earth than Kepler-22b, which was confirmed as a potentially habitable planet just a few weeks ago. GJ667Cc, the new prime candidate for a habitable world, is only 22 lightyears (or 200 million million kilometres) away, which places it in our direct cosmological neighbourhood.
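The ESI quoted in Figure 1 combines a planet's radius, density, escape velocity and surface temperature into a single similarity score. The sketch below follows the weighted-product form published by the Planetary Habitability Laboratory; the weight exponents are the published ones, but the radius and surface temperature of GJ667Cc are illustrative guesses, since neither has been measured directly.

```python
# Sketch of the Earth Similarity Index (ESI) quoted in Figure 1.
# The weights follow those published by the Planetary Habitability
# Laboratory; the GJ667Cc radius and temperature below are assumptions.
def esi(values, reference, weights):
    n = len(values)
    index = 1.0
    for x, x0, w in zip(values, reference, weights):
        index *= (1 - abs(x - x0) / (x + x0)) ** (w / n)
    return index

M, R = 4.5, 1.9              # minimum mass [M_earth]; assumed radius [R_earth]
density = M / R**3           # bulk density [rho_earth]
v_esc = (M / R) ** 0.5       # escape velocity [v_earth]
T_surf = 300.0               # assumed surface temperature [K]

earth   = (1.0, 1.0, 1.0, 288.0)     # radius, density, v_esc, T [K]
weights = (0.57, 1.07, 0.70, 5.58)   # PHL weight exponents

print(f"ESI ~ {esi((R, density, v_esc, T_surf), earth, weights):.2f}")
# -> ~0.83, close to the quoted 0.82
```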
More and more planets
Until the discovery of the first exoplanets less than 20 years ago, we could only speculate about the possible existence of such planets. By now, more than 700 exoplanets are known, while over 2000 candidates await confirmation, among them many in the so-called habitable zone around their host stars (see Figure 1). The increasing number of known potentially habitable planets suggests that such planets might be frequent in the universe.
A cool red host star
While Kepler-22b is orbiting a sun-like star, GJ667Cc is accompanying a red dwarf star of spectral type M. Stars of this type, also called “M-dwarfs”, are smaller and cooler than our Sun but make up at least 60% of all stars. M-dwarfs are currently investigated here at the Institute of Theoretical Astrophysics. Most stars that host exoplanets are thus expected to be cool M-dwarfs, which makes the case of GJ667Cc particularly interesting.
Figure 2: The planetary system around the red dwarf star Gliese 667C consists of up to three currently known exoplanets of super-Earth type. The planet c is located right in the habitable zone, where liquid water on the planetary surface may exist. For comparison of sizes, the Earth and Mars are shown next to an artist's impression of the planet 667Cc.
Credit: Composite figure based on images from the Planetary Habitability Laboratory @UPR Arecibo.
With an effective temperature of 3400 °C on its surface, the host star GJ667C is much cooler than our Sun, which has a surface temperature of 5500 °C. This red dwarf star therefore emits much less radiation than the Sun, reaching a luminosity of only just over one percent of the solar value.
Not too hot and not too cold
The habitable zone around a star is defined such that the temperatures on the surface of a planet are just right for water to exist in liquid form. Liquid water is considered to be one of the most essential requirements for the formation of life. The Earth is located in the habitable zone around the Sun at a distance of 1 AU (astronomical unit). Owing to the star's much lower energy output, the habitable zone around the red dwarf GJ667C is much closer in, at distances between 0.11 AU and 0.23 AU (see Figure 2). The Earth would be an ice world if it orbited this star instead of our Sun. Fortunately, the planet GJ667Cc is located eight times closer to its star (at 0.12 AU), which puts it comfortably into the habitable zone.
GJ667Cc receives a radiation flux which is about 90% of what we receive from our Sun on Earth. Although most of this radiation is emitted in the infrared (IR), it is most likely enough to allow for liquid water on the planetary surface. The exact surface temperature is nevertheless uncertain and depends on a number of yet unknown factors. The temperature could be a pleasant 30°C if we assume a planetary atmosphere that is similar to the Earth’s. A more massive atmosphere, however, would result in higher temperatures and Venus-like conditions, which are unfavorable for life. Further observations are needed to answer if GJ 667Cc truly supports liquid water and if the conditions on this planet are appropriate for hosting life.
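The roughly 90% figure follows directly from the inverse-square law. A minimal sketch, assuming a luminosity of 1.3% of the Sun's (the article only says "just over one percent"):

```python
# Incident stellar flux relative to Earth: F/F_earth = (L/L_sun) / (d/AU)**2.
L = 0.013    # stellar luminosity [L_sun] (assumed, "just over one percent")
d = 0.12     # orbital distance [AU]

flux_ratio = L / d**2
print(f"Flux on GJ667Cc ~ {flux_ratio:.0%} of Earth's insolation")   # -> ~90%
```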
Close to the star
Due to the short distance to its central star, GJ667Cc orbits this star in only 28 days. One year on this planet is thus only 28 Earth days long. This would make it possible to celebrate your 1000th birthday (which is just 77 years on Earth). The days, however, could be very long. As the planet is so close to its central star, it is very likely that the planet is tidally locked. It would rotate synchronously and show always the same side towards the star – an effect that we know from our Moon. Consequently, there could be eternal day on the hemisphere towards the close-by star and eternal night on the other side, which is facing outer space. The temperature differences between both sides could be large and could affect the global climate.
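The 28-day year is consistent with Kepler's third law. A quick check, assuming a stellar mass of about 0.31 solar masses, which is not stated in the article but is typical for a red dwarf of this type:

```python
# Kepler's third law in solar units: P[yr]^2 = a[AU]^3 / M[M_sun].
import math

a = 0.12                      # semi-major axis [AU]
M = 0.31                      # stellar mass [M_sun] (assumed)
P_years = math.sqrt(a**3 / M)
print(f"Orbital period ~ {P_years * 365.25:.0f} days")   # -> ~27 days
```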
Figure 3: The Sun seen from Earth (left) and an impression of how the red host star could look from the surface of the planet Gliese 667Cc (right). Although the star is much smaller than the Sun, it would appear larger on the sky and cast a faint reddish light on the planet’s surface. The two other stars Gliese 667A and B would be visible on the sky, see upper left corner on the picture to the right.
Credit: S. Wedemeyer-Böhm, University of Oslo (2012)
The short distance lets the star appear much larger on the sky of GJ667Cc than the Sun does on ours. The red dwarf would be seen as a red disk that, in comparison to the Sun, is about 3 times as wide and covers about 10 times the area (see Figure 3). The star casts a faint reddish light on the planet’s surface. Furthermore, GJ667C is part of a triple star system. The distance to the other two stars, Gliese 667A and B, is about 230 AU, which is about six times the distance between Pluto and the Sun and clearly outside the planetary system around Gliese 667C. Nevertheless, these two stars would be prominently visible on the sky. Our Sun would also be seen with the naked eye as a distant star.
A flaring host star
Unfortunately, there is a potential problem with the nearby host star. Many M-dwarf stars – among them GJ667C – are known to emit flares, which are intense bursts of radiation and energetic particles. Flares on the Sun are known to have a direct impact on our Earth and can, for instance, cause problems with satellite and radio communication. Flares on M-dwarfs, however, can be a thousand times stronger than those on our Sun. Such mega-flares can double the brightness of the star in minutes. Life on the surface of GJ667Cc would have to find a solution for this problem, especially since the planet is close to its flaring host star.
More problems for life
Another problem is connected to the presumably strong magnetism of the star. Many red dwarfs may be often covered by starspots (the analogues of sunspots) that could reduce the energy output of the star by as much as 40% for periods that may last months. Together with the fact that the red dwarf star emits almost no ultraviolet light, these varying light conditions could be a potential problem for the formation of life as we know it.
Living on GJ667Cc – a heavy experience
Being on GJ667Cc would certainly be a quite different experience. The mass of the planet GJ667Cc is estimated to be (at least) 4.5 times that of the Earth. Like Kepler-22b, GJ667Cc is a Super-Earth, i.e. a planet that is slightly larger and heavier than our Earth. The size and density are not known yet, which leaves the possibility that GJ667Cc could after all be an uninhabitable gas planet. Only a more compact rocky or ocean planet with a corresponding radius between about 1.7 and 2.2 Earth radii would be favorable for the formation of life.
The higher mass of this planet results in a different gravitational acceleration (that’s what keeps us on the ground) compared to our Earth. In the case that GJ667Cc is a rocky planet, the gravitational acceleration on the surface would be up to 60% higher than what we experience on the surface of Earth. In other words, you would feel (up to) 1.6 times heavier (1.6 g). A person with a weight of 75 kg on Earth would thus weigh 120 kg on the Super-Earth.
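The 1.6 g figure follows from how surface gravity scales with mass and radius. A minimal sketch, assuming the rocky lower-end radius of 1.7 Earth radii quoted above:

```python
# Surface gravity in Earth units: g ~ M / R^2.
M = 4.5    # planetary mass [M_earth] (measured minimum)
R = 1.7    # planetary radius [R_earth] (assumed, rocky lower end)

g_ratio = M / R**2
print(f"Surface gravity ~ {g_ratio:.1f} g")                     # -> ~1.6 g
print(f"A 75 kg person would weigh ~ {75 * g_ratio:.0f} kg")    # article rounds to 120 kg
```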
Furthermore, a heavier planet can keep a more massive atmosphere. Consequently, the atmospheric pressure at the planetary surface is likely to be higher. If GJ667Cc has an atmosphere that scales proportional to the terrestrial one, then the pressure would just be a few times higher. For a more extreme case like a Venus-type atmosphere, the pressure could be several hundred times larger, which corresponds to a water pressure a few kilometres deep down in the ocean on Earth.
Figure 4: Tardigrades or “waterbears” are examples for animals that can exist and develop under extreme conditions here on the Earth. They tolerate extreme temperatures and high radiation doses.
Could there be life?
Although GJ667Cc is located in a habitable zone, the conditions on the planet could be very different from our Earth. Life would be facing some potential challenges, which may include low and varying light conditions, possibly a higher atmospheric pressure, and violent flares. Nature proves to be inventive though. Even on our own planet we find species that show an amazing ability to adapt to extreme conditions. Examples are the so-called tardigrades, which are also known under the charming names “waterbears” and “moss piglets” (see Figure 4). These tiny creatures range in size from just 0.1 mm up to 1.5 mm. They are found in hot springs, ocean sediments, under ice sheets and even on top of the Himalayas. Waterbears tolerate extreme temperatures from just above absolute zero (-273 °C) to about +150 °C. They can survive years without water and 1,000 times more radiation than other animals. These hardened creatures have even been returned alive from studies in low Earth orbit where they were exposed to space conditions. Even if the conditions on GJ667Cc might not be favourable for most terrestrial life forms, it certainly leaves room for the imagination. We can only speculate how fauna and flora – if present at all – would evolve under such different conditions.
Written by researcher Sven Wedemeyer-Böhm.
More details about this discovery are available in the HEC project area of the PHL website
Astrobiology at NASA
Exoplanet app for your iPhone
The HARPS search for southern extra-solar planets XXXI. The M-dwarf sample, submitted to Astronomy & Astrophysics.
A planetary system around the nearby M dwarf GJ 667C with at least one super-Earth in its habitable zone, to appear in Astrophysical Journal Letters.
The HARPS search for southern extra-solar planets XXXV. Super-Earths around the M-dwarf neighbors Gl433 and Gl667C, submitted to Astronomy & Astrophysics
Nestled in Northeast India next to the Brahmaputra River sits Majuli Island, a giant sandbar that happens to be the largest river island on Earth, home to some 150,000 people. It is also the location of the 1,360 acre Molai Forest, one of the most unusual woodlands in the world for the incredible fact that it was planted by a single man. Since 1979, forestry worker Jadav Payeng has dedicated his life to planting trees on the island, creating a forest that has surpassed the scale of New York’s Central Park.
Though home to such a large population, Majuli Island has lost more than half of its land mass to rapidly increasing erosion over the last 100 years. Spurred by the dire situation, Payeng transformed himself into a modern-day Johnny Appleseed and singlehandedly planted thousands upon thousands of plants, including 300 hectares of bamboo.
Payeng’s work has been credited with significantly fortifying the island, while providing a habitat for several endangered animals which have returned to the area; a herd of nearly 100 elephants (which has now given birth to an additional ten), Bengal tigers, and a species of vulture that hasn’t been seen on the island in over 40 years. Gives you more than a little hope for the world, doesn’t it?
Filmmaker William Douglas McMaster recently wrote and directed this beautiful documentary short titled Forest Man from the perspective of Payeng’s friend, photographer Jitu Kalita. The project was funded in part last year through Kickstarter. The video is a bit longer than what we usually see here on Colossal, but completely worth your time. (via Gizmodo)
Recently, activists converged on our nation’s capital, urging President Obama to act on climate change. While there are things the president can do, there are also things we can do here in Washington state to fight climate change. The most obvious way is to tap into renewable energy sources like solar.
While the effects of climate change will be felt across the country, Washington will be hit hard as climate change threatens the stability of our natural resources. With rising temperatures, we’ll see water scarcity that would heavily impact apple and cherry crops. Drier rivers also threaten salmon runs and could lead to the collapse of that fishery. And Washington’s oyster growers would see a big drop in their profits since more carbon in our atmosphere means more carbon in our ocean, making it more acidic and dissolving oyster shells. We’re starting to see many of these effects today, but we can curb some of the most severe consequences by cutting our dependence on fossil fuels and expanding our renewable energy portfolio.
Solar is a way that every homeowner can make a difference, and House Bill 1106 in the state Legislature allows more homeowners and businesses to access solar energy. The Legislature should approve this bill to fight climate change.
Recently I got some gorgeous vintage cabochons, and made a bezel for one that night.
- Cabochon: a flat-backed, undrilled piece for jewelry making, generally made of stone, glass or resin. The ones used in this article are glass.
- Bezel: the frame that surrounds the edge of a cabochon so it can be used as jewelry. Bezels in metalwork are generally a flat wire soldered to a sheet base; the stone is set into the frame and the wire is burnished down around the stone to hold it in place. In beadwork, a bezel can be done using almost any off-loom beading method. In the example, it's done with even count tubular peyote.
The cabochons I used are faceted vintage West German glass. They have a metallic coating on the back to really make them sparkle. If you're looking for vintage glass, some of the best pieces are from Japan and West Germany.
The bezel is made using tubular even count peyote with no decreases. This is one of the easiest methods to make a bezel for oval or round cabochons. Instead of making decreases to snug the bezel around the top and bottom of the stone, you switch bead sizes.
You can't really tell by looking at it, but the blue size 11 seed beads are a little smaller than the size 11 silver lined crystal seed beads. Since seed beads can be different sizes from manufacturer to manufacturer, or even within the same manufacturer based on finishes, the bead counts can vary.
This method works best with at least three different bead sizes: in this case, size 11/o seed beads, size 11/o Delicas, and size 15/o seed beads.
You start these bezels with your largest beads, in this case the size 11 seed beads. Make a loop with an even number of beads that will go around the stone. For the blue beads that was 50 beads around; for the crystal, it was 48. Start working in even count peyote. The number of rounds depends on the thickness of the edge of the stone. Since the edge of this one has a nice taper, I used 4 rounds; since the original loop is 2 rounds (up and down beads), that meant two more rounds.

After you work enough rounds that you're ready to start snugging it down for the cabochon, you switch to the next smaller bead. The back of these has two rounds of the size 11s. I stitched the two rounds, then used the smallest beads for the next two rounds. The last round, I went through once more to reinforce it so it was nice and tight.

Then weave needle and thread through the beadwork to the top of the bezel and set the cabochon into the newly created bezel cup. Because I didn't want the beadwork to obstruct too much of that wonderful cabochon, I only did one round of the size 11s, then two of the size 15s for the top. Reinforce by going through the last round again, then tie off the thread, weave it in, and tie off in a couple more places before breaking off the thread.
After I finished making the bezel, I had to choose what to do with it. Adding fringe to the long edge and sewing it to a small pin back would make a pretty brooch, adding fringe a short edge, and making a bail would be a pretty pendant. After discussing it with my daughter, we decided a bracelet would be best.
So I beaded two more cabochons, then stitched them together, and added chevron style daisy chains and a bead and loop clasp to finish the bracelet.
The cabochons used for this project came from Treefrog Beads. She has a wonderful selection at great prices, including this faceted style in lots of colors, and unfaceted cabochons that are the same size in a lot of colors.
With everything from meadows of seagrass and forests of kelp to tide-scoured submarine cliffs and gravel dunes sculpted by tidal currents, Dorset’s submarine landscapes, habitats and wildlife are more than a match for those we are more familiar with on land.
Being mostly out of sight puts undersea habitats at greater risk - they are less understood, less appreciated and damage can go unnoticed for a long time. This partly explains why marine life around the UK has been declining.
Dorset Wildlife Trust wants to see a return to Living Seas - marine wildlife thriving in our coastal waters, recovering from past decline as we use the sea’s resources more wisely and learn to value the sea for the many ways in which it supports our quality of life. We believe it is possible to achieve Living Seas within a single generation but only if we act now.
Our work to restore Living Seas focuses on three themes.
Underpinning all this work is our longstanding effort to better understand the marine habitats of Dorset. With the help of remote sensing techniques and an army of volunteer divers and shore-walkers we now have one of the best understood areas of seabed in the UK.
Seas Under Threat - Please Take Action!
Have your say and join the Wildlife Trust campaign to protect our sea life.
Purbeck Marine Wildlife Reserve
Visit the UK's longest established Voluntary Marine Nature Reserve
- Great Dorset Seafood
- Studland Seagrass Meadows
- Responsible Sea Angling
- Kayaking with Wildlife
- Shellfish Divers Code
- Marine Protected Areas
Researchers say they have a new design for a bridge framework that will improve earthquake resistance and reduce damage.
The design also boasts faster on-site construction and uses common construction materials.
"The design of reinforced concrete bridges in seismic regions has changed little since the mid-1970s," John Stanton, a professor at the University of Washington, said in a press release from Purdue University.
During California's Loma Prieta Earthquake of 1989, the upper level of the double-deck Nimitz Freeway collapsed onto the lower deck, killing dozens of people.
The concept for the design was developed by Stanton, who teaches in UW's Department of Civil and Environmental Engineering.
The team also included professor Marc O. Eberhard and graduate research assistants Travis Thonstad and Olafur Haraldsson from the University of Washington; and professor David Sanders and graduate research assistant Islam Mantawy from the University of Nevada, Reno.
Speeding Up Construction
Faster construction is achieved by pre-fabricating the columns and beams, or "bents," off-site so they can be quickly erected and connected once shipped to the job location.
Bridge bents are currently made using cast-in-place concrete; therefore, the next piece can't be added until the one before it gains strength, researchers explained.
Pre-fabrication eliminates this waiting time, speeding on-site construction and reducing traffic delays.
"However, pre-fabricating means the pieces need to be connected on-site, and therein lies a major difficulty," Stanton noted.
"It is hard enough to design connections that can survive earthquake shaking, or to design them so that they can be easily assembled, but to do both at once is a real challenge."
The Rubber Band Effect
An important feature is that the columns are pre-tensioned.
Stanton described the idea by comparing it to a toy set of wooden building blocks with a hole through each one.
"Stack them on top of one another, put a rubber band through the central hole, stretch it tight and anchor it at each end. The rubber band keeps the blocks squeezed together.
"Now stand the assembly of blocks up on its end and you have a pre-tensioned column. If the bottom of the column is attached to a foundation block, you can push the top sideways, as would an earthquake, but the rubber band just snaps the column back upright when you let go," Stanton explained.
Since real bridge columns don't have rubber bands, very high-strength steel cables can be used to achieve the same effect, Stanton said. The cables, as well as some conventional rebar, are installed during the pre-fabrication process to help reduce site operations.
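The statics behind the analogy are simple enough to sketch. As a rough illustration (not from the researchers' design calculations), treat the column as a rigid block of width d and height h that rocks about its bottom corner, with the central tendon force P and the self-weight W acting at mid-width; the lateral force at the top needed to start rocking then follows from a moment balance. All numbers below are assumed.

```python
# Rough rocking-statics sketch of the "rubber band" effect (values assumed).
# Moment balance about the rocking corner: F * h = (P + W) * d / 2,
# so the tendon force P sets how hard an earthquake must push before the
# column lifts off -- and it pulls the column back upright afterwards.
def rocking_force_kn(P_kn: float, W_kn: float, d_m: float, h_m: float) -> float:
    """Lateral force at the column top needed to initiate rocking."""
    return (P_kn + W_kn) * d_m / (2.0 * h_m)

F = rocking_force_kn(P_kn=800.0, W_kn=200.0, d_m=0.6, h_m=6.0)
print(f"Lateral force to initiate rocking: {F:.0f} kN")  # about 50 kN
```

Doubling the tendon force roughly doubles that threshold, which is why pre-tensioning both stiffens the column against moderate shaking and re-centers it after a large event.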
Figure: The design includes the ability of bridge columns to re-center after an earthquake. (University of Washington, Seattle / NEES photo)
The "re-centering" capability is of utmost importance to ensure that bridge columns are vertical and not leaning at an angle after an earthquake.
During an earthquake, the rocking columns experience high local stresses at the points of contact, making the concrete vulnerable to being crushed. The researchers took special measures to mitigate this possibility by protecting the ends of the columns with short steel tubes, or "jackets," to confine the concrete.
Testing the System
The researchers said they have conducted successful cyclic testing on individual connections to test their design's durability.
"Cyclic tests of the critical connections have demonstrated that the system can deform during strong earthquakes and then bounce back to vertical with minimal damage," Stanton said.
This month, the team plans to test a complete bridge built with the system. The test will be done at 25 percent of full-scale on the earthquake-shaking tables at a Network for Earthquake Engineering Simulation (NEES) facility at the University of Nevada, Reno.
NEES is a shared laboratory network based at Purdue University.
Thonstad led the component design and building for the upcoming test. The column and cap beam components were then shipped to the testing facility, where Mantawy will lead the bridge construction.
The team will process data from the project, archive it, and make it publicly available through the NEES Project Warehouse data repository.
The research will be presented at Quake Summit 2014, which takes place July 21-25 in Anchorage, AK, and is part of the 10th U.S. National Conference on Earthquake Engineering.
Feeding the planet’s ever-expanding population while dealing with climate change will require a new way of thinking about agriculture. Current farming methods are depleting the earth’s resources and producing alarming quantities of greenhouse gases—agriculture operations currently produce 13 percent of human-based global GHG emissions. The environment is paying a huge price in biodiversity loss and deforestation, while the global economy loses billions of US dollars per year to conventional agriculture’s economic side effects.
Turning agriculture a brighter shade of green will not only ease pressure on the environment and help cope with climate change, but will also create opportunities to diversify economies, increase yields, reduce costs, and generate jobs—which will in turn help reduce poverty and increase food security. Increasing farm yields and improving ecosystems services will be a boon to the 2.6 billion people who depend on agriculture for a livelihood, particularly in developing nations where most farmers live on small parcels in rural areas.
Huge gains can be made for a greener future by simply reducing agricultural waste and inefficiency. Nearly 50 percent of food produced is lost through crop loss or waste during storage, distribution, marketing, and household use. Some of these inefficiencies—especially crop and storage losses—can be addressed with small investments in simple farming and storage technologies.
Greening agriculture will require investment, research, and capacity building; UNEP contributes to this global effort through a number of innovative programmes.
For more on the future of green agriculture, please visit UNEP’s Green Economy Report website.
Carbon capture, for those who don’t know already, is the term given to various different technologies that can “capture” the carbon dioxide in streams of gases that would normally be emitted to the atmosphere. These streams come from any process or device that burns a fuel, from a petrol-powered lawnmower, through cars and trucks, right up to the gas and coal-fired power stations that keep our society humming along.
Ideally, we could use some kind of carbon capture technology to remove the carbon dioxide from all those emissions, since everyone (with a few notable exceptions) knows that carbon dioxide is a greenhouse gas. It causes a small but significant rise in the global average temperature, which in turn has potentially disastrous consequences such as a greater incidence of highly variable weather events (flooding, cyclones, drought and so on).
The reality though, is that capturing carbon dioxide is difficult and certainly prohibitively expensive from small and intermittent sources such as car exhausts. In most cases the capture technology would be as large and complicated as the car itself. Even then, there is no obvious place to deposit the captured carbon dioxide.
It only really becomes technically feasible once the sources are large and not moving, such as a coal-fired power station. Even then the current state-of-the-art technologies would use a large proportion of the power station’s output, just to power the capture and storage technology. The storage question, of where to put captured carbon dioxide, is another discussion in itself.
So if this whole business of carbon capture and storage is so difficult, and the existing technologies are so energy-intensive, then why is anyone bothering with carbon capture research and development at all? Why not just pursue a completely renewable energy future, powered by solar, wind, and other emerging clean energy technologies? That’s a tough question, and there’s no straightforward answer, but there are a few key issues.
The most commercially advanced renewable energy technologies are wind and solar, which both suffer from intermittency (they don’t generate power when the wind isn’t blowing or the sun isn’t shining respectively). To a certain extent wind intermittency can be solved by having large interconnected transmission networks, but these are expensive to build, especially in a country as large and sparsely populated as Australia.
The holy-grail solution for these technologies is cheap energy storage, so that excess power could be stored and then used at times of high demand. At the moment, apart from some clever solutions involving pumping water to higher elevations and then releasing it through turbines, there are no cost-effective storage solutions. So until there are suitable energy storage options in place, there is a limit to how much intermittent energy generation can be used. Some places in the world, such as Germany, have probably already passed that limit.
The second issue though is related to cost and resource availability. Certain regions in the world (such as China, the USA and much of Australia) still have enormous coal deposits that could be used to generate electricity. The governments, corporations and individuals that own those deposits are understandably keen to exploit them, and this is where carbon capture really comes into play. If a truly cost-effective carbon capture technology can be deployed, then there is the potential to generate low-cost baseload electricity (or at least, lower cost than the alternatives), which suits those governments, corporations and individuals eminently well.
All over the world, researchers and organisations are working feverishly to optimise existing carbon capture technologies, especially those that have already been demonstrated at some scale. This includes the absorption of carbon dioxide into liquid solutions, which has been practiced for several decades in the oil and gas industry, and the use of solid materials which adsorb carbon dioxide onto their surfaces.
In both liquid and solid cases though, large quantities of energy must be used to release the carbon dioxide, usually in the form of heat or a pressure change. To put this into perspective, early estimates for a power station are that up to 40% of the power output would be consumed running the carbon capture apparatus.
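To make that 40% figure concrete, here is a back-of-envelope sketch; the plant size and penalty are assumed round numbers, not data from any specific station.

```python
# Illustrative energy-penalty arithmetic for post-combustion capture.
# Assumed: a nominal 500 MW plant whose capture system consumes 40% of output.
gross_mw = 500.0
capture_fraction = 0.40

net_mw = gross_mw * (1.0 - capture_fraction)   # power actually delivered
extra = gross_mw / net_mw - 1.0                # extra gross capacity needed
print(f"Net output with capture: {net_mw:.0f} MW")               # 300 MW
print(f"Extra generation for the same net power: {extra:.0%}")   # ~67%
```

In other words, a grid that wants the same delivered power must build and fuel roughly two-thirds more capture-equipped capacity, which is where much of the cost penalty comes from.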
Other groups are working on the use of thin membranes which can filter the carbon dioxide from emission streams, but these too have a myriad of problems. The membranes usually cannot operate at the high temperatures of the emission gas streams, and typically become soft and swollen after prolonged exposure to carbon dioxide, limiting their effectiveness.
Of course many researchers, myself included, have proposed novel carbon capture technologies. A report just published in the journal Angewandte Chemie International Edition describes a new approach using a metal organic framework.
This is a highly structured material with lots of channels inside it with very specific dimensions and properties. It can absorb carbon dioxide and then release it after exposure to light. This is particularly exciting because the light used is very similar to concentrated sunlight, so it could potentially be used in a process that captures carbon dioxide and releases it without the necessity of high temperatures or a pressure change, both of which are expensive. The material we used is expensive though, and may not be suitable for the very large scale technologies that will be required for coal-fired power stations.
Most research and development funding has been directed at the so-called “near-commercial” technologies, as the power generation industry is notoriously risk-averse and prefers to adopt a mature technology. The reality though, is that there are only marginal improvements to be made to the conventional carbon capture technologies, and in my opinion, it is entirely appropriate to designate them as “so-called” near-commercial. They are nearly commercial now, and they always will be.
In the meantime, my colleagues and I, and other aspiring researchers around the world, will continue working with whatever funding support we can obtain to find the next big thing in carbon capture and storage. Maybe it will involve metal organic framework materials, maybe something else we haven’t even invented yet. Two things are for certain though; it won’t be a slight iteration on the existing technology, and it won’t be funded by the power generation industry.
1. Not open to discussion or modification: the essential features of the constitution are non-negotiable
More example sentences
- Really, those are also almost non-negotiable.
- He said: ‘We have got to decide what parts of the health service we think are non-negotiable.’
- The union has rejected the report out of hand and claimed it had been told that the 11 per cent was non-negotiable and would be linked to 4,500 job losses and the closure of scores of fire stations.
For editors and proofreaders
Line breaks: non-negoti¦able
If you want to save this information but don't think it is safe to take it home, see if a trusted friend can keep it for you. Plan ahead. Know who you can call for help, and memorize the phone number.
Be careful online too. Your online activity may be seen by others. Do not use your personal computer or device to read about this topic. Use a safe computer such as one at work, a friend's house, or a library.
Abuse is maltreatment. It can be physical, such as hurting the body, or it may be emotional, sexual, or even financial. Injury from abuse may occur to children or vulnerable adults or among spouses.
Suspect physical abuse when:
- An injury can't be explained or doesn't match the explanation.
- Repeated injuries occur.
- Explanations change for how an injury happened.
You may feel uneasy if your health professional brings up the issue of abuse. Health care providers have a professional duty and legal obligation to evaluate the possibility of abuse. It is important to consider this possibility, especially if there were no witnesses to an injury.
If you suspect abuse, seek help. You can call the local child or adult protective agency, police, or clergy or a health professional such as a doctor, nurse, or counselor.
If you think your child has been abused, there are resources available to help.
Current as of: April 24, 2015
Data Distribution Over the Web
In the last year, many Web surfers interested in VRML (Virtual Reality Modeling Language) files have found themselves visiting the NCSA Astronomy Digital Image Library (ADIL) to view 3-D visualizations of a galaxy or an interstellar cloud. Others, perhaps looking for images of the Milky Way, have caught a glimpse of what the center of our galaxy looks like in radio waves. It would seem that such an image-ready medium as the World Wide Web would be a perfect fit for such an image-oriented field as astronomy. Indeed, judging from the public response to the ADIL and related astronomy resources (such as the hugely popular Mars Pathfinder site from NASA and the public gallery from the Hubble Space Telescope), the Web has proved to be extremely successful in distributing science to the general public. The Web, of course, has also been important for distributing that same science throughout the scientific community as well; however, making the network an effective tool for scientists through the distribution of research-quality data presents a number of challenges.
The NCSA Astronomy Digital Image Library was developed with support from NASA and the National Science Foundation to address some of the challenges of distributing scientific data over the network. Its specific mission is to collect fully processed astronomical images in FITS format (a standard astronomical image format) and make them available to the research community and the interested public via the World Wide Web. The research component itself has two sides, which I will discuss in this story: on the one side, the ADIL allows users to search, browse, and download astronomical images. This can be non-trivial when the images are not in the usual GIF or JPEG formats. On the other side, the ADIL provides researchers with a place to archive and share their fully-processed images with the community by allowing them to add the images to the Library's collection.
ADIL can be thought of as a place to search for and store data. But it is also a tool that strives to work at a high conceptual level, providing a bridge between data and astronomical ideas. This is accomplished in part through links between the images and other electronic data including, in particular, scientific literature. In fact, today, the majority of current refereed journal literature is available on-line, either as abstracts or full articles. Interconnecting astronomical resources on the network has been the topic of considerable effort within the community which I will discuss in a follow-up story (to appear in the February 1998 issue of D-Lib). With such connections to the scientific literature, the ADIL can be more than just a repository for astronomical images; it can be a part of the presentation of scientific results. Astronomers can now publish scientific data to a level not previously possible. In this way, we hope that the ADIL and resources like it will change the way astronomers do research.
Many of the complications of running a scientific data library trace back to two characteristics of the basic data items being served: the item's file type and size. The ADIL stores and distributes its images in FITS format, which is not a file type generally supported by the Web. Why not use GIF or JPEG? To understand why these formats are not appropriate for scientific data, consider the difference between scientific images and the usual sort of images one finds on the network.
The biggest difference is that, to a scientist, an image is a multi-dimensional, regularly-sampled array of measurements. This means first of all that the image is not restricted to two (or even three) dimensions. Second, the value at each pixel represents a scientific measurement or quantity, such as brightness, temperature, or magnetic field strength. The value could be an integer or floating-point number or something more complex. In contrast, the value in a GIF pixel is an index into some color table. A scientific image often contains no notion of color. A color table is usually applied to a scientific image only during visualization. The visualization process usually causes a loss of information in the image (e.g. one might only have 256 colors) in order to highlight some particular feature of the data.
The other important feature of a scientific image is its associated metadata. The metadata are the ancillary data needed to properly interpret the basic image data. They include basic information like the number of dimensions in the image, image size, and data type contained in each pixel which allow the data to be read in properly by application programs. They also include information necessary to properly analyze the data. For an astronomical image, the metadata might include information like the telescope used, the observing frequency, the position in the sky, and the name of the object in the image. Such information plays an important role when searching for and browsing images in a library. Thus, a scientific format must not only be able to support a scientist's notion of an image, it must also be able to store necessary metadata needed to handle that image.
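As a concrete illustration of how such metadata travels with the image, the sketch below reads a FITS header using the astropy Python library (a modern tool used here purely for illustration); the file name is a stand-in, and which keywords are present varies from image to image.

```python
# Minimal sketch: inspecting FITS image metadata (file name is illustrative).
from astropy.io import fits

with fits.open("example_image.fits") as hdul:
    header = hdul[0].header                     # metadata lives in the header
    print(header["NAXIS"])                      # number of image dimensions
    print(header["NAXIS1"], header["NAXIS2"])   # image size along two axes
    print(header.get("TELESCOP"))               # telescope used, if recorded
    print(header.get("OBJECT"))                 # name of the object observed
    data = hdul[0].data                         # the pixel array itself
```

Because the header and the pixels live in the same file, an application can recover both the array and the context needed to interpret it from a single download.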
The other important concern for handling scientific data is the size of the individual data items. There is no restriction on how big a FITS file can be, and in practice, they range from a few hundred kilobytes to several hundred megabytes in size. Downloading such files through today's Internet is a slow operation; therefore, the data library must have effective ways of browsing the data -- that is, finding out what's in the data without downloading it all.
A Visit to the Library
For web surfers, the ADIL home page provides links to various highlights of the Library's contents. An astronomer visiting the ADIL, however, normally would first go to the Library's Query Page. This HTML form allows the user to search for images using a variety of criteria, including those shown in Figure 1.
Figure 1. An excerpt from the ADIL Query Page.
As an example, the user could enter "supernova" in the "Object Type" box and press the "Submit Query" button, and a list of matching images would appear in a Results Page. For each image, the page lists some of the metadata associated with the image so that the user gets some idea of what the image contains. From this page, the user can download any of the matched images; however, most users would browse the images by clicking on the links to their Preview Pages.
The purpose of the Preview page is to give as much information as possible so that the users can determine what is in the image and whether they should download it for further analysis. This is done through the formatted presentation of the image metadata, a preview image, and links to further information. For example, a typical Preview Page contains the title and authors and a digest of the image header (see Figure 2). The preview image is a visualization of the FITS image in GIF format. Often, the image is subsampled to allow it to be downloaded quickly. If the FITS image has more than two dimensions, a typical 2-D subimage is chosen for previewing. There are links for further browsing of related data, including an abstract and the full FITS header. If the image does have more than two dimensions, there is a link to a "Movie Page" which allows the user to browse other 2D frames from the image.
Figure 2. An excerpt from a sample Preview Page. This page contains preview information about the image as well as links to other information. Note the link just below the preview image labeled "Reference"; this anchor links the image to the related journal abstract.
One important link found on the preview page (located just below the preview image) is labeled "Reference". This is a link to the abstract in a related published article. These abstracts are provided by another astronomical data provider, the NASA Astrophysics Data System (ADS). This link helps facilitate the connection between the data and the science it represents. In cooperation with the ADIL, the ADS provides similar links between the abstracts and related data in the ADIL. Thus, users browsing abstracts at the ADS site can easily access the data that went into that article, stored in the ADIL.
It is interesting to note that many of the astronomical data providers available on the network have similar schemes for searching and browsing data. However, the details of the data access differ greatly because they are tailored to the particular data type they serve. Thus, it is difficult to find all the information available everywhere about "supernova"; currently, one must visit each site and use their interface to conduct a search. Efforts are underway to address this problem which I will discuss in the follow-up article.
New Methods for Browsing
The standard ADIL scheme for browsing is a kind of server-side browsing. In this type of browsing, the server filters the data and its metadata into a presentation in HTML format. The ADIL has been exploring other techniques for browsing its contents. One technique is the use of imagemaps for "visually searching" through a collection of images. For example, the Library contains a survey of molecular gas in the Milky Way Galaxy made up of 720 images. To browse this collection, the user can access the survey's Project Page . The image shown there represents the entire portion of the sky covered by the survey. By clicking on a location in the image map, one can get a list of nearby images.
The advent of Java allows us to explore techniques for client-side browsing. As an example, we have developed a Java Applet for browsing large images in the Library. This applet presents two views: a subsampled view of the image on the left and a "zoomed" image on the right. The zoomed view can be updated by clicking on locations in the subsampled view. The applet also tracks coordinate positions as the user moves the mouse over the image. From our explorations of Java, we have found a number of operations that are common to browsing all kinds of scientific images. This has led to a project at NCSA to develop a package of reusable Java classes for browsing scientific images. This package, called the Horizon Image Data Browser Package, is currently available as an alpha release. A production release is expected by Summer 1998.
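The two core operations of such a browser are easy to see in miniature. The sketch below (written in Python with NumPy rather than the Horizon package's Java, purely for brevity) subsamples a large 2-D array for an overview and cuts a full-resolution window around a clicked pixel; the sizes and names are illustrative.

```python
# Illustrative overview-plus-zoom browsing operations; not Horizon code.
import numpy as np

def subsample(image: np.ndarray, factor: int) -> np.ndarray:
    """Keep every factor-th pixel along each axis for a quick overview."""
    return image[::factor, ::factor]

def zoom_cutout(image: np.ndarray, x: int, y: int, half: int = 64) -> np.ndarray:
    """Return a full-resolution window centred on the clicked pixel (x, y)."""
    y0, y1 = max(0, y - half), min(image.shape[0], y + half)
    x0, x1 = max(0, x - half), min(image.shape[1], x + half)
    return image[y0:y1, x0:x1]

big = np.random.rand(4096, 4096)        # stand-in for a large image array
overview = subsample(big, 16)           # 256 x 256 view for the left panel
window = zoom_cutout(big, 2048, 1024)   # right-panel zoom after a click
```

Only the overview and the requested windows ever need to cross the network, which is what makes browsing a multi-hundred-megabyte image tolerable.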
The ADIL has also been exploring VRML as a way of browsing images. As a 3-D equivalent to a GIF image, VRML can be used to create static visualizations of 3D images. The Library contains a number of VRML visualizations. In addition, we are now testing a VRML Server that allows users to create their own 3D visualizations of images in the ADIL.
Adding to the Library's Collection
The ADIL is more than a tool for astronomers looking for images to augment their research. It is also useful for authors who wish to share their images with the community. While many of the Library's images come from observatories, the core of the collection comes from individual authors. The ADIL provides a way to upload the images to the Library, along with any supporting data, where it can be processed and made available to the Library users.
Authors deposit images into the Library in the form of collections we refer to as "projects". Normally, an author would make a deposit at the end of some scientific study when the resulting publication is going to press; all the fully processed images associated with that paper would make up the project. There are a few main requirements for making a deposit.
In addition to the FITS images, the author can also include other kinds of data files related to the project. This could include table data or special visualizations of the data, such as GIF images, PostScript figures, animations, or VRML renderings.
When the author is ready to deposit, he or she first fills out an on-line submission form. Then, the author may either manually FTP the files to the ADIL anonymous FTP server or, if working on a UNIX platform, download a customized script that uploads all the files automatically.
When a project is processed and placed on the Library's "shelves", it is given a unique codename (e.g. 95.RP.01 for the first project deposited by Raymond Plante in 1995). When this codename is appended to a standard URL base, the corresponding Project Page can be accessed directly. Items within the project also have codenames (e.g. 95.RP.01.02 for the second image in that project). Thus, every item in the library can be accessed via a unique URL. We encourage authors to cite these URLs in their published articles. For example, one might refer to an animation sequence that illustrates a feature of the data that cannot be conveyed as well with traditional 2D visualizations.
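A sketch of that codename-to-URL mapping is below. The base URL is inferred from the Roberts & Goss citation at the end of this article (http://imagelib.ncsa.uiuc.edu/document/95.DR.01), so treat it as an assumption rather than a documented interface.

```python
# Hedged sketch of the unique-URL scheme for Library items.
ADIL_BASE = "http://imagelib.ncsa.uiuc.edu/document/"  # assumed from a citation

def adil_url(codename: str) -> str:
    """Direct URL for a project (95.RP.01) or an item within it (95.RP.01.02)."""
    return ADIL_BASE + codename

print(adil_url("95.RP.01"))     # the Project Page
print(adil_url("95.RP.01.02"))  # the second image in that project
```

Stable, citable URLs of this kind are what let journal articles point readers straight at the underlying data.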
Behind the Desk: the Library Backend
For more information about what goes on "behind the desk" at the ADIL, consult the "Overview of the ADIL System". In summary, when an author makes a deposit to the Library, a collection of programs, the "Electronic Librarian", engages to process the deposit. Metadata are extracted from the FITS files and the inputs from the submission form and loaded into the database system (PostgreSQL) used for searching for images. The files are then archived in long-term storage and moved to the "Library Shelves", making them available over the Web. Although this process is largely automated, the Human Librarian still plays an important role. The metadata, which allow the image to be located in a search, are not always contained in the FITS file or the submission form filled out by the user. The metadata that can be extracted might also be inaccurate. The human, therefore, is important for catching typos and making sure the metadata that get loaded into the database make sense.
Figure 3. Data Flowing into the Library. Authors use FTP and the Web to deposit data and related information into the Library. Metadata for a searchable database is extracted, and the data is moved to storage.
The ADIL storage model employs primary, secondary, and tertiary storage to hold the data. The primary storage consists of locally mounted hard drives containing the database, metadata used for constructing Preview Pages on-the-fly, and GIF preview images. These are kept on disk all the time for immediate user access. The secondary storage comprises fourteen gigabytes of local disk operated as a cache, which is used to store the actual FITS images. If the user downloads an image, the system first looks for it in the cache; if it is not there, it is automatically transferred from the tertiary (long-term) storage and delivered to the user. The cache's purging policy is designed to remove first the largest files that have not been accessed recently.
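That purging heuristic is simple to express. The sketch below assumes hypothetical file records (path, size in bytes, last-access time) rather than the ADIL's actual bookkeeping.

```python
# Minimal sketch of "evict the largest files not accessed recently first".
import time

def purge(files, capacity_bytes, recent_window=7 * 24 * 3600):
    """Return the files to evict so the cache fits within capacity_bytes."""
    total = sum(f["size"] for f in files)
    cutoff = time.time() - recent_window
    # Eviction candidates: not touched within the window, biggest first.
    stale = sorted((f for f in files if f["atime"] < cutoff),
                   key=lambda f: f["size"], reverse=True)
    evicted = []
    for f in stale:
        if total <= capacity_bytes:
            break
        evicted.append(f)
        total -= f["size"]
    return evicted
```

Evicting big, cold files first keeps the many small, frequently previewed images resident while pushing rarely requested multi-hundred-megabyte files back to tape, where the fast Magstar drives can restore them on demand.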
The ADIL uses the NCSA Mass Storage System (MSS) for its tertiary, long-term storage. This system is based on a bank of fast IBM Magstar tape drives (loaded by a robotic juke box) and more than 285 Gigabytes of its own disk cache. The drives feature a data rate of 9 Megabytes/second, and they can seek to any position in their 10-Gigabyte tapes in less than 60 seconds. The MSS is connected to the ADIL server with an FDDI network connection providing 100 Megabits/second transfer rates. Because of the cache's purging policy, transfer from MSS usually happens for only the larger images. Given the performance of the MSS, the bottleneck during the download of a large file to a remote workstation is almost always the Internet itself.
Data Archiving and Data Publishing
Prior to the Web and the ADIL, sharing data with one's colleagues was a difficult task. If an astronomer needed a copy of someone else's data, he would have to contact the author of the data directly. Unless the author had been working with the data recently, she might have to go to considerable effort to locate the data on tape, make a copy, and then send it to the colleague who made the request. Given the effort necessary, there was a good chance that the data would not get transferred in a timely period--if at all.
Today, there are a number of centers distributing data over the network, including image data. Some serve as archives for raw or unprocessed data (such as the ASCA X-ray Telescope archive) while others serve data that are essentially fully processed and ready for analysis. An example of the latter is the NASA SkyView archive, which serves data from a number of large survey projects. It is important to note that it is not the goal of the ADIL to mirror data that is available from other (permanent) archives. Such archives are usually associated with large observatories or projects (such as NASA space observing missions) which can afford to include a data repository as part of the overall mission. However, many images that produce published results come from smaller observatories that do not have publicly available archives. A resource like the ADIL is particularly important to astronomers conducting smaller-scale surveys, such as a recent chemical study of the Taurus Molecular Cloud which includes images of over 20 different chemical species. Such comprehensive projects can form the cornerstone of many future studies as long as the data can be effectively distributed.
The availability of a variety of astronomical data on-line is already beginning to affect the way astronomers do research. At this time, the ADIL contains about 5,000 images representing over 13 gigabytes of data. These numbers are small compared with the library's capacity as well as its potential as a research tool; however, as the collection grows, the power of the Library will become more apparent. With a large variety of data available, astronomers can carry out multi-frequency studies of objects or a class of objects, comparing previously observed data with new data. Many questions in science can only be effectively addressed when a large amount of data exists, spanning many different objects, positions in the sky, or frequency bands. Previous observations are also very valuable in planning new projects.
The unique URLs for ADIL items provide a way to link the data to other information on the Web, including the scientific literature. Transparent links between the literature and the data serve to pull the data into the publishing process. We envision a major shift in the norms of publishing in which data is published at the same time as a refereed article.
This future, of course, requires a cultural change within the community. Admittedly, many scientists might feel overly exposed to scrutiny if their images were available in an analyzable format. Some are concerned that publishing the data might "give away" research they might do in the future. Such concerns may never go away, preventing some data from ever becoming public. Nevertheless, astronomers are becoming more accustomed to having easy access to data. I therefore believe that having one's images available on-line will help promote the scientific results they produced, because other researchers that make use of the images are obligated to cite the previous work. In the end, a resource like the ADIL helps to complete the loop of scientific investigation: easy access to previous data makes it easier to pose new questions and initiate new studies.
Roberts, D.A. and Goss, W.M. 1995. Multiconfiguration VLA H92alpha Observations of Sgr A West at 1 arcsec Resolution (http://imagelib.ncsa.uiuc.edu/document/95.DR.01).
NTCA is dedicated to improving the quality of life in rural communities by advocating for broadband and other advanced communications infrastructure and services. Connecting and collaborating with partners and allies enables the association to expand its efforts and focus on rural education, health care, commerce and public safety—all areas that contribute to the vitality of a community. Through strategic relationships, NTCA and its partners can make an even greater impact on sustaining and growing rural communities.
Telemedicine. In remote areas of the country, where routine or emergency exams might be difficult to perform, doctors—through the use of broadband—can communicate and provide quality support to their patients despite the geographic distances that separate them. Telemedicine is a form of clinical medicine in which medical information is transferred online so patients and doctors can more easily consult from a distance. Telemedicine generally refers to the use of telecommunications and can be as simple as medical specialists discussing a patient over the telephone, or as complex as videoconferencing.

Distance learning. Access to learning is an essential factor in the economic development of any community. Rural schools face unique challenges of small student populations that sometimes limit the capability of schools to fund specific programs, such as language classes. Through the use of technology, distance learning gives rural students access to more individualized instruction and advanced coursework and allows them to receive instruction from teachers who are off-site. This arrangement provides rural students with the same flexibility and opportunities their counterparts in urban areas enjoy.

Commerce. Agriculture and ranching remain the leading types of commerce in rural communities; however, broadband access opens up a diverse range of market opportunities for small businesses. Rural businesses are creating websites and accessing markets online—both regional and international—that otherwise were not available.

Public safety. Public safety officials represent local, state and federal law enforcement agencies, fire departments, hospitals (as emergency medical technicians), and various other agencies related to public utilities and transportation. Broadband is an essential tool for emergency management and enables public safety officials to talk to one another during potentially life-threatening situations.
About the Authors: Kris Balderston serves as Special Representative for the Secretary of State's Global Partnerships Initiative, and Jacob Moss serves as U.S. Coordinator for the Global Alliance for Clean Cookstoves.
On Wednesday, the world's leading general medical journal, The Lancet, released a major new report which estimates that household air pollution attributed to cooking over open fires or basic cookstoves causes the premature deaths of approximately four million people annually -- many of them women and young children. This number -- which includes 3.5 million deaths associated with indoor exposures and another 500,000 deaths from cookstoves' contribution to outdoor air pollution -- is more than double previous estimates and underscores the need to renew efforts to prevent these deaths.
Three billion people globally rely on solid fuels like wood, charcoal, agricultural waste, animal dung, and coal for household energy needs, often burning them inside their homes in inefficient and poorly ventilated stoves or open fires. Polluting stoves and fuels used indoors expose women and their families to air pollution levels as much as 50 times greater than World Health Organization guidelines for clean air. Household air pollution exposure can cause heart and lung diseases in adults, pneumonia in children, and low birth weight among infants.
Women and children are affected most as they are exposed to high levels of pollution inside the home and often spend several hours every day collecting fuel -- time that could be much better spent on schooling or income generating activities such as farming or starting a microenterprise. Even households that purchase (rather than collect) solid fuels can save money by switching to cleaner stoves that are also more fuel-efficient.
The new estimate of the premature deaths due to household air pollution represents a significant increase from previous estimates of two million annual premature deaths. The study in The Lancet cited household air pollution as the fourth worst health risk factor globally, second worst among women and girls, and fifth worst among men and boys. It is the worst of the environmental risk factors affecting health (such as outdoor air pollution and unimproved water sources and sanitation), both globally and in poor regions. It is also the single worst health risk factor in South Asia and second worst in most parts of Sub-Saharan Africa.
This increase in estimated premature deaths from cookstove pollution is largely attributable to new evidence that allows for the inclusion of additional health impacts from exposure to cookstove smoke. For example, the new estimate includes effects of smoke inhalation on adult cardiovascular mortality and lung cancer associated with burning biomass, neither of which was previously considered. The new estimate also includes the contribution of cookstove smoke to the burden of disease associated with outdoor air pollution, a major problem in Asia and Sub-Saharan Africa.
Globally, the health impacts of household air pollution exceeds those of some of the most burdensome diseases in developing countries, including malaria, tuberculosis, or HIV/AIDS -- all of which are expected to decline substantially over the next two decades as a direct result of well-funded public health intervention campaigns in recent years. The success of these campaigns tells us that reducing the burden of household air pollution on global public health is also possible -- if the necessary resources and strategic vision are brought to bear.
In September 2010, Secretary Clinton helped launch the Global Alliance for Clean Cookstoves to save lives, improve livelihoods, empower women, and protect the environment. Led by the United Nations Foundation, the Alliance has grown to nearly 500 partners, including 38 countries. The U.S. commitment alone has reached up to $114 million.
The groundbreaking approach taken by the Alliance involves bringing together a diverse group of partners -- including governments, the private sector, multilateral institutions, microenterprises, foundations, local women's groups, and others -- to help overcome the market barriers that currently impede the production, deployment, and use of clean cookstoves in the developing world. A single actor can only do so much, but a truly cross-sectoral partnership such as the Alliance can solve this global problem at the scale it demands.
Together with its partners, the Alliance is making incredible progress towards its ambitious interim goal of 100 million clean cookstoves adopted by 2020. But our work is just beginning.
This new study makes more urgent than ever the absolute necessity of our efforts to lead the globe towards universal adoption of clean and efficient cooking solutions.
We invite you to join us as we seek to solve this critical issue and save lives.
Name of Material: Carbon monoxide, refrigerated liquid (cryogenic liquid)
CARBON MONOXIDE (Refrigerated Liquid)
- TOXIC; Extremely Hazardous.
- Inhalation extremely dangerous; may be fatal.
- Contact with gas or liquefied gas may cause burns, severe injury and/or frostbite.
- Odorless, will not be detected by sense of smell.
- EXTREMELY FLAMMABLE.
- May be ignited by heat, sparks or flames.
- Flame may be invisible.
- Containers may explode when heated.
- Vapor explosion and poison hazard indoors, outdoors or in sewers.
- Vapors from liquefied gas are initially heavier than air and spread along ground.
- Vapors may travel to source of ignition and flash back.
- Runoff may create fire or explosion hazard.
- CALL Emergency Response Telephone Number on Shipping Paper first. If Shipping Paper not available or no answer, refer to appropriate telephone number listed on the inside back cover.
- As an immediate precautionary measure, isolate spill or leak area for at least 100 meters (330 feet) in all directions.
- Keep unauthorized personnel away.
- Stay upwind.
- Many gases are heavier than air and will spread along ground and collect in low or confined areas (sewers, basements, tanks).
- Keep out of low areas.
- Ventilate closed spaces before entering.
- Wear positive pressure self-contained breathing apparatus (SCBA).
- Wear chemical protective clothing that is specifically recommended by the manufacturer. It may provide little or no thermal protection.
- Structural firefighters' protective clothing provides limited protection in fire situations ONLY; it is not effective in spill situations where direct contact with the substance is possible.
- Always wear thermal protective clothing when handling refrigerated/cryogenic liquids.
- See Table 1 - Initial Isolation and Protective Action Distances.
- If tank, rail car or tank truck is involved in a fire, ISOLATE for 800 meters (1/2 mile) in all directions; also, consider initial evacuation for 800 meters (1/2 mile) in all directions.
- DO NOT EXTINGUISH A LEAKING GAS FIRE UNLESS LEAK CAN BE STOPPED.
- Small fires: Dry chemical, CO2 or water spray.
- Large fires: Water spray, fog or regular foam.
- Move containers from fire area if you can do it without risk.
- Fight fire from maximum distance or use unmanned hose holders or monitor nozzles.
- Cool containers with flooding quantities of water until well after fire is out.
- Do not direct water at source of leak or safety devices; icing may occur.
- Withdraw immediately in case of rising sound from venting safety devices or discoloration of tank.
- ALWAYS stay away from tanks engulfed in fire.
- ELIMINATE all ignition sources (no smoking, flares, sparks or flames in immediate area).
- All equipment used when handling the product must be grounded.
- Fully encapsulating, vapor protective clothing should be worn for spills and leaks with no fire.
- Do not touch or walk through spilled material.
- Stop leak if you can do it without risk.
- Use water spray to reduce vapors or divert vapor cloud drift. Avoid allowing water runoff to contact spilled material.
- Do not direct water at spill or source of leak.
- If possible, turn leaking containers so that gas escapes rather than liquid.
- Prevent entry into waterways, sewers, basements or confined areas.
- Isolate area until gas has dispersed.
- Move victim to fresh air.
- Call 911 or emergency medical service.
- Give artificial respiration if victim is not breathing.
- Administer oxygen if breathing is difficult.
- Remove and isolate contaminated clothing and shoes.
- In case of contact with substance, immediately flush skin or eyes with running water for at least 20 minutes.
- In case of contact with liquefied gas, thaw frosted parts with lukewarm water.
- Keep victim warm and quiet.
- Keep victim under observation.
- Effects of contact or inhalation may be delayed.
- Ensure that medical personnel are aware of the material(s) involved and take precautions to protect themselves.
Georgia Tech develops technology for more compact, inexpensive spectrometers
Technology allows for more versatile portable spectrometers
ATLANTA (February 8, 2006) -- Being the delicate optical instruments that they are, spectrometers are pretty picky about light.
But Georgia Tech researchers have developed a technology to help spectrometers -- instruments that can be used as the main parts of sensors that can detect substances present in even ultra-small concentrations -- analyze substances using fewer parts in a wider variety of environments, regardless of lighting. The technology can improve the portability while reducing the size, complexity, and cost of many sensing and diagnostics systems that use spectrometers. The technology has appeared in Applied Optics, Optics Express and Optics Letters and was presented as an invited talk at the IEEE Lasers and Electro-Optics Society Annual Meeting 2005.
Conventional spectrometers have multiple parts -- a narrow slit, a lens (to guide light), a grating (to separate wavelengths), a second lens and a detector (to detect the power at different wavelengths). The Georgia Tech team's goal was to combine all these pieces into two parts, a volume hologram (formed in an inexpensive piece of polymer) and a detector, to create a compact, efficient and inexpensive spectrometer that could be used for multiple spectroscopy and sensing applications.
"This technology is very useful for low-end spectrometers, but at the same time, there are many applications that require high-end spectrometers. This technology could convert a portion of a complex, high-end system into a much more versatile and light system," said Ali Adibi, head of the project and an associate professor in the School of Electrical and Computer Engineering.
Because of its light weight and relative insensitivity to optical alignment, the new design helps create more versatile and portable spectrometers for several applications where portability had been difficult. For instance, the technology would make handheld devices possible for carbon monoxide detection or on-the-spot blood analysis and other biomedical applications.
One of the key advantages of the new spectrometer is its insensitivity to alignment. Spectrometers are very sensitive to the direction and wavelength of light, and several of their parts are devoted to keeping the light correctly directed.
But the Georgia Tech team was able to incorporate those necessary alignments along with the focusing functions into a volume hologram. This hologram is recorded by the interference pattern of two beams in a piece of photopolymer.
"There were lots of challenges because the light we need to analyze is diffuse in nature," Adibi said.
Conventional spectrometers work best under collimated light (i.e., light moving in only one direction). However, the optical signal needed for practical sensing applications is diffuse. This problem is solved in conventional spectrometers by blocking light in all but one direction using a slit and a lens, but this also results in considerable power loss and lower efficiency.
"By choosing the appropriate hologram, we have no collimating hardware in our system. We have further demonstrated the capability of improving the throughput by using more complex holograms, which are recorded similar to less complex holograms, in our spectrometer without adding to the actual complexity of the system," Adibi added.
The Georgia Tech team has a prototype for a lower-end spectrometer comparable to those currently on the market but at a considerably lower cost, Adibi said. Their research will now focus on developing more complex systems by using specially designed volume holograms to improve the efficiency -- and thus the sensitivity -- of their spectrometers, Adibi added.
It’s like something out of a science fiction magazine, but leave it to MIT to turn science fiction into science fact. A study published in the June 12th edition of PLoS ONE reveals a new glucose powered chip that literally will create an interface between brain and machine. The glucose “fuel cell” brings hope that in the future we will be able to help paralytics regain control of their limbs using neural prosthetics powered by this new technology.
Glucose is basically the sugar found in our blood. It is the usable form of energy that our bodies use to power our muscles and our brain. The glucose-powered fuel cells, fabricated on a silicon wafer, are pictured below.
The new fuel cells strip electrons from glucose molecules to create a small electric current. Implantable electronics are nothing new. Consider the pacemaker, for instance. Many heart patients are alive and well today due to the tiny electronic module that keeps their heart in perfect rhythm. Oddly enough, scientists in the 1970s originally proved they could power a pacemaker using glucose, but due to some inefficiencies with an enzyme necessary to run them, they eventually decided to use lithium-ion batteries instead. The difference in this new technology is that it contains no biological components whatsoever. It can generate hundreds of microwatts, which can be used to power “ultra-low-power” implants.
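As a rough sanity check on the "hundreds of microwatts" figure, the sketch below applies Faraday's law to an assumed glucose consumption rate. The electron count (two per glucose molecule, for partial oxidation to gluconic acid, typical of abiotic cells), the flux, and the cell voltage are all illustrative assumptions rather than specifications of the MIT device.

```python
# Faraday's-law estimate: current = electrons_per_molecule * F * molar_flux.
F = 96485.0            # Faraday constant, coulombs per mole of electrons
electrons = 2          # assumed: partial oxidation of glucose to gluconic acid
glucose_flux = 1e-9    # assumed glucose consumption, mol per second
cell_voltage = 0.3     # assumed operating voltage, volts

current_A = electrons * F * glucose_flux
power_uW = current_A * cell_voltage * 1e6
print(f"current ~ {current_A * 1e6:.0f} uA, power ~ {power_uW:.0f} uW")
```

With these assumed numbers the output lands in the tens of microwatts, the same order of magnitude as the figures reported, which is all a back-of-the-envelope check like this can show.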
Location, Location, Location
One of the groundbreaking aspects of this new research is not only that the fuel cells are powered by glucose, but also their placement in the body. Before this study, any research done using glucose fuel cells relied on blood or tissue fluid. This research suggested using cerebrospinal fluid, the sugar-rich fluid that surrounds the brain. One reason is that this fluid contains essentially no cells that would stimulate an immune response. The other reason is that it is so rich in glucose. Due to the relatively small amount of glucose needed to power these fuel cells, no adverse effects are expected to occur in the brain.
Research like this is very encouraging, especially for those who have lost the use of their limbs due to paralysis. However, it may be a few years before we see this research being used in a practical medical setting. If you would like more information, you can read the MIT press release at http://web.mit.edu/newsoffice/2012/glucose-fuel-cell-0612.html, or for a more technical treatment you can find the published study at http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0038436.
Book Nook: The Great Railroad Revolution-the History of Trains in America, by Christian Wolmar
The development of the railroad system in America was instrumental in the expansion of the nation that took place during the 19th Century. Without the railroads things might have turned out rather differently.
Christian Wolmar has written extensively about railroads. In "The Great Railroad Revolution," Wolmar's scintillating history of the development of the railroad system in America, readers will discover how this crucial expansion of railroads took place.
At the peak of the railroads in the USA there were over 250,000 miles of track in service. After World War One the railroads began to fall into decline. By 1960 railroad passenger service began to be drastically curtailed in the United States as the interstate highway system and the automobile sounded the death knell for the once extensive passenger train service that had served this country since the mid 19th Century.
Christian Wolmar is very engaged with transportation issues and he is a wonderful guy to interview. Enjoy!
Optic neuritis is inflammation of the optic nerve, which lies at the back of the eye and carries visual information from the eye to the brain. Optic neuritis may cause partial or total loss of vision, usually in one eye, and is often associated with pain when the eye moves.
When optic neuritis causes partial vision loss, effects may include blurred or dim vision and colors that appear faded. Symptoms of optic neuritis usually develop over a period of a few days to a week and stabilize for several weeks or months. In many cases vision then improves on its own. If not, steroid treatment usually works to relieve the inflammation.
Optic neuritis can be linked with other neurological and inflammatory conditions, such as multiple sclerosis.
The Caribbean basin is home to some of the most complex interactions in recent history among previously diverged human populations. Here, we investigate the population genetic history of this region by characterizing patterns of genome-wide variation among 330 individuals from three of the Greater Antilles (Cuba, Puerto Rico, Hispaniola), two mainland (Honduras, Colombia), and three Native South American (Yukpa, Bari, and Warao) populations. We combine these data with a unique database of genomic variation in over 3,000 individuals from diverse European, African, and Native American populations. We use local ancestry inference and tract length distributions to test different demographic scenarios for the pre- and post-colonial history of the region. We develop a novel ancestry-specific PCA (ASPCA) method to reconstruct the sub-continental origin of Native American, European, and African haplotypes from admixed genomes. We find that the most likely source of the indigenous ancestry in Caribbean islanders is a Native South American component shared among inland Amazonian tribes, Central America, and the Yucatan peninsula, suggesting extensive gene flow across the Caribbean in pre-Columbian times. We find evidence of two pulses of African migration. The first pulse—which today is reflected by shorter, older ancestry tracts—consists of a genetic component more similar to coastal West African regions involved in early stages of the trans-Atlantic slave trade. The second pulse—reflected by longer, younger tracts—is more similar to present-day West-Central African populations, supporting historical records of later transatlantic deportation. Surprisingly, we also identify a Latino-specific European component that has significantly diverged from its parental Iberian source populations, presumably as a result of small European founder population size. We demonstrate that the ancestral components in admixed genomes can be traced back to distinct sub-continental source populations with far greater resolution than previously thought, even when limited pre-Columbian Caribbean haplotypes have survived.
Latinos are often regarded as a single heterogeneous group, whose complex variation is not fully appreciated in several social, demographic, and biomedical contexts. By making use of genomic data, we characterize ancestral components of Caribbean populations on a sub-continental level and unveil fine-scale patterns of population structure distinguishing insular from mainland Caribbean populations as well as from other Hispanic/Latino groups. We provide genetic evidence for an inland South American origin of the Native American component in island populations and for extensive pre-Columbian gene flow across the Caribbean basin. The Caribbean-derived European component shows significant differentiation from parental Iberian populations, presumably as a result of founder effects during the colonization of the New World. Based on demographic models, we reconstruct the complex population history of the Caribbean since the onset of continental admixture. We find that insular populations are best modeled as mixtures absorbing two pulses of African migrants, coinciding with the early and maximum activity stages of the transatlantic slave trade. These two pulses appear to have originated in different regions within West Africa, imprinting two distinguishable signatures on present-day Afro-Caribbean genomes and shedding light on the genetic impact of the slave trade in the Caribbean.
Citation: Moreno-Estrada A, Gravel S, Zakharia F, McCauley JL, Byrnes JK, Gignoux CR, et al. (2013) Reconstructing the Population Genetic History of the Caribbean. PLoS Genet 9(11): e1003925. doi:10.1371/journal.pgen.1003925
Editor: Eduardo Tarazona-Santos, Universidade Federal de Minas Gerais, Brazil
Received: May 7, 2013; Accepted: September 5, 2013; Published: November 14, 2013
Copyright: © 2013 Moreno-Estrada et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Funding: This project was supported by NIH grant 1R01GM090087 to ERM and CDB, NSF grant DMS-1201234 to CDB, the National Institute on Minority Health and Health Disparities (P60MD006902) to EGB, and NIH Training Grant T32 GM007175 to CRG. This work was also partially supported by an award from the Stanley J. Glaser Foundation to JLM and ERM, and by the George Rosenkranz Prize for Health Care Research in Developing Countries awarded to AME. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
Competing interests: JKB is an employee of Ancestry.com. CDB is on the Scientific Advisory Board of Ancestry.com, 23andMe's “Roots into the Future” project, and Personalis, Inc. He is on the medical advisory board of Invitae and Med-tek. None of these entities played any role in the project or research results reported here.
Genomic characterization of diverse human populations is critical for enabling multi-ethnic genome-wide studies of complex traits. Genome-wide data also affords reconstruction of population history at finer scales, shedding light on evolutionary processes shaping the genetic composition of peoples with complex demographic histories. This genetic reconstruction is especially relevant in recently admixed populations from the Americas. Native peoples throughout the American continent experienced a dramatic demographic change triggered by the arrival of Europeans and the subsequent African slave trade. Important progress has been made to characterize genome-wide patterns of these three continental-level ancestral components in admixed populations from the continental landmass and other Hispanic/Latino populations, including recent genotyping and sequencing studies involving Puerto Rican samples. However, no genomic survey has focused on multiple populations of Caribbean descent, and critical questions remain regarding their recent demographic history and fine-scale population structure. Several factors distinguish the Antilles and the broader Caribbean basin from the rest of North, Central, and South America, resulting in a unique territory with particular dynamics impacting each of its ancestral components.
First, native pre-Columbian populations suffered dramatic population bottlenecks soon after contact. This poses a challenge for reconstructing population genetic history because extant admixed populations have retained a limited proportion of the native genetic lineages. Second, it is widely documented that the initial encounter between Europeans and Native Americans, such as the first voyages of Columbus, took place in the Caribbean before involving mainland populations. However, it remains unclear whether the earlier onset of admixture in the Caribbean translates into substantial differences in the European genetic component of present-day admixed Caribbean genomes, compared to other Hispanic/Latino populations impacted by later, and probably more numerous, waves of European migrants. Third, the Antilles and surrounding mainland of the Caribbean were the initial destination for much of the trans-Atlantic slave trade, resulting in admixed populations with higher levels of African ancestry compared to most inland populations across the continent. However, the sub-continental origins of African populations that contributed to present-day Caribbean genomes remain greatly under-characterized.
Disentangling the origin and interplay among ancestral components during the process of admixture enhances our knowledge of Caribbean populations and populations of Caribbean descent, informing the design of next-generation medical genomic studies involving these groups. Here, we present SNP array data for 251 individuals of Caribbean descent sampled in South Florida using a parent-offspring trio design and 79 native Venezuelans sampled along the Caribbean coast. The family-based samples include individuals with grandparents of either Cuban, Haitian, Dominican, Puerto Rican, Colombian, or Honduran descent. The 79 native Venezuelan samples are of Yukpa, Warao, and Bari tribal affiliation. We construct a unique database which includes public and data access committee-controlled data on genomic variation from over 3,000 individuals including HapMap, 1000 Genomes, and POPRES populations, and African and Native American SNP data from diverse sub-continental populations employed as reference panels. We apply admixture deconvolution methods and develop a novel ancestry-specific PCA method (ASPCA) to infer the sub-continental origin of haplotypes along the genome, yielding a finer-resolution picture of the ancestral components of present-day Caribbean and surrounding mainland populations. Additionally, by analyzing the tract length distribution of genomic segments attributable to distinct ancestries, we test demographic models of the recent population history of the Greater Antilles and mainland populations since the onset of inter-continental admixture.
Population structure of the Caribbean
To characterize population structure across the Antilles and neighboring mainland populations, we combined our genotype data for the six Latino populations with continental population samples from western Africa, Europe, and the Americas, as well as additional admixed Latino populations (see Table S1). To maximize SNP density, we initially restricted our reference panels to representative subsets of populations with available Affymetrix SNP array data (Figure 1A). Using a common set of ∼390 K SNPs, we applied both principal component analysis (PCA) and an unsupervised clustering algorithm, ADMIXTURE, to explore patterns of population structure. Figure 1B shows the distribution in PCA space of each individual, recapitulating clustering patterns previously observed in Hispanic/Latino populations: Mexicans cluster largely between European and Native American components, Colombians and Puerto Ricans show three-way admixture, and Dominicans principally cluster between the African and European components. Ours is the first study to characterize genomic patterns of variation from (1) Hondurans, which we show have a higher proportion of African ancestry than Mexicans, (2) Cubans, which show extreme variation in ancestry proportions ranging from 2% to 78% West African ancestry, and (3) Haitians, which showed the largest average proportion of West African ancestry (84%). Additional clustering patterns obtained from higher PCs are shown in Figure S1.
A) Areas in red indicate countries of origin of newly genotyped admixed population samples and blue circles indicate new Venezuelan (underlined) and other previously published Native American samples. B) Principal Component Analysis and C) ADMIXTURE clustering analysis using the high-density dataset containing approximately 390 K autosomal SNP loci in common across admixed and reference panel populations. Unsupervised models assuming K = 3 and K = 8 ancestral clusters are shown. At K = 3, Caribbean admixed populations show extensive variation in continental ancestry proportions among and within groups. At K = 8, sub-continental components show differential proportions in recently admixed individuals. A Latino-specific European component accounts for the majority of the European ancestry among Caribbean Latinos and is exclusively shared with Iberian populations within Europe. Notably, this component is different from the two main gradients of ancestry differentiating southern from northern Europeans. Native Venezuelan components are present in higher proportions in admixed Colombians, Hondurans, and native Mayans.
We used the program ADMIXTURE to fit a model of admixture in which an individual's genome is composed of sites from up to K ancestral populations. We explored K = 2 through 15 ancestral populations (Figure S2) to investigate how assumptions regarding K impact the inference of population structure. Assuming a K = 3 admixture model, population admixture patterns are driven by continental reference samples with no continental subdivision (Figure 1C, top panel). However, higher Ks show substantial substructure in all three continental components. Log likelihoods for successively increasing levels of K continue to increase substantially as K increases (Figure S3a), which is not unexpected since higher values of K add more parameters to the model (thereby improving the fit). Using cross-validation we found that K = 7 and K = 8 have the lowest predicted error (Figure S3b); thus, we focused on these two models.
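A minimal sketch of such a cross-validation sweep follows, assuming a PLINK-format input file (the name "caribbean.bed" is hypothetical) and the ADMIXTURE binary on the PATH; the regular expression matches the "CV error (K=k): x" line that ADMIXTURE writes to its log.

```python
import re
import subprocess

cv_error = {}
for k in range(2, 16):
    # --cv enables k-fold cross-validation; the error appears in the log.
    log = subprocess.run(["admixture", "--cv", "caribbean.bed", str(k)],
                         capture_output=True, text=True).stdout
    hit = re.search(r"CV error \(K=\d+\): ([0-9.]+)", log)
    if hit:
        cv_error[k] = float(hit.group(1))

best_k = min(cv_error, key=cv_error.get)
print(f"Lowest cross-validation error at K = {best_k}")
```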
The first sub-continental components that emerge are represented by South American population isolates, namely the three Venezuelan tribes of Yukpa, Warao, and Bari. At higher-order Ks, we recapitulate the well-documented North-to-South American axis of clinal genetic variation described by us and others, as Mesoamerican (Maya/Nahua) and Andean (Quechua/Aymara) populations are assigned to different clusters (Figure S2). Interestingly, Mayans are the only group showing substantially higher contributions from the native Venezuelan components (Figure 1C, bottom panel). Both Mesoamerican and Andean Native American samples contain considerable amounts of European ancestry, due to post-Columbian admixture. Above K = 7, we observe a North-to-South European differentiation, which is consistent with previous analyses. Surprisingly, we observe another European-specific component emerge as early as K = 5 and remain constant through K = 15 (Figure S2). This component accounts for the majority of the Caribbean Latinos' European ancestry, and it only appears in Mediterranean populations, including Italy, Greece, Portugal, and Spain at intermediate proportions. Throughout this paper, we refer to this component as the “Latino European” component, and it can be seen clearly in Figure 1C (“black” bars represent the Latino European component, “red” bars represent the “Northern European”, and pink the “Mediterranean” or “Southern European” component). At K = 8, when the clinal gradient of differentiation between Southern and Northern Europeans appears, the Latino European component is seen only in low proportions in individuals from Portugal and Spain, whereas it is the major European component among Latinos (Figure 1C, bottom panel).
To identify possible sex-biased gene flow in Caribbean populations, we compared the ancestry proportions of the X chromosome vs. the autosomes in each population. We observe a significant skew towards a higher proportion of Native American ancestry on the X chromosome than on the autosomes (p-value<10⁻⁵, Figure S4), consistent with previous reports on Hispanic/Latino populations. Interestingly, whereas some insular populations such as Cubans and Puerto Ricans also showed a significant increase of African ancestry on the X chromosome (p-value<0.01), the average difference in mainland populations was not significant (p-value>0.05, Figure S4). Overall, we find evidence of a high Native American, and to a lesser extent African, female contribution in Caribbean populations.
Additionally, our data show a strong signature of assortative mating based on genetic ancestry among Caribbean Latinos, as suggested by previous studies. In particular, we see a strong correlation between maternal and paternal ancestry proportions (Figure S5). To assess significance, we compared the correlation of ancestry assignments among parent pairs to 100,000 permuted male-female pairs for each continental ancestry. All p-values were highly significant (p<0.00001, Table S2). It should be noted that these tests are not independent since the three components of ancestry by definition must sum to one. Further, apparent assortative mating could be due to random mating within structured sub-populations. To control for this, we performed permutations within countries of origin, and found significant correlations among individuals from every single population (p-value<0.05), except for Haiti. Although Haitians do show the same trend, with only two parent pairs, it is nearly impossible to assess significance (Table S2).
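A sketch of this kind of permutation test, using simulated stand-ins for the per-parent ancestry fractions (the actual analysis used 100,000 permutations per continental ancestry):

```python
import numpy as np

rng = np.random.default_rng(42)

def permutation_pvalue(maternal, paternal, n_perm=100_000):
    """One-sided p-value: is the observed correlation higher than expected
    under random re-pairing of mothers and fathers?"""
    observed = np.corrcoef(maternal, paternal)[0, 1]
    exceed = 0
    for _ in range(n_perm):
        if np.corrcoef(maternal, rng.permutation(paternal))[0, 1] >= observed:
            exceed += 1
    return (exceed + 1) / (n_perm + 1)

# Simulated African-ancestry fractions for 60 parent pairs, with correlation
# built in to mimic ancestry-assortative mating.
mothers = rng.beta(2, 5, size=60)
fathers = np.clip(0.6 * mothers + 0.4 * rng.beta(2, 5, size=60), 0, 1)
print(permutation_pvalue(mothers, fathers, n_perm=10_000))
```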
Demographic inference since the onset of admixture
An overview of our analytic strategy for characterizing admixed genomes is presented in Figure 2. Due to meiotic recombination, the correlation in ancestry among founder chromosomes is broken down over time. As a consequence, the length of tracts assigned to distinct ancestries in admixed genomes is informative of the time and mode of migration. To explore the population genetic history of the Caribbean since European colonization, we considered the length distribution of continuous ancestry tracts in each of the six population samples. First, we estimated local ancestry along the genome using an updated version of PCAdmix, which was trained using trio-phased data from the admixed individuals and three continental reference populations. Next, we characterized the length distribution of unbroken African, European, and Native American ancestry tracts along each chromosome for each population. Finally, we applied the extended space Markov model implemented in Tracts to compare the observed data with predictions from different demographic models considering various migration scenarios.
The starting point consists of genome-wide SNP data from family trios. Unrelated individuals are used to estimate global ancestry proportions with ADMIXTURE, whereas full trios are selected for BEAGLE phasing and PCA-based local ancestry estimation using continental reference samples. From here, two orthogonal analyses are performed: 1) Ancestry-specific regions of the genome are masked to separately apply PCA to European, African, and Native American haplotypes combined with large sub-continental reference panels of putative ancestral populations. We refer to this methodology as ancestry-specific PCA (ASPCA) and the code is packaged into the software PCAmask. 2) Continental-level local ancestry calls are used to estimate the tract length distribution per ancestry and population, which is then leveraged to test different demographic models of migration using Tracts software.
The simplest model considers a single pulse of migration from each source population, allowing the admixture process to begin with Native American and European chromosomes, followed by the introduction of African chromosomes. In such a scenario, each population contributes migrants at a discrete period in time, and the average length of ancestry tracts is expected to decrease with time after admixture, resulting in an exponential decay in the abundance of tracts as a function of tract length. Alternative models include a second pulse of either European or African segments migrating into the already-admixed gene pool. Allowing for continuous or repeated migration typically results in a concave log-scale distribution, caused by the increase of longer tracts after the second migration event. Table 1 and Figure 3 summarize the results of the best-fitting migration models for each population based on Bayesian Information Criterion (BIC) comparisons, and Figure S6 shows the full results of all models tested. We observed that multiple pulses of admixture exhibited a better BIC in all cases.
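The intuition behind these comparisons can be illustrated with the single-pulse case, where a pulse T generations ago yields tract lengths that are approximately exponential with mean on the order of 100/T cM (ignoring chromosome ends and assuming a small admixture fraction). The sketch below fits that caricature and computes its BIC; it is an assumption-laden toy, not a reimplementation of the Markov model in Tracts.

```python
import numpy as np

def exp_loglik(tracts_cM, rate):
    # Log-likelihood of an exponential tract-length model (rate per cM).
    return np.sum(np.log(rate) - rate * tracts_cM)

def bic(loglik, n_params, n_obs):
    return n_params * np.log(n_obs) - 2.0 * loglik

# Toy tract lengths consistent with a pulse ~16 generations ago.
tracts = np.random.default_rng(1).exponential(scale=100 / 16, size=500)

rate_hat = 1.0 / tracts.mean()      # maximum-likelihood exponential rate
T_hat = 100.0 * rate_hat            # implied generations since the pulse
print(f"T ~ {T_hat:.1f} generations, single-pulse BIC ="
      f" {bic(exp_loglik(tracts, rate_hat), 1, len(tracts)):.1f}")
```

Comparing such BIC values across one-pulse and two-pulse models is what lets the longer tracts contributed by a recent second migration tip the balance toward multi-pulse histories.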
We used the length distribution of ancestry tracts within each population from A) insular and B) mainland Caribbean countries of origin. Scatter data points represent the observed distribution of ancestry tracts, and solid-colored lines represent the distribution from the model, with shaded areas indicating 68.3% confidence intervals. We used Markov models implemented in Tracts to test different demographic models for best fitting the observed data. Insular populations are best modeled when allowing for a second pulse of African ancestry, and mainland populations when a second pulse of European ancestry is allowed. Admixture time estimates (in number of generations ago), migration events, volume of migrants, and ancestry proportions over time are given for each population under the best-fitting model. The estimated age for the onset of admixture among insular populations is consistently older (i.e., 16–17) compared to that among mainland populations (i.e., 14).
The best-fit model for Colombians and Hondurans involves admixture between Native Americans and Europeans starting 14 generations ago, followed by a second pulse of European ancestry starting 12 and 5 generations ago, respectively. Of note is that between the first and second pulse of migration in Colombians, the proportion of European ancestry increased from 12.5% to 75% in two generations, implying that the European segments in today's Colombians date back to European gene flow happening in a short period of time; thus, tracing their ancestry to a smaller number of European founders compared to other Latino populations.
In contrast with mainland population samples, the best-fit model for all four populations from the Caribbean islands involves older time estimates of the initial contact between Native Americans and Europeans. Namely, 17 generations ago for Cubans and 16 generations ago for Puerto Ricans, Dominicans, and Haitians. Historical records state that the first European colonies in the Antilles were established soon after the initial contact in 1492; that is, ∼500 years ago or 16.6 generations ago (considering 30 years per generation), in excellent agreement with our time estimates. Another major distinction between mainland and Caribbean populations is that the best model for each of the latter involves a second pulse of African ancestry, occurring seven to five generations ago, with higher migration rates in Haitians and Dominicans, followed by Cubans and Puerto Ricans.
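The generation arithmetic used throughout can be checked directly; the reference year below is an assumption chosen only to reproduce the ∼500-year figure.

```python
def generations_since(event_year, reference_year=1992, years_per_generation=30):
    """Convert calendar years to generations under the 30-year interval."""
    return (reference_year - event_year) / years_per_generation

print(f"{generations_since(1492):.1f} generations since 1492")  # ~16.7
```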
Sub-continental ancestry of admixed genomes
The genomes of admixed populations contain information about both continental and sub-continental genetic ancestry. To explore within-continent population structure, we performed PCA on genomic segments assigned to Native American, African, or European ancestry. Because the masking out of the other ancestries results in large amounts of missing data, we implemented a novel variation of PCA that allows us to perform the analysis on the remaining sites alone. Throughout this paper, we refer to this approach as ancestry-specific PCA (ASPCA), and the mathematical details are described in Text S1. We applied this methodology for analyzing phased genomic segments of inferred Native American, European, and African continental ancestry together with sub-continental reference panels of parental populations (see diagram in Figure 2). Our implementation is analogous to the subspace PCA (ssPCA) approach by Johnson et al., but it can take advantage of phased data, allowing us to include segments of the genome that are heterozygous for ancestry. In the presence of recent admixture, chromosomal ancestry breakpoints dramatically reduce the proportion of the genome that is homozygous for a given ancestry. Therefore, relying on genotypes and restricting to loci estimated to have two copies of a certain ancestry could severely compromise the resolution of the analysis of admixed genomes. Our haplotype-based implementation of the algorithm is packaged into the software PCAmask and is available at http://bustamantelab.stanford.edu. Details on the samples used are available in Materials and Methods and in Text S1.
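In spirit, the approach computes principal components from a covariance matrix in which each pairwise entry is averaged only over sites where both haplotypes carry the ancestry of interest. The toy sketch below illustrates that idea; it is not the PCAmask algorithm itself, and the input shapes are assumptions.

```python
import numpy as np

def toy_aspca(X, keep, n_pc=2):
    """X: (n_haplotypes, n_snps) allele matrix; keep: boolean mask that is
    True where a site was assigned the target ancestry. Covariance between
    two haplotypes is averaged over sites unmasked in both."""
    n_hap, n_snp = X.shape
    mu = np.array([X[keep[:, j], j].mean() if keep[:, j].any() else 0.0
                   for j in range(n_snp)])
    C = X - mu
    cov = np.zeros((n_hap, n_hap))
    for i in range(n_hap):
        for j in range(i, n_hap):
            both = keep[i] & keep[j]
            if both.any():
                cov[i, j] = cov[j, i] = (C[i, both] * C[j, both]).mean()
    vals, vecs = np.linalg.eigh(cov)
    top = np.argsort(vals)[::-1][:n_pc]
    return vecs[:, top] * np.sqrt(np.maximum(vals[top], 0.0))
```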
Native American ancestral components
Our initial structure analysis was based on our high-density dataset (i.e., ∼390 K SNPs, see Table S1), and was thus limited to ancestral populations with available Affymetrix SNP array data (i.e., two Mesoamerican, two Andean, and three Venezuelan native populations). To explore possible relationships with additional Native American populations, we expanded our reference panel by combining our data with Illumina 650 K data for 493 individuals from 52 indigenous groups from throughout the Americas. Although this analysis has fewer SNPs (i.e., ∼30 K SNPs), it allows us to resolve within-continent population structure around the Caribbean in much greater geographic detail.
We applied the ASPCA approach described above to the Native American segments of admixed individuals with >3% global Native American ancestry together with the full reference panel of ancestral populations (Figure S7). ASPC1 separates the northernmost populations of the continent from the rest, while the Brazilian Surui and Central American Cabecar define the extremes of ASPC2. Most Native American haplotypes from the admixed genomes fall along this second axis of variation, forming two overlapping population clusters: one represented primarily by Colombians and Hondurans, and the other by Cubans, Dominicans, and Puerto Ricans (no Haitian haplotypes were included due to low levels of Native American ancestry). Figure 4A shows a closer view, in which Colombians and most Hondurans cluster closer to Chibchan-speaking groups from Western Colombia and Central America, including the Kogi, Embera, and Waunana. In contrast, most Caribbean islanders cluster with Amazonian groups from Eastern Colombia, Brazil, and Guiana. The closest ancestral populations include the Guahibo, Piapoco, Ticuna, Palikur, and Karitiana, among others, some of which are settled along fluvial territories of the Orinoco-Rio Negro basin. This location may have facilitated communication from the rainforest to the coast, explaining the relationship with Caribbean native components.
A) Ancestry-specific PCA analysis restricted to Native American segments from admixed Caribbean individuals (colored circles) and a reference panel of indigenous populations (gray symbols) from Reich et al., grouped by sampling location. Darker symbols denote countries of origin with populations clustering closer to our Caribbean samples. Indigenous Colombian populations were classified into East and West of the Andes to ease the interpretation of their differential clustering in ASPCA. Population labels are shown for samples defining PC axes and representative clusters within locations. B) ADMIXTURE model for K = 16 ancestral clusters considering additional Latino samples, a representative subset of African and European source populations, and 52 Native American populations from Reich et al., plus three additional Native Venezuelan tribes genotyped for this project. Vertical thin bars represent individuals and white spaces separate populations. Native American populations from Reich et al. are grouped according to linguistic families reported therein. Labels are shown for the populations representing the 12 Native American clusters identified at K = 16. Clusters involving multiple populations are identified by those with the highest membership values. C) Map showing the major indigenous components shared across the Caribbean basin as revealed by ADMIXTURE at K = 16 from B). Namely, Mesoamerican (blue), Chibchan (yellow), and South American (green). Colored bars represent individuals and their approximate sampling locations. Bars pooling genetically similar individuals from more than one population are plotted from left to right following north to south coordinates as listed by population labels. Guarani, Wichi, and Chane from north Argentina are pooled with Arara but only the location of the latter is shown to allow us to provide a zoomed view of the Caribbean region (see Reich et al. for the full map of sampling locations). The thick arrow represents schematically the most accepted origin of the Arawak expansion from South America into the Great Antilles around 2,500 years ago according to linguistic and archaeological evidence. Asterisks next to population labels denote Arawakan populations included in our reference panel. The thin arrow indicates gene flow between South America and Mesoamerica, possibly following a coastal or maritime route, accounting for the Mayan mixture and supporting pre-Columbian back migrations across the Caribbean.
Interestingly, the indigenous component of insular Caribbean samples seems to be shared across the different islands, suggesting gene flow across the Caribbean basin in pre-Columbian times. To explore this possibility in more detail, we performed a model-based clustering analysis using the full reference panel of 52 Native American populations from Reich et al. in addition to our three native Venezuelan populations. Individual admixture proportions from K = 2 through 20 are given in Figure S8. Focusing on Native American components, the first sub-continental signal (at K = 4) comprised a Chibchan component mainly represented by the Cabecar from Costa Rica and the Bari from Venezuela. Higher-order clusters pulled out Amazonian population isolates such as the Surui and Warao, as well as northern populations including the Eskimo-Aleut and Pima, in agreement with the outliers detected in our ASPCA analysis (Figure S7). Interestingly, from K = 5 through 10, the Chibchan component is shared at nearly 100% with the Yukpa sample located near the Venezuelan coast, and at nearly 20% with Mayans from the Yucatan peninsula and Guatemala (Figure S8). Higher-order clusters maintain the connection between Mayans and South American components. For example, at K = 16 (the model with the lowest cross-validation error; Figure S9b), an average of 35% of the genome in Mayans is shared with a mixed South American component mainly represented by the Ticuna, Piapoco, Guahibo, Arhuaco, Kogi, Embera, Palikur, and Wichi, among others (Figure 4B and C). The presence of considerable proportions of Central and South American components in the Mayan sample is indicative of possible “back” migrations from Central America and northern South America into the Yucatan peninsula, revealing active gene flow across the Caribbean, probably following a coastal or maritime route. This observation is in agreement with our ASPCA results from admixed genomes and reinforces the notion of an expansion of South American-based Native American components across the Caribbean basin.
European ancestral components
We performed ASPCA analysis restricted to European segments of admixed individuals with >25% of European ancestry and a panel of European source populations, including 1,387 individuals from Europe sampled as part of the POPRES project, as well as additional Iberian samples from Galicia, Andalusia, and the Basque country in Spain. The combined dataset included 2,882 European haplotypes and 255 haplotypes of European ancestry from the admixed populations. Figure 5 shows the first two PCs, where, as reported previously, the reference samples recapitulate a map of Europe. While most of the additional Iberian samples cluster together with the POPRES individuals sampled as Portuguese and Spanish, the Basques cluster separately from the centroid of most Iberian samples. The Basques are known for their historical and linguistic isolation, which could explain their genetic differentiation from the main cluster due to drift. Given the known Iberian origin of the first European settlers arriving into the Caribbean and surrounding territories of the New World, one would expect that European blocks derived from admixed Latino populations should cluster with other European haplotypes from present-day Iberians. Indeed, our Latino samples aggregate in a well-defined cluster that overlaps with the cluster of samples from the Iberian Peninsula (i.e., Portugal and Spain). However, we observed that the centroid is substantially shifted with respect to the Iberian cluster (bootstrap p-value<10⁻⁴, see Materials and Methods), suggesting the possibility of a bottleneck and drift impacting the European haplotypes of Latinos.
ASPCA is applied to haploid genomes with >25% European ancestry derived from insular Caribbean (black symbols) and mainland populations (gray symbols) combined with a reference panel (colored labels) of 1,387 POPRES European samples with four grandparents from the same country, and 54 additional Iberian individuals (in yellow) from Galicia, Andalusia, and the Basque country. PC1 values have been inverted and axes rotated 16 degrees counterclockwise to approximate the geographic orientation of population samples over Europe. Population codes are detailed in Table S1 and regions within Europe are labeled as in the source publication. Inset map: countries of origin for POPRES samples color-coded by region (areas not sampled in gray and Switzerland in intermediate shade of green to denote shared membership with EUR W, EUR C, and EUR S). Most Latino-derived European haplotypes cluster around the Iberian cluster. One of the two Haitian individuals included in the analysis clustered with French-speaking Europeans (black arrow), in agreement with the colonial history of Haiti and illustrating the fine-scale resolution of our ASPCA approach.
Importantly, when we applied ASPCA using the exact same reference panel of European samples but analyzing Mexican haplotypes of European ancestry (Moreno-Estrada, Gignoux et al., in preparation), we did not observe a deviated clustering pattern from the Iberian cluster: the effect is much weaker and not significant (bootstrap p-value = 0.099, see Figure S10). Furthermore, the deviation of the European segments of Mexican individuals from the distribution of the rest of Iberian samples is even smaller than the deviation of the Portuguese from the Spanish samples. We further evaluated whether the dispersion of the different subpopulations within the Caribbean cluster follows particular patterns along ASPC2, the axis driving the deviation from the Iberian centroid. We observed that Colombians and Hondurans tend to account for lower (more deviated) ASPC2 values compared to Cubans, Dominicans, and Puerto Ricans (Figure S11), suggesting a mainland versus insular population differentiation. We performed a Wilcoxon rank test to contrast ASPC2 for mainland (Colombia and Honduras) versus island (Cuba, Dominican Republic and Puerto Rico) populations, resulting in a highly significant p-value (1.5×10⁻¹⁵). Because >25% of European ancestry was required for inclusion in ASPCA, only two Haitian haplotypes were analyzed, and thus these were not included in the statistical analysis. Nonetheless, it is noteworthy that one of them clusters with the French, in agreement with historical and linguistic evidence regarding European settlements on the island (see arrow on Figure 5).
Among European populations, Iberians also have the highest proportion of identical by descent (IBD) segments that are shared with Latino populations, as measured by a summed pairwise IBD statistic that is informative of the total amount of shared DNA between pairs of populations (see Materials and Methods and Figure S12). To explore the distribution of IBD sharing within continental groups, we considered Caribbean Latinos and Europeans separately by summing the cumulative amount of DNA shared IBD between each pair of individuals within each group. If European segments from Latino populations derive from a reduced number of European ancestors, then IBD sharing should be higher among Caribbean individuals compared to Europeans. Indeed, we observed a higher number of pairs sharing larger total IBD segment lengths among Latino individuals than among Europeans (Figure S13). Within-population cryptic relatedness is also compatible with increased IBD sharing. However, this is more likely to occur between individuals from the same subpopulation (e.g., COL-COL) rather than individuals from geographically separated subpopulations (e.g., COL-PUR). For this reason, we repeated the analysis, excluding within-population pairs of Latino individuals, and compared the IBD distribution to that of Iberian source populations (i.e., Spanish and Portuguese). Once again, we observed an increased proportion of IBD sharing among Latinos, arguing for a shared founder effect (Figure S13).
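Summing pairwise IBD reduces to accumulating shared segment lengths per pair of individuals. A minimal sketch, assuming segment calls of the form (individual A, individual B, length in cM) from an upstream IBD detector:

```python
from collections import defaultdict

def summed_pairwise_ibd(segments):
    """Total cM shared identical-by-descent for every pair of individuals.
    segments: iterable of (id_a, id_b, length_cM) tuples."""
    totals = defaultdict(float)
    for id_a, id_b, length_cM in segments:
        totals[frozenset((id_a, id_b))] += length_cM
    return totals

# Hypothetical calls; sample IDs are placeholders.
calls = [("COL1", "PUR3", 7.2), ("COL1", "PUR3", 3.1), ("CUB2", "DOM5", 12.4)]
for pair, total in summed_pairwise_ibd(calls).items():
    print(sorted(pair), f"{total:.1f} cM")
```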
These results are in agreement with our cluster-based analysis focused on global ancestry proportions, where the European ancestry of Latinos is dominated by a shared Latino-specific component differentiated from both southern and northern European components, although shared to some extent with Spanish and Portuguese (Figure 1C). Bottlenecked populations may exhibit differentiation from their parental gene pool due to loss of genetic diversity and stochastic shifts in allele frequencies. One way of quantifying the extent of genetic drift is to compare FST estimates among the K = 8 ancestral clusters from Figure 1C. In the absence of drift, we would expect the southern-derived Latino component and the southern European component to show a very low level of FST. However, we observe an FST = 0.021 (Table S3). To put this into perspective, the FST of southern vs. northern Europe is FST = 0.02, meaning that the differentiation of the Latino-specific component with respect to southern Europeans is at least as high as the north-south differentiation within Europe. This observation was replicated when including additional Latino and ancestral populations (Figure S8). Given the increased number of divergent clusters, we focused on K = 18 through 20, in which all sub-continental European components were jointly detected. In this case, the Latino-specific component shows further fragmentation into two components: one predominantly shared among insular Caribbean samples and the other among mainland Latinos. The FST value for southern versus northern European differentiation was 0.039, while values for southern versus insular (0.041) or mainland Latinos (0.04) were slightly inflated (Table S4), supporting the notion of additional differentiation impacting the European component of present-day admixed Latinos.
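For reference, FST values like those quoted can be computed from per-SNP allele frequencies. The sketch below uses Hudson's estimator in the ratio-of-averages form; this is a standard choice, though the text does not state which estimator was used, and the frequencies here are simulated.

```python
import numpy as np

def hudson_fst(p1, p2, n1, n2):
    """Hudson's FST (ratio of averages) from allele frequencies p1, p2 of two
    groups with haploid sample sizes n1, n2; arrays run over SNPs."""
    num = ((p1 - p2) ** 2
           - p1 * (1 - p1) / (n1 - 1)
           - p2 * (1 - p2) / (n2 - 1))
    den = p1 * (1 - p2) + p2 * (1 - p1)
    return num.sum() / den.sum()

rng = np.random.default_rng(3)
p_south = rng.uniform(0.05, 0.95, size=1000)
p_latino = np.clip(p_south + rng.normal(0, 0.08, size=1000), 0.01, 0.99)
print(f"FST ~ {hudson_fst(p_south, p_latino, 200, 200):.3f}")
```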
African ancestral components
The Caribbean region has a complex history of population exchange with the African continent as a result of slave trade practices during European colonialism. Its proximity to the North Atlantic Ocean facilitated nautical contact with the West African coast, increasing the exposure of the local population to slave trade routes and ultimately resulting in genetic admixture between Caribbean and African individuals. We found the proportion of African ancestry to be higher in Caribbean populations compared to those from the mainland (Figure 1C), a finding that is consistent across studies. To explore the sub-continental composition of African segments derived from Caribbean admixed genomes, we performed ASPCA analysis on individuals with more than 25% of African ancestry using a diverse panel of African populations as potential sources (see Table S1). Our first approximation showed no dispersion of Afro-Caribbean haplotypes over PCA space. Instead, they form a relatively tight cluster that overlaps with that of the Yoruba sample from southwestern Nigeria (Figure S14). This is a plausible result, given the extensive historical record supporting a West African origin for the African lineages in the Americas.
However, according to our tract length analysis, there is strong genetic evidence for the occurrence of at least two pulses of African migrants imprinting different genomic signatures in present day admixed Caribbean populations. This result raises the question of whether both pulses involved the same source population during the admixture process. If this were the case, it would easily explain our ASPCA results, where all African haplotypes point to a single source.
Alternatively, if more than one source were involved and if enough mixing occurred since the two pulses, it is possible that what we see in ASPCA is the midpoint of the two source populations, causing the difference to remain undetected by our standard approach (which gives a point estimate averaging the signature of all African blocks along the genome). Hence, we applied a different strategy, in which ASPCA is performed separately for short (thus older) and long (younger) ancestry tracts. For this purpose, we split the African segments of each haploid genome into two categories based on a 50-cM length cutoff and intersected the data with a reference panel of West African populations (Figure 6A). Then, for each individual, we computed assignment probabilities of coming from each of the putative parental populations based on bivariate normal distributions fitted around each PCA cluster (see Materials and Methods, Figure S15). In Figure 6B we present the scaled mean probabilities for long (>50 cM) versus short (<50 cM) African tracts in Puerto Rican individuals. The pattern that emerges reveals that African haplotypes shorter than 50 cM are more likely to have originated from populations in the coastal Northwest region, such as the Mandenka and Brong; whereas longer haplotypes show higher probabilities of coming from populations closer to the Gulf of Guinea and Equatorial West Africa, including Yoruba, Igbo, Bamoun, Fang, and Kongo (see map on Figure 6A). The significant increase in old, short Mandenka tracts when compared to longer, more recent tracts was replicated in other insular Caribbean populations, including Cubans and Dominicans. The Brong also seem to have had a greater contribution deeper in the past, not only in Puerto Ricans, but also in Dominicans, Hondurans, and to a lesser extent in Colombians. In Cubans, the trend is reversed, and the Brong seem to have contributed more to long tracts than to short ones (Figure S16).
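The length-stratified assignment step can be sketched as follows: fit a bivariate normal to each reference population's ASPCA cluster, convert likelihoods into assignment probabilities, and apply this separately to tracts above and below the 50-cM cutoff. Population names and array shapes are placeholders; this follows the bivariate-normal idea described in the text, not the exact pipeline.

```python
import numpy as np
from scipy.stats import multivariate_normal

def assignment_probabilities(tract_pc, reference_pcs):
    """tract_pc: (2,) ASPCA coordinates of one ancestry tract.
    reference_pcs: dict mapping population name -> (n, 2) reference coords."""
    lik = {pop: multivariate_normal(pcs.mean(axis=0), np.cov(pcs.T)).pdf(tract_pc)
           for pop, pcs in reference_pcs.items()}
    total = sum(lik.values())
    return {pop: l / total for pop, l in lik.items()}

def split_by_length(tract_pcs, lengths_cM, cutoff=50.0):
    """Partition tract coordinates (numpy arrays) into short and long sets."""
    return tract_pcs[lengths_cM < cutoff], tract_pcs[lengths_cM >= cutoff]
```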
A) Map of West Africa showing locations of reference panel populations. Samples in black are more likely to represent the origin of short ancestry tracts and those in red of long ancestry tracts, according to B) assignment probabilities for each putative ancestral population of being the source for short (<50 cM in black) and long (>50 cM in red) ancestry tracts. African ancestry tracts for Puerto Ricans are shown and results for all populations are available in Figure S16. C) Proportion of African ancestry of inferred Mandenka origin as a function of block size in the combined set of Caribbean genomes. By running PCAdmix within the previously inferred African segments, we obtained posterior probabilities for Mandenka versus Yoruba ancestry. Overall, we found evidence for a differential origin of the African lineages in present day Afro-Caribbean genomes, with shorter (and thus older) ancestry tracts tracing back to Far West Africa (represented by Mandenka and Brong), and longer tracts (and thus younger) tracing back to Central West Africa.
One caveat of this analysis is that short ancestry tracts are more likely to be misassigned. To rule this out as a source of the signal, we added an intermediate block size category (>5 cM and <50 cM) and repeated the size-based ASPCA analysis. We observed that, despite the signal being somewhat weaker due to less data, a similar trend was observed after excluding extremely short tracts (Figure S16). Finally, we gathered additional evidence by running local ancestry estimation on the African blocks alone to distinguish Mandenka vs. Yoruba ancestry tracts (see Materials and Methods). We then binned all segments of inferred Mandenka ancestry into different block sizes and observed that the proportion of the African ancestry called Mandenka is higher within shorter block sizes and decreases as block size increases (Figure 6C). This result gives additional support for the differential origin of African segments and argues that the signal is not driven by the shortest genomic segments alone; rather, the signal is characterized by a progressive decay of haplotype length from older migrations, as younger segments (of different ancestry) account for the majority of longer African tracts in Caribbean genomes.
Models of admixture for Caribbean and mainland populations
Our results reveal consistent differences in the admixture processes occurring on Caribbean islands as compared to neighboring mainland populations. First, admixture timing estimates are consistently different between these two groups, with admixture starting around 16–17 generations ago in the islands and 14 generations ago in mainland populations. Second, in the Caribbean, we find evidence of a single pulse of Native American ancestry into admixed populations. Since Native American tracts are shorter, on average, than tracts of any other ancestry (and therefore older), this suggests an initial contribution at the time of European contact with limited subsequent contribution, consistent with the rapid decimation of the native population. Mainland populations from Colombia and Honduras, on the other hand, exhibit longer Native American tracts and are best fit by a model with a greater contribution of Native American ancestry. Third, Caribbean populations show evidence of a limited number of European pulse events, suggesting that a limited number of founders contributed disproportionately to the present-day population. Continental populations, on the other hand, show evidence of repeated migration events of European ancestry, consistent with a continuing expansion of Europeans during colonialism. Finally, our data also suggest that multiple pulses of African migration contributed significantly to genetic ancestry in the Caribbean, consistent with records of historical slave trade routes. In contrast, African ancestry tracts in mainland populations are consistent with a more limited influx of African migrants.
The abundance of historical accounts regarding European colonization of the New World facilitates the contrast between written and genetic records. Our models show remarkable agreement with historical records. The earliest European contact in the Americas dates back to 1492, involving the Caribbean island of Hispaniola (today's Dominican Republic and Haiti). First contact dates are upper bounds on the time at which demographically substantial admixture would have taken place. The fact that our admixture timing estimate (i.e., 16–17 generations ago) is so close to first contact emphasizes that the colonization proceeded rapidly, with substantial admixture taking place very quickly, as opposed to it being a more drawn-out process. Later European voyages reached the coasts of Central and South America, so permanent European settlements did not occur in the mainland until the first half of the 16th century, consistent with an approximate difference of two generations between the estimated onset of admixture according to our island and mainland models. Here we have focused on Colombians and Hondurans as population samples from mainland territories with coastal access to the Caribbean, but we have previously reported admixture timing estimates for Mexicans as well, starting 15 generations ago. The settlement of Europeans in mainland Mexican territory is documented to have occurred between 1519 and 1521 (i.e., 27–29 years apart from the first contact in 1492 in the Caribbean); consistent with this, our average estimate for the onset of admixture in the Caribbean differs by one generation from our model based on Mexican data (16 vs. 15 generations, respectively).
South American origin of indigenous components in the Caribbean
In contrast to other regions in the Americas where indigenous peoples are numerous, the genetic characterization of Native American components in the Caribbean required indirect reconstruction via genomic assembly of indigenous ancestry tracts transmitted to extant admixed individuals. By applying ancestry-specific PCA and cluster-based analyses integrating a large number of indigenous groups throughout the Americas, we found that Amazonian populations from South America show the closest relationship with Caribbean indigenous components. This was also observed in a different sample set from the 1000 Genomes Project (Gravel et al., submitted). Despite covering a large geographic area of South America (ranging from eastern Colombia to central Brazil and Guiana), most Amazonian sampled populations cluster together in PCA space, suggesting a common origin. Logical candidates for the origin of the ancestors of Caribbean populations include indigenous coastal groups south of the Lesser Antilles. Here, therefore, we have included three additional tribes from the Venezuelan coast. However, despite their closer geographic location, none of these groups primarily accounted for the indigenous ancestry of the insular Caribbean samples, pointing to an inland origin rather than a coastal one. Nonetheless, our cluster-based analysis revealed that native Venezuelan components do share membership with several Central American indigenous populations, such as the Costa Rican Cabecar, and, to a lesser extent, with Mayan groups from Guatemala and the Yucatan peninsula of present day Mexico, suggesting substantial gene flow across the Caribbean Sea in pre-Columbian times. Archaeological evidence, including the distribution of jade, obsidian, pottery, and other commodities, supports the existence of maritime-based interaction networks between central Mesoamerica, the Isthmo-Colombian area, and northern Venezuela. Our results demonstrate that such long-distance negotiations were accompanied by genetic exchange between previously diverged native populations and give new insight into the dynamics between the inhabitants of the Caribbean basin prior to European contact.
In a recent genomic survey of the relationships between Native American peoples, Reich and colleagues described the Chibchan speakers on both sides of the Panama isthmus as an exception to the simple model of continental colonization involving a southward expansion with sequential population splits and little subsequent gene flow. Instead, Central Americans, such as the Cabecar from Costa Rica, were modeled as a mixture of South and North American ancestry, which the authors reported as evidence for a back-migration from South into Central America. Our findings support these interpretations and also suggest a distant connection between Caribbean Mesoamerica and South American inland territories. Specifically, the fact that Mayans from the Yucatan peninsula share 35% of their genome with the Amazonian Ticuna, Guahibo, and Piapoco, and even with the more distant Paraguayan Guarani and north Argentinian Wichi, supports the expansion of an inland South American component across the Caribbean. For context, it is noteworthy that in ASPCA, the native ancestry tracts of Colombians and Hondurans cluster with geographically closer indigenous tribes, such as Chibchan speakers from western Colombia and Central America.
How do we account, then, for a shared clustering between more distant tribes, mostly of Amazonian origin, and insular Caribbean haplotypes? One possible explanation is that the fluvial nature of most of these settlements (across the Amazon and Orinoco basins) may have facilitated the movement of people to the coast, from which they migrated north through the Lesser Antilles and eventually contributed to Caribbean native components. Our results are consistent with archaeological records suggesting that the ancestors of the indigenous people that Columbus encountered might have come from populations that migrated from the Lower Orinoco Valley around 2.5 to 3 kya.
Additionally, our results align with the classification of languages spoken by pre-Columbian inhabitants of the Caribbean. The Taínos were the major group living in the Greater Antilles and surrounding islands at the time of European contact. Taínos and insular Caribs spoke Arawakan languages, whose geographic distribution across northern South America resembles the distribution of the genetic component shared across multiple Amazonian individuals (Figure 4C). Arawakan-speaking groups in our reference panel include the Piapoco from eastern Colombia, the Palikur from Guiana, and the Chane from northern Argentina, all of which show primary ancestral membership to the Amazonian genetic component (Figure 4C) and cluster together with Native American haplotypes from admixed Caribbean individuals (Figure 4A), supporting a South American origin of the Arawakan expansion into the Caribbean. Although now located far from Amazonia, the Chane are believed to have historically migrated from the Amazon rainforest to the Argentinian Gran Chaco. Neighboring Wichi individuals also show similar genetic memberships and ASPCA clustering patterns, despite belonging to a different linguistic family. Previous genetic studies have also pointed to a South American origin for the Taínos. Based on mitochondrial haplogroups ascertained from pre-Columbian Taíno remains, Lalueza-Fox and colleagues found that only two of the major mtDNA lineages, namely C and D, were present in their sample (N = 27). Given that high frequencies of C and D haplogroups are more common in South American populations, the authors argued for that sub-continent as the homeland of the Taíno ancestors.
Overall, our analysis of indigenous ancestry tracts from extant admixed genomes supports previous linguistic, archaeological, and ancient DNA evidence about the peopling of the Caribbean; furthermore, it points to a greater involvement of inland Amazonian populations during the last migration into the Antilles prior to European contact. Earlier migrations may have occurred (e.g., from Mesoamerica or the Florida peninsula), as pre-ceramic archaeological evidence of human presence in the Greater Antilles dates back more than 7,000 years. However, the fact that the Amazonian component is shared among the indigenous haplotypes from different insular and continental populations supports either a single South American origin of Caribbean settlers or a major population replacement involving a more recent migration of agriculturalists from inland South America.
Founder effect in the European lineage of admixed Latinos
We find genomic patterns compatible with the effect of a founder event in the ancestral European population of present-day admixed Latinos. Supporting evidence includes the following: 1) a Latino-specific European component revealed by clustering algorithms, which is not assigned to source populations within Europe except Spain and Portugal, and detected at lower-order clusters compared to other European and Native American sub-continental components; 2) inflated FST values between the Latino-specific and southern European components, compared to southern versus northern Europe differentiation; 3) significant deviation of the distribution of European haplotypes from the main cluster of Iberian samples in ASPCA space; and 4) increased IBD sharing among Latino individuals compared with Europeans. Additionally, a similar signature was observed in an independent dataset of Latino samples from the 1000 Genomes Project using a combined approach that integrates IBD and local ancestry tracts (Gravel et al., submitted). These findings suggest that early European waves of migration into the New World involved a reduced ancestral population size, mainly composed of Iberians, bearing a subset of the diversity present within the source population and causing the derived admixed populations to diverge from current European populations. Furthermore, we find differences between mainland and insular Caribbean populations, including 1) different time estimates for the onset of admixture as revealed by ancestry tract length analysis (Figure 3); 2) separate memberships in cluster-based analyses (Figure 4B, Figure S8); and 3) significantly shifted distributions of European haplotypes within the Latino cluster in ASPCA space (Figure 5, Figure S11). The fact that mainland Colombians and Hondurans show not only the highest proportions of the Latino-specific European component in ADMIXTURE but also the most extreme deviation from the Iberian cluster in ASPCA suggests stronger genetic drift in these populations, compatible with a two-stage European settlement involving insular territories at first, and mainland populations subsequently absorbing a subset of migrants from the islands.
There is documented evidence of extensive migration from the islands to the continent throughout the 16th century. There were only two viceroyalties of the Spanish Empire in the New World until the 18th century: the Viceroyalty of New Spain (capital, Mexico City) and the Viceroyalty of Peru (capital, Lima). An additional viceroyalty in South America was created in 1717 with Bogota as capital (Viceroyalty of New Granada), promoting economic and population growth.
Interestingly, the estimated time for the second pulse of European migrants into the ancestors of present-day Colombians (i.e., 12 generations ago) coincides with the creation of the Colombian-based Viceroyalty of New Granada, accounting for the large increase (from 12.5% to 75%) in European ancestry in the model based on tract length distributions. This small contribution of European ancestry at the onset of admixture in Colombians reinforces the idea that their patterns of European diversity are heavily impacted by a reduced number of founders. In contrast, Mexican-derived European haplotypes do not appear to be impacted by founder events as much as the Caribbean populations analyzed here. A possible explanation is that present-day Mexico was the center of the wealthy Viceroyalty of New Spain, one of the largest European settlements under Spanish rule. This status ensured continuous exchange with Spain throughout colonial times, resulting in a larger ancestral population size.
Space and time distinction of African migrations into the Caribbean
We find that populations from the insular Caribbean are best modeled as mixtures absorbing two independent waves of African migrants. Assuming a 30-year generation time, the estimated average of 15 generations ago for the first pulse (circa 1550) agrees with the introduction of African slaves soon after European contact in the New World. At first, local natives were used as the source of forced labor, but their populations were decimated rapidly, giving rise to the four-century-long transatlantic slave trade, which is usually divided into two eras. The first era accounted for a small proportion (3–16%) of all Atlantic slave trade, whereas the second Atlantic system peaked in the last two decades of the 18th century, accounting for more than half of the slave trade. This period of increased activity coincides with the estimated age of the second (and stronger) pulse of African tracts according to our model (e.g., 7 generations ago in Dominicans), pointing to the late 18th century. In other words, the estimated time separation between these two pulses (i.e., 8 generations or ∼240 years) based on genetic data is in extraordinary agreement with historical records, recapitulating the span between the onset of the African slave trade and its period of maximum intensity right before its rapid decline during the 19th century.
To address the question of whether there was also a separation in space between the origins of these two pulses, we relied on the fact that chromosomes from older contributions to admixture have undergone more recombination events, thus leading to shorter continuous African ancestry tracts. By conducting two different but complementary size-based analyses restricted to genomic segments of inferred African ancestry, we provide compelling evidence that short African tracts are enriched with haplotypes from northern coastal West Africa, represented by Mandenka samples from Senegal and Brong from western Ghana, near the Ivory Coast. This is in agreement with documented deportation flows during the 15th–16th centuries, wherein most enslaved Africans were carried off from Senegambia and departed for the Americas from Gorée Island, near Cape Verde. African slaves were obtained by European traders in ports along the West African coast, but raiding zones extended inland with the involvement of local African kingdoms. The Mandinka Kingdom of Senegambia was part of the Mali Empire, one of the most influential domains in West Africa, spreading its language, laws, and culture along the Niger River. The empire's total area included nearly all the land between the Sahara Desert and coastal forests, and by 1530 reached modern-day Ivory Coast and Ghana, possibly accounting for the shared pattern between the Mandenka and Brong with respect to the Caribbean's short ancestry tracts. While this interpretation is supported by the fact that the Mandenka and Brong are the westernmost population samples of our reference panel, the lack of additional samples from northern West Africa prevents us from determining whether this pattern is shared with other tribes as well. On the other hand, the greater affinity of the longer ancestry tracts with the rest of the African samples, which cover much of the central West African coast, is compatible with the greater involvement of such regions in the slave trade during the 18th century.
The volume of captives being embarked from the bights of Benin (e.g., today's Nigeria) and Biafra (e.g., today's Cameroon) was so elevated after 1700 that part of that shoreline soon became known as the “Slave Coast”. Population samples around this area represented in our reference panel include the Yoruba and Igbo from Nigeria, and the Bamoun and Fang from Cameroon, all of which show higher probabilities of being assigned as the source for longer African ancestry tracts in the admixed Latino groups analyzed. Together with Brazil, the Caribbean islands were the major slave import zone during the 18th century. Later deportation flows in the 19th century involved ports of origin near the Congo River in West Central Africa. The closest population sample of our reference panel from this region is represented by the Kongo, which also shows higher affinity with longer ancestry tracts, compatible with a later contribution to admixture in the Caribbean. The 19th century also saw the abolition of slavery in most parts of the world; however, the massive international flow of people it involved remains one of the deepest signatures in the genomes of descendant populations. While the geographic extension of the regions of origin of African slaves brought to the Americas has been widely documented, until now it was unclear to what extent particular sub-continental components have shaped the genomic composition of present-day Afro-Caribbean descendants. Our ancestry-specific and size-based analyses allowed us to discover that African haplotypes derived from Caribbean populations still retain a signature from the first African ancestors despite the later dominance of African influx from multiple sub-continental components.
Our genome-wide dense genotyping data from six different populations of Caribbean descent, coupled with the availability of large-scale reference panels, allowed us to address long-standing questions regarding the origin and admixture history of the Caribbean Basin. The differences between insular and continental Caribbean populations underscore the importance of characterizing admixed populations at finer scales. We report ancestry-specific recent bottlenecks affecting particular Latino groups, but not others, which may have important implications for the expected relative proportion of deleterious mutations and elevated allele frequencies that can be detected via association studies in these populations. Finally, the extensive population stratification within sub-continental components implies that medically relevant genetic variants may be geographically restricted, reinforcing the need to sequence target populations in order to discover local variants that may only be relevant in Latino-specific association studies for disease.
Materials and Methods
Samples and data generation
Generated data and assembled datasets for this study are summarized in Table S1. A total of 251 individuals representing six different Caribbean-descent populations were recruited in South Florida, USA. Participants were required to have at least three grandparents from their countries of origin; limited ethnographic and anonymous pedigree information was collected. The majority of pedigrees (94.3%, n = 82) had four grandparents from the same country. Only 5 pedigrees (5.7%) had one grandparent from a different country. Informed consent was obtained from all participants under approval by the University of Miami Institutional Review Board (study no. 20081175). A total of 76 trios, 2 duos, and 19 parents were genotyped using Affymetrix 6.0 SNP arrays, comprising 80 Cubans, 85 Colombians, 34 Dominicans, 27 Puerto Ricans, 19 Hondurans, and 6 Haitians. Genotype data will be made available through dbGaP under the Genomic Origins and Admixture of Latinos (GOAL) study. Out of 173 founders, 18 samples were filtered from structure analyses due to cryptic relatedness as inferred by IBD > 10%. Four trios were not considered for trio phasing due to an excess of Mendelian errors (>100 K), two trios were removed due to 3rd or higher degree of relatedness between parents as inferred by IBD, and five trios were filtered due to cryptic relatedness between members of different trios above 10% IBD. After filtering, 65 complete trios remained for haplotype-based analyses. To study population structure and demographic patterns involving relevant ancestral populations, 79 previously collected samples from three native Venezuelan tribes were genotyped using the same array (i.e., 25 Yukpa [aka Yucpa], 29 Bari, and 25 Warao). We combined our data with publicly available genomic resources and assembled a global database incorporating genome-wide SNP array data for 3,042 individuals, from which two datasets with different SNP densities were constructed (see Table S1). The high-density dataset included populations with available SNP data from Affymetrix arrays, namely African, European, and Mexican HapMap samples, Europeans from POPRES, West Africans from Bryc et al., and Native Americans from Mao et al. After merging and quality control filtering, 389,225 SNPs remained, and representative population subsets were used in different analyses as detailed in the sections below. Our lower density dataset (30,860 SNPs) resulted from the intersection of our high-density dataset with available SNP data generated on Illumina platform arrays, including 52 additional Native American populations, as well as additional Latino populations sampled in New York City and 1000 Genomes Latino samples. The resulting dataset combines genomic data for 1,262 individuals from 80 populations. Full details on the population samples are available in Table S1.
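The relatedness filter can be illustrated with a small sketch. This is a toy version with made-up pair values; the actual pipeline computes genome-wide IBD proportions (e.g., with PLINK) and removes one member of each pair above the 10% threshold.

```python
# Toy relatedness filter: greedily drop one sample from each pair whose
# genome-wide IBD proportion exceeds 0.10. Pair values are made up.
pairs = [("S1", "S2", 0.32), ("S3", "S4", 0.05), ("S2", "S5", 0.14)]

removed = set()
for a, b, ibd in sorted(pairs, key=lambda t: -t[2]):
    if ibd > 0.10 and a not in removed and b not in removed:
        removed.add(b)  # keeping `a` and dropping `b` resolves this pair

print(removed)  # {'S2'} -- dropping S2 also resolves the S2-S5 pair
```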
An unsupervised clustering algorithm, ADMIXTURE, was run on our high-density dataset to explore global patterns of population structure among a representative subset of 641 samples, including seven Native American, eleven POPRES European, HapMap3 Nigerian Yoruba, HapMap3 Mexican, and our six new Caribbean Latino populations (see Table S1). Fourteen values of K (2 through 15) were tested successively. Log-likelihoods and cross-validation errors for each K are available in Figure S3. FST based on allele frequencies was calculated in ADMIXTURE v1.22 for each identified cluster at K = 8, and values are available in Table S3. Our low-density dataset comprising 1,262 samples (detailed in Table S1) was used to run K = 2 through 20. Log-likelihoods, cross-validation errors, and FST values from ADMIXTURE are available in Figure S9 and Table S4. Principal component analysis (PCA) was applied to both datasets using EIGENSOFT 4.2, and plots were generated using R 2.15.1. Sex bias in ancestry contributions was evaluated by selecting only females (to ensure we compare a diploid X chromosome to diploid autosomes) and running ADMIXTURE at K = 3 on the X chromosome and autosomes separately. The Wilcoxon signed rank test, a non-parametric version of the paired Student's t-test that does not require the normality assumption, was applied to assess the significance of the difference in X and autosomal ancestry proportions. This tests whether the average difference of ancestry proportions assigned to a given source population for the X and for the autosomes of each sample is significantly different from zero. The test was applied to the entire collection of Latino samples, revealing an over-arching trend, and then to each population in turn to identify any between-population differences. A rejection of the null hypothesis means that the ancestry proportions on the X and the autosomes are significantly different from one another but does not imply which proportion is larger. We provide box plots as a visual aid to show the direction of the difference (Figure S4). Global ancestry estimates from ADMIXTURE at K = 3 were used to test the correlation between male and female ancestry proportions considering all trio founders within each Caribbean population as well as within the full set of admixed trios. Linear models and permutations (up to 100,000) were performed using R 2.15.1.
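As an illustration of the paired test used here, consider the following sketch. The ancestry proportions are simulated, not real data; the actual analysis applied the test to ADMIXTURE estimates for female samples.

```python
# Paired Wilcoxon signed-rank test: does the Native American ancestry
# proportion differ between the X chromosome and the autosomes within
# the same individuals? Values below are made up for illustration.
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(0)
autosomal = rng.uniform(0.05, 0.25, size=40)           # hypothetical
x_chrom = autosomal + rng.normal(0.04, 0.02, size=40)  # hypothetical excess

stat, p = wilcoxon(x_chrom - autosomal)
print(f"W = {stat:.1f}, p = {p:.3g}")
# A significant p-value says the X and autosomal proportions differ;
# inspecting the sign of the median difference (as with a box plot)
# shows the direction of the sex bias.
```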
Phasing and local ancestry assignment
Family trio genotypes from our six Caribbean populations and continental reference samples were phased using BEAGLE 3.0 software. Local ancestry assignment was performed using PCAdmix (http://sites.google.com/site/pcadmix/) at K = 3 ancestral groups. This approach relies on phased data from reference panels and the admixed individuals. To maintain SNP density and maximize phasing accuracy, we restricted the analysis to a subset of reference samples with available Affymetrix 6.0 trio data, namely 10 YRI trios, 10 CEU HapMap3 trios, and 10 Native American trios from Mexico. Each chromosome is analyzed independently, and local ancestry assignment is based on loadings from principal components analysis of the three putative ancestral population panels. The scores from the first two PCs were calculated in windows of 70 SNPs for each panel individual (in previous work we estimated that 10,000 windows is a suitable number to break the genome into when inferring local ancestry using PCAdmix; in this case, after merging Affymetrix 6.0 data from admixed and reference panels, 743,735 SNPs remained, giving 743,735/10,000 ≈ 70 SNPs per window). For each window, the distribution of individual scores within a population is modeled by fitting a multivariate normal distribution. Given an admixed chromosome, these distributions are used to compute likelihoods of belonging to each panel. These scores are then analyzed in a Hidden Markov Model with transition probabilities as in Bryc et al. The g (generations) parameter in the HMM transition model was determined iteratively so as to maximize the total likelihood of each analyzed population. Local ancestry assignments were determined using a 0.9 posterior probability threshold for each window using the forward-backward algorithm. In analyses that required estimating the length of continuous ancestry tracts, the Viterbi algorithm was used. An assessment of the accuracy of this approach is given in Brisbin et al.
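The per-window scoring at the core of this approach can be sketched as follows. The PC coordinates are simulated and the HMM smoothing is omitted; this is an illustration of the multivariate-normal likelihood step, not the PCAdmix implementation itself.

```python
# For one 70-SNP window: fit a multivariate normal to the first two PC
# scores of each reference panel, then score an admixed haplotype's
# window against each panel. Toy numbers; no HMM smoothing shown.
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(1)
panels = {  # hypothetical PC1/PC2 scores for panel haplotypes
    "EUR": rng.normal([2.0, 0.0], 0.3, size=(20, 2)),
    "AFR": rng.normal([-2.0, 1.0], 0.3, size=(20, 2)),
    "NAT": rng.normal([0.0, -2.0], 0.3, size=(20, 2)),
}
window_score = np.array([1.8, 0.2])  # admixed haplotype, this window

loglik = {
    name: multivariate_normal(s.mean(0), np.cov(s.T)).logpdf(window_score)
    for name, s in panels.items()
}
print(max(loglik, key=loglik.get), loglik)
# In PCAdmix these per-window likelihoods become emission probabilities
# in an HMM, and ancestry is called at a 0.9 posterior threshold.
```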
Tract length analysis
We used the software Tracts to identify the migratory model that best explains the genome-wide distribution of ancestry patterns. Specifically, we considered three migration models, each featuring a panmictic population absorbing migrants from three source populations. The models differ in the number of allowed migration events per population. In the simplest model, the population is founded by Native American and European individuals, and later receives a pulse of African migrants. The initial ancestry proportion and timing, as well as the African migration amplitude and timing, are fitted to the data as described below. The other two models feature an additional input of either European or African migrants; the timing and magnitude of this additional pulse result in two additional parameters that must be fitted to the data. Here, the data consisted of Viterbi calls from PCAdmix (see previous section and Figure 2), that is, the most probable assignment of local ancestry along the genomes. To fit parameters to these data, we tallied the inferred continuous ancestry tracts according to inferred ancestry and tract length, using 50 equally spaced length bins per population plus one additional bin to account for full chromosomes. Given a migration model and parameters, Tracts calculates the expected counts per bin. Assuming that counts in each bin are Poisson distributed, it produces a likelihood estimate that is used to fit model parameters. For each population, we report the model with the best Bayesian Information Criterion (BIC), −2 log(L) + k log(n), with n = 153 the total number of tract-length bins. Because we imposed a fixed number of migration pulses, we must keep in mind that migrations are likely to have been more continuous than what is displayed in the best-fitting models. The pulses are thus best interpreted as representative time points within migration periods that the actual movements probably spanned. Resolving the duration of each pulse would likely require refined models and a great deal more data.
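The model-selection criterion can be made concrete with a small sketch. The counts, expectations, and parameter count below are hypothetical; in practice Tracts derives the expected counts from the fitted migration model.

```python
# Poisson composite likelihood over tract-length bins plus BIC, as used
# to compare migration models. Counts and expectations are made up.
import numpy as np
from scipy.stats import poisson

observed = np.array([120, 80, 45, 20, 8, 3])    # hypothetical bin counts
expected = np.array([115, 85, 40, 22, 9, 2.5])  # hypothetical model fit

loglik = poisson.logpmf(observed, expected).sum()
k, n = 4, 153  # hypothetical free-parameter count; n bins as in the paper
bic = -2 * loglik + k * np.log(n)
print(f"logL = {loglik:.2f}, BIC = {bic:.2f}")
# The model with the lowest BIC is reported; extra migration pulses must
# improve the likelihood enough to justify their two extra parameters.
```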
Ancestry-Specific Principal Component Analysis (ASPCA)
To explore within-continent population structure, we applied the following approach for each of the continental ancestries (i.e., Native American, European, and African) of admixed genomes. The general framework is shown in Figure 2. It comprises locus-specific continental ancestry estimation along the genome, followed by PCA restricted to ancestry-specific portions of the genome combined with sub-continental reference panels of ancestral populations. For this purpose, we used our continental-level local ancestry estimates provided by PCAdmix to partition each genome into ancestral haplotype segments, and retained for subsequent analyses only those haplotypes assigned to the continental ancestry of interest. This is achieved by masking (i.e., setting to missing) all segments from the other two continental ancestries. Because ancestry-specific segments may cover different loci from one individual to another, a large amount of missing data results from scaling this approach to a population level, which limits the resolution of PCA. To overcome this problem, we adapted the subspace PCA (ssPCA) algorithm introduced by Raiko et al. to implement a novel ancestry-specific PCA (ASPCA) that accommodates phased haploid genomes with large amounts of missing data. Our method is analogous to the ssPCA implementation by Johnson et al., which operates on genotype data. In contrast, ASPCA operates on haplotypes, allowing us to use much more of the genome (rather than just the parts estimated to have two copies of a certain ancestry) and to independently analyze the two haploid genomes of each individual. Finally, ancestry-specific haplotypes derived from admixed individuals are combined with haplotypes derived from putative parental populations and projected together onto PCA space. Details of the ASPCA algorithm and constructed datasets are described in Text S1.
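The masking step can be sketched in a few lines. The haplotype and the per-site ancestry calls are toy values; the real pipeline applies this to phased genome-wide data using PCAdmix output.

```python
# Keep only the sites on a haplotype assigned to the target ancestry;
# everything else becomes missing. ASPCA (a subspace-PCA variant) then
# handles the resulting missingness. Toy example for one haplotype.
import numpy as np

haplotype = np.array([0, 1, 1, 0, 0, 1, 0, 1], dtype=float)
ancestry = np.array(["AFR", "AFR", "EUR", "EUR", "NAT",
                     "AFR", "AFR", "EUR"])  # per-site local ancestry calls

masked = np.where(ancestry == "AFR", haplotype, np.nan)
print(masked)  # [ 0.  1. nan nan nan  1.  0. nan]
# Stacking such vectors across haplotypes gives a matrix with large,
# haplotype-specific patterns of missing data, which is why a standard
# PCA is replaced by the subspace-PCA formulation.
```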
Differentiation of sub-European ancestry components
To measure the observed deviation in ASPCA of European haplotypes derived from admixed Caribbean populations with respect to the cluster of Iberian samples, a bootstrap resampling-based test was performed. The null distribution was generated by comparing bootstraps of Portuguese and Spanish ASPCA values as models of the intrinsic Iberian population structure. We then compared the ASPCA values of the admixed individuals and tested whether the observed differences between Iberian ASPCA values and those of the admixed individuals are more extreme than the differences within Iberia. The distance was determined using the chi-squared statistic of Fisher's method combining ASPC1 and ASPC2 t-tests for each bootstrap. We ran 10,000 bootstraps to determine one-tailed p-values. As Iberians we considered POPRES Spanish, POPRES Portuguese, Andalusians, and Galicians; as Caribbean Latinos we considered CUB, PUR, DOM, COL, and HON. Additional tests were performed comparing the Portuguese versus the rest of the Iberians, and comparing our samples against an independent dataset of Mexican individuals analyzed by Moreno-Estrada, Gignoux et al. (in preparation) projected onto ASPCA space using the same reference panel of European populations. A bivariate test was performed to measure the relative deviation from the Iberian cluster of the distribution given by the Caribbean versus the Mexican dataset. To determine whether insular versus mainland Caribbean populations disperse over significantly different ranges in ASPC2, a Wilcoxon rank test was performed between (COL+HON) versus (CUB, PUR, DOM). Haitians were excluded due to low sample size (N = 2 haplotypes). A box plot is available in Figure S11. Population differentiation estimates between clusters inferred with ADMIXTURE were visualized and compared across runs where both the Latino-specific and southern European components were detected. Values are available in Table S3 and Table S4. To provide independent evidence on the sub-continental ancestry of European haplotypes, we considered segments that are identical by descent (IBD) between unrelated Latino individuals and a representative subset of European populations. We used our high-density dataset to extract a subset of 203 POPRES European individuals and the founders of the 65 complete admixed trios. We first performed a genome-wide pairwise IBS estimation using PLINK to ensure that the dataset contains no samples with more than 10% IBS with any other sample. Then we used fastIBD to phase the data and estimate segments shared IBD, retaining only segments longer than 2 Mb to eliminate false positive IBD matches, on the assumption that ancestry is shared across IBD segments of this length. All segments of 2 Mb or greater shared IBD between pairs of individuals were summed, and histograms were created for pairwise matches within each group (i.e., POPRES Europeans, Iberians, and Caribbean Latinos). To quantify the proportion of shared DNA between pairs of populations, we calculated a summed pairwise IBD statistic, which is the sum of lengths of all segments inferred to be shared IBD between a given European population and each Latino population, normalized by sample size.
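The combined two-axis statistic can be sketched as follows. The coordinates are simulated; the actual test recomputes this statistic over 10,000 bootstrap comparisons of Iberian subsets to obtain an empirical one-tailed p-value.

```python
# Fisher's method: combine separate t-tests on ASPC1 and ASPC2 into one
# chi-squared distance between two groups of haplotypes. Hypothetical
# coordinates; the paper compares bootstraps of Iberians to Latinos.
import numpy as np
from scipy.stats import ttest_ind, chi2

rng = np.random.default_rng(2)
iberia = rng.normal([0.0, 0.0], 0.5, size=(60, 2))   # hypothetical
latino = rng.normal([0.3, -0.4], 0.5, size=(60, 2))  # hypothetical

pvals = [ttest_ind(iberia[:, i], latino[:, i]).pvalue for i in range(2)]
fisher_stat = -2 * np.sum(np.log(pvals))  # chi-squared with 2k = 4 df
p_combined = chi2.sf(fisher_stat, df=2 * len(pvals))
print(f"X2 = {fisher_stat:.2f}, p = {p_combined:.3g}")
# In the paper this statistic is recomputed on bootstrap pairs of
# Iberian subsets to build the null distribution for the observed value.
```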
Size-based ASPCA analyses
Given the evidence from our tract length analysis for a second pulse of African migrants into the admixture of insular Caribbean Latinos, a modified size-based ASPCA analysis was performed. A reference panel was built integrating three different resources and focusing on putative source populations from along the West African coast, including Mandenka from Senegal, Yoruba and Igbo from Nigeria, Bamoun and Fang from Cameroon, Brong from western Ghana, and Kongo from the Democratic Republic of the Congo. We begin with the continental local ancestry inference from PCAdmix at K = 3. For each individual we then divide African ancestry tracts into small (0 to 50 cM) and large (>50 cM) size classes. Given a partition of African ancestry tracts, we take all sites included in one tract class, say short tracts, and run PCA on our sub-continental West African reference populations for only these sites. Using the first two PCs from this analysis, we fit a bivariate normal distribution to each reference population cluster. We then project our test sample into this PCA space, and estimate the probability of it coming from each reference population using the fitted distributions. This procedure is repeated for each tract class, for each individual. For each admixed Caribbean population, we can then estimate the probability that a given class of African ancestry tracts comes from a specific West African source population as the average probability of assignment to this population across all individuals. Finally, under the assumption that a given class of African tracts must come from one of the provided reference populations, we rescale these probabilities to sum to one. Each assignment estimate is also provided with error bars representing the standard error of the mean. We compare the short and long assignment probabilities for each Caribbean population to identify distinct sources for “older” and “younger” West African migratory source populations. Haitians were not included in the analysis due to low sample size (n = 4). Because shorter tracts have a higher likelihood of mis-assignment, we added a medium tract size class (5 cM to 50 cM) to test whether the results were simply due to very short (0 cM to 5 cM) European or Native American tracts being mis-classified as African. We compared the results for short and medium tracts and found that the trends are maintained, suggesting that the observation that older, shorter tracts appear to derive primarily from the Mandenka and Brong source populations is not simply due to short-tract mis-assignment.
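A minimal sketch of the assignment step follows, with simulated reference clusters and a hypothetical projected test point; the real analysis uses PCs computed from the tract-class-specific sites and averages the rescaled probabilities across individuals.

```python
# Assign one class of African ancestry tracts to a West African source:
# fit a bivariate normal to each reference cluster in PC space, evaluate
# the test haplotype's density under each, and rescale to sum to one.
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(3)
refs = {  # hypothetical PC coordinates of reference populations
    "Mandenka": rng.normal([-1.5, 0.5], 0.4, size=(25, 2)),
    "Yoruba":   rng.normal([1.0, 0.0], 0.4, size=(25, 2)),
    "Kongo":    rng.normal([0.5, -1.5], 0.4, size=(25, 2)),
}
test_point = np.array([-1.2, 0.4])  # e.g., a short-tract projection

dens = {p: multivariate_normal(x.mean(0), np.cov(x.T)).pdf(test_point)
        for p, x in refs.items()}
total = sum(dens.values())
for pop, d in dens.items():
    print(f"{pop}: {d / total:.3f}")
# Averaging these rescaled probabilities across individuals, separately
# for short and long tracts, gives the per-population assignment profiles.
```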
Local ancestry estimation within African tracts
To identify likely regions of Yoruba versus Mandenka ancestry in the African component, we modified our implementation of PCAdmix to perform local ancestry deconvolution solely of the African segments of the admixed genomes. The modification is achieved in the final step of the algorithm: whereas the standard approach estimates a single HMM across an entire chromosome, here we fit J disjoint HMMs spanning each of the J blocks of African ancestry in a given chromosome for a given individual. Applying the method, we obtained posterior probabilities for Mandenka versus Yoruba ancestry within the previously inferred African segments. We then selected only those sub-regions that were confidently called as Mandenka or Yoruba, and stratified them by physical size.
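The structural change relative to the standard approach can be illustrated by how a chromosome's ancestry calls are partitioned before fitting the HMMs. This toy sketch shows only the partitioning; the two-way (Mandenka vs. Yoruba) HMM fitted within each block is omitted.

```python
# Split one chromosome's local-ancestry calls into contiguous African
# blocks; each block is then deconvolved with its own two-way HMM
# rather than one chromosome-wide HMM. Toy calls for illustration.
import itertools

calls = ["EUR", "AFR", "AFR", "AFR", "NAT", "AFR", "AFR", "EUR"]
blocks = [
    [i for i, _ in grp]
    for anc, grp in itertools.groupby(enumerate(calls), key=lambda t: t[1])
    if anc == "AFR"
]
print(blocks)  # [[1, 2, 3], [5, 6]] -> two disjoint HMMs on this chromosome
```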
Principal component 1 versus lower order PCs defining sub-continental components among Native American populations. Top: PC5 separates Venezuelan population isolates from the rest of Native Americans. Bottom: PC7 separates Mesoamerican from Andean groups. Mexicans and Hondurans distribute between the European and Mesoamerican clusters, whereas Colombians slightly deviate towards the Andean and Venezuelan clusters. Global PCA analysis based on the high-density dataset (∼390 K SNPs) and thus limited to reference panel populations with available Affymetrix SNP array data (see Table S1 for details).
ADMIXTURE results from K = 2 through 15 based on the high-density dataset (∼390 K SNPs) including 7 admixed Latino populations and 19 reference populations. A low-frequency Southern European component restricted to Mediterranean populations at lower order Ks and specifically to Iberian populations at higher order Ks, accounts for the majority of European ancestry among Latinos (black bars). It further decomposes into population-specific clusters (purple bars) denoting higher similarities within the European portion among Latinos compared to European source populations.
ADMIXTURE metrics at increasing K values based on Log-likelihoods (A) and cross-validation errors (B) for results shown in Figure S2.
Comparison of ADMIXTURE estimates obtained from autosomes and the X chromosome in different Latino/Caribbean populations. A) Cluster-based results for K = 3 using the same set of ancestral populations as in Figure S2. Because the X chromosome is diploid, the analysis was restricted to female individuals from the seven admixed Latino populations. Within each population, individuals are sorted from largest to smallest proportion of European ancestry. B) Box plot showing the directionality of the difference between X and autosomal ancestry proportions considering all populations together. P-values on top correspond to the Wilcoxon signed rank test applied to assess statistical significance (see Materials and Methods). C) Box plots and statistical tests for each population (Haitians excluded due to low sample size). The observed pattern strongly supports the presence of sex-biased gene flow during the process of admixture throughout the Caribbean, with significantly higher contribution from Native American, and to a lesser extent West African, ancestors into the composition of the X chromosome, which largely reflects the female demographic history of a population.
Correlation between male and female continental ancestries. Parents' ancestry proportions from each trio were used to compare correlation coefficients between the observed values and 100,000 permuted male-female pairs (p-values shown for the combined set of Latino Caribbean samples and for each population in Table S2).
Ancestry tract length distributions per population and demographic model tested in Tracts. For each demographic scenario, the observed distribution is compared to the predictions of the best-fitting migration model (displayed below each distribution). Solid lines represent model predictions and shaded areas are the one-sigma confidence regions surrounding the predictions. Three different demographic scenarios were considered, all of which assume the involvement of European and Native American tracts at the onset of admixture, followed by the introduction of African migrants (denoted by EUR,NAT+AFR). The second and third models allow for an additional pulse of European (EUR,NAT+AFR+EUR) and African (EUR,NAT+AFR+AFR) ancestry, respectively. Likelihood values for each model are shown on top of each plot. Pie charts above each migration model are proportional to the estimated number of migrants being introduced at each point in time (black arrows). GA: generations ago.
ASPCA analysis of Native American haplotypes derived from admixed genomes (solid circles) and reference panel populations grouped by previously reported linguistic families. Top panels: ASPCA with the full reference panel of Native American populations. Bottom panels: Filtered ASPCA without extreme outliers (Aleutians, Greenlanders, and Surui excluded from the analysis). Each individual from the reference panel is represented by the corresponding population label centered on its PCA coordinates. A zoomed version of PC1 vs. PC2 for the filtered set (bottom left) grouped by geographic sampling location is available in Figure 4A.
ADMIXTURE results from K = 2 through 20 based on the low-density dataset (∼30 K SNPs) including additional admixed Latino and Native American reference populations (see Table S1 for details). The presence of the Latino European component (black and gray bars) is recaptured among independently sampled Latino populations. FL: Florida (this study); NY: New York; 1KG: 1000 Genomes Project samples. Native American populations are grouped according to previously reported linguistic families. Labels are shown for the populations representing the 15 Native American clusters identified at K = 20 (four of the remaining five being of European ancestry and one of West African ancestry). Clusters involving multiple populations are identified by those with the highest membership values. Throughout lower and higher order Ks, several South American components (yellow and green bars) show varying degrees of shared genetic membership with Mesoamerican Mayans, accounting for up to nearly half of their genome composition (see Figure 4 for more details).
ADMIXTURE metrics at increasing K values based on Log-likelihoods (A) and cross-validation errors (B) for results shown in Figure S8.
ASPCA distribution of Iberian samples (red circles) compared to European haplotypes derived from our Latino Caribbean samples (top panel) and from an independent cohort of Mexican samples (bottom panel). The relative deviation from the Iberian cluster is significantly different comparing the Caribbean versus the Mexican dataset (see the main text for details).
ASPC2 values per population from the European-specific PCA analysis shown in Figure 5 and Figure S10. Population codes as in Table S1. The boxplot shows that low ASPC2 values are enriched with mainland Colombian and Honduran haplotypes, whereas insular Caribbean populations show less deviated values from the Iberian cluster. A Wilcoxon rank test between mainland (COL, HON) versus insular samples (CUB, PUR, DOM) demonstrated that these two groups disperse over significantly different ranges in ASPC2 (Haitians excluded due to low sample size).
IBD sharing between different Caribbean Latino populations and a representative subset of POPRES European populations as measured by a summed pairwise IBD statistic. For each Latino population, maximum pairwise IBD levels were observed in those pairs involving Spanish and, to a lesser extent, Portuguese samples, in agreement with our ASPCA results.
IBD sharing between pairs of individuals within A) Caribbean Latinos and B) a representative subset of POPRES European populations. Inset histograms display counts lower than 50 for the same binning categories. The overall count of pairs sharing short segments of total IBD is higher among Europeans, probably as a result of an older shared pool of source haplotypes. In contrast, the higher frequency of longer IBD matches among Latinos is compatible with a recent European founder effect. After excluding within-population pairs of Latino individuals (top right), there are still more and longer IBD matches among Caribbean populations compared to Iberians (bottom right).
ASPCA analysis of African haplotypes derived from admixed genomes with >25% of African ancestry (black symbols) and a representative subset of African HapMap3 and other West African reference panel populations. Colombians and Hondurans were excluded due to lower overall proportions of African ancestry.
ASPCA analysis of short versus long African ancestry tracts from admixed genomes and West African reference panel populations. To exemplify our size-based ASPCA approach, the African genome of a Puerto Rican individual is displayed (denoted by PUR). Left: PUR clusters with Mandenka when only sites within short ancestry tracts (<50 cM) are considered to perform PCA. Right: a similar background distribution is obtained but the same PUR individual no longer clusters with Mandenka when considering long ancestry tracts (>50 cM).
African ancestry size-based ASPCA results per population sample. Considering three different classes of ancestry tract lengths (black: short; red: long; blue: intermediate), scaled assignment probabilities are shown for each African source population. Values on the y-axis are the average probability of assignment to each potential source population across all individuals within each Latino population (see Materials and Methods for details).
Summary of Latino populations and assembled reference panels.
Correlation p-values of male vs. female ancestry.
FST divergences between estimated populations for K = 8 using ADMIXTURE.
FST divergences between estimated populations for K = 20 using ADMIXTURE.
Methodology of the Ancestry-Specific PCA (ASPCA) implementation.
We thank study participants for generously donating DNA samples, Brenna M. Henn, Martin Sikora, and Meredith Carpenter for helpful comments on earlier versions of the manuscript, Scott Huntsman for data management, and David Reich and Andres Ruiz-Linares for sharing their Native American dataset (obtained from the University of Antioquia through a data access agreement dated July 26, 2012). We acknowledge the NIH GWAS Data Repository for granting access to the Population Reference Sample (POPRES) dataset. The datasets used for the analyses described in this manuscript were obtained from dbGaP through accession number phs000145.v1.p1.
Conceived and designed the experiments: AME CDB ERM MLC JLM JCMC. Performed the experiments: ERM JLM RJM DJH CE. Analyzed the data: AME FZ SG JKB CRG PAOT SAA. Contributed reagents/materials/analysis tools: ERM MLC JLM RWM KS PJN ZL PP EGB CDB. Wrote the paper: AME SG FZ JKB CRG CDB.
- 1. Bustamante CD, Burchard EG, De la Vega FM (2011) Genomics for the world. Nature 475: 163–165. doi: 10.1038/475163a
- 2. Wang S, Ray N, Rojas W, Parra MV, Bedoya G, et al. (2008) Geographic patterns of genome admixture in Latin American Mestizos. PLoS Genet 4: e1000037. doi: 10.1371/journal.pgen.1000037
- 3. Bryc K, Velez C, Karafet T, Moreno-Estrada A, Reynolds A, et al. (2010) Colloquium paper: genome-wide patterns of population structure and admixture among Hispanic/Latino populations. Proc Natl Acad Sci U S A 107 Suppl 2: 8954–8961. doi: 10.1073/pnas.0914618107
- 4. Via M, Gignoux CR, Roth LA, Fejerman L, Galanter J, et al. (2011) History shaped the geographic distribution of genomic admixture on the island of Puerto Rico. PLoS ONE 6: e16513. doi: 10.1371/journal.pone.0016513
- 5. Kidd JM, Gravel S, Byrnes J, Moreno-Estrada A, Musharoff S, et al. (2012) Population genetic inference from personal genome data: impact of ancestry and admixture on human genomic variation. Am J Hum Genet 91: 660–671. doi: 10.1016/j.ajhg.2012.08.025
- 6. Abecasis GR, Auton A, Brooks LD, DePristo MA, Durbin RM, et al. (2012) An integrated map of genetic variation from 1,092 human genomes. Nature 491: 56–65.
- 7. Martinez-Cruzado J, Toro-Labrador G, Viera-Vera J, Rivera-Vega M, Startek J, et al. (2005) Reconstructing the population history of Puerto Rico by means of mtDNA phylogeographic analysis. Am J Phys Anthropol 128: 131–155. doi: 10.1002/ajpa.20108
- 8. International HapMap Consortium; Frazer KA, Ballinger DG, Cox DR, Hinds DA, et al. (2007) A second generation human haplotype map of over 3.1 million SNPs. Nature 449: 851–861.
- 9. Nelson MR, Bryc K, King KS, Indap A, Boyko AR, et al. (2008) The Population Reference Sample, POPRES: a resource for population, disease, and pharmacological genetics research. Am J Hum Genet 83: 347–358. doi: 10.1016/j.ajhg.2008.08.005
- 10. Bryc K, Auton A, Nelson MR, Oksenberg JR, Hauser SL, et al. (2010) Genome-wide patterns of population structure and admixture in West Africans and African Americans. Proc Natl Acad Sci U S A 107: 786–791. doi: 10.1073/pnas.0909559107
- 11. Reich D, Patterson N, Campbell D, Tandon A, Mazieres S, et al. (2012) Reconstructing Native American population history. Nature 488: 370–374.
- 12. Alexander DH, Novembre J, Lange K (2009) Fast model-based estimation of ancestry in unrelated individuals. Genome Res 19: 1655–1664. doi: 10.1101/gr.094052.109
- 13. Sandoval K, Moreno-Estrada A, Mendizabal I, Underhill PA, Lopez-Valenzuela M, et al. (2012) Y-chromosome diversity in Native Mexicans reveals continental transition of genetic structure in the Americas. Am J Phys Anthropol 148: 395–405. doi: 10.1002/ajpa.22062
- 14. Wang S, Lewis C, Jakobsson M, Ramachandran S, Ray N, et al. (2007) Genetic Variation and Population Structure in Native Americans. PLoS Genet 3: e185. doi: 10.1371/journal.pgen.0030185
- 15. Novembre J, Johnson T, Bryc K, Kutalik Z, Boyko AR, et al. (2008) Genes mirror geography within Europe. Nature 456: 98–101. doi: 10.1038/nature07331
- 16. Auton A, Bryc K, Boyko A, Lohmueller K, Novembre J, et al. (2009) Global distribution of genomic diversity underscores rich complex history of continental human populations. Genome Res 19: 1–30. doi: 10.1101/gr.088898.108
- 17. Risch N, Choudhry S, Via M, Basu A, Sebro R, et al. (2009) Ancestry-related assortative mating in Latino populations. Genome Biology 10: R132. doi: 10.1186/gb-2009-10-11-r132
- 18. Pool JE, Nielsen R (2008) Inference of Historical Changes in Migration Rate From the Lengths of Migrant Tracts. Genetics 181: 711–719. doi: 10.1534/genetics.108.098095
- 19. Brisbin A, Bryc K, Byrnes J, Zakharia F, Omberg L, et al. (2012) PCAdmix: Principal Components-Based Assignment of Ancestry Along Each Chromosome in Individuals with Admixed Ancestry from Two or More Populations. Hum Biol 84: 343–364. doi: 10.3378/027.084.0401
- 20. Gravel S (2012) Population genetics models of local ancestry. Genetics 191: 607–619. doi: 10.1534/genetics.112.139808
- 21. Fernandez-Mendez E (1970) Historia cultural de Puerto Rico. San Juan, Puerto Rico: Ediciones El Cemí.
- 22. Tremblay M, Vezina H (2000) New estimates of intergenerational time intervals for the calculation of age and origins of mutations. Am J Hum Genet 66: 651–658. doi: 10.1086/302770
- 23. Johnson NA, Coram MA, Shriver MD, Romieu I, Barsh GS, et al. (2011) Ancestral components of admixed genomes in a mexican cohort. PLoS Genet 7: e1002410. doi: 10.1371/journal.pgen.1002410
- 24. Botigue LR, Henn BM, Gravel S, Maples BK, Gignoux CR, et al. (2013) Gene flow from North Africa contributes to differential human genetic diversity in southern Europe. Proc Natl Acad Sci U S A 110: 11791–11796. doi: 10.1073/pnas.1306223110
- 25. Lao O, Lu TT, Nothnagel M, Junge O, Freitag-Wolf S, et al. (2008) Correlation between genetic and geographic structure in Europe. Curr Biol 18: 1241–1248. doi: 10.1016/j.cub.2008.07.049
- 26. Torgerson DG, Gignoux CR, Galanter JM, Drake KA, Roth LA, et al. (2012) Case-control admixture mapping in Latino populations enriches for known asthma-associated genes. J Allergy Clin Immunol 130: 76–82. doi: 10.1016/j.jaci.2012.02.040
- 27. Rodríguez Ramos R (2010) What is the Caribbean? An archaeological perspective. Journal of Caribbean Archaeology 3: 19–51.
- 28. Rouse I (1986) Migrations in prehistory: inferring population movement from cultural remains. New Haven: Yale University Press. xiv, 202 p.
- 29. Rouse I (1993) The Tainos: Rise and Decline of the People who greeted Columbus. New Haven: Yale University Press. 224 p.
- 30. Diamond J, Bellwood P (2003) Farmers and their languages: the first expansions. Science 300: 597–603. doi: 10.1126/science.1078208
- 31. Ruhlen M (1991) A guide to the world's languages. Stanford, California: Stanford University Press. 463 p.
- 32. Demarchi DA, Garcia Ministro A (2008) Genetic Structure of Native Populations from the Gran Chaco Region, South America. Int J Hum Genet 8: 131–141.
- 33. Lalueza-Fox C, Calderon FL, Calafell F, Morera B, Bertranpetit J (2001) MtDNA from extinct Tainos and the peopling of the Caribbean. Ann Hum Genet 65: 137–151. doi: 10.1046/j.1469-1809.2001.6520137.x
- 34. Eltis D, Richardson D (2010) Atlas of the Transatlantic Slave Trade. New Haven: Yale University Press. 307 p.
- 35. Mao X, Bigham AW, Mei R, Gutierrez G, Weiss KM, et al. (2007) A genomewide admixture mapping panel for Hispanic/Latino populations. Am J Hum Genet 80: 1171–1178. doi: 10.1086/518564
- 36. Price AL, Patterson NJ, Plenge RM, Weinblatt ME, Shadick NA, et al. (2006) Principal components analysis corrects for stratification in genome-wide association studies. Nat Genet 38: 904–909. doi: 10.1038/ng1847
- 37. Browning S, Browning B (2007) Rapid and Accurate Haplotype Phasing and Missing-Data Inference for Whole-Genome Association Studies. Am J Hum Genet 81: 1084–1097. doi: 10.1086/521987
- 38. Raiko T, Ilin A, Karhunen J (2007) Principal component analysis for large scale problems with lots of missing values. Machine Learning: ECML 2007: 691–698. doi: 10.1007/978-3-540-74958-5_69
- 39. Purcell S, Neale B, Todd-Brown K, Thomas L, Ferreira MAR, et al. (2007) PLINK: a tool set for whole-genome association and population-based linkage analyses. Am J Hum Genet 81: 559–575. doi: 10.1086/519795
- 40. Li JZ, Absher DM, Tang H, Southwick AM, Casto AM, et al. (2008) Worldwide human relationships inferred from genome-wide patterns of variation. Science 319: 1100–1104. doi: 10.1126/science.1153717
Caring for someone with Alzheimer’s disease is a balancing act. You keep your loved one safe and comfortable, keep track of his medications and doctor’s appointments, and give him your love and support. But your life matters, too. It’s just as important to keep up with your work, family, and social life.
In your role as a caregiver, do what you can to be well informed and prepared, and ask for help and support when you need it.
Today, there is no cure for Alzheimer's. Researchers are still trying to fully understand how the disease leads to memory loss and other problems with thinking and behavior. They hope to one day reverse those changes to prevent or stop the disease.
But if you or a loved one has Alzheimer’s, there are treatments that can make a difference. Some therapies ease the symptoms and help people do better for longer. Because the disease’s effects change over time, people often need to have their treatments...
It helps to keep in mind how the disease affects people who have it. If you know what changes to expect, it can help you understand how your role may be different with time.
Alzheimer’s disease is different for everyone who has it. A person’s condition can change a lot. There may be times when your loved one seems pretty normal and can handle his usual activities. Other times, he may be very dependent. The way medications affect him also can vary. The changes can be confusing and may make your loved one seem demanding or dishonest. But it’s just a natural part of the disease.
Your loved one’s symptoms will get worse as years go by. While medicines can slow down this progress, they can’t stop it.
Depression is a part of Alzheimer's as well. It can make symptoms worse and change how well your loved one manages day to day. It's important to know the signs that he might be depressed and to let his doctor know right away.
Bluetooth is designed for high-interference environments, and it is more robust than many wireless technologies. However, you can get some interference from appliances such as cordless phones, Wi-Fi network equipment, and microwave ovens.
If you experience interference between your headset and a paired device, go closer to the paired device and remove any obstacles between the headset and paired device.
Devices with Bluetooth version 1.2 or higher use a technology called “adaptive frequency hopping”: Bluetooth automatically hops away from frequencies with heavy interference toward frequencies with less interference.
Bluetooth radios can switch frequencies 1,600 times per second, and the data packets are so small that interference from other RF sources is unlikely.
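As a rough illustration of the idea (not reference code for any actual Bluetooth radio), a hopping loop that skips a hypothetical band of noisy channels might look like this:

```python
# Toy adaptive-frequency-hopping sketch: hop pseudo-randomly across the
# 79 classic-Bluetooth channels, skipping channels marked as noisy.
# The "bad" band below is a hypothetical block of Wi-Fi interference.
import random

channels = range(79)      # classic Bluetooth: 79 channels of 1 MHz each
bad = set(range(11, 33))  # hypothetical interference band

good = [c for c in channels if c not in bad]
hops = [random.choice(good) for _ in range(10)]
print(hops)
# A real radio also re-measures channel quality and updates the channel
# map on the fly, which is what makes the hopping "adaptive".
```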
Many people have heard about the history of Thanksgiving in the first New England colony. It was very cold those first few years, and the Mayflower passengers were trying to get ready for the hard winters ahead. Luckily, thanks to the leadership of men like Edward Winslow (my ancestor) and William Bradford (also my ancestor), the group stayed alive and made it through those first very tough winters. This is where the concept of Thanksgiving came from. But did you know it wasn't until 1621, after the harvest, that the Pilgrims held a feast? It was not called Thanksgiving, but this is where the concept came from. In fact, it was not an annual event at all, like it is today.
The Mayflower passengers and Pilgrims did indeed give thanks, as they were deeply religious, but for them giving thanks of any kind would have been done by fasting, not by overindulging. Are you looking for a reason to go on a diet this Thanksgiving season? Well, you have one now, and just think: a skinnier you is on the horizon. The fable of the annual Thanksgiving feast comes from the writings of my two ancestors, who kept very good journals, which is a blessing for historians. Thanks to William Bradford and Edward Winslow and their excellent accounts of the feast, we were all able to recreate this event as a holiday for all Americans. I am proud to be a descendant of both men.
We do know that during the 1621 harvest feast they invited the Indians (90 of them). They may or may not have had an actual turkey; if they did, they would have barbecued or roasted it. We also know they had to have eaten outside, as there were a great many people at the event and those houses were really small.
Historians are pretty sure that there were pumpkins, pie, bread, wine, and probably birds such as ducks, geese, and perhaps turkeys, although we just don't know that for sure. There was venison, which was brought by the tribal leaders for all. The feast lasted three days, and this is what makes it so memorable. It has been a pleasure to help you understand what great men these were and how the colony helped start this great nation from scratch. Think on this.
"Lance Winslow" - Online Think Tank forum board. If you have innovative thoughts and unique perspectives, come think with Lance; www.WorldThinkTank.net/wttbbs/
Clivia caulescens is a strong grower with stems to 2m, flowers in midsummer, and will attract birds to the garden.
Description: Clivia caulescens is an evergreen, bulbous plant, producing rhizomes which tend to sucker and in time, grows into a large plant. The stem develops to a height of 0.5-2.0 m and 30-40 mm in diameter. In time, the lower portion of the stem becomes leafless, leaving only the leaf scars. The leaves are dark green, 400-900 x 50 mm. The inflorescence consists of 15-20 orange to cream-coloured, pendulous flowers. The petals curve outwards at their tips which are green. This species flowers in summer (October to November). The light yellow to almost purple berries vary from round to oblong and ripen after nine months.
Distribution: This species is fairly common in the Mpumalanga and northern provinces, occurring at Sabie, Mount Sheba, God's Window and as far north as the Soutpansberg, at altitudes ranging from 1 500 m to over 1 770 m at Elandshoogte, where the mountains can be covered in snow in the winter months. The area experiences summer rainfall.
Clivia caulescens occurs in forests in leaf mould, in leaf mould on rocks, even on old decaying tree stumps or on the branches of trees. Populations at altitudes of 1 500 m are subject to mist, snow and extreme cold conditions. Like all clivia species, C. caulescens is a long-lived plant and can survive indefinitely under ideal conditions which consist of light shade, a well-drained growing medium, cool conditions ranging from 3ºC-28ºC, and adequate moisture.
Although a mature plant of this species will survive frost, all the leaves will be burnt and the plant will take a couple of years to recover.
Derivation of name and historical aspects
Clivia caulescens was observed for a number of years and collected on several occasions before it was described in 1943. Initially there was doubt whether there was sufficient justification, as C. nobilis and C. gardenii are similar, with the exception that they do not develop the tall stem which C. caulescens does. The specific name caulescens refers to the fact that this species has a stem.
Natural hybrids of C. miniata and C. caulescens do occur and produce vigorous plants of about a metre in height with broad, dark green leaves and delightful pendulous, pale salmon-coloured flowers in midsummer and sometimes in midwinter.
The populations of Clivia caulescens occur in isolated pockets today as the forests of Africa have shrunk through time. Little is known about the pollinating agents of Clivia, although it is thought that the pendulous species, with their large number of flowers, are self-pollinating as well as bird-pollinated, as they produce nectar which would attract birds and insects. The seed has been seen to be transported by samango and vervet monkeys as well as by the Knysna Loerie and other birds. Rodents are also responsible for distributing the seed. Mice as well as rats consume the soft tissue which covers the seed and they then leave the seed to germinate once they have finished their meal.
Uses and cultural aspects
Fortunately, Clivia caulescens does not appear to be sought after for medicinal and spiritual purposes by the indigenous people. Many of the populations occur in inaccessible places such as vertical cliffs, therefore the species is not regarded as threatened.
Growing Clivia caulescens
In cultivation, Clivia caulescens requires a frost-free area with light shade, a well-drained growing medium, cool conditions and not a great deal of water. As the plant grows fairly tall, the following shade-loving plants complement this species: Streptocarpus species, Asparagus densiflorus, Veltheimia bracteata, Clivia miniata and C. gardenii. The two additional Clivia species flower at different times to C. caulescens and give added interest and colour in the garden. It also makes a good pot plant.
An annual application of a 100 mm layer of compost plus an organic fertilizer will keep the clivia in good condition.
Propagation from seed requires harvesting the seed nine months after flowering. Remove the soft covering tissue and sow the fresh seed in a growing medium of matured pine bark at a depth which just covers the seed. A 15 cm pot is ideal for sowing the seed, which must be kept moist and in the shade. Once the leaves of the seedlings are 50 mm long they can be pricked out into a 15 cm pot (3 seedlings per pot) and left to grow on for a year. Regular feeding with a balanced fertilizer is essential for good growth.
After a year, repot the seedlings into individual 20 cm pots where they should flower 3 to 4 years from sowing. The other method of propagation is by dividing the large plant. Firstly remove all of the growing medium from the roots. The numerous suckers which have developed can now be carefully separated from the main plant. They can either be planted in a pot or directly into the garden. They should flower the following year. Division can be done at any time of the year except when the plant is in flower.
References and further reading
- Abel, C. & Abel, J. 2003. Some observations on Clivia caulescens. Clivia 5: 66. Clivia Society.
- Dyer, R.A. 1943. Clivia caulescens. The Flowering Plants of South Africa 23: t. 891.
- Koopowitz, H. 2002. Clivias. Timber Press, Portland, Oregon.
|
<urn:uuid:830f1537-5e70-4e00-8822-cb951728f18e>
|
CC-MAIN-2016-26
|
http://www.plantzafrica.com/plantcd/cliviacaul.htm
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783399385.17/warc/CC-MAIN-20160624154959-00082-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.937618
| 1,209
| 2.625
| 3
|
Our Program Models
Strum & Sing
Learn to Play Guitar, Sing, Lead Songs, and Facilitate Student Songwriting
This is our basic model that trains teachers, staff and volunteers to play simplified guitar, sing and lead songs, first in Open G tuning and then in standard tuning. Strum & Sing training aims to get everyone comfortable on the guitar, and applies to teachers in every grade and teaching environment. It includes songs that appeal to children of all ages, on topics that are taught across the grades. Teachers also learn to write songs for teaching lessons by applying specific song forms to the lessons at hand. Once they can do this, they learn to help their students do the same. They may also, at the Strummer levels of the program, learn to facilitate hands-on experiences with guitars in their classrooms! The Strum & Sing classes are offered at these levels sequentially: Beginner, Beginner Plus, Strummer, Strummer Plus and Song Leader.
The AMIGO Program
Achievement through Music Integration with Guitars
Singing songs loaded with new vocabulary and important ideas helps students acquiring English as a second language learn faster than ever. The magic is Oral Language Practice or, in plain terms, learning, using and repeating the new language as a natural and fun part of singing songs in class. AMIGO relieves fear and shyness for new English speakers because they can sing in their new language with their classmates. It is practice in a joyful, shared experience in which everyone participates equally.
This eight-week class helps teachers working with English Learners (ELs) make the most powerful difference possible for their students. Test scores go up with the students’ ability to memorize new language and reproduce it at test time by singing silently to themselves. The proof is on the paper because these songs really stick, boosting student understanding and competence in every different subject area.
The program has been generously funded by The NAMM Foundation.
Early Childhood Education
Musical Learning When It Counts the Most
Young children acquire language and musicality at the same time if given the opportunity because these skills occur in the same parts of our brains. Between birth and six, we humans experience rapid brain development. So getting music into the lives and hands of young children when they are learning everything from sounds, letters and words to numbers, colors, weather, transportation, feelings and so much more truly sets them on a musical path to success.
Of course, preschools have always been filled with music. But few early childhood educators actually know how to play guitar, write songs with children or how to facilitate musical learning beyond singing familiar songs. Many lead songs using cassettes or CDs or by singing unaccompanied. Playing live guitar is so much better because it allows the music to be led in a way that is responsive to the children, and it gives the teacher a powerful tool for captivating the students and keeping them engaged. It also builds everyone’s ability to sing in tune and with feeling. The early childhood educator who models being a music maker for young children may inspire them to similar achievement.
The GITC ECE program trains teachers working with children from ages birth to six to instill basic musicality in young children through the integration of songs, games, activities, movement, and fingerplays. Teachers also learn to facilitate first hands-on experiences for young children with guitar and ukulele as a part of this program. These programs happen in Head Start classrooms, early learning centers, daycare centers, kindergartens and preschools.
MIRSE (pronounced “mercy”)
Music Integration for Resource and Special Education
Students in Special Education classes or who receive services through their school’s resource program often miss out on making music at school. They may be pulled out of class to see a specialist during music time or may not be visited by the music teacher at all unless they are in a mainstreamed classroom. GITC aims to help these students receive daily integrated musical experiences to help them learn, grow, enjoy school and excel. Sometimes making music proves to be the way students who learn differently can really shine. No matter what, it can help them connect with other students and engage in challenging lessons.
MIRSE is a pilot program intended to develop ways that special educators can lead music meaningfully for students with special needs through differentiated instruction. This works because music is so adaptable. When a teacher has many ways to vary the dynamics, tailor the song choices, physically adapt the use of instruments, and spotlight students to do what they love and help lead the music making, she can create breakthrough experiences for her students.
This year, GITC looks forward to working closely with the staff at TERiinc.org to learn new ways to provide integrated musical learning for students in all of TERi’s programs. In 2013-14, we hope to offer MIRSE trainings more broadly, based on discoveries and best practices established in this exciting pilot program. To learn about TERi’s mission and vision, please visit http://www.teriinc.org/about-us/mission-a-vision.html
Please share our Stories of MIRSE: Musical Miracles, a beautiful compilation of stories about music’s power to change the lives of individuals with special needs, written by GITC faculty members, here. We hope you enjoy them.
|
<urn:uuid:7a5e5d9a-2f5b-4b04-9721-e0fe32673091>
|
CC-MAIN-2016-26
|
http://www.guitarsintheclassroom.org/our-programs/gitc-program-models/
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396945.81/warc/CC-MAIN-20160624154956-00130-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.955569
| 1,110
| 3.03125
| 3
|
School Culture and Diversity
Reflective Professionals (Agents of Change) are able to create a school culture that acknowledges the diverse needs of students. To this end, the teacher must have a meaningful understanding of how cultural differences are related to school achievement, as well as an appreciation of the need to promote an inclusive and equitable school environment for all students.
Education and diversity are intricately related. Historically, public schools have assumed the responsibility of educating an increasingly diverse population of students. In 2010, 82% of the U.S. population lived in urban centers (Central Intelligence Agency, 2012). In 2002, there were over 37 million Latinos/Hispanics living in the United States (67% Mexican origin, 14% Central and South American, 9 % Puerto Rican, 4% Cuban). "Nearly half of all youth (in 2012) are something other than non-Hispanic white," says Nicholas Jones of the Census Bureau's racial statistics branch. The populations of black and white children are declining. "It's clear that the growth of our nation's children will continue to be dependent on new minorities," says demographer William Frey of the Brookings Institution. Minorities now make up more than 36% of the population, up from 31% in 2000. (Al Nasser, H., Census: Hispanic, Asian Populations Soar, USA Today, 3/25/2011)
How can teacher educators respond to our increasingly diverse population of students and prepare candidates to effectively instruct students from a broad range of racial, cultural, linguistic, sexual, ability, and economic backgrounds in a culturally responsive way that respects and honors their divergent perspectives (Price-Dennis & Souto-Manning, 2011)?
The IUN SOE vision of diversity, represented by an adaptation of the Circle of Courage from the Sioux Nation, is a medicine wheel, a sacred circle, divided into 4 quadrants. The sacred circle suggests the interconnectedness of life and represents the sacredness of the number four - the four directions, the four elements of the universe, and the four races. According to this model, all four parts of an individual's "circle" must be intact for that person to have a self-secure, pro-social approach to life. Relative weakness in any of the four areas of development results in adjustment difficulties.
Our Circle of Courage at the School of Education transposes the original four quadrants into four constructs: belonging; equity/social justice; cultural awareness/self-identity; and family/community. Belonging, within the original Circle of Courage, is maintained due to its importance in education.
Within the equity/social justice construct, candidates are prepared to assume an active role in shaping the social, cultural, and political future of their communities and beyond. Teachers must know how, and be able, to provide students with equitable access to knowledge and an understanding of the realities of their lives. Teacher educators, therefore, help candidates to acknowledge and support the personal and individual dimensions of experience while making connections to and illuminating the systemic dimensions of social group interaction. Teacher educators also help candidates to develop effective strategies for managing classroom situations involving discrimination or cultural conflict by differentiating classroom management based on the needs of the student, ultimately leading to successful membership in the classroom community.
Cultural awareness is the ability and willingness to objectively examine the values, beliefs, traditions and perceptions within our own and other cultures. At the most basic level, it is the ability to walk in someone else's shoes in terms of his or her cultural origins. In a teacher education program, cultural awareness and self-identity are important in that they allow the preservice teacher to reflect on, evaluate and acknowledge their own cultural identity and how that identity shapes their perceptions of and relationships to the students they serve.
As such, the unit believes that it is important for our candidates to assess how their personal backgrounds and experiences create biases and assumptions that affect how they interact with students in creating a learning environment. Candidates must reflect on, evaluate and acknowledge their own cultural identity, knowledge and experiences, and consider how these may differ from those of their students.
The final quadrant, family and community, represents an expansive literature base drawn from the social sciences, education, and medicine. In order to effectively instruct all learners, teachers must understand the various ways people envision family structures, and how diverse family and community values and practices can affect educational motivation and achievement. To teach an individual well, the teacher needs to understand the family context, and the family within the context of the micro and macro cultures (Szapocznik & Kurtines, 1993).
Poverty impacts many children and their families living in urban areas. Studies have documented the association between family poverty and children's health, achievement, and behavior. Studies have clarified how family income has substantial effects on child and adolescent well-being (Brooks-Gunn, & Duncan, 1997). Specifically, children who live in extreme poverty or who live below the poverty line for multiple years suffer the worst outcomes. Additionally, children who lived in poverty during their preschool and early school years had lower rates of school completion than children and adolescents who experienced poverty only in later years.
For teachers to effectively teach an increasingly diverse student population, they need to develop specific knowledge, skills, and dispositions related to diversity. Teachers can increase their understanding of their students’ cultures by listening to families with respect and without judgment (Pang, 2011).
|
<urn:uuid:c6b30520-97bc-4e36-b25b-0eb586b41be8>
|
CC-MAIN-2016-26
|
http://www.iun.edu/education/initial-programs/kb-school-culture-and-diversity.htm
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397842.93/warc/CC-MAIN-20160624154957-00115-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.948779
| 1,115
| 3.515625
| 4
|
Easby Abbey or the Abbey of St Agatha is an abandoned Premonstratensian abbey on the eastern bank of the River Swale on the outskirts of Richmond in the Richmondshire district of North Yorkshire. The site is maintained by English Heritage and can be reached by a riverside walk from Richmond Castle.
The Abbey of St. Agatha, Easby, was founded in 1152 by Roald, Constable of Richmond Castle. The inhabitants were canons rather than monks. The Premonstratensians wore a white habit and became known as the White Canons. The White Canons followed a code of austerity similar to that of the Cistercian monks. Unlike monks of other orders, they were exempt from episcopal discipline. They undertook preaching and pastoral work in the region (such as distributing meat and drink).
|
<urn:uuid:c645232d-454e-4eaf-8bf9-093f55ee4d52>
|
CC-MAIN-2016-26
|
http://www.redbubble.com/people/hanspeder/works/11737188-abbey-of-st-agatha
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397565.80/warc/CC-MAIN-20160624154957-00100-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.971367
| 170
| 2.53125
| 3
|
Cleaning supplies often contain harmful chemicals that, when introduced to the environment or to the human body, can have devastating effects. 10% of the reported poisonings at Poison Centers are caused by cleaning supplies. Many contain VOCs, phosphates, EDTA and phenolic compounds. These compounds can be carcinogenic to humans and create hazards, such as algal blooms, in the environment. Air pollution is another major concern. An EPA study found that cleaning products can contribute to elevated indoor air pollution levels that register up to 200 times those typically found outside. Additionally, many cleaners are not biodegradable and persist in the environment for long periods of time. They especially create problems in aquatic habitats where they often wind up after improper disposal. A staggering quantity of cleaning supplies are disposed of improperly. In Minnesota alone, 700 tons of cleaners are dumped down the drain every month.
Look for the following characteristics when purchasing cleaning products:
- Both Biodegradable and Non-Toxic: To humans, as well as fish and animals.
- No EDTA or NTA: These ingredients should be avoided because they are suspected carcinogens.
- No Phosphates: These are responsible for algal blooms that devastate aquatic environments and should be avoided.
- No Chlorine Bleach: If bleach containing chlorine is introduced to the waste stream, it can react with other compounds to create harmful chlorinated organic compounds.
- Vegetable Based: Look for cleaners that use vegetable based surfactants.
- Low VOC Content: Green Seal recommends using cleaners that have less than 10% VOC content by weight in the concentration intended for use.
- Concentrated: Buy cleaners in concentrated form to cut down on packaging.
- Dilution in Cold Water: This cuts down on energy costs associated with heating water.
TheGreenOffice.com offers a wide selection of eco-friendly cleaning supplies.
Using environmentally-sound cleaners in a conscientious manner will considerably improve your office’s green performance:
- Dilute cleaners in cold water to reduce the energy consumption associated with heating water.
- Use cleaners sparingly – a little goes a long way.
- Try to clean messes with soap and water before moving to heavy-strength cleaners.
- Use cotton rags instead of paper-towels: They can be reused and then recycled.
- Consider cleaning with vinegar, baking soda, and lemon.
|
<urn:uuid:c03e12bc-9227-4a1e-b1c6-e22f37c9428c>
|
CC-MAIN-2016-26
|
http://www.thegreenoffice.com/go-green_greening-guide_white-papers_products-and-materials_cleaning-supplies
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396949.33/warc/CC-MAIN-20160624154956-00039-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.918091
| 496
| 3.140625
| 3
|
February 2, 2012
New National Archives Video Short Documents 1297 Magna Carta Encasement Project
Magna Carta to Return to Public Display on February 17
Washington, DC…The National Archives is today releasing a short documentary video, “The Encasement of Magna Carta.” The video is part of the ongoing series Inside the Vaults, and can be viewed on the National Archives YouTube channel: http://tiny.cc/MAGNACARTA. The video shows the fascinating behind-the-scenes creation of the case that will display the 715-year-old document for public viewing. The 1297 Magna Carta being encased is one of only four remaining 1297 originals. Magna Carta is said to have influenced early American settlers and been an inspiration for the Constitution of the United States.
Magna Carta is on loan to the National Archives from its owner, philanthropist and co-founder of the Carlyle Group, David M. Rubenstein. Mr. Rubenstein underwrote the conservation treatment of the document and the fabrication of its new encasement. The encasement was designed by the National Archives in cooperation with the National Institute of Standards and Technology (NIST) who fabricated the encasement.
Mark Luce, director of fabrication services at NIST, Jay Brandenburg, project engineer and Charles Tilford, a physicist now retired from NIST, explain how the encasement was fabricated and assembled. Project manager Catherine Nicholson and supervisory conservator Terry Boone, both of the National Archives, discuss the conservation treatment and mounting of Magna Carta inside the encasement.
The encasement was machined at NIST out of two solid blocks of aluminum and sits on a unique cart designed to support the document on exhibit. The encasement is air tight and filled with humidified argon, an inert gas that unlike oxygen will not degrade the document. Elaborate instruments continuously monitor conditions within the encasement for humidity and evidence of leaks.
This is the second short documentary produced by the National Archives about Magna Carta. The first, “The Conservation Treatment of Magna Carta” can be viewed at this link: http://tiny.cc/MAGNACARTA2
Background on “Inside the Vaults”
“Inside the Vaults” is part of the ongoing effort by the National Archives to make its collections, stories, and accomplishments more accessible to the public. “Inside the Vaults” gives voice to Archives staff and users, highlights new and exciting finds at the Archives, and reports on complicated and technical subjects in easily understandable presentations. Earlier topics include the conservation of the original Declaration of Independence, the new Grace Tully collection of documents at the Franklin Delano Roosevelt Presidential Library, the transfer to the National Archives of the Nuremberg Laws, and the launch of a new National Archives user-friendly search engine. The film series is free to view and distribute on our YouTube channel at http://tiny.cc/Vaults
Created by a former broadcast network news producer, the "Inside the Vaults" video shorts series presents “behind the scenes” exclusives and offer surprising glimpses of the National Archives treasures. These videos are in the public domain and not subject to any copyright restrictions. The National Archives encourages the free distribution of them.
# # #
For Press information, contact the National Archives Public Affairs staff at 202-357-5300.
|
<urn:uuid:ea4d79cf-536a-4144-a75c-a8c0fd532a8d>
|
CC-MAIN-2016-26
|
http://www.archives.gov/press/press-releases/2012/nr12-63.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396887.54/warc/CC-MAIN-20160624154956-00113-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.89016
| 716
| 2.671875
| 3
|
You are hereSt Mary's Naval Barracks, Chatham
St Mary's Naval Barracks, Chatham
The fortification of Chatham started in 1756 and was further improved between 1805 and 1812 in the face of French aggression and the Napoleonic War. Demolished in the 1960s, St Mary's Barracks dated from between 1779 and 1782 and was built to house the prisoners who were used to build the fort; this of course included French prisoners. St Mary's Barracks was later converted to a large powder magazine, then into a store for the Royal Engineers, before becoming a barracks housing infantry between 1844 and 1881. Between 1881 and 1941, when it was taken over by the Royal Navy, it was occupied by various units of the Royal Engineers.
It is while St Mary's Barracks was being used by the Royal Navy that reports of a ghost were made. According to the Daily Mirror of 31 May 1946, in an article entitled 'Navy log ghost on crutches at midnight':
“The Royal Navy has officially logged a complaint ratings have made about a ghost on crutches that walks at midnight.
“The ratings, sentries on night watch at a naval barracks, say that the ghost follows them around during the middle watch (midnight to 4am) and they have protested against doing solitary duty on that watch.
“The ghost that has been officially recognised by the Navy haunts St Mary’s Barracks, Chatham, Kent.
“Sentries doing duty on the ramparts overlooking the moat around the barracks say that when they are doing their rounds footsteps they cannot account for and a tapping, as of someone walking with crutches, are heard.
“One young sentry who felt that the ghost was near him panicked and ran to the guardroom for protection.
“Another rating claims to have seen the ghost. He described it as dressed in naval uniform of Nelson’s days, and said it was hobbling along the ramparts on crutches ...”
In 'Phantoms, Legends, Customs and Superstitions of the Sea' (1972), Raymond Lamont Brown says: 'In the log book at Chatham Naval Barracks this simple entry can be read: "Ghost reported seen during middle watch".' This refers to the ghost of a peg-legged sailor of Admiral Lord Nelson's time. This ghost hobbles around with a crutch and has been seen twice, in 1947 and 1949, in Room 34 of the Cumberland block, the oldest part of St Mary's Barracks. This apparition is thought to be the ghost of a sentry murdered by escaping French prisoners during the Napoleonic wars. The unfortunate sentry was apparently beaten to death as he was making for Room 34 to wake his relief, who was late for duty.
|
<urn:uuid:e5fdaa75-343c-4242-b46f-016b88f84da1>
|
CC-MAIN-2016-26
|
http://www.mysteriousbritain.co.uk/england/kent/hauntings/st-marys-naval-barracks-chatham.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395546.12/warc/CC-MAIN-20160624154955-00018-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.981456
| 585
| 2.875
| 3
|
Nathaniel Hawthorne was on a remarkable run in the summer of 1852. He had previously won acclaim for his short tales, but a succession of novels that included The Scarlet Letter in 1850, The House of the Seven Gables in 1851, and The Blithedale Romance early in 1852 had launched him into the pantheon of great American writers. He had dug deep in the soil of his native Massachusetts to produce stories rich in historical meaning, contemporary relevance, and psychological intrigue. Given the success of the formula, it is something of a surprise that his next move was to cross the border into New Hampshire and begin plowing up the rocky dirt of the Granite State to produce a political biography of Franklin Pierce, the eventual 14th president of the United States.*
Though Pierce is often spoken of as one of the more handsome commanders-in-chief, he is also regarded as one of the worst. What was one of America’s greatest writers doing shilling for a candidate who was counted a mediocrity even in his own day? Campaign biographies had been a regular feature of presidential politics since Andrew Jackson’s emergence in the 1820s, but as one contemporary reviewer noted, “there are ‘hacks’ enough ... in every city, who would be right and well fitted to perform such filthy work.” Then, as now, The Life of Franklin Pierce (which the same review called a “venal homage to ambitious mediocrity”) seemed beneath the talents of Nathaniel Hawthorne.
The easy explanation for why Hawthorne chose to “degrade his pen into a party tool” is that he and Pierce were old mates from their student days at Bowdoin College in the 1820s. When Pierce won the Democratic Party’s nomination in 1852, Hawthorne quickly volunteered to write “the necessary biography,” seemingly as a gesture of friendship. In the book’s preface, he explained that he was “so little of a politician” that he could hardly “call himself the member of any party.” As such, he was entirely unaccustomed to this brand of writing “intended to operate on the minds of multitudes during a presidential canvass.” The book was merely the testimonial of a friend who had known the candidate at a formative period of life.
Yet the actual content of The Life of Franklin Pierce, and the spoils of victory that Hawthorne received for writing it, suggest other layers to the story. The book is a political biography as adept at aligning Pierce’s life story with the needs of the party as anything that could have come from the pen of a committed Democratic operative. And Hawthorne was acutely aware that, within the workings of 19th-century party politics, being serviceable to a victorious cause could come with handsome rewards in the form of a lucrative government office. When Pierce won, he gave his old friend a plush diplomatic post in England, but in the end, neither man proved better off for having taken their new job.
Writing a sympathetic life of Franklin Pierce was a tall order that would take every last bit of Hawthorne’s literary talent. Members of the opposition Whig Party raised a fair question when they chanted jubilantly, “Who is Frank Pierce?” Pierce’s supporters had given him the hopeful moniker, “Young Hickory of the Granite Hills,” but his accomplishments in life seemed a far cry from those of Andrew Jackson (“Old Hickory”), whom the name invoked. Though Pierce had served in the Congress and the Senate (being the youngest man yet elected to the latter body), he had hardly distinguished himself and had little reputation beyond his native New Hampshire. There, he had risen quickly in politics, in part through his charm, but mostly by virtue of being the son of Benjamin Pierce, a Revolutionary War hero and former governor of the state. Pierce had served in the recent Mexican-American War (1846-1848), but he had certainly not earned the glories of his opponent in the election, Gen. Winfield Scott.
Indeed, the uncertainty surrounding Pierce’s Mexican War record was a significant cause for concern. He had volunteered in 1846, and was soon promoted to the rank of brigadier general on the strength of his political stature and connections. To date, his military qualifications consisted of nothing more than a stint as the organizer and captain of the “Bowdoin Cadets,” a college group he led in marching exercises across the quadrangle (and of which Hawthorne had briefly been a member).
Pierce’s lack of experience in battle quickly became apparent. In his brigade’s first engagement, his horse spooked at the sound of artillery fire and began bucking and rearing wildly. A strong kick of the back legs sent Pierce lurching forward into an awkward and blindingly painful pelvic encounter with the pommel of his saddle. The hero fainted and fell to the ground, only to have his horse fall on his knee and a subordinate allegedly call him a “damned coward” when he didn’t get up. Unable to walk or ride, Pierce’s superiors ordered him to withdraw from action. He gallantly insisted on staying in the field, which he did until the next day, when he fainted again after wrenching his bad knee marching across marshy terrain. Pierce’s brigade was present for the climactic battle of Chapultepec, but Pierce himself was conspicuously absent, suffering terribly from Montezuma’s revenge in the sick ward.
Even leaving aside the undistinguished years in Congress and rumors of a weakness for the bottle, Pierce’s war record was enough to make a swiftboat—or, less anachronistically—saddle-pommel campaign an easy and perhaps even truthful enterprise. Given the circumstances, Hawthorne’s brother-in-law, the famous education reformer Horace Mann, made what was to be a common mockery of the biography, claiming that “if he makes Pierce out to be either a great or brave man, then it will be the greatest work of fiction he ever wrote.”
The Life of Franklin Pierce is not fiction, at least not the kind that has tormented generations of high-school students trying to get through The Scarlet Letter. But it is a kind of literary invention, one that anyone who has come across a recommendation for a so-so student (which, incidentally, Pierce was) will recognize. As Hawthorne told a friend after finishing the book, “though the story is true, yet it took a romancer to do it.”
“The gist of the matter,” Hawthorne said, lay in explaining how Pierce remained “so obscure” in spite of “such extraordinary opportunities for eminent distinction, civil and military.” Hawthorne’s answer was to emphasize Pierce’s growth and development, which he claimed in a tortured turn of phrase “has always been the opposite of premature.” Pierce’s lack of distinction was the result of his tendency toward slow, measured progress. Hence, if Pierce wasn’t quite “distinguished for scholarship” in his early years at Bowdoin, Hawthorne assured readers that he worked hard to rise in his class rankings “without losing any of his vivacious qualities as a companion”; if he didn’t initially “give promise for distinguished success” as a lawyer, he had eventually become an able advocate at the New Hampshire bar.
So, too, with Pierce’s congressional and military records. Detailing Pierce’s generally quiet tenure in Congress during the 1830s and early 1840s, Hawthorne notes that Pierce “rendered unobtrusive, though not unimportant, services to the public.” So unobtrusive was Pierce in Congress that even Hawthorne must admit that had Pierce been a bit more ostentatious of his “genuine ability” while in Congress, “it would greatly have facilitated the task of his biographer.” If Hawthorne’s Brig. Gen. Pierce is somewhat hapless with horses and poorly timed illnesses, he emerges through some delicate narration and clunky dialogue as a dutiful and determined leader of men. In one exchange, he convinces his superior officer and opponent in the presidential race, Gen. Scott, to allow him to fight on in spite of his injured knee. When Pierce’s second fainting spell seems to bear out the wisdom of Scott’s original order, Hawthorne notes that the fall came “within full range of enemy fire.” Thus, if he went down “faint and insensible,” he at least did so valiantly.
Yet burnishing Pierce’s political and military background was only part of Hawthorne’s charge in writing the book. In addition to a testimony to Pierce’s character and mettle, Hawthorne also needed to produce a document that would reveal him as a Democrat who could hold both the party and the country together, no easy task in the wake of the Mexican-American War that had not quite made him famous. Victory in that war had brought enormous swaths of western territory under American control while raising politically explosive questions about the future status of slavery there. Rancorous debate over how or whether to restrict the westward spread of the peculiar institution had created bitter divisions within the two major political parties and spawned a third, the Free Soil Party. The Compromise of 1850 had established a rough truce, but the election of 1852 threatened to break tenuous political alignments.
Hawthorne needed to show that Pierce, however blurry his background, was the perfect man for the moment. Though hardly the Democrats’ first choice (he won the nomination on the 49th ballot at the convention), he made sense as a selection first and foremost as a “doughface”—a Northerner with Southern sympathies and conservative views on slavery. Any indication that Pierce’s sympathies did not lie with the preservation of slavery would cause the party’s Southern wing to bolt and potentially open a path to victory for Scott. At the same time, emphasizing Pierce’s status as a doughface threatened to push anti-slavery Democrats into the splinter Free Soil Party, fracturing the party in the North.
Hawthorne found a solution in deftly playing up Pierce’s longstanding support for the South while shaming any Northerners who would place their opposition to slavery above the preservation of the party and the Union. Noting Pierce’s support as a Congressman for the so-called “gag rule” that automatically tabled any anti-slavery petitions that came before Congress, Hawthorne added that Pierce had “dared to love …. his whole, united, native country” over “the mistiness of a philanthropic theory.” More recently, Pierce had given his undying support to the Compromise of 1850 and its controversial Fugitive Slave Law, which granted the government broad powers to return escaped slaves to Southern masters. That support, Hawthorne claimed, was the result of a deeper wisdom than that possessed by the “least scrupulous” of anti-slavery agitators. The wise view “looks upon slavery as one of those evils which divine Providence does not leave to be remedied by human contrivance,” one that would eventually “vanish like a dream.” Better, he said, to stick with Pierce and the Union and let a higher power take care of the rest.
Reviews of the book were mixed, with opinions tending to split along party lines, but Hawthorne found the reaction from his fellow New England writers particularly negative. His politicking was unseemly enough, yet the fact that it included such forceful and sincere condemnation of abolitionism made it far worse. In a year that saw the wild success of Harriet Beecher Stowe’s anti-slavery novel Uncle Tom’s Cabin and impassioned opposition to the fugitive slave law, Hawthorne was moving against the stream of sentiment within the literary community. The abolitionist minister Theodore Parker noted that Hawthorne’s statements made him one of the two “men of Genius in this age” (the other being the Scottish writer Thomas Carlyle) to come out “on the side of slavery” and “the enemies of mankind.” In a letter to a friend Hawthorne admitted that “the biography has cost me hundreds of friends, here at the north . . . in consequence of what I say on the slavery question.” For the trouble, he added, “Pierce owes me something.”
Hawthorne was no stranger to the political spoils system, having famously been given the post of surveyor at the Salem Custom House by Democratic friends in 1846 only to be turned out by Whig foes in 1849. When Pierce carried all but four states in the election, Hawthorne returned to the right side of the spoils. The book had made him into a regular party functionary, and while Hawthorne contemplated different diplomatic posts he might take, friends and relations crawled out of the woodwork seeking his influence with the new president. Among the supplicants lobbying for Hawthorne’s help in finding their own jobs in the Pierce administration was Herman Melville, still down on his luck after the supreme flop of Moby-Dick the previous year. Unfortunately, there were limits to Hawthorne’s sway; he was able to find an aged uncle work as a repairman at the Salem Custom House, but his efforts yielded nothing for Melville.
Hawthorne himself landed precisely the position he wanted—the consul of Liverpool, a post that his wife claimed was “second in dignity to the Embassy to London.” Dignity aside, it was a lucrative one that promised to make up for the meager financial rewards of writing. In addition to paying a regular salary, the post also came with a cut of American shipping that went through the port. Hawthorne may have welcomed the income, but it came at a high cost. He struggled to balance his diplomatic work with his writing and seven years passed before his next work, The Marble Faun. Meanwhile, at home, Pierce did not rise to the office of the presidency, as the biography’s theme of development suggested he would. Shattered by the tragedy of his son’s death just months before taking office, Pierce seemed out of his depth in the face of escalating sectional conflict.
By the time Hawthorne returned to the United States in 1860, the country was well on its way to war. It was a story that Hawthorne hadn’t imagined from his safe distance in England and later Italy, though it was one that his friend Pierce had at least some role in making during his single term in office. As the war became a merciless struggle against slavery that hardly fit with Hawthorne’s image of the institution vanishing like a dream, he remained hostile to the cant of abolitionism. Troubled by the events unfolding around him and hobbled by failing health, Hawthorne lost his literary voice and struggled to bring any of his work to completion. Pierce, for his part, opposed what he called a “cruel, heartless, aimless, unnecessary war.” With Hawthorne by his side, he bitterly denounced Lincoln, emancipation, and the course of the conflict in a poorly timed speech on July 4, 1863—just a day after the Union triumph at Gettysburg. Both men seemed increasingly estranged from the world being wrought by the Civil War, so it was fitting that the two should have embarked on a carriage tour of Pierce’s New Hampshire the following May in hopes of restoring Hawthorne’s health. His health, however, was too far gone; he died in his sleep on May 19, 1864, discovered early that morning by his old friend Franklin Pierce.
Correction, Sept. 17, 2012: This article originally referred to Franklin Pierce as the 13th president of the United States. Pierce was the 14th president. (Return to the corrected sentence.)
|
<urn:uuid:8258d43c-7ccf-42df-af35-01d3cdeb8c3b>
|
CC-MAIN-2016-26
|
http://www.slate.com/articles/news_and_politics/history/2012/09/nathaniel_hawthorne_s_biography_of_franklin_pierce_why_d_he_write_it_.single.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395992.75/warc/CC-MAIN-20160624154955-00142-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.981597
| 3,393
| 2.859375
| 3
|
This site has reviewed quantum computing algorithms and tried to clarify how quantum computers relate to encryption and decryption.
Some quantum computers can, or could, use Shor's algorithm to break the main public key cryptosystems: those based on the difficulty of factoring and of the discrete logarithm. But there are still public key cryptosystems that are so far resistant to both quantum and classical attacks (such as those based on certain shortest-vector-in-a-lattice problems). Quantum computers cannot break every code in existence.
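To make concrete why the factoring-based systems are the vulnerable ones, here is a minimal classical sketch (in Python) of the number theory at the heart of Shor's algorithm. The only step a quantum computer speeds up is finding the period r of a^x mod N; once r is known, recovering the factors is easy classical arithmetic. The brute-force period finder and the toy modulus N = 15 below are illustrative assumptions, not a real attack.

```python
from math import gcd

def find_period_classically(a, N):
    """Brute-force the smallest r with a^r = 1 (mod N). This is the
    step Shor's algorithm replaces with a fast quantum subroutine."""
    r, value = 1, a % N
    while value != 1:
        value = (value * a) % N
        r += 1
    return r

def shor_factor(N, a=2):
    """Try to split composite N using the period of a modulo N."""
    g = gcd(a, N)
    if g != 1:
        return g, N // g   # lucky guess: a already shares a factor with N
    r = find_period_classically(a, N)
    if r % 2 != 0:
        return None        # odd period: retry with a different base a
    x = pow(a, r // 2, N)
    if x == N - 1:
        return None        # trivial square root: retry with a different base a
    return gcd(x - 1, N), gcd(x + 1, N)

print(shor_factor(15))     # -> (3, 5); the period of 2 mod 15 is 4
```

Lattice-based schemes are unaffected by this trick because their security does not appear to reduce to period finding.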
There are several families of quantum computers.
There is more than one way to make a quantum computer.
Which way you make it determines what problems it can solve.
Lots of people ask questions like:
How many qubits?
What decoherence time?
Think about an analogue computer vs. a digital one, or a programmable logic chip vs. a general purpose computer.
● Gate model – the 'standard' model
● Adiabatic Quantum Computation – a close contender
● Cluster state (measurement based) – slightly more obscure
● Topological quantum computing – slightly more obscure
A discussion of quantum computer algorithms
IEEE - Adiabatic Quantum Computation is Equivalent to Standard Quantum Computation
In 2007, Geordie Rose stated that he believes quantum simulation (the Abrams-Lloyd algorithm) is the most important quantum computer algorithm.
In 2007, Dwave talked about using their system for image recognition. They have done that work with Google.
In 2008, Scott Aaronson said that Dwave talking about lining up customers was comically premature. So was 3 years comically premature? Given that this was the release of an entirely new class of computer, I do not think so. There are video games delivered with more than a 3-year delay, and Microsoft has delayed operating system releases by more than 3 years.
In 2008, I discussed how quantum annealing can achieve a speedup of one million times over classical annealing and classical computers
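For context on what such annealing comparisons actually measure, here is a minimal sketch of classical simulated annealing on a toy Ising-style problem, the kind of classical baseline a quantum annealer is benchmarked against. The problem size, the random couplings and the geometric cooling schedule are arbitrary choices for illustration, not Dwave's actual benchmark setup.

```python
import math
import random

def energy(spins, J):
    """Ising energy: -sum of J[i][j] * s_i * s_j over pairs i < j."""
    n = len(spins)
    return -sum(J[i][j] * spins[i] * spins[j]
                for i in range(n) for j in range(i + 1, n))

def simulated_anneal(J, n, steps=5000, t_start=5.0, t_end=0.01):
    """Minimise the Ising energy with single-spin-flip Metropolis moves
    under a geometric cooling schedule (the classical analogue of the
    annealing schedule a quantum annealer runs in hardware)."""
    spins = [random.choice([-1, 1]) for _ in range(n)]
    e = energy(spins, J)
    for step in range(steps):
        t = t_start * (t_end / t_start) ** (step / steps)  # cool down
        i = random.randrange(n)
        spins[i] = -spins[i]          # propose flipping one spin
        e_new = energy(spins, J)
        # Metropolis rule: always accept downhill, sometimes uphill.
        if e_new <= e or random.random() < math.exp((e - e_new) / t):
            e = e_new                 # accept the flip
        else:
            spins[i] = -spins[i]      # reject: undo the flip
    return spins, e

# Toy instance: 6 spins with random couplings in [-1, 1].
random.seed(0)
n = 6
J = [[random.uniform(-1, 1) for _ in range(n)] for _ in range(n)]
print(simulated_anneal(J, n))
```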
Some laser based quantum computers are able to run Shor's algorithm.
There are many kinds of quantum computers, and most cannot or have not run Shor's algorithm.
|
<urn:uuid:807751b4-9edd-400f-9cdb-48cd1b6288ee>
|
CC-MAIN-2016-26
|
http://nextbigfuture.com/2011/05/reviewing-some-history-of-dwave.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783403825.35/warc/CC-MAIN-20160624155003-00101-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.91907
| 485
| 2.890625
| 3
|
Environmental Factor, November 2008, National Institute of Environmental Health Sciences
Study Finds Elevated PBDEs in California
By Eddy Ball
Investigators funded by an NIEHS Environmental Justice Program grant report high levels of polybrominated diphenyl ethers (PBDEs), ubiquitous compounds used as a fire retardant in furniture, in the house dust and serum of people living in California. Serum levels of the compound penta-BDE in residents of California were twice the national average. House dust levels in California ranged from 4 to 10 times levels found in homes in other parts of the United States - and 200 times levels reported in Germany.
Published online October 1 by Environmental Science and Technology, the study was a collaboration involving researchers at the Silent Spring Institute (http://www.silentspring.org/), the University of California Berkeley, Brown University and Communities for a Better Environment, a California-based environmental justice group. In addition to NIEHS financial support, which is overseen by NIEHS Program Analyst Liam O'Fallon, the study also received funding from the New York Community Trust.
Animal studies have demonstrated that PBDEs are associated with thyroid hormone disruption and adverse reproductive and neurodevelopmental effects. According to an Environmental Protection Agency review published earlier this year, the primary route of exposure is incidental ingestion and dermal contact with house dust, which raises special concerns about exposures in infants and toddlers.
Although PBDEs have been banned by the European Union and 11 states in the U.S. - and U.S. manufacturers discontinued production of PBDEs in 2004 - the scientists maintain that a substantial exposure reservoir remains in the environment. Older furniture continues to be an important source of exposure, and imported furniture containing PBDEs is still sold in many states. California's levels may be higher, the authors of the study speculate, because of the state's stringent furniture flammability standards adopted more than 30 years ago and the potentially harmful substitutes for PBDEs now in use or proposed.
"Virtually all the penta-BDE produced globally was used to meet this [California] fire standard," explained lead author and Silent Spring Postdoctoral Research Fellow Ami Zota, ScD, "and now these chemicals have been detected in nearly every species across the globe."
The study, led by scientists from the Silent Spring Institute, compared the concentrations in dust collected in 49 California homes with concentrations in dust collected under the same protocol from 120 homes in Massachusetts. The researchers also compared concentrations in their studies to reports by other investigators. The study used data from the 2003-2004 National Health and Nutrition Examination Survey (NHANES) to compare serum levels of PBDEs in residents of California with those in participants living elsewhere in the U.S.
Although the study's authors acknowledged several limitations to the study, they argued that the ubiquity and persistence of the PDBE exposure reservoir should be a lesson for regulators. "These findings suggest the need for more anticipatory assessments of the environmental health impacts of consumer product decisions [about other untested flame retardants] prior to their implementation," they concluded.
The researchers also called on NHANES to reinstate its earlier practice of measuring thyroid hormone so that direct correlations between PBDE and thyroid can be made for humans participating in that large-scale annual survey.
Citation: Zota AR, Rudel RA, Morello-Frosch RA, Brody JG. 2008. Elevated house dust and serum concentrations of PBDEs in California: Unintended consequences of furniture flammability standards? Environ Sci Technol [Epub ahead of print] doi: 10.1021/es801792z
|
<urn:uuid:f475f180-05a8-49e9-9a72-7798f8c70589>
|
CC-MAIN-2016-26
|
http://www.niehs.nih.gov/news/newsletter/2008/november/studyfinds-pbdes.cfm
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783398516.82/warc/CC-MAIN-20160624154958-00111-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.930921
| 767
| 2.625
| 3
|
If you learn only 3 things about them ...
- They may have more or fewer than 5 arms.
- They take a long time to regenerate lost arms.
- They are hurt if they are removed from water for a long time.

Sea stars are encountered on most of our shores. Even the most "beat up" shore will have some kind of sea star. But our Northern shores appear to have the richest variety of sea stars. Some sea stars are small and well hidden. Others are large and colourful.

What are sea stars? Although often called starfish, these creatures are not fish at all! So it is more correct to call them sea stars. Sea stars belong to the Phylum Echinodermata and Subclass Asteroidea. There are about 1,800 known species of sea stars, of which about 300 are found in shallow waters. Sea stars form the second largest group of Echinoderms after the brittle stars (Subclass Ophiuroidea).
Features: Almost everyone knows what a sea star looks like! 'Asteroidea' means 'star-like'. Like other echinoderms, sea stars are symmetrical along five axes, have a spiny skin and tube feet.

An Armful: Sea stars have arms that blend into one another before joining the central disk. Some sea stars seem to be all arms with long narrow arms and a small central disk. Others have arms that are so short they look like pentagons. Most species of sea stars have five arms, although some may have more. However, sometimes, you might come across a sea star with fewer than five arms. Some kinds of sea stars are really flat, others may have more cylindrical arms, and some are so round that they look like cushions. Each arm is usually tipped with one or more sensory tube feet, and an eye spot that detects light and dark but does not form an image.

Sometimes confused with brittle stars: Unlike sea stars, brittle stars have very flexible and long arms attached to a small central disk. Most brittle stars are much smaller than sea stars, although some have very long arms.

Handicapped Stars: Sea stars are famous for their ability to regenerate lost arms. But this takes time and resources. Some species take up to a year to replace a lost limb. In the meantime, the sea star is probably disadvantaged. If the central disk is damaged, the sea star may die. Only a few species of sea stars are known to regenerate from a piece of an arm. So you won't necessarily get two sea stars when an arm of a sea star is separated. So please don't purposely mutilate sea stars.
Mouth to the ground: The mouth is on the underside facing the ground. Some sea stars have jaws made up of five or more teeth arranged in a star around the mouth. Some sea stars can extend their stomachs out of their mouths! Part of the digestive system of a sea star extends into its arms.

Not all sea stars have an anus. Those without one spit out indigestible bits through their mouths. In those that do, the anus is on the upper surface of the central disk.

In the groove: Radiating from the mouth and extending under each arm are grooves (ambulacral grooves). These grooves usually contain 2-4 rows of tube feet. The margins of the groove are guarded by moveable spines that can close over the groove. Out of the water, a sea star will usually retract its tube feet into the grooves, so it looks rather lifeless.

Fancy footwork: Sea stars use their tube feet to move around. Unlike brittle stars, sea stars move mainly by undulating waves of their tube feet and not by bending their arms. Sea stars appear to have special glands in their tube feet that secrete a glue so the feet stick to things, and another substance to release the tube feet. In some sea stars, the tube feet end in suckered disks. These act like suction cups when pressure is applied by the sea star. Burrowing sea stars may have suckerless tube feet that end in points to better dig into the ground. Sea stars also use their tube feet to manipulate food. Some sea stars also breathe through their tube feet!
Skin and bones: Sea stars have an internal (not external) skeleton. A sea star's body is made up of tiny ossicles (plates made mostly of calcium carbonate), connected by a special kind of connective tissue called 'catch connective tissue'. This connective tissue can rapidly change from almost liquid to rock hard, and allows them to slowly bend and move their arms to climb, right themselves and clasp prey. Sea stars can also purposely drop off an arm when stressed or attacked, by rapidly changing the consistency of this tissue. The entire sea star has a skin that covers all of the body, including the spines.

Jaws all over the body! Some sea stars also have tiny structures called pedicellariae that look like a pair of jaws, or tiny clams. The main function of these is to keep the body of the sea star free of parasites, encrusting organisms and debris. These little jaws can snap, and those on big sea stars can even pinch inquisitive human fingers! Pedicellariae may also be used to collect food.

Water of Life: Like other echinoderms, sea stars have a water vascular system, a network of internal canals supported and pumped mainly with seawater. They suck seawater into their bodies through the madreporite: a sieve-like structure that usually appears as a spot on the upperside near the centre. By expanding or contracting chambers in the internal system, the water pressure in canals within the body can be directed and changed. This is how they move their tube feet. A study also found that the water within a sea star may help it keep cool when exposed at low tide. As they rely on seawater, it is stressful for sea stars to be left out of water for too long. Try not to remove sea stars from the water. If you have to do so, please return them quickly to where you found them.
What do they eat? Some sea stars placidly gather edible bits from the water or surface. But most sea stars are scavengers or carnivores, 'sniffing' out their meal by the chemicals released by the prey or dead animals. Among the more common prey are snails, bivalves, crustaceans, worms and other echinoderms. Some sea stars specialise in a certain prey. Some sea stars feed on sponges, sea anemones and corals. Some carnivorous sea stars eat detritus when there's nothing better to eat.

Some prey of sea stars have developed various ways to escape from them. Bivalves such as scallops (Family Pectinidae) may leap, while others burrow away quickly, and some snails may somersault.

Stomach Turning Table Manners: Some sea stars, especially those with long arms, can evert their stomachs. This ability is particularly useful for carnivorous sea stars that feed on bivalves. How does it do it? A carnivorous sea star uses its tube feet to hold the bivalve against its central mouth. It then pushes out its stomach through its mouth and inserts its stomach into the bivalve's shell through imperfections in the fit of the two shells. If there are no such imperfections, the sea star simply pulls the shells apart to create a tiny gap! Once inside the shell, digestive juices are poured on the hapless victim. Digested material is moved by cilia (minute hairs) on tracks into the sea star. Thus the prey is partially digested in its own shell!

The Crown-of-Thorns sea star (Acanthaster planci) pushes its stomach out of its mouth to digest coral polyps in their skeletons. Sea stars that eat detritus may push out their stomachs to mop up whatever is on the surface.

However, sea stars with short arms usually don't push out their stomachs and simply swallow their prey whole and digest them in their stomachs.
What eats them? While some fishes
may nibble on adult sea stars, it appears they are not considered
tasty by most other animals.
Dead or Alive? All the sea stars
that you see are probably alive. You are unlikely to come across a
skeleton of a sea star. Dead sea stars disintegrate quickly and do
not leave behind whole skeletons. A live sea star also has moving
tube feet. When removed from the water, however, sea stars will retract
their tube feet and may appear dead.
Don't pick up sea stars! Many
sea stars can purposely drop off an arm if they feel threatened. This
is how they might escape the jaws of a predator, or if a stone should
accidentally trap an arm. If you pick up a sea star by the arm, you
may trigger off the same reaction. Also, it is stressful for a sea
star to be out of water for a long time. So please admire the sea
stars where they are.
Should I put a sea star that is high and
dry on the sand back into the water? Intertidal sea stars
are used to being out of water during low tide. It is best to leave
sea stars where they are.
Don't make a sea star flip over
Not all sea stars can do this easily. Even for those that can, it
consumes energy and if the same sea star is made to do this several
times, it can exhaust and thus injure the animal.
Aren't sea stars bad for reefs? Don't they
eat up all the hard corals? The Crown-of-Thorns sea star
(Acanthaster planci) is notorious for decimating reefs. This
sea star eats the polyps of hard corals leaving behind dead white
skeleton. These sea stars are only a danger to reefs when there is
a population explosion of them. Such a situation is generally believed
to be due to an imbalance in the natural system. For example, when
their predators are overharvested. When there are low numbers of this
sea star, they do not cause massive damage. This sea star has not
been encountered on our shores.
Living with a star: Tiny snails
may live on the upper surface of a sea star, or under their arms.
Sea star babies: Sea stars have
separate genders and are usually either male or female. Eggs and sperm
are stored in their arms. Most species practice external fertilisation,
releasing eggs and sperm simultaneously into the water while standing
on tip toes. More
about this spawning posture on
the Echinoblog. Some
can produce lots of eggs; a single female may produce millions! Sea
stars undergo metamorphosis and their larvae look nothing like the
adults. The forms that first hatch from the eggs are bilaterally
symmetrical and free-swimming, drifting with the plankton. They eventually
settle down and develop into tiny sea stars.
Human uses: Sea stars are generally
not eaten, and in fact it is advised not to eat them as many are toxic.
There are stories of pets which have eaten sea stars and died. More
about this on The
Echinoblog. They are also not that popular for the live aquarium
trade as they tend to eat their tank-mates. However, in some places,
sea stars are harvested alive and dried to be sold as cheap ornaments.
This is cruel indeed! In some coastal areas, sea stars are harvested
and chopped up as fish meal or fertiliser. Some sea stars are considered
pests on mussel, oyster and scallop farms.
Status and threats: Many of our
sea stars are listed among the threatened animals of Singapore. They
have become uncommon in Singapore mainly because of habitat loss due
to reclamation or human activities along the coast that affect the
water quality. Trampling by careless visitors and overharvesting can
also have an impact on local populations.
The large Knobbly sea star is an icon of Chek Jawa. (Chek Jawa, Jun 05)
Juvenile Knobbly sea stars are common on Cyrene Reefs but not elsewhere. (Cyrene Reefs, Apr 08)
The Cushion star is more pentagonal than star shaped. (Terumbu Ular, Apr 06)
The Eight-armed sand star has more than five arms. (Chek Jawa, Jul 07)
The shorter arm of this Sand sea star. (Changi, Jul 03)
Long pointed tube feet of the Sand sea star help it move quickly over the sand. (Chek Jawa, Apr 05)
Underside of a Common sea star: greenish stomach outside the central mouth, tube feet emerging from the groove beneath the arms. (Chek Jawa, Jan 03)
Huge bivalved pedicellariae (pincer-like structures) on the underside of the Cake sea star. (Chek Jawa, Jun 04)
Tiny white snails are sometimes seen on the upperside of a Sand sea star. (Changi, Jun 05)
A sea star disintegrating, possibly due to flooding and a drop in salinity. (Chek Jawa, Jan 07)
The Crown sea star can come in dull or bright colours. (Chek Jawa, Jan 03; Chek Jawa, May 05)
Asteroidea recorded for Singapore: from Wee Y.C. and Peter K. L. Ng, 1994, A First Look at Biodiversity in Singapore; from Lane, David J.W. and Didier VandenSpiegel, 2003, A Guide to Sea Stars and Other Echinoderms of Singapore; and from Didier VandenSpiegel et al., 1998, The Asteroid fauna (Echinodermata) of Singapore with a distribution table and illustrated identification to the species. Species in red are those listed among the threatened animals of Singapore, from Ng, P. K. L. & Y. C. Wee, 1994, The Singapore Red Data Book: Threatened Plants and Animals of Singapore. Recorded species include:
- callosus (EN: Endangered)
- Fromia monilis (Peppermint sea star) (VU: Vulnerable)
- (Icon sea star) (VU: Vulnerable)
- equestris (Galloping sea star)
- Luidia species (Luidia sand star), with a list of species recorded for Singapore
- insignis (EN: Endangered)
- Lane, David J.W. and Didier VandenSpiegel. 2003. A Guide to Sea Stars and Other Echinoderms of Singapore. Singapore Science Centre. 187pp.
- VandenSpiegel, Didier et al. 1998. The Asteroid fauna (Echinodermata) of Singapore with a distribution table and illustrated identification to the species. The Raffles Bulletin of Zoology 1998 46(2): 431-470.
- Ng, P. K. L. & Y. C. Wee. 1994. The Singapore Red Data Book: Threatened Plants and Animals of Singapore. The Nature Society (Singapore), Singapore. 343 pp.
- Wee Y.C. and Peter K. L. Ng. 1994. A First Look at Biodiversity in Singapore. National Council on the Environment. 163pp.
- Coleman, Neville. 2007. Sea Stars: Echinoderms of Asia/Indo-Pacific. Neville Coleman's Underwater Geographic Pty Ltd, Australia. 136pp.
- Miskelly, Ashley. 2002. Sea Urchins of Australia and the Indo-Pacific. Capricornia Publications. 180pp.
- Gosliner, Terrence M., David W. Behrens and Gary C. Williams. 1996. Coral Reef Animals of the Indo-Pacific: Animal Life from Africa to Hawaii Exclusive of the Vertebrates. Sea Challengers. 314pp.
- Allen, Gerald R. and Roger Steene. 2002. Indo-Pacific Coral Reef Field Guide. Tropical Reef Research. 378pp.
- Ruppert, Edward E., Richard S. Fox and Robert D. Barnes. 2004. Invertebrate Zoology. Brooks/Cole of Thomson Learning Inc., 7th Edition. 963pp.
- Pechenik, Jan A. 2005. Biology of the Invertebrates. 5th edition. McGraw-Hill Book Co., Singapore. 578 pp.
- Hendler, Gordon, John E. Miller, David L. Pawson and Porter M. Kier. 1995. Sea Stars, Sea Urchins, and Allies: Echinoderms of Florida and the Caribbean. Smithsonian Institution Press. 390 pp.
- Schoppe, Sabine. 2000. Echinoderms of the Philippines: A guide to common shallow water sea stars, brittle stars, sea urchins, sea cucumbers and feather stars. Times Edition, Singapore. 144 pp.
- Coleman, Neville. Undated. Sea Stars of Australasia and their relatives. Neville Coleman's World of Water, Australia. 64pp.
What do you really know about the Federal Reserve? Sure, you have heard of it and if you are like most people, you probably have a basic understanding of how it works. You might know that the Fed has something to do with interest rates and that it is an independent body, but you might have some ideas about the Fed that are not accurate.
First, the Federal Reserve is a central bank. The function of the Fed and central banks all over the world is to regulate the country's money supply. In addition to placing money into and removing it from circulation, central banks keep a watchful eye on the value of money by taking steps to control inflation.
The Federal Reserve (or Fed) is the central bank of the United States. In 1977, Congress amended The Federal Reserve Act, creating what is now known as its dual mandate. The Fed is charged with creating an environment for maximum employment and stable prices.
How well it has met this mandate since the most recent recession is a subject of discussion among armchair politicians and economists everywhere, but one thing is clear: there are many fallacies about the Fed. The following misconceptions are among the most popular.
1. The Fed Is Not Audited
According to Eric Rosengren, President and CEO of the Federal Reserve Bank of Boston, all 12 Federal Reserve Banks employ internal auditors along with an outside auditing firm, Deloitte & Touche. The Federal Reserve's Inspector General, as mandated by Congress, also audits the Fed.
2. The Fed Operates in Secret
In the same speech, Rosengren addressed this issue. He acknowledged that when he joined the Fed 25 years ago, transparency was not a priority. He went on to say that when the economy was essentially melting down in 2008 and 2009, averting the crisis was the priority. Communicating with the public was secondary.
However, he later cites the addition of published federal funds rate targets, minutes of committee meetings, and the latest announcements of interest rate targets and how long the Fed sees those targets remaining in place.
These changes, along with now-regular press conferences with the Chairman of the Federal Reserve, have put it on the path of increasing transparency.
3. The Fed Is Immune to Politics
The Fed, by mandate, is independent from other areas of government, but government pressure is very real. Former President George W. Bush appointed current Fed Chairman Ben Bernanke. President Barack Obama later reconfirmed Bernanke's appointment. Bernanke regularly testifies before Congress, where he often faces pressure from lawmakers to take steps to lower unemployment and stimulate the economy.
4. The Fed Sets Interest Rates
On Oct. 24, the Fed announced that it would keep interest rates unchanged at near 0%. Reading the many articles in the financial media would seem to indicate that the Fed unilaterally sets interest rates and the banks follow, but that is not true, according to the Federal Reserve.
There are two types of interest rates to consider. The federal funds rate is the rate at which banks lend money to one another overnight, and the prime rate is the rate at which banks lend to their most creditworthy customers. Just as a retail store is free to sell most merchandise at any price it chooses to achieve the profit margin it needs, a bank can do the same thing.
The prime rate reported by the Fed is the average interest rate reported by the 25 largest banks. Many of those banks choose to set their rates based on the federal funds rate. Although the Fed does not directly set interest rates, its actions have an effect on them.
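As a rough illustration of that relationship, here is a hedged sketch (the function name is made up for this example, and the 3-percentage-point spread is only a long-standing market convention, not an official formula; individual banks are free to deviate):

```python
# Rule-of-thumb sketch: U.S. banks have conventionally priced prime at
# roughly 3 percentage points above the federal funds target. Banks may
# deviate, just as the retail-store analogy above suggests.

def typical_prime_rate(fed_funds_target_pct, spread_pct=3.0):
    """Estimate a bank's prime rate from the federal funds target."""
    return fed_funds_target_pct + spread_pct

# With the near-0% target mentioned above (say an upper bound of 0.25%),
# the rule of thumb gives about 3.25% -- the prime rate that the large
# banks were in fact reporting in that period.
print(typical_prime_rate(0.25))  # 3.25
```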
5. The Fed "Prints" Money
The Treasury is the government agency that actually prints money. The Fed does not inject or remove money by printing or collecting physical currency; today, money is injected into the economy through electronic transfers. Just as most consumers look at a computer screen or bank statement to view their money, so do banks that do business with the Fed.
6. More Money Equals More Inflation
Conventional wisdom holds that when more of something enters the market, its value falls, and this creates inflation. This view may be oversimplified. Since the Fed does not "print" money, it buys financial products from a bank and deposits the payment into the bank's account. The bank, based on how it views its own financial health, may choose to lend that money out or hold on to it. If the bank holds the money, then it does not enter circulation. If it is not in the economy, it cannot cause inflation.
7. Unwinding the Stimulus Measures Will Cause Negative Economic Effects
The argument is something like this: U.S. financial markets are at artificially high levels because the Fed has intervened and stimulated them through programs like Quantitative Easing, Operation Twist and the lowering of interest rates. When that entire stimulus is removed, the markets will plunge, taking any real economic growth with them. There will be no gradual unwinding because at some point, inflation will rise, forcing the Fed to take drastic measures.
According to Philly Fed President, Charles Plosser, it is not that the Fed does not have the tools to wind down the stimulus; he is worried that it may not act at the right time. He asks, "Will we have the fortitude to take the heat when it comes time to utilize [the tools]?"
The Bottom Line
Republican Presidential candidate, Ron Paul, in his book, End the Fed, argues that the Fed is a corrupt institution that does more harm to the economy than good. Others argue that the Fed, like central banks around the world, is essential to keeping a country's currency stable.
This controversy, much like all political debates, will live on. One thing is certain. The Fed is misunderstood by many, even some of those with financial professions.
What is Vehicle Stability Control?
Vehicle Stability Control: Introduction
Vehicle stability control is a technology designed to arrest skids. The intention is to enable a driver to maintain control of a vehicle during sudden and/or abrupt changes of direction. The goal is to keep the vehicle traveling on the path intended by the driver.
Different manufacturers refer to the technology by a number of different names: electronic stability control or "ESC", AdvanceTrac, Electronic Stability Program or "ESP", StabiliTrak, Vehicle Dynamics Control or "VDC", and Vehicle Stability Assist are a few of the designations employed.
Designed to prevent the conditions known as understeer and oversteer, the technology was developed by the Robert Bosch Corporation. Vehicle stability control is generally agreed to have been initially employed in Mercedes-Benz and BMW models.
The CDC officially updated its reported number of Lyme disease cases when it was shown the true numbers were far higher than the 30,000 new cases per year it had been counting.
In fact, there are more than 300,000 new cases of Lyme disease annually just in the USA. Lyme disease can be found world wide with over 80 countries reporting evidence of infected ticks, animals and/or humans.
IF YOU DON'T LOOK FOR IT, YOU WON'T FIND IT!
The Florida Department of Health reports that between 2002 and 2011 only 23% of its Lyme cases were acquired in the State of Florida, indicating 77% were acquired while the person was traveling in another state or country. Although these statistics are far from accurate, they are a slight improvement over the once promoted claim "there is no Lyme disease in Florida". This inaccurate portrayal of the situation caused many Florida residents to be misdiagnosed, which led to unsuspecting citizens becoming chronically ill and disabled, while some, unfortunately, died.
Of the tick borne infections acquired in Florida, the majority, according to the Florida DOH, were reported from counties in the northern and central regions of the state. This region has a number of dedicated volunteers working hard to educate the public and health care professionals in spite of the Florida Department of Health's claim Lyme was not of concern. The more education and awareness efforts made by these volunteers, the more people in that area typically will be educated and diagnosed.
Between 1990 and 2012 approximately 12,730 new cases (using the CDC's 10-fold figures) of Lyme disease were reported, making Florida one of the Top 20 States with the most cases of Lyme disease in the USA. Unlike some geographical locations, due to the mild winters in the south, Lyme disease cases are typically reported in Florida year-round.
More than 3 people every day are contracting Lyme disease in Florida. Many will not be adequately treated in the early stages, if at all. Of those treated with 2-4 weeks of antibiotics (the current, but inadequate recommendations published by a handful of infectious disease doctors for insurance companies), up to 40% will relapse and may experience the late and chronic symptoms, requiring additional treatment.
Lyme Disease- Not Just a Rash and a Swollen Knee
The Lyme disease bacterium has the ability to enter the brain less than 24 hours after a tick bite. It is called the “great imitator,” because the symptoms can mimic lupus, arthritis, MS, fibromyalgia, dementia, ALS, ADD, depression, anxiety, chronic fatigue, Parkinson’s, Alzheimer’s and even autism.
Animal studies indicate in less than a week the Lyme spirochete (Borrelia burgdorferi) can be deeply embedded inside tendons, muscles, tissues, the heart and the brain. As the spirochetes invade tissues they replicate then destroy their host cell as they emerge. The cell wall can collapse around the bacterium, forming a cloaking device (or biofilm), allowing it to evade detection by many tests and by the body’s own immune system.
The Lyme disease spirochete (Bb) is pleomorphic, meaning that it can radically change form. This protective measure allows spirochetes to hide and protect themselves from the threat of a persons own immune system and antibiotics. Once the threat is removed (antibiotics are stopped, for example), the spirochetes can change forms once again, multiply, continue to damage tissues and organs, and patients may relapse, experiencing varying symptoms.
In humans, infection with the Lyme disease bacteria can lead to early symptoms such as headaches, debilitating fatigue, fever, joint and muscle pain, and possibly skin rashes. Later stage infection can affect the central nervous system and can negatively affect the brain, heart and muscular-skeletal system.
Symptoms of Lyme disease vary for each individual patient, and also vary in intensity over the course of the disease. The later stages have been described in studies as being equivalent to experiencing moderate cognitive impairment combined with a level of physical dysfunction similar to patients with congestive heart failure, and fatigue comparable to patients with multiple sclerosis.
On average, patients with chronic Lyme disease demonstrated symptoms for 1.2 years before being correctly diagnosed, with some patients suffering with debilitating symptoms for ten or more years before receiving a proper diagnosis. At the highest risk of acquiring Lyme disease are children. In a study of children with Lyme disease, researchers noted that an average of four doctors were seen before a proper diagnosis was made.
As many as half of the cases of Lyme disease report having no known tick bite or the classic “bulls-eye” rash. In one report more than 50% of patients developed serious brain or central nervous system involvement, many requiring hospitalization. Over 40% of Lyme patients have reported arthritic symptoms, such as painful joint swelling.
Studies by an international team of researchers indicates Lyme can be sexually transmitted. Spirochetes that cause Lyme disease (related to syphilis) have been detected in breast milk, umbilical cords, the uterus, semen, urine, blood, the cervix, tears, the brain, and other body fluids and tissues. Often entire families are found to be infected.
Lyme Disease Tests
According to a study from Johns Hopkins, Lyme tests miss 75% of the people who are infected with Borrelia burgdorferi (Lyme disease). Some medical literature indicates up to 90% of patients are missed using the current testing procedures. This is one reason those experienced with treating chronically ill Lyme patients say- "forget the test, treat the patient".
Outdated, Inaccurate, Insurance-Friendly Treatment Guidelines
The seriously outdated, highly contested Infectious Diseases Society of America (IDSA) 2006 Lyme disease treatment guidelines (favored by insurance companies) recommend that patients should have not one, but two positive Lyme tests before receiving treatment. Insurance companies have routinely used IDSA guidelines as a basis to deny reimbursement for diagnosis and treatment of Lyme disease.
CT Attorney General, Richard Blumenthal (currently a US Senator), ordered a lengthy investigation of the IDSA guidelines development process and issued the results in May 2008. He uncovered serious flaws in the IDSA guideline development process. Blumenthal stated in his press release-
"The IDSA's 2006 Lyme disease guideline panel undercut its credibility by allowing individuals with financial interests -- in drug companies, Lyme disease diagnostic tests, patents and consulting arrangements with insurance companies -- to exclude divergent medical evidence and opinion.”
Due to pressure from the IDSA (some guideline authors and its editor are from Johns Hopkins) these guidelines remain in effect and are the number one reason people are suffering from a chronic phase of the illness that the IDSA and Hopkins insists doesn't exist.
The Department of Health & Human Services' National Guideline Clearinghouse removed the IDSA guidelines from its list in early 2016, citing they were outdated and failed to meet minimal standards.
The IDSA is in the process of updating its guidelines; however, in 2016 they admitted the task of bringing the guidelines up to a minimal standard set forth by the Institute of Medicine (IOM) was too difficult for them to handle and the process is now in the hands of Tuft's University.
Ticks and The Diseases They Carry
Over 300 strains of Lyme (Bb) have been identified and the list continues to grow. Standard tests only detect exposure to ONE of the Borrelia (Lyme) strains. Florida has at least 8 different strains of Borrelia, only one of which (Borrelia burgdorferi) can be detected by current tests on the market.
Florida also is home to over 60 additional tick and vector borne diseases.
Over 20 strains of Babesia (a tick borne organism) are unable to be detected in humans using standard blood tests; however, two strains are currently known to infect patients in growing numbers (Babesia microti and Babesia duncani- WA1). Tests can be ordered for both of these strains from speciality labs.
The Florida Department of Health states: "Babesiosis is not considered a significant human health issue in Florida. However, it is important to be aware of the disease as human cases continue to be diagnosed in northeastern states. Babesiosis is not currently a reportable disease in Florida."
Again, if you don't look for it, you won't find it.
As for Rocky Mountain Spotted Fever, according to the Florida DOH: "In Florida, the reported incidence has increased markedly in recent years, possibly due to increased disease awareness and reporting. 163 cases of RMSF were reported from 2002 through 2011. Of these, 77% were acquired in Florida and the rest were acquired while the person was traveling in another state or country. Of the infections acquired in Florida, the majority were reported from counties in the northern and central regions of the state [again, where volunteer support groups are educating the public]. In Florida, cases of RMSF/SFR are reported year-round without distinct seasonality, though peak transmission typically occurs during the summer months."
More recently discovered Borrelia organisms, such as Borrelia miyamotoi and STARI (Southern Tick Associated Rash Illness) and multiple other strains found in Florida cannot be detected using current Lyme disease tests on the market. Studies indicate these spirochetes may be found in 10-20% of ticks studied and there are other identified and unidentified microbes present in the ticks.
Researchers are advising physicians to change their approach to diagnosis and treatment of tick bites, including treating the bite immediately and adequately instead of waiting for symptoms to appear or tests to become positive.
Lab tests were not designed to detect antibodies to Lyme disease (Borrelia burgdorferi) until 30 or more days after a person has been bitten by an infected tick.
The Florida Department of Health states: "STARI has been discovered in Florida and research on the occurrence of the disease is underway. A recent study has suggested that some STARI cases in the southern US may be attributable to previously undetected B. burgdorferi sensu lato. However, it may take some time before all the necessary information can be collected since much is still unknown about STARI."
Lyme disease, Babesiosis, Bartonella henselae and quintana (cat scratch fever and trench fever), Rocky Mountain spotted fever, Rickettsia amblyommii, histoplasmosis, Brucellosis, ehrlichiosis, anaplasmosis, Q-fever, Borrelia miyamotoi, Southern Tick Associated Rash Illness (STARI), Tularemia (rabbit fever), Mycoplasma, leptospirosis, parvo B-19 virus, salmonella, Morgellons, and Masters disease are some of the various infections (some life-threatening) that may be passed to animals or humans through the bite of an infected tick or other vector.
People with chronic Lyme disease may also test positive for trichinosis and Epstein Barr virus. According to the CDC, deaths due to Lyme disease over the last few years are currently equal those of Rocky Mountain Spotted Fever.
Many health care professionals are not familiar with the the growing number of infections found in ticks and other vectors; therefore, they are not testing, diagnosing, reporting, or treating them. Untreated or undertreated patients can quickly advance to late or chronic stages of all of the tick borne diseases. Once reaching the chronic stage, Lyme disease and tick borne infections are more expensive, time consuming and more difficult to treat and cure.
Reports are on the rise concerning the death of people receiving donated blood that contained tick borne disease organisms. The Red Cross admits their storage procedures do not kill the spirochetes that cause Lyme disease, nor do they kill Babesia or Bartonella organisms. Our nation’s blood supply is not routinely tested for vector borne infectious diseases, putting many American’s at risk.
The Financial Cost to Society
The long-term cost of Lyme disease to families, school systems, the health care system and the economy is shocking. The average diagnosis and treatment costs, and lost wages related to chronic Lyme disease are $61,688.00 per year, per patient for those with neurological involvement. If arthritis symptoms occur the cost goes up an additional $34,354.00. If there is cardiac involvement the costs increase an additional $6,845.00 per patient.
Mothers and fathers are losing their jobs and their homes due to the inability to work and the cost of chronic Lyme disease treatment. Many must apply for disability after failing to get a proper diagnosis and treatment in the early stages and becoming, as a result, chronically ill and disabled.
Children are often unable to attend school and costs for educating them are increasing. Using your tax dollars, the federal and state government foots the bill for many of the misdiagnosed and chronic Lyme cases, a responsibility insurers purposely fail to acknowledge thanks to the Infectious Diseases Society of America Lyme disease guidelines.
A preponderance of the evidence indicates active ongoing spirochetal infection is the cause of the persistent symptoms found in chronic Lyme disease patients. Extended antibiotic treatment has been effective in improving the quality of life for many who are chronically ill. All patients who fail to sustain lasting improvement after initial Lyme treatment should be re-evaluated and tested for additional tick borne diseases, then treated appropriately. The most current recommended Lyme and tick borne disease treatment guidelines can be found here.
*** The above facts and figures were gleaned from reports by the CDC, FDA, NIH, International Lyme and Associated Disease Society (ILADS), Lyme Disease Association (LDA), Yale, Johns Hopkins, National Library of Medicine, Florida Department of Health and the Maryland Department of Health and Mental Hygiene (DHMH).
For more information please contact Lucy Barnes- AfterTheBite@gmail.com
A Moore machine produces an output for each state. An FSA is thus a Moore machine with two outputs, success and failure, corresponding to final and nonfinal states respectively. A Mealy machine produces an output for each transition (state/input pair). A Moore machine can be transformed into an equivalent Mealy machine by associating the output of each state with every transition that leads to that state. The languages accepted are the same (although the Mealy machine doesn't recognize the empty word).
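To make the state-output versus transition-output distinction concrete, here is a minimal Python sketch (the machine and all names are illustrative, not taken from any of the resources listed below): a three-state Moore machine that outputs 1 whenever the input read so far ends in "ab", converted to its Mealy equivalent by attaching each state's output to every transition entering that state.

```python
# Moore machine: output depends only on the current state.
moore_delta = {                      # (state, input) -> next state
    ("q0", "a"): "q1", ("q0", "b"): "q0",
    ("q1", "a"): "q1", ("q1", "b"): "q2",
    ("q2", "a"): "q1", ("q2", "b"): "q0",
}
moore_out = {"q0": 0, "q1": 0, "q2": 1}  # output attached to each state

def run_moore(word, start="q0"):
    state = start
    outputs = [moore_out[state]]     # Moore machines emit for the empty word
    for ch in word:
        state = moore_delta[(state, ch)]
        outputs.append(moore_out[state])
    return outputs

# Equivalent Mealy machine: associate each state's output with every
# transition that leads INTO that state.
mealy_delta = {key: (nxt, moore_out[nxt]) for key, nxt in moore_delta.items()}

def run_mealy(word, start="q0"):
    state, outputs = start, []       # no output for the empty word
    for ch in word:
        state, out = mealy_delta[(state, ch)]
        outputs.append(out)
    return outputs

print(run_moore("abab"))   # [0, 0, 1, 0, 1]
print(run_mealy("abab"))   # [0, 1, 0, 1] -- identical except the empty word
```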
Finite State Machines with Output (Mealy and Moore Machines)
Articles which convert an FSA to equivalent Moore and Mealy machines and discuss their equivalence.
Implementing Mealy and Moore Machines
This site has an example of conversion of a FSA to equivalent Moore and Mealy machines.
Wikipedia article on Mealy machines, which are simple transducers.
Wikipedia article on Moore machines which are FSA with output determined by current state alone.
Sequential Logic Implementation
A set of slides comparing Moore and Mealy machines and showing how they are used in designing the logic for vending machines and traffic light controllers.
Specification of Sequential Systems
The application of Moore and Mealy machines in the design of synchronous sequential systems.
The saliva of a four-year-old boy, tested outside the Chinese capital of Beijing last week, is raising a question that the world would rather not ask: Could this be a forewarning of the next human pandemic?
In the saliva, Chinese scientists discovered an avian virus called H7N9. First reported three weeks ago, H7N9 has now emerged among people in China’s north, centre and eastern seaboard. Of 77 cases, 16 have died. Many of those infected have multiple-organ failure and brain damage. Masks are out on the streets of Shanghai, and the Chinese poultry industry has suffered losses of more than Rs.8,300 crore.
These are worrying but not unusual manifestations of a disease outbreak. But the scientific world is extraordinarily concerned because the four-year-old was the first person to be asymptomatic—he displayed no signs of infection. This means scores of people could be infected without anyone knowing it. H7N9 could even be masquerading as a common cold.
“We are concerned by the sudden emergence of these infections and the potential threat to the human population,” wrote Rongbao Gao and colleagues at China’s National Institute for Viral Disease Control and Prevention in an 11 April report published in the New England Journal of Medicine, after studying genetic, epidemiological and virological data of the three who died in March. The illness of the three victims began as a fever and cough, but their circumstances were diverse.
One was a 27-year-old butcher who did not kill birds, although he worked in a market where they were sold. The second was a 35-year-old housewife who visited a chicken market a week before she fell ill. The third was an 87-year-old man who had no known contact with birds. Gao and colleagues wrote: “An understanding of the source and mode of transmission of these infections, further surveillance, and appropriate counter measures are urgently required.”
Translated: We guess the virus is coming from birds, but we do not really know if other animals are involved. We do not know how it is transmitted. We do not know how to cure or prevent it.
In an accompanying perspective, Nancy Cox, director of the influenza division at the Centers for Disease Control and Prevention (CDC) in the US, and Timothy Uyeki, a CDC physician, explain why the latest edition of the bird flu is of such concern. “In addition to causing severe illness and deaths, the novel H7N9 viruses reported by Gao and colleagues have genetic characteristics that are of concern to public health,” wrote Uyeki and Cox.
One, the rapidly mutating virus appears to have developed the ability to reproduce in the cooler human respiratory tract, as opposed to its warmer natural habitat, the digestive system in birds. Two, this mutation has previously showed up in ferrets frequently used in flu research. Three (though Uyeki and Cox do not mention it), the mutation was also found in viruses that caused flu epidemics in 1957 and 1968.
In short, H7N9, which until last month resided quietly in birds, appears to have swiftly—and virtually undetected—transformed itself into a human virus, mutating roughly eight times as fast as a standard flu virus.
Unlike H5N1, which caused global panic in 2006 and is highly pathogenic—meaning it has a strong ability to cause disease and damage the host—the H7N9 strain, according to the latest genetic data, causes only mild disease in domestic poultry and wild birds. This means transmission to humans might occur through infected but seemingly normal poultry, a potential epidemiological nightmare because it will be hard to discern outbreak sites.
Thus far, the anxiety is under control because there appears to be no evidence of human-to-human transmission, which still hasn’t happened with the older and more notorious H5N1 strain.
The larger concern around H7N9, and indeed a variety of bird and other viruses, is the rising threat of zoonotic infections, or infectious diseases that jump from animals to people. In his new book, Spillover, science writer David Quammen explains that as predators favour particular prey, so do pathogens. And just as a lion might occasionally kill a cow instead of a zebra, a pathogen can choose a new target, leaping from some non-human animal to a person, establishing itself as “an infectious presence”. Zoonosis, says Quammen, is “a word of the future, destined for heavy use in the twenty-first century”.
Some examples of zoonotic diseases: rabies (dogs to humans), AIDS (monkeys to humans), bubonic plague (rats to humans), and the Spanish flu of 1918-1919, which sprang from a wild bird, passed through an unknown chain of domestic animals, killing with unprecedented speed as many as 50 million people—the single deadliest event in recorded human history—before “receding into obscurity”. Will H7N9 recede into obscurity without the virulence of its ancestor? Even if it does, what else is out there?
Samar Halarnkar is a Bangalore-based journalist. This is a fortnightly column that explores the cutting edge of science and technology. Comments are welcome at firstname.lastname@example.org.
To read Samar Halarnkar’s previous columns, go to www.livemint.com/frontiermail-
|
<urn:uuid:6b8312e0-29a9-41c0-8d52-05213a451a61>
|
CC-MAIN-2016-26
|
http://www.livemint.com/Opinion/5nS4kHmAR2jgHeUrPZiADO/The-startling-rise-of-H7N9.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397695.90/warc/CC-MAIN-20160624154957-00053-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.96381
| 1,149
| 2.875
| 3
|
Monday, November 11, 1996
Home range of the wolves
Other wolf researchers have learned that the amount of food (or "prey density") in an area has a great effect on how large an area the wolves travel in. This area is called their "home range." The size of the home range depends in part on how much ground the wolves need to cover in order to find enough prey to survive.
At certain times of the year, and throughout some entire years, there is not enough food (berries, acorns, etc.) available in the higher elevation forests for the animals that the wolves prey upon. The lack of food causes the prey animals to move to other areas. It may also limit their ability to reproduce and raise young. This in turn forces the wolves to go elsewhere to search for prey. If the wolves settle on private property or begin preying upon domestic animals, we have to capture them and return them to pens or move them to another recovery area.
We use radio-tracking to follow the 10 collared red wolves here in the Park. In general, the home range is 16 square kilometers (6 square miles) in Cades Cove. It is as much as 10 to 20 times larger for wolves in the higher elevation forests. Breeding pairs, especially the females, limit their range during the denning season, but expand it again as the pups grow and need more food. The current population is between 10 and 24 wolves. The Park is 500,000 acres (780 square miles), but we cannot predict how many wolves can be supported without more accurate information on prey densities throughout the park.
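To see how strongly home-range size drives any such prediction, here is a deliberately naive back-of-envelope sketch (hypothetical code using only the figures quoted above; it ignores prey density, terrain, and range overlap, which is precisely the missing information the researchers mention):

```python
# Naive upper bound: how many non-overlapping home ranges would tile the
# park? Real carrying capacity depends on prey density, so the honest
# answer spans an order of magnitude -- hence "we cannot predict".

PARK_SQ_MILES = 780  # park area, from the text above

def naive_max_home_ranges(home_range_sq_miles):
    # Assumes home ranges tile the park without overlap -- an upper bound.
    return PARK_SQ_MILES // home_range_sq_miles

for home_range in (6, 60, 120):  # Cades Cove; 10x and 20x for high forests
    print(home_range, "sq mi ->", naive_max_home_ranges(home_range), "ranges at most")
# 6 -> 130, 60 -> 13, 120 -> 6
```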
We caught 52 raccoons in Cades Cove but have only caught 9 raccoons at Tremont. The low number is not as bad as it seems at first because we are only using half the number of traps at Tremont as we did at Cades Cove. We also caught opossums, rabbits, skunks, and a gray fox. The rabbits and opossums are also prey for the red wolves but they rarely kill skunks or foxes. We hope to use the information from the raccoon studies in the future to select areas to release red wolves that will give them the best chances of surviving.
by Alan G. Hefner
Universism is the philosophy of The Universist Movement. Universism, though not a definitive philosophy, is a progressive, naturalistic worldview in which all meaning and purpose is understood through personal reason and experience; it is thus a religion of reason. This allows the Universist (pronounced universe-ist) to be an Atheist, Agnostic, Deist, Pantheist, Transcendentalist, or anyone holding similar beliefs. Universism may further be defined as a philosophy of the universe: a metaphysical philosophy that nonetheless proposes the possibility that there may be nothing metaphysical. Further exploration will show such uncertainty lies at the heart of Universism.
On March 6, 2005 United Universists changed its name to The Universist Movement due to rising cultural awareness that was resulting in inappropriate name confusion with the Unitarian Universalist Church.
Universism may be thought of as a new religious philosophy originating from the Deus Project, founded in 1999. This new religious philosophy was conceived by examining the perspectives that unite most people when applying reason to metaphysical questions; the end result was Universism, which gained importance after September 11, 2001. The Deus Project won the support of the thousands of people who contacted it, and in 2002 gained the support of two scientific luminaries, Steven Pinker and Edward O. Wilson. In 2003 the Project closed and changed its name to The Universist Movement, with the purpose of promoting Universism.
Universism evolved from the conclusion of the Deus Project. The purpose of the Project was to address Deism, with the mission of making it the "religion of the future." The consensus was that something was wrong with Deism, and whatever it was had to be fixed in order to make Deism a satisfying replacement for faith. The conclusion of this effort was that uncertainty, the opposite of faith, was the necessary antidote. It was felt that uncertainty needed to be embraced and celebrated, as it contributes to daily living and human progress as a whole.
From this process of embracing and celebrating uncertainty emerged a religious philosophy called Universism; in essence, this philosophy can be thought of as a rational religion, employing the term religion from the Latin religare, which means "to bind." This rational religion, however, is faithless and differs from other religions in that the members of The Universist Movement are not bound to one or more metaphysical truths, but are bound by the commitment to their ongoing search, as described in their five principles.
Although Universists hold no set metaphysical beliefs, they believe in freedom of religion and respect the rights of others to hold such beliefs and to be members of faith-based religions such as Christianity, Islam, Judaism, Hinduism, Buddhism, and others. However, Universism is very subjective in nature because its adherents hold diverse views, which makes it very open-minded toward religious matters. Universists therefore respect religious tolerance per se, but are concerned with the destructive effect that religiously intolerant language and action, such as discrimination, hatred, ignorance, and violence, as exhibited by some adherents of faith-based religions, have on this nation and planet. Universists acknowledge that many people believe a personal faith is required of them and others for salvation. Universists also believe that the promotion of personal faith is socially dangerous. Such danger is readily seen in the past in the impact of faith systems on almost every life, through detrimental alterations in the course of history and the changing of social mores. Universists do not value, respect or honor any faith except for the individual's right to hold it. The goal of Universists is to end the power of faith in the twenty-first century. Religious views, faithful or faithless, should be a matter of personal selection, without societal and community imposition, to sustain social justice and species survival.
According to Universism, this survival depends on turning away from faith, a deliberate choice. This concept draws on the work of the evolutionary scientists Edward O. Wilson and Steven Pinker, who write about the biology of belief, which resembles religion (aka faith) in some ways but fails to give the human species a choice en masse; and also on material produced by neuroscientists and neurologists concerning the variety of ways in which the brain can encourage and enforce misconceptions of reality. Universism contends that faith promotes the continuation of such misconceptions; people of faith do not choose, but often follow blindly. Significant changes in behavior could be possible were it not for the dictates of long-established religious faith. Steadfastness in faith (a belief) dulls, if not destroys, curiosity; the faithful, fearing to view the world, or anything, except within the prescribed, authoritarian way, develop a tunnel-vision worldview, and those holding such a view lack understanding and compassion for new and different ideas and for those seeking them. This type of view causes rapid stagnation, because the status quo of almost everything becomes the principal goal. The seeker of new information, the questioner, by contrast, is amicable toward his fellow seekers: ideas are explored and exchanged, and new technology and scientific discoveries are rapidly explored, not disregarded because some high authority declares them immoral. This is a choice between remaining in the bounds of certainty and venturing into the unknown.
Universists may be described as seekers of the unknown. It is their sense that they are part of something bigger than what is already known that causes them to forsake the certain for the uncertain. This uncertainty, the challenge of the unknown and the continual questioning, is the heart of Universism, making its adherents eager to form grassroots networks and groups to facilitate questioning and exploration of members' thoughts and experiences. Through such cooperative and explorative efforts the Universists seek to provide a sense of hope for the future. Universism affirms the incredible power of every individual, and frees him from blind faith, rigid dogma, and the irrational belief that supernatural powers interfere in the world. Thus the Universists are free to work for a better day for themselves and humanity.
This valuing of the incredible power within each individual forms the basis of the Universists' ethics. Although individual morality is considered to be relative, it certainly affects ethical behavior; ethics constituting such behavior are derived from reason and not from supernatural revelation or church dogma. Humans are believed to be innately noble and do not have to suffer the indignity of supernatural coercion and threats of eternal damnation to behave "morally." However, the development of personal ethics is tricky because, for good or bad, it can become culturally and personally subjective. From a societal standpoint, many Universists believe the basics of John Stuart Mill's "Harm Principle" provide an ample reason-based ethical framework:
"...the sole end for which mankind are warranted, individually or collectively, in interfering with the liberty of action of any of their number, is self-protection. That the only purpose for which power can be rightfully exercised over any member of a civilised community, against his will, is to prevent harm to others. His own good, either physical or moral, is not sufficient warrant. He cannot rightfully be compelled to do or forbear because it will be better for him to do so, because it will make him happier, because, in the opinion of others, to do so would be wise, or even right...The only part of the conduct of anyone, for which he is amenable to society, is that which concerns others. In the part which merely concerns himself, his independence is, of right, absolute. Over himself, over his own body and mind, the individual is sovereign." --John Stuart Mills
To value the human being for his noble nature is a major reason why Universists do not believe in or participate in any religion or philosophy based on faith and dogma, such as Judaism, Christianity, Islam, and many others, including the Baha'i Faith, the Unification Church, and Scientology. The main reason for the aversion toward these faith-based religions is their demeaning of humankind. Again, the Universist philosophy regards humans, through the use of reason, as capable of correct behavior. It is thought that through reason and searching, and not by the commands of some mythological deity written in stone, humans come to display ethical behavior. Humans acquire behavior; it is within them; they do not need to look to an imaginary figure or stone tablets to know what to do. They learn from experience. If a person does not want others to take what he has, then he will reasonably conclude that it is best not to take what others have; this person needs no commandment, "Thou shalt not steal"; the commandment, if you will, has become part of this person's nature. Likewise, through reason and experience, those feeling the compassion of others will be compassionate not because they are commanded "to love thy neighbor," but because it is a good experience to care for humanity.
The Universists exhibit internally motivated behavior, not externally motivated behavior. This is the core of The Universist Movement. The Movement is built on a sociological concept which states there can be no freedom without structure. The Universists express this as: personal freedom requires mutual respect for the freedom of everyone. The concept that freedom requires structure may seem contradictory, but it is not. Think of it this way, referring to the old saying that a person's home is his or her castle, which is true as long as that person has control, the ability to do as he pleases, in the home. However, if someone, say a thief, enters the home and knocks the occupant out or ties him up, then it is no longer his castle during the time the thief has control. The man has freedom as long as he has control of the structured setting, the home, but once he loses control his freedom is gone. The home was just mentioned as an example of a structured setting; it can be anything. Imagine a four-way intersection where there are no traffic lights or signs and the right-of-way rule that the car on the right goes first is not obeyed; you have a great setting for a demolition derby. Simply stated, if a Universist wants to be truly free then he must desire freedom for all people; his motto is to harm no one and do what he wants.
The desire for freedom for everyone is one of the two generally applied statements which Universism makes, and it lends itself to the statement that an overall morality does not exist. Only relative morality exists for Universists, that is, the behavior which feels appropriate to the individual; such behavior cannot be prescribed for another individual, for that is the other person's choice to make. This holds as long as the Universist recognizes the rights of each individual; the Universist may think people should act a certain way, but still recognizes that his thoughts are not moral authority.
This leads to Universists taking a relative stand, decided on a case-by-case basis, on various issues. The question of slavery may be given as an example. First, the wrongness of slavery is that it forces one individual's will upon another, thus depriving him of his ability to search for his personal answers. This also applies to persons murdering and injuring each other; there is a violation of another's rights. Universism, however, maintains that each case must be decided on its own basis, as there is no transcendent right or wrong. This does not mean that Universism implies the abandonment of societal and legal laws; what is implied is that such laws, as a whole, do not automatically apply justly in every case, and should be applied separately. In the view of Universism the legal system, judges and juries, are only useful when a crime has been determined to have been committed by one person against another, to determine a corrective action; they have no duty to determine the ultimate, cosmic rightness or wrongness of the action, for that is the responsibility of the persons affected.
Universism judges governmental institutions by their "usefulness," that is, the degree to which they facilitate eudaimonia, a Greek term for flourishing. The term is used instead of "good" because Universism does not recognize the existence of good; to say the institutions were good, and increasingly so as they help more people achieve eudaimonia, would be the same as saying the institutions could reach some ideal plane of existence, which would be nonsense because the institutions are just tools by which people achieve eudaimonia. Since Universism is a more personal, religious philosophy it espouses no specific political philosophy, although most adherents tend to lean toward liberalism and libertarianism. This is mainly because of their free and subjective thinking. If there is a political philosophy at all, it seems to be a mixture of the two: a libertarian might advocate pure capitalism, holding that people do better left on their own, while a liberal would advocate more of a social welfare form of government; the desired result would be a government that helps people to flourish.
Again, this is why Universists do not favor or agree with faith-based, dogmatic religions or institutions. People coming in contact with them, including their adherents, simply are not completely free; the organizations' dogmatic rule is imposed on the people by the nature of the organizations, and people could be thought of as slaves to religion. This is the reason for the separation of church and state in the United States. Religions of faith seek to control people. This is why Universists favor religious symbols being kept out of, and removed from, government buildings; they are not against the symbols themselves but the religious domination which they represent.
To say that Universists are against faith-based religions is not to say Universists do not hold personal religious views. Persons attracted to other philosophies such as Deism, Atheism, Agnosticism, Humanism, Transcendentalism, Scientific Materialism, Pantheism, and many others are also attracted to Universism and welcomed as well. They may identify themselves just as Universists, or as a Deist Universist, or whatever designation they choose. However, once becoming a Universist, the person consciously makes a decision to continue constantly questioning his decisions and choices. This is so because this constant questioning, using reason as in the scientific method along with experience to examine reality, the constant exploring of uncertainty, is the central activity of Universism. For the Deist Universist this means asking himself why he is a deist; for the agnostic, why he is an agnostic. Statements such as "there is a God" or "there is no God", which the deist or the agnostic would make, are unacceptable in Universism because they are declarations of a belief, statements believed to be true but which cannot be supported by fact; such statements also indicate a belief or non-belief in a spirit which cannot be substantiated by reason. However, if the deist or agnostic states that he feels that there is a God, or that there is no God, such a statement is different from the first. The difference is that the first statement attempts to proclaim a truth not supported by reason, while the second declares a feeling or emotion which does not necessitate a reason; although, because of his continuing path of uncertainty, the Universist would certainly try to discover the reason or reasons for his feeling.
Another example is a Universist who is a Pantheist with Gnostic leanings. The Universist would then question his feelings toward Pantheism and Gnosticism. He would certainly have to admit his attachment to the Goddess and other mythological deities was purely emotional, since these deities are spiritual beings whose existence cannot be proven by reason. Similar circumstances can be ascribed to Gnosticism, except there is one similarity between Universism and Gnosticism: both urge their adherents to question or search. However, in Gnosticism the search is believed to end; adherents believe Jesus told them to search until they find, and then search no more. A Universist, therefore, would most likely surrender his Gnostic leanings because his belief in Jesus could not be reasonably substantiated, and he could never stop his search with anything found; such an abortion of the search would indicate the object found was the true goal, not likely proven by reason, so the search would have to continue.
This constant questioning of uncertainty does two principal things for the Universist: it clarifies his own thinking by helping him know through reason whether he is dealing with facts, things known through reasoning and experience, or emotions; and it helps him interpret the statements of others. With this in mind, free will can be discussed; or rather, do Universists believe in free will? Universism takes no position on whether the individual has free will or not. Through reason it is determined that the universe is made up of energy and matter forming reality (some Universists believe a spiritual element is involved too), and in this reality are a multitude of variables with which the individual freely interacts. In this sense the term free will becomes meaningless, for it is practically impossible to determine how and why each reaction occurs. Therefore the only restraint on individual reaction is the laws of nature. And to question the existence of free will is to postulate the existence of a spirit, which Universism does not do. Thus the belief that a mythological deity gave free will to each human being is categorically denied.
It has been asked whether Universism is postmodernist. It is postmodernist as far as religion is concerned, but not as related to the natural world. Universism holds reality to be relative, and science is the tool with which it is deciphered. Metaphysics plays no part in this deciphering, as science cannot address it; only personal reason, intuition, experience, and one's perceived environment are regarded as valid sources for composing personal religious views, and these are considered more valid than claims from revealed sources. Modernism has come and gone, and in the academic community postmodernism has come and gone as well. We have learned from both movements. Universism attempts to take what is best from both and apply it to religion. There is the modernist passion for the search, the eternal optimism, and yet there is a recognition of uncertainty and a desire to appreciate that as part of the human experience, as a motivation, and as a force for good in promoting respect among all fellow seekers.
This habitual questioning of uncertainty is destined to lead the Universist to the culmination of life: death. Universism says that what occurs after death, an afterlife, cannot be known; it cannot be discovered through science or known by reason. To proclaim that there is an afterlife is just as unacceptable as declaring that there is no afterlife; both statements are based on unproven beliefs. The only certainty in respect to the collective fear of death is that we all must wait and see. But perhaps the fear will prove more therapeutic than cognitive dissonance and motivate us to improve earthly life in unanticipated ways.
Previously in this article it was mentioned that through Universism significant changes in behavior could occur. At present this author can readily think of a few. One is the current societal and religious debate over stem cell research. Without going into details, which this author confesses he is unfamiliar with, the reason for the religious objection to such research is that the stem cell, which can produce life, is destroyed. This objection does not consider the fact that a stem cell obtained through an abortion, if not used in the regular birth process, dies anyway. In this argument, therefore, only the destruction of the stem cell, the killing of possible life, is considered, and there is complete disregard for the societal benefits, such as cures for certain diseases and physical handicaps, which could be derived from the research. The religious objection, as this author sees it, is based on the commandment "Thou shalt not kill." In this sense all life, even the potential for life, regardless of whether it will eventually be destroyed, is held sacred, a belief not based on reason or scientific evidence, and shall not be destroyed. It is not known whether this argument will be resolved, but what is known, as this author recently heard, is that through research it has been determined that in the future the stem cell will not have to be destroyed. The main point is that if society had stopped all stem cell research on the authority of dogmatic religious leaders, the research would never have proceeded as far as it has, this new result would never have been discovered, and society would have been robbed of innumerable future benefits. The author does not know the stand of Universism on stem cell research, but he surmises it would be similarly stated, since Universism pledges to provide a global clue to help humanity find better ways of living together, something that no institutionalized religious organization has ever done.
Speaking from a personal point of view, this author sees the advantage of Universism in the study of psychic phenomena, particularly clairaudiovoyance and clairvoyance. People experiencing such phenomena are often said to be psychically gifted, with the inference that the gift is spiritual in nature. The author has experienced several clairaudiovoyance incidents and has corresponded with people having one or both phenomena, mostly clairvoyance. In many of these incidents there was a feeling of spiritual influence, such as saying the ability seemed to have come from "out there," or that "there's something out there." Using Universism, one knows that the person experiencing the psychic phenomenon recognizes it, especially if he experiences it several times or repeatedly; he also recognizes the sensation surrounding it; these recognitions can be described as valid experiences. However, the belief that the cause of such phenomena is spiritual or came from "out there" cannot be explained by either reason or science. The belief, being invalid, is unacceptable. This does not mean that the sensation that there is a cause for the phenomenon is invalid, but believing it has some mystical origin is invalid, since this cannot be proven.
Several years ago this author related some of his clairaudiovoyance experiences to a friend, who commented that perhaps the author had unconsciously trained his mind to receive them. At the time, still holding the spiritual-cause theory, the author did not put much significance in his friend's comments. However, after reading the views of Universism the comments assume more importance. Perhaps the friend was right, or perhaps the answer to the cause of such phenomena lies elsewhere. For instance, it is known that some aborigines could foretell a change in the weather by feeling the rise or fall of barometric pressure against their skin; with the wearing of clothes man lost this ability. The parallel question is: could all the revelation doctrine which has been forced upon man have made him lose some or most of his psychic ability? This uncertainty shifts the inference: man's psychic ability does not come from somewhere "out there," but may lie dormant and forgotten within man. The prospect of invigorating these abilities again would possibly change, if not enhance, our lifestyles.
Perhaps these tentative uses of Universism seem strange, but they seem to indicate a possible new and positive perspective on the world: a world progressing through reason and science instead of deteriorating from faith and rhetoric; a land of plenty where many will flourish. The apparent aim of Universism is to form a global society of freethinkers, respecting the individual opinions of each other, but safeguarding them as well from faith-based religions and institutions. The world is to be hallowed through reason, science and individual experience.
During its short existence The Universist Movement has grown to over 8,000 members, people seeking the goal which the Movement provides. More members are sought and welcomed. New members may sign up at Universist.org/signup.htm. The philosophy of Universism was written by Ford Vox, Founder. Officers of The Universist Movement include Director and President Todd Strickler, Secretary and Treasurer E. Frank Smith, and Assistant Director John Armstrong. Among the websites operated by The Universist Movement are the Universist Global Meeting and the Faithless Community. The Movement also answers and generates media inquiries, engages faith-based organizations in the culture debate, and communicates with the individuals who make up the Universist Movement, offering them advice, information and inspiration to succeed.
This author wishes to thank The Universist Movement for the opportunity to write this article and for allowing the use of the Frequently Asked Questions as its basis.
The recently released Intergovernmental Panel on Climate Change (IPCC) report, 'Climate Change 2014: Impacts, Adaptation, and Vulnerability', finds that the effects of climate change are already occurring on all continents and across the oceans. It also underlines that predictions of ice-free periods in the Arctic Ocean are generally underestimated. Relevant maritime-focused excerpts from the report follow:
Shipping from major European ports to Shanghai is some 40 percent shorter via the Northern Sea Route compared with the Suez Canal. Shorter shipping distance cuts emissions, but increases the probability of shipping accidents with severe consequences for the fragile Arctic environment.
- The global ocean will continue to warm during the 21st century. Heat will penetrate from the surface to the deep ocean and affect ocean circulation.
- It is very likely that the Arctic sea ice cover will continue to shrink and thin and that Northern Hemisphere spring snow cover will decrease during the 21st century as global mean surface temperature rises. Global glacier volume will further decrease.
- Global mean sea level will continue to rise during the 21st century. Under all RCP scenarios, the rate of sea level rise will very likely exceed that observed during 1971 to 2010 due to increased ocean warming and increased loss of mass from glaciers and ice sheets.
- Climate change will affect carbon cycle processes in a way that will exacerbate the increase of CO2 in the atmosphere (high confidence). Further uptake of carbon by the ocean will increase ocean acidification.
A total of 309 coordinating lead authors, lead authors, and review editors, drawn from 70 countries, were selected to produce the report. They enlisted the help of 436 contributing authors, and a total of 1,729 expert and government reviewers.
The report and other related information can be accessed at: http://www.ipcc.ch/
It is ironic that Van Til charged Clark with rationalism when Van Til held to the logical conclusion of Hegelianism. The doctrine of internal relations essentially states that "everything has some relation, however distant, to everything else" (link). If this doctrine is true, as I think it must be (I may write a post on this later), then the question arises as to how one can learn anything. I have argued (here, for example) that internal relations means that the source of knowledge must be an eternally omniscient being.

The initial sentiment is true, but unfortunately, I confounded the doctrine of internal relations with the idea that everything is related to everything else. Well, my lingering confusion was pretty much what I deserved for relying on Wikipedia. Clark more precisely defines internal relations in Historiography: Secular and Religious on pgs. 225-226 (1971):
Reenactment of a thought is possible, nonetheless, because it can be separate from this immediacy without alteration. Not only so, it can be separated from other thoughts without alteration. Thus history becomes possible.
This self-identity of the act of thought has been denied by two extreme views. The first view is that of idealism, the theory of internal relations, the notion that everything is what it is because of its context. This makes history impossible. To know any one thing it would be necessary to know its context; i.e. to know the whole universe. Knowledge would thus be restricted to the explicit consciousness of the omniscient Absolutes; and Collingwood, though he may be Beckett, does not claim to be the Absolute.That is, the theory of internal relations does indeed posit that everything is related to everything else - an idea which I agree with and which Clark himself defends in the same book (cf. pgs. 179, 183) - but it does so in such a way that partial knowledge is impossible.
With this definition in mind, Clark's reference to "idealism" and especially "the [omniscient] Absolute" clearly indicates he is thinking about Hegel. Subsequent reference to Hegel on pg. 334 when mentioning the problem of partial knowledge also confirms this:
What follows if it is true that psychological analysis presupposes a “complete knowledge of the psychological possibilities of life”? It would follow, would it not, that historical analysis also presupposes a complete knowledge of historical possibilities. In short, it would be impossible to know anything without knowing everything.
Such a Platonic or Hegelian requirement of omniscience is a serious philosophical problem. It is not to be dismissed thoughtlessly. The meticulous scholar, J. H. Hexter, in his Reappraisals of History, castigates historical relativism as a fad and insists on the “rudimentary distinction” between knowing something and knowing everything. But he omits all philosophic justification for this distinction.
Undoubtedly this distinction must be maintained, if a human being is able to know anything at all. Make omniscience the prerequisite of partial knowledge, and partial knowledge vanishes. But Bultmann, like Hexter, offers no help: less help, in fact, for Bultmann lets the requirements of omniscience stand.
That relations are internal, and especially that the truth is the whole, are themes hard to deny. Yet their implications are devastating. So long as you or I do not know the relationships which constitute the meaning of cat or self, we do not know the object in question. If we say that we know some of the relationships – e.g., a cat is not-a-dog and admit that we do not know other relationships – e.g., a cat is not-an-(animal we have never heard of before) – it follows that we cannot know how this unknown relationship may alter our view of the relationship we now say we know. The alteration could be considerable. Therefore we cannot know even one relationship without knowing all. Obviously we do not know all. Therefore we know nothing.
This criticism is exceedingly disconcerting to an Hegelian, for its principle applies not merely to cats, dogs, and selves, but to the Absolute itself. The truth is the whole and the whole is the Absolute. But obviously we do not know the whole; we do not know the Absolute. In fact, not knowing the Absolute, we cannot know even that there is an Absolute. But how can Absolute Idealism be based on absolute ignorance? And ours is absolute ignorance, for we cannot know one thing without knowing all. (Christian Philosophy, pg. 153)Now, I said in the above link that Van Til was more truly Hegelian than Clark, and the reason why has to do with internal relations. The first quote from that link provides the key:
Gordon was absolutely insistent that we did know some of the same things that God knew. If not, he insisted, it would be impossible for us to know any truth at all! That 2 plus 2 equals 4 is true, he felt. Thus he insisted that in and of itself it is true as a statement without the necessity of examining another proposition. He carefully insisted upon a propositional concept of truth while Van Til insisted upon the fact that to have truth in one's mind that mind must be built upon other propositions. The truthfulness or falsity demanded that the individual proposition be held in the midst of certain other basic propositions that must be consciously present in that mind in order to correctly know truth. Now, of course, God knows every proposition in the context of all other propositions for Van Til, and, therefore, the limited human mind never knows it the way God does. Van Til had an expression, oft repeated: "true as far as it goes," meaning, of course, that for that mind which holds all propositions in a system, the more complete the system, the more full the truth. With growth in the knowledge of basic propositions, the further that mind had the truth. Van Til's concept is that relative human beings can have all needful truth but never perceive it as God does with his infinite knowledge of everything that affects any proposition. He charged Clark, therefore, with denying the incomprehensibility of God, and Clark charged him with agnosticism, since he held that for him it was impossible to know anything as God did. Clark wanted an absolute even if it were only in the single proposition. (Gordon Clark: Personal Recollections, pgs. 103-104)

Note that for Van Til, it is because "God knows every proposition in the context of all other propositions" that "the limited human mind never knows it the way God does." Well, how does this follow? The idea that "God knows every proposition in the context of all other propositions" is true, but Van Tilians must go further by arguing that knowledge [which is univocal to God's] of even one proposition would necessarily imply knowledge of all propositions, which is why our knowledge must be analogical - for we aren't omniscient. But the only reason such would necessarily follow is if the doctrine of internal relations is true.
Recall that above, Clark defined "the theory of internal relations" as "the notion that everything is what it is because of its context... To know any one thing it would be necessary to know its context." This is precisely what Van Tilians argue univocal knowledge would entail. This is the only explanation of how "God knows every proposition in the context of all other propositions" can allegedly imply "the limited human mind never knows it the way God does."
On the contrary, that God necessarily knows everything in the context of everything else is what functions as the basis according to which we can be assured that the doctrine of internal relations is false. Who better than an omniscient God could know that partial knowledge is possible? Revelation is sufficient.
NASA Reveals Key to Unlock Mysterious Red Glow in Space
MOFFETT FIELD, Calif. -- NASA scientists created a unique collection of polycyclic aromatic hydrocarbon (PAH) spectra to interpret mysterious emission from space. Because PAHs are a major product of combustion, remain in the environment, and are carcinogenic, the value of this PAH spectral collection extends far beyond NASA and astronomical applications.
For years, scientists have been studying a mysterious infrared glow from the Milky Way and other galaxies, radiating from dusty regions in deep space. By duplicating the harsh conditions of space in their laboratories and computers, scientists have identified the mystifying infrared emitters as PAHs. PAHs are flat, chicken-wire shaped, nano-sized molecules that are very common on Earth.
“PAHs in space are probably produced by carbon-rich, giant stars. A similar process produces soots here on Earth,” said Louis Allamandola, an astrochemistry researcher at NASA’s Ames Research Center, Moffett Field, Calif. “Besides astronomical applications, this PAH database and software can be useful as a new research tool for scientists, educators, policy makers, and consultants working in the fields of medicine, health, chemistry, fuel composition, engine design, environmental assessment, environmental monitoring, and environmental protection.”
To manage the research data, NASA built a database that now can be shared over the internet. It’s the world’s largest collection of PAH infrared data, and the website contains nearly 700 spectra of PAHs in their neutral and electrically charged states. In addition, it has tools to download PAH spectra ranging in temperature from minus 470 to 2000 degrees Fahrenheit. Thanks to these spectra, PAHs are now known to be abundant throughout the universe, but in exotic forms not readily found on Earth.
This mysterious infrared radiation from interstellar space was discovered in the 1970s and 1980s. While the infrared signature hinted that PAHs might be responsible, laboratory spectra of only a handful of small, individual PAHs were available to test this idea. To make matters worse, these were only for neutral, solid PAHs, not representative of PAHs as they would be in space, where they'd be electrically charged, very cold, individual molecules floating in the gas.
By the mid-1990s, observations showed that this infrared emission was surprisingly common and widespread across the universe, implying that the unknown carrier was abundant and important. To better understand PAHs, then thought to be too complex to be present in space, their spectra were measured under simulated astronomical conditions.
To capture these spectra, Allamandola led a team of scientists who measured PAH spectra under simulated astronomical conditions and computed them with software. This team consisted of experts in many different fields. "This group made a tremendous effort to make this a reality," said Allamandola. "There are now nearly 700 spectra in the database. Six hundred of these have been theoretically computed, and sixty have been measured in the laboratory. The theoretical spectra span the range from two to 2000 microns; the experimental spectra cover two to 25 microns."
The spectra have given insights into the PAHs in space that were impossible to get any other way. Scientists predict that in the near future these spectra will be especially valuable for interpreting observations made with NASA's new airborne observatory, the Stratospheric Observatory for Infrared Astronomy (SOFIA) and the recently launched European Space Agency's (ESA) Herschel Telescope.
They tried to make the website user friendly for researchers. One can explore the database by charge, composition and spectral signatures. Tools allow users to do analyses online. For example, spectra can be combined to create a `composite' signature that can be compared directly to the spectrum of ‘unknown’ material.
"We will expand the database and tools,” said Christiaan Boersma, a NASA postdoctoral fellow at Ames, who designed and developed many parts of the website and tools. "We now use the database to interpret astronomical observations from star and planet forming regions in our galaxy, the Milky Way, and even other galaxies."
“Initially, our hope was to help interpret the experimental spectra, but over time, our computational capabilities made it possible to study molecules much larger than can be studied in the laboratory,” said Charles Bauschlicher, Jr., a world renowned computational chemist at NASA Ames.
"Thanks to the great sensitivity of the Spitzer Telescope, PAHs are seen across the universe, removing any doubt of the importance of these species,” said Allamandola.
The database is available at http://www.astrochem.org/pahdb
. More information about the database and graphics are available at http://www.astrochem.org/pahdb/pressrelease
Light / Dark Relay Switch
A simple light/dark activated relay switch circuit, suitable for many applications such as automatically switching the lights in a shop window or a room according to the ambient light level. The circuit uses a light dependent resistor (LDR). A light/dark option has been incorporated; the term "light/dark" is used because the circuit has a PCB-mounted switch on board. In one switch position a light-to-dark transition will activate the relay; in the other position a dark-to-light transition is required. So you can use the falling light on the detector to switch on a normally-off circuit or switch off a normally-on circuit. The relay is on when the LDR is uncovered and off when the LDR is covered. Adjust VR1 for light sensitivity. The LED turns on at the same time as the relay.
This DIY lightning detector circuit is a very sensitive static electricity detector that can provide an early warning of approaching storms from inter-cloud discharge well before an earth-to-sky return strike takes place. An aerial (antenna) formed of a short length of wire detects storms within a two mile radius.
The circuit emits an audible warning tone from a piezo buzzer, or flashes an LED, for each discharge detected, giving you advance warning of impending storms so that precautions may be taken.
LM386 Audio Probe Amplifier
The LM386 audio probe amplifier is an essential tool for troubleshooting audio stages in audio-related circuits such as amplifiers, oscillators, function generators, phone circuits, radios and lots of our other projects. It is a very handy piece of test equipment that can be built on a pre-drilled board and will make a perfect addition to your electronics collection. There are lots of things that can go wrong with an audio stage. It can produce distortion or a "hollow" sound, go weak or simply fail altogether.
Likewise tone circuits can present a number of faults and it is very handy to be able to “hear” what is going wrong.
It is not sufficient to measure the DC voltages on these stages. This only gives a partial picture of the conditions and does not tell you the quality of the audio being processed. To determine this you need a piece of test equipment that will let you see or hear what is being processed. Some of the projects you can test with the Mini Bench Amplifier are tone circuits, while others are audio circuits. Tone circuits and audio stages are surprisingly difficult to test unless you have an audio probe or an oscilloscope. An oscilloscope is the ideal piece of equipment, but if your budget does not extend that far, the next best thing is an audio probe.
Microphone Amplifier with TDA7050 IC
Here's a simple microphone amplifier based around the TDA7050 IC. There are many schematics to choose from; the choice fell on the TDA7050 amplifier chip, whose only downside was that it was not designed for a microphone input. By adding a 4.7 kOhm resistor R1 to the circuit, the amplifier can now work with common condenser microphones.

If resistor R3 is removed, the signal can be fed to a small speaker with a 32-ohm winding.

The amplifier itself runs from 3 to 5 volts, but since the unit must be powered from 12 V, a 5 V linear regulator was added to the circuit.

The circuit was tested with headphones (current consumption was 50 mA) and by connecting it to the line input of a TV.
Linear scale - Small size
40 to 208 beats per minute
Motion Activated Camera
I made a relatively simple attachment to my Canon SLR to create a motion activated camera using Arduino. A lot of this was based on and inspired by the intervalometer project at The Honey Jar. I made some changes to his circuit to use a 4N35 optocoupler instead of reed relays.
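The original write-up doesn't include its sketch here, so the following is only a minimal sketch of the general approach, with made-up pin assignments: a motion sensor (for example, a PIR module) drives an input pin, and on motion the Arduino pulses an output pin that lights the 4N35's internal LED, whose output transistor then closes the camera's remote-shutter contact.

```cpp
// Minimal motion-trigger sketch (illustrative only; pin numbers and timing
// are assumptions, not taken from the original project).
const int PIR_PIN = 2;      // PIR motion sensor output (assumed wiring)
const int SHUTTER_PIN = 8;  // drives the 4N35 LED through a current-limiting resistor

void setup() {
  pinMode(PIR_PIN, INPUT);
  pinMode(SHUTTER_PIN, OUTPUT);
  digitalWrite(SHUTTER_PIN, LOW);   // shutter contact open
}

void loop() {
  if (digitalRead(PIR_PIN) == HIGH) {  // motion detected
    digitalWrite(SHUTTER_PIN, HIGH);   // opto closes the shutter contact
    delay(200);                        // hold long enough for the camera to fire
    digitalWrite(SHUTTER_PIN, LOW);
    delay(2000);                       // simple lockout between shots
  }
}
```

The optocoupler matters here: it keeps the Arduino electrically isolated from the camera's shutter-release circuit, which is the same role the reed relays played in the original intervalometer design.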
One Button Digital On-Off Switch
The load is driven by an IRFZ44 MOSFET, and 4093 NAND gates are used in the circuit. The output of the 4093 IC drives the MOSFET. A single button lets you toggle the on/off state of the electronic circuits in which you use this switch. The circuit schematic is shown above; click the image to see the larger schematic diagram.
Oscilloscope Probe Schematic & Anatomy
Passive probes are the most commonly used scope probes. As the name "passive" suggests, they are made from passive components: resistors, capacitors and wire. The leading scope probe makers are LeCroy, Tektronix & Agilent.

Passive probes usually come with attenuation factors of 1:1, 10:1 and 100:1. An attenuation factor of 1:1 means that whatever signal is present at the probe tip will be shown exactly as it is at the oscilloscope input. So a signal of 1V at the probe tip will be detected as 1V at the scope input.

An attenuation factor of 10:1 means that a signal of 1V at the probe tip will be detected as 0.1V at the scope input.
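The article doesn't give component values, but a 10:1 passive probe is conventionally built as a 9 megohm series resistor in the probe tip working against the scope's 1 megohm input resistance, forming a simple voltage divider. The short program below uses those conventional (assumed) values to reproduce the 1 V to 0.1 V figure quoted above:

```cpp
#include <iostream>

// Voltage divider behind a typical 10:1 passive probe: a 9 Mohm series
// resistor in the probe tip feeding the scope's 1 Mohm input resistance.
// (These are the conventional values, not ones stated in the article.)
int main() {
    const double r_probe = 9.0e6;  // series resistor in the probe tip (ohms)
    const double r_scope = 1.0e6;  // oscilloscope input resistance (ohms)
    const double v_tip   = 1.0;    // signal at the probe tip (volts)

    double v_scope = v_tip * r_scope / (r_probe + r_scope);
    std::cout << "Signal seen by the scope: " << v_scope << " V\n";  // prints 0.1
    return 0;
}
```

The same divider also explains why the 10:1 setting loads the circuit under test less: the input resistance it presents rises from 1 megohm to 10 megohms.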
PAL/NTSC Video Text Generator
Perfect low cost solution for:
* New video security installations
* SSTV transmitters
* Amateur video
* Existing installed security installations
* Scientific experimentation monitoring
* and any other application that needs the time and date recorded on an image!
Park Assist Circuit
The Park Assist circuit was designed as an aid in parking the car near the garage wall when backing up. LED D7 illuminates when the bumper-to-wall distance is about 20 cm, D7+D6 illuminate at about 10 cm, and D7+D6+D5 at about 6 cm. In this manner you are alerted when approaching too close to the wall. All the distances mentioned above can vary, depending on the infra-red transmitting and receiving LEDs used, and are mostly affected by the color of the reflecting surface; black surfaces greatly lower the device's sensitivity. Obviously, you can use this circuit in other applications such as liquid level detection, proximity devices, etc.
Hypolordosis (flatback) of the lumbar spine is when your low back has too little curve compared with what is considered normal.
The word is of Greek origin where “hypo” means “under” and “lordosis” means “bending backwards”.
The lumbar lordotic curve starts at the first lumbar vertebra and ends at the top of the sacrum.
The lumbar curve is not a true arc. The shape of the body of the vertebrae and their shared discs is what determines the general shape of the curvature. Because the bodies of the vertebrae and the discs are somewhat thicker in the front helps to determine the backward curve or lordosis of the lumbar spine.
There is a greater degree of angulation in the lower lumbar spine and a lesser degree in the upper lumbar spine. If the curve were a true arc, the transition from one spinal area to the other (lumbar to thoracic) would require greater angulation at the transitional zone. This would place extreme stress on the transition area, resulting in disc and facet joint injury and premature degeneration.
Therefore, the transition must be a gradual one.
When there is a hypolordosis, or a lesser curve, there is added stress on the anterior (front) structures, normally the discs and the bodies of the vertebrae. This added stress will create compressive forces on the disc and the vertebra which will cause reactive bone changes creating osteophytes or bone spurs.
This flat back elongates the posterior back muscles, shortens the hamstrings, and lengthens or pulls on the anterior hip flexor muscles, all of which will cause a posterior pelvic tilt.
As the body attempts to remain in balance there is reactive muscle contractions and relaxations throughout the entire body.
What will happen is a systemic neuro-musculo-skeletal reaction that will render you susceptible not only to chronic recurrent strain but also to the development of trigger points, joint irritation/inflammation, disc degeneration, abnormal neuro-muscular function, compression fractures of the vertebrae, headaches, etc.
The basic corrective approach to hypolordosis is the same corrective actions that would be applied to any abnormal posture. A total, complete and thorough analysis from the foundation (feet) all the way to the top (head) should be conducted.
This will include identifying and correcting all structural and functional deficits that are correctable. The feet need to be evaluated for dropped arches (pes planus), pronation, supination, Morton's foot, ligament laxity, muscular imbalances and flexibility. Attention and recommendations should be given to proper footwear as well.
Proper posture should be developed and maintained.
If you are overweight or obese this will create unnecessary stresses and strains not only on your low back but throughout your entire body.
All habits and lifestyle activities must be considered as contributing and causative factors.
A regular physical fitness program should be incorporated into each day with special emphasis on a brisk daily walk of ideally 60 minutes.
As always, you are responsible for your health and well-being and your actions or lack of will ultimately determine your level of health.
The Strengths of Latino Families
Did you know that Latino children begin school with strong social and emotional skills that are quite similar to those of children from middle-class white families? Did you know that math scores for first-generation Latinos in elementary school are strong, even though many of these children have limited English skills? Did you know that eight out of 10 Latino toddlers are being raised in two-parent homes?
These are some of the findings reported in a research brief, "The Cultural Strengths of Latino Families: Firm Scaffolds for Children and Youth." The writers of the brief encourage writers and editors to feature some of the strengths of Latino families in stories as well as some of the challenges facing Latinos, such as poverty. It's probably good advice for educators as well to recognize the strengths of Latino families in trying to support Latino children and youths to do well in school.
The brief was released by New Journalism on Latino Children, which is a project of the Education Writers Association and the National Panel on Latino Children and Schooling, based at Berkeley's Institute of Human Development.
A second research brief released at the same time by New Journalism on Latino Children, "Getting Latino Youth Through High School," describes programs that have been effective in helping Latinos stay in school. The reasons that the dropout rate is high among Latinos may include poverty, a lack of literacy skills, and low quality of schooling, according to the brief.
A collimator is an optical instrument similar to a telescope, but works in the opposite way. When an object is placed in the focal plane of the collimator and viewed, it seems very far away (virtually at infinity), Rattan said.
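The "virtually at infinity" behavior follows from the standard thin-lens relation (this equation is general optics background, not something stated in the article). With object distance $d_o$ and image distance $d_i$ for a lens of focal length $f$:

$$\frac{1}{f} = \frac{1}{d_o} + \frac{1}{d_i}, \qquad d_o = f \;\Rightarrow\; \frac{1}{d_i} = 0,$$

so placing the object exactly at the focal plane sends the image distance to infinity: the rays leave the collimator parallel, just as if they came from a very distant source.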
A typical application consists of placing the output of an optical fibre in a collimator's focal plane in order to obtain a star simulator. Another application is to have a reference target in the collimator's focal plane to calibrate telescopes and other spaceborne cameras and other instruments.
Collimators are thus well suited to test spacecraft instruments, whether astronomical or aimed at earth observation, such as the ones manufactured, assembled and tested by ISRO/SAC.
Earlier contracts executed by AMOS for ISRO include two collimators of one metre diameter for ISRO-SAC, Ahmedabad. The company is also engaged in the design, manufacturing and installation of a space simulator for ISRO in Bangalore.
AMOS has been supplying equipment to ESA (European Space Agency), NASA and Space Agency of Korea.
(THROUGH ASIA PULSE)
<<Press Trust of India -- 03/28/06>>
A successful design/build project is completed on time and within budget, and meets the customer’s needs and expectations. Customers expect a reliable, maintainable system.
These characteristics are not abstract terms—they are characteristics of a power distribution system that need to be considered in the design as much as load magnitude and other load characteristics.
The term "reliability" as it is used in this article means a system's ability to operate without failure for an acceptable period of time. The customer must define the degree of reliability required, consider the consequences of an outage, and work with the electrical contractor to select the system that provides the necessary reliability.
“Availability” is the system’s ability to function when needed. The term is predictive and deals with the future ability of a power system, subsystem, or component to operate when needed.
The term “reliability” is often used when “availability” would be more appropriate. For example, regarding the utility power supply to a facility, “reliability” would be the characteristic of interest to the customer because it refers to the likelihood of the primary utility power supply operating without interruption. But “availability” might more appropriately describe the likelihood of the emergency power system operating when the utility system did fail.
Quantifying system reliability and availability

Power system reliability and availability can be quantified through a risk assessment study that is typically called a "reliability study." This study for a commercial, industrial, or institutional power system uses industry data on the failure rates of individual components such as switchgear, transformers, and cable to estimate the owner's risk of losing power.

"Reliability" is quantified as the probability of operating without failure for a given period, and "availability" as the probability that a system will function when needed. Using the study results, the expected cost of a power outage can be calculated and design alternatives compared, to balance the customer's expected costs and benefits.
In most cases, a reliability study that quantifies the risk of a power outage is unnecessary.
Understanding the customer’s business and the consequences of a power outage will provide insight to the degree of reliability that is expected and the type of system needed. However, if the cost of a power outage is significant to the customer or if its design/build project performance requirements specify a minimum system reliability, then a study should be performed and the results reviewed and approved by the customer.
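As a rough illustration of the arithmetic behind such a study (the figures below are invented, not drawn from any actual study): steady-state availability is commonly estimated as MTBF / (MTBF + MTTR), and multiplying the resulting expected downtime by the customer's cost per outage hour gives an expected annual outage cost that can be compared across design alternatives.

```cpp
#include <iostream>

// Availability from mean time between failures (MTBF) and mean time to
// repair (MTTR), plus the expected annual outage cost. All inputs are
// illustrative assumptions, not data from a specific facility or study.
int main() {
    const double mtbf_hours = 8760.0;   // on average, one failure per year
    const double mttr_hours = 4.0;      // average time to restore power
    const double cost_per_outage_hour = 25000.0;  // customer's downtime cost ($/h)

    double availability = mtbf_hours / (mtbf_hours + mttr_hours);
    double downtime_h_per_year = 8760.0 * (1.0 - availability);
    double expected_annual_cost = downtime_h_per_year * cost_per_outage_hour;

    std::cout << "Availability:                " << availability << "\n";
    std::cout << "Expected downtime (h/yr):    " << downtime_h_per_year << "\n";
    std::cout << "Expected outage cost ($/yr): " << expected_annual_cost << "\n";
    return 0;
}
```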
Factors impacting reliability
Power distribution systems’ reliability is impacted by the primary power source, system configuration, equipment operating environment, and other factors.
Reliability of primary power source. The reliability of a facility’s primary power source is an important factor in the design of the customer’s power distribution system.
Commercial, industrial, and institutional facilities almost always get their primary source of power from the local utility. The first step in designing for reliability is to evaluate the reliability of the utility supply at the facility’s location. This information can be obtained from the local utility as well as from discussions with the facility managers of surrounding properties.
If a more reliable source of power is required than what can be provided by the local utility, then the installation of a standby power source may be required, which could include a second utility feed, on-site generation, or uninterruptible power supplies (UPS).
System configuration. In a perfect world, a simple radial distribution system [Figure 1] would be adequate for all commercial, industrial, and institutional facilities. A radial distribution system is the most common, least expensive, and simplest configuration that can be installed.
However, a radial system has only a single connection from the primary power source to the load, which may not provide the reliability needed for the customer’s loads. Redundancy in power sources, distribution equipment, and feeders will improve the reliability.
A number of arrangements allow the load to be fed from multiple feeders or through multiple transformers.
These arrangements include primary [Figure 2] and secondary selective [Figure 3] arrangements, as well as secondary spot networks [Figure 4] using specially matched transformers and network protectors on the transformer secondaries. For example, a double-ended substation is a secondary selective arrangement where the loads on the substation can be fed from either of two primary feeders or stepdown transformers.
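To a first approximation, and assuming the two supplies fail independently, the benefit of such redundancy can be estimated by treating the feeders as parallel elements: if each feeder alone has availability A, service from either of two feeders has availability 1 - (1 - A)^2. The sketch below uses an invented single-feeder figure to show the scale of the improvement:

```cpp
#include <cmath>
#include <iostream>

// First-order estimate of how a redundant (parallel) feed improves
// availability, assuming independent failures. The single-feeder
// availability is an illustrative assumption.
int main() {
    const double a_single = 0.999;  // one feeder: roughly 8.8 hours down per year
    double a_parallel = 1.0 - std::pow(1.0 - a_single, 2);  // either of two feeders

    std::cout << "Single feed:   " << a_single << "\n";
    std::cout << "Parallel feed: " << a_parallel << "\n";  // ~0.009 hours down per year
    return 0;
}
```

In practice the improvement is smaller than this idealized figure, since common-mode events (storms, utility-wide outages, maintenance errors) can take out both feeders at once.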
Equipment operating environment. Equipment and materials must be designed, tested, and approved for use in their intended thermal and physical environments. If they are unsuitable for these environments, they may fail prematurely and cause outages.
Economics of reliability
A balance between reliability and cost must be struck for the design to be successful. If the degree of reliability designed into the system cannot be justified by the customer’s needs, then the money spent achieving the excessive reliability is wasted. The contractor should use its collective knowledge and experience to provide the customer with a power distribution system that meets its needs as economically as possible.
A good design considers power distribution system maintainability. Otherwise, needed preventive maintenance may be unduly deferred or neglected, reducing system reliability. The system should be designed and equipment selected with ease of maintenance in mind. Adequate working space and access around equipment exceeding the minimums specified in Article 110 of the National Electrical Code (NEC) should be provided.
Equipment should be arranged and located to facilitate maintenance activities. The ability to switch to an alternate feeder or an internal equipment bypass should be provided to allow critical loads to operate while distribution equipment such as UPS systems are taken out of service for preventive maintenance.
Finally, electrical equipment needs to be specified with the necessary access and accessories to make it easy to maintain.
Use your service advantage
Service personnel who deal with reliability and maintainability problems every day should be involved in design/build projects, especially if the design/build project customer is a regular one. They know how the systems will be used, their expectations, customer preferences, the physical environment within the customer’s facility, the access required for regular maintenance, and the customer’s maintenance program, among other things.
This article is the result of ongoing research into the development of service contracting business by electrical contracting firms sponsored by the Electrical Contracting Foundation, Inc. The author would like to thank the foundation for its continuing support.
GLAVINICH is Chair and Associate Professor of Architectural Engineering at The University of Kansas. He can be reached at (785) 864-3435 or email@example.com.
SPF File Validation - Test Your SPF Records
What is SPF? SPF (Sender Policy Framework) fights return-path address forgery and makes it easier to identify spoofs.
Domain owners identify sending mail servers in DNS. SMTP receivers verify the envelope sender address against this information, and can distinguish authentic messages from forgeries before any message data is transmitted.
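For illustration, an SPF policy is published as a TXT record in the sending domain's DNS zone. The record below is a hypothetical example (the domain and address range are placeholders): it authorizes the domain's MX hosts and one IPv4 range, and the "-all" mechanism tells receivers to fail mail from any other source.

```
example.com.  IN  TXT  "v=spf1 mx ip4:192.0.2.0/24 -all"
```

A receiving mail server looks up this record when it sees a MAIL FROM address at example.com, checks the connecting IP against the listed mechanisms, and arrives at one of the results described below.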
This SPF testing tool is brought to you by the good folks at DNS Report and DNS Stuff, an excellent resource for network and domain name tools.
Possible Results from the SPF Testing Utility from DNS Stuff
- This IP is authorized to send email from this domain.
- This IP is not authorized to send email from this domain.
- This IP probably is not authorized to send email from this domain, but the domain owners are not certain.
- The domain does not know if the IP is allowed to send email or not.
- A temporary error occurred. The email should be retried later.
- A permanent error was encountered. The email should be rejected.
- No SPF record was found. It cannot be determined if the IP is allowed to send email from this domain.
Mount Tabor Reservoirs
The Mount Tabor reservoir complex was built between 1894 and 1911 and is an integral part of the City of Portland's water supply system. Three reservoirs at Mt. Tabor (Nos. 1, 5, and 6) are in operation. The dam at Reservoir No. 1 is a 30-foot-high concrete structure with a downstream earthfill buttress. The dam at Reservoir No. 5 is similar but 55 feet high. Both are built on the upper slopes of Mount Tabor, a dormant volcanic vent. The dam enclosing Reservoir No. 6 is an earth embankment 2,100 feet long and up to 28 feet high. The City generates power from the 110 feet of head between Reservoirs No. 5 and No. 6.
The foundation below the embankment dam at Reservoir No. 6 contains clean, alluvial sand deposits, 14 to 47 feet thick. A FERC dam safety study raised concern about potential liquefaction of the foundation sands. Cornforth Consultants was retained by the Portland Water Bureau to assess the liquefaction hazard and risk posed to the reservoirs. The study included 19 exploratory borings to evaluate soil type and density; gradation testing to determine silt content; a probabilistic seismic hazard evaluation to develop the design earthquake; a finite element analysis to determine in-situ stresses below the embankment; and an assessment of cyclic stresses within the foundation.
The results of the analyses determined that localized pockets of foundation sand could liquefy; however, the vulnerable pockets were isolated and not laterally continuous. Therefore, excess pore-water pressures generated in the localized pockets would dissipate into denser soil without threatening the stability of the embankment.
After learning about the four quadrants in the coordinate plane students in Ms. Krista Chapman’s class at the Carl H. Kumpf Middle School in Clark are asked to plot themselves.
Students are given an ordered pair and a large-scale coordinate plane constructed on the floor, using each tile as a unit. Students learned that their placement would be determined by both the x and y values given to them. While students are experienced at plotting positive ordered pairs, plotting negative values added a challenge.
When called, the student began at the origin of the graph, point (0,0), facing quadrants I and II. The student then walked the appropriate number of tiles along the x-axis.
Students quickly learned that they would need to walk to the left for a negative x-value and to the right for a positive x-value. After moving to the correct x-value students would walk forward or backward to the given y-value.
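The walking rule the students followed is, in effect, a sign test on each coordinate. Purely as an illustration (this code is not part of the lesson, and the ordered pair below is made up), the same procedure can be written as a short program:

```cpp
#include <cstdlib>
#include <iostream>

// Restates the classroom walking rule: from the origin, move left or right
// along the x-axis by |x| tiles, then backward or forward by |y| tiles.
void walkTo(int x, int y) {
    std::cout << "Walk " << std::abs(x) << " tile(s) "
              << (x < 0 ? "left" : "right") << " along the x-axis, then "
              << std::abs(y) << " tile(s) "
              << (y < 0 ? "backward" : "forward") << ".\n";
}

int main() {
    walkTo(-3, 2);  // e.g., the ordered pair (-3, 2)
    return 0;
}
```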
Each student had a paper displaying the ordered pair for the class to view. It was the students’ job to make sure their peers found the correct point. After all students were standing on their ordered pair, students had the opportunity to guess what shape the points created.
Andrew Lakkis and Deno Nicholas hold their ordered pair while Michael Shakleton helps Nick Ridente find his spot on the Coordinate plane in Ms. Chapman’s Mathematics Class.
The Creation Answers Book Study Guide
Lesson 18: How did all the different ‘races’ arise (from Noah’s family)?
Text: The Creation Answers Book, chapter 18
What factors determine skin colour?
What is the function of melanin?
Explain, in your own words, what factors have been involved and how the characteristics of various people groups could have come about.
Look up a definition of the ‘founder effect’. What does this suggest about the speed of physical changes that can occur when a small number of people found a new population?
Is it likely that, after the Babel event, many of these small people-groups became extinct due to in-breeding? Why or why not? (See also The Creation Answers Book, chapter 8.)
What hints does the Bible give on how quickly the different people-groups developed their distinctive features? (Genesis 27:11, Numbers 13:32–33, Isaiah 18:2,7, Jeremiah 13:23)
What is the connection between the Babel event (Genesis 11) and the Pentecost event (Acts 2:1–13)? (See also Colossians 3:11, Isaiah 28:10–13, Deuteronomy 28:49). Hint: think about the scope of Jesus’ death in fixing the effects of sin.
Read Genesis 9:18–27.
Who received the curse pronounced by Noah?
Why did Noah pronounce this curse?
Are black people the result of a curse on Ham? (See The Creation Answers Book chapter 18, section “Is black skin due to the curse on Ham?”)
Read Genesis 4 and list some accomplishments of our pre-Flood forebears.
If your family were suddenly cut off from civilization:
What types of skills would already be present within your group?
What skills would be there if your extended family were included?
What would be missing?
Research present-day ‘cave people’. (See also The amazing cave people of Malta.)
- Write an essay on how wrong ideas about the origin of people groups have affected the spread of the Gospel. What are the implications for how we do cross-cultural evangelism now? How have evolutionary-based ideas provided support for racism?
Section: ‘Interracial marriage’
See The Bible and interracial marriage for help with the following section on ‘interracial’ marriage.
What was God’s purpose in scattering the people who were building the Tower of Babel?
What was the mechanism God used to disperse the people?
What is the biblical basis and purpose for marriage?
What are the biblical requirements for a Christian’s potential marriage partner?
List some biblical examples of so-called ‘interracial marriages’ and explain the significance of them.
With Mac OS X, Apple is bringing Unix to a large, new audience. In part one of this article, I offered a brief history of Unix and mapped out how Unix will provide the basis of Mac OS X. The Macintosh user community comprises well over 25 million people, so as Apple paves a new path - even if most don't follow it immediately (or ever) - the implications for the industry are significant. Apple's last major change of direction, the iMac, introduced translucent colors, a strikingly original case design, USB, the removal of floppy disks and serial ports, and Internet access as a major feature. The iMac had a profound impact on the whole industry - even PC and PDA users without iMacs were affected by the iMac's endorsement of colors and USB. To understand Apple's latest decisions behind Mac OS X and its impact, it's necessary to examine Mac OS X, Unix, and the industry as a whole.
Competition Makes Strange Bedfellows -- In this industry, the dominant player is obvious: Microsoft. In fact, Microsoft is so much larger and more entrenched than any other company, including Apple, that they're almost a feature of the landscape. All Apple's plans for years have been made around the realities of playing with, against, and off of Windows PCs. As it turns out, this is just as true for Linux and BSD Unix users - perhaps even more so, because the PCs that Linux and BSD generally run on can (and often do) also run Windows. This raises an interesting question: are Apple and users of Unix-based systems natural allies, trying to carve different niches from the Windows market? It would seem that by basing Mac OS X on BSD Unix, at least Apple is endorsing this view.
Despite their fundamental differences, the Mac OS and Unix have a number of interesting similarities. Both platforms are shadowed by Microsoft's dominance but boast vigorous support within their own communities. The Mac OS and Unix have to "fit in and stand out," and success is often determined by how well they integrate with Windows. Windows can't (without the addition of a utility like Mediafour's MacDrive 2000) read Mac or Unix file formats or disk formats, but Macs and many Unix systems can both read Windows (FAT) floppies and hard disks. In contrast, Windows has so much market share that various "private" Microsoft technologies, such as the Word .doc file format and the Win32 APIs, have become de facto standards. In turn, Macs and Unix machines support these Microsoft-originated technologies to varying degrees, with Mac OS features like File Exchange and third-party products like Thursby Software's DAVE, which enables Macs to do Windows file sharing. The Mac OS and Unix must offer major advantages to be considered in spite of compatibility issues and have to take a much more open attitude towards compatibility and interoperability.
Because the capability to run other operating systems, particularly Windows, is so valuable, emulators are popular on Macs and Unix machines. Full emulators like Virtual PC provide all the capabilities of a foreign computer system, allowing other operating systems to run within the emulator. In this way, Virtual PC can run Windows, Linux, and other operating systems intended for Intel-based PCs. An alternative is to replicate only the operating system's functionality with a replacement compatibility layer. This approach is popular on PCs, where the processors are the same, so emulating just Windows, instead of a whole PC, provides a workable system. This is also how the Classic environment in Mac OS X works, and how the free Mac-on-Linux project runs Mac OS 8.6 and later under Linux on PowerPC-based computers.
Unix is often seen as the operating system for serious computer experts. At the other end of the continuum, Macs are "computers for the rest of us". Together, Unix and the Mac OS bracket Microsoft's huge lump in the bell curve of platform usage. Macs and Unix often differentiate themselves from Windows on the same issues, but take opposite tacks in doing so. Examples of such divergence include the Mac's ease of use, tight hardware-software integration, and - until now - unified control over hardware and operating system development; in contrast, Unix supporters tout advantages such as flexibility, control, broad hardware support, and reliance on open source projects.
Real World Differences -- Despite these similarities, we shouldn't lose sight of the fact that the Mac OS and Unix are in many ways utterly different. Unix has a long and distinguished history as a collaborative research project and programming environment. Over the years, it has matured into a robust and efficient networking platform, while remaining excellent as a development environment. In obvious contrast, Apple considers Macs to be powerful appliances, or sometimes technological agents, but doesn't expect users to develop software or explore the system. As open source advocates love to point out, Unix development is a worldwide and long-running effort, so Unix is very mature in their terms - stable and fast. On the other hand, Mac OS 9's maturity is visible in its consistency among applications and its well-honed interface. This is part of why the recent QuickTime and Sherlock interfaces (and many of the changes in Mac OS X's interface) cause such dismay among Mac users - they throw years of interface improvement and familiarity out the window, abandoning a long history of deliberate and incremental improvement in favor of novelty and glitz.
Over its long history, Unix has developed an extensive stable of software, especially in the networking, programming, and security arenas. "Productivity" applications, however, are much less common on Unix than on Macs and Windows, where they're staples - used by millions of people each day. A quick glance at the Freshmeat Linux/Unix software release site shows a wealth of programming tools, servers, and hacks, but little in the word processing, publishing, and spreadsheet areas. This makes a lot of sense when you remember that Linux machines can also run Windows, so many Linux users may also be using Microsoft Office under Windows on the same machines they use for Linux, or on secondary machines or client workstations, reserving the Unix machines as servers or programming environments. This is something of a self-fulfilling prophecy - because Unix is so impoverished in business software, Unix users generally require additional systems for such work, and because they have alternatives, there's less demand for these applications on Unix. As a result, Unix remains an excellent server platform, with notably different usage patterns than Mac OS and Windows.
Grand Unixification -- With Mac OS X, Apple has done a fair job of reconciling these two worlds in a brand-new combination, and an excellent job of isolating them enough that users can remain within a single familiar environment if desired. There are rough edges (particularly the three different views of the file structure: Mac OS 9/Classic, the slightly different Mac OS X layout, and the NeXT/Unix structure), but when Mac OS X is running, it's easy to ignore the Unix aspects, and remain in a familiar Mac environment with bigger icons, different buttons, and a much more limited Desktop.
This is apparently Apple's expectation for most users - that they will completely ignore the underlying Darwin layer, while still benefiting from its stability and performance. Alternatively, if you use the included Terminal program to log into the Darwin environment, you encounter a fairly normal Unix installation (except that, again, files are in strange places - a leftover from Mac OS X's NeXT heritage).
In addition to trying to create a unified system, Apple is also trying to move the proprietary work NeXT did on NeXTstep back into the BSD/Unix mainstream. Apple has repeatedly stated that their goal is to use as much generic BSD code as possible, thus saving time and money on the maintenance of proprietary Apple software. As part of this process, Apple has released Darwin under an open source license, which means the program code is available for non-Apple developers to see, critique, and modify. In licensing terms, Mac OS X consists of two parts. The Darwin code is public and free, and the rest (the graphical and Mac-specific parts) is proprietary. This is a reasonable division, as Apple's focus has never been robust core operating system functionality, but rather the user interface. If taking Darwin open source proves successful [and comments from Darwin developers at MacHack 2000 seemed to indicate it already has been -Adam], Apple will garner significant development support from other developers, helping to improve the Darwin foundation for Mac OS X, and freeing more Apple developers to focus on Apple's strengths.
This split between the open source foundation and proprietary upper layers gives Apple what they've been desperately seeking for years: a version of the Mac OS that includes all the buzzwords important for a good, fast, stable operating system. BSD is stable, features preemptive multitasking, and provides excellent virtual memory and crash protection. Apple's hope is that existing Macintosh users will appreciate these features, and that they'll also attract a new class of users: serious network users and server administrators. With Mac OS X, Apple is taking a stride towards making the Mac an excellent server platform - even for serving Windows users. Plus, with high-bandwidth Internet connections becoming available and popular, Apple might just be on the cusp of empowering another leap in self-publishing. Mac OS X now includes Apache, gcc, cron, ssh, mainstream Perl, and a whole slate of Unix-based staples that were simply unavailable for Macs before, or required interface hacks and significant porting effort to run on the Mac. Mac OS X with Apache is already a much better personal server platform than Windows 98 or Microsoft's new Windows Me.
What Does Unix Mean to Me? Historically, Macs have had limited support for the latest Internet protocols and security tools. Although Mac OS 9 has an excellent track record for security, and there are several excellent mail, Web, FTP, and news clients, Macs have been too small a population to garner the same level of support as Windows from many vendors. This results in fewer options for virtual private networks (VPNs), PPP over Ethernet (PPPoE, required for many cable and DSL ISPs), and similar networking tools and utilities. Mac OS X brings Unix-based tools to fill these needs. In many areas, this move should help eliminate the problems of being a niche player that have plagued the Mac OS for years.
The union of the Mac OS with Unix also has interesting sociopolitical implications for Mac users in the larger industry. With the Apple I and Apple II, Apple made computers much more affordable and accessible to individual users. With the original Power Macintoshes, Apple became the only high-volume RISC (Reduced Instruction Set Computing, a design model that enabled PowerPCs to be so much faster than the previous Motorola 68000 series) computer vendor, bringing a major speed improvement to its users. If Mac OS X is even somewhat successful, within a year it will more than double the number of computer systems running BSD-based operating systems, even though Mac OS X users won't see their computers as Unix systems.
It will be interesting to see if and how Apple uses this new leverage into the Unix world, and if Apple takes advantage of the power of Unix directly, or instead restricts its focus to Aqua-based graphical applications. Thanks to hybrid applications, Apple may not have to make the choice. The FizzillaMach Web browser, for instance, uses a Carbon front end with an Aqua interface, but the standard Unix-based Mozilla back end for high-performance threaded networking. In the future, I hope to see Mac developers using the powerful Unix utilities included in Darwin from their Mac applications, perhaps through AppleScript scripts that pass text from Carbon to command-line programs like grep, sed, and wget (which find matches, find and replace text, and get Web pages and sites, respectively), returning results to the Mac applications.
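To make the idea concrete, here is a minimal sketch of that pattern - in Python rather than AppleScript, purely for illustration. The helper function and sample text are invented for the demo; grep itself is a standard utility included with Darwin.

```python
import subprocess

def grep_lines(pattern, text):
    """Hand a block of text to the Unix grep utility and return matching lines."""
    result = subprocess.run(
        ["grep", pattern],      # grep ships with Darwin / Mac OS X
        input=text,
        capture_output=True,
        text=True,
    )
    # grep exits with status 1 when nothing matches; stdout is then empty
    return result.stdout.splitlines()

matches = grep_lines("Unix", "Mac OS 9\nMac OS X is Unix-based\nWindows Me")
print(matches)  # ['Mac OS X is Unix-based']
```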
Apple is bringing us into the Unix world, like it or not. It is important to remember that Mac OS X's Darwin foundation offers major advantages for Mac users in two very different areas. First, it provides much better reliability and power than Mac OS 9, almost invisibly. Even users who completely ignore Darwin will silently benefit from its robustness and performance. Second, Darwin provides access to the tools and operating system facilities that make Unix so powerful, like shell scripting and networking tools.
Each user of Mac OS X will have to make their own decisions on whether and how much to venture beyond familiar Macintosh territory into the domain of Unix, but the capability will always be there. For me, at least, it's been the beginning of an exciting journey.
[Chris Pepper is a Linux and Solaris system administrator in New York, and he's just delighted that his Mac workstations are now running Unix like the servers he coddles for a living. If you want Chris to coddle your servers, check out his resume and contact him directly. His Mac OS X Software and Information site has links to useful information and a few Unix ports for Apple's new operating systems.]
May 13, 2007
Jogging Over a Distance
Jogging with a partner beats jogging alone, writes ABC's News in Science, but it can be difficult to find someone nearby who runs at roughly the same pace.
"Now prototype technology developped at the University of Melbourne, makes it possible to jog with the perfect partner, even if the person lives in another city.
The Jogging over a Distance system uses mobile phone and GPS technology as well as a custom computer program to transform a phone conversation into a 3D audio experience.
While you run, you can hear your partner's voice coming from the front, to the side, or behind, depending on how fast or slow he or she is running."
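The article doesn't describe the team's actual mapping, but the core idea - turning a pace difference into an apparent voice direction - can be sketched in a few lines. Everything below (the function, the 2 km/h scaling, the angle convention) is a hypothetical illustration, not the Jogging over a Distance algorithm.

```python
def partner_azimuth(my_speed, partner_speed, full_swing=2.0):
    """Map a speed difference (km/h) to a voice direction in degrees.

    0 = straight ahead, 90 = alongside, 180 = directly behind.
    A partner running faster than you sounds ahead; slower drifts behind.
    """
    diff = partner_speed - my_speed
    ratio = max(-1.0, min(1.0, diff / full_swing))  # clamp to [-1, 1]
    return 90.0 - ratio * 90.0

print(partner_azimuth(10.0, 12.0))  # 0.0   -> partner sounds ahead
print(partner_azimuth(10.0, 10.0))  # 90.0  -> alongside
print(partner_azimuth(10.0, 8.5))   # 157.5 -> falling behind
```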
A Probability of Precipitation or POP is a formal measure of the likelihood of precipitation that is often published from weather forecasting models.
In US weather forecasting, POP is the probability that more than 1/100th of an inch of precipitation will fall in a single spot, averaged over the forecast area.
For instance, if there is a 100% probability of rain covering one side of a city, and a 0% probability of rain on the other side of the city, the POP would be 50%.
A 50% chance of a rainstorm covering the entire city would also lead to a POP of 50%.
The POP measurement is meaningless unless it is associated with a period of time. US forecasts commonly use a POP defined over a 12-hour period (POP12), though 6-hour periods (POP6) and other measures are also published.
Mathematically, Probability of Precipitation is defined as: PoP = C × A, where:
C = the confidence that precipitation will occur somewhere in the forecast area.
A = the percent of the area that will receive measurable precipitation, if it occurs at all.
For example, a forecaster may be 40% confident that precipitation will occur under the current weather conditions, and that should rain occur, it will cover 80% of the area. This results in a PoP of 32% ((0.4 × 0.8) × 100 = 32%).
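Since the formula is just a product, a short Python sketch (the function name is ours, for illustration) reproduces both the worked example and the split-city case above:

```python
def probability_of_precipitation(confidence, area_fraction):
    """PoP = C x A, returned as a percentage.

    confidence:    chance precipitation occurs somewhere in the area (0-1)
    area_fraction: share of the area receiving rain, if it occurs (0-1)
    """
    return confidence * area_fraction * 100

print(probability_of_precipitation(0.4, 0.8))  # 32.0 -- the forecaster example
print(probability_of_precipitation(1.0, 0.5))  # 50.0 -- certain rain over half the city
```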
Guess what. Men and women are different. Socially, the differences offer an attractive mystique. But in professional situations and in the workplace, the significant differences in male and female communication styles can cause problems. There is no denying that women and men vary significantly in their verbal inflections and tone, their body language, and how they listen to others. They pick up on different cues in conversations, and often the meaning they interpret is not the message the speaker intends. It's almost as if the two genders speak different dialects. And, in fact, that's nearly the case. Communication confusion and breakdown between men and women at work can lead to inefficiency and expensive business errors. This course explains the differences in the language and communication behavior of men and women so each can more easily understand what the other is really saying. The course also provides tips on how to modify your own communication behavior to be more clearly understood by the opposite gender.
Anyone who works or interacts with the opposite gender in a professional setting
MOSCOW – Russia is to revise its space program, the national space agency said on Tuesday after a newspaper published a report that billions of dollars of cuts may be on the way, including to ambitious moon exploration plans.
Several Russian government ministries were engaged in revising the space program up to 2025, Roscosmos said in a written statement.
It did not give details. But the Roscosmos statement and a report in Izvestia newspaper suggested Russia’s prestigious space program may also have fallen victim to government cutbacks brought on by hard times.
Buffeted by low oil prices, Western sanctions and a falling ruble, the Russian government is in the process of scaling back its spending plans for everything from the health sector to welfare.
The authoritative Izvestia newspaper published details of what it said was a draft proposal sent by Roscosmos to the government. The draft showed big spending cuts were being proposed to the moon exploration program.
Russian Deputy Prime Minister Dmitry Rogozin announced in April last year that Moscow planned to build a big base on the moon, which he said would serve as a platform for scientific breakthroughs.
Izvestia reported Roscosmos was proposing to cut the manned-flight segment of lunar exploration by 88.5 billion rubles ($1.22 billion) to 329.67 billion rubles, but said funding to build a spaceship to fly to the moon would not suffer seriously.
Roscosmos, in its statement, declined to comment on those figures, saying the revised program was still extensive.
“The revised project of the federal space program for 2016-25 envisages the study of the moon by automated orbiters, as well as by building up scientific and technical potential for further studies, including by manned missions,” it said.
It declined to say whether Russia’s plans for a moon base are still alive, but said the first manned flight around the moon would not take place before 2029.
Russia was instrumental in building the International Space Station and remains actively involved with the ISS, most recently launching cosmonaut Yuri Malenchenko to the station along with two astronauts on Dec. 15.
President Vladimir Putin has spoken many times of rekindling Soviet-era space glory. The USSR launched the first artificial Sputnik satellite in 1957, sent the first man into space in 1961 and conducted the world’s first spacewalk in 1965.
But its Cold War rival, the United States, made six manned landings on the moon between 1969 and 1972. In the same era the Soviet-built N-1 heavy rocket, which was designed to take cosmonauts to the moon, failed to make a single successful flight.
High definition fiber tracking, or HDFT, provides colorful, detailed images of the brain’s fiber network that accurately reflect brain anatomy observed in surgical and laboratory studies, according to a new report from the University of Pittsburgh School of Medicine in the August issue of Neurosurgery. The findings support the notion that HDFT scans can provide valuable insight into patient symptoms and the prospect for recovery from brain injuries, and can help surgeons plan their approaches to remove tumors and abnormal blood vessels in the brain.
In deep brain surgery, the neurosurgeon may need to cut or push brain fiber tracts, meaning the neuronal cables connecting the critical brain areas, in order to get to a mass, said Juan Fernandez-Miranda, M.D., assistant professor, Department of Neurological Surgery, Pitt School of Medicine. Depending on the location of the tumor and the surgical path the surgeon takes to get to it, fiber tracts that control abilities such as language, memory and motor function could be injured.
“Standard scans such as MRI or CT can show us where a mass lies in the brain, but they cannot tell us whether a lesion is compressing or pushing aside brain fibers, or if it has already destroyed them,” Dr. Fernandez-Miranda said. “While the symptoms the patient is experiencing might give us some hints, we cannot be certain prior to surgery whether removing the mass will disrupt important brain pathways either near it or along our surgical route through brain tissue to get to it.
“Our study shows that HDFT is an imaging tool that can show us these fiber tracts so that we can make informed choices when we plan surgery,” he said.
A sophisticated MR scanner is used to obtain data for HDFT images, which are based on the diffusion of water through brain cells that transmit nerve impulses. Like a cable of wires, each tract is composed of many fibers and contains millions of neuronal connections. Other MR-based fiber tracking techniques, such as diffusion tensor imaging, cannot accurately follow a set of fibers when they cross another set, nor can they reveal the endpoints of the tract on the surface of the brain, said co-author Walter Schneider, Ph.D., professor, Learning and Research Development Center (LRDC), Department of Psychology, University of Pittsburgh, who led the team that developed HDFT.
For the new study, Dr. Fernandez-Miranda and his colleagues obtained HDFT scans of 36 patients with brain lesions, including cancers, and six neurologically healthy individuals. They also dissected the fiber tracts, such as the language and motor pathways, of 20 normal post-mortem human brains.
They found that HDFT correctly replicated important anatomical features, including the peaks and valleys of brain tissue; a region called the centrum semiovale where multiple fiber tracts cross; the sharp curvature of the optic radiations that carry information to the visual cortex; and the endpoints on the brain’s surface of the branches of the arcuate fasciculus, which is involved in language processing.
For the second part of the study, the team conducted HDFT scans in 36 patients prior to surgery, along with the imaging studies that are typically done as part of the pre-operative planning process. They then compared fiber involvement predicted by HDFT with what they found during surgery.
“The scans accurately distinguished between displacement and destruction of fibers by the mass,” said study co-author Robert Friedlander, M.D., professor and UPMC chair in Pitt’s Department of Neurological Surgery. “Post-operative HDFT scans also revealed where surgical incisions had been made, further validating the technique’s imaging power.”
He added it is not yet known how much fiber loss must occur to appear as a disruption or to cause symptoms, or what constitutes irreversible brain damage.
“Although there is more work we must do to optimally develop the technique, HDFT has great potential as a tool for neurosurgeons, neurologists and rehabilitation experts,” Dr. Friedlander said. “It is a practical way of doing computer-based dissection of the brains of our patients that can help us decide what the least invasive route to a mass will be, and what the consequences might be of being aggressive or conservative in the removal of a lesion.”
The team is continuing to assess HDFT’s potential in a variety of studies. Co-authors include Sudhir Pathak, M.S., M.Sc., Kevin Jarbo, B.S., and Timothy Verstynen, Ph.D., LRDC; Jonathan Engh, M.D., and Arlan Mintz, M.D., Department of Neurological Surgery, Pitt School of Medicine; Fernando Boada, M.D., Ph.D., Magnetic Resonance Research Center, Department of Radiology, Pitt School of Medicine; and Frank Yeh, M.D., Ph.D., Department of Biomedical Engineering, Carnegie Mellon University.
The project was funded by The Copeland Fund of The Pittsburgh Foundation and the Defense Advanced Research Projects Agency. For more information, go to www.hdft.info.
A video of Dr. Fernandez-Miranda discussing high definiton fiber tracking is available at http://www.youtube.com/watch?v=vrGLXWKdrTM.
The research paper explaining more about high definition fiber tracking is available at http://journals.lww.com/neurosurgery/Fulltext/2012/08000/High_Definition_Fiber_Tractography_of_the_Human.36.aspx.
A monopoly is a market structure in which there is a single supplier of a good or service. A firm that is the single supplier of a good or service for which there are no close substitutes is also called a monopoly or a monopolist. For example, if only one company produced and sold computers, that company would have a monopoly on computers. When only one firm provides a good or service, there is no competition to force the firm to produce the highest quality good or service or to sell it at the lowest possible cost. When a true monopoly exists, consumers cannot even buy substitute goods or services because nobody sells them. Certainly, businesses have the potential to profit greatly if they can become monopolies. However, many economists do not believe that monopolies are good for the economy, and the U.S. government has outlawed monopolies. In this lesson, you will play the role of a newspaper editorialist. You will consider whether or not governments should allow monopolies to exist. You will also think about how the Internet has influenced the ability of firms to set themselves up as monopolies.
The publisher of your local newspaper has invited you to serve as the newspaper editor. As a newspaper editorialist, you are expected to present your opinions on a variety of different issues. For the next issue of your paper, you have decided to present your opinions on monopolies. Should monopolies be illegal in the U.S. economy?
Your teacher wants to buy a pen from a student in your class. You want your teacher to buy the pen from you. What type of a pen will you try to sell your teacher? How much will you charge your teacher for this pen? How will the fact that other students in your class also want your teacher to buy their pen influence the price that you charge and the quality of the pen that you make available?
Now, imagine that only one student in the class has a pen to sell to the teacher. How do you think this situation would affect the price that the student could charge? Why? How would this situation affect the quality of pen that the student tried to sell? Why?
A firm, or company, has a monopoly when it is the only firm to sell the product or service. Therefore a student who controls all the pens in a class has a monopoly in the classroom economy. Do you think that it is fair for monopolies to exist in economies? Why/why not?
One way to think about values and ideas that are important in the United States is to consider the laws adopted by the U.S. government. For example, freedom of speech is an important value in the United States. Therefore there are numerous laws, in addition to the Bill of Rights, that protect an individual's freedom of speech. In order to learn more about the view that the U.S. government takes of monopolies, complete the following worksheet: Standard Oil Company and the Sherman Anti-Trust Act.
Though the U.S. government has prohibited the establishment of monopolies, many firms would love to be monopolies. Why do you think that business firms would want to be monopolies? What is the objective of business firms? How would being a monopoly help business firms achieve this objective? In what ways might firms try to become monopolies?
- Pretend that you are an editorialist, and write an editorial considering whether or not you believe that monopolies should be illegal.
Once you have completed your editorial, please trade editorials with two classmates. Write responses to these editorials using the attached form.
Remember that the U.S. government first outlawed monopolies in the 1890s. Imagine how confused people from the 1890s would be if they came back to life today. They would not even recognize washing machines or dishwashers, let alone the Internet, blogs, and podcasts. How do you think that technology has influenced the difficulties that firms would encounter if they tried to establish themselves as monopolies today? Has technology made it harder or easier to develop a monopoly?
In this lesson, you have considered the role that monopolies play in economies. You have learned that monopolies are illegal in the United States. You have considered why firms would want to be monopolies. You have also stated and explained your views about monopolies in an editorial, and you have commented on several classmates' editorials. Finally, you have considered how technology today can influence the ability of firms to establish themselves as monopolies.
In groups of two or three, you should develop radio interviews considering the nature of monopolies today. Your radio interviews should respond to the following questions:
- What is a monopoly?
- How has today's technological infrastructure influenced the establishment of monopolies?
- Do you think that there has to be a law today outlawing monopolies? Why or why not?
Your task here is to develop a strategy for creating a monopoly. And, for the monopoly you plan, you must also think about the effect it would be likely to have on the goods or services made available to consumers. To consider this question in more detail, please complete the worksheet entitled Creating a Monopoly.
a column by Stuart Broomer
The appeal of unusual pitches and sounds has been evident from the effect of Balinese music on Debussy when he first heard it in 1888, and the history of early 20th century popular music is rich in examples. Consider Paul Whiteman’s “Japanese Sandman” or the immense popularity of the British light classical composer Albert Ketèlbey, who could construct engaging melodies from divergent scales to evoke exotic settings. His “In a Persian Market” was eventually performed by Wilbur DeParis, The Ventures and Sammy Davis Jr. and there’s a YouTube video of “In a Chinese Temple Garden” being played by an orchestra of Asians.
Ultimately, though, this touches on the ways we organize our worlds. The intersections of tonal systems and their frictions are at the roots of jazz, making it, in a sense, an early “world” music. You hear it in the scalar biases of some early blues performers (Blind Willie Johnson comes to mind) and in the discrepancies between pitch-inflected “blue notes” and the tempered scale of the piano. It’s been especially significant in the past half-century since the expanding harmonic complexities of bebop ran into Miles Davis’s modal approach, and then in the proliferation of noise elements and microtones in free jazz and its minglings with chance, musique concrète, and various musical cultures. Between the 1930s and the 1960s, saxophonists largely avoided the soprano because it was so hard to play “in tune”; it was likely adopted by John Coltrane, La Monte Young and Terry Riley because it was so easy to bend notes, as well as for its timbral resemblance to a shenai.
British pianist Veryan Weston’s Tessellations is one of the more riveting examinations of pitch systems in an improvisational context, in some ways harkening back to the period when John Coltrane started exploring Nicolas Slonimsky’s Thesaurus of Scales and Melodic Patterns and then exploring modes and triadic harmony in the context of each other. In some ways, Tessellations has a significance akin to Terry Riley’s In C, a work that changes the way we feel tonal relationships.
“Tessellation” is a term in geometry referring to a pattern of figures that fills a plane with no overlaps or gaps. It comes from the Latin “tessella” for one of the small cubes used to make mosaics. This patterned tiling appears in the drawing of M.C. Escher as well as in a honeycomb. It appears three-dimensionally in the Alhambra Palace and in Buckminster Fuller’s geodesic dome.
In Weston’s transformation of the geometric idea, Tessellations is a series of 52 pentatonic scales, with each successive scale altering one note from the preceding scale. It’s a kind of musical tiling that results in a sense of continuous structural transformation of tonal materials within an improvising structure, stretching to an hour in length. Part of what is remarkable about the structure of Tessellations is that it mediates two almost diametrically opposed musical languages. Pentatonic scales are at the roots of music. Often based on the “perfect” intervals of the fourth and fifth, they tend to divide the octave into two fourths, sometimes symmetrical. Pentatonics are at the pre-harmonic core of tonal organization, giving the distinctive character to certain world musics, making music like the Balinese gamelan instantly recognizable. Intervals in pentatonic scales are rarely justified. Microtones abound and they are at fundamental odds with the harmonic system of modulation from key to key at the 12-tone core of European art music (the system constructed on justified pitch—the thing that makes an A# and a Bb on a piano the same thing).
The materials for Tessellations developed over a long period of time in Weston’s music, but he first began recording it around 2000. In 2003 he recorded Tessellations for Luthéal Piano at the Musical Instruments Museum in Brussels. Released on Emanem, the CD documents the remarkable character of the instrument as well as the piece, Weston exploring the Luthéal piano’s organ-like stops for harpsichord and harp-like sounds.
Weston has continued to explore the materials and this year Emanem has released a remarkable continuation of the process, Different Tessellations. It contains part of a performance of the piece—now Tessellations I— by the young pianist Leo Svirsky, offering a different perspective on the piano piece; it also includes a new transformation of the piece—Tessellations II—performed by a group of nine singers, including Weston, called the Vociferous Choir. The new incarnation resonates not just with the previous piece but also with Weston’s long interest in vocal music (notably with Phil Minton whether in duo, quartet or the band Four Walls) and his interest in non-standard pitches (previously explored in depth in his recordings with Jon Rose on Temperament [Emanem]). On the one hand, Tessellations II expands the scat vocal group to a staggering nine voices, but it also expands the tonal range of Tessellations, taking in the microtonal practices of Africa, South Indian and Siberian singing as well as beat-boxing. The vocal piece is consistently wordless, emphasizing the expansive, pan-cultural nature of its content. Whether it’s Svirsky’s reinterpretation of the piano piece or the choir version, Tessellations is a piece and an improvisatory practice that can reach across generations and across cultures.
SB: When did you first begin to explore the relationships between pentatonic scales that have gone into the construction of Tessellations?
VW: This happened I suppose in my early twenties, when I was living in Brixton Hill in London [in the early 1970s]. I decided to develop a methodology for practicing. This practicing might have a slightly hand-in-glove relationship with improvisation which was my main interest … and still is. After all, how can you practice improvising? What I thought was to write down daily interests and discoveries in a practice book. This would then perhaps create a kind of clearer focus in time. Any actual musical content practiced, though, would be absorbed but not necessarily regurgitated note-for-note in an improvising situation….so the work done practicing might then only have a sub-conscious effect in intuitive… improvisational…playing situations.
At the time most of the emphasis was to find ways of applying linear ideas for the left hand, (that is, moving away from the “vamping role” of the left hand) and so I was interested in finding 12-note sequences in order to maximize melodic and tonal possibilities when improvising. But it was on one of the practice pages that I wrote “If only I could find some symmetry with pentatonic scales and twelve tone scales” during the winter of 1972.
SB: What whetted your interest and kept you working at the systematization so you could create the modulation patterns?
VW: So the first main discovery was with the common anhemitonic pentatonic scale (sorry about that mouthful … the pentatonic scale without semitones) in that there was a way of modulating from one scale to another in I suppose exactly the same way as you do with the classic diatonic cycle-of-fifths key system (that is, by either sharpening or flattening one note when moving a similar scale type up or down a fifth). So at this point I take it that your readers will mainly be musicians … as this might seem pretty boring and be seen to be “talking shop” to a non-musical music enthusiast! … so, end of theory.
Going back to your question here about “what whetted” my interest, it was not only the theoretical possibilities of exploration as just mentioned, but also the time of my youth happened to be when one of your fellow-countrymen described our society as fast becoming a “global village.” With the explosion of the mass-media in this global village came opportunities to listen and re-listen to the same music from all round the world, and vinyl records provided me, as well as many others from our generation, with this opportunity. For example, I remember hearing the Laotian Khène on a UNESCO vinyl record and feeling inspired to try and find a way of almost emulating this sound on the piano…and of course the scales used were often pentatonic.
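Weston's cycle-of-fifths modulation is easy to verify computationally. The short sketch below is my illustration, covering only the common anhemitonic pentatonic he mentions, not the 52-scale sequence of Tessellations itself: it represents scales as pitch-class sets and confirms that each transposition up a fifth alters exactly one note.

```python
# Pitch classes 0-11, with C = 0; the common anhemitonic pentatonic on C.
C_PENTATONIC = frozenset({0, 2, 4, 7, 9})  # C D E G A

def up_a_fifth(scale):
    """Transpose a pitch-class set up a perfect fifth (7 semitones)."""
    return frozenset((p + 7) % 12 for p in scale)

scale = C_PENTATONIC
for step in range(12):
    nxt = up_a_fifth(scale)
    altered = scale - nxt            # the single pitch class that changes
    assert len(altered) == 1         # one altered note per modulation
    scale = nxt
assert scale == C_PENTATONIC         # twelve fifths return to the start
print("each modulation alters exactly one pitch class")
```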
SB: You’ve mentioned the influence of your time in Trevor Watts’ Moiré Music contributing to Tessellations. What was it you heard there?
VW: Well, Trevor drew my attention to rhythm, and how poor mine was, but what potential it might have as a dimension in creative music making. I think it’s fair to say that most of the post-Webern trends in European contemporary music (and this includes the aesthetic found by and which inspired contemporary improvisers in the ‘70s) was a development of texture and color and the actual fragmentation of rhythmic pulse. I realize this is a gross over-simplification, but being a pianist with an instrument with limited color opportunities (unless delving into the world of preparation and the instrument’s insides), I felt that Trevor’s Moiré Music provided me with opportunities to improve right and left hand independence as well as explore and expand a feel for rhythm.
SB: I think it’s remarkable how Tessellations actually combines the most divergent musical systems: 12 tone-rows, the end of the western system, and modes, the ancient organization of exact pitches. It creates a kind of abstraction of world musical tonality – empathies with Africa, Asia, earlier European music, etcetera.
VW: Yes, it does reflect many of these cultures … and often quite literally. I have the deepest admiration for musicians like Phil Minton, Roger Turner and John Butcher who have ploughed their own very personal furrows. They almost seem to consciously turn their backs on the kinds of musical activity I am still involved with which could be seen to be dangerously eclectic.
SB: Can you tell me how there are two ways to use each pentatonic scale in the series? I can understand your written explanation but I’m not sure I can convey it.
VW: This is difficult, Stuart … mmmmm … ok, let’s go from the start of Tessellations I and just look at the first two ideas.
The first idea is playing at the top of the piano two different kinds of interval which are inversions of each other – perfect 7th and perfect 9th. I just explore sequences of each of these options that can be found in the five pitches of the first pentatonic scale. I also explore the actual sound of each with the damper pedal either off or on.
The second idea is to depress silently the same first chord in any inversion in the middle-to-lower region of the piano. I then strike one or two pitches from the same chord with the other hand which create ringing harmonics that always seems to give infinite possibilities of sound … and let me contradict myself by saying “and color!”
SB: Would you comment on Leo Svirsky’s approach to playing Tessellations? How did he come to play this?
VW: Leo is very young – only 22 – but is already a rapacious reader of radical and outlaw culture. After leaving Maryland Academy of Music in USA, he decided to enroll at Den Haag Conservatoire, which has a comfortable proximity to Amsterdam. My Brazilian friend Yedo Gibson who also studies at this Academy recommended him to me. Leo is an amazingly gifted pianist technically, but also, in spite of his classical chops, naturally so. For example when he learnt the Tessellations I piece with me over a few days, he applied very different solutions to some of the technical demands of the piece, and in doing this, was able to open very different possibilities for improvising within the piece….so he is now a co-composer when he plays this piece….that’s for sure.
SB: How did the Vociferous Choir come about?
VW: The choir came about through an invitation from one of the singers in the choir, Annette Giesriegl, who had been a friend who I had played some gigs with in London. She asked me to come to Graz to be part of the gamsbART– JAZZ 2010 festival which had the theme “Singers & Voices.” She invited me to compose/construct a piece for 9 singers (including her and myself). She teaches voice and performance at the main Jazz Academy in Graz and knew these singers well, some as students and some as friends. Her choice reflected some of the broad cultural origins of some of these singers, for example Anush Apoyan from Armenia, the yodeling and pretty eclectic urban Styrians from around Graz itself, Sofija Knezevic from Serbia and Siruan Küng whose father is Kurdish. It is possible to hear traces of their origins when they improvise in duets or as soloists.
SB: How does the piece change from a piano piece to a choir piece?
VW: The piano piece, Tessellations I, was constructed in a similar way to a piece I called Open Score constructed in 1980, and the name helps to describe the form. It consists of notated ideas for improvising. So the notation just gives a base to work from. With the Tessellations I piece though, each idea uses as the foundation one of the pentatonic scales. There are 52 scales and roughly two ideas for each scale so it can last a long time.
The second Tessellations piece for the choir uses the same sequence of pentatonic scales but each scale provides materials that are mostly loops and cycles, but there are melodies and some interludes that are used to explore the possibilities of pentatonic harmonic progression.
SB: Are there specific points where you can indicate the divergence?
VW: Yes, in this case, each piece is constructed very differently although using the same sequence of scales. As the first is for a solo performer, it is much more “open-ended,” whereas Tessellations II has structures that are specific and directly related to other structures happening at the same time. So a loop has a particular synchronicity with another loop. This was the way I worked in and was inspired by Trevor’s Moiré Music group. Similarly, with the choir, we play unison lines to give the sound more weight, but as a result, the music requires both a lot of learning and a lot of learning to trust the other singers. Working with them in both rehearsal as well as performance was wonderfully inspiring and a lot of fun … I just hope we can do this more … please.
SB: I think it’s a piece that can just keep transforming.
Quotations in an Academic Paper
Why use a quote in your paper? Quotes, used correctly, add drama and authority to your paper. They give the reader a glimpse of the scholarly or disciplinary literature that bears on your topic. Your ability to select and skillfully use quotes makes you a more credible witness to the scholarly conversation on the topic of your paper, even if you are not yet an authority on the topic.
When to use a quote: Paraphrase an author in your own words when you want to convey and summarize a complex point, show a consensus among several sources, or give necessary background before you lead the reader to your conclusions. Use a quote when the author’s words are so concise or appropriate to your argument that you cannot improve on them or because the author’s credentials add special import to his/her statement. Excessive quoting weakens the effect.
How to use a quote: There are two aspects of quoting a source that need attention. First, the function of a quote must be carefully considered. A quote is but one method of achieving your goal for the paper and is not a substitute for the idea development, argumentation, and language you must create as the writer of your paper. The function of a quote is to provide background for your argument, support your argument, or initiate a debate that relates to your argument. Using a quote shows that you have closely read and understand the sources you are citing. Avoid thinking that quotations state the right answers and that you only have to choose between agreeing or disagreeing with them.
The other key aspect to quoting sources is the form quotes should take. Professional style guidelines (e.g. APA, MLA, CMS) will vary slightly in this area, but the following practices are common in all styles.
What to DO with Quotes
Frame your quote: Clearly identify who you are quoting, preferably in the same sentence as the quote. Follow the quote with a sentence or two that reveals the significance of the quote, in a specific context. The example below shows proper framing:
In their article, “Are Multi-hospital Systems More Efficient?” economists Dranove, Durkac, and Shanley write that although “the conventional wisdom is that horizontal mergers will generate efficiencies in the production of services, surprisingly little systematic evidence exists to support this view.”3 The three researchers address this gap in the research with their study of Californian hospital systems in the 1980s and 1990s. They show that horizontal integration improves efficiency in marketing systems more than it improves production services. This finding has consequences for hospitals whose primary concern is to improve production services.
Size your quote: Quotes should not exceed several lines or 40 + words in your text. Longer quotes should be offset in single-spaced, indented paragraphs. They still require framing, as above.
Punctuate your quote: As you add a quote to your paper, check with the style guide you are using for all of the punctuation—quotation marks, commas, periods, dashes, capitalization, italics, brackets, etc.—around and within each quote. Letters and punctuation within the quote should be identical to the original unless you must omit or change them for fluency. The punctuation of quotes in indented, block quotes will differ from that used with quotes in the running text.
What NOT TO DO with Quotes
The following quote has numerous flaws in it. It does not clearly show the source of the quote, the quote itself is too long and is miscopied, and it lacks a frame to integrate it into the student’s paragraph. The punctuation errors and vague pronoun use add to the confusion. The result is a quote that doesn’t advance the student’s argument.
Some professors resent the fact that the students take up their precious time: time that could be better used for research. Page Smith writes about how some of his colleagues go out of their way to avoid their students. They go as far as making strange office hours to avoid contact. “There is no decent, adequate, respectable education, in the proper sense of that much-abused word, without personal involvement by a teacher with the needs of and concerns of his/her students is at odds with the everyday reality confronting university professors in the United States.
Another way to misuse quotations is to fail to indicate direct quotes. This is called a misuse of sources if done in ignorance of quoting conventions, or plagiarism if it is intentional. Skilled readers often know when a writer is using someone else's words without proper citation. The underlined phrases sound like direct quotes from the previously noted source, particularly as they lack the errors of grammar, diction, spelling, and punctuation found in the neighboring phrases.
The types of students that follow sports and they are likely to be in the social circles that the athletes move in identify more closely with their colege and are more competative then the students that do not attend the games (McDonald 128). This competitive nature is not only seen on the sports field, but also in other social circumstances. Also, for many of the athletes their success on the sporting field will also increase their attractiveness for the opposite sex including cheerleaders and non- sporting males and females, which we must remember may put a strain on thier academic studies.
Writing handbooks and websites frequently advise students to “avoid plagiarism” in academic writing. When you know how to properly introduce and attribute quotations you are using quotes to your advantage rather than avoiding a pitfall. Quoting and citing material in your paper adds excitement and authenticity and makes you a more credible source by association with respected authorities. Make your paper original and convincing by using the words of others sparingly, appropriately, and respectfully.
Internet turns 30
The Internet, a revolutionary communications system used daily by billions of people the world over, turned 30 Tuesday.
The computer network officially began functioning when it fully substituted previous networking systems Jan 1, 1983, the Telegraph reported.
On that day, it was the first time the US Department of Defence-commissioned Arpanet network fully switched to use of the Internet protocol suite (IPS) communications system.
This new method of linking computers paved the way for the arrival of the World Wide Web (www).
Based on designs by Welsh scientist Donald Davies, the Arpanet network began as a military project in the late 1960s.
It was developed at many American universities, including the University of California - Los Angeles (UCLA) and the Stanford Research Institute.
In 1973, work on the IPS and Transmission Control Protocol (TCP) technology began. The new systems were designed to replace the more vulnerable Network Control Program (NCP) used previously, and made sure the network was not exposed to a single point of failure.
By Jan 1, 1983, the replacement of the older system with the new Internet protocol had been completed and the Internet was born.
British computer scientist Tim Berners-Lee later used it to host a system of interlinked hypertext documents in 1989, known as the World Wide Web.
Why Is There Child Support and Alimony?
The idea of court ordered spousal support and child support emerged from English Common Law. The government did not want men to financially abandon their wives and children. The Crown felt obligated to help the poor and needy. The English Courts gradually created a legal obligation for husbands. With very few exceptions, men were primarily responsible for supporting their wives and children.
When the courts granted a divorce, this obligation continued. If another person provided support to a man’s wife and children, that person could sue the husband to seek reimbursement for their “necessaries.” If a hospital or doctor provided medical care for a man’s wife or child, the husband was the person financially responsible to pay.
As to children born outside of marriage, the mother was primarily responsible for their support. A man could voluntarily take responsibility and acknowledge his children. Otherwise, the children were legal "bastards" and their mothers were likewise responsible to third parties that provided for them. Of course, this brings into question the entire idea of support. If the state was fearful of public charges, why not make the fathers of these children pay as well? While that argument had force, the Crown believed in the sanctity of marriage. It was reasoned that if the state provided support for these children, women, who had control over their own bodies, might not wait until marriage to engage in intercourse, and the institution of marriage might be destroyed. Thus, the state's rationale was balanced in favor of marriage and against the support of legal bastards.
When the United States was born, the state courts generally adopted English Common Law (except Louisiana that was French). Legal obligations remained pretty much in check until the 1960s, with only a few states tipping the scale in favor of supporting out of wedlock children. In 1968, the United States Supreme Court entered a landmark opinion that gave children born out of wedlock equal treatment with children born during a lawful marriage. Therefore, just as men were responsible for supporting their children born during a lawful marriage, so were they liable to support their children born outside of marriage.
Only after the United States Supreme Court adopted gender equality did the states require mothers to pay support for their children.
Container red and blue
Any help for the below problem will be much appreciated.
Container A contains 250 red marbles and 200 blue marbles.
Container B contains 600 red marbles and 150 blue marbles.
How many red and blue marbles must be moved from Container A to Container B such that 25% of the marbles in Container A are red and 75% of the marbles in Container B are red?
Note: A geometrical approach is recommended.
I'm not sure how to do the problem geometrically, but if you move r red marbles and b blue marbles, there will be:
250-r red marbles and 200-b blue marbles in Container A
600+r red marbles and 150+b blue marbles in Container B
So we need 25% of the marbles in Container A to be red:
250-r = 0.25 * (250-r + 200-b)
And 75% of the marbles in Container B to be red:
600+r = 0.75 * (600+r + 150+b)
From here, you solve the two equations in the two unknowns r and b.
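If you'd rather check the arithmetic than grind through the algebra, here's a quick sympy sketch (both equations multiplied by 4 to clear the decimals):

```python
from sympy import symbols, Eq, solve

r, b = symbols("r b")
# 25% red in A:  250 - r = (1/4)(250 - r + 200 - b), times 4:
eq_a = Eq(4 * (250 - r), 450 - r - b)
# 75% red in B:  600 + r = (3/4)(600 + r + 150 + b), times 4:
eq_b = Eq(4 * (600 + r), 3 * (750 + r + b))

print(solve([eq_a, eq_b], [r, b]))  # {r: 225, b: 125}
```

Moving 225 red and 125 blue marbles leaves Container A with 25 red out of 100 marbles (25%) and Container B with 825 red out of 1,100 (75%), so the solution checks out.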
Also called Didron aîné; archaeologist; together with Viollet-le-Duc and Caumont, one of the principal revivers of Christian art in France; b. 13 March, 1806, at Hautvillers, near Reims, where his father was a collector of taxes, d. at Paris, 13 November, 1867.
After completing his early studies at the preparatory seminaries of Meaux and Reims, he went to Paris in 1826, became there a professor of history, and devoted his leisure hours to following courses of law, medicine, etc. The reading of Victor Hugo's "Notre Dame de Paris" gave him a taste for the study of the antiquities of the Middle Ages. Having been admitted to the circle of the poet in 1829, he there formed the plan of a tour in Normandy, a province noted above all others for its historical buildings. His reading of the legends of the saints, his knowledge of Scripture, and certain abstract notions of theology directed the young amateur to the study of iconography. In 1835 Guizot named him secretary to the committee entrusted with the publication of the unedited documents concerning the history of France. Didron published, entirely unaided, the first four volumes of the reports of the committee. In 1839 the portion concerning the iconography of the monumental monographs of the cathedral of Chartres was reserved for him. This work did not appear in complete form. In 1838 he opened a course of iconography at the Royal Library. He published (under the title of "Manuel d'Iconographie") a French version of the famous "Painters' Book of Mount Athos", discovered there by him, and wrote the "Histoire de Dieu", the first part of a more general work. His greatest work is the review known as "Annales archéologiques", in which are to be found accounts of his travels and numerous studies in iconography. For many years Didron published in the "Univers" letters on archaeology. He also founded a library of archaeological literature, and finally, in 1849, constructed a glass-manufactory, which produced some remarkable pieces of work and continued to exist after his death. He also produced some good examples of work from the goldsmiths' workshop which he had established in 1858, but which was short-lived.
His principal works are: "Bulletin archéologique du comité des arts et monuments" (4 vols., Paris, 1840-1847); "Histoire de Dieu, iconographie des personnes divines" (Paris, 1843); "Manuel d'iconographie chrétienne, grecque et latine" (Paris, 1845); "Annales archéologiques" (Paris, 1844-81). See also "Ann. arch." (1881), XXVIII, 184.
GUILHERMY, Didron in Ann. arch. (1868), XXV, 377-395.
Ecclesiastical approbation. Nihil Obstat. Remy Lafort, Censor. Imprimatur. +John M. Farley, Archbishop of New York.
Caveman Cereal Raises a Question: Do Humans Need Grains?

After analyzing starch residue on dozens of ancient stone tools found in a cave in Mozambique, archaeologist Julio Mercader of the University of Calgary came to a surprising conclusion. The residue was sorghum, a wild cereal grain. Previous archaeological evidence had suggested that grains entered the human diet perhaps 23,000 years ago (and grain storage started more recently, around 11,000 years ago).
But these tools were about 105,000 years old!
A snippet from the press release:
"These residues could have come from wild sorghum and imply that the site's inhabitants were consuming this grain, in contrast to the conventional assumption that seed collecting was not an important activity among the Pleistocene foragers of southern Africa."Looking up more information about this, I came across several blogs and online discussions that ask a question I've never considered: Do we need to eat grains at all?
Interestingly, many of those who argue that we don't need grains (or should only eat them sparingly) are influenced by something called The Paleo Diet, which "encourages dieters to replace dairy and grain products with fresh fruits and vegetables—foods that are more nutritious than whole grains or dairy products."
Here's the premise of that diet:
During the Paleolithic, we evolved a specific genome that has only changed approximately 0.01 per cent in these last 10,000 years. However, during this recent time span mass agriculture, grains/grain products, sugars/sugar products, dairy/dairy products, and a plethora of processed foods have all been introduced as a regular part of the human diet. We are not eating the foods we are genetically and physiologically adapted to eat (99.9% of our genetic profile is still Paleolithic); and the discordance is an underlying cause for much of the "diseases of civilization."

I'll be interested to see if this evidence of early sorghum consumption changes anything for Paleo Diet proponents. The new finding certainly seems to counter the idea that eating grains isn't "natural" because it only started relatively recently.
As usual, I'd like to know what you think...
Summary Report for:
29-2053.00 - Psychiatric Technicians
Care for individuals with mental or emotional conditions or disabilities, following the instructions of physicians or other health practitioners. Monitor patients' physical and emotional well-being and report to medical staff. May participate in rehabilitation and treatment programs, help with personal hygiene, and administer oral or injectable medications.
Sample of reported job titles: Behavioral Health Technician, Health Care Technician, Licensed Psychiatric Technician (LPT), Mental Health Assistant (MHA), Mental Health Associate, Mental Health Specialist, Mental Health Technician (MHT), Mental Health Worker, Psychiatric Technician (PT), Residential Aide (RA)
Tasks | Tools & Technology | Knowledge | Skills | Abilities | Work Activities | Detailed Work Activities | Work Context | Job Zone | Education | Credentials | Interests | Work Styles | Work Values | Related Occupations | Wages & Employment | Job Openings | Additional Information
- Take and record measures of patients' physical condition, using devices such as thermometers or blood pressure gauges.
- Monitor patients' physical and emotional well-being and report unusual behavior or physical ailments to medical staff.
- Provide nursing, psychiatric, or personal care to mentally ill, emotionally disturbed, or mentally retarded patients.
- Observe and influence patients' behavior, communicating and interacting with them and teaching, counseling, or befriending them.
- Collaborate with or assist doctors, psychologists, or rehabilitation therapists in working with mentally ill, emotionally disturbed, or developmentally disabled patients to treat, rehabilitate, and return patients to the community.
- Encourage patients to develop work skills and to participate in social, recreational, or other therapeutic activities that enhance interpersonal skills or develop social relationships.
- Restrain violent, potentially violent, or suicidal patients by verbal or physical means as required.
- Train or instruct new employees on procedures to follow with psychiatric patients.
- Develop or teach strategies to promote client wellness and independence.
- Administer oral medications or hypodermic injections, following physician's prescriptions and hospital procedures.
- Issue medications from dispensary and maintain records in accordance with specified procedures.
- Aid patients in performing tasks, such as bathing or keeping beds, clothing, or living areas clean.
- Lead prescribed individual or group therapy sessions as part of specific therapeutic procedures.
- Interview new patients to complete admission forms, to assess their mental health status, or to obtain their mental health and treatment history.
- Escort patients to medical appointments.
- Contact patients' relatives to arrange family conferences.
Tools & Technology
Tools used in this occupation:
- Blood collection syringes — Blood drawing syringes
- Blood pressure cuff kits — Blood pressure cuffs
- Crutches or crutch accessories — Crutches
- Desktop computers
- Electronic medical thermometers — Electronic patient thermometers
- Emergency or resuscitation carts — Emergency carts
- Enema kits or accessories — Enema equipment
- Glucose monitors or meters — Glucometers
- Gurneys or scissor lifts — Gurneys
- Intermittent positive pressure breathing IPPB machines — Intermittent positive pressure breathing IPPB devices
- Medical acoustic stethoscope or accessory — Mechanical stethoscopes
- Medical staff isolation or surgical masks — Surgical masks
- Medical suction or vacuum appliances — Suction machines
- Medical syringe with needle — Hypodermic syringes
- Nasogastric tubes
- Nebulizer or accessories — Nebulizers
- Notebook computers
- Orthopedic traction hardware or weights — Traction equipment
- Oxygen therapy delivery system products accessories or its supplies — Oxygen administration equipment; Oxygen carts
- Patient care beds or accessories for general use — Hospital beds
- Patient stabilization or fall prevention devices or accessories — Patient restraints
- Personal computers
- Protective gloves — Safety gloves
- Pulse oximeter units — Oximeters
- Resuscitation masks or accessories — Bag valve mask BVM resuscitators
- Spill kits — Hazardous material spill kits
- Tablet computers
- Therapeutic heating or cooling pads or compresses or packs — Cold therapy equipment; Heat therapy equipment
- Tuberculin syringes — Tuberculin TB skin test equipment
- Urinary catheterization kit — Urinary catheters
- Vacuum blood collection tubes or containers — Evacuated blood collection tubes
Technology used in this occupation:
- Inventory management software — InfoLogix HealthTrax Engine
- Medical software — Eclipsys Sunrise Clinical Manager; GE Healthcare Centricity EMR; MEDITECH Behavioral Health Clinicals; Netsmart Technologies Avatar Clinical Workstation CWS
- Office suite software — Microsoft Office software
- Spreadsheet software — Microsoft Excel
Knowledge
- Psychology — Knowledge of human behavior and performance; individual differences in ability, personality, and interests; learning and motivation; psychological research methods; and the assessment and treatment of behavioral and affective disorders.
- Therapy and Counseling — Knowledge of principles, methods, and procedures for diagnosis, treatment, and rehabilitation of physical and mental dysfunctions, and for career counseling and guidance.
- Customer and Personal Service — Knowledge of principles and processes for providing customer and personal services. This includes customer needs assessment, meeting quality standards for services, and evaluation of customer satisfaction.
- Public Safety and Security — Knowledge of relevant equipment, policies, procedures, and strategies to promote effective local, state, or national security operations for the protection of people, data, property, and institutions.
- English Language — Knowledge of the structure and content of the English language including the meaning and spelling of words, rules of composition, and grammar.
- Administration and Management — Knowledge of business and management principles involved in strategic planning, resource allocation, human resources modeling, leadership technique, production methods, and coordination of people and resources.
- Education and Training — Knowledge of principles and methods for curriculum and training design, teaching and instruction for individuals and groups, and the measurement of training effects.
Skills
- Social Perceptiveness — Being aware of others' reactions and understanding why they react as they do.
- Speaking — Talking to others to convey information effectively.
- Active Listening — Giving full attention to what other people are saying, taking time to understand the points being made, asking questions as appropriate, and not interrupting at inappropriate times.
- Monitoring — Monitoring/Assessing performance of yourself, other individuals, or organizations to make improvements or take corrective action.
- Reading Comprehension — Understanding written sentences and paragraphs in work related documents.
- Critical Thinking — Using logic and reasoning to identify the strengths and weaknesses of alternative solutions, conclusions or approaches to problems.
- Coordination — Adjusting actions in relation to others' actions.
- Service Orientation — Actively looking for ways to help people.
- Judgment and Decision Making — Considering the relative costs and benefits of potential actions to choose the most appropriate one.
- Writing — Communicating effectively in writing as appropriate for the needs of the audience.
- Instructing — Teaching others how to do something.
- Learning Strategies — Selecting and using training/instructional methods and procedures appropriate for the situation when learning or teaching new things.
- Persuasion — Persuading others to change their minds or behavior.
- Active Learning — Understanding the implications of new information for both current and future problem-solving and decision-making.
- Complex Problem Solving — Identifying complex problems and reviewing related information to develop and evaluate options and implement solutions.
- Management of Personnel Resources — Motivating, developing, and directing people as they work, identifying the best people for the job.
- Negotiation — Bringing others together and trying to reconcile differences.
- Systems Analysis — Determining how a system should work and how changes in conditions, operations, and the environment will affect outcomes.
- Time Management — Managing one's own time and the time of others.
Abilities
- Oral Comprehension — The ability to listen to and understand information and ideas presented through spoken words and sentences.
- Oral Expression — The ability to communicate information and ideas in speaking so others will understand.
- Problem Sensitivity — The ability to tell when something is wrong or is likely to go wrong. It does not involve solving the problem, only recognizing there is a problem.
- Deductive Reasoning — The ability to apply general rules to specific problems to produce answers that make sense.
- Inductive Reasoning — The ability to combine pieces of information to form general rules or conclusions (includes finding a relationship among seemingly unrelated events).
- Speech Clarity — The ability to speak clearly so others can understand you.
- Written Comprehension — The ability to read and understand information and ideas presented in writing.
- Written Expression — The ability to communicate information and ideas in writing so others will understand.
- Information Ordering — The ability to arrange things or actions in a certain order or pattern according to a specific rule or set of rules (e.g., patterns of numbers, letters, words, pictures, mathematical operations).
- Near Vision — The ability to see details at close range (within a few feet of the observer).
- Speech Recognition — The ability to identify and understand the speech of another person.
- Category Flexibility — The ability to generate or use different sets of rules for combining or grouping things in different ways.
- Far Vision — The ability to see details at a distance.
- Flexibility of Closure — The ability to identify or detect a known pattern (a figure, object, word, or sound) that is hidden in other distracting material.
- Originality — The ability to come up with unusual or clever ideas about a given topic or situation, or to develop creative ways to solve a problem.
- Selective Attention — The ability to concentrate on a task over a period of time without being distracted.
- Static Strength — The ability to exert maximum muscle force to lift, push, pull, or carry objects.
Work Activities
- Assisting and Caring for Others — Providing personal assistance, medical attention, emotional support, or other personal care to others such as coworkers, customers, or patients.
- Communicating with Supervisors, Peers, or Subordinates — Providing information to supervisors, co-workers, and subordinates by telephone, in written form, e-mail, or in person.
- Documenting/Recording Information — Entering, transcribing, recording, storing, or maintaining information in written or electronic/magnetic form.
- Getting Information — Observing, receiving, and otherwise obtaining information from all relevant sources.
- Making Decisions and Solving Problems — Analyzing information and evaluating results to choose the best solution and solve problems.
- Developing and Building Teams — Encouraging and building mutual trust, respect, and cooperation among team members.
- Identifying Objects, Actions, and Events — Identifying information by categorizing, estimating, recognizing differences or similarities, and detecting changes in circumstances or events.
- Evaluating Information to Determine Compliance with Standards — Using relevant information and individual judgment to determine whether events or processes comply with laws, regulations, or standards.
- Monitor Processes, Materials, or Surroundings — Monitoring and reviewing information from materials, events, or the environment, to detect or assess problems.
- Training and Teaching Others — Identifying the educational needs of others, developing formal educational or training programs or classes, and teaching or instructing others.
- Establishing and Maintaining Interpersonal Relationships — Developing constructive and cooperative working relationships with others, and maintaining them over time.
- Interacting With Computers — Using computers and computer systems (including hardware and software) to program, write software, set up functions, enter data, or process information.
- Organizing, Planning, and Prioritizing Work — Developing specific goals and plans to prioritize, organize, and accomplish your work.
- Processing Information — Compiling, coding, categorizing, calculating, tabulating, auditing, or verifying information or data.
- Resolving Conflicts and Negotiating with Others — Handling complaints, settling disputes, and resolving grievances and conflicts, or otherwise negotiating with others.
- Updating and Using Relevant Knowledge — Keeping up-to-date technically and applying new knowledge to your job.
- Interpreting the Meaning of Information for Others — Translating or explaining what information means and how it can be used.
- Judging the Qualities of Things, Services, or People — Assessing the value, importance, or quality of things or people.
- Analyzing Data or Information — Identifying the underlying principles, reasons, or facts of information by breaking down information or data into separate parts.
- Thinking Creatively — Developing, designing, or creating new applications, ideas, relationships, systems, or products, including artistic contributions.
- Communicating with Persons Outside Organization — Communicating with people outside the organization, representing the organization to customers, the public, government, and other external sources. This information can be exchanged in person, in writing, or by telephone or e-mail.
- Scheduling Work and Activities — Scheduling events, programs, and activities, as well as the work of others.
- Coaching and Developing Others — Identifying the developmental needs of others and coaching, mentoring, or otherwise helping others to improve their knowledge or skills.
- Coordinating the Work and Activities of Others — Getting members of a group to work together to accomplish tasks.
- Developing Objectives and Strategies — Establishing long-range objectives and specifying the strategies and actions to achieve them.
- Performing General Physical Activities — Performing physical activities that require considerable use of your arms and legs and moving your whole body, such as climbing, lifting, balancing, walking, stooping, and handling of materials.
- Performing for or Working Directly with the Public — Performing for people or dealing directly with the public. This includes serving customers in restaurants and stores, and receiving clients or guests.
- Handling and Moving Objects — Using hands and arms in handling, installing, positioning, and moving materials, and manipulating things.
- Performing Administrative Activities — Performing day-to-day administrative tasks such as maintaining information files and processing paperwork.
- Guiding, Directing, and Motivating Subordinates — Providing guidance and direction to subordinates, including setting performance standards and monitoring performance.
- Selling or Influencing Others — Convincing others to buy merchandise/goods or to otherwise change their minds or actions.
Detailed Work Activities
- Collect medical information from patients, family members, or other medical professionals.
- Record patient medical histories.
- Examine patients to assess general physical condition.
- Operate diagnostic or therapeutic medical instruments or equipment.
- Treat patients using psychological therapies.
- Administer non-intravenous medications.
- Inform medical professionals regarding patient conditions and care.
- Perform clerical work in medical settings.
- Maintain inventory of medical supplies or equipment.
- Position patients for treatment or examination.
- Maintain medical facility records.
- Assist healthcare practitioners during examinations or treatments.
- Administer intravenous medications.
- Collaborate with healthcare professionals to plan or provide treatment.
- Train medical providers.
- Interact with patients to build rapport or provide emotional support.
- Move patients to or from treatment areas.
- Care for patients with mental illnesses.
- Assist patients with hygiene or daily living activities.
- Encourage patients or clients to develop life skills.
- Teach health management classes.
Work Context
- Work With Work Group or Team — 82% responded “Extremely important.”
- Exposed to Disease or Infections — 79% responded “Every day.”
- Contact With Others — 91% responded “Constant contact with others.”
- Face-to-Face Discussions — 82% responded “Every day.”
- Deal With Unpleasant or Angry People — 70% responded “Every day.”
- Responsible for Others' Health and Safety — 61% responded “Very high responsibility.”
- Physical Proximity
- Importance of Being Exact or Accurate — 47% responded “Extremely important.”
- Frequency of Conflict Situations — 61% responded “Every day.”
- Indoors, Environmentally Controlled — 82% responded “Every day.”
- Time Pressure — 59% responded “Every day.”
- Telephone — 28% responded “Once a week or more but not every day.”
- Deal With External Customers — 45% responded “Extremely important.”
- Deal With Physically Aggressive People — 45% responded “Every day.”
- Freedom to Make Decisions — 39% responded “A lot of freedom.”
- Coordinate or Lead Others — 35% responded “Extremely important.”
- Electronic Mail — 61% responded “Every day.”
- Importance of Repeating Same Tasks — 41% responded “Very important.”
- Structured versus Unstructured Work — 32% responded “A lot of freedom.”
- Impact of Decisions on Co-workers or Company Results — 19% responded “Moderate results.”
- Spend Time Walking and Running — 11% responded “More than half the time.”
- Frequency of Decision Making — 61% responded “Every day.”
- Sounds, Noise Levels Are Distracting or Uncomfortable — 34% responded “Every day.”
- Consequence of Error — 38% responded “Serious.”
- Spend Time Standing — 39% responded “More than half the time.”
- Responsibility for Outcomes and Results — 33% responded “Very high responsibility.”
- Duration of Typical Work Week — 52% responded “40 hours.”
- Level of Competition — 48% responded “Moderately competitive.”
Job Zone
- Title: Job Zone Three: Medium Preparation Needed
- Education: Most occupations in this zone require training in vocational schools, related on-the-job experience, or an associate's degree.
- Related Experience: Previous work-related skill, knowledge, or experience is required for these occupations. For example, an electrician must have completed three or four years of apprenticeship or several years of vocational training, and often must have passed a licensing exam, in order to perform the job.
- Job Training: Employees in these occupations usually need one or two years of training involving both on-the-job experience and informal training with experienced workers. A recognized apprenticeship program may be associated with these occupations.
- Job Zone Examples: These occupations usually involve using communication and organizational skills to coordinate, supervise, manage, or train others to accomplish goals. Examples include food service managers, electricians, agricultural technicians, legal secretaries, occupational therapy assistants, and medical assistants.
- SVP Range: 6.0 to < 7.0
Education
Percentage of respondents reporting each required education level:
- High school diploma or equivalent: not available
- Master's degree: not available
- Post-baccalaureate certificate: not available
Interests
Interest code: SER
- Social — Social occupations frequently involve working with, communicating with, and teaching people. These occupations often involve helping or providing service to others.
- Enterprising — Enterprising occupations frequently involve starting up and carrying out projects. These occupations can involve leading people and making many decisions. Sometimes they require risk taking and often deal with business.
- Realistic — Realistic occupations frequently involve work activities that include practical, hands-on problems and solutions. They often deal with plants, animals, and real-world materials like wood, tools, and machinery. Many of the occupations require working outside, and do not involve a lot of paperwork or working closely with others.
Work Styles
- Concern for Others — Job requires being sensitive to others' needs and feelings and being understanding and helpful on the job.
- Stress Tolerance — Job requires accepting criticism and dealing calmly and effectively with high stress situations.
- Adaptability/Flexibility — Job requires being open to change (positive or negative) and to considerable variety in the workplace.
- Dependability — Job requires being reliable, responsible, and dependable, and fulfilling obligations.
- Integrity — Job requires being honest and ethical.
- Self Control — Job requires maintaining composure, keeping emotions in check, controlling anger, and avoiding aggressive behavior, even in very difficult situations.
- Attention to Detail — Job requires being careful about detail and thorough in completing work tasks.
- Independence — Job requires developing one's own ways of doing things, guiding oneself with little or no supervision, and depending on oneself to get things done.
- Cooperation — Job requires being pleasant with others on the job and displaying a good-natured, cooperative attitude.
- Persistence — Job requires persistence in the face of obstacles.
- Social Orientation — Job requires preferring to work with others rather than alone, and being personally connected with others on the job.
- Achievement/Effort — Job requires establishing and maintaining personally challenging achievement goals and exerting effort toward mastering tasks.
- Analytical Thinking — Job requires analyzing information and using logic to address work-related issues and problems.
- Initiative — Job requires a willingness to take on responsibilities and challenges.
- Leadership — Job requires a willingness to lead, take charge, and offer opinions and direction.
- Innovation — Job requires creativity and alternative thinking to develop new ideas for and answers to work-related problems.
Work Values
- Relationships — Occupations that satisfy this work value allow employees to provide service to others and work with co-workers in a friendly non-competitive environment. Corresponding needs are Co-workers, Moral Values and Social Service.
- Support — Occupations that satisfy this work value offer supportive management that stands behind employees. Corresponding needs are Company Policies, Supervision: Human Relations and Supervision: Technical.
- Independence — Occupations that satisfy this work value allow employees to work on their own and make decisions. Corresponding needs are Creativity, Responsibility and Autonomy.
Wages & Employment Trends
- Median wages (2015): $14.97 hourly, $31,140 annual
- Employment (2014): 68,000 employees
- Projected growth (2014-2024): Average (5% to 8%)
- Projected job openings (2014-2024): 10,200
- Top industries (2014): not available

Source: Bureau of Labor Statistics 2015 wage data and 2014-2024 employment projections. "Projected growth" represents the estimated change in total employment over the projections period (2014-2024). "Projected job openings" represent openings due to growth and replacement.
Sources of Additional Information
Disclaimer: Sources are listed to provide additional information on related jobs, specialties, and/or industries. Links to non-DOL Internet sites are provided for your convenience and do not constitute an endorsement.
- Psychiatric technicians and aides. Bureau of Labor Statistics, U.S. Department of Labor. Occupational Outlook Handbook, 2016-17 Edition.
Posted by smith on Tuesday, July 2, 2013 at 7:29pm.
1. Which of the following will result in a chemical change?
A. Melting ice to obtain water
B. Evaporating alcohol into vapor
C. Drying wood in a shed
D. Burning coal in a furnace
2. A salt is obtained as a reaction between
A. a nonmetal and a metal.
B. a base and water.
C. an acid and oxygen.
D. a base and an acid.
3. The chemical symbol for sodium is
A. S
B. Sn
C. Na
D. N
4. According to Table 1 and the other information in your study unit, which of the following
elements is stable?
A. Neon
B. Carbon
C. Boron
D. Fluorine
5. The primary metallic element thatís added to steel to make stainless steel is
A. antimony
B. silver
C. tungsten
D. chromium
6. The atomic number of oxygen is 8, because oxygen has
A. eight protons in the nucleus.
B. electrons in eight shells.
C. a second shell with eight electrons.
D. an atomic mass of 8.
7. Which one of the following chemical equations is balanced?
A. 2NaCl + H2SO4 → HCl + NaSO4
B. NH3 + H2O → 2NH4OH
C. KOH + H2SO4 → KHSO4 + H2O
D. 2Na + S → 2NaS
8. Which of the following is one way to prevent the corrosion of iron?
A. Protect the iron from polluted air.
B. Add carbon to the iron.
C. Let the iron develop a natural coat of carbonate.
D. Paint exposed iron parts with protective paint.
9. Which one of the following groups of chemical compounds is composed entirely of organic compounds?
A. C2H4O, CH2O, CaSO4, C3H5(OH)3
B. C6H6, C2H5OH, C6H5CH3, C3H5(NO3)3
C. C2H2, CH4, CaCl2, CaCN2
D. CH3OCH3, Ca3(PO4)2, CO2, H2CO3
10. According to the periodic table in your study unit, the element fluorine is classified as a
A. metal
B. metalloid
C. alkali metal
D. halogen
11. The molecular mass of sodium oxide (Na2O) is
A. 22.98977
B. 38.98917
C. 45.97954
D. 61.97894
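A quick arithmetic check for question 11: a minimal Python sketch, assuming standard atomic masses of 22.98977 u for Na and 15.9994 u for O (typical reference values; they are not given in the quiz itself):

    # Sum the assumed atomic masses for the composition Na2O.
    atomic_mass = {"Na": 22.98977, "O": 15.9994}  # assumed reference values, in u

    def molecular_mass(formula):
        # formula maps an element symbol to its atom count
        return sum(atomic_mass[el] * n for el, n in formula.items())

    print(molecular_mass({"Na": 2, "O": 1}))  # 61.97894, matching choice D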
12. Which of the following is represented in the highest percentage by volume in dry air?
A. Oxygen
B. Nitrogen
C. Carbon dioxide
D. Hydrogen
13. Sodium is preferred over potassium in industrial applications because
A. sodium salts are easily decomposed.
B. sodium is produced more economically.
C. potassium oxidizes more slowly in air.
D. potassium has a higher melting temperature.
14. In the process of electrolysis, current can flow through a liquid because
A. cations of the electrolyte accumulate at the positive electrodes.
B. hydrogen is electrically neutralized in the solution.
C. water molecules become negatively charged.
D. negative ions are attracted to the anode.
15. In the chemical equation Zn + 2HCl → ZnCl2 + H2, the reactants are
A. zinc chloride and hydrogen.
B. zinc and hydrogen carbonate.
C. zinc chlorate and water.
D. zinc and hydrochloric acid.
16. If water and oil are combined in a container, the resulting liquid is a(n)
A. emulsion
B. suspension
C. solution
D. mixture
17. Which one of the following statements about sulfuric acid is correct?
A. Sulfuric acid is a known muriatic acid.
B. Sulfuric acid is a strong oxidizing agent.
C. Sulfuric acid has little effect on metals.
D. Sulfuric acid is dangerous to living organisms.
18. The valence of aluminum is +3, and the valence of chlorine is -1. The formula for
aluminum chloride is correctly written as
A. Al3Cl
B. AlCl3
C. ClAl3
D. Cl3Al
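For question 18, the subscripts follow from making the compound electrically neutral with the valences given in the question (+3 for aluminum, -1 for chlorine). A one-line Python check, continuing the same sketch style:

    # One Al at +3 plus three Cl at -1 should sum to zero net charge.
    al_valence, cl_valence = +3, -1
    print(1 * al_valence + 3 * cl_valence)  # 0, so AlCl3 (choice B) is neutral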
19. An atom of chlorine has several valence electrons in its
A. nucleus
B. first shell
C. second shell
D. third shell
20. The names of the compounds FeS, NaCl, NaOH, and Pb(CN)2 all end in the suffix
A. -ide
B. -ite
C. -ic
D. -ate
21. Which one of the following is a characteristic of a metal?
A. Conducts heat readily
B. Changes the color of litmus paper to red
C. Reacts with bases to produce salts
D. Becomes negative when combined with other elements
22. Examine the following unbalanced chemical equation: CO2 + C → CO.
Which of the following is the correctly balanced form of this equation?
A. 2CO2 + C → CO
B. CO2 + C → 2CO
C. CO2 + C2 → CO
D. CO2 + C → C2O
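For question 22, a balanced equation has the same atom tally on each side. A minimal Python sketch, with the element counts for option B entered by hand:

    # Tally atoms on each side of CO2 + C -> 2CO.
    left  = {"C": 1 + 1, "O": 2}  # CO2 contributes C:1 and O:2; the lone C adds C:1
    right = {"C": 2, "O": 2}      # 2CO contributes C:2 and O:2
    print(left == right)          # True, so option B is balanced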
23. One atom of silicon can properly be combined in a compound with
A. one atom of chlorine
B. two atoms of oxygen
C. three atoms of hydrogen
D. four atoms of calcium
24. Which one of the following formulas represents an aldehyde?
A. C6H10O5
B. C2H4O
C. C2H5OH
D. CH3COOH
25. In a nuclear reaction, energy is released by the combination of two elements into a new
element. This process is an example of
A. natural radioactivity
B. artificial disintegration
C. nuclear fusion
D. nuclear fission