| index int64 10 229k | q_id stringlengths 5 6 | question stringlengths 4 300 | best_answer stringlengths 13 15k | all_answers list | num_answers int64 1 170 | top_answers list | num_top_answers int64 0 119 | context stringlengths 1.72k 9.92k | orig stringlengths 1.82k 10k | target stringlengths 21 15k |
|---|---|---|---|---|---|---|---|---|---|---|
144,892 | 1aw68y | What was the crime rate among American GIs during World War 2? | Off the top of my head I can't at the moment think of any large scale war crimes committed by American troops. However, you're always going to have things done by the individual soldier. Band of Brothers (the book) talked about how Liebgott would kill prisoners. That is a war crime. During the Battle of the Bulge German soldiers wearing American uniform, or parts thereof, were summarily executed. That's also a war crime. Summary executions are illegal under the Hague Convention. There were summary executions of SS guards during the liberation of Dachau Concentration Camp. In the Pacific it wasn't uncommon for Japanese attempting to surrender, or having already surrendered, to be killed. It's generally explained by the often no quarter given by the Japanese and so it was returned in kind. There were also quite a lot of cases of American soldiers mutilating the bodies of dead Japanese for souvenirs. Not just talking about your common looting of the dead, but the taking of body parts. I imagine someone will come in and try to claim that the Atomic Bombings of Japan was a crime but it was actually perfectly legal under the rules of the Hague Convention. | [
"Off the top of my head I can't at the moment think of any large scale war crimes committed by American troops. However, you're always going to have things done by the individual soldier. Band of Brothers (the book) talked about how Liebgott would kill prisoners. That is a war crime.\n\nDuring the Battle of the ... | 1 | [
"Off the top of my head I can't at the moment think of any large scale war crimes committed by American troops. However, you're always going to have things done by the individual soldier. Band of Brothers (the book) talked about how Liebgott would kill prisoners. That is a war crime.\n\nDuring the Battle of the ... | 1 | <P> BULLET::::- Secret wartime files made public only in 2006 reveal that American GIs committed 400 sexual offenses in Europe, including 126 rapes in England, between 1942 and 1945. A study by Robert J. Lilly estimates that a total of 14,000 civilian women in England, France and Germany were raped by American GIs during World War II. It is estimated that there were around 3,500 rapes by American servicemen in France between June 1944 and the end of the war and one historian has claimed that sexual violence against women in liberated France was common.
<P> Secret wartime files made public in 2006 reveal that American GIs committed 400 sexual offenses in Europe, including 126 rapes in the United Kingdom, between 1942 and 1945. A study by Robert J. Lilly estimates that a total of 14,000 civilian women in Great Britain, France and Germany were raped by American GIs during World War II. It is estimated that there were around 3,500 rapes by American servicemen in France between June 1944 and the end of the war and one historian has claimed that sexual violence against women in liberated France was common. In the 2007 publication "Taken by Force", sociology and criminology professor J. Robert Lilly estimates US soldiers raped around 11,040 women and children during the occupation of Germany. Many armed soldiers committed gang rapes at gunpoint against female civilians and children. According to German historian Miriam Gebhardt, some 190,000 women were raped by American soldiers in Germany
<P> According to the FBI’s Supplementary Homicide Reports, between the early 1960s and the late 1970s, the rate of homicides doubled. For every 100,000 U.S residents, the homicide victim rate elevated from 4.6 to 9.7.
<P> Taken by Force: Rape and American GIs in Europe in World War II is a 2007 book by Northern Kentucky University sociology and criminology professor J. Robert Lilly that examines the issue of rape by U.S. servicemen in European theatre of World War II.
<P> In the United States, murder rates have been higher and have fluctuated. They fell below 2 per 100,000 by 1900, rose during the first half of the century, dropped in the years following World War II, and bottomed out at 4.0 in 1957 before rising again. The rate stayed in 9 to 10 range most of the period from 1972 to 1994, before falling to 5 in present times. The increase since 1957 would have been even greater if not for the significant improvements in medical techniques and emergency response times, which mean that more and more attempted homicide victims survive. According to one estimate, if the lethality levels of criminal assaults of 1964 still applied in 1993, the country would have seen the murder rate of around 26 per 100,000, almost triple the actually observed rate of 9.5 per 100,000.
<P> Secret wartime files made public only in 2006 reveal that American GIs committed 400 sexual offenses in Europe, including 126 rapes in England, between 1942 and 1945. A study by Robert J. Lilly estimates that a total of 14,000 civilian women in England, France and Germany were raped by American GIs during World War II. It is estimated that there were around 3,500 rapes by American servicemen in France between June 1944 and the end of the war and one historian has claimed that sexual violence against women in liberated France was common.
<P> Other estimates vary greatly, with one magazine for former POWs putting the number of deaths from the Gross Tychow march alone at 1,500. A senior YMCA official closely involved with the POW camps put the number of Commonwealth and American POW deaths at 8,348 between September 1944 and May 1945.
| question: What was the crime rate among American GIs during World War 2? context: <P> BULLET::::- Secret wartime files made public only in 2006 reveal that American GIs committed 400 sexual offenses in Europe, including 126 rapes in England, between 1942 and 1945. A study by Robert J. Lilly estimates that a total of 14,000 civilian women in England, France and Germany were raped by American GIs during World War II. It is estimated that there were around 3,500 rapes by American servicemen in France between June 1944 and the end of the war and one historian has claimed that sexual violence against women in liberated France was common.
<P> Secret wartime files made public in 2006 reveal that American GIs committed 400 sexual offenses in Europe, including 126 rapes in the United Kingdom, between 1942 and 1945. A study by Robert J. Lilly estimates that a total of 14,000 civilian women in Great Britain, France and Germany were raped by American GIs during World War II. It is estimated that there were around 3,500 rapes by American servicemen in France between June 1944 and the end of the war and one historian has claimed that sexual violence against women in liberated France was common. In the 2007 publication "Taken by Force", sociology and criminology professor J. Robert Lilly estimates US soldiers raped around 11,040 women and children during the occupation of Germany. Many armed soldiers committed gang rapes at gunpoint against female civilians and children. According to German historian Miriam Gebhardt, some 190,000 women were raped by American soldiers in Germany
<P> According to the FBI’s Supplementary Homicide Reports, between the early 1960s and the late 1970s, the rate of homicides doubled. For every 100,000 U.S residents, the homicide victim rate elevated from 4.6 to 9.7.
<P> Taken by Force: Rape and American GIs in Europe in World War II is a 2007 book by Northern Kentucky University sociology and criminology professor J. Robert Lilly that examines the issue of rape by U.S. servicemen in European theatre of World War II.
<P> In the United States, murder rates have been higher and have fluctuated. They fell below 2 per 100,000 by 1900, rose during the first half of the century, dropped in the years following World War II, and bottomed out at 4.0 in 1957 before rising again. The rate stayed in 9 to 10 range most of the period from 1972 to 1994, before falling to 5 in present times. The increase since 1957 would have been even greater if not for the significant improvements in medical techniques and emergency response times, which mean that more and more attempted homicide victims survive. According to one estimate, if the lethality levels of criminal assaults of 1964 still applied in 1993, the country would have seen the murder rate of around 26 per 100,000, almost triple the actually observed rate of 9.5 per 100,000.
<P> Secret wartime files made public only in 2006 reveal that American GIs committed 400 sexual offenses in Europe, including 126 rapes in England, between 1942 and 1945. A study by Robert J. Lilly estimates that a total of 14,000 civilian women in England, France and Germany were raped by American GIs during World War II. It is estimated that there were around 3,500 rapes by American servicemen in France between June 1944 and the end of the war and one historian has claimed that sexual violence against women in liberated France was common.
<P> Other estimates vary greatly, with one magazine for former POWs putting the number of deaths from the Gross Tychow march alone at 1,500. A senior YMCA official closely involved with the POW camps put the number of Commonwealth and American POW deaths at 8,348 between September 1944 and May 1945.
| answer: Off the top of my head I can't at the moment think of any large scale war crimes committed by American troops. However, you're always going to have things done by the individual soldier. Band of Brothers (the book) talked about how Liebgott would kill prisoners. That is a war crime. During the Battle of the Bulge German soldiers wearing American uniform, or parts thereof, were summarily executed. That's also a war crime. Summary executions are illegal under the Hague Convention. There were summary executions of SS guards during the liberation of Dachau Concentration Camp. In the Pacific it wasn't uncommon for Japanese attempting to surrender, or having already surrendered, to be killed. It's generally explained by the often no quarter given by the Japanese and so it was returned in kind. There were also quite a lot of cases of American soldiers mutilating the bodies of dead Japanese for souvenirs. Not just talking about your common looting of the dead, but the taking of body parts. I imagine someone will come in and try to claim that the Atomic Bombings of Japan was a crime but it was actually perfectly legal under the rules of the Hague Convention. |
153,467 | 2st4aa | what is happening to my body when i get high while simultaneously being drunk? | It sounds like the problem originates in the brain, which ultimately controls the vomiting reflex. Since both alcohol and vegetable-based hallucinogens scramble the neurons' normal functions, somewhere the decision is taken to park the tiger. | [
"It sounds like the problem originates in the brain, which ultimately controls the vomiting reflex. Since both alcohol and vegetable-based hallucinogens scramble the neurons' normal functions, somewhere the decision is taken to park the tiger."
] | 1 | [] | 0 | <P> Alcohol intoxication, also known as drunkenness or alcohol poisoning, is the negative behavior and physical effects due to the recent drinking of ethanol (alcohol). Symptoms at lower doses may include mild sedation and poor coordination. At higher doses, there may be slurred speech, trouble walking, and vomiting. Extreme doses may result in a decreased effort to breathe (respiratory depression), coma, or death. Complications may include seizures, aspiration pneumonia, injuries including suicide, and low blood sugar.
<P> As drinking increases, people become sleepy, or fall into a stupor. After a very high level of consumption, the respiratory system becomes depressed and the person will stop breathing. Comatose patients may aspirate their vomit (resulting in vomitus in the lungs, which may cause "drowning" and later pneumonia if survived). CNS depression and impaired motor co-ordination along with poor judgment increases the likelihood of accidental injury occurring. It is estimated that about one-third of alcohol-related deaths are due to accidents and another 14% are from intentional injury.
<P> Alcohol also limits the production of vasopressin (ADH) from the hypothalamus and the secretion of this hormone from the posterior pituitary gland. This is what causes severe dehydration when alcohol is consumed in large amounts. It also causes a high concentration of water in the urine and vomit and the intense thirst that goes along with a hangover.
<P> Vomiting excessive amounts of alcohol is an attempt by the body to prevent alcohol poisoning and death. Vomiting may also be caused by other drugs, such as opiates, or toxins found in some foods and plants. Food allergies and sensitivities, such as lactose intolerance, can cause vomiting.
<P> Alcohol is a depressant. After consumption, alcohol causes the body’s systems to slow down. Often, feelings of drunkenness are associated with elation and happiness but other feelings of anger or depression can arise. Balance, judgment, and coordination are also negatively affected. One of the most significant short term side effects of alcohol is reduced inhibition. Reduced inhibitions can lead to an increase in sexual behavior.
<P> Alcohol can also cause alterations in the vestibular system for short periods and will result in vertigo and possibly nystagmus due to the variable viscosity of the blood and the endolymph during the consumption of alcohol. The common term for this type of sensation is the "bed spins".
<P> Several other studies have shown that students who were told they were consuming alcoholic beverages (which in fact were non-alcoholic) perceived themselves as being "drunk", exhibited fewer physiological symptoms of social stress, and drove a simulated car similarly to other subjects who had actually consumed alcohol. The result is somewhat similar to the placebo effect.
| question: what is happening to my body when i get high while simultaneously being drunk? context: <P> Alcohol intoxication, also known as drunkenness or alcohol poisoning, is the negative behavior and physical effects due to the recent drinking of ethanol (alcohol). Symptoms at lower doses may include mild sedation and poor coordination. At higher doses, there may be slurred speech, trouble walking, and vomiting. Extreme doses may result in a decreased effort to breathe (respiratory depression), coma, or death. Complications may include seizures, aspiration pneumonia, injuries including suicide, and low blood sugar.
<P> As drinking increases, people become sleepy, or fall into a stupor. After a very high level of consumption, the respiratory system becomes depressed and the person will stop breathing. Comatose patients may aspirate their vomit (resulting in vomitus in the lungs, which may cause "drowning" and later pneumonia if survived). CNS depression and impaired motor co-ordination along with poor judgment increases the likelihood of accidental injury occurring. It is estimated that about one-third of alcohol-related deaths are due to accidents and another 14% are from intentional injury.
<P> Alcohol also limits the production of vasopressin (ADH) from the hypothalamus and the secretion of this hormone from the posterior pituitary gland. This is what causes severe dehydration when alcohol is consumed in large amounts. It also causes a high concentration of water in the urine and vomit and the intense thirst that goes along with a hangover.
<P> Vomiting excessive amounts of alcohol is an attempt by the body to prevent alcohol poisoning and death. Vomiting may also be caused by other drugs, such as opiates, or toxins found in some foods and plants. Food allergies and sensitivities, such as lactose intolerance, can cause vomiting.
<P> Alcohol is a depressant. After consumption, alcohol causes the body’s systems to slow down. Often, feelings of drunkenness are associated with elation and happiness but other feelings of anger or depression can arise. Balance, judgment, and coordination are also negatively affected. One of the most significant short term side effects of alcohol is reduced inhibition. Reduced inhibitions can lead to an increase in sexual behavior.
<P> Alcohol can also cause alterations in the vestibular system for short periods and will result in vertigo and possibly nystagmus due to the variable viscosity of the blood and the endolymph during the consumption of alcohol. The common term for this type of sensation is the "bed spins".
<P> Several other studies have shown that students who were told they were consuming alcoholic beverages (which in fact were non-alcoholic) perceived themselves as being "drunk", exhibited fewer physiological symptoms of social stress, and drove a simulated car similarly to other subjects who had actually consumed alcohol. The result is somewhat similar to the placebo effect.
| answer: It sounds like the problem originates in the brain, which ultimately controls the vomiting reflex. Since both alcohol and vegetable-based hallucinogens scramble the neurons' normal functions, somewhere the decision is taken to park the tiger. |
41,677 | 1wuprh | why fruits get juicier after ripening after they have been cut off the tree | Many fruits do not need the tree to ripen; they have their own energy store (their sugars and starches) and the chemicals necessary to ripen are already present. | [
"Many fruits do not need the tree to ripen; they have their own energy store (their sugars and starches) and the chemicals necessary to ripen are already present. ",
"Well usually the fruit has a seed and the rest of the fruit is to provide nutrients to the seed. When the fruit ripens, it slowly starts to feed th... | 2 | [
"Many fruits do not need the tree to ripen; they have their own energy store (their sugars and starches) and the chemicals necessary to ripen are already present. "
] | 1 | <P> Fruit maturity is not always apparent visually, as the fruits remain the same shade of green until they are overripe or rotting. One usually may sense ripeness, however, by giving the fruit a soft squeeze; a ripe feijoa yields to pressure somewhat like a just-ripe banana. Generally, the fruit is at its optimum ripeness the day it drops from the tree. While still hanging, it may well prove bitter; once fallen, however, the fruit very quickly becomes overripe, so daily collection of fallen fruit is advisable during the season.
<P> Citrus fruits are nonclimacteric and respiration slowly declines and the production and release of ethylene is gradual. The fruits do not go through a ripening process in the sense that they become "tree ripe". Some fruits, for example cherries, physically mature and then continue to ripen on the tree. Other fruits, such as pears, are picked when mature, but before they ripen, then continue to ripen off the tree. Citrus fruits pass from immaturity to maturity to overmaturity while still on the tree. Once they are separated from the tree, they do not increase in sweetness or continue to ripen. The only way change may happen after being picked is that they eventually start to decay.
<P> The fruits are orange, woody arils and may remain on the parent for several years after splitting open. Fruit production is very rare. Studies from 2010-2012 show that most populations continue to produce no fruit.
<P> Although not a disease as such, irregular supplies of water can cause growing or ripening fruit to split. Besides cosmetic damage, the splits may allow decay to start, although growing fruits have some ability to heal after a split. In addition, a deformity called cat-facing can be caused by pests, temperature stress, or poor soil conditions. Affected fruit usually remains edible, but its appearance may be unsightly.
<P> Ripening occurs when a fruit is mature. Ripeness is followed by senescence and breakdown of the fruit. The category “fruit” refers also to products such as aubergine, sweet pepper and tomato. Non-climacteric fruit only ripen while still attached to the parent plant. Their eating quality suffers if they are harvested before fully ripe as their sugar and acid content does not increase further. Examples are citrus, grapes and pineapple. Early harvesting is often carried out for export shipments to minimise loss during transport, but a consequence of this is that the flavour suffers. Climacteric fruit are those that can be harvested when mature but before ripening has begun. These include banana, melon, papaya, and tomato. In commercial fruit marketing the rate of ripening is controlled artificially, thus enabling transport and distribution to be carefully planned. Ethylene gas is produced in most plant tissues and is important in starting off the ripening process. It can be used commercially for the ripening of climacteric fruits. However, natural ethylene produced by fruits can lead to in- storage losses. For example, ethylene destroys the green colour of plants. Leafy vegetables will be damaged if stored with ripening fruit. Ethylene production is increased when fruits are injured or decaying and this can cause early ripening of climacteric fruit during transport.
<P> The plant becomes woody as the fruits develop. As they ripen, the plant begins to die, dries out and becomes brittle. In that state the base of the stem breaks off easily, particularly in a high wind. The plant then rolls readily before the wind and disperses its seeds as a tumbleweed.
<P> A single stem bears 20 to 30 fruiting spikes. The harvest begins as soon as one or two fruits at the base of the spikes begin to turn red, and before the fruit is fully mature, and still hard; if allowed to ripen completely, the fruit lose pungency, and ultimately fall off and are lost. The spikes are collected and spread out to dry in the sun, then the peppercorns are stripped off the spikes.
| question: why fruits get juicier after ripening after they have been cut off the tree context: <P> Fruit maturity is not always apparent visually, as the fruits remain the same shade of green until they are overripe or rotting. One usually may sense ripeness, however, by giving the fruit a soft squeeze; a ripe feijoa yields to pressure somewhat like a just-ripe banana. Generally, the fruit is at its optimum ripeness the day it drops from the tree. While still hanging, it may well prove bitter; once fallen, however, the fruit very quickly becomes overripe, so daily collection of fallen fruit is advisable during the season.
<P> Citrus fruits are nonclimacteric and respiration slowly declines and the production and release of ethylene is gradual. The fruits do not go through a ripening process in the sense that they become "tree ripe". Some fruits, for example cherries, physically mature and then continue to ripen on the tree. Other fruits, such as pears, are picked when mature, but before they ripen, then continue to ripen off the tree. Citrus fruits pass from immaturity to maturity to overmaturity while still on the tree. Once they are separated from the tree, they do not increase in sweetness or continue to ripen. The only way change may happen after being picked is that they eventually start to decay.
<P> The fruits are orange, woody arils and may remain on the parent for several years after splitting open. Fruit production is very rare. Studies from 2010-2012 show that most populations continue to produce no fruit.
<P> Although not a disease as such, irregular supplies of water can cause growing or ripening fruit to split. Besides cosmetic damage, the splits may allow decay to start, although growing fruits have some ability to heal after a split. In addition, a deformity called cat-facing can be caused by pests, temperature stress, or poor soil conditions. Affected fruit usually remains edible, but its appearance may be unsightly.
<P> Ripening occurs when a fruit is mature. Ripeness is followed by senescence and breakdown of the fruit. The category “fruit” refers also to products such as aubergine, sweet pepper and tomato. Non-climacteric fruit only ripen while still attached to the parent plant. Their eating quality suffers if they are harvested before fully ripe as their sugar and acid content does not increase further. Examples are citrus, grapes and pineapple. Early harvesting is often carried out for export shipments to minimise loss during transport, but a consequence of this is that the flavour suffers. Climacteric fruit are those that can be harvested when mature but before ripening has begun. These include banana, melon, papaya, and tomato. In commercial fruit marketing the rate of ripening is controlled artificially, thus enabling transport and distribution to be carefully planned. Ethylene gas is produced in most plant tissues and is important in starting off the ripening process. It can be used commercially for the ripening of climacteric fruits. However, natural ethylene produced by fruits can lead to in- storage losses. For example, ethylene destroys the green colour of plants. Leafy vegetables will be damaged if stored with ripening fruit. Ethylene production is increased when fruits are injured or decaying and this can cause early ripening of climacteric fruit during transport.
<P> The plant becomes woody as the fruits develop. As they ripen, the plant begins to die, dries out and becomes brittle. In that state the base of the stem breaks off easily, particularly in a high wind. The plant then rolls readily before the wind and disperses its seeds as a tumbleweed.
<P> A single stem bears 20 to 30 fruiting spikes. The harvest begins as soon as one or two fruits at the base of the spikes begin to turn red, and before the fruit is fully mature, and still hard; if allowed to ripen completely, the fruit lose pungency, and ultimately fall off and are lost. The spikes are collected and spread out to dry in the sun, then the peppercorns are stripped off the spikes.
| answer: Many fruits do not need the tree to ripen; they have their own energy store (their sugars and starches) and the chemicals necessary to ripen are already present. |
28,247 | 1oolvd | I biked home in the rain tonight. Would I have got just as wet from walking? | Consider purely vertical rain. When standing or biking horizontally you're traveling *vertically* through the rain field at the speed v_rain. The amount of water that hits you is the rain density D_rain times the volume you sweep out. That volume is just your vertical cross-section times the speed the rain is falling and the time you sit out in the rain: * W_from-above = D_rain * A_top * v_rain * time. Meanwhile, if you're traveling horizontally, you sweep out a volume from your horizontal motion. If the rain is falling vertically, then those two volumes are independent: * W_from_ahead = D_rain * A_front * V_forward * time or * W_from_ahead = D_rain * A_front * dist. So in vertical rain you *always* get wetter by going slower: * W = W_from_above + W_from_ahead = D_rain * (A_top * v_rain * time + A_front * dist). The wetness from above depends on how long you're out (but not directly on how far you travel), and the wetness from in front depends on how far you travel (but not at all on how fast you go). Now, if there's a wind and the rain is coming down diagonally, the calculation gets more complex and depends on which direction you are going. If the wind is blowing in the same direction you want to go, there is an ideal speed that minimizes your total wetness, but if it's blowing crossways or against you, you always do better just to go fast and get it over with. Biking gives you different overhead and frontal cross sections, too -- you have maybe 2-3x greater overhead cross section and maybe 1.5x-2x less frontal cross section. So you get less wet per meter of forward travel, but more wet per second of exposure, if you're on a bike. | [
"Consider purely vertical rain.\n\nWhen standing or biking horizontally you're traveling *vertically* through the rain field at the speed v_rain. The amount of water that hits you is the rain density D_rain times the volume you sweep out. That volume is just your vertical cross-section times the speed the rain is... | 1 | [] | 0 | <P> According to Roll, "I am sure Lance had probably never met a bike racer like me...a person who could still find some joy and happiness in such weather misery. We had eight hours a day, for eight straight days, of continuous riding in the pouring rain - rain in Biblical proportions! I think Lance would've turned things around even without our talks and rides in the Appalachia[n]s, but it turned out to be a pivotal career event for him (and Roll had made a new cycling friend)." A refocused and encouraged Armstrong went on to a successful fourth-place finish in the Vuelta a España, and within a year and a half he had won his first yellow jersey overall victory in the Tour de France road race. Armstrong has since had his yellow jersey wins nullified due to doping. (Roll's tale of the ride is in "Bobke II"; Armstrong's is in "It's Not About the Bike".)
<P> Jason Mraz described the weather as "perfect", saying that "going in and out of rain and sun [...] is a great way to celebrate." He then claimed: "I've never experiences torrential rain in my life." MC Double D of Sneaky Sound System described the differing tolerance for rain in Ireland and his native country. "You like it wet over here. I'm from Australia and we don't like it wet over there. You guys get a bit annoyed if it's too sunny I think. [...] We're playing in a tent so I can't complain."
<P> It was on returning to England that he made a second attempt on the long-distance record. He kept a diary which appeared in a newspaper in Aberdeen and in "The London Bicycle Club Gazette". On his first day he rode into sweeping rain near Bodmin.
<P> In 2005, up to 6,000 cyclists met at Federation Square to have breakfast as part of Ride to Work Day, double the number from 2004. About 10,000 Victorians are estimated to have left their cars at home in favour of the bike.
<P> The Bike Race is another race where the entrant has to ride up the steepest road in the Old Town, using a Butchers Bike in the quickest time possible, without taking their buttocks off the saddle. This event is undertaken in memory of a local fisherman who died during the Great Storm of 1987.
<P> In 1965, Percy Stallard (aged 55) rode his bicycle solo over the Theodul Pass. The Rough Stuff Fellowship, an organisation for enthusiasts of cross-country cycling, acknowledged that it was probably the first time a cyclist had done it. Stallard made it in less than 15 hours, sometimes through deep snow.
<P> I felt he was suspicious because it was raining. He was in-between houses, cutting in-between houses, and he was walking very leisurely for the weather... It didn't look like he was a resident that went to check their mail and got caught in the rain and was hurrying back home. He didn't look like a fitness fanatic that would train in the rain.
| question: I biked home in the rain tonight. Would I have got just as wet from walking? context: <P> According to Roll, "I am sure Lance had probably never met a bike racer like me...a person who could still find some joy and happiness in such weather misery. We had eight hours a day, for eight straight days, of continuous riding in the pouring rain - rain in Biblical proportions! I think Lance would've turned things around even without our talks and rides in the Appalachia[n]s, but it turned out to be a pivotal career event for him (and Roll had made a new cycling friend)." A refocused and encouraged Armstrong went on to a successful fourth-place finish in the Vuelta a España, and within a year and a half he had won his first yellow jersey overall victory in the Tour de France road race. Armstrong has since had his yellow jersey wins nullified due to doping. (Roll's tale of the ride is in "Bobke II"; Armstrong's is in "It's Not About the Bike".)
<P> Jason Mraz described the weather as "perfect", saying that "going in and out of rain and sun [...] is a great way to celebrate." He then claimed: "I've never experiences torrential rain in my life." MC Double D of Sneaky Sound System described the differing tolerance for rain in Ireland and his native country. "You like it wet over here. I'm from Australia and we don't like it wet over there. You guys get a bit annoyed if it's too sunny I think. [...] We're playing in a tent so I can't complain."
<P> It was on returning to England that he made a second attempt on the long-distance record. He kept a diary which appeared in a newspaper in Aberdeen and in "The London Bicycle Club Gazette". On his first day he rode into sweeping rain near Bodmin.
<P> In 2005, up to 6,000 cyclists met at Federation Square to have breakfast as part of Ride to Work Day, double the number from 2004. About 10,000 Victorians are estimated to have left their cars at home in favour of the bike.
<P> The Bike Race is another race where the entrant has to ride up the steepest road in the Old Town, using a Butchers Bike in the quickest time possible, without taking their buttocks off the saddle. This event is undertaken in memory of a local fisherman who died during the Great Storm of 1987.
<P> In 1965, Percy Stallard (aged 55) rode his bicycle solo over the Theodul Pass. The Rough Stuff Fellowship, an organisation for enthusiasts of cross-country cycling, acknowledged that it was probably the first time a cyclist had done it. Stallard made it in less than 15 hours, sometimes through deep snow.
<P> I felt he was suspicious because it was raining. He was in-between houses, cutting in-between houses, and he was walking very leisurely for the weather... It didn't look like he was a resident that went to check their mail and got caught in the rain and was hurrying back home. He didn't look like a fitness fanatic that would train in the rain.
| answer: Consider purely vertical rain. When standing or biking horizontally you're traveling *vertically* through the rain field at the speed v_rain. The amount of water that hits you is the rain density D_rain times the volume you sweep out. That volume is just your vertical cross-section times the speed the rain is falling and the time you sit out in the rain: * W_from-above = D_rain * A_top * v_rain * time. Meanwhile, if you're traveling horizontally, you sweep out a volume from your horizontal motion. If the rain is falling vertically, then those two volumes are independent: * W_from_ahead = D_rain * A_front * V_forward * time or * W_from_ahead = D_rain * A_front * dist. So in vertical rain you *always* get wetter by going slower: * W = W_from_above + W_from_ahead = D_rain * (A_top * v_rain * time + A_front * dist). The wetness from above depends on how long you're out (but not directly on how far you travel), and the wetness from in front depends on how far you travel (but not at all on how fast you go). Now, if there's a wind and the rain is coming down diagonally, the calculation gets more complex and depends on which direction you are going. If the wind is blowing in the same direction you want to go, there is an ideal speed that minimizes your total wetness, but if it's blowing crossways or against you, you always do better just to go fast and get it over with. Biking gives you different overhead and frontal cross sections, too -- you have maybe 2-3x greater overhead cross section and maybe 1.5x-2x less frontal cross section. So you get less wet per meter of forward travel, but more wet per second of exposure, if you're on a bike. |
130,315 | 1vhj2t | is there an actual genetic proof to the penis size stereotypes? | You should try [r/askscience](_URL_0_) for a more accurate answer | [
"There are documented statistical patterns among men of varios races. The stereotypes are consistent with these statistical patterns in terms of rank but not in terms of absolute measurements.",
"The difference is only .2 inches or so, but the stereotypes did match up. I'll edit my comment later with a real sourc... | 6 | [
"There are documented statistical patterns among men of varios races. The stereotypes are consistent with these statistical patterns in terms of rank but not in terms of absolute measurements.",
"The difference is only .2 inches or so, but the stereotypes did match up. I'll edit my comment later with a real sourc... | 6 | <P> There are certain genes, like homeobox (Hox a and d) genes, which may have a role in regulating penis size. In humans, the AR gene located on the X chromosome at Xq11-12 which may determine the penis size. The SRY gene located on the Y chromosome may have a role to play. Variance in size can often be attributed to "de novo" mutations. Deficiency of pituitary growth hormone or gonadotropins or mild degrees of androgen insensitivity can cause small penis size in males and can be addressed with growth hormone or testosterone treatment in early childhood.
<P> Morris said that "Homo sapiens" not only have the largest brains of all higher primates, but that sexual selection in human evolution has caused humans to have the highest ratio of penis size to body mass. Morris conjectured that human ear-lobes developed as an additional erogenous zone to facilitate the extended sexuality necessary in the evolution of human monogamous pair bonding. Morris further stated that the more rounded shape of human female breasts means they are mainly a sexual signalling device rather than simply for providing milk for infants.
<P> The belief that penis size varies according to race is not supported by scientific evidence. A 2005 study reported that "there is no scientific background to support the alleged 'oversized' penis in black people". In fact, a study of 253 men from Tanzania found that the average stretched flaccid penis length of Tanzanian males is 11 cm (4.53 inches) long, smaller than the worldwide average, stretched flaccid penis length of 13.24 cm (5.21 inches), and average erect penis length of 13.12 cm (5.17 inches).
<P> In an interview with "The New York Times," Sarich agreed with his critics, who stated that there was little or no scientific basis for his claims about homosexuality, or on the relationship that he was then teaching of brain size to intelligence. He told the "Times" there seems to be a correlation but "there is not a lot of evidence to support that theory because there isn't a lot of research done on the subject."
<P> He later returned to explain how Deon is able to take undistorted pictures of his enormous penis with the iPhone 5's panoramic camera. He also explained that Deon's claim reinforces the stereotype that all black men have larger than average sized penises.
<P> It has been suggested that differences in penis size between individuals are caused not only by genetics, but also by environmental factors such as culture, diet and chemical or pollution exposure. Endocrine disruption resulting from chemical exposure has been linked to genital deformation in both sexes (among many other problems). Chemicals from both synthetic (e.g., pesticides, anti-bacterial triclosan, plasticizers for plastics) and natural (e.g., chemicals found in tea tree oil and lavender oil) sources have been linked to various degrees of endocrine disruption.
<P> The theory of sexual selection has been used to explain a number of human anatomical features. These include rounded breasts, facial hair, pubic hair and penis size. The breasts of primates are flat, yet are able to produce sufficient milk for feeding their young. The breasts of non-lactating human females are filled with fatty tissue and not milk. Thus it has been suggested the rounded female breasts are signals of fertility. Richard Dawkins has speculated that the loss of the penis bone in humans, when it is present in other primates, may be due to sexual selection by females looking for a clear sign of good health in prospective mates. Since a human erection relies on a hydraulic pumping system, erection failure is a sensitive early warning of certain kinds of physical and mental ill health.
| question: is there an actual genetic proof to the penis size stereotypes? context: <P> There are certain genes, like homeobox (Hox a and d) genes, which may have a role in regulating penis size. In humans, the AR gene located on the X chromosome at Xq11-12 which may determine the penis size. The SRY gene located on the Y chromosome may have a role to play. Variance in size can often be attributed to "de novo" mutations. Deficiency of pituitary growth hormone or gonadotropins or mild degrees of androgen insensitivity can cause small penis size in males and can be addressed with growth hormone or testosterone treatment in early childhood.
<P> Morris said that "Homo sapiens" not only have the largest brains of all higher primates, but that sexual selection in human evolution has caused humans to have the highest ratio of penis size to body mass. Morris conjectured that human ear-lobes developed as an additional erogenous zone to facilitate the extended sexuality necessary in the evolution of human monogamous pair bonding. Morris further stated that the more rounded shape of human female breasts means they are mainly a sexual signalling device rather than simply for providing milk for infants.
<P> The belief that penis size varies according to race is not supported by scientific evidence. A 2005 study reported that "there is no scientific background to support the alleged 'oversized' penis in black people". In fact, a study of 253 men from Tanzania found that the average stretched flaccid penis length of Tanzanian males is 11 cm (4.53 inches) long, smaller than the worldwide average, stretched flaccid penis length of 13.24 cm (5.21 inches), and average erect penis length of 13.12 cm (5.17 inches).
<P> In an interview with "The New York Times," Sarich agreed with his critics, who stated that there was little or no scientific basis for his claims about homosexuality, or on the relationship that he was then teaching of brain size to intelligence. He told the "Times" there seems to be a correlation but "there is not a lot of evidence to support that theory because there isn't a lot of research done on the subject."
<P> He later returned to explain how Deon is able to take undistorted pictures of his enormous penis with the iPhone 5's panoramic camera. He also explained that Deon's claim reinforces the stereotype that all black men have larger than average sized penises.
<P> It has been suggested that differences in penis size between individuals are caused not only by genetics, but also by environmental factors such as culture, diet and chemical or pollution exposure. Endocrine disruption resulting from chemical exposure has been linked to genital deformation in both sexes (among many other problems). Chemicals from both synthetic (e.g., pesticides, anti-bacterial triclosan, plasticizers for plastics) and natural (e.g., chemicals found in tea tree oil and lavender oil) sources have been linked to various degrees of endocrine disruption.
<P> The theory of sexual selection has been used to explain a number of human anatomical features. These include rounded breasts, facial hair, pubic hair and penis size. The breasts of primates are flat, yet are able to produce sufficient milk for feeding their young. The breasts of non-lactating human females are filled with fatty tissue and not milk. Thus it has been suggested the rounded female breasts are signals of fertility. Richard Dawkins has speculated that the loss of the penis bone in humans, when it is present in other primates, may be due to sexual selection by females looking for a clear sign of good health in prospective mates. Since a human erection relies on a hydraulic pumping system, erection failure is a sensitive early warning of certain kinds of physical and mental ill health.
| answer: You should try [r/askscience](_URL_0_) for a more accurate answer |
185,726 | 1svtb1 | If I wore goggles that inverted my vision, would my brain adapt and make it seem as if its not? | [Yes.](_URL_0_) *Psychologist George M. Stratton conducted, in the 1890s, experiments in which he tested the theory of perceptual adaptation. In one experiment, he wore a reversing glasses for 21½ hours over three days, with no change in his vision. After removing the glasses, "normal vision was restored instantaneously and without any disturbance in the natural appearance or position of objects.* | [
"[Yes.](_URL_0_) \n *Psychologist George M. Stratton conducted, in the 1890s, experiments in which he tested the theory of perceptual adaptation. In one experiment, he wore a reversing glasses for 21½ hours over three days, with no change in his vision. After removing the glasses, \"normal vision was restored in... | 3 | [
"[Yes.](_URL_0_) \n *Psychologist George M. Stratton conducted, in the 1890s, experiments in which he tested the theory of perceptual adaptation. In one experiment, he wore a reversing glasses for 21½ hours over three days, with no change in his vision. After removing the glasses, \"normal vision was restored in... | 2 | <P> The initial pointing errors induced by the prismatic goggles are caused by the misalignment of the observer's motor and proprioceptive maps. Once the error has been detected, the observer makes a conscious effort to try and fix the error via strategic recalibration. The reduction in error is also helped by an unconscious process referred to as spatial realignment, which gradually realigns the visual and proprioceptive maps (Newport and Schenk, 2012). This means that over a series of repeated attempts, the observer is able to reduce the margin of error and become more accurate in pointing to the visual target despite the visual displacement. Usually it takes an individual as few as 10 trials to adapt to the visual displacement and successfully point to the target (Rosetti et al., 1993).
<P> The brain naturally guards against double vision. In an attempt to avoid double vision, the brain can sometimes ignore the image from one eye, a process known as suppression. The ability to suppress is to be found particularly in childhood when the brain is still developing. Thus, those with childhood strabismus almost never complain of diplopia, while adults who develop strabismus almost always do. While this ability to suppress might seem an entirely positive adaptation to strabismus, in the developing child, this can prevent the proper development of vision in the affected eye, resulting in amblyopia. Some adults are also able to suppress their diplopia, but their suppression is rarely as deep or as effective and takes much longer to establish, thus they are not at risk of permanently compromising their vision. In some cases, diplopia disappears without medical intervention, but in other cases, the cause of the double vision may still be present.
<P> On a later experiment, Stratton wore the glasses for eight whole days. By day four, the images seen through the instrument were still upside down. However, on day five, images appeared upright until he concentrated on them; then they became inverted again. By having to concentrate on his vision to turn it upside down again, especially when he knew images were hitting his retinas in the opposite orientation as normal, Stratton deduced his brain had adapted to the changes in vision.
<P> The purpose of the goggles is to disable the patient's ability to visually fixate on an object while at the same time allowing the examiner to adequately visualize the eye. This is done by using high-powered (+20 diopters) magnifying glasses with an illumination system. With such a high-powered lens, it is unlikely that the patient can adequately focus and visually fixate on an object to suppress nystagmus.
<P> Additionally in adults who have had exotropia since childhood, the brain may adapt to using a "blind-spot" whereby it receives images from both eyes, but no full image from the deviating eye, thus avoiding double vision and in fact increasing peripheral vision on the side of the deviating eye.
<P> Vision deficit usually occurs when lesions grow in the occipital lobe of the brain, causing a blurred daze for patients, especially in sensitivity to light. Focusing upon finer objects becomes a challenge, along with edge and border detection. Driving behind the wheel is dangerous when astroblastoma grows in residual tissue size, since peripheral vision can be insufficient. Horizontal nystagmus and other involuntary eye disorders can occur.
<P> During prism adaptation, an individual wears special prismatic goggles that are made of prism wedges that displace the visual field laterally or vertically. In most cases the visual field is shifted laterally either in the rightward or leftward direction. While wearing the goggles, the individual engages in a perceptual motor task such as pointing to a visual target directly in front of them. A prism adaptation session includes three components: the pre-test, prism exposure, and the post-test. The effects of the prism adaptation paradigm are observed when the performance on the perceptual motor task of the pre-and post-test are compared.
| question: If I wore goggles that inverted my vision, would my brain adapt and make it seem as if its not? context: <P> The initial pointing errors induced by the prismatic goggles are caused by the misalignment of the observer's motor and proprioceptive maps. Once the error has been detected, the observer makes a conscious effort to try and fix the error via strategic recalibration. The reduction in error is also helped by an unconscious process referred to as spatial realignment, which gradually realigns the visual and proprioceptive maps (Newport and Schenk, 2012). This means that over a series of repeated attempts, the observer is able to reduce the margin of error and become more accurate in pointing to the visual target despite the visual displacement. Usually it takes an individual as few as 10 trials to adapt to the visual displacement and successfully point to the target (Rosetti et al., 1993).
<P> The brain naturally guards against double vision. In an attempt to avoid double vision, the brain can sometimes ignore the image from one eye, a process known as suppression. The ability to suppress is to be found particularly in childhood when the brain is still developing. Thus, those with childhood strabismus almost never complain of diplopia, while adults who develop strabismus almost always do. While this ability to suppress might seem an entirely positive adaptation to strabismus, in the developing child, this can prevent the proper development of vision in the affected eye, resulting in amblyopia. Some adults are also able to suppress their diplopia, but their suppression is rarely as deep or as effective and takes much longer to establish, thus they are not at risk of permanently compromising their vision. In some cases, diplopia disappears without medical intervention, but in other cases, the cause of the double vision may still be present.
<P> On a later experiment, Stratton wore the glasses for eight whole days. By day four, the images seen through the instrument were still upside down. However, on day five, images appeared upright until he concentrated on them; then they became inverted again. By having to concentrate on his vision to turn it upside down again, especially when he knew images were hitting his retinas in the opposite orientation as normal, Stratton deduced his brain had adapted to the changes in vision.
<P> The purpose of the goggles is to disable the patient's ability to visually fixate on an object while at the same time allowing the examiner to adequately visualize the eye. This is done by using high-powered (+20 diopters) magnifying glasses with an illumination system. With such a high-powered lens, it is unlikely that the patient can adequately focus and visually fixate on an object to suppress nystagmus.
<P> Additionally in adults who have had exotropia since childhood, the brain may adapt to using a "blind-spot" whereby it receives images from both eyes, but no full image from the deviating eye, thus avoiding double vision and in fact increasing peripheral vision on the side of the deviating eye.
<P> Vision deficit usually occurs when lesions grow in the occipital lobe of the brain, causing a blurred daze for patients, especially in sensitivity to light. Focusing upon finer objects becomes a challenge, along with edge and border detection. Driving behind the wheel is dangerous when astroblastoma grows in residual tissue size, since peripheral vision can be insufficient. Horizontal nystagmus and other involuntary eye disorders can occur.
<P> During prism adaptation, an individual wears special prismatic goggles that are made of prism wedges that displace the visual field laterally or vertically. In most cases the visual field is shifted laterally either in the rightward or leftward direction. While wearing the goggles, the individual engages in a perceptual motor task such as pointing to a visual target directly in front of them. A prism adaptation session includes three components: the pre-test, prism exposure, and the post-test. The effects of the prism adaptation paradigm are observed when the performance on the perceptual motor task of the pre-and post-test are compared.
| answer: [Yes.](_URL_0_) *Psychologist George M. Stratton conducted, in the 1890s, experiments in which he tested the theory of perceptual adaptation. In one experiment, he wore a reversing glasses for 21½ hours over three days, with no change in his vision. After removing the glasses, "normal vision was restored instantaneously and without any disturbance in the natural appearance or position of objects.* |
179,115 | 2m56x4 | why would someone want a flexible spending account? | Money put into your FSA is taken out before you pay taxes on it. Most people are taxed somewhere around a third of their income so, if you can use the money in the FSA, it's a good deal. If you're single, young & healthy, it might seem ridiculous because you don't actually spend much money on predictable healthcare expenses. however... If you have kids, there's a number of scheduled checkups, immunizations and whatnot. If you're older, you may have medical problems that require regular visits to the doctor & prescription drugs that you've been taking daily for years. If you have health problems, you'll also have a bunch of medication you need to take. It can also be good if you have some predictable expenses. If you have poor eyesight, you might want to plan ahead to get a new pair of glasses or contacts. A friend planned ahead for her laser eye surgery, effectively getting a 25% discount on the procedure by not paying taxes on the money. | [
"Money put into your FSA is taken out before you pay taxes on it. Most people are taxed somewhere around a third of their income so, if you can use the money in the FSA, it's a good deal.\n\nIf you're single, young & healthy, it might seem ridiculous because you don't actually spend much money on predictable hea... | 1 | [
"Money put into your FSA is taken out before you pay taxes on it. Most people are taxed somewhere around a third of their income so, if you can use the money in the FSA, it's a good deal.\n\nIf you're single, young & healthy, it might seem ridiculous because you don't actually spend much money on predictable hea... | 1 | <P> A flexible spending account (FSA), also known as a flexible spending arrangement, is one of a number of tax-advantaged financial accounts, resulting in payroll tax savings. Before the Patient Protection and Affordable Care Act, one significant disadvantage to using an FSA was that funds not used by the end of the plan year were forfeited to the employer, known as the "use it or lose it" rule. Under the terms of the Affordable Care Act, a plan may permit an employee to carry over up to $500 into the following year without losing the funds.
<P> Having an account planner involved in the account has led to more integration within the agency, which has resulted in better teamwork in trying to combine the needs of the client, the market and the consumer. Account planners stimulate discussions about things that were overlooked before, such as, purchasing decisions, brand-consumer relationship and specific circumstance evaluation.
<P> Advocates of defined contribution plans point out that each employee has the ability to tailor the investment portfolio to his or her individual needs and financial situation, including the choice of how much to contribute, if anything at all. However, others state that these apparent advantages could also hinder some workers who might not possess the financial savvy to choose the correct investment vehicles or have the discipline to voluntarily contribute money to retirement accounts. This debate parallels the discussion currently going on in the U.S., where many Republican leaders favor transforming the Social Security system, at least in part, to a self-directed investment plan.
<P> He also supports personal accounts for Social Security and Medicare, funded using the employee's portion of FICA payroll taxes, to replace all or part of the benefits paid under the current system. According to Gingrich, private accounts would offer workers retirement and medical benefits much better than what these programs currently offer while greatly reducing the need for government spending.
<P> In 1984, the Internal Revenue Service issued a ruling that, while flexible spending accounts were allowable, employees must elect a certain amount for the plan each year and that any unused amounts would be forfeited at the end of the year. Until that point, some employers had set up flexible spending account plans that allowed employees to simply request reimbursement of any qualifying medical expense with no preset annual limit and no risk of forfeiture by employees.
<P> The cost-benefit relationship constraint is also called cost effectiveness constraints and is pervasive throughout the framework. The companies need to spend money and time in the process of providing financial statements. To be more specific, Costs can constraint the range of information when providing financial reporting on the grounds that the companies must "collect, process, analyze and disseminate relevant information" which need time and money. For investors, they want to know all financial information if possible in ideal condition, which may cause tremendous financial burden in the corporations. Moreover, some financial information may not valuable for external users to acquire a huge benefit, for example, how much money do a company spend for its greening of headquarters. Therefore, when deciding the components of financial reporting, companies need to measure the sense of particular financial information and the expenditure of providing particular information and the benefits they can acquire from this particular information. Properly speaking, If the costs in particular information exceed the benefit they can acquire, companies may choose to not disclose this particular information. For example, If there is $0.1 difference between checkbook register and bank statement, accountant should ignore the $0.1 rather than waste time and money to find the $0.1.
<P> Advocates of Defined contribution plan point out that each employee has the ability to tailor the investment portfolio to his or her individual needs and financial situation, including the choice of how much to contribute, if anything at all. However, others state that these apparent advantages could also hinder some workers who might not possess the financial savvy to choose the correct investment vehicles or have the discipline to voluntarily contribute money to retirement accounts.
| question: why would someone want a flexible spending account? context: <P> A flexible spending account (FSA), also known as a flexible spending arrangement, is one of a number of tax-advantaged financial accounts, resulting in payroll tax savings. Before the Patient Protection and Affordable Care Act, one significant disadvantage to using an FSA was that funds not used by the end of the plan year were forfeited to the employer, known as the "use it or lose it" rule. Under the terms of the Affordable Care Act, a plan may permit an employee to carry over up to $500 into the following year without losing the funds.
<P> Having an account planner involved in the account has led to more integration within the agency, which has resulted in better teamwork in trying to combine the needs of the client, the market and the consumer. Account planners stimulate discussions about things that were overlooked before, such as, purchasing decisions, brand-consumer relationship and specific circumstance evaluation.
<P> Advocates of defined contribution plans point out that each employee has the ability to tailor the investment portfolio to his or her individual needs and financial situation, including the choice of how much to contribute, if anything at all. However, others state that these apparent advantages could also hinder some workers who might not possess the financial savvy to choose the correct investment vehicles or have the discipline to voluntarily contribute money to retirement accounts. This debate parallels the discussion currently going on in the U.S., where many Republican leaders favor transforming the Social Security system, at least in part, to a self-directed investment plan.
<P> He also supports personal accounts for Social Security and Medicare, funded using the employee's portion of FICA payroll taxes, to replace all or part of the benefits paid under the current system. According to Gingrich, private accounts would offer workers retirement and medical benefits much better than what these programs currently offer while greatly reducing the need for government spending.
<P> In 1984, the Internal Revenue Service issued a ruling that, while flexible spending accounts were allowable, employees must elect a certain amount for the plan each year and that any unused amounts would be forfeited at the end of the year. Until that point, some employers had set up flexible spending account plans that allowed employees to simply request reimbursement of any qualifying medical expense with no preset annual limit and no risk of forfeiture by employees.
<P> The cost-benefit relationship constraint is also called cost effectiveness constraints and is pervasive throughout the framework. The companies need to spend money and time in the process of providing financial statements. To be more specific, Costs can constraint the range of information when providing financial reporting on the grounds that the companies must "collect, process, analyze and disseminate relevant information" which need time and money. For investors, they want to know all financial information if possible in ideal condition, which may cause tremendous financial burden in the corporations. Moreover, some financial information may not valuable for external users to acquire a huge benefit, for example, how much money do a company spend for its greening of headquarters. Therefore, when deciding the components of financial reporting, companies need to measure the sense of particular financial information and the expenditure of providing particular information and the benefits they can acquire from this particular information. Properly speaking, If the costs in particular information exceed the benefit they can acquire, companies may choose to not disclose this particular information. For example, If there is $0.1 difference between checkbook register and bank statement, accountant should ignore the $0.1 rather than waste time and money to find the $0.1.
<P> Advocates of Defined contribution plan point out that each employee has the ability to tailor the investment portfolio to his or her individual needs and financial situation, including the choice of how much to contribute, if anything at all. However, others state that these apparent advantages could also hinder some workers who might not possess the financial savvy to choose the correct investment vehicles or have the discipline to voluntarily contribute money to retirement accounts.
| answer: Money put into your FSA is taken out before you pay taxes on it. Most people are taxed somewhere around a third of their income so, if you can use the money in the FSA, it's a good deal. If you're single, young & healthy, it might seem ridiculous because you don't actually spend much money on predictable healthcare expenses. however... If you have kids, there's a number of scheduled checkups, immunizations and whatnot. If you're older, you may have medical problems that require regular visits to the doctor & prescription drugs that you've been taking daily for years. If you have health problems, you'll also have a bunch of medication you need to take. It can also be good if you have some predictable expenses. If you have poor eyesight, you might want to plan ahead to get a new pair of glasses or contacts. A friend planned ahead for her laser eye surgery, effectively getting a 25% discount on the procedure by not paying taxes on the money. |
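A quick way to see the benefit described in that answer is to compute how much tax you avoid on money routed through the FSA. The sketch below is a simplified, hypothetical illustration in Python: the 25% marginal rate and the expense amount are assumptions for the example, and real savings depend on your actual tax bracket, payroll taxes, and the plan's forfeiture/carryover rules.

```python
# Hypothetical illustration: rough tax savings from paying predictable
# medical expenses with pre-tax FSA dollars instead of after-tax income.
# The marginal rate and expense figures below are assumptions, not real data.

def fsa_savings(expenses: float, marginal_tax_rate: float) -> float:
    """Tax avoided by paying `expenses` through a pre-tax FSA."""
    return expenses * marginal_tax_rate

def after_tax_income_needed(expenses: float, marginal_tax_rate: float) -> float:
    """Gross income needed to cover the same expenses with after-tax money."""
    return expenses / (1 - marginal_tax_rate)

if __name__ == "__main__":
    rate = 0.25        # assumed combined marginal tax rate
    planned = 2000.00  # assumed predictable expenses: glasses, copays, etc.
    print(f"Paid through FSA:                   ${planned:,.2f}")
    print(f"Tax avoided:                        ${fsa_savings(planned, rate):,.2f}")
    print(f"Gross pay needed if paid after tax: "
          f"${after_tax_income_needed(planned, rate):,.2f}")
```

On these assumed numbers, $2,000 of planned expenses costs $500 less in tax, which matches the "25% discount" framing in the answer; the divide-by-(1 − rate) view shows how much gross pay the same expenses would otherwise consume.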
77,892 | 2qt93z | why does my vision get obscured when a strong light source hit my eyes | Because your eye adapts to the bright light (the iris closes to a pinhole), which in turn does not let in much light from faint sources as well. The reason this is done automatically and cannot be overridden by you is because bright light in high doses is quite damaging to your retinas. | [
"Because your eye will adapt (iris will close to a pinhole) to adapt to the bright light, which in turn does not let in much light from faint sources as well. The reason this is done automatically and cannot be overridden by you is because bright light in high doses is quite damaging to your retinas."
] | 1 | [] | 0 | <P> These cause permanent obstruction of aqueous outflow. In some cases, pressure may rapidly build up in the eye, causing pain and redness (symptomatic, or so-called "acute" angle closure). In this situation, the vision may become blurred, and halos may be seen around bright lights. Accompanying symptoms may include a headache and vomiting.
<P> The blinding effect is caused in large part by reduced contrast due to light scattering in the eye by excessive brightness, or to reflection of light from dark areas in the field of vision, with luminance similar to the background luminance. This kind of glare is a particular instance of disability glare, called veiling glare. (This is not the same as loss of accommodation of night vision which is caused by the direct effect of the light itself on the eye.)
<P> As objects radiate light in straight lines in all directions, the eye must also be hit with this light over its outer surface. This idea presented a problem for al-Haytham and his predecessors, as if this was the case, the rays received by the eye from every point on the object would cause a blurred image. Al-Haytham solved this problem using his theory of refraction. He argued that although the object sends an infinite number of rays of light to the eye, only one of these lines falls on the eye perpendicularly: the other rays meet the eye at angles that are not perpendicular. According to al-Haytham, this causes them to be refracted and weakened. He claimed that all the rays other than the one that hits the eye perpendicularly are not involved in vision.
<P> Al-Haytham offered many reasons against the extramission theory, pointing to the fact that eyes can be damaged by looking directly at bright lights, such as the sun. He claimed the low probability that the eye can fill the entirety of space as soon as the eyelids are opened as an observer looks up into the night sky. Using the intromission theory as a foundation, he formed his own theory that an object emits rays of light from every point on its surface which then travel in all directions, thereby allowing some light into a viewer's eyes. According to this theory, the object being viewed is considered to be a compilation of an infinite number of points, from which rays of light are projected.
<P> One can observe the effect of straylight by looking at a distant bright light source against a dark background. If the source is small, it would look like a small bright spot if the eye imaged it perfectly. Scattering in the eye makes the source appear spread out, surrounded by glare. The disability glare caused by such a situation has been found to correspond precisely to the effect of true light. As a consequence, disability glare was subsequently defined by this true light, called "straylight".
<P> Averted vision works because there are virtually no rods (cells which detect dim light in black and white) in the fovea: a small area in the center of the eye. The fovea contains primarily cone cells, which serve as bright light and color detectors and are not as useful during the night. This situation results in a decrease in visual sensitivity in central vision at night. Based on the early work of Osterberg (1935), and later confirmed by modern adaptive optics, the density of the rod cells usually reaches a maximum around 20 degrees off the center of vision.
<P> As with any optical system experiencing a defocus aberration, the effect can be exaggerated or masked by changing the aperture size. In the case of the eye, a large pupil emphasizes refractive error and a small pupil masks it. This phenomenon can cause a condition in which an individual has a greater difficulty seeing in low-illumination areas, even though there are no symptoms in bright light, such as daylight.
| question: why does my vision get obscured when a strong light source hit my eyes context: <P> These cause permanent obstruction of aqueous outflow. In some cases, pressure may rapidly build up in the eye, causing pain and redness (symptomatic, or so-called "acute" angle closure). In this situation, the vision may become blurred, and halos may be seen around bright lights. Accompanying symptoms may include a headache and vomiting.
<P> The blinding effect is caused in large part by reduced contrast due to light scattering in the eye by excessive brightness, or to reflection of light from dark areas in the field of vision, with luminance similar to the background luminance. This kind of glare is a particular instance of disability glare, called veiling glare. (This is not the same as loss of accommodation of night vision which is caused by the direct effect of the light itself on the eye.)
<P> As objects radiate light in straight lines in all directions, the eye must also be hit with this light over its outer surface. This idea presented a problem for al-Haytham and his predecessors, as if this was the case, the rays received by the eye from every point on the object would cause a blurred image. Al-Haytham solved this problem using his theory of refraction. He argued that although the object sends an infinite number of rays of light to the eye, only one of these lines falls on the eye perpendicularly: the other rays meet the eye at angles that are not perpendicular. According to al-Haytham, this causes them to be refracted and weakened. He claimed that all the rays other than the one that hits the eye perpendicularly are not involved in vision.
<P> Al-Haytham offered many reasons against the extramission theory, pointing to the fact that eyes can be damaged by looking directly at bright lights, such as the sun. He claimed the low probability that the eye can fill the entirety of space as soon as the eyelids are opened as an observer looks up into the night sky. Using the intromission theory as a foundation, he formed his own theory that an object emits rays of light from every point on its surface which then travel in all directions, thereby allowing some light into a viewer's eyes. According to this theory, the object being viewed is considered to be a compilation of an infinite number of points, from which rays of light are projected.
<P> One can observe the effect of straylight by looking at a distant bright light source against a dark background. If the source is small, it would look like a small bright spot if the eye imaged it perfectly. Scattering in the eye makes the source appear spread out, surrounded by glare. The disability glare caused by such a situation has been found to correspond precisely to the effect of true light. As a consequence, disability glare was subsequently defined by this true light, called "straylight".
<P> Averted vision works because there are virtually no rods (cells which detect dim light in black and white) in the fovea: a small area in the center of the eye. The fovea contains primarily cone cells, which serve as bright light and color detectors and are not as useful during the night. This situation results in a decrease in visual sensitivity in central vision at night. Based on the early work of Osterberg (1935), and later confirmed by modern adaptive optics, the density of the rod cells usually reaches a maximum around 20 degrees off the center of vision.
<P> As with any optical system experiencing a defocus aberration, the effect can be exaggerated or masked by changing the aperture size. In the case of the eye, a large pupil emphasizes refractive error and a small pupil masks it. This phenomenon can cause a condition in which an individual has a greater difficulty seeing in low-illumination areas, even though there are no symptoms in bright light, such as daylight.
| answer: Because your eye adapts to the bright light (the iris closes to a pinhole), which in turn does not let in much light from faint sources as well. The reason this is done automatically and cannot be overridden by you is because bright light in high doses is quite damaging to your retinas. |
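The "pinhole" effect is easy to quantify: the light admitted scales with pupil area, so constriction in bright light cuts the light reaching the retina from everything else in view. A minimal sketch, assuming illustrative pupil diameters of roughly 8 mm (dark-adapted) and 2 mm (under a strong light); the exact values vary by person and conditions, and glare from scattering inside the eye also contributes.

```python
import math

# Illustrative sketch: relative light admitted by the pupil scales with its
# area (pi * r^2). The diameters are rough, assumed values for the example.

def pupil_area_mm2(diameter_mm: float) -> float:
    return math.pi * (diameter_mm / 2) ** 2

dark_adapted = 8.0  # assumed wide pupil in dim light (mm)
constricted = 2.0   # assumed narrow pupil under a strong light source (mm)

ratio = pupil_area_mm2(dark_adapted) / pupil_area_mm2(constricted)
print(f"Area ratio: {ratio:.0f}x")  # (8/2)**2 = 16x less light admitted when constricted
```

On this rough model, once the iris constricts, faint objects deliver on the order of sixteen times less light to the retina, which is why the rest of the scene seems to wash out.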
143,153 | 2m6jqu | what are you hearing different between 320kbps and 128kbps. also flac, mp3, or aac audio | Modern audio compression algorithms are very, very good. Based on decades of psychoacoustic research, they can remove or "blur" only parts of the sound that you can't hear. A modern 128kbps audio file sounds amazingly close to the original. If you want to hear the differences, put on good quality headphones and listen to music with lots of drums and cymbal crashes - those don't sound quite as good in a 128kbps MP3. A properly encoded 320kbps file is indistinguishable from the original. It's compressed, but the data that's lost is beyond human hearing. FLAC is a format that compresses an audio file *losslessly* - not just beyond human hearing, it doesn't change a single bit in the file. Purists love this, but no listening test has ever shown FLAC to be superior to a 320 kbps MP3. If you're recording and mixing, FLAC makes sense, you don't want to compress your raw audio before mixing. MP3 and AAC are two different "lossy" algorithms for compressing audio. They both throw away details that are hard to hear. They're different algorithms, with different pros and cons, but with similar results. At the same bit rate, AAC is slightly better quality than MP3, but not dramatically. | [
"Modern audio compression algorithms are very, very good. Based on decades of psychoacoustic research, they can remove or \"blur\" only parts of the sound that you can't hear. A modern 128kbps audio file sounds amazingly close to the original.\n\nIf you want to hear the differences, put on good quality headphones a... | 1 | [
"Modern audio compression algorithms are very, very good. Based on decades of psychoacoustic research, they can remove or \"blur\" only parts of the sound that you can't hear. A modern 128kbps audio file sounds amazingly close to the original.\n\nIf you want to hear the differences, put on good quality headphones a... | 1 | <P> E-MU 20K is the commercial name for a line of audio chips by Creative Technology, commercially known as the Sound Blaster X-Fi chipset. The series comprises the E-MU 20K1 (CA20K1) and E-MU 20K2 (CA20K2) audio chips.
<P> Microsoft has sometimes claimed that the sound quality of WMA at 64 kbit/s equals or exceeds that of MP3 at 128 kbit/s (both WMA and MP3 are considered near-transparent at 192 kbit/s by most listeners). In a 1999 study funded by Microsoft, National Software Testing Laboratories (NSTL) found that listeners preferred WMA at 64 kbit/s to MP3 at 128 kbit/s (as encoded by MusicMatch Jukebox).
<P> BULLET::::- AAX files are encrypted M4B's. The audio is encoded in variable quality AAC format. While the vast majority of books are encoded at 64 kbit/s, 22.050 kHz, stereo, some are as low as 32k, mono. Radio plays are often encoded at 128kbit/s and 44.1 kHz. Additionally, many audiobooks in Germany are encoded at the latter bitrate and are marketed as "AAX+"; however, there is no difference in the actual file format.
<P> In listening tests around 64 kbit/s, Opus shows superior quality compared to HE-AAC codecs, which were previously dominant due to their use of the patented spectral band replication (SBR) technology. In listening tests around 96 kbit/s, Opus shows slightly superior quality compared to AAC and significantly better quality compared to Vorbis and MP3.
<P> The Sony NWZ-A826 is one of many MP3 players belonging to the Walkman Z-series. This edition features 4 GB flash memory, as well as a large monitor; in addition the MP3 player offers several audio options in a housing with a thickness of 9.3 mm. The EX earplugs come packaged. There are four audio options: Clear Stereo, Clear Bass, VPT Surround and DSEE Sound Enhancer.The ear plugs are a combination of earplugs and a normal earset in one.
<P> MPEG-1 Layer II (MP2—often incorrectly called MUSICAM) is a lossy audio format designed to provide high quality at about 192 kbit/s for stereo sound. Decoding MP2 audio is computationally simple, relative to MP3, AAC, etc.
<P> 24-bit audio does not require dithering, as the noise level of the digital converter is always louder than the required level of any dither that might be applied. 24-bit audio could theoretically encode 144 dB of dynamic range, but based on manufacturer's datasheets no ADCs exist that can provide higher than ~125 dB.
| question: what are you hearing different between 320kbps and 128kbps. also flac, mp3, or aac audio context: <P> E-MU 20K is the commercial name for a line of audio chips by Creative Technology, commercially known as the Sound Blaster X-Fi chipset. The series comprises the E-MU 20K1 (CA20K1) and E-MU 20K2 (CA20K2) audio chips.
<P> Microsoft has sometimes claimed that the sound quality of WMA at 64 kbit/s equals or exceeds that of MP3 at 128 kbit/s (both WMA and MP3 are considered near-transparent at 192 kbit/s by most listeners). In a 1999 study funded by Microsoft, National Software Testing Laboratories (NSTL) found that listeners preferred WMA at 64 kbit/s to MP3 at 128 kbit/s (as encoded by MusicMatch Jukebox).
<P> BULLET::::- AAX files are encrypted M4B's. The audio is encoded in variable quality AAC format. While the vast majority of books are encoded at 64 kbit/s, 22.050 kHz, stereo, some are as low as 32k, mono. Radio plays are often encoded at 128kbit/s and 44.1 kHz. Additionally, many audiobooks in Germany are encoded at the latter bitrate and are marketed as "AAX+"; however, there is no difference in the actual file format.
<P> In listening tests around 64 kbit/s, Opus shows superior quality compared to HE-AAC codecs, which were previously dominant due to their use of the patented spectral band replication (SBR) technology. In listening tests around 96 kbit/s, Opus shows slightly superior quality compared to AAC and significantly better quality compared to Vorbis and MP3.
<P> The Sony NWZ-A826 is one of many MP3 players belonging to the Walkman Z-series. This edition features 4 GB flash memory, as well as a large monitor; in addition the MP3 player offers several audio options in a housing with a thickness of 9.3 mm. The EX earplugs come packaged. There are four audio options: Clear Stereo, Clear Bass, VPT Surround and DSEE Sound Enhancer.The ear plugs are a combination of earplugs and a normal earset in one.
<P> MPEG-1 Layer II (MP2—often incorrectly called MUSICAM) is a lossy audio format designed to provide high quality at about 192 kbit/s for stereo sound. Decoding MP2 audio is computationally simple, relative to MP3, AAC, etc.
<P> 24-bit audio does not require dithering, as the noise level of the digital converter is always louder than the required level of any dither that might be applied. 24-bit audio could theoretically encode 144 dB of dynamic range, but based on manufacturer's datasheets no ADCs exist that can provide higher than ~125 dB.
| answer: Modern audio compression algorithms are very, very good. Based on decades of psychoacoustic research, they can remove or "blur" only parts of the sound that you can't hear. A modern 128kbps audio file sounds amazingly close to the original. If you want to hear the differences, put on good quality headphones and listen to music with lots of drums and cymbal crashes - those don't sound quite as good in a 128kbps MP3. A properly encoded 320kbps file is indistinguishable from the original. It's compressed, but the data that's lost is beyond human hearing. FLAC is a format that compresses an audio file *losslessly* - not just beyond human hearing, it doesn't change a single bit in the file. Purists love this, but no listening test has ever shown FLAC to be superior to a 320 kbps MP3. If you're recording and mixing, FLAC makes sense, you don't want to compress your raw audio before mixing. MP3 and AAC are two different "lossy" algorithms for compressing audio. They both throw away details that are hard to hear. They're different algorithms, with different pros and cons, but with similar results. At the same bit rate, AAC is slightly better quality than MP3, but not dramatically. |
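The bit rates in that answer translate directly into file size: size ≈ bitrate × duration. The sketch below compares a hypothetical 4-minute track at 128 kbps, 320 kbps, and uncompressed CD-quality PCM (1,411 kbps); the FLAC figure is only a rough assumption, since lossless compression typically lands somewhere around half to two-thirds of the raw PCM size depending on the material.

```python
# Rough, illustrative file-size comparison for one track.
# The track length and the FLAC compression ratio are assumptions for the example.

def size_mb(bitrate_kbps: float, duration_s: float) -> float:
    """Approximate file size in megabytes for a constant-bit-rate stream."""
    return bitrate_kbps * 1000 * duration_s / 8 / 1_000_000

duration = 4 * 60  # assumed 4-minute track, in seconds

formats = {
    "MP3/AAC 128 kbps": size_mb(128, duration),
    "MP3/AAC 320 kbps": size_mb(320, duration),
    "Uncompressed CD PCM (1,411 kbps)": size_mb(1411, duration),
}
# Assume FLAC shrinks the raw PCM to ~60% (this varies a lot with the recording).
formats["FLAC (assumed ~60% of PCM)"] = formats["Uncompressed CD PCM (1,411 kbps)"] * 0.6

for name, mb in formats.items():
    print(f"{name:35s} ~{mb:5.1f} MB")
```

The point of the comparison is that the quality differences discussed above are subtle, while the storage differences are large: roughly 4 MB vs 10 MB vs 25+ MB per track under these assumptions.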
907 | 7i45d5 | how was the internet made? like how did they discover coding, etc? | Computers predate the internet by several decades, but the origins of the internet can be traced back to a US Military project in the 1960's called Arpanet. They wanted to see if they could get computers to communicate with each other. The first data packet was sent from a computer at UCLA to one at Stanford in 1969. The technology that came out of Arpanet ultimately led to the commercial internet. | [
"Computers predate the internet by several decades, but the origins of the internet can be traced back to a US Military project in the 1960's called Arpanet. They wanted to see if they could get computers to communicate with each other. The first data packet was sent from a computer at UCLA to one at Stanford in 19... | 1 | [] | 0 | <P> As the Internet grew from a forum for sharing information to a marketplace for doing business, a technology matured that allowed computers to transact with each other more easily. Out of these Internet roots, web service technology was born.
<P> While the Internet began with a U.S. Government research project in the late 1950s, the web in its present form did not appear on the Internet until after Tim Berners-Lee and his colleagues at the European laboratory (CERN) proposed the concept of linking documents with hypertext. But it was not until Mosaic, the forerunner of the famous Netscape Navigator appeared, that the Internet became more than a file serving system.
<P> The history of the Internet begins with the development of electronic computers in the 1950s. Initial concepts of wide area networking originated in several computer science laboratories in the United States, United Kingdom, and France. The U.S. Department of Defense awarded contracts as early as the 1960s, including for the development of the ARPANET project, directed by Robert Taylor and managed by Lawrence Roberts. The first message was sent over the ARPANET in 1969 from computer science Professor Leonard Kleinrock's laboratory at University of California, Los Angeles (UCLA) to the second network node at Stanford Research Institute (SRI).
<P> Made with Code is an initiative launched by Google on 19 July 2014. Google aimed to empower young women in middle and high schools with computer programming skills. Made with Code was created after Google's own research found out that encouragement and exposure are the critical factors that would influence young females to pursue Computer Science. It was reported that Google is funding $50 million to Made with Code, on top of the initial $40 million invested since 2010 in organizations like Code.org, Black Girls Code, and Girls Who Code. The Made with Code initiative involves both online activities as well as real life events, collaborating with notable firms like Shapeways and App Inventor.
<P> Internetworking started as a way to connect disparate types of networking technology, but it became widespread through the developing need to connect two or more local area networks via some sort of wide area network. The original term for an internetwork was catenet.
<P> The origins of the Internet date back to research commissioned by the federal government of the United States in the 1960s to build robust, fault-tolerant communication with computer networks. The primary precursor network, the ARPANET, initially served as a backbone for interconnection of regional academic and military networks in the 1980s. The funding of the National Science Foundation Network as a new backbone in the 1980s, as well as private funding for other commercial extensions, led to worldwide participation in the development of new networking technologies, and the merger of many networks. The linking of commercial networks and enterprises by the early 1990s marked the beginning of the transition to the modern Internet, and generated a sustained exponential growth as generations of institutional, personal, and mobile computers were connected to the network. Although the Internet was widely used by academia since the 1980s, commercialization incorporated its services and technologies into virtually every aspect of modern life.
<P> In the 1950s and 1960s, with the creation of computers, is where the history of the Internet begins. In 1969 came the invention of Arpanet, the first network to run on packet-switching technology. These were the first hosts on what would one day become the Internet. The concept of email was first created by Ray Tomlinson in 1971, and this innovation was followed by Project Gutenberg and eBooks. Tim Berners-Lee is considered the inventor of the World Wide Web; he implemented the first successful communication between a HyperText Transfer Protocol client and a server.
| question: how was the internet made? like how did they discover coding, etc? context: <P> As the Internet grew from a forum for sharing information to a marketplace for doing business, a technology matured that allowed computers to transact with each other more easily. Out of these Internet roots, web service technology was born.
<P> While the Internet began with a U.S. Government research project in the late 1950s, the web in its present form did not appear on the Internet until after Tim Berners-Lee and his colleagues at the European laboratory (CERN) proposed the concept of linking documents with hypertext. But it was not until Mosaic, the forerunner of the famous Netscape Navigator appeared, that the Internet became more than a file serving system.
<P> The history of the Internet begins with the development of electronic computers in the 1950s. Initial concepts of wide area networking originated in several computer science laboratories in the United States, United Kingdom, and France. The U.S. Department of Defense awarded contracts as early as the 1960s, including for the development of the ARPANET project, directed by Robert Taylor and managed by Lawrence Roberts. The first message was sent over the ARPANET in 1969 from computer science Professor Leonard Kleinrock's laboratory at University of California, Los Angeles (UCLA) to the second network node at Stanford Research Institute (SRI).
<P> Made with Code is an initiative launched by Google on 19 July 2014. Google aimed to empower young women in middle and high schools with computer programming skills. Made with Code was created after Google's own research found out that encouragement and exposure are the critical factors that would influence young females to pursue Computer Science. It was reported that Google is funding $50 million to Made with Code, on top of the initial $40 million invested since 2010 in organizations like Code.org, Black Girls Code, and Girls Who Code. The Made with Code initiative involves both online activities as well as real life events, collaborating with notable firms like Shapeways and App Inventor.
<P> Internetworking started as a way to connect disparate types of networking technology, but it became widespread through the developing need to connect two or more local area networks via some sort of wide area network. The original term for an internetwork was catenet.
<P> The origins of the Internet date back to research commissioned by the federal government of the United States in the 1960s to build robust, fault-tolerant communication with computer networks. The primary precursor network, the ARPANET, initially served as a backbone for interconnection of regional academic and military networks in the 1980s. The funding of the National Science Foundation Network as a new backbone in the 1980s, as well as private funding for other commercial extensions, led to worldwide participation in the development of new networking technologies, and the merger of many networks. The linking of commercial networks and enterprises by the early 1990s marked the beginning of the transition to the modern Internet, and generated a sustained exponential growth as generations of institutional, personal, and mobile computers were connected to the network. Although the Internet was widely used by academia since the 1980s, commercialization incorporated its services and technologies into virtually every aspect of modern life.
<P> In the 1950s and 1960s, with the creation of computers, is where the history of the Internet begins. In 1969 came the invention of Arpanet, the first network to run on packet-switching technology. These were the first hosts on what would one day become the Internet. The concept of email was first created by Ray Tomlinson in 1971, and this innovation was followed by Project Gutenberg and eBooks. Tim Berners-Lee is considered the inventor of the World Wide Web; he implemented the first successful communication between a HyperText Transfer Protocol client and a server.
| answer: Computers predate the internet by several decades, but the origins of the internet can be traced back to a US Military project in the 1960's called Arpanet. They wanted to see if they could get computers to communicate with each other. The first data packet was sent from a computer at UCLA to one at Stanford in 1969. The technology that came out of Arpanet ultimately led to the commercial internet. |
64,562 | a99h6s | Which plane was the first to have radar? | The first country to develop airborne radar technology was Great Britain. It was first fitted to an 'Avro Anson' in 1937, with coverage of approximately 1 mile against airborne targets and 3 miles against ships. Serial production of these systems, later called 'AI Mk. IV', began in 1940; they were mounted on 'Bristol Blenheim' bombers. If you are interested, here is the information about the other major countries of that period: • The USA received its first airborne radar in 1941, intended for 'Douglas P-70' fighters, but actual serial production started in 1942 for 'Northrop P-61 Black Widow' night fighters. • The USSR developed its first prototype in 1941, which was fully developed only in 1942, when it was mounted on 'Pe-2' fighters at the Battle of Stalingrad. • Germany started testing airborne radar systems in 1941, but the fully operational model was deployed only in 1942 on Ju-88 night fighters. • Japan's first airborne radars were introduced in 1942, but their mass production began in 1944; they were first mounted on H8K 'Emily' flying boats. | [
"Very first country to develop the technology of mobile radar systems was the Great Britain. It was established on plane 'Avro Anson' in 1937 with coverage of approximately 1 mile for airborne targets and 3 miles to ships.\nOn the other hand, serial production of these systems later called 'Al Mk. IV' began in 1940... | 1 | [] | 0 | <P> Initially, the radar was designed to detect fighter aircraft at 100 miles and 16,000 feet. The radar used five transmitters that operated at S-band frequencies ranging from 2700 to 3019 MHz. It took twenty-five people to operate the radar.
<P> The Air-Surface Vessel Mark I, using electronics similar to those of the AI sets, was the first aircraft-carried radar to enter service, in early 1940. It was quickly replaced by the improved Mark II, which included side-scanning antennas that allowed the aircraft to sweep twice the area in a single pass. The later ASV Mk. II had the power needed to detect submarines on the surface, eventually making such operations suicidal.
<P> The specially designed and built AN/APS-70 Radar with its massive internal antenna was the best airborne radar system built for detecting other aircraft because its low frequency penetrated weather and showed only the more electronically visible returns. A large radome on top of the envelope held the height-finding radar.
<P> The first version of this radar, Type 79X, was mounted on the RN Signal School's tender, the minesweeper , in October 1936. This equipment used a frequency of 75 MHz and a wavelength of 4 metres and its antennae were strung between the ship's masts. They detected an aircraft at an altitude of and a range of during tests in July 1937.
<P> Primary radar operation is based on the principle of echolocation. Electromagnetic pulses of high power emitted by the radar antenna are converted into a narrow wavefront which propagates at the speed of light (300 000 000 m/s). This is reflected by the aircraft and then picked up again by the rotating antenna on its own axis. A primary radar detects all aircraft without selection, regardless of whether or not they possess a transponder.
<P> Airborne Interception radar, Mark VIII, or AI Mk. VIII for short, was the first operational microwave-frequency air-to-air radar. It was used by Royal Air Force night fighters from late 1941 until the end of World War II. The basic concept, using a moving parabolic antenna to search for targets and track them accurately, remained in use by most airborne radars well into the 1980s.
<P> The experiments with pulsed radar were continued, primarily in improving the receiver for handling the short pulses. In June 1936, the NRL's first prototype radar system, now operating at 28.6 MHz, was demonstrated to government officials, successfully tracking an aircraft at distances up to . Their radar was based on low frequency signals, at least by today's standards, and thus required large antennas, making it impractical for ship or aircraft mounting.
| question: Which plane was the first to have radar? context: <P> Initially, the radar was designed to detect fighter aircraft at 100 miles and 16,000 feet. The radar used five transmitters that operated at S-band frequencies ranging from 2700 to 3019 MHz. It took twenty-five people to operate the radar.
<P> The Air-Surface Vessel Mark I, using electronics similar to those of the AI sets, was the first aircraft-carried radar to enter service, in early 1940. It was quickly replaced by the improved Mark II, which included side-scanning antennas that allowed the aircraft to sweep twice the area in a single pass. The later ASV Mk. II had the power needed to detect submarines on the surface, eventually making such operations suicidal.
<P> The specially designed and built AN/APS-70 Radar with its massive internal antenna was the best airborne radar system built for detecting other aircraft because its low frequency penetrated weather and showed only the more electronically visible returns. A large radome on top of the envelope held the height-finding radar.
<P> The first version of this radar, Type 79X, was mounted on the RN Signal School's tender, the minesweeper , in October 1936. This equipment used a frequency of 75 MHz and a wavelength of 4 metres and its antennae were strung between the ship's masts. They detected an aircraft at an altitude of and a range of during tests in July 1937.
<P> Primary radar operation is based on the principle of echolocation. Electromagnetic pulses of high power emitted by the radar antenna are converted into a narrow wavefront which propagates at the speed of light (300 000 000 m/s). This is reflected by the aircraft and then picked up again by the rotating antenna on its own axis. A primary radar detects all aircraft without selection, regardless of whether or not they possess a transponder.
<P> Airborne Interception radar, Mark VIII, or AI Mk. VIII for short, was the first operational microwave-frequency air-to-air radar. It was used by Royal Air Force night fighters from late 1941 until the end of World War II. The basic concept, using a moving parabolic antenna to search for targets and track them accurately, remained in use by most airborne radars well into the 1980s.
<P> The experiments with pulsed radar were continued, primarily in improving the receiver for handling the short pulses. In June 1936, the NRL's first prototype radar system, now operating at 28.6 MHz, was demonstrated to government officials, successfully tracking an aircraft at distances up to . Their radar was based on low frequency signals, at least by today's standards, and thus required large antennas, making it impractical for ship or aircraft mounting.
| answer: The first country to develop airborne radar technology was Great Britain. It was first fitted to an 'Avro Anson' in 1937, with coverage of approximately 1 mile against airborne targets and 3 miles against ships. Serial production of these systems, later called 'AI Mk. IV', began in 1940; they were mounted on 'Bristol Blenheim' bombers. If you are interested, here is the information about the other major countries of that period: • The USA received its first airborne radar in 1941, intended for 'Douglas P-70' fighters, but actual serial production started in 1942 for 'Northrop P-61 Black Widow' night fighters. • The USSR developed its first prototype in 1941, which was fully developed only in 1942, when it was mounted on 'Pe-2' fighters at the Battle of Stalingrad. • Germany started testing airborne radar systems in 1941, but the fully operational model was deployed only in 1942 on Ju-88 night fighters. • Japan's first airborne radars were introduced in 1942, but their mass production began in 1944; they were first mounted on H8K 'Emily' flying boats. |
225,452 | 5u32gg | what makes gordon ramsay such an incredible chef? wouldn't the skill level of top level culinary artists not vary a lot? | He's an incredible restaurateur, which is a bit different. He understands the entire business. Creating top quality food is not actually super difficult. He doesn't do any wacky trendy stuff; just honest high-quality ingredients, fresh food, and good execution. He's particularly good at running a restaurant business, choosing good staff, and setting standards. | [
"He's an incredible restauranteur, which is a bit different. He understands the entire business.\n\nCreating top quality food is not actually super difficult. He doesn't do any wacky trendy stuff; just honest high-quality ingredients, fresh food, and good execution. He's particularly good are running a restaurant b... | 3 | [
"He's an incredible restauranteur, which is a bit different. He understands the entire business.\n\nCreating top quality food is not actually super difficult. He doesn't do any wacky trendy stuff; just honest high-quality ingredients, fresh food, and good execution. He's particularly good are running a restaurant b... | 1 | <P> Ramsay's reputation is built upon his goal of culinary perfection, which is associated with winning three Michelin stars. His mentor, Marco Pierre White noted that he is highly competitive. Since the airing of "Boiling Point", which followed Ramsay's quest of earning three Michelin stars, the chef has also become infamous for his fiery temper and use of expletives. Ramsay once famously ejected food critic A. A. Gill, whose dining companion was Joan Collins, from his restaurant, leading Gill to state that "Ramsay is a wonderful chef, just a really second-rate human being." Ramsay admitted in his autobiography that he did not mind if Gill insulted his food, but a personal insult he was not going to stand for. Ramsay has also had confrontations with his kitchen staff, including one incident that resulted in the pastry chef calling the police. A 2005 interview reported Ramsay had retained 85% of his staff since 1993. Ramsay attributes his management style to the influence of previous mentors, notably chefs Marco Pierre White and Guy Savoy, father-in-law, Chris Hutcheson, and Jock Wallace, his manager while a footballer at Rangers.
<P> Chef Ramsay is closely followed during eight of the most intense months of his life as he opens his first (and now flagship) restaurant in Royal Hospital Road in Chelsea in September 1998. This establishment would ultimately earn him the highly prestigious (and rare) three Michelin Stars. It also covers his participation in the dinner made at the Palace of Versailles on 11 July 1998 to celebrate the closing of the 1998 World Cup and features young chefs Marcus Wareing and Mark Sargeant at the early stages of their careers, as well as mentor Marco Pierre White.
<P> Gordon James Ramsay (born 8 November 1966) is a British chef, restaurateur, writer, television personality and food critic. Born in Johnstone, Scotland, and raised in Stratford-upon-Avon, England, Ramsay's restaurants have been awarded 16 Michelin stars in total and currently hold a total of seven. His signature restaurant, Restaurant Gordon Ramsay in Chelsea, London, has held three Michelin stars since 2001. Appearing on the British television miniseries "Boiling Point" in 1998, by 2004 Ramsay had become one of the best-known and most influential chefs in the UK.
<P> Gordon Ramsay is a Scottish Chef, restaurateur, writer, television personality and food critic. He has owned and operated a series of restaurants since he first became head chef of Aubergine in 1993. He owned 25% of that restaurant, where he earned his first two Michelin stars. Following the sacking of protege Marcus Wareing from sister restaurant L'Oranger, Ramsay organised a staff walkout from both restaurants and subsequently took them to open up Restaurant Gordon Ramsay, at Royal Hospital Road, London. His self-titled restaurant went on to become his first and only three Michelin star restaurant. Ramsay has become one of the chefs with the most Michelin stars in the world. In 2008, following the awarding of two stars for Gordon Ramsay at The London in New York, he drew with Alain Ducasse as the holder of the most Michelin stars with twelve. However, he has since been overtaken by both Ducasse and Joël Robuchon and currently has eight stars as of the 2014 New York City Michelin Guide.
<P> Ramsay's Best Restaurant is a television programme featuring British celebrity chef Gordon Ramsay broadcast on Channel 4. During the series restaurants from all over Britain competed in order to win the "Ramsay's Best Restaurant" title. The initial 16 restaurants were selected by Ramsay from a pool of some 12,000 entries submitted by Channel 4 viewers.
<P> Ramsay's flagship restaurant, Restaurant Gordon Ramsay, was voted London's top restaurant in "Harden's" for eight years, but in 2008 was placed below Petrus, a restaurant run by former protégé Marcus Wareing. In January 2013, Ramsay was inducted into the Culinary Hall of Fame.
<P> In 1998, Ramsay opened his own restaurant in Chelsea, Restaurant Gordon Ramsay, with the help of his father-in-law, Chris Hutcheson, and his former colleagues at Aubergine. The restaurant gained its third Michelin star in 2001, making Ramsay the first Scot to achieve that feat. In 2011, "The Good Food Guide" listed Restaurant Gordon Ramsay as the second best in the UK, only bettered by The Fat Duck in Bray, Berkshire.
| question: what makes gordon ramsay such an incredible chef? wouldn't the skill level of top level culinary artists not vary a lot? context: <P> Ramsay's reputation is built upon his goal of culinary perfection, which is associated with winning three Michelin stars. His mentor, Marco Pierre White noted that he is highly competitive. Since the airing of "Boiling Point", which followed Ramsay's quest of earning three Michelin stars, the chef has also become infamous for his fiery temper and use of expletives. Ramsay once famously ejected food critic A. A. Gill, whose dining companion was Joan Collins, from his restaurant, leading Gill to state that "Ramsay is a wonderful chef, just a really second-rate human being." Ramsay admitted in his autobiography that he did not mind if Gill insulted his food, but a personal insult he was not going to stand for. Ramsay has also had confrontations with his kitchen staff, including one incident that resulted in the pastry chef calling the police. A 2005 interview reported Ramsay had retained 85% of his staff since 1993. Ramsay attributes his management style to the influence of previous mentors, notably chefs Marco Pierre White and Guy Savoy, father-in-law, Chris Hutcheson, and Jock Wallace, his manager while a footballer at Rangers.
<P> Chef Ramsay is closely followed during eight of the most intense months of his life as he opens his first (and now flagship) restaurant in Royal Hospital Road in Chelsea in September 1998. This establishment would ultimately earn him the highly prestigious (and rare) three Michelin Stars. It also covers his participation in the dinner made at the Palace of Versailles on 11 July 1998 to celebrate the closing of the 1998 World Cup and features young chefs Marcus Wareing and Mark Sargeant at the early stages of their careers, as well as mentor Marco Pierre White.
<P> Gordon James Ramsay (born 8 November 1966) is a British chef, restaurateur, writer, television personality and food critic. Born in Johnstone, Scotland, and raised in Stratford-upon-Avon, England, Ramsay's restaurants have been awarded 16 Michelin stars in total and currently hold a total of seven. His signature restaurant, Restaurant Gordon Ramsay in Chelsea, London, has held three Michelin stars since 2001. Appearing on the British television miniseries "Boiling Point" in 1998, by 2004 Ramsay had become one of the best-known and most influential chefs in the UK.
<P> Gordon Ramsay is a Scottish Chef, restaurateur, writer, television personality and food critic. He has owned and operated a series of restaurants since he first became head chef of Aubergine in 1993. He owned 25% of that restaurant, where he earned his first two Michelin stars. Following the sacking of protege Marcus Wareing from sister restaurant L'Oranger, Ramsay organised a staff walkout from both restaurants and subsequently took them to open up Restaurant Gordon Ramsay, at Royal Hospital Road, London. His self-titled restaurant went on to become his first and only three Michelin star restaurant. Ramsay has become one of the chefs with the most Michelin stars in the world. In 2008, following the awarding of two stars for Gordon Ramsay at The London in New York, he drew with Alain Ducasse as the holder of the most Michelin stars with twelve. However, he has since been overtaken by both Ducasse and Joël Robuchon and currently has eight stars as of the 2014 New York City Michelin Guide.
<P> Ramsay's Best Restaurant is a television programme featuring British celebrity chef Gordon Ramsay broadcast on Channel 4. During the series restaurants from all over Britain competed in order to win the "Ramsay's Best Restaurant" title. The initial 16 restaurants were selected by Ramsay from a pool of some 12,000 entries submitted by Channel 4 viewers.
<P> Ramsay's flagship restaurant, Restaurant Gordon Ramsay, was voted London's top restaurant in "Harden's" for eight years, but in 2008 was placed below Petrus, a restaurant run by former protégé Marcus Wareing. In January 2013, Ramsay was inducted into the Culinary Hall of Fame.
<P> In 1998, Ramsay opened his own restaurant in Chelsea, Restaurant Gordon Ramsay, with the help of his father-in-law, Chris Hutcheson, and his former colleagues at Aubergine. The restaurant gained its third Michelin star in 2001, making Ramsay the first Scot to achieve that feat. In 2011, "The Good Food Guide" listed Restaurant Gordon Ramsay as the second best in the UK, only bettered by The Fat Duck in Bray, Berkshire.
| answer: He's an incredible restaurateur, which is a bit different. He understands the entire business. Creating top quality food is not actually super difficult. He doesn't do any wacky trendy stuff; just honest high-quality ingredients, fresh food, and good execution. He's particularly good at running a restaurant business, choosing good staff, and setting standards. |
120,644 | 10n8gg | Shouldn't there be a theoretical limit to data storage capacity per mass? Do we know what this limit is? | The [Bekenstein bound](_URL_0_) represents the limit on the amount of information which can be contained in a region before it collapses into a black hole. Though I imagine a limit on the amount of data that can be stored and retrieved is much lower. | [
"DNA is the only thing I can think of with the highest data:size ratio, but there may be smaller.",
"The [Bekenstein bound](_URL_0_) represents the limit on the amount of information which can be contained in a region before it collapses into a black hole. Though I imagine a limit on the amount of data that can b... | 3 | [
"The [Bekenstein bound](_URL_0_) represents the limit on the amount of information which can be contained in a region before it collapses into a black hole. Though I imagine a limit on the amount of data that can be stored and retrieved is much lower. "
] | 1 | <P> It is estimated that the total amount of data that is stored on the world's storage devices could be further compressed with existing compression algorithms by a remaining average factor of 4.5:1. It is estimated that the combined technological capacity of the world to store information provides 1,300 exabytes of hardware digits in 2007, but when the corresponding content is optimally compressed, this only represents 295 exabytes of Shannon information.
<P> Storing large volumes of data – When storing XML to either file or database, the volume of data a system produces can often exceed reasonable limits, with a number of detriments: the access times go up as more data is read, CPU load goes up as XML data takes more power to process, and storage costs go up. By storing XML data in Fast Infoset format, data volume may be reduced by as much as 80 percent.
<P> For example, it is estimated that the combined technological capacity of the world to store information provides 1,300 exabytes of hardware digits in 2007. However, when this storage space is filled and the corresponding content is optimally compressed, this only represents 295 exabytes of information. When optimally compressed, the resulting carrying capacity approaches Shannon information or information entropy.
<P> The original data contains a certain amount of information, and there is a lower limit to the size of file that can carry all the information. Basic information theory says that there is an absolute limit in reducing the size of this data. When data is compressed, its entropy increases, and it cannot increase indefinitely. As an intuitive example, most people know that a compressed ZIP file is smaller than the original file, but repeatedly compressing the same file will not reduce the size to nothing. Most compression algorithms can recognize when further compression would be pointless and would in fact increase the size of the data.
<P> The limits of data storage depend on the technology to write and read such data. For example, an 8″ × 10″ (roughly A4 without margins) 300dpi 8-bit greyscale image map contains 7.2 megabytes of data—assuming a scanner can accurately reproduce the printed image to that resolution and color depth, and a program can accurately interpret such an image. A similarly sized image in 2400dpi 24-bit true color theoretically contains 1.38 gigabytes of information.
<P> The most commonly used units of data storage capacity are the bit, the capacity of a system that has only two states, and the byte (or octet), which is equivalent to eight bits. Multiples of these units can be formed from these with the SI prefixes (power-of-ten prefixes) or the newer IEC binary prefixes (power-of-two prefixes).
<P> Assuming your data cannot be compressed, the 8.192 seconds to transmit a 64 kilobyte file over a 64 kilobit/s communications link is a theoretical minimum time which will not be achieved in practice. This is due to the effect of overheads which are used to format the data in an agreed manner so that both ends of a connection have a consistent view of the data.
| question: Shouldn't there be a theoretical limit to data storage capacity per mass? Do we know what this limit is? context: <P> It is estimated that the total amount of data that is stored on the world's storage devices could be further compressed with existing compression algorithms by a remaining average factor of 4.5:1. It is estimated that the combined technological capacity of the world to store information provides 1,300 exabytes of hardware digits in 2007, but when the corresponding content is optimally compressed, this only represents 295 exabytes of Shannon information.
<P> Storing large volumes of data – When storing XML to either file or database, the volume of data a system produces can often exceed reasonable limits, with a number of detriments: the access times go up as more data is read, CPU load goes up as XML data takes more power to process, and storage costs go up. By storing XML data in Fast Infoset format, data volume may be reduced by as much as 80 percent.
<P> For example, it is estimated that the combined technological capacity of the world to store information provides 1,300 exabytes of hardware digits in 2007. However, when this storage space is filled and the corresponding content is optimally compressed, this only represents 295 exabytes of information. When optimally compressed, the resulting carrying capacity approaches Shannon information or information entropy.
<P> The original data contains a certain amount of information, and there is a lower limit to the size of file that can carry all the information. Basic information theory says that there is an absolute limit in reducing the size of this data. When data is compressed, its entropy increases, and it cannot increase indefinitely. As an intuitive example, most people know that a compressed ZIP file is smaller than the original file, but repeatedly compressing the same file will not reduce the size to nothing. Most compression algorithms can recognize when further compression would be pointless and would in fact increase the size of the data.
<P> The limits of data storage depend on the technology to write and read such data. For example, an 8″ × 10″ (roughly A4 without margins) 300dpi 8-bit greyscale image map contains 7.2 megabytes of data—assuming a scanner can accurately reproduce the printed image to that resolution and color depth, and a program can accurately interpret such an image. A similarly sized image in 2400dpi 24-bit true color theoretically contains 1.38 gigabytes of information.
<P> The most commonly used units of data storage capacity are the bit, the capacity of a system that has only two states, and the byte (or octet), which is equivalent to eight bits. Multiples of these units can be formed from these with the SI prefixes (power-of-ten prefixes) or the newer IEC binary prefixes (power-of-two prefixes).
<P> Assuming your data cannot be compressed, the 8.192 seconds to transmit a 64 kilobyte file over a 64 kilobit/s communications link is a theoretical minimum time which will not be achieved in practice. This is due to the effect of overheads which are used to format the data in an agreed manner so that both ends of a connection have a consistent view of the data.
| answer: The [Bekenstein bound](_URL_0_) represents the limit on the amount of information which can be contained in a region before it collapses into a black hole. Though I imagine a limit on the amount of data that can be stored and retrieved is much lower. |
176,246 | 2xl733 | Why was Jazz considered degenerated music? | Much of pre-modern music is written around a set of rules (which ones was probably not agreed on by everyone) which a lot of it stuck to.There were some who broke with them in some ways and it wasn't as strict as I'm making it out to be. But Jazz just sounded radically different from everything known at the time.Chances are this didn't matter either way, Jazz was considered degenerate because its pioneers were black, end of story. | [
"Much of pre-modern music is written around a set of rules (which ones was probably not agreed on by everyone) which a lot of it stuck to.\n\nThere were some who broke with them in some ways and it wasn't as strict as I'm making it out to be. But Jazz just sounded radically different from everything known at the ti... | 1 | [
"Much of pre-modern music is written around a set of rules (which ones was probably not agreed on by everyone) which a lot of it stuck to.\n\nThere were some who broke with them in some ways and it wasn't as strict as I'm making it out to be. But Jazz just sounded radically different from everything known at the ti... | 1 | <P> Jazz music during the first half of the '60s was largely a continuation of '50s styles, retaining its core audience of young, urban, college-educated whites. By 1967, the death of several important jazz figures such as John Coltrane and Nat King Cole precipitated a decline in the genre. The takeover of rock in the late '60s largely spelled the end of jazz as a mainstream form of music, after it had dominated much of the first half of the 20th century.
<P> Jazz culture was transformed, by way of Rhythm and Blues into Rock and Roll culture. There are various suggested candidates for which record might have been the First rock and roll record. At the same time, jazz culture itself continued but changed into a more respected form, no longer necessarily associated with wild behaviour and criminality.
<P> The breakdown of form and rhythmic structure has been seen by some critics to coincide with jazz musicians' exposure to and use of elements from non-Western music, especially African, Arabic, and Indian. The atonality of free jazz is often credited by historians and jazz performers to a return to non-tonal music of the nineteenth century, including field hollers, street cries, and jubilees (part of the "return to the roots" element of free jazz). This suggests that perhaps the movement away from tonality was not a conscious effort to devise a formal atonal system, but rather a reflection of the concepts surrounding free jazz. Jazz became "free" by removing dependence on chord progressions and instead using polytempic and polyrhythmic structures.
<P> Although jazz is considered difficult to define, in part because it contains many subgenres, improvisation is one of its defining elements. The centrality of improvisation is attributed to the influence of earlier forms of music such as blues, a form of folk music which arose in part from the work songs and field hollers of African-American slaves on plantations. These work songs were commonly structured around a repetitive call-and-response pattern, but early blues was also improvisational. Classical music performance is evaluated more by its fidelity to the musical score, with less attention given to interpretation, ornamentation, and accompaniment. The classical performer's goal is to play the composition as it was written. In contrast, jazz is often characterized by the product of interaction and collaboration, placing less value on the contribution of the composer, if there is one, and more on the performer. The jazz performer interprets a tune in individual ways, never playing the same composition twice. Depending on the performer's mood, experience, and interaction with band members or audience members, the performer may change melodies, harmonies, and time signatures.
<P> In the late 1940s, during the "anti-cosmopolitanism" campaigns, jazz music suffered from ideological oppression, as it was labeled "bourgeois" music. Many bands were dissolved, and those that remained avoided being labeled as jazz bands.
<P> Jazz quickly replaced the blues as American popular music, in the form of big band swing, a kind of dance music from the early 1930s. Swing used large ensembles, and was not generally improvised, in contrast with the free-flowing form of other kinds of jazz. With swing spreading across the nation, other genres continued to evolve towards popular traditions. In Louisiana, Cajun and Creole music was adding influences from blues and generating some regional hit records, while Appalachian folk music was spawning jug bands, honky tonk bars and close harmony duets, which were to evolve into the pop-folk of the 1940s, bluegrass and country. American popular music reflects and defines American society.
<P> Since the emergence of bebop, forms of jazz that are commercially oriented or influenced by popular music have been criticized. According to Bruce Johnson, there has always been a "tension between jazz as a commercial music and an art form". Traditional jazz enthusiasts have dismissed bebop, free jazz, and jazz fusion as forms of debasement and betrayal. An alternative view is that jazz can absorb and transform diverse musical styles. By avoiding the creation of norms, jazz allows avant-garde styles to emerge.
| question: Why was Jazz considered degenerated music? context: <P> Jazz music during the first half of the '60s was largely a continuation of '50s styles, retaining its core audience of young, urban, college-educated whites. By 1967, the death of several important jazz figures such as John Coltrane and Nat King Cole precipitated a decline in the genre. The takeover of rock in the late '60s largely spelled the end of jazz as a mainstream form of music, after it had dominated much of the first half of the 20th century.
<P> Jazz culture was transformed, by way of Rhythm and Blues into Rock and Roll culture. There are various suggested candidates for which record might have been the First rock and roll record. At the same time, jazz culture itself continued but changed into a more respected form, no longer necessarily associated with wild behaviour and criminality.
<P> The breakdown of form and rhythmic structure has been seen by some critics to coincide with jazz musicians' exposure to and use of elements from non-Western music, especially African, Arabic, and Indian. The atonality of free jazz is often credited by historians and jazz performers to a return to non-tonal music of the nineteenth century, including field hollers, street cries, and jubilees (part of the "return to the roots" element of free jazz). This suggests that perhaps the movement away from tonality was not a conscious effort to devise a formal atonal system, but rather a reflection of the concepts surrounding free jazz. Jazz became "free" by removing dependence on chord progressions and instead using polytempic and polyrhythmic structures.
<P> Although jazz is considered difficult to define, in part because it contains many subgenres, improvisation is one of its defining elements. The centrality of improvisation is attributed to the influence of earlier forms of music such as blues, a form of folk music which arose in part from the work songs and field hollers of African-American slaves on plantations. These work songs were commonly structured around a repetitive call-and-response pattern, but early blues was also improvisational. Classical music performance is evaluated more by its fidelity to the musical score, with less attention given to interpretation, ornamentation, and accompaniment. The classical performer's goal is to play the composition as it was written. In contrast, jazz is often characterized by the product of interaction and collaboration, placing less value on the contribution of the composer, if there is one, and more on the performer. The jazz performer interprets a tune in individual ways, never playing the same composition twice. Depending on the performer's mood, experience, and interaction with band members or audience members, the performer may change melodies, harmonies, and time signatures.
<P> In the late 1940s, during the "anti-cosmopolitanism" campaigns, jazz music suffered from ideological oppression, as it was labeled "bourgeois" music. Many bands were dissolved, and those that remained avoided being labeled as jazz bands.
<P> Jazz quickly replaced the blues as American popular music, in the form of big band swing, a kind of dance music from the early 1930s. Swing used large ensembles, and was not generally improvised, in contrast with the free-flowing form of other kinds of jazz. With swing spreading across the nation, other genres continued to evolve towards popular traditions. In Louisiana, Cajun and Creole music was adding influences from blues and generating some regional hit records, while Appalachian folk music was spawning jug bands, honky tonk bars and close harmony duets, which were to evolve into the pop-folk of the 1940s, bluegrass and country. American popular music reflects and defines American society.
<P> Since the emergence of bebop, forms of jazz that are commercially oriented or influenced by popular music have been criticized. According to Bruce Johnson, there has always been a "tension between jazz as a commercial music and an art form". Traditional jazz enthusiasts have dismissed bebop, free jazz, and jazz fusion as forms of debasement and betrayal. An alternative view is that jazz can absorb and transform diverse musical styles. By avoiding the creation of norms, jazz allows avant-garde styles to emerge.
| answer: Much of pre-modern music is written around a set of rules (which ones was probably not agreed on by everyone) which a lot of it stuck to.There were some who broke with them in some ways and it wasn't as strict as I'm making it out to be. But Jazz just sounded radically different from everything known at the time.Chances are this didn't matter either way, Jazz was considered degenerate because its pioneers were black, end of story. |
26,796 | 1xc8kd | in prehistoric times, why didn't insects evolve to become much larger? | they did _URL_0_ | [
"they did _URL_0_",
"Insects were actually a lot bigger during the dinosaur era. For instance, there was a Giant Dragonfly that was approximately the size of a large seagull in wingspan, there was a Giant Centipede that was more than 8 feet long and 3 feet wide.\n\nTo answer your question: they did. ",
"It's ... | 4 | [
"they did _URL_0_",
"Insects were actually a lot bigger during the dinosaur era. For instance, there was a Giant Dragonfly that was approximately the size of a large seagull in wingspan, there was a Giant Centipede that was more than 8 feet long and 3 feet wide.\n\nTo answer your question: they did. "
] | 2 | <P> The differences between modern and prehistoric varieties can be essential, and, like many other creatures of prehistory, the latter tended to be much larger than their contemporary equivalents. This size difference is thought to be due to higher atmospheric oxygen levels (allowing diffusion through spiracles over greater distances), higher temperatures (enhancing metabolism), and the absence of birds as key predators of insect life.
<P> BULLET::::- Lack of predators. Other explanations for the large size of Meganeurids compared to living relatives are warranted. It has been suggested that the lack of aerial vertebrate predators allowed pterygote insects to evolve to maximum sizes during the Carboniferous and Permian periods, perhaps accelerated by an evolutionary "arms race" for increases in body size between plant-feeding Palaeodictyoptera and their Meganisoptera predators.
<P> Controversy has prevailed as to how insects of the Carboniferous period were able to grow so large. The way oxygen is diffused through the insect's body via its tracheal breathing system puts an upper limit on body size, which prehistoric insects seem to have well exceeded. It was originally proposed in that "Meganeura" was only able to fly because the atmosphere at that time contained more oxygen than the present 20%. This theory was dismissed by fellow scientists, but has found approval more recently through further study into the relationship between gigantism and oxygen availability. If this theory is correct, these insects would have been susceptible to falling oxygen levels and certainly could not survive in our modern atmosphere. Other research indicates that insects really do breathe, with "rapid cycles of tracheal compression and expansion". Recent analysis of the flight energetics of modern insects and birds suggests that both the oxygen levels and air density provide a bound on size.
<P> The small size has forced many species to sacrifice some of their anatomy, like the heart, crop and gizzard. While the exoskeleton and respiration system of the insects seems to be the major limiting factors regarding how large they can get, the limit for how small they can become appears to be related to the space required for their nervous and reproductive systems.
<P> In his 2006 re-evaluation, Carpenter examined the paleobiology of giant sauropods, including "Maraapunisaurus", and addressed the question of why this group attained such a huge size. He pointed out that gigantic sizes were reached early in sauropod evolution, with very large sized species present as early as the late Triassic Period, and concluded that whatever evolutionary pressure caused large size was present from the early origins of the group. Carpenter cited several studies of giant mammalian herbivores, such as elephants and rhinoceros, which showed that larger size in plant-eating animals leads to greater efficiency in digesting food. Since larger animals have longer digestive systems, food is kept in digestion for significantly longer periods of time, allowing large animals to survive on lower-quality food sources. This is especially true of animals with a large number of 'fermentation chambers' along the intestine, which allow microbes to accumulate and ferment plant material, aiding digestion.
<P> In his 2006 re-evaluation, Carpenter examined the paleobiology of giant sauropods, including "Amphicoelias", and addressed the question of why this group attained such a huge size. He pointed out that gigantic sizes were reached early in sauropod evolution, with very large sized species present as early as the late Triassic Period, and concluded that whatever evolutionary pressure caused large size was present from the early origins of the group. Carpenter cited several studies of giant mammalian herbivores, such as elephants and rhinoceros, which showed that larger size in plant-eating animals leads to greater efficiency in digesting food. Since larger animals have longer digestive systems, food is kept in digestion for significantly longer periods of time, allowing large animals to survive on lower-quality food sources. This is especially true of animals with a large number of 'fermentation chambers' along the intestine, which allow microbes to accumulate and ferment plant material, aiding digestion. Throughout their evolutionary history, sauropod dinosaurs were found primarily in semi-arid, seasonally dry environments, with a corresponding seasonal drop in the quality of food during the dry season. The environment of "Amphicoelias" was essentially a savanna, similar to the arid environments in which modern giant herbivores are found, supporting the idea that poor-quality food in an arid environment promotes the evolution of giant herbivores. Carpenter argued that other benefits of large size, such as relative immunity from predators, lower energy expenditure, and longer life span, are probably secondary advantages.
<P> Recent theories propose that theropod body size shrank continuously over a period of 50 million years, from an average of down to , eventually evolving into modern birds. This was based on evidence that theropods were the only dinosaurs to get continuously smaller, and that their skeletons changed four times as fast as those of other dinosaur species.
| question: in prehistoric times, why didn't insects evolve to become much larger? context: <P> The differences between modern and prehistoric varieties can be essential, and, like many other creatures of prehistory, the latter tended to be much larger than their contemporary equivalents. This size difference is thought to be due to higher atmospheric oxygen levels (allowing diffusion through spiracles over greater distances), higher temperatures (enhancing metabolism), and the absence of birds as key predators of insect life.
<P> BULLET::::- Lack of predators. Other explanations for the large size of Meganeurids compared to living relatives are warranted. It has been suggested that the lack of aerial vertebrate predators allowed pterygote insects to evolve to maximum sizes during the Carboniferous and Permian periods, perhaps accelerated by an evolutionary "arms race" for increases in body size between plant-feeding Palaeodictyoptera and their Meganisoptera predators.
<P> Controversy has prevailed as to how insects of the Carboniferous period were able to grow so large. The way oxygen is diffused through the insect's body via its tracheal breathing system puts an upper limit on body size, which prehistoric insects seem to have well exceeded. It was originally proposed in that "Meganeura" was only able to fly because the atmosphere at that time contained more oxygen than the present 20%. This theory was dismissed by fellow scientists, but has found approval more recently through further study into the relationship between gigantism and oxygen availability. If this theory is correct, these insects would have been susceptible to falling oxygen levels and certainly could not survive in our modern atmosphere. Other research indicates that insects really do breathe, with "rapid cycles of tracheal compression and expansion". Recent analysis of the flight energetics of modern insects and birds suggests that both the oxygen levels and air density provide a bound on size.
<P> The small size has forced many species to sacrifice some of their anatomy, like the heart, crop and gizzard. While the exoskeleton and respiration system of the insects seems to be the major limiting factors regarding how large they can get, the limit for how small they can become appears to be related to the space required for their nervous and reproductive systems.
<P> In his 2006 re-evaluation, Carpenter examined the paleobiology of giant sauropods, including "Maraapunisaurus", and addressed the question of why this group attained such a huge size. He pointed out that gigantic sizes were reached early in sauropod evolution, with very large sized species present as early as the late Triassic Period, and concluded that whatever evolutionary pressure caused large size was present from the early origins of the group. Carpenter cited several studies of giant mammalian herbivores, such as elephants and rhinoceros, which showed that larger size in plant-eating animals leads to greater efficiency in digesting food. Since larger animals have longer digestive systems, food is kept in digestion for significantly longer periods of time, allowing large animals to survive on lower-quality food sources. This is especially true of animals with a large number of 'fermentation chambers' along the intestine, which allow microbes to accumulate and ferment plant material, aiding digestion.
<P> In his 2006 re-evaluation, Carpenter examined the paleobiology of giant sauropods, including "Amphicoelias", and addressed the question of why this group attained such a huge size. He pointed out that gigantic sizes were reached early in sauropod evolution, with very large sized species present as early as the late Triassic Period, and concluded that whatever evolutionary pressure caused large size was present from the early origins of the group. Carpenter cited several studies of giant mammalian herbivores, such as elephants and rhinoceros, which showed that larger size in plant-eating animals leads to greater efficiency in digesting food. Since larger animals have longer digestive systems, food is kept in digestion for significantly longer periods of time, allowing large animals to survive on lower-quality food sources. This is especially true of animals with a large number of 'fermentation chambers' along the intestine, which allow microbes to accumulate and ferment plant material, aiding digestion. Throughout their evolutionary history, sauropod dinosaurs were found primarily in semi-arid, seasonally dry environments, with a corresponding seasonal drop in the quality of food during the dry season. The environment of "Amphicoelias" was essentially a savanna, similar to the arid environments in which modern giant herbivores are found, supporting the idea that poor-quality food in an arid environment promotes the evolution of giant herbivores. Carpenter argued that other benefits of large size, such as relative immunity from predators, lower energy expenditure, and longer life span, are probably secondary advantages.
<P> Recent theories propose that theropod body size shrank continuously over a period of 50 million years, from an average of down to , eventually evolving into modern birds. This was based on evidence that theropods were the only dinosaurs to get continuously smaller, and that their skeletons changed four times as fast as those of other dinosaur species.
| answer: they did _URL_0_ |
193,841 | ea51vt | Why does the French Foreign Legion have such a romantic reputation while other foreign formations have been forgotten? | Hi there!First off, I would like to apologise, I am typing this up on a viciously bumpy train journey, and I don't have access to a lot of sources I'd like to use. Secondly, great question. The concept of foreign soldiers in the service of another nation is fascinating, and served the basis of my MA. In order to answer it, I would argue that the core reason for their romance is their longevity, continued existence, and achieving their zenith in comparatively modern conflicts. I would also argue they are not the first to enjoy such a reputation, and that the Irish Brigade in French service plotted a similar trajectory a century prior, and this answer should help contextualise the romantic image surrounding the legion. Before tackling this, it may be wise to evaluate exactly what you mean by "romantic". Obviously the Foreign Legion has had an illustrious service in the late 19th and twentieth centuries in particular, and if we choose to equate a celebrated combat record to romance then yes, absolutely they are romantic. The idea of a band of vagabonds given one more chance, a clean slate, and the toughest missions the nation requires of them is the underdog story at its most primal. One only has to look at cinema to see countless examples of this kind of story; The Dirty Dozen, the Magnificent Seven, even Rogue One (if the mods pardon some sci-fi!) to name a few off the top of my head. A distinction therefore needs to be drawn between the foreign nature of their soldiers, and the incredible challenges they overcame. Are they romantic due to their multicultural composition? Or rather are they celebrated due to their military record? Let us consider the military angle first of all, and the origin of the Legion's celebrated status. The Legion's first 'famous' battle was the skirmish at Camerone in 1863. Here, a small detachment of legionaries held off a numerically superior force, and were allowed to walk away. Famously their commanding officer Captain Danjou was killed during the battle, and his wooden hand became a touchstone and relic of the legion thereafter. Every year the Legion celebrate Camerone, and with such a celebration one can see the origin of a staunch regimental tradition. This is a crucial factor when considering longevity. Old regiments with traditions associated with them are likely to catch the public's eye. They mark themselves out from the generic rank and file, and when this is actively encouraged both through their foreign composition and deployment, we can start to see how the Legion built its mythical status from the inside out.How has this myth endured? Through selective memory of certain engagements. If one asks about French Foreign Legion service during the early 20th century, you'll probably have Bir Hakiem thrown in your face. Again, this battle in Libya saw the Legion mount a staunch defence against overwhelming odds. What about the latter portion of the 20th century? Well despite being a catastrophic defeat, Dien Bien Phu tells a similar story. The Legion, stranded in hostile conditions, fighting against the odds. You may have started to notice a pattern! (It is worth noting that the Legion's actions in Indo-China were largely despicable, and are a far cry from any sort of romance whatsoever. 
For a basic primer on this, try Max Hasting's *Vietnam.*)So what we have is a group of 'underdog' soldiers, being put in situations which are almost impossible to overcome, and succeeding in doing so, all within recent memory. In addition, they are still in active service. This presence in the modern military has allowed their mythos to survive compared to other, arguably more romantic groups. For the sake of comparison, I would like to discuss the Irish Brigade of France in the 18th century, and hopefully explain how an equally celebrated foreign contingent both came to praise, and eventually faded away.First of all, let us consider the soldiers themselves. Like the Legion, the odds were very much stacked against the men of the Irish Brigade. They were exiles, sent away from their homeland for supporting Catholic James II in a war against the new English regime of William of Orange in 1689-91. Their French hosts hardly viewed them with much enthusiasm upon their arrival. "*Il est fort foible!*" declared the Comte d'Avaux upon seeing them disembark. Overall, their inauspicious beginning mirrors the origin of the Legion in at least some aspects. Whilst they were more homogeneous, they weren't seen as pleasant company and there are many stories of Irish soldier turning to highway robbery on the roads around the exiled court of James II.Like the Legion, this changed following several battles in which the Irish contribution became famous. Most of these early engagements took place during the War of the Spanish Succession. At the siege of Cremona (1702) Irish soldiers, roused from their beds and without uniform, held off a Holy Roman attack in brutal streetfighting. At the defeat at Blenheim (1704), the Irish again excelled, to the point that the Allied commander Colonel Goore (whose command had been devastated by the Irish attack) had nothing but praise to offer them.This romantic status was triumphed in both Ireland and in France, and the musical tradition surrounding the brigade combined with their integration with French society both show how popular they were at the time. However, they would in due course slip into obscurity. As the century progressed, more and more French (or other nationalities) joined the ranks, and the Irish Brigades began to be less and less Irish. This loss of identity and regimental tradition which is so critical to the continuation of the Foreign Legion's mythos eventually saw the Irish Brigade disband. It was partially incorporated into other Regiments, and some went to serve the British crown instead. What we can see here is a pattern of a disparate band, whose unique foreign-ness lends a certain quality worthy of extolling, and whose combat performance only enhances that uniqueness. I could offer other examples too; the Irish Brigade in the American Civil War, the Swiss Guard of France, the 442nd Combat Group in WW2. However what sets the Legion apart is its continued endurance and relevance in the modern day. The fact that it has continued to serve allows their mythos to continue as well. I hope this has answered your question, and I am an open door when it comes to follow-up questions as well. Sources and further reading.Hastings, Max, *Vietnam* Hogan, James, (ed) *Negociations de M. 
le Comte D'Avaux en Irlande*McGarry, Stephen *Irish Brigades Abroad; From the Wild Geese to Napoleon*Murtagh, Harman., 'Irish soldiers abroad 1600-1800' in Bartlett & Jeffery (eds) *A Military History of Ireland* Reynolds, Robert Grey, *The Battle of Bir Hakeim: June 1942 Triumph of the Free French* | [
"Hi there!\n\nFirst off, I would like to apologise, I am typing this up on a viciously bumpy train journey, and I don't have access to a lot of sources I'd like to use. Secondly, great question. The concept of foreign soldiers in the service of another nation is fascinating, and served the basis of my MA. In order ... | 1 | [
"Hi there!\n\nFirst off, I would like to apologise, I am typing this up on a viciously bumpy train journey, and I don't have access to a lot of sources I'd like to use. Secondly, great question. The concept of foreign soldiers in the service of another nation is fascinating, and served the basis of my MA. In order ... | 1 | <P> Beyond its reputation as an elite unit often engaged in serious fighting, the recruitment practices of the French Foreign Legion have also led to a somewhat romanticised view of it being a place for disgraced or "wronged" men looking to leave behind their old lives and start new ones. This view of the legion is common in literature, and has been used for dramatic effect in many films, not the least of which are the several versions of "Beau Geste".
<P> The French Foreign Legion is an elite force composed of soldiers of different race, trade, religion, and sentiments, which began as part of the French Army. Through the years, it has earned a quasi-legendary reputation due to its victories and also its gallant defeats. It was founded in 1831 and was given the right to hire foreign recruits. The Foreign Legion was deeply rooted in the French conquest of Algeria. Since its inception, the Legion played an important role in advancing France's colonial expansion.
<P> The principal distinguishing characteristic of the French Foreign Legion is that it is constituted of foreigners. Well before it created a specific military unit, France recruited foreigners for its military. The French Foreign Legion is also distinctive in that all recruits volunteer; other countries' foreign regiments were constituted of conscripts or prisoners of war (not the case of the 1831 Legion).
<P> The French Foreign Legion has had a long and unique history amongst the units of the French Army. The French Foreign Legion was historically formed of expatriate enlisted personnel led by French officers. It was founded by a royal ordinance issued by King Louis Philippe of France on March 9, 1831, with the aim of bolstering the strength of the French Army while also finding a use for the influx of refugees inundating France at the time. The Foreign Legion subsequently found a permanent home in the ranks of the French military. The Foreign Legion's history spans the conquest of Algeria, the Franco-Prussian War, numerous colonial exploits, both World Wars, the First Indochina War, and the Algerian War.
<P> The French Foreign Legion is a military arm of the French army, established in 1831, and it has seen action throughout the world, recently in Africa and the Middle East. It has been featured in a large number of films, including a number about the legion itself, such as 1949's "Outpost in Morocco".
<P> The creation of the Foreign Legion was in large part due to the Three Glorious Days and their European consequences. Even before the creation of this version of the Legion, the enlistment of foreigners had always taken place. In theory, the Legion was not to engage in combat in France under any circumstances.
<P> The French Foreign Legion is part of the history of France. The Legion was created by a King, fought at Camarón under an Emperor, and endured its heaviest losses under the Republic.
| question: Why does the French Foreign Legion have such a romantic reputation while other foreign formations have been forgotten? context: <P> Beyond its reputation as an elite unit often engaged in serious fighting, the recruitment practices of the French Foreign Legion have also led to a somewhat romanticised view of it being a place for disgraced or "wronged" men looking to leave behind their old lives and start new ones. This view of the legion is common in literature, and has been used for dramatic effect in many films, not the least of which are the several versions of "Beau Geste".
<P> The French Foreign Legion is an elite force composed of soldiers of different race, trade, religion, and sentiments, which began as part of the French Army. Through the years, it has earned a quasi-legendary reputation due to its victories and also its gallant defeats. It was founded in 1831 and was given the right to hire foreign recruits. The Foreign Legion was deeply rooted in the French conquest of Algeria. Since its inception, the Legion played an important role in advancing France's colonial expansion.
<P> The principal distinguishing characteristic of the French Foreign Legion is that it is constituted of foreigners. Well before it created a specific military unit, France recruited foreigners for its military. The French Foreign Legion is also distinctive in that all recruits volunteer; other countries' foreign regiments were constituted of conscripts or prisoners of war (not the case of the 1831 Legion).
<P> The French Foreign Legion has had a long and unique history amongst the units of the French Army. The French Foreign Legion was historically formed of expatriate enlisted personnel led by French officers. It was founded by a royal ordinance issued by King Louis Philippe of France on March 9, 1831, with the aim of bolstering the strength of the French Army while also finding a use for the influx of refugees inundating France at the time. The Foreign Legion subsequently found a permanent home in the ranks of the French military. The Foreign Legion's history spans the conquest of Algeria, the Franco-Prussian War, numerous colonial exploits, both World Wars, the First Indochina War, and the Algerian War.
<P> The French Foreign Legion is a military arm of the French army, established in 1831, and it has seen action throughout the world, recently in Africa and the Middle East. It has been featured in a large number of films, including a number about the legion itself, such as 1949's "Outpost in Morocco".
<P> The creation of the Foreign Legion was in large part due to the Three Glorious Days and their European consequences. Even before the creation of this version of the Legion, the enlistment of foreigners had always taken place. In theory, the Legion was not to engage in combat in France under any circumstances.
<P> The French Foreign Legion is part of the history of France. The Legion was created by a King, fought at Camarón under an Emperor, and endured its heaviest losses under the Republic.
| answer: Hi there!First off, I would like to apologise, I am typing this up on a viciously bumpy train journey, and I don't have access to a lot of sources I'd like to use. Secondly, great question. The concept of foreign soldiers in the service of another nation is fascinating, and served the basis of my MA. In order to answer it, I would argue that the core reason for their romance is their longevity, continued existence, and achieving their zenith in comparatively modern conflicts. I would also argue they are not the first to enjoy such a reputation, and that the Irish Brigade in French service plotted a similar trajectory a century prior, and this answer should help contextualise the romantic image surrounding the legion. Before tackling this, it may be wise to evaluate exactly what you mean by "romantic". Obviously the Foreign Legion has had an illustrious service in the late 19th and twentieth centuries in particular, and if we choose to equate a celebrated combat record to romance then yes, absolutely they are romantic. The idea of a band of vagabonds given one more chance, a clean slate, and the toughest missions the nation requires of them is the underdog story at its most primal. One only has to look at cinema to see countless examples of this kind of story; The Dirty Dozen, the Magnificent Seven, even Rogue One (if the mods pardon some sci-fi!) to name a few off the top of my head. A distinction therefore needs to be drawn between the foreign nature of their soldiers, and the incredible challenges they overcame. Are they romantic due to their multicultural composition? Or rather are they celebrated due to their military record? Let us consider the military angle first of all, and the origin of the Legion's celebrated status. The Legion's first 'famous' battle was the skirmish at Camerone in 1863. Here, a small detachment of legionaries held off a numerically superior force, and were allowed to walk away. Famously their commanding officer Captain Danjou was killed during the battle, and his wooden hand became a touchstone and relic of the legion thereafter. Every year the Legion celebrate Camerone, and with such a celebration one can see the origin of a staunch regimental tradition. This is a crucial factor when considering longevity. Old regiments with traditions associated with them are likely to catch the public's eye. They mark themselves out from the generic rank and file, and when this is actively encouraged both through their foreign composition and deployment, we can start to see how the Legion built its mythical status from the inside out.How has this myth endured? Through selective memory of certain engagements. If one asks about French Foreign Legion service during the early 20th century, you'll probably have Bir Hakiem thrown in your face. Again, this battle in Libya saw the Legion mount a staunch defence against overwhelming odds. What about the latter portion of the 20th century? Well despite being a catastrophic defeat, Dien Bien Phu tells a similar story. The Legion, stranded in hostile conditions, fighting against the odds. You may have started to notice a pattern! (It is worth noting that the Legion's actions in Indo-China were largely despicable, and are a far cry from any sort of romance whatsoever. For a basic primer on this, try Max Hasting's *Vietnam.*)So what we have is a group of 'underdog' soldiers, being put in situations which are almost impossible to overcome, and succeeding in doing so, all within recent memory. 
In addition, they are still in active service. This presence in the modern military has allowed their mythos to survive compared to other, arguably more romantic groups. For the sake of comparison, I would like to discuss the Irish Brigade of France in the 18th century, and hopefully explain how an equally celebrated foreign contingent both came to praise, and eventually faded away.First of all, let us consider the soldiers themselves. Like the Legion, the odds were very much stacked against the men of the Irish Brigade. They were exiles, sent away from their homeland for supporting Catholic James II in a war against the new English regime of William of Orange in 1689-91. Their French hosts hardly viewed them with much enthusiasm upon their arrival. "*Il est fort foible!*" declared the Comte d'Avaux upon seeing them disembark. Overall, their inauspicious beginning mirrors the origin of the Legion in at least some aspects. Whilst they were more homogeneous, they weren't seen as pleasant company and there are many stories of Irish soldier turning to highway robbery on the roads around the exiled court of James II.Like the Legion, this changed following several battles in which the Irish contribution became famous. Most of these early engagements took place during the War of the Spanish Succession. At the siege of Cremona (1702) Irish soldiers, roused from their beds and without uniform, held off a Holy Roman attack in brutal streetfighting. At the defeat at Blenheim (1704), the Irish again excelled, to the point that the Allied commander Colonel Goore (whose command had been devastated by the Irish attack) had nothing but praise to offer them.This romantic status was triumphed in both Ireland and in France, and the musical tradition surrounding the brigade combined with their integration with French society both show how popular they were at the time. However, they would in due course slip into obscurity. As the century progressed, more and more French (or other nationalities) joined the ranks, and the Irish Brigades began to be less and less Irish. This loss of identity and regimental tradition which is so critical to the continuation of the Foreign Legion's mythos eventually saw the Irish Brigade disband. It was partially incorporated into other Regiments, and some went to serve the British crown instead. What we can see here is a pattern of a disparate band, whose unique foreign-ness lends a certain quality worthy of extolling, and whose combat performance only enhances that uniqueness. I could offer other examples too; the Irish Brigade in the American Civil War, the Swiss Guard of France, the 442nd Combat Group in WW2. However what sets the Legion apart is its continued endurance and relevance in the modern day. The fact that it has continued to serve allows their mythos to continue as well. I hope this has answered your question, and I am an open door when it comes to follow-up questions as well. Sources and further reading.Hastings, Max, *Vietnam* Hogan, James, (ed) *Negociations de M. le Comte D'Avaux en Irlande*McGarry, Stephen *Irish Brigades Abroad; From the Wild Geese to Napoleon*Murtagh, Harman., 'Irish soldiers abroad 1600-1800' in Bartlett & Jeffery (eds) *A Military History of Ireland* Reynolds, Robert Grey, *The Battle of Bir Hakeim: June 1942 Triumph of the Free French* |
206,786 | beii9g | how does the youtube 'recommend' work? | Exactly how it works is kept secret by Google to make it harder to manipulate. Part of how it works depends on correlations between you and other people. If you and another person have both watched the same videos, and perhaps given the same ones a thumbs up, it is a fair guess that your interests are similar. So when those other people give a new video on the same subject a thumbs up, or simply watch all of it, that is a signal that you are likely to be interested in it too. An old video can pop up again because people have started watching it for some reason, perhaps because someone linked to it in their own video or it was used in an external article that drove a lot of traffic to it. Google also uses automated systems to analyse the videos themselves (the same technology that generates automatic subtitles), trying to determine what the content is and automatically match it to other videos. The algorithm is kept secret and changed all the time, partly so that it is harder to game the system and inject auto-generated videos that exist just to collect money from the initial advertisement but that people do not actually like. | [
"Exactly how it work is kept secret by google to make it harder to manipulate.\n\nA part of how it work depend in correlation between you and other people. If you and another person both have watched the same videos and the perhaps given the same a thumbs up you can guess that the inrest is similar. So when the oth... | 1 | [] | 0 | <P> When a user types in the title of a film or TV show, the site's algorithm provides a list of similar content. It provides recommendations for TV shows to watch based on films liked by the user, and vice versa. It also provides recommendations for music, video games, and books, and includes film and TV trailers and music videos. An account is free and is not required to receive recommendations, but recommendations are more accurate for those with an account. The more a user explores the site, the more the site learns about the user's preferences and the better the results become. The site also has a social media aspect where one can see activity and gain recommendations from other users, how many others in the community like or dislike any recommendation, and how popular their tastes are within the TasteDive community.
<P> The YouTube Data API (v3) lets you incorporate YouTube functionality into your own application. You can use the API to fetch search results and to retrieve, insert, update, and delete resources like videos or playlists.
<P> The Players and Player APIs section identifies ways you can let your users watch YouTube videos in your application and control the playback experience. With an embedded YouTube player, you can integrate the YouTube video playback experience directly in your web page or application. You can use player parameters to customize the player's appearance, and you can also use Player APIs to control the player directly from your web page or app.
<P> In conjunction with the YouTube Player APIs and the YouTube Analytics API, the API lets your application provide a full-fledged YouTube experience that includes search and discovery, content creation, video playback, account management, and viewer statistics.
<P> In terms of distributed participatory design, YouTube and their content creators, or YouTubers, incorporate many of these elements into their website designs and planning. Video pages contain a 'share' function that allows for individuals to circulate a link to a video through various social media sites to increase exposure and possibly redirect people to other sites content creators use for circulating media and for receiving reactions. Additionally, feedback can appear in the form of comments and ratings. Each video has separate comment sections for users to leave input and ideas in. YouTube also uses a rating system of thumbs up/thumbs down to provide the content creators with a statistic on how well a video was received. Many popular YouTubers use social media networks such as Facebook, Twitter, Instagram, and Google+ to announce video updates and any information on external projects. Through managing social networks, website, and YouTube channel, the content creators can manage the distributed participation effectively and maintain their fanbases as well as update them on any changes in the design or content creation process.
<P> Outside of YouTube, Gunadie has teamed up with fellow YouTube personality Andrew Bravener to produce and host Like/Comment/Subscribe; a live, interactive show that features screenings and performances. According to Gunadie, the goal of the show "wasn’t to make the big videos bigger. We went into the community to find those hidden gems: rants, confessions, re-cuts, mashups, lip-syncs & even that ‘weird’ part of YouTube."
<P> These are distinct from the YouTube Awards, which were intended to recognize the best quality videos. YouTube Creator Rewards are based on a channel's subscriber count, but are awarded at the sole discretion of YouTube. Each channel is reviewed before an award is issued, to ensure that the channel follows the YouTube community guidelines. YouTube reserves the right to refuse to hand out a Creator Reward, which it did not previously award to select channels with horror or political content, as well as various critics.
| question: how does the youtube 'recommend' work? context: <P> When a user types in the title of a film or TV show, the site's algorithm provides a list of similar content. It provides recommendations for TV shows to watch based on films liked by the user, and vice versa. It also provides recommendations for music, video games, and books, and includes film and TV trailers and music videos. An account is free and is not required to receive recommendations, but recommendations are more accurate for those with an account. The more a user explores the site, the more the site learns about the user's preferences and the better the results become. The site also has a social media aspect where one can see activity and gain recommendations from other users, how many others in the community like or dislike any recommendation, and how popular their tastes are within the TasteDive community.
<P> The YouTube Data API (v3) lets you incorporate YouTube functionality into your own application. You can use the API to fetch search results and to retrieve, insert, update, and delete resources like videos or playlists.
<P> The Players and Player APIs section identifies ways you can let your users watch YouTube videos in your application and control the playback experience. With an embedded YouTube player, you can integrate the YouTube video playback experience directly in your web page or application. You can use player parameters to customize the player's appearance, and you can also use Player APIs to control the player directly from your web page or app.
<P> In conjunction with the YouTube Player APIs and the YouTube Analytics API, the API lets your application provide a full-fledged YouTube experience that includes search and discovery, content creation, video playback, account management, and viewer statistics.
<P> In terms of distributed participatory design, YouTube and their content creators, or YouTubers, incorporate many of these elements into their website designs and planning. Video pages contain a 'share' function that allows for individuals to circulate a link to a video through various social media sites to increase exposure and possibly redirect people to other sites content creators use for circulating media and for receiving reactions. Additionally, feedback can appear in the form of comments and ratings. Each video has separate comment sections for users to leave input and ideas in. YouTube also uses a rating system of thumbs up/thumbs down to provide the content creators with a statistic on how well a video was received. Many popular YouTubers use social media networks such as Facebook, Twitter, Instagram, and Google+ to announce video updates and any information on external projects. Through managing social networks, website, and YouTube channel, the content creators can manage the distributed participation effectively and maintain their fanbases as well as update them on any changes in the design or content creation process.
<P> Outside of YouTube, Gunadie has teamed up with fellow YouTube personality Andrew Bravener to produce and host Like/Comment/Subscribe; a live, interactive show that features screenings and performances. According to Gunadie, the goal of the show "wasn’t to make the big videos bigger. We went into the community to find those hidden gems: rants, confessions, re-cuts, mashups, lip-syncs & even that ‘weird’ part of YouTube."
<P> These are distinct from the YouTube Awards, which were intended to recognize the best quality videos. YouTube Creator Rewards are based on a channel's subscriber count, but are awarded at the sole discretion of YouTube. Each channel is reviewed before an award is issued, to ensure that the channel follows the YouTube community guidelines. YouTube reserves the right to refuse to hand out a Creator Reward, which it did not previously award to select channels with horror or political content, as well as various critics.
| answer: Exactly how it works is kept secret by Google to make it harder to manipulate. Part of how it works depends on correlations between you and other people. If you and another person have both watched the same videos, and perhaps given the same ones a thumbs up, it is a fair guess that your interests are similar. So when those other people give a new video on the same subject a thumbs up, or simply watch all of it, that is a signal that you are likely to be interested in it too. An old video can pop up again because people have started watching it for some reason, perhaps because someone linked to it in their own video or it was used in an external article that drove a lot of traffic to it. Google also uses automated systems to analyse the videos themselves (the same technology that generates automatic subtitles), trying to determine what the content is and automatically match it to other videos. The algorithm is kept secret and changed all the time, partly so that it is harder to game the system and inject auto-generated videos that exist just to collect money from the initial advertisement but that people do not actually like. |
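The answer above describes, in rough terms, a collaborative-filtering signal: people who watched and liked the same videos as you are a guide to what you might watch next. The sketch below is not YouTube's algorithm (which, as the answer notes, is proprietary and far more elaborate); it is only a minimal, hypothetical illustration of that one idea, with made-up user names, video ids, and a simple Jaccard-overlap heuristic.

```python
# Toy illustration of the "people who watched what you watched" idea from the
# answer above. This is NOT YouTube's recommender; everything here is invented
# purely for illustration.

def jaccard(a, b):
    """Overlap between two sets of watched videos: 0 = nothing shared, 1 = identical."""
    return len(a & b) / len(a | b) if (a or b) else 0.0

# Hypothetical watch histories.
history = {
    "you":   {"v1", "v2", "v3"},
    "alice": {"v1", "v2", "v3", "v4"},  # very similar taste to "you"
    "bob":   {"v7", "v8"},              # nothing in common with "you"
}

def recommend(target, history, top_n=3):
    """Score unseen videos by how similar their watchers are to the target user."""
    seen = history[target]
    scores = {}
    for user, videos in history.items():
        if user == target:
            continue
        sim = jaccard(seen, videos)
        for video in videos - seen:  # only videos the target has not watched yet
            scores[video] = scores.get(video, 0.0) + sim
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

print(recommend("you", history))  # ['v4', ...] -- alice's extra video ranks first
```

A real system would fold in many more signals (watch time, freshness, and the automated content analysis the answer mentions) and would be learned from data rather than hand-written like this.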
65,832 | 18xily | How did the myth of the "Mound Builders" as non-Native Americans persist for so long? | Aside from racism, the main issue is that native societies in the Ohio Valley at the time of the European contact didn't claim to have built the mounds. The Adena and Hopewell rose and fell long before Europeans showed up in the New World. The societies contemporary with European colonization of the New World, like the Fort Ancients, seem to have been scattered by waves of Old World epidemics, agricultural strain from the Little Ice Age, and warfare with the Iroquois. By the time things settled down in the Ohio Valley again (~1700), the native population had migrated considerably and didn't have recent ancestral connections to the people who built the mounds they happened to live near.The lack of a direct connection between the mounds and the native societies that the Europeans encountered when moving into the Ohio Valley likely prompted the Moundbuilder myth. Racism and Manifest Destiny perpetuated it, long after the idea became untenable. It still has a few adherents, but those seem to be religiously motivated Mormons from what I've seen. | [
"I wasn't even aware that anyone thought the mounds *weren't* made by natives.",
"Aside from racism, the main issue is that native societies in the Ohio Valley at the time of the European contact didn't claim to have built the mounds. The Adena and Hopewell rose and fell long before Europeans showed up in the New... | 3 | [
"Aside from racism, the main issue is that native societies in the Ohio Valley at the time of the European contact didn't claim to have built the mounds. The Adena and Hopewell rose and fell long before Europeans showed up in the New World. The societies contemporary with European colonization of the New World, lik... | 2 | <P> In "Ancient America, In Notes on American Archaeology" he also speculated on the origins of the "Mound Builder" people then believed to have constructed the famous mounds around the Mississippi and Ohio River Valleys, suggesting that they had been an aboriginal people who had migrated northwards from Central America or Mexico. He rejected the then-common notion that they had been a lost European, Semitic, or Asiatic people who had been wiped out by the North American Indians, asserting on the contrary that the Mounds were "wholly original, wholly American" and "did not come from the Old World". He did, however, still subscribe to the idea that these "Mound Builders" were not the same as the American Indian inhabitants of the region at that time, who he believed were a separate race originating in Asia.
<P> People also claimed that the Indians were not the Mound Builders because the mounds and related artifacts were older than Indian cultures. Caleb Atwater's misunderstanding of stratigraphy caused him to believe that the Mound Builders were a much older civilization than the Indians. In his book, "Antiquities Discovered in the Western States" (1820), Atwater claimed that Indian remains were always found right beneath the surface of the earth. Since the artifacts associated with the Mound Builders were found fairly deep in the ground, Atwater argued that they must be from a different group of people. The discovery of metal artifacts further convinced people that the Mound Builders were not Native Americans. The Indians encountered by the Europeans and Americans were not thought to engage in metallurgy. Some artifacts that were found in relation to the mounds were inscribed with symbols. As the Europeans did not know of any Indian cultures that had a writing system, they assumed a different group had created them.
<P> Scholars believe that the mound site continued to be of great ceremonial importance to the historic Tocobaga Indians of the surrounding area, who coalesced as a people before European encounter in the late sixteenth century. They survived into the eighteenth century, but disappeared as a tribe due to infectious diseases and warfare.
<P> BULLET::::- In the "12th Annual Report of the Bureau of Ethnology", Cyrus Thomas' detailed report on the Mound Builders demolishes the earlier theory that ancient mounds in the United States were built by a "lost race", and shows they were built by the ancestors of modern Native Americans.
<P> A major factor in increasing public knowledge of the origins of the mounds was the 1894 report by Cyrus Thomas of the Bureau of American Ethnology. He concluded that the prehistoric earthworks of the Eastern United States were the work of early cultures of Native Americans. A small number of people had earlier made similar conclusions: Thomas Jefferson, for example, excavated a mound and from the artifacts and burial practices, noted similarities between mound-builder funeral practices and those of Native Americans in his time. In addition, Theodore Lewis in 1886 had refuted Pidgeon's fraudulent claims of pre-Native American moundbuilders.
<P> Next came another group of Indians known as Mound Builders. The Mound Builders were more prevalent in the area, although many of the mounds were destroyed by early white settlement, by either not knowing their significance, or by cultivating the ground in such a manner that would level them out. It was indicated that these Indians were a fun-loving people. They loved to hunt, fish and even to put out gardens in the summer. This may have been because the sparse population allowed them to not need to be over-protective of their hunting grounds. Archeologists report that pieces of pottery unearthed from the mounds were not only skillfully produced, but beautifully decorated; revealing an artistic flair and skill much improved compared to the Bluff Dwellers.
<P> The Mound Builders are the earliest Native American groups known to have inhabited the area. The Mound Builders cultivated corn and constructed large earthen mounds, particularly in the flood plains along the Mississippi River. The Mississippian Culture was in decline by the 12th and 13th centuries, and had largely disappeared by the time of European contact. At the time of European contact the most prominent Native American nation in the area were the Illiniwek who inhabited much of present-day Illinois and eastern Missouri. One particular Illiniwek tribe, the Kaskaskia Indians, originated from the area of present-day Peoria, Illinois, but had migrated south to the area of Kaskaskia, Illinois. In the late 1770s and 1780s, remnants of another Illiniwek tribe, the Peoria tribe, left the east bank of the Mississippi River to escape British and American oppression, with most settling in New Ste. Genevieve and around the Grand Champ Bottom. They were later followed by another Algonquian speaking tribe, the Kickapoos.
| question: How did the myth of the "Mound Builders" as non-Native Americans persist for so long? context: <P> In "Ancient America, In Notes on American Archaeology" he also speculated on the origins of the "Mound Builder" people then believed to have constructed the famous mounds around the Mississippi and Ohio River Valleys, suggesting that they had been an aboriginal people who had migrated northwards from Central America or Mexico. He rejected the then-common notion that they had been a lost European, Semitic, or Asiatic people who had been wiped out by the North American Indians, asserting on the contrary that the Mounds were "wholly original, wholly American" and "did not come from the Old World". He did, however, still subscribe to the idea that these "Mound Builders" were not the same as the American Indian inhabitants of the region at that time, who he believed were a separate race originating in Asia.
<P> People also claimed that the Indians were not the Mound Builders because the mounds and related artifacts were older than Indian cultures. Caleb Atwater's misunderstanding of stratigraphy caused him to believe that the Mound Builders were a much older civilization than the Indians. In his book, "Antiquities Discovered in the Western States" (1820), Atwater claimed that Indian remains were always found right beneath the surface of the earth. Since the artifacts associated with the Mound Builders were found fairly deep in the ground, Atwater argued that they must be from a different group of people. The discovery of metal artifacts further convinced people that the Mound Builders were not Native Americans. The Indians encountered by the Europeans and Americans were not thought to engage in metallurgy. Some artifacts that were found in relation to the mounds were inscribed with symbols. As the Europeans did not know of any Indian cultures that had a writing system, they assumed a different group had created them.
<P> Scholars believe that the mound site continued to be of great ceremonial importance to the historic Tocobaga Indians of the surrounding area, who coalesced as a people before European encounter in the late sixteenth century. They survived into the eighteenth century, but disappeared as a tribe due to infectious diseases and warfare.
<P> BULLET::::- In the "12th Annual Report of the Bureau of Ethnology", Cyrus Thomas' detailed report on the Mound Builders demolishes the earlier theory that ancient mounds in the United States were built by a "lost race", and shows they were built by the ancestors of modern Native Americans.
<P> A major factor in increasing public knowledge of the origins of the mounds was the 1894 report by Cyrus Thomas of the Bureau of American Ethnology. He concluded that the prehistoric earthworks of the Eastern United States were the work of early cultures of Native Americans. A small number of people had earlier made similar conclusions: Thomas Jefferson, for example, excavated a mound and from the artifacts and burial practices, noted similarities between mound-builder funeral practices and those of Native Americans in his time. In addition, Theodore Lewis in 1886 had refuted Pidgeon's fraudulent claims of pre-Native American moundbuilders.
<P> Next came another group of Indians known as Mound Builders. The Mound Builders were more prevalent in the area, although many of the mounds were destroyed by early white settlement, by either not knowing their significance, or by cultivating the ground in such a manner that would level them out. It was indicated that these Indians were a fun-loving people. They loved to hunt, fish and even to put out gardens in the summer. This may have been because the sparse population allowed them to not need to be over-protective of their hunting grounds. Archeologists report that pieces of pottery unearthed from the mounds were not only skillfully produced, but beautifully decorated; revealing an artistic flair and skill much improved compared to the Bluff Dwellers.
<P> The Mound Builders are the earliest Native American groups known to have inhabited the area. The Mound Builders cultivated corn and constructed large earthen mounds, particularly in the flood plains along the Mississippi River. The Mississippian Culture was in decline by the 12th and 13th centuries, and had largely disappeared by the time of European contact. At the time of European contact the most prominent Native American nation in the area were the Illiniwek who inhabited much of present-day Illinois and eastern Missouri. One particular Illiniwek tribe, the Kaskaskia Indians, originated from the area of present-day Peoria, Illinois, but had migrated south to the area of Kaskaskia, Illinois. In the late 1770s and 1780s, remnants of another Illiniwek tribe, the Peoria tribe, left the east bank of the Mississippi River to escape British and American oppression, with most settling in New Ste. Genevieve and around the Grand Champ Bottom. They were later followed by another Algonquian speaking tribe, the Kickapoos.
| answer: Aside from racism, the main issue is that native societies in the Ohio Valley at the time of the European contact didn't claim to have built the mounds. The Adena and Hopewell rose and fell long before Europeans showed up in the New World. The societies contemporary with European colonization of the New World, like the Fort Ancients, seem to have been scattered by waves of Old World epidemics, agricultural strain from the Little Ice Age, and warfare with the Iroquois. By the time things settled down in the Ohio Valley again (~1700), the native population had migrated considerably and didn't have recent ancestral connections to the people who built the mounds they happened to live near. The lack of a direct connection between the mounds and the native societies that the Europeans encountered when moving into the Ohio Valley likely prompted the Moundbuilder myth. Racism and Manifest Destiny perpetuated it, long after the idea became untenable. It still has a few adherents, but those seem to be religiously motivated Mormons from what I've seen. |
227,451 | 3lsmrp | how do we know how well other animals can see or smell? | Two main ways: 1. Dissecting animals' sensory organs to analyze their structure (for example, we can look at the structure of rods and cones in their eyes and theorize based on that). 2. Running tests like playing high or low pitched sounds, putting them in mazes with food, etc. | [
"Well for one there is the physical properties of the animal. For example Owls big pupils do denote good night vision (human pupils dilate in darkness but not as much as an Owl's).\n\nHowever by far the main reason is how well they use the senses. You know a Dog has a good sense of smell because he finds things, th... | 4 | [
"Two main ways: \n\n1. Dissecting animals sensory organs to analyze their structure (For example, we can look at the structure of rods and cones in their eyes and theorize based on that).\n2. Running tests like playing high or low pitched sounds, putting them in mazes with food, etc.",
"Well, for many animals we ... | 3 | <P> Other animals also have receptors to sense the world around them, with degrees of capability varying greatly between species. Humans have a comparatively weak sense of smell and a stronger sense of sight relative to many other mammals while some animals may lack one or more of the traditional five senses. Some animals may also intake and interpret sensory stimuli in very different ways. Some species of animals are able to sense the world in a way that humans cannot, with some species able to sense electrical and magnetic fields, and detect water pressure and currents.
<P> Animals recognise a wide variety of chemicals using their senses of taste and smell. The nematode "Caenorhabditis elegans" has only 14 types of chemosensory neuron, yet is able to respond to dozens of chemicals because each neuron detects several stimuli. More than 40 highly divergent transmembrane proteins that could contribute to this functional diversity have been described. Most of the candidate receptor genes are in clusters of similar genes; 11 of these appear to be expressed in small subsets of chemosensory neurons. A single type of neuron can potentially express at least 4 different receptor genes. Some of these might encode receptors for water-soluble attractants, repellents and pheromones, which are divergent members of the G-protein-coupled receptor family. Sequences of the Sra family of "C. elegans" receptor-like proteins contain 6-7 hydrophobic, putative transmembrane, regions. These can be distinguished from other 7TM proteins (especially those known to couple G-proteins) by their own characteristic TM signatures.
<P> The study of odors is a growing field but is a complex and difficult one. The human olfactory system can detect many thousands of scents based on only very minute airborne concentrations of a chemical. The sense of smell of many animals is even better. Some fragrant flowers give off "odor plumes" that move downwind and are detectable by bees more than a kilometer away.
<P> Odor molecules are detected by the olfactory receptors (hereafter OR) in the olfactory epithelium of the nasal cavity. Each receptor type is expressed within a subset of neurons, from which they directly connect to the olfactory bulb in the brain. Olfaction is essential for survival in most vertebrates; however, the degree to which an animal depends on smell is highly varied. Great variation exists in the number of OR genes among vertebrate species, as shown through bioinformatic analyses. This diversity exists by virtue of the wide-ranging environments that they inhabit. For instance, dolphins that are secondarily adapted to an aquatic niche possess a considerably smaller subset of genes than most mammals. OR gene repertoires have also evolved in relation to other senses, as higher primates with well-developed vision systems tend to have a smaller number of OR genes. As such, investigating the evolutionary changes of OR genes can provide useful information on how genomes respond to environmental changes. Differences in smell sensitivity are also dependent on the anatomy of the olfactory apparatus, such as the size of the olfactory bulb and epithelium.
<P> Animals that are capable of smell detect aroma compounds with their olfactory receptors. Olfactory receptors are cell-membrane receptors on the surface of sensory neurons in the olfactory system that detect airborne aroma compounds. Aroma compounds can then be identified by Gas Chromatography-Olfactometry (GC-O), which involves a human operator sniffing the GC effluent.
<P> Humans leave a trace of chemicals in every place they go and on everything they touch. Other animals use signaling mechanisms to leave trails or identify each other. The sense of smell is an important sense in using these mechanisms, but it is still not well understood. Humans, compared to the rest of the animal world, do not have a good olfactory sense though we may be better than we first assume. Johannes Kepler once argued that the Earth is an immense organism itself, with chemical signals spreading across the globe through various organisms in order to keep the world functioning and well informed.
<P> Research on human olfaction is scant in comparison to other senses such as vision and hearing, and studies specifically devoted to olfactory recognition are even rarer. Thus, what little information there is on this subject is gleaned through animal studies. Rodents such as mice or rats are suitable subjects for odor recognition research given that smell is their primary sense. "[For these species], recognition of individual body odors is analogous to human face recognition in that it provides information about identity." In mice, individual body odors are represented at the major histocompatibility complex (MHC).
| question: how do we know how well other animals can see or smell? context: <P> Other animals also have receptors to sense the world around them, with degrees of capability varying greatly between species. Humans have a comparatively weak sense of smell and a stronger sense of sight relative to many other mammals while some animals may lack one or more of the traditional five senses. Some animals may also intake and interpret sensory stimuli in very different ways. Some species of animals are able to sense the world in a way that humans cannot, with some species able to sense electrical and magnetic fields, and detect water pressure and currents.
<P> Animals recognise a wide variety of chemicals using their senses of taste and smell. The nematode "Caenorhabditis elegans" has only 14 types of chemosensory neuron, yet is able to respond to dozens of chemicals because each neuron detects several stimuli. More than 40 highly divergent transmembrane proteins that could contribute to this functional diversity have been described. Most of the candidate receptor genes are in clusters of similar genes; 11 of these appear to be expressed in small subsets of chemosensory neurons. A single type of neuron can potentially express at least 4 different receptor genes. Some of these might encode receptors for water-soluble attractants, repellents and pheromones, which are divergent members of the G-protein-coupled receptor family. Sequences of the Sra family of "C. elegans" receptor-like proteins contain 6-7 hydrophobic, putative transmembrane, regions. These can be distinguished from other 7TM proteins (especially those known to couple G-proteins) by their own characteristic TM signatures.
<P> The study of odors is a growing field but is a complex and difficult one. The human olfactory system can detect many thousands of scents based on only very minute airborne concentrations of a chemical. The sense of smell of many animals is even better. Some fragrant flowers give off "odor plumes" that move downwind and are detectable by bees more than a kilometer away.
<P> Odor molecules are detected by the olfactory receptors (hereafter OR) in the olfactory epithelium of the nasal cavity. Each receptor type is expressed within a subset of neurons, from which they directly connect to the olfactory bulb in the brain. Olfaction is essential for survival in most vertebrates; however, the degree to which an animal depends on smell is highly varied. Great variation exists in the number of OR genes among vertebrate species, as shown through bioinformatic analyses. This diversity exists by virtue of the wide-ranging environments that they inhabit. For instance, dolphins that are secondarily adapted to an aquatic niche possess a considerably smaller subset of genes than most mammals. OR gene repertoires have also evolved in relation to other senses, as higher primates with well-developed vision systems tend to have a smaller number of OR genes. As such, investigating the evolutionary changes of OR genes can provide useful information on how genomes respond to environmental changes. Differences in smell sensitivity are also dependent on the anatomy of the olfactory apparatus, such as the size of the olfactory bulb and epithelium.
<P> Animals that are capable of smell detect aroma compounds with their olfactory receptors. Olfactory receptors are cell-membrane receptors on the surface of sensory neurons in the olfactory system that detect airborne aroma compounds. Aroma compounds can then be identified by Gas Chromatography-Olfactometry (GC-O), which involves a human operator sniffing the GC effluent.
<P> Humans leave a trace of chemicals in every place they go and on everything they touch. Other animals use signaling mechanisms to leave trails or identify each other. The sense of smell is an important sense in using these mechanisms, but it is still not well understood. Humans, compared to the rest of the animal world, do not have a good olfactory sense though we may be better than we first assume. Johannes Kepler once argued that the Earth is an immense organism itself, with chemical signals spreading across the globe through various organisms in order to keep the world functioning and well informed.
<P> Research on human olfaction is scant in comparison to other senses such as vision and hearing, and studies specifically devoted to olfactory recognition are even rarer. Thus, what little information there is on this subject is gleaned through animal studies. Rodents such as mice or rats are suitable subjects for odor recognition research given that smell is their primary sense. "[For these species], recognition of individual body odors is analogous to human face recognition in that it provides information about identity." In mice, individual body odors are represented at the major histocompatibility complex (MHC).
| answer: Two main ways: 1. Dissecting animals' sensory organs to analyze their structure (for example, we can look at the structure of rods and cones in their eyes and theorize based on that). 2. Running tests like playing high or low pitched sounds, putting them in mazes with food, etc. |
181,300 | 6s1022 | what happens to the dashboard when airbags deploy | Watch [this video](_URL_0_) of an airbag deploying in slow motion. There are panels over the airbag which have seams designed to split apart. When the airbag starts to deploy, it pushes against these panels, which break along the seams and move out of the way so the airbag can inflate. The ones on the passenger side and the pillars will have similar plastics that split apart, but the pattern will obviously be different | [
"Watch [this video](_URL_0_) of an airbag deploying in slow motion\n\nThere are panels over the airbag which have seams to split apart on. When the airbag starts to go, it pushes against these panels which break on the seams and push out of the way so the airbag can deploy. The ones on the passenger side and the ... | 1 | [
"Watch [this video](_URL_0_) of an airbag deploying in slow motion\n\nThere are panels over the airbag which have seams to split apart on. When the airbag starts to go, it pushes against these panels which break on the seams and push out of the way so the airbag can deploy. The ones on the passenger side and the ... | 1 | <P> From the onset of the crash, the entire deployment and inflation process is about 0.04 seconds. Because vehicles change speed so quickly in a crash, airbags must inflate rapidly to reduce the risk of the occupant hitting the vehicle's interior.
<P> Airbags deploy at speeds up to and in some cases exert tremendous force on the windshield. Occupants can impact the airbag just 50 ms after initial deployment. Depending on vehicle design, airbag deployment and/or occupant impact into the airbag may increase forces on the windshield, dramatically in some cases.
<P> Advanced airbag technologies are being developed to tailor airbag deployment to the severity of the crash, the size and posture of the vehicle occupant, belt usage, and how close that person is to the actual airbag. Many of these systems use multi-stage inflators that deploy less forcefully in stages in moderate crashes than in very severe crashes. Occupant sensing devices let the airbag control unit know if someone is occupying a seat adjacent to an airbag, the mass/weight of the person, whether a seat belt or child restraint is being used, and whether the person is forward in the seat and close to the airbag. Based on this information and crash severity information, the airbag is deployed at either at a high force level, a less forceful level, or not at all.
<P> An airbag is a vehicle occupant-restraint system using a bag designed to inflate extremely quickly, then quickly deflate during a collision. It consists of the airbag cushion, a flexible fabric bag, an inflation module, and an impact sensor.
<P> Adaptive airbag systems may utilize multi-stage airbags to adjust the pressure within the airbag. The greater the pressure within the airbag, the more force the airbag will exert on the occupants as they come in contact with it. These adjustments allow the system to deploy the airbag with a moderate force for most collisions; reserving the maximum force airbag only for the severest of collisions. Additional sensors to determine the location, weight or relative size of the occupants may also be used. Information regarding the occupants and the severity of the crash are used by the airbag control unit, to determine whether airbags should be suppressed or deployed, and if so, at various output levels.
<P> Because of the airbag exit flap design of the steering wheel boss and dashboard panel, these items are not designed to be recoverable if an airbag deploys, meaning that they have to be replaced if the vehicle has not been written off in an accident. Moreover, the dust-like particles and gases can cause irreparable cosmetic damage to the dashboard and upholstery, meaning that minor collisions which result in the deployment of airbags can be costly accidents, even if there are no injuries and there is only minor damage to the vehicle structure.
<P> The airbags in the vehicle are controlled by a central airbag control unit (ACU), a specific type of ECU. The ACU monitors a number of related sensors within the vehicle, including accelerometers, impact sensors, side (door) pressure sensors, wheel speed sensors, gyroscopes, brake pressure sensors, and seat occupancy sensors. The bag itself and its inflation mechanism is concealed within the steering wheel boss (for the driver), or the dashboard (for the front passenger), behind plastic flaps or doors which are designed to tear open under the force of the bag inflating. Once the requisite threshold has been reached or exceeded, the airbag control unit will trigger the ignition of a gas generator propellant to rapidly inflate a fabric bag. As the vehicle occupant collides with and squeezes the bag, the gas escapes in a controlled manner through small vent holes. The airbag's volume and the size of the vents in the bag are tailored to each vehicle type, to spread out the deceleration of (and thus force experienced by) the occupant over time and over the occupant's body, compared to a seat belt alone.
| question: what happens to the dashboard when airbags deploy context: <P> From the onset of the crash, the entire deployment and inflation process is about 0.04 seconds. Because vehicles change speed so quickly in a crash, airbags must inflate rapidly to reduce the risk of the occupant hitting the vehicle's interior.
<P> Airbags deploy at speeds up to and in some cases exert tremendous force on the windshield. Occupants can impact the airbag just 50 ms after initial deployment. Depending on vehicle design, airbag deployment and/or occupant impact into the airbag may increase forces on the windshield, dramatically in some cases.
<P> Advanced airbag technologies are being developed to tailor airbag deployment to the severity of the crash, the size and posture of the vehicle occupant, belt usage, and how close that person is to the actual airbag. Many of these systems use multi-stage inflators that deploy less forcefully in stages in moderate crashes than in very severe crashes. Occupant sensing devices let the airbag control unit know if someone is occupying a seat adjacent to an airbag, the mass/weight of the person, whether a seat belt or child restraint is being used, and whether the person is forward in the seat and close to the airbag. Based on this information and crash severity information, the airbag is deployed at either at a high force level, a less forceful level, or not at all.
<P> An airbag is a vehicle occupant-restraint system using a bag designed to inflate extremely quickly, then quickly deflate during a collision. It consists of the airbag cushion, a flexible fabric bag, an inflation module, and an impact sensor.
<P> Adaptive airbag systems may utilize multi-stage airbags to adjust the pressure within the airbag. The greater the pressure within the airbag, the more force the airbag will exert on the occupants as they come in contact with it. These adjustments allow the system to deploy the airbag with a moderate force for most collisions; reserving the maximum force airbag only for the severest of collisions. Additional sensors to determine the location, weight or relative size of the occupants may also be used. Information regarding the occupants and the severity of the crash are used by the airbag control unit, to determine whether airbags should be suppressed or deployed, and if so, at various output levels.
<P> Because of the airbag exit flap design of the steering wheel boss and dashboard panel, these items are not designed to be recoverable if an airbag deploys, meaning that they have to be replaced if the vehicle has not been written off in an accident. Moreover, the dust-like particles and gases can cause irreparable cosmetic damage to the dashboard and upholstery, meaning that minor collisions which result in the deployment of airbags can be costly accidents, even if there are no injuries and there is only minor damage to the vehicle structure.
<P> The airbags in the vehicle are controlled by a central airbag control unit (ACU), a specific type of ECU. The ACU monitors a number of related sensors within the vehicle, including accelerometers, impact sensors, side (door) pressure sensors, wheel speed sensors, gyroscopes, brake pressure sensors, and seat occupancy sensors. The bag itself and its inflation mechanism is concealed within the steering wheel boss (for the driver), or the dashboard (for the front passenger), behind plastic flaps or doors which are designed to tear open under the force of the bag inflating. Once the requisite threshold has been reached or exceeded, the airbag control unit will trigger the ignition of a gas generator propellant to rapidly inflate a fabric bag. As the vehicle occupant collides with and squeezes the bag, the gas escapes in a controlled manner through small vent holes. The airbag's volume and the size of the vents in the bag are tailored to each vehicle type, to spread out the deceleration of (and thus force experienced by) the occupant over time and over the occupant's body, compared to a seat belt alone.
| answer: Watch [this video](_URL_0_) of an airbag deploying in slow motion. There are panels over the airbag which have seams designed to split apart. When the airbag starts to deploy, it pushes against these panels, which break along the seams and move out of the way so the airbag can inflate. The ones on the passenger side and the pillars will have similar plastics that split apart, but the pattern will obviously be different |
125,604 | 5jknns | Which nerve supplies the Diaphragm with autonomic innervation ? | The diaphragm is a skeletal muscle, so it is not supplied with autonomic innervation. Breathing is an autonomous process that can be overridden by higher nervous centers, but all signals reach the diaphragm via the phrenic nerve. In the brainstem there is a respiratory center which controls automatic ventilation. It is dependent on blood CO2 levels, so every increase in CO2 triggers inspiration. Rate and depth of breathing are regulated by blood gases, pH, emotions, etc., and the process is closely associated with the autonomic nervous system; however, no noradrenaline/acetylcholine is involved, AFAIK. You can override this by hyperventilating, which decreases your blood CO2, and then holding your breath. This can make you pass out, because the O2 in your blood is used up before CO2 rises to a value that forces you to inhale. | [
"diaphragm is a skeletal muscle, so it is not supplied with autonomic innervation. Breathing is an antonomous process that can be hijacked by higher nervous centers, but all signals come only via phrenic nerve.\n\nIn the brainstem there is a respiratory center which controls the automatic ventilation. It is depende... | 1 | [] | 0 | <P> The phrenic nerves contain motor, sensory, and sympathetic nerve fibers. These nerves provide the only motor supply to the diaphragm as well as sensation to the central tendon. In the thorax, each phrenic nerve supplies the mediastinal pleura.
<P> The vagus nerve is a long, wandering nerve that emerges from the brainstem and provides parasympathetic stimulation to a large number of organs in the thorax and abdomen, including the heart. The nerves from the sympathetic trunk emerge through the T1-T4 thoracic ganglia and travel to both the sinoatrial and atrioventricular nodes, as well as to the atria and ventricles. The ventricles are more richly innervated by sympathetic fibers than parasympathetic fibers. Sympathetic stimulation causes the release of the neurotransmitter norepinephrine (also known as noradrenaline) at the neuromuscular junction of the cardiac nerves. This shortens the repolarization period, thus speeding the rate of depolarization and contraction, which results in an increased heart rate. It opens chemical or ligand-gated sodium and calcium ion channels, allowing an influx of positively charged ions. Norepinephrine binds to the beta–1 receptor.
<P> pancreas also receives autonomic innervation. The blood flow into pancreas is regulated by sympathetic nerve fibers, while parasympathetic neurons stimulate the activity of acinar and centroacinar cells.
<P> The suprascapular nerve is a nerve that arises from the brachial plexus. It is responsible for the innervation of some of the muscles that attach on the scapula, namely the supraspinatus and infraspinatus muscles.
<P> During embryonic development of the thoracic diaphragm, myoblast cells from the septum invade the other components of the diaphragm. They thus give rise to the motor and sensory innervation of the muscular diaphragm by the phrenic nerve.
<P> One of the major nerve routes is from the brain, along the spinal cord and through the back. This is commonly referred to as the sacral area. This area controls the everyday function of the pelvic floor, urethral sphincter, bladder and bowel. By stimulating the sacral nerve (located in the lower back), a signal is sent that manipulates a contraction within the pelvic floor. Over time these contractions rebuild the strength of the organs and muscles within it. This effectively alleviates all symptoms of urinary/faecal disorders, and in many cases eliminates them completely.
<P> The fourth intercostal nerve is innervated by cutaneous slowly-adapting and rapidly-adapting mechanoreceptors, especially by ones densely-packed under the areola; innervation subsequently triggers oxytocin release, which, when in the peripheral bloodstream, causes myoepithelial cell contraction and lactation: this is an example of a non-nerve-innervation muscular reflex.
| question: Which nerve supplies the Diaphragm with autonomic innervation ? context: <P> The phrenic nerves contain motor, sensory, and sympathetic nerve fibers. These nerves provide the only motor supply to the diaphragm as well as sensation to the central tendon. In the thorax, each phrenic nerve supplies the mediastinal pleura.
<P> The vagus nerve is a long, wandering nerve that emerges from the brainstem and provides parasympathetic stimulation to a large number of organs in the thorax and abdomen, including the heart. The nerves from the sympathetic trunk emerge through the T1-T4 thoracic ganglia and travel to both the sinoatrial and atrioventricular nodes, as well as to the atria and ventricles. The ventricles are more richly innervated by sympathetic fibers than parasympathetic fibers. Sympathetic stimulation causes the release of the neurotransmitter norepinephrine (also known as noradrenaline) at the neuromuscular junction of the cardiac nerves. This shortens the repolarization period, thus speeding the rate of depolarization and contraction, which results in an increased heart rate. It opens chemical or ligand-gated sodium and calcium ion channels, allowing an influx of positively charged ions. Norepinephrine binds to the beta–1 receptor.
<P> pancreas also receives autonomic innervation. The blood flow into pancreas is regulated by sympathetic nerve fibers, while parasympathetic neurons stimulate the activity of acinar and centroacinar cells.
<P> The suprascapular nerve is a nerve that arises from the brachial plexus. It is responsible for the innervation of some of the muscles that attach on the scapula, namely the supraspinatus and infraspinatus muscles.
<P> During embryonic development of the thoracic diaphragm, myoblast cells from the septum invade the other components of the diaphragm. They thus give rise to the motor and sensory innervation of the muscular diaphragm by the phrenic nerve.
<P> One of the major nerve routes is from the brain, along the spinal cord and through the back. This is commonly referred to as the sacral area. This area controls the everyday function of the pelvic floor, urethral sphincter, bladder and bowel. By stimulating the sacral nerve (located in the lower back), a signal is sent that manipulates a contraction within the pelvic floor. Over time these contractions rebuild the strength of the organs and muscles within it. This effectively alleviates all symptoms of urinary/faecal disorders, and in many cases eliminates them completely.
<P> The fourth intercostal nerve is innervated by cutaneous slowly-adapting and rapidly-adapting mechanoreceptors, especially by ones densely-packed under the areola; innervation subsequently triggers oxytocin release, which, when in the peripheral bloodstream, causes myoepithelial cell contraction and lactation: this is an example of a non-nerve-innervation muscular reflex.
| answer: The diaphragm is a skeletal muscle, so it is not supplied with autonomic innervation. Breathing is an autonomous process that can be overridden by higher nervous centers, but all signals reach the diaphragm via the phrenic nerve. In the brainstem there is a respiratory center which controls automatic ventilation. It is dependent on blood CO2 levels, so every increase in CO2 triggers inspiration. Rate and depth of breathing are regulated by blood gases, pH, emotions, etc., and the process is closely associated with the autonomic nervous system; however, no noradrenaline/acetylcholine is involved, AFAIK. You can override this by hyperventilating, which decreases your blood CO2, and then holding your breath. This can make you pass out, because the O2 in your blood is used up before CO2 rises to a value that forces you to inhale. |
150,388 | tbr9m | Will a person burn less calories doing 2 30 minute workout sessions versus doing one hour straight? | If you're just looking at calories expended, it's relative to the work being done. There's a tight relationship between calories burned and oxygen consumed (5 kcal/liter oxygen, although fuel source can vary this between 4.7 and 5 kcal/L). Assuming you work at the same intensity for the separate bouts as you do for the single bout, they should be pretty equal in caloric output (the total energy cost to perform the work is the same in both cases). Some have suggested that your "metabolism" remains elevated after the separate bouts (so twice the elevated metabolism compared with one bout). They are referring to EPOC (excess post-exercise oxygen consumption), which is when oxygen consumption remains elevated even when work is no longer being performed (you are at rest). The EPOC is relative to the work done and is related to the energy costs of getting back to a resting state (replenishing glycogen and phosphocreatine stores, thermogenic elevations, etc.). Most of the data I've seen links the size of EPOC to intensity; I've never seen anything to do with duration. So if there were a difference in energy cost, it would be in the size/duration of EPOC (although I've never seen any data to support that). If there were differences here, they would likely be very small and negligible enough to say that both bouts (double 30 min or single 60 min) would be pretty much equal. If you can exercise at a higher intensity during the two bouts than you could in the one, then you would be burning more calories (greater total energy cost). | [
"If you're just looking at calories expended, it's relative to the work being done. There's a tight relationship between calories burned and oxygen consumed (5 kcal/liter oxygen) although fuel source can vary this between 5 and 4.7 kcal/L). Assuming you work at the same intensity for the separate bouts as you do ... | 1 | [
"If you're just looking at calories expended, it's relative to the work being done. There's a tight relationship between calories burned and oxygen consumed (5 kcal/liter oxygen) although fuel source can vary this between 5 and 4.7 kcal/L). Assuming you work at the same intensity for the separate bouts as you do ... | 1 | <P> Instead of 30 minutes a day at one time, short bursts of physical activity for 8–10 minutes three times a day are also suitable. Exercising this way can reduce the risk of getting heart disease or coronary ischemia, if it is performed at moderate intensity.
<P> BULLET::::- Higher intensity exercise, such as High-intensity interval training (HIIT), increases the resting metabolic rate (RMR) in the 24 hours following high intensity exercise, ultimately burning more calories than lower intensity exercise; low intensity exercise burns more calories during the exercise, due to the increased duration, but fewer afterwards.
<P> BULLET::::- Pre-schoolers (3 to 5 years) should spend at least 180 minutes a day in a variety of physical activities, of which 60 minutes is energetic play such as running, jumping and kicking and throwing, spread throughout the day - noting more is better.
<P> The 2008 Guidelines indicated it was only beneficial to do at least 10 minutes of an activity at a time. The second edition removes this requirement that states that all moderate-to-vigorous physical activity counts.
<P> A 2018 review of intermittent fasting in obese people showed that reducing calorie intake one to six days per week over at least 12 weeks was effective for reducing body weight on an average of ; the results were not different from a simple calorie restricted diet, and the clinical trials reviewed were run mostly on middle-aged women from the US and the UK, limiting interpretation of the results. Intermittent fasting has not been studied in children, the elderly, or underweight people, and could be harmful in these populations.
<P> Both the health benefits and the performance benefits, or "training effect", require that the duration and the frequency of exercise both exceed a certain minimum. Most authorities suggest at least twenty minutes performed at least three times per week.
<P> BULLET::::- Engage in regular physical activity and reduce sedentary activities to promote health, psychological well-being, and a healthy body weight. (At least 30 minutes on most, and if possible, every day for adults and at least 60 minutes each day for children and teenagers, and for most people increasing to more vigorous-intensity or a longer duration will bring greater benefits.)
| question: Will a person burn less calories doing 2 30 minute workout sessions versus doing one hour straight? context: <P> Instead of 30 minutes a day at one time, short bursts of physical activity for 8–10 minutes three times a day are also suitable. Exercising this way can reduce the risk of getting heart disease or coronary ischemia, if it is performed at moderate intensity.
<P> BULLET::::- Higher intensity exercise, such as High-intensity interval training (HIIT), increases the resting metabolic rate (RMR) in the 24 hours following high intensity exercise, ultimately burning more calories than lower intensity exercise; low intensity exercise burns more calories during the exercise, due to the increased duration, but fewer afterwards.
<P> BULLET::::- Pre-schoolers (3 to 5 years) should spend at least 180 minutes a day in a variety of physical activities, of which 60 minutes is energetic play such as running, jumping and kicking and throwing, spread throughout the day - noting more is better.
<P> The 2008 Guidelines indicated it was only beneficial to do at least 10 minutes of an activity at a time. The second edition removes this requirement that states that all moderate-to-vigorous physical activity counts.
<P> A 2018 review of intermittent fasting in obese people showed that reducing calorie intake one to six days per week over at least 12 weeks was effective for reducing body weight on an average of ; the results were not different from a simple calorie restricted diet, and the clinical trials reviewed were run mostly on middle-aged women from the US and the UK, limiting interpretation of the results. Intermittent fasting has not been studied in children, the elderly, or underweight people, and could be harmful in these populations.
<P> Both the health benefits and the performance benefits, or "training effect", require that the duration and the frequency of exercise both exceed a certain minimum. Most authorities suggest at least twenty minutes performed at least three times per week.
<P> BULLET::::- Engage in regular physical activity and reduce sedentary activities to promote health, psychological well-being, and a healthy body weight. (At least 30 minutes on most, and if possible, every day for adults and at least 60 minutes each day for children and teenagers, and for most people increasing to more vigorous-intensity or a longer duration will bring greater benefits.)
| answer: If you're just looking at calories expended, it's relative to the work being done. There's a tight relationship between calories burned and oxygen consumed (5 kcal/liter oxygen, although fuel source can vary this between 4.7 and 5 kcal/L). Assuming you work at the same intensity for the separate bouts as you do for the single bout, they should be pretty equal in caloric output (the total energy cost to perform the work is the same in both cases). Some have suggested that your "metabolism" remains elevated after the separate bouts (so twice the elevated metabolism compared with one bout). They are referring to EPOC (excess post-exercise oxygen consumption), which is when oxygen consumption remains elevated even when work is no longer being performed (you are at rest). The EPOC is relative to the work done and is related to the energy costs of getting back to a resting state (replenishing glycogen and phosphocreatine stores, thermogenic elevations, etc.). Most of the data I've seen links the size of EPOC to intensity; I've never seen anything to do with duration. So if there were a difference in energy cost, it would be in the size/duration of EPOC (although I've never seen any data to support that). If there were differences here, they would likely be very small and negligible enough to say that both bouts (double 30 min or single 60 min) would be pretty much equal. If you can exercise at a higher intensity during the two bouts than you could in the one, then you would be burning more calories (greater total energy cost). |
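The roughly 5 kcal-per-litre-of-oxygen relationship cited in this answer makes the "same total work, same total calories" point easy to check with back-of-the-envelope arithmetic. The sketch below is purely illustrative; the oxygen-uptake figure is an assumed example value, not a measurement.

```python
# Back-of-the-envelope: calories estimated from oxygen consumption (~5 kcal per litre O2).
# The VO2 figure below is an illustrative assumption, not a measured value.
KCAL_PER_LITRE_O2 = 5.0          # can range roughly 4.7-5.0 depending on fuel source

def kcal_burned(vo2_litres_per_min, minutes):
    """Total energy cost of exercise from average oxygen uptake."""
    return vo2_litres_per_min * minutes * KCAL_PER_LITRE_O2

# Same assumed intensity (2.0 L O2/min), split differently:
one_hour = kcal_burned(2.0, 60)        # single 60-minute bout
two_halves = 2 * kcal_burned(2.0, 30)  # two 30-minute bouts
print(one_hour, two_halves)            # 600.0 600.0 -> identical work, identical calories
```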
167,173 | v7emg | What are the effects of sleeping with the lights on? | Light, particularly blue light, inhibits [melatonin](_URL_0_) production which is a "sleep hormone" and plays a role in our [circadian rhythm](_URL_1_). | [
"Light, particularly blue light, inhibits [melatonin](_URL_0_) production which is a \"sleep hormone\" and plays a role in our [circadian rhythm](_URL_1_).",
"[Lot more info on same question I asked last year.](_URL_0_)",
"[According to Discovery News](_URL_0_), excess light during sleep can cause depression.\n... | 6 | [
"Light, particularly blue light, inhibits [melatonin](_URL_0_) production which is a \"sleep hormone\" and plays a role in our [circadian rhythm](_URL_1_).",
"[Lot more info on same question I asked last year.](_URL_0_)",
"[According to Discovery News](_URL_0_), excess light during sleep can cause depression.\n... | 4 | <P> Medical research on the effects of excessive light on the human body suggests that a variety of adverse health effects may be caused by light pollution or excessive light exposure, and some lighting design textbooks use human health as an explicit criterion for proper interior lighting. Health effects of over-illumination or improper spectral composition of light may include: increased headache incidence, worker fatigue, medically defined stress, decrease in sexual function and increase in anxiety. Likewise, animal models have been studied demonstrating unavoidable light to produce adverse effect on mood and anxiety. For those who need to be awake at night, light at night also has an acute effect on alertness and mood.
<P> Side effects of light therapy for sleep phase disorders include jumpiness or jitteriness, headache, eye irritation and nausea. Some non-depressive physical complaints, such as poor vision and skin rash or irritation, may improve with light therapy.
<P> Nightlights are also helpful in reducing falls and injuries and, at the same time, help the elderly to maintain sleep. Falls are a major concern with the elderly; they threaten their independence and risk further health complications. Lighting systems can help seniors maintain balance and stability. Furthermore, sleep deprivation can contribute to decreased postural control. Nightlights that accent horizontal and vertical spaces, such as soft lighting above a doorway or at the foot of a bed, can reduce the risk of falls without disturbing sleep.
<P> The optimal sleeping light condition is said by some to be total darkness. If a nightlight is used within a sleeping area, it is recommended to choose a dim reddish light to minimize disruptive effects on sleep cycles. In addition, nightlights may be useful in locations other than sleeping areas, such as hallways, bathrooms, or kitchens, to allow late night trips to be made without turning on the full light, while preserving a dark sleeping environment.
<P> Starting about two hours before an individual's regular bedtime, exposure of the eyes to light will delay the circadian phase, causing later wake-up time and later sleep onset. The delaying effect gets stronger as evening progresses; it is also dependent on the wavelength and illuminance ("brightness") of the light. The effect is small in dim indoor lighting.
<P> Another study has indicated that sleeping with the light on may protect the eyes of diabetics from retinopathy, a condition that can lead to blindness. However, the initial study is still inconclusive.
<P> A University of Pennsylvania study indicated that sleeping with the light on or with a nightlight was associated with a greater incidence of nearsightedness in children. However, a later study at Ohio State University contradicted the earlier conclusion. Both studies were published in the journal "Nature".
| question: What are the effects of sleeping with the lights on? context: <P> Medical research on the effects of excessive light on the human body suggests that a variety of adverse health effects may be caused by light pollution or excessive light exposure, and some lighting design textbooks use human health as an explicit criterion for proper interior lighting. Health effects of over-illumination or improper spectral composition of light may include: increased headache incidence, worker fatigue, medically defined stress, decrease in sexual function and increase in anxiety. Likewise, animal models have been studied demonstrating unavoidable light to produce adverse effect on mood and anxiety. For those who need to be awake at night, light at night also has an acute effect on alertness and mood.
<P> Side effects of light therapy for sleep phase disorders include jumpiness or jitteriness, headache, eye irritation and nausea. Some non-depressive physical complaints, such as poor vision and skin rash or irritation, may improve with light therapy.
<P> Nightlights are also helpful in reducing falls and injuries and, at the same time, help the elderly to maintain sleep. Falls are a major concern with the elderly; they threaten their independence and risk further health complications. Lighting systems can help seniors maintain balance and stability. Furthermore, sleep deprivation can contribute to decreased postural control. Nightlights that accent horizontal and vertical spaces, such as soft lighting above a doorway or at the foot of a bed, can reduce the risk of falls without disturbing sleep.
<P> The optimal sleeping light condition is said by some to be total darkness. If a nightlight is used within a sleeping area, it is recommended to choose a dim reddish light to minimize disruptive effects on sleep cycles. In addition, nightlights may be useful in locations other than sleeping areas, such as hallways, bathrooms, or kitchens, to allow late night trips to be made without turning on the full light, while preserving a dark sleeping environment.
<P> Starting about two hours before an individual's regular bedtime, exposure of the eyes to light will delay the circadian phase, causing later wake-up time and later sleep onset. The delaying effect gets stronger as evening progresses; it is also dependent on the wavelength and illuminance ("brightness") of the light. The effect is small in dim indoor lighting.
<P> Another study has indicated that sleeping with the light on may protect the eyes of diabetics from retinopathy, a condition that can lead to blindness. However, the initial study is still inconclusive.
<P> A University of Pennsylvania study indicated that sleeping with the light on or with a nightlight was associated with a greater incidence of nearsightedness in children. However, a later study at Ohio State University contradicted the earlier conclusion. Both studies were published in the journal "Nature".
| answer: Light, particularly blue light, inhibits [melatonin](_URL_0_) production which is a "sleep hormone" and plays a role in our [circadian rhythm](_URL_1_). |
88,497 | 2mhqqn | What were the differences between the generations of the Red Army Faction? | The reason for the talk of various generations of RAF history is a product of the reality of their particular situation during distinct time periods. Essentially, the Red Army Faction was "founded" in May of 1970 when famed journalist Ulrike Meinhof helped break convicted arsonist Andreas Baader from police custody in Berlin. The group, which consisted of as many as about 40-50 people in the coming two years, spent their time robbing banks, stealing cars, etc., until eventually beginning a bombing campaign in May of 1972 that left several American soldiers dead, and dozens of Americans and Germans maimed. Quickly the entire leadership was captured and imprisoned. So with the leadership in prison, and only low-level followers still on the outside, new "blood" was needed to refresh the group. This came (partially) in the form of former members of the Heidelberg University "Socialist Patients' Collective" (these were literally mentally ill students of a professor who believed that their mental illnesses were the product of Capitalism, and the cure for their illnesses was to attack the state). They formed the core of the so-called "Second Generation of the Red Army Faction." They were called this in the popular mind to differentiate them from the original leadership, and to emphasize that most of these members joined after the first generation had been imprisoned. So it really was a distinct group of folks. The so-called "third generation" is much more nebulous; this is the group that generally led actions after 1977, after the major leaders of the group committed suicide in prison (Baader, Meinhof (in '76), Gudrun Ensslin, and Jan-Carl Raspe), and much of the second generation had also been imprisoned. Truth be told, many people in the "second generation" could be considered "third generation" members, such as Brigitte Mohnhaupt, who was clearly the prime leader of the group after 1977, yet was also clearly a member of the second generation. One reason you will hear of a "second" and a "third" generation is to differentiate between the goals of the two generations. The second generation was essentially consumed with securing the freedom of the imprisoned first-generation leaders. Their actions, such as the failed 1975 German embassy takeover in Stockholm, the 1977 kidnapping of industrialist Hanns-Martin Schleyer, or the 1977 hijacking of a Lufthansa plane in Palma de Mallorca, were all done to secure the release of Baader, Ensslin, et al. Once they died in prison, the "third generation" basically turned its focus to attacking the state; it was no longer consumed with getting leaders out of prison. It's important to note that many supporters or followers of the group do exactly as you suggest; they consider it one long, unbroken history. In reality that is just as valid a way of looking at it; the talk of the "generations" is merely a way to understand why their actions changed so dramatically during distinct periods of time. Source: my site: [_URL_1_](_URL_0_) | [
"The reason for the talk of various generations of RAF history is a product of the reality of their particular situation during distinct time periods.\n\nEssentially the Red Army Faction was \"founded\" in May of 1970 when famed journalist Ulrike Meinhof helped break convicted arsonist Andreas Baader from police cu... | 1 | [
"The reason for the talk of various generations of RAF history is a product of the reality of their particular situation during distinct time periods.\n\nEssentially the Red Army Faction was \"founded\" in May of 1970 when famed journalist Ulrike Meinhof helped break convicted arsonist Andreas Baader from police cu... | 1 | <P> The Red Army Faction was formed with the intention of complementing the plethora of revolutionary and radical groups across West Germany and Europe, as a more class conscious and determined force compared with some of its contemporaries. The members and supporters were already associated with the 'Revolutionary Cells' and 2 June Movement as well as radical currents and phenomena such as the Socialist Patients' Collective, Kommune 1 and the Situationists.
<P> The Red Army formed at least 42 divisions during the Second World War which had substantial ethnic majorities in their composition derived from location of initial formation rather than intentional "nationalization" of the divisions, including four Azeri, five Armenian, and eight Georgian rifle divisions and a large number of cavalry divisions in the eastern Ukraine, Kuban region, and Central Asia, including five Uzbek cavalry divisions. See .
<P> The First Red Army formed from the First, Third and Fifth Army Groups in southern Jiangxi under the command of Bo Gu and Otto Braun. When several smaller units formed the Fourth Red Army under Zhang Guotao in the Sichuan–Shaanxi border area, no standard nomenclature of the armies of the Communist Party existed; moreover, during the Chinese Civil War, central control of separate Communist-controlled enclaves within China was limited. After the organization of these first two main forces, the Second Red Army formed in eastern Guizhou by unifying the Second and Sixth Army Groups under He Long and Xiao Ke. In this case, a "Third Red Army" was led by He Long, who established his base area in the Hunan–Hubei border. The defeat of his forces in 1932 led to a merge in October 1934 with the 6th Army Corps, led by Xiao Ke, to form the Second Red Army. These three armies would maintain their historical designation as the First, Second and Fourth Red Armies until Communist military forces were nominally integrated into the National Revolutionary Army, forming the Eighth Route Army and the New Fourth Army, during the Second Sino-Japanese War from 1937 to 1945.
<P> The Red Army Faction (RAF) was a New Left group founded in 1968 by Andreas Baader and Ulrike Meinhof in West Germany. Inspired by Che Guevara, Maoist socialism, and the Vietcong, the group sought to raise awareness of the Vietnamese and Palestinian independence movements through kidnappings, taking embassies hostage, bank robberies, assassinations, bombings, and attacks on U.S. air bases. The group became arguably best known for 1977's "German Autumn". The buildup leading to German Autumn began on April 7, when the RAF shot Federal Prosecutor Siegfried Buback. On July 30, it shot Jürgen Ponto, then head of the Dresdner Bank, in a failed kidnapping attempt; on September 5, the group kidnapped Hanns Martin Schleyer (a former SS officer and an important West German industrialist), executing him on October 19. The hijacking of the Lufthansa jetliner "Landshut" in October 1977 by the PFLP, a Palestinian group, is also considered to be part of German Autumn.
<P> The usual translation into English is the Red Army "Faction"; however, the founders wanted it not to reflect a splinter group but rather an embryonic militant unit that was embedded, in or part of, a wider communist workers' movement, i.e. a "fraction" of a whole.
<P> Zaloga notes that the Red Army formed at least 42 'national' divisions during the Second World War, including four Azeri, five Armenian, and eight Georgian rifle divisions and a large number of cavalry divisions in Central Asia, including five Uzbek cavalry divisions.
<P> Abused equally by the Red and invading White armies, large groups of peasants, as well as Red Army deserters, formed “Green” armies that resisted the Reds and Whites alike. These forces had no grand political agenda like their enemies, for the most part they simply wanted to stop being harassed and be allowed to govern themselves. Though the Green Armies have largely been ignored by history (and by Soviet historians in particular), they constituted a formidable force and a major threat to Red victory in the Civil War. Even after the party declared the Civil War over in 1920, the Red-Green war persisted for some time.
| question: What were the differences between the generations of the Red Army Faction? context: <P> The Red Army Faction was formed with the intention of complementing the plethora of revolutionary and radical groups across West Germany and Europe, as a more class conscious and determined force compared with some of its contemporaries. The members and supporters were already associated with the 'Revolutionary Cells' and 2 June Movement as well as radical currents and phenomena such as the Socialist Patients' Collective, Kommune 1 and the Situationists.
<P> The Red Army formed at least 42 divisions during the Second World War which had substantial ethnic majorities in their composition derived from location of initial formation rather than intentional "nationalization" of the divisions, including four Azeri, five Armenian, and eight Georgian rifle divisions and a large number of cavalry divisions in the eastern Ukraine, Kuban region, and Central Asia, including five Uzbek cavalry divisions. See .
<P> The First Red Army formed from the First, Third and Fifth Army Groups in southern Jiangxi under the command of Bo Gu and Otto Braun. When several smaller units formed the Fourth Red Army under Zhang Guotao in the Sichuan–Shaanxi border area, no standard nomenclature of the armies of the Communist Party existed; moreover, during the Chinese Civil War, central control of separate Communist-controlled enclaves within China was limited. After the organization of these first two main forces, the Second Red Army formed in eastern Guizhou by unifying the Second and Sixth Army Groups under He Long and Xiao Ke. In this case, a "Third Red Army" was led by He Long, who established his base area in the Hunan–Hubei border. The defeat of his forces in 1932 led to a merge in October 1934 with the 6th Army Corps, led by Xiao Ke, to form the Second Red Army. These three armies would maintain their historical designation as the First, Second and Fourth Red Armies until Communist military forces were nominally integrated into the National Revolutionary Army, forming the Eighth Route Army and the New Fourth Army, during the Second Sino-Japanese War from 1937 to 1945.
<P> The Red Army Faction (RAF) was a New Left group founded in 1968 by Andreas Baader and Ulrike Meinhof in West Germany. Inspired by Che Guevara, Maoist socialism, and the Vietcong, the group sought to raise awareness of the Vietnamese and Palestinian independence movements through kidnappings, taking embassies hostage, bank robberies, assassinations, bombings, and attacks on U.S. air bases. The group became arguably best known for 1977's "German Autumn". The buildup leading to German Autumn began on April 7, when the RAF shot Federal Prosecutor Siegfried Buback. On July 30, it shot Jürgen Ponto, then head of the Dresdner Bank, in a failed kidnapping attempt; on September 5, the group kidnapped Hanns Martin Schleyer (a former SS officer and an important West German industrialist), executing him on October 19. The hijacking of the Lufthansa jetliner "Landshut" in October 1977 by the PFLP, a Palestinian group, is also considered to be part of German Autumn.
<P> The usual translation into English is the Red Army "Faction"; however, the founders wanted it not to reflect a splinter group but rather an embryonic militant unit that was embedded, in or part of, a wider communist workers' movement, i.e. a "fraction" of a whole.
<P> Zaloga notes that the Red Army formed at least 42 'national' divisions during the Second World War, including four Azeri, five Armenian, and eight Georgian rifle divisions and a large number of cavalry divisions in Central Asia, including five Uzbek cavalry divisions.
<P> Abused equally by the Red and invading White armies, large groups of peasants, as well as Red Army deserters, formed “Green” armies that resisted the Reds and Whites alike. These forces had no grand political agenda like their enemies, for the most part they simply wanted to stop being harassed and be allowed to govern themselves. Though the Green Armies have largely been ignored by history (and by Soviet historians in particular), they constituted a formidable force and a major threat to Red victory in the Civil War. Even after the party declared the Civil War over in 1920, the Red-Green war persisted for some time.
| answer: The reason for the talk of various generations of RAF history is a product of the reality of their particular situation during distinct time periods. Essentially the Red Army Faction was "founded" in May of 1970 when famed journalist Ulrike Meinhof helped break convicted arsonist Andreas Baader from police custody in Berlin. The group, which consisted of as many as about 40-50 people in the coming two years, spent their time robbing banks, stealing cars, etc., until eventually beginning a bombing campaign in May of 1972 that left several American soldiers dead, and dozens of Americans and Germans maimed. Quickly the entire leadership was captured and imprisoned. So with the leadership in prison, and only low-level followers still on the outside, new "blood" was needed to refresh the group. This came (partially) in the form of former members of the Heidelberg University "Socialist Patients' Collective" (these were literally mentally ill students of a professor who believed that their mental illnesses were the product of Capitalism, and the cure for their illnesses was to attack the state). They formed the core of the so-called "Second Generation of the Red Army Faction." They were called this in the popular mind to differentiate them from the original leadership, and to emphasize that most of these members joined after the first generation had been imprisoned. So it really was a distinct group of folks. The so-called "third generation" is much more nebulous; this is the group that generally led actions after 1977, after the major leaders of the group committed suicide in prison (Baader, Meinhof (in '76), Gudrun Ensslin, and Jan-Carl Raspe), and much of the second generation were also imprisoned. Truth be told, many people in the "second generation" could be considered "third generation" members, such as Brigitte Mohnhaupt, who was clearly the prime leader of the group after 1977, yet was clearly a member of the second generation as well. One reason that you will hear of a "second" and a "third" generation is to differentiate between the goals of the two generations. The second generation was essentially consumed with securing the freedom of the imprisoned first generation leaders. Their actions, such as the failed 1975 German embassy takeover in Stockholm, or the 1977 kidnapping of industrialist Hanns-Martin Schleyer, or the 1977 hijacking of a Lufthansa plane in Palma Mallorca... all of these were done to secure the release of Baader, Ensslin, et al. Once they died in prison, the "third generation" basically turned their focus on attacking the state; they no longer were consumed with getting leaders out of prison. It's important to note that many supporters or followers of the group do exactly as you suggest; they consider it one long, unbroken history. In reality that is just as valid a way of looking at it; the talk of the "Generations" is merely a way to understand why their actions changed so dramatically during distinct periods of time. Source: my site: [_URL_1_](_URL_0_) |
59,516 | 3y4qpl | How did naval battles end in the 1700s? | Hi there, while you're right that 18th-century naval battles did not usually end with a large number of sinkings, it was not completely unknown. That said, though, the largest cause of the loss of ships during battle during this time was capture by the enemy. > If that was the case then how would you be able to verify that you are victorious? The main reason would be that you had beaten the enemy fleet into submission, forced it to flee, forced ships to surrender or disabled them completely. The fighting instructions issued to British captains generally contained some version of the phrase "take, sink, burn or destroy" the enemy, and that's generally the order these things would go in. What happened, broadly speaking, in the 18th century was a rethinking of naval tactics to move away from the line-of-battle formations that had led to indecisive results and a move toward tactics that would allow one fleet to overwhelm another. I wrote about this at some length [here](_URL_1_); to quote from that answer: > Nelson had studied these tactics, and saw the potential for their decisive use. In the battle of Cape St. Vincent (1797), when Nelson was still a captain, he broke his ship out of the line of battle without orders so he could engage the Spanish van (front part of their fleet), engaging three Spanish ships with his one and taking two of them as prizes. (The exact number of ships that came to his aid is in dispute, but his 74-gun HMS Captain engaged ships of 130, 112 and 80 guns for a period of time.) Nelson could have been censured for breaking the line without orders, and could quite possibly have lost his ship in the process. The British admiral, Sir John Jervis (later created Earl St. Vincent) did not reprimand Nelson, but also did not mention his action in dispatches. (Nelson himself used his tactics for propaganda purposes, but I'm getting away from the point.) > Nelson also used the tactic of concentrating the strength of his fleet upon a smaller portion of the enemy's line in his tactics at the Battle of the Nile (sometimes called the Battle of Aboukir Bay) in 1798. That battle came after a long and frustrating summer of chasing the French from one end of the Mediterranean to another, which provided Nelson (now an admiral) with the time necessary to meet with his captains and make his tactical intentions known. When Nelson finally caught up with the French fleet, it was at anchor, but he proceeded to attack immediately with the intention of pitting his ships 2-1 or 3-1 against the front of the French line. On his own initiative, Thomas Foley, captain of HMS Goliath, noticed that there was room between the French ships and the shoal water to the west, and passed down the west side of the French line. Other ships followed, so the French line was essentially doubled, allowing the British to anchor, beat ships into submission, weigh anchor and proceed down the line. And: > As the British ships approached, Collingwood's Royal Sovereign, leading the southern column, surged ahead and was the first to engage the enemy, passing just astern of the Spanish admiral's ship Santa Ana. Victory, leading the northern column, was under fire for about 40 minutes from four ships without being able to respond, and the Franco-Spanish guns killed a number of the British crew and shot away the ship's wheel.
Nelson broke the allied line at 12:45, passing astern of Villeneuve's Bucentaure and engaging the French Redoutable; Victory won that battle eventually, killing or wounding all but 99 of the approximately 650-man crew on Redoutable, but Nelson himself suffered a mortal wound from a French musket ball. > The rest of the battle followed essentially as planned, with the British ships passing through the allied line and engaging multiple French and Spanish ships, combining fire whenever possible. The allied van watched the battle unfold, made a small effort to engage, fired a few guns and eventually sailed off. The British captured 22 allied ships, with the loss of none of theirs, but most of their captures were lost in a great storm the night of the battle. So that will hopefully answer part of your question -- despite not sinking any enemy ships at Trafalgar, the British fleet beat 22 Franco-Spanish ships into submission and caused about 16,000 allied casualties, while only losing about 1,600 of their own men. There was no debate over who won that battle. Not to shamelessly self-promote, but if you're interested in naval warfare feel free to poke around [in my profile.](_URL_0_) | [
"Hi there, while you're right that 18th-century naval battles did not usually end with a large number of sinkings, it was not completely unknown. That said, though, the largest cause of the loss of ships during battle during this time was capture by the enemy. \n\n > If that was the case then how would you be able ... | 1 | [
"Hi there, while you're right that 18th-century naval battles did not usually end with a large number of sinkings, it was not completely unknown. That said, though, the largest cause of the loss of ships during battle during this time was capture by the enemy. \n\n > If that was the case then how would you be able ... | 1 | <P> Throughout the 18th century the Royal Navy gradually gained ascendancy over the French Navy, with victories in the War of Spanish Succession (1701–1714), inconclusive battles in the War of Austrian Succession (1740–1748), victories in the Seven Years' War (1754–1763), a partial reversal during the American War of Independence (1775–1783), and consolidation into uncontested supremacy during the 19th century from the Battle of Trafalgar in 1805. These conflicts saw the development and refinement of tactics which came to be called the line of battle.
<P> On September 10, 1813, during the War of 1812, nine vessels of the United States Navy under Commodore Oliver Hazard Perry, decisively defeated six vessels of Great Britain’s Royal Navy in the Battle of Lake Erie near Put-in-Bay. This action was one of the major battles of the war.
<P> During this century, the Navy cut its teeth in the Anglo-French War (1627–1629), the Franco-Spanish War (1635–59), the Second Anglo-Dutch War, the Franco-Dutch War, and the Nine Years' War. Major battles in these years include the Battle of Augusta, Battle of Beachy Head, the Battles of Barfleur and La Hougue, the Battle of Lagos, and the Battle of Texel.
<P> The 1700s opened with the War of the Spanish Succession, over a decade long, followed by the War of the Austrian Succession in the 1740s. Principal engagements of these wars include the Battle of Vigo Bay and two separate Battles of Cape Finisterre in 1747. The most grueling conflict for the Navy, however, was the Seven Years' War, in which it was virtually destroyed. Significant actions include the Battle of Cap-Français, the Battle of Quiberon Bay, and another Battle of Cape Finisterre.
<P> The wars of the 18th century produced a series of tactically indecisive naval battles between evenly matched fleets in line ahead, such as Málaga (1704), Rügen Island (1715), Toulon (1744), Minorca (1756), Negapatam (1758), Cuddalore (1758), Pondicherry (1759), Ushant (1778), Dogger Bank (1781), the Chesapeake (1781), Hogland (1788) and Öland (1789). Although a few of these battles had important "strategic" consequences, like the Chesapeake which the British needed to win, all were "tactically" indecisive. Many admirals began to believe that a contest between two equally matched fleets could not produce a decisive result. The tactically decisive actions of the 18th century were all chase actions, where one fleet was clearly superior to the other, such as the two battles of Finisterre (1747), and those at Lagos (1759), Quiberon Bay (1759) and Cape St. Vincent (1780).
<P> The U.S. Navy saw substantial action in the War of 1812, where it was victorious in eleven single-ship duels with the Royal Navy. It drove all significant British forces off Lake Erie and Lake Champlain and prevented them from becoming British-controlled zones. The result was a major defeat for the British invasion of New York state, and the defeat of the military threat from the Native American allies of the British. Despite this, the U.S. Navy was unable to prevent the British from blockading its ports and landing troops. But after the War of 1812 ended in 1815, the U.S. Navy primarily focused its attention on protecting American shipping assets, sending squadrons to the Caribbean, the Mediterranean, where it participated in the Second Barbary War that ended piracy in the region, South America, Africa, and the Pacific. From 1819 to the outbreak of the Civil War, the Africa Squadron operated to suppress the slave trade, seizing 36 slave ships, although its contribution was smaller than that of the much larger British Royal Navy.
<P> During naval operations that were possible preparations for a coordinated French invasion of England, the largest sea battle of the war occurred, on 22 February 1744. This naval battle took place in the Mediterranean off the coast of Toulon, France. A large British fleet under the command of Admiral Thomas Mathews, with Rear Admiral Richard Lestock second in command, was blockading the French coast. A smaller French and Spanish naval force attacked the British blockade and damaged some of the British ships, forcing the British to withdraw and seek repairs. Thus, the British blockade of the French coast was relieved, and the Spanish fleet apparently controlled the Mediterranean Sea. A Spanish squadron took refuge in the harbour at Toulon. The British fleet watched this squadron carefully from a harbour a short distance to the east. On 21 February 1744, the Spanish ships put to sea with a French fleet. Admiral Mathews took his British fleet and attacked the Spanish fleet from 22 February until 23 February 1744 in what has become known as the Battle of Toulon. However, because of miscommunication and possibly treachery on the part of Rear Admiral Lestock, the smaller Spanish fleet was allowed to escape. With the knowledge that a larger French fleet was sailing to the rescue the British ships broke off combat and retreated to the northeast.
| question: How did naval battles end in the 1700s? context: <P> Throughout the 18th century the Royal Navy gradually gained ascendancy over the French Navy, with victories in the War of Spanish Succession (1701–1714), inconclusive battles in the War of Austrian Succession (1740–1748), victories in the Seven Years' War (1754–1763), a partial reversal during the American War of Independence (1775–1783), and consolidation into uncontested supremacy during the 19th century from the Battle of Trafalgar in 1805. These conflicts saw the development and refinement of tactics which came to be called the line of battle.
<P> On September 10, 1813, during the War of 1812, nine vessels of the United States Navy under Commodore Oliver Hazard Perry, decisively defeated six vessels of Great Britain’s Royal Navy in the Battle of Lake Erie near Put-in-Bay. This action was one of the major battles of the war.
<P> During this century, the Navy cut its teeth in the Anglo-French War (1627–1629), the Franco-Spanish War (1635–59), the Second Anglo-Dutch War, the Franco-Dutch War, and the Nine Years' War. Major battles in these years include the Battle of Augusta, Battle of Beachy Head, the Battles of Barfleur and La Hougue, the Battle of Lagos, and the Battle of Texel.
<P> The 1700s opened with the War of the Spanish Succession, over a decade long, followed by the War of the Austrian Succession in the 1740s. Principal engagements of these wars include the Battle of Vigo Bay and two separate Battles of Cape Finisterre in 1747. The most grueling conflict for the Navy, however, was the Seven Years' War, in which it was virtually destroyed. Significant actions include the Battle of Cap-Français, the Battle of Quiberon Bay, and another Battle of Cape Finisterre.
<P> The wars of the 18th century produced a series of tactically indecisive naval battles between evenly matched fleets in line ahead, such as Málaga (1704), Rügen Island (1715), Toulon (1744), Minorca (1756), Negapatam (1758), Cuddalore (1758), Pondicherry (1759), Ushant (1778), Dogger Bank (1781), the Chesapeake (1781), Hogland (1788) and Öland (1789). Although a few of these battles had important "strategic" consequences, like the Chesapeake which the British needed to win, all were "tactically" indecisive. Many admirals began to believe that a contest between two equally matched fleets could not produce a decisive result. The tactically decisive actions of the 18th century were all chase actions, where one fleet was clearly superior to the other, such as the two battles of Finisterre (1747), and those at Lagos (1759), Quiberon Bay (1759) and Cape St. Vincent (1780).
<P> The U.S. Navy saw substantial action in the War of 1812, where it was victorious in eleven single-ship duels with the Royal Navy. It drove all significant British forces off Lake Erie and Lake Champlain and prevented them from becoming British-controlled zones. The result was a major defeat for the British invasion of New York state, and the defeat of the military threat from the Native American allies of the British. Despite this, the U.S. Navy was unable to prevent the British from blockading its ports and landing troops. But after the War of 1812 ended in 1815, the U.S. Navy primarily focused its attention on protecting American shipping assets, sending squadrons to the Caribbean, the Mediterranean, where it participated in the Second Barbary War that ended piracy in the region, South America, Africa, and the Pacific. From 1819 to the outbreak of the Civil War, the Africa Squadron operated to suppress the slave trade, seizing 36 slave ships, although its contribution was smaller than that of the much larger British Royal Navy.
<P> During naval operations that were possible preparations for a coordinated French invasion of England, the largest sea battle of the war occurred, on 22 February 1744. This naval battle took place in the Mediterranean off the coast of Toulon, France. A large British fleet under the command of Admiral Thomas Mathews, with Rear Admiral Richard Lestock second in command, was blockading the French coast. A smaller French and Spanish naval force attacked the British blockade and damaged some of the British ships, forcing the British to withdraw and seek repairs. Thus, the British blockade of the French coast was relieved, and the Spanish fleet apparently controlled the Mediterranean Sea. A Spanish squadron took refuge in the harbour at Toulon. The British fleet watched this squadron carefully from a harbour a short distance to the east. On 21 February 1744, the Spanish ships put to sea with a French fleet. Admiral Mathews took his British fleet and attacked the Spanish fleet from 22 February until 23 February 1744 in what has become known as the Battle of Toulon. However, because of miscommunication and possibly treachery on the part of Rear Admiral Lestock, the smaller Spanish fleet was allowed to escape. With the knowledge that a larger French fleet was sailing to the rescue the British ships broke off combat and retreated to the northeast.
| answer: Hi there, while you're right that 18th-century naval battles did not usually end with a large number of sinkings, it was not completely unknown. That said, though, the largest cause of the loss of ships during battle during this time was capture by the enemy. > If that was the case then how would you be able to verify that you are victorious? The main reason would be that you had beaten the enemy fleet into submission, forced it to flee, forced ships to surrender or disabled them completely. The fighting instructions issued to British captains generally contained some version of the phrase "take, sink, burn or destroy" the enemy, and that's generally the order these things would go in. What happened, broadly speaking, in the 18th century was a rethinking of naval tactics to move away from the line-of-battle formations that had led to indecisive results and a move toward tactics that would allow one fleet to overwhelm another. I wrote about this at some length [here](_URL_1_); to quote from that answer: > Nelson had studied these tactics, and saw the potential for their decisive use. In the battle of Cape St. Vincent (1797), when Nelson was still a captain, he broke his ship out of the line of battle without orders so he could engage the Spanish van (front part of their fleet), engaging three Spanish ships with his one and taking two of them as prizes. (The exact number of ships that came to his aid is in dispute, but his 74-gun HMS Captain engaged ships of 130, 112 and 80 guns for a period of time.) Nelson could have been censured for breaking the line without orders, and could quite possibly have lost his ship in the process. The British admiral, Sir John Jervis (later created Earl St. Vincent) did not reprimand Nelson, but also did not mention his action in dispatches. (Nelson himself used his tactics for propaganda purposes, but I'm getting away from the point.) > Nelson also used the tactic of concentrating the strength of his fleet upon a smaller portion of the enemy's line in his tactics at the Battle of the Nile (sometimes called the Battle of Aboukir Bay) in 1798. That battle came after a long and frustrating summer of chasing the French from one end of the Mediterranean to another, which provided Nelson (now an admiral) with the time necessary to meet with his captains and make his tactical intentions known. When Nelson finally caught up with the French fleet, it was at anchor, but he proceeded to attack immediately with the intention of pitting his ships 2-1 or 3-1 against the front of the French line. On his own initiative, Thomas Foley, captain of HMS Goliath, noticed that there was room between the French ships and the shoal water to the west, and passed down the west side of the French line. Other ships followed, so the French line was essentially doubled, allowing the British to anchor, beat ships into submission, weigh anchor and proceed down the line. And: > As the British ships approached, Collingwood's Royal Sovereign, leading the southern column, surged ahead and was the first to engage the enemy, passing just astern of the Spanish admiral's ship Santa Ana. Victory, leading the northern column, was under fire for about 40 minutes from four ships without being able to respond, and the Franco-Spanish guns killed a number of the British crew and shot away the ship's wheel.
Nelson broke the allied line at 12:45, passing astern of Villeneuve's Bucentaure and engaging the French Redoutable; Victory won that battle eventually, killing or wounding all but 99 of the approximately 650-man crew on Redoutable, but Nelson himself suffered a mortal wound from a French musket ball. > The rest of the battle followed essentially as planned, with the British ships passing through the allied line and engaging multiple French and Spanish ships, combining fire whenever possible. The allied van watched the battle unfold, made a small effort to engage, fired a few guns and eventually sailed off. The British captured 22 allied ships, with the loss of none of theirs, but most of their captures were lost in a great storm the night of the battle. So that will hopefully answer part of your question -- despite not sinking any enemy ships at Trafalgar, the British fleet beat 22 Franco-Spanish ships into submission and caused about 16,000 allied casualties, while only losing about 1,600 of their own men. There was no debate over who won that battle. Not to shamelessly self-promote, but if you're interested in naval warfare feel free to poke around [in my profile.](_URL_0_) |
171,102 | 2av39h | does abs shorten stopping distance of a car? | yes and no, you have to understand how ABS works. Generally you have an ABS sensor that detects how fast each wheel is moving. If the wheel is locked it will release brake pressure for a thousandth of a second and you will feel a slight "kick" in the brake pedal. The reason this is helpful is because when your wheels lock up, you have lower friction, and thus less stopping power, than if you had a wheel that was not locked but in constant contact with the road. By preventing them from locking up, ABS limits the amount of time spent skidding and maximizes the amount of time spent near maximum braking power. To answer your question about whether or not it will shorten the braking distance of the car, you have to consider the next best alternative. This is called threshold braking, whereby the driver exerts just the right amount of brake pressure that the wheels don't lock but are at constant max braking. Threshold braking is extremely difficult to master and even harder to figure out on the fly on different road surfaces, especially low-traction conditions like ice or snow. So to answer your question: on a predictable road surface, with a skilled driver who knows the car very well, and a slight bit of luck, ABS may not be the best, because there are a few thousandths of a second where the wheels ARE locked, thus reducing maximal braking. In 99.9% of other cases, ABS will shorten stopping distance, no question. | [
"yes and no, you have to understand how ABS works. Generally you have an ABS sensor that detects how fast each wheel is moving. If the wheel is locked it will release brake pressure for a thousandths of a second and you will feel a slight \"kick\" in the brake pedal. \n\nThe reason this is helpful is because when y... | 3 | [
"yes and no, you have to understand how ABS works. Generally you have an ABS sensor that detects how fast each wheel is moving. If the wheel is locked it will release brake pressure for a thousandths of a second and you will feel a slight \"kick\" in the brake pedal. \n\nThe reason this is helpful is because when y... | 1 | <P> Threshold braking, or a good ABS, generally results in the shortest stopping distance in a straight line. ABS, cadence and interference braking are intended to preserve steering control while braking.
<P> If this distance is greater than the ACDA, they need to slow down. While most experienced drivers develop a broad intuition required by everyday braking, this rule of thumb can still benefit some to recalibrate expectations for rare hard braking, particularly from high speeds. Additional simple corrections can be made to compensate for the environment and driving ability. Read more about the Seconds of Distance to Stop Rule.
<P> On high-traction surfaces such as bitumen, or concrete, many (though not all) ABS-equipped cars are able to attain braking distances better (i.e. shorter) than those that would be possible without the benefit of ABS. In real world conditions, even an alert and experienced driver without ABS would find it difficult to match or improve on the performance of a typical driver with a modern ABS-equipped vehicle. ABS reduces chances of crashing, and/or the severity of impact. The recommended technique for non-expert drivers in an ABS-equipped car, in a typical full-braking emergency, is to press the brake pedal as firmly as possible and, where appropriate, to steer around obstructions. In such situations, ABS will significantly reduce the chances of a skid and subsequent loss of control.
<P> A number of studies show that drivers of vehicles with ABS tend to drive faster, follow closer and brake later, accounting for the failure of ABS to result in any measurable improvement in road safety. The studies were performed in Canada, Denmark, and Germany. A study led by Fred Mannering, a professor of civil engineering at the University of South Florida supports risk compensation, terming it the "offset hypothesis". A study of crashes involving taxicabs in Munich of which half had been equipped with anti-lock brakes noted that crash rate was substantially the same for both types of cab, and concluded this was due to drivers of ABS-equipped cabs taking more risks.
<P> The brakes include a prefill function whereby the pistons in the calipers move the pads into contact with the discs on lift off to minimize delay in the brakes being applied. This combined with the ABS and standard Carbon Ceramic brakes have caused a reduction in stopping distance from 100–0 km/h (62-0 mph) to . Tests have shown the car will stop from 100 km/h (62.1 mph) in or in with run flat tires, from 60 mph (97 km/h) and from 60 mph (97 km/h) with run flat tires.
<P> ABS is an automated system that uses the principles of threshold braking and cadence braking, techniques which were once practised by skillful drivers before ABSes were widespread. ABS operates at a much faster rate and more effectively than most drivers could manage. Although ABS generally offers improved vehicle control and decreases stopping distances on dry and some slippery surfaces, on loose gravel or snow-covered surfaces ABS may significantly increase braking distance, while still improving steering control. Since ABS was introduced in production vehicles, such systems have become increasingly sophisticated and effective. Modern versions may not only prevent wheel lock under braking, but may also alter the front-to-rear brake bias. This latter function, depending on its specific capabilities and implementation, is known variously as electronic brakeforce distribution, traction control system, emergency brake assist, or electronic stability control (ESC).
<P> to "the fastest that you can stop any bike of normal wheelbase is to apply the front brake so hard that the rear wheel is just about to lift off the ground," depending on road conditions, rider skill level, and desired fraction of maximum possible deceleration.
| question: does abs shorten stopping distance of a car? context: <P> Threshold braking, or a good ABS, generally results in the shortest stopping distance in a straight line. ABS, cadence and interference braking are intended to preserve steering control while braking.
<P> If this distance is greater than the ACDA, they need to slow down. While most experienced drivers develop a broad intuition required by everyday braking, this rule of thumb can still benefit some to recalibrate expectations for rare hard braking, particularly from high speeds. Additional simple corrections can be made to compensate for the environment and driving ability. Read more about the Seconds of Distance to Stop Rule.
<P> On high-traction surfaces such as bitumen, or concrete, many (though not all) ABS-equipped cars are able to attain braking distances better (i.e. shorter) than those that would be possible without the benefit of ABS. In real world conditions, even an alert and experienced driver without ABS would find it difficult to match or improve on the performance of a typical driver with a modern ABS-equipped vehicle. ABS reduces chances of crashing, and/or the severity of impact. The recommended technique for non-expert drivers in an ABS-equipped car, in a typical full-braking emergency, is to press the brake pedal as firmly as possible and, where appropriate, to steer around obstructions. In such situations, ABS will significantly reduce the chances of a skid and subsequent loss of control.
<P> A number of studies show that drivers of vehicles with ABS tend to drive faster, follow closer and brake later, accounting for the failure of ABS to result in any measurable improvement in road safety. The studies were performed in Canada, Denmark, and Germany. A study led by Fred Mannering, a professor of civil engineering at the University of South Florida supports risk compensation, terming it the "offset hypothesis". A study of crashes involving taxicabs in Munich of which half had been equipped with anti-lock brakes noted that crash rate was substantially the same for both types of cab, and concluded this was due to drivers of ABS-equipped cabs taking more risks.
<P> The brakes include a prefill function whereby the pistons in the calipers move the pads into contact with the discs on lift off to minimize delay in the brakes being applied. This combined with the ABS and standard Carbon Ceramic brakes have caused a reduction in stopping distance from 100–0 km/h (62-0 mph) to . Tests have shown the car will stop from 100 km/h (62.1 mph) in or in with run flat tires, from 60 mph (97 km/h) and from 60 mph (97 km/h) with run flat tires.
<P> ABS is an automated system that uses the principles of threshold braking and cadence braking, techniques which were once practised by skillful drivers before ABSes were widespread. ABS operates at a much faster rate and more effectively than most drivers could manage. Although ABS generally offers improved vehicle control and decreases stopping distances on dry and some slippery surfaces, on loose gravel or snow-covered surfaces ABS may significantly increase braking distance, while still improving steering control. Since ABS was introduced in production vehicles, such systems have become increasingly sophisticated and effective. Modern versions may not only prevent wheel lock under braking, but may also alter the front-to-rear brake bias. This latter function, depending on its specific capabilities and implementation, is known variously as electronic brakeforce distribution, traction control system, emergency brake assist, or electronic stability control (ESC).
<P> to "the fastest that you can stop any bike of normal wheelbase is to apply the front brake so hard that the rear wheel is just about to lift off the ground," depending on road conditions, rider skill level, and desired fraction of maximum possible deceleration.
| answer: yes and no, you have to understand how ABS works. Generally you have an ABS sensor that detects how fast each wheel is moving. If the wheel is locked it will release brake pressure for a thousandth of a second and you will feel a slight "kick" in the brake pedal. The reason this is helpful is because when your wheels lock up, you have lower friction, and thus less stopping power, than if you had a wheel that was not locked but in constant contact with the road. By preventing them from locking up, ABS limits the amount of time spent skidding and maximizes the amount of time spent near maximum braking power. To answer your question about whether or not it will shorten the braking distance of the car, you have to consider the next best alternative. This is called threshold braking, whereby the driver exerts just the right amount of brake pressure that the wheels don't lock but are at constant max braking. Threshold braking is extremely difficult to master and even harder to figure out on the fly on different road surfaces, especially low-traction conditions like ice or snow. So to answer your question: on a predictable road surface, with a skilled driver who knows the car very well, and a slight bit of luck, ABS may not be the best, because there are a few thousandths of a second where the wheels ARE locked, thus reducing maximal braking. In 99.9% of other cases, ABS will shorten stopping distance, no question. |
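The ABS answer in the row above turns on one physical point: a locked, sliding tyre grips less than a tyre held just short of locking, so time spent skidding lengthens the stop. Below is a minimal sketch of that point only; the friction coefficients, speeds and function names are illustrative assumptions, not taken from the dataset, and the model ignores reaction time, load transfer and ABS cycling losses.

```python
# Toy comparison of stopping distance for a locked (sliding) wheel versus an
# ABS-style wheel held near peak grip, using v^2 = 2 * a * d with a = mu * g.
# All numbers are assumed for illustration.

def stopping_distance(speed_kmh: float, mu: float, g: float = 9.81) -> float:
    """Distance in metres to stop under constant deceleration mu * g."""
    v = speed_kmh / 3.6            # convert km/h to m/s
    return v * v / (2 * mu * g)

if __name__ == "__main__":
    MU_SLIDING = 0.7               # assumed kinetic friction of a locked, skidding tyre
    MU_PEAK = 0.9                  # assumed peak friction an ABS tries to stay near
    for speed in (50, 100):
        locked = stopping_distance(speed, MU_SLIDING)
        modulated = stopping_distance(speed, MU_PEAK)
        print(f"{speed} km/h: locked ~{locked:.1f} m, ABS-style ~{modulated:.1f} m")
```

The gap between the two figures is the margin the answer describes: a perfect threshold braker could in principle match the higher-grip number, while a driver who locks the wheels gets the lower-grip one.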
104,961 | e89b2j | why are older movies (that are based on books) more accurate than newer ones? | This seems anecdotal. Do you have examples? In very broad strokes, older movies have fewer special effects and action scenes, and are more character-driven, with long scenes of dialogue. More like novels. | [
"This seems anecdotal. Do you have examples?\n\nIn very broad strokes, older movies are less special effects, less action scenes, and more character driven, with long scenes of dialogue. More like novels.",
"This really only happens with non pop-culture books. Movies like harry potter and lord of the rings are st... | 2 | [] | 0 | <P> One of the aims of the filmmakers from the beginning of production was to develop the maturity of the films. Chris Columbus stated, "We realised that these movies would get progressively darker. Again, we didn't know "how" dark but we realised that as the kids get older, the movies get a little edgier and darker." This transpired with the succeeding three directors who would work on the series in the following years, with the films beginning to deal with issues such as death, betrayal, prejudice, and political corruption as the series developed narratively and thematically.
<P> The old films suffer technically against today's. The pace of modern films is much faster. The style of acting is different. Those old actors were marvellous, but if you consult the man in the street, he's more interested in seeing a current artist than someone who's been dead for years.
<P> In revisiting the film in the 1970s, Arthur Schlesinger noted that Hollywood films generally age well, revealing an unexpected depth or integrity, but in the case of "Gone with the Wind" time has not treated it kindly. Richard Schickel posits that one measure of a film's quality is to ask what the viewer can remember of it, and the film falls down in this regard: unforgettable imagery and dialogue are simply not present. Stanley Kauffmann, likewise, also found the film to be a largely forgettable experience, claiming he could only remember two scenes vividly. Both Schickel and Schlesinger put this down to it being "badly written", in turn describing the dialogue as "flowery" and possessing a "picture postcard" sensibility. Schickel also believes the film fails as popular art, in that it has limited rewatch value—a sentiment that Kauffmann also concurs with, stating that having watched it twice he hopes "never to see it again: twice is twice as much as any lifetime needs". Both Schickel and Andrew Sarris identify the film's main failing is in possessing a producer's sensibility rather than an artistic one: having gone through so many directors and writers the film does not carry a sense of being "created" or "directed", but rather having emerged "steaming from the crowded kitchen", where the main creative force was a producer's obsession in making the film as literally faithful to the novel as possible.
<P> The old films suffer technically against today's. The pace of modern films is much faster. The style of acting is different. Those old actors were marvelous, but if you consult the man in the street, he's more interested in seeing a current artist than someone who's been dead for years.
<P> In February 2004, a few months following release, the film was voted eighth on "Empire"s "100 Greatest Movies of All Time", compiled from readers' top ten lists. This forced the magazine to abandon its policy of only allowing films being older than a year to be eligible. In 2007, "Total Film" named "The Return of the King" the third best film of the past decade ("Total Film"s publication time), behind "The Matrix" and "Fight Club".
<P> In general, attitudes to what material is suitable for viewing by younger audiences have changed over the years, and this is reflected by the reclassification of older films being re-released on video. For example, a 1913 film given the former A rating could very probably be rated PG today. An extreme example of this is the rating of the horror film "Revenge of the Zombies", with a U certificate upon its video release in the late 1990s, whereas, when it was first examined as a film in 1951, it was given one of the first X ratings. The Bela Lugosi horror film "Island of Lost Souls" was refused a certificate when first submitted in 1932, was granted an X in the 1950s, and a 12 for home video release in 1996 – when submitted for a modern video classification in 2011, it was re-classified as a PG.
<P> All the films have been a success financially and critically, making the franchise one of the major Hollywood "tent-poles" akin to "James Bond", "Star Wars", "Indiana Jones" and "Pirates of the Caribbean". The series is noted by audiences for growing visually darker and more mature as each film was released. However, opinions of the films generally divide book fans, with some preferring the more faithful approach of the first two films and others preferring the more stylised character-driven approach of the later films.
| question: why are older movies (that are based on books) more accurate than newer ones? context: <P> One of the aims of the filmmakers from the beginning of production was to develop the maturity of the films. Chris Columbus stated, "We realised that these movies would get progressively darker. Again, we didn't know "how" dark but we realised that as the kids get older, the movies get a little edgier and darker." This transpired with the succeeding three directors who would work on the series in the following years, with the films beginning to deal with issues such as death, betrayal, prejudice, and political corruption as the series developed narratively and thematically.
<P> The old films suffer technically against today's. The pace of modern films is much faster. The style of acting is different. Those old actors were marvellous, but if you consult the man in the street, he's more interested in seeing a current artist than someone who's been dead for years.
<P> In revisiting the film in the 1970s, Arthur Schlesinger noted that Hollywood films generally age well, revealing an unexpected depth or integrity, but in the case of "Gone with the Wind" time has not treated it kindly. Richard Schickel posits that one measure of a film's quality is to ask what the viewer can remember of it, and the film falls down in this regard: unforgettable imagery and dialogue are simply not present. Stanley Kauffmann, likewise, also found the film to be a largely forgettable experience, claiming he could only remember two scenes vividly. Both Schickel and Schlesinger put this down to it being "badly written", in turn describing the dialogue as "flowery" and possessing a "picture postcard" sensibility. Schickel also believes the film fails as popular art, in that it has limited rewatch value—a sentiment that Kauffmann also concurs with, stating that having watched it twice he hopes "never to see it again: twice is twice as much as any lifetime needs". Both Schickel and Andrew Sarris identify the film's main failing is in possessing a producer's sensibility rather than an artistic one: having gone through so many directors and writers the film does not carry a sense of being "created" or "directed", but rather having emerged "steaming from the crowded kitchen", where the main creative force was a producer's obsession in making the film as literally faithful to the novel as possible.
<P> The old films suffer technically against today's. The pace of modern films is much faster. The style of acting is different. Those old actors were marvelous, but if you consult the man in the street, he's more interested in seeing a current artist than someone who's been dead for years.
<P> In February 2004, a few months following release, the film was voted eighth on "Empire"s "100 Greatest Movies of All Time", compiled from readers' top ten lists. This forced the magazine to abandon its policy of only allowing films being older than a year to be eligible. In 2007, "Total Film" named "The Return of the King" the third best film of the past decade ("Total Film"s publication time), behind "The Matrix" and "Fight Club".
<P> In general, attitudes to what material is suitable for viewing by younger audiences have changed over the years, and this is reflected by the reclassification of older films being re-released on video. For example, a 1913 film given the former A rating could very probably be rated PG today. An extreme example of this is the rating of the horror film "Revenge of the Zombies", with a U certificate upon its video release in the late 1990s, whereas, when it was first examined as a film in 1951, it was given one of the first X ratings. The Bela Lugosi horror film "Island of Lost Souls" was refused a certificate when first submitted in 1932, was granted an X in the 1950s, and a 12 for home video release in 1996 – when submitted for a modern video classification in 2011, it was re-classified as a PG.
<P> All the films have been a success financially and critically, making the franchise one of the major Hollywood "tent-poles" akin to "James Bond", "Star Wars", "Indiana Jones" and "Pirates of the Caribbean". The series is noted by audiences for growing visually darker and more mature as each film was released. However, opinions of the films generally divide book fans, with some preferring the more faithful approach of the first two films and others preferring the more stylised character-driven approach of the later films.
| answer: This seems anecdotal. Do you have examples? In very broad strokes, older movies have fewer special effects and action scenes, and are more character-driven, with long scenes of dialogue. More like novels. |
26,545 | ciu8vw | if female orgasms are better than men’s, then why are men typically the ones who bother females about having sex and not the other way around? | "Better" is subjective. Has anybody ever been biologically male, and then biologically female, and compared the two? I don't think so. So we can't ever actually compare them. | [
"\"Better\" is subjective.\n\nHas anybody ever been biologically male, and then biologically female, and compared the two? I don't think so. So we can't ever actually compare them.",
"Every woman is different regarding whether they thoroughly enjoy sex or not with a lot of various factors (i.e. confidence, belief... | 3 | [
"\"Better\" is subjective.\n\nHas anybody ever been biologically male, and then biologically female, and compared the two? I don't think so. So we can't ever actually compare them.",
"Because men are virtually guaranteed an orgasm and women are not. If you had an orgasm every time you did something, you’d be high... | 2 | <P> Another reason why women are more likely to have low sexual desire and less sexual activity compared to men may be because when enduring in copulation with a male, women's experience of achieving an orgasm is low. Therefore, a females gratification for sexual intercourse may be lower than a males, where a male is able to enjoy sex consistently compared to a female, signifying why males sexual desire is usually higher.
<P> Women fake orgasms more frequently than men, with one survey finding that 26% of women fake an orgasm every time they have sex. This is more than the 25% of women who report consistently having an orgasm during coitus. Women tend to achieve orgasm during intercourse less readily than men because most women require direct clitoral stimulation to achieve orgasm, and not all sexual positions provide access to the clitoris, which often makes orgasms difficult to achieve for women. For women in heterosexual relationships, faking an orgasm can also be based on deference to the man, need for his approval, or feelings of shame or sexual inadequacy.
<P> Wallen K and Lloyd EA stated, "In men, orgasms are under strong selective pressure as orgasms are coupled with ejaculation and thus contribute to male reproductive success. By contrast, women's orgasms in intercourse are highly variable and are under little selective pressure as they are not a reproductive necessity."
<P> A consistent finding across cultures, religions, relationship statuses and sexual orientations is that men tend to experience higher sexual desire discrepancy than women. Men value giving and receiving oral sex more than women do, and men report higher rates of intercourse than women do. Therefore, the higher value placed on sexual acts and the greater desired frequency of sex in men may be another contributing reason why their sexual desire discrepancy is higher than women's overall.
<P> The female sexual response is more varied than that of men, and women are capable of attaining additional or multiple orgasms through further sexual stimulation. However, there are many women who experience clitoral hypersensitivity after orgasm, which can effectively create a refractory period. These women may be capable of further orgasms, but the pain involved in getting there makes the prospect undesirable.
<P> With regard to the ease or difficulty of achieving orgasm, Hite's research (while subject to methodological limitations) showed that most women need clitoral (exterior) stimulation for orgasm, which can be "easy and strong, given the right stimulation" and that the need for clitoral stimulation in addition to knowing one's own body is the reason that most women reach orgasm more easily by masturbation. Replicating Kinsey's findings, studies by scholars such as Peplau, Fingerhut and Beals (2004) and Diamond (2006) indicate that lesbians have orgasms more often and more easily in sexual interactions than heterosexual women do.
<P> Women may also have a number of postcopulatory adaptations to sperm competition. The female orgasm has been suggested as such an adaptation. Physiologically, some researchers have suggested that females may strategically time orgasms in order to selectively retain sperm from extra-pair partners, and this research was supported by evidence that women tend to have a greater number of copulatory orgasms with males of higher genetic quality, measured by lower fluctuating asymmetry of the partners. Psychologically, the female orgasm may be used as a mechanism to signal relationship satisfaction with a partner to reduce the likelihood of him having extra-pair copulations. Indeed, women who suspect infidelity of their partners have been shown to report pretending orgasm more frequently than women who do not suspect infidelity.
| question: if female orgasms are better than men’s, then why are men typically the ones who bother females about having sex and not the other way around? context: <P> Another reason why women are more likely to have low sexual desire and less sexual activity compared to men may be because when enduring in copulation with a male, women's experience of achieving an orgasm is low. Therefore, a females gratification for sexual intercourse may be lower than a males, where a male is able to enjoy sex consistently compared to a female, signifying why males sexual desire is usually higher.
<P> Women fake orgasms more frequently than men, with one survey finding that 26% of women fake an orgasm every time they have sex. This is more than the 25% of women who report consistently having an orgasm during coitus. Women tend to achieve orgasm during intercourse less readily than men because most women require direct clitoral stimulation to achieve orgasm, and not all sexual positions provide access to the clitoris, which often makes orgasms difficult to achieve for women. For women in heterosexual relationships, faking an orgasm can also be based on deference to the man, need for his approval, or feelings of shame or sexual inadequacy.
<P> Wallen K and Lloyd EA stated, "In men, orgasms are under strong selective pressure as orgasms are coupled with ejaculation and thus contribute to male reproductive success. By contrast, women's orgasms in intercourse are highly variable and are under little selective pressure as they are not a reproductive necessity."
<P> A consistent finding across culture, religions, relationship status and sexual orientations is that men tend to experience higher sexual desire discrepancy than women. Men value giving and receiving sex orally more than women and men report higher rates of intercourse than women do. Therefore, due to the higher value placed on sexual acts and the greater desired frequency of sex in men may be another contributing reason as to why their sexual desire discrepancy is higher than women's overall.
<P> The female sexual response is more varied than that of men, and women are capable of attaining additional or multiple orgasms through further sexual stimulation. However, there are many women who experience clitoral hypersensitivity after orgasm, which can effectively create a refractory period. These women may be capable of further orgasms, but the pain involved in getting there makes the prospect undesirable.
<P> With regard to the ease or difficulty of achieving orgasm, Hite's research (while subject to methodological limitations) showed that most women need clitoral (exterior) stimulation for orgasm, which can be "easy and strong, given the right stimulation" and that the need for clitoral stimulation in addition to knowing one's own body is the reason that most women reach orgasm more easily by masturbation. Replicating Kinsey's findings, studies by scholars such as Peplau, Fingerhut and Beals (2004) and Diamond (2006) indicate that lesbians have orgasms more often and more easily in sexual interactions than heterosexual women do.
<P> Women may also have a number of postcopulatory adaptations to sperm competition. The female orgasm has been suggested as such an adaptation. Physiologically, some researchers have suggested that females may strategically time orgasms in order to selectively retain sperm from extra-pair partners, and this research was supported by evidence that women tend to have a greater number of copulatory orgasms with males of higher genetic quality, measured by lower fluctuating asymmetry of the partners. Psychologically, the female orgasm may be used as a mechanism to signal relationship satisfaction with a partner to reduce the likelihood of him having extra-pair copulations. Indeed, women who suspect infidelity of their partners have been shown to report pretending orgasm more frequently than women who do not suspect infidelity.
| answer: "Better" is subjective.Has anybody ever been biologically male, and then biologically female, and compared the two? I don't think so. So we can't ever actually compare them. |
170,403 | 86wyj9 | Do other animals ever get chapped lips? Or is that a distinctly human problem? | Yes. Dogs have been known of occasion to get chapped lips, and I've no doubt other animals can too. You can purchase treatments for chapped lips in dogs on the market, though I don't know of how effective they are. | [
"Yes. Dogs have been known of occasion to get chapped lips, and I've no doubt other animals can too. You can purchase treatments for chapped lips in dogs on the market, though I don't know of how effective they are.",
"Yes, according to the Dog Owners Home Veterinary Handbook, page 233,\n\"In hunting dogs, chappe... | 2 | [
"Yes. Dogs have been known of occasion to get chapped lips, and I've no doubt other animals can too. You can purchase treatments for chapped lips in dogs on the market, though I don't know of how effective they are."
] | 1 | <P> In many non-human mammals, the upper lip and sinus area is associated with whiskers or vibrissae which serve a sensory function. In humans, these whiskers do not exist but there are still sporadic cases where elements of the associated vibrissal capsular muscles or sinus hair muscles can be found. Based on histological studies of the upper lips of 20 cadavers, Tamatsu et al. found that structures resembling such muscles were present in 35% (7/20) of their specimens.
<P> Lip licker's dermatitis, popularly known as perioral dermatitis, is an irritant contact dermatitis on and around the lips due to saliva from repetitive lip licking. Involving children more than adults, the resulting papules, scaling, erythema and occasional fissures and crusting make a well-defined ring around the lips. The rash extends as far as the tongue can reach and frequently spares the angle of the mouth. Unlike periorificial dermatitis, the vermillion border of the lip is often involved and the treatment is simple moisturisers.
<P> Compared to most other mammals, licking has a minor role for humans. The human tongue is relatively short and inflexible, and is not well adapted for either grooming or drinking. Instead, humans prefer to wash themselves using their hands and drink by sucking fluid into their mouth. Humans have much less hair over their skin than most other mammals, and much of that hair is in places which they cannot reach with their own mouth. The presence of sweat glands all over the human body makes licking as a cooling method unnecessary.
<P> In humans, the lips are important for the production of stops and fricatives, in addition to vowels. Nothing, however, suggests that the lips evolved for those reasons. During primate evolution, a shift from nocturnal to diurnal activity in tarsiers, monkeys and apes (the haplorhines) brought with it an increased reliance on vision at the expense of olfaction. As a result, the snout became reduced and the rhinarium or "wet nose" was lost. The muscles of the face and lips consequently became less constrained, enabling their co-option to serve purposes of facial expression.The lips also became thicker, and the oral cavity hidden behind became smaller. The lips also became thicker. "Hence", according to one major authority, "the evolution of mobile, muscular lips, so important to human speech, was the exaptive result of the evolution of diurnality and visual communication in the common ancestor of haplorhines". It is unclear whether our lips have undergone more recent adaptation to the specific requirements of speech.
<P> Habits or conditions that keep the corners of the mouth moist might include chronic lip licking, thumb sucking (or sucking on other objects such as pens, pipes, lollipops), dental cleaning (e.g. flossing), chewing gum, hypersalivation, drooling and mouth breathing. Some consider habitual lip licking or picking to be a form of nervous tic, and do not consider this to be true angular cheilitis, instead calling it "perlèche" (derived from the French word "pourlècher" meaning "to lick one’s lips"), or "factitious cheilitis" is applied to this habit. The term "cheilocandidiasis" describes exfoliative (flaking) lesions of the lips and the skin around the lips, and is caused by a superficial candidal infection due to chronic lip licking. Less severe cases occur during cold, dry weather, and is a form of chapped lips. Individuals may lick their lips in an attempt to provide a temporary moment of relief, only serving to worsen the condition.
<P> The lips are normally symmetrical, pink, smooth, and moist. There should be no growths, lumps, or discoloration of the tissue. Abnormal findings are asymmetricality, cyanosis, a cherry-red or pale color or dryness. Diseases include mucocele, aphthous ulcer, angular stomatitis, carcinoma, cleft lip, leukoplakia, herpes simplex and chelitis.
<P> A congenital lip pit or lip sinus is a congenital disorder characterized by the presence of pits and possibly associated fistulas in the lips. They are often hereditary, and may occur alone or in association with cleft lip and palate, termed Van der Woude syndrome.
| question: Do other animals ever get chapped lips? Or is that a distinctly human problem? context: <P> In many non-human mammals, the upper lip and sinus area is associated with whiskers or vibrissae which serve a sensory function. In humans, these whiskers do not exist but there are still sporadic cases where elements of the associated vibrissal capsular muscles or sinus hair muscles can be found. Based on histological studies of the upper lips of 20 cadavers, Tamatsu et al. found that structures resembling such muscles were present in 35% (7/20) of their specimens.
<P> Lip licker's dermatitis, popularly known as perioral dermatitis, is an irritant contact dermatitis on and around the lips due to saliva from repetitive lip licking. Involving children more than adults, the resulting papules, scaling, erythema and occasional fissures and crusting make a well-defined ring around the lips. The rash extends as far as the tongue can reach and frequently spares the angle of the mouth. Unlike periorificial dermatitis, the vermillion border of the lip is often involved and the treatment is simple moisturisers.
<P> Compared to most other mammals, licking has a minor role for humans. The human tongue is relatively short and inflexible, and is not well adapted for either grooming or drinking. Instead, humans prefer to wash themselves using their hands and drink by sucking fluid into their mouth. Humans have much less hair over their skin than most other mammals, and much of that hair is in places which they cannot reach with their own mouth. The presence of sweat glands all over the human body makes licking as a cooling method unnecessary.
<P> In humans, the lips are important for the production of stops and fricatives, in addition to vowels. Nothing, however, suggests that the lips evolved for those reasons. During primate evolution, a shift from nocturnal to diurnal activity in tarsiers, monkeys and apes (the haplorhines) brought with it an increased reliance on vision at the expense of olfaction. As a result, the snout became reduced and the rhinarium or "wet nose" was lost. The muscles of the face and lips consequently became less constrained, enabling their co-option to serve purposes of facial expression.The lips also became thicker, and the oral cavity hidden behind became smaller. The lips also became thicker. "Hence", according to one major authority, "the evolution of mobile, muscular lips, so important to human speech, was the exaptive result of the evolution of diurnality and visual communication in the common ancestor of haplorhines". It is unclear whether our lips have undergone more recent adaptation to the specific requirements of speech.
<P> Habits or conditions that keep the corners of the mouth moist might include chronic lip licking, thumb sucking (or sucking on other objects such as pens, pipes, lollipops), dental cleaning (e.g. flossing), chewing gum, hypersalivation, drooling and mouth breathing. Some consider habitual lip licking or picking to be a form of nervous tic, and do not consider this to be true angular cheilitis, instead calling it "perlèche" (derived from the French word "pourlècher" meaning "to lick one’s lips"), or "factitious cheilitis" is applied to this habit. The term "cheilocandidiasis" describes exfoliative (flaking) lesions of the lips and the skin around the lips, and is caused by a superficial candidal infection due to chronic lip licking. Less severe cases occur during cold, dry weather, and is a form of chapped lips. Individuals may lick their lips in an attempt to provide a temporary moment of relief, only serving to worsen the condition.
<P> The lips are normally symmetrical, pink, smooth, and moist. There should be no growths, lumps, or discoloration of the tissue. Abnormal findings are asymmetricality, cyanosis, a cherry-red or pale color or dryness. Diseases include mucocele, aphthous ulcer, angular stomatitis, carcinoma, cleft lip, leukoplakia, herpes simplex and chelitis.
<P> A congenital lip pit or lip sinus is a congenital disorder characterized by the presence of pits and possibly associated fistulas in the lips. They are often hereditary, and may occur alone or in association with cleft lip and palate, termed Van der Woude syndrome.
| answer: Yes. Dogs have been known of occasion to get chapped lips, and I've no doubt other animals can too. You can purchase treatments for chapped lips in dogs on the market, though I don't know of how effective they are. |
181,411 | 5xpdaf | why do we stop enjoying some things we used to enjoy when we were younger? | I think we develop a "been there, done that" mentality in ways.I can not watch cartoons now. No matter what the topic. The irony is I just can't pay attention to cartoons. | [
"I think we develop a \"been there, done that\" mentality in ways.\n\nI can not watch cartoons now. No matter what the topic. The irony is I just can't pay attention to cartoons.",
"Dopamine. Actually learned this from Game Theory a while back, great YouTube channel. Basically the feeling of excitement is caused ... | 2 | [
"I think we develop a \"been there, done that\" mentality in ways.\n\nI can not watch cartoons now. No matter what the topic. The irony is I just can't pay attention to cartoons."
] | 1 | <P> Leisure is important across the lifespan and can facilitate a sense of control and self-worth. Older adults, specifically, can benefit from physical, social, emotional, cultural, and spiritual aspects of leisure. Leisure engagement and relationships are commonly central to "successful" and satisfying aging. For example, engaging in leisure with grandchildren can enhance feelings of generativity, whereby older adults can achieve well-being by leaving a legacy beyond themselves for future generations.
<P> The young people have fun, they adventure together and achieve, overcoming their fears, changing their self-perception and feeling important, and because they socialise with others like them they feel like they belong, are more positive, don't feel judged, feel their anxiety reduce and start to think differently about themselves and what they are capable of.
<P> It's very essential for a child to be able to enjoy fun childhood activities because it can help them build a social life, and easily interact with others. Not being able to do these things at a young age will only make it harder to adapt as the child gets older.
<P> But leisure also allows people – without the need of any modern gadgets – to re-connect with family and friends and experience the happiness arising from that interaction such as chatting over a drink or meal.
<P> The term "enjoying a second childhood" can also refer to a non-senile adult's actions of acquiring objects --- particularly "vintage" items that actually were around during his youth, as opposed to just recently-manufactured "recreations" of classic objects --- that he remembers enjoying as a child, such as clothing, books, musical recordings, "period" art or household items, etc. The person may also seek out old acquaintances whom he knew only during his younger days, or visit kiddie-amusement parks to go on rides or just enjoy watching all the children's gleeful innocent pleasure, so that he can remember his own savoring of this happy carefree "ambiance" as a child.
<P> Although adults who engage in excessive amounts of play may find themselves described as "childish" or "young at heart" by less playful adults, play is actually an important activity, regardless of age. Creativity and happiness can result from adult play, where the objective can be more than fun alone, as in adult expression of the arts, or curiosity-driven science. Some adult "hobbies" are examples of such creative play. In creative professions, such as design, playfulness can remove more serious attitudes (such as shame or embarrassment) that impede brainstorming or artistic experimentation in design.
<P> At old age, in an apparent paradoxical fashion, people usually preserve relatively high levels of happiness, even following harsh adversity in the past and in the face of a foreshortened future. Besides this inclination, Shmotkin's studies showed modes whereby older people sorted out positive and negative feelings from their past and buffered fears about their future. In these inquiries, notions of time perspective appeared fully embedded in the adjustment of people to their old age.
| question: why do we stop enjoying some things we used to enjoy when we were younger? context: <P> Leisure is important across the lifespan and can facilitate a sense of control and self-worth. Older adults, specifically, can benefit from physical, social, emotional, cultural, and spiritual aspects of leisure. Leisure engagement and relationships are commonly central to "successful" and satisfying aging. For example, engaging in leisure with grandchildren can enhance feelings of generativity, whereby older adults can achieve well-being by leaving a legacy beyond themselves for future generations.
<P> The young people have fun, they adventure together and achieve, overcoming their fears, changing their self-perception and feeling important, and because they socialise with others like them they feel like they belong, are more positive, don't feel judged, feel their anxiety reduce and start to think differently about themselves and what they are capable of.
<P> It's very essential for a child to be able to enjoy fun childhood activities because it can help them build a social life, and easily interact with others. Not being able to do these things at a young age will only make it harder to adapt as the child gets older.
<P> But leisure also allows people – without the need of any modern gadgets – to re-connect with family and friends and experience the happiness arising from that interaction such as chatting over a drink or meal.
<P> The term "enjoying a second childhood" can also refer to a non-senile adult's actions of acquiring objects --- particularly "vintage" items that actually were around during his youth, as opposed to just recently-manufactured "recreations" of classic objects --- that he remembers enjoying as a child, such as clothing, books, musical recordings, "period" art or household items, etc. The person may also seek out old acquaintances whom he knew only during his younger days, or visit kiddie-amusement parks to go on rides or just enjoy watching all the children's gleeful innocent pleasure, so that he can remember his own savoring of this happy carefree "ambiance" as a child.
<P> Although adults who engage in excessive amounts of play may find themselves described as "childish" or "young at heart" by less playful adults, play is actually an important activity, regardless of age. Creativity and happiness can result from adult play, where the objective can be more than fun alone, as in adult expression of the arts, or curiosity-driven science. Some adult "hobbies" are examples of such creative play. In creative professions, such as design, playfulness can remove more serious attitudes (such as shame or embarrassment) that impede brainstorming or artistic experimentation in design.
<P> At old age, in an apparent paradoxical fashion, people usually preserve relatively high levels of happiness, even following harsh adversity in the past and in the face of a foreshortened future. Besides this inclination, Shmotkin's studies showed modes whereby older people sorted out positive and negative feelings from their past and buffered fears about their future. In these inquiries, notions of time perspective appeared fully embedded in the adjustment of people to their old age.
| answer: I think we develop a "been there, done that" mentality in ways.I can not watch cartoons now. No matter what the topic. The irony is I just can't pay attention to cartoons. |
1,541 | e0a1gy | why does putting the air conditioner on 25°c in a cooling mode feel different from the same 25°c in heating mode? | The unit isn't putting out air at 25 C.If it's in cooling mode, it's putting out very cold air until the ambient temperature reaches 25 C. If it's in heating mode, it's putting out very warm air until the ambient temperature hits 25 C. | [
"In cooling mode, the thermostat will wait until the temperature goes over 25°C and then turn on the AC until it falls back under 25°C. This produces a 'spike' of cold air when the AC is on, followed by the temperature slowly drifting up toward warm.\n\nIn heating mode, the thermostat will wait until the temperatur... | 15 | [
"In cooling mode, the thermostat will wait until the temperature goes over 25°C and then turn on the AC until it falls back under 25°C. This produces a 'spike' of cold air when the AC is on, followed by the temperature slowly drifting up toward warm.\n\nIn heating mode, the thermostat will wait until the temperatur... | 7 | <P> Switching the direction of heat flow, the same system can be used to circulate the cooled water through the house for cooling in the summer months. The heat is exhausted to the relatively cooler ground (or groundwater) rather than delivering it to the hot outside air as an air conditioner does. As a result, the heat is pumped across a larger temperature difference and this leads to higher efficiency and lower energy use.
<P> The air conditioning chillers' efficiency is measured by their coefficient of performance (COP). In theory, thermal storage systems could make chillers more efficient because heat is discharged into colder nighttime air rather than warmer daytime air. In practice, heat loss overpowers this advantage, since it melts the ice.
<P> An HVAC (heating, ventilating, and air conditioning) cooling tower is used to dispose of ("reject") unwanted heat from a chiller. Water-cooled chillers are normally more energy efficient than air-cooled chillers due to heat rejection to tower water at or near wet-bulb temperatures. Air-cooled chillers must reject heat at the higher dry-bulb temperature, and thus have a lower average reverse-Carnot cycle effectiveness. In areas with a hot climate, large office buildings, hospitals, and schools typically use one or more cooling towers as part of their air conditioning systems. Generally, industrial cooling towers are much larger than HVAC towers.
<P> Air Handling units (AHU) and Roof Top units (RTU) that serve multiple zones should vary the DISCHARGE AIR TEMPERATURE SET POINT VALUE automatically in the range 55 F to 70 F. This adjustment reduces the cooling, heating, and fan energy consumption. When the outside temperature is below 70 F, for zones with very low cooling loads, raising the supply-air temperature decreases the use of reheat at the zone level.
<P> In warm climates where air conditioning is used, any household device that gives off heat will result in a larger load on the cooling system. Items such as stoves, dish washers, clothes dryers, hot water and incandescent lighting all add heat to the home. Low-power or insulated versions of these devices give off less heat for the air conditioning to remove. The air conditioning system can also improve in efficiency by using a heat sink that is cooler than the standard air heat exchanger, such as geothermal or water.
<P> In air conditioning, chilled water is often used to cool a building's air and equipment, especially in situations where many individual rooms must be controlled separately, such as a hotel. A chiller lowers water temperature to between 40° and 45°F before the water is pumped to the location to be cooled.
<P> When there is a high temperature differential (e.g., when an air-source heat pump is used to heat a house with an outside temperature of, say, 0 °C (32 °F)), it takes more work to move the same amount of heat to indoors than on a milder day. Ultimately, due to Carnot efficiency limits, the heat pump's performance will decrease as the outdoor-to-indoor temperature difference increases (outside temperature gets colder), reaching a theoretical limit of 1.0 at −273 °C. In practice, a COP of 1.0 will typically be reached at an outdoor temperature around −18 °C (0 °F) for air source heat pumps.
| question: why does putting the air conditioner on 25°c in a cooling mode feel different from the same 25°c in heating mode? context: <P> Switching the direction of heat flow, the same system can be used to circulate the cooled water through the house for cooling in the summer months. The heat is exhausted to the relatively cooler ground (or groundwater) rather than delivering it to the hot outside air as an air conditioner does. As a result, the heat is pumped across a larger temperature difference and this leads to higher efficiency and lower energy use.
<P> The air conditioning chillers' efficiency is measured by their coefficient of performance (COP). In theory, thermal storage systems could make chillers more efficient because heat is discharged into colder nighttime air rather than warmer daytime air. In practice, heat loss overpowers this advantage, since it melts the ice.
<P> An HVAC (heating, ventilating, and air conditioning) cooling tower is used to dispose of ("reject") unwanted heat from a chiller. Water-cooled chillers are normally more energy efficient than air-cooled chillers due to heat rejection to tower water at or near wet-bulb temperatures. Air-cooled chillers must reject heat at the higher dry-bulb temperature, and thus have a lower average reverse-Carnot cycle effectiveness. In areas with a hot climate, large office buildings, hospitals, and schools typically use one or more cooling towers as part of their air conditioning systems. Generally, industrial cooling towers are much larger than HVAC towers.
<P> Air Handling units (AHU) and Roof Top units (RTU) that serve multiple zones should vary the DISCHARGE AIR TEMPERATURE SET POINT VALUE automatically in the range 55 F to 70 F. This adjustment reduces the cooling, heating, and fan energy consumption. When the outside temperature is below 70 F, for zones with very low cooling loads, raising the supply-air temperature decreases the use of reheat at the zone level.
<P> In warm climates where air conditioning is used, any household device that gives off heat will result in a larger load on the cooling system. Items such as stoves, dish washers, clothes dryers, hot water and incandescent lighting all add heat to the home. Low-power or insulated versions of these devices give off less heat for the air conditioning to remove. The air conditioning system can also improve in efficiency by using a heat sink that is cooler than the standard air heat exchanger, such as geothermal or water.
<P> In air conditioning, chilled water is often used to cool a building's air and equipment, especially in situations where many individual rooms must be controlled separately, such as a hotel. A chiller lowers water temperature to between 40° and 45°F before the water is pumped to the location to be cooled.
<P> When there is a high temperature differential (e.g., when an air-source heat pump is used to heat a house with an outside temperature of, say, 0 °C (32 °F)), it takes more work to move the same amount of heat to indoors than on a milder day. Ultimately, due to Carnot efficiency limits, the heat pump's performance will decrease as the outdoor-to-indoor temperature difference increases (outside temperature gets colder), reaching a theoretical limit of 1.0 at −273 °C. In practice, a COP of 1.0 will typically be reached at an outdoor temperature around −18 °C (0 °F) for air source heat pumps.
| answer: The unit isn't putting out air at 25 C.If it's in cooling mode, it's putting out very cold air until the ambient temperature reaches 25 C. If it's in heating mode, it's putting out very warm air until the ambient temperature hits 25 C. |
125,001 | 51vqf7 | how do dark net vendors ship illegal things internationally without getting caught? | I mean, senders can just put a fake return address on the package. Plus most mail doesn't get looked at in such detail... | [
"I mean, senders can just put a fake return address on the package. Plus most mail doesn't get looked at in such detail...",
"The vast majority of parcels aren't inspected, unless the sender and receiver name or address come up in the database.\n\n",
"Because USPS mail isn't validated by origin address. When ... | 3 | [
"I mean, senders can just put a fake return address on the package. Plus most mail doesn't get looked at in such detail..."
] | 1 | <P> Smuggling is a risky but often very profitable venture. Illegal commodities may be sold to other players or to the black market available on any planet or a starbase with a population over 30,000. There is a chance a player will be detected by the authorities when selling to a black market, however, which might result in a faction bounty. EPS pilots may have equipment that can detect illegal contraband on ships and will turn you over to authorities.
<P> In buying counterfeit goods directly from other smaller sellers, location is becoming less a factor, since consumers can purchase products from all over the world and have them delivered straight to their doors by regular carriers, such as USPS, FedEx and UPS. Whereas in previous years international counterfeiters had to transport most counterfeits through large cargo shipments, criminals now can use small parcel mail to avoid most inspections.
<P> There have been reports and suspicions of illegal trading in the past. In 1997, a devil turned up in Western Australia; it had not escaped from any licensed keeper. During the 1990s there were internet sites in the US that were advertising devil sales, and rumours that some US Navy personnel had tried to buy them illegally during a visit to Tasmania.
<P> Vendors sold illicit product or services to other members of the organization which would usually be done through the vendor's website. Products or services would be reviewed by members to ensure that products which were purchased were of high quality and vendors of low-quality products or services did not remain in the organization.
<P> BULLET::::- Basic statistics on international trade normally do not record smuggled goods or international flows of illegal services. A small fraction of the smuggled goods and illegal services may nevertheless be included in official trade statistics through dummy shipments or dummy declarations that serve to conceal the illegal nature of the activities.
<P> The traders sent (export) their goods to the agents who on the behalf of traders sold them. Sending goods to the agents by road or sea involves different risks i.e. sea storms, pirate attack; goods may be damaged due to poor handling while loading and unloading, etc. Traders exploited different measures to hedge the risk involved in the exporting. Instead of sending all the goods on one ship/truck, they used to send their goods over number of vessels to avoid the total loss of shipment if the vessel was caught in a sea storm, fire, pirate, or came under enemy attacks but this was not good practice due to prolonged time and efforts involved.
<P> Fraudulent vendors are referred to as 'rippers', vendors who take buyer's money then never deliver. This is increasingly mitigated via forum and store based feedback systems as well as through strict site invitation and referral policies.
| question: how do dark net vendors ship illegal things internationally without getting caught? context: <P> Smuggling is a risky but often very profitable venture. Illegal commodities may be sold to other players or to the black market available on any planet or a starbase with a population over 30,000. There is a chance a player will be detected by the authorities when selling to a black market, however, which might result in a faction bounty. EPS pilots may have equipment that can detect illegal contraband on ships and will turn you over to authorities.
<P> In buying counterfeit goods directly from other smaller sellers, location is becoming less a factor, since consumers can purchase products from all over the world and have them delivered straight to their doors by regular carriers, such as USPS, FedEx and UPS. Whereas in previous years international counterfeiters had to transport most counterfeits through large cargo shipments, criminals now can use small parcel mail to avoid most inspections.
<P> There have been reports and suspicions of illegal trading in the past. In 1997, a devil turned up in Western Australia; it had not escaped from any licensed keeper. During the 1990s there were internet sites in the US that were advertising devil sales, and rumours that some US Navy personnel had tried to buy them illegally during a visit to Tasmania.
<P> Vendors sold illicit product or services to other members of the organization which would usually be done through the vendor's website. Products or services would be reviewed by members to ensure that products which were purchased were of high quality and vendors of low-quality products or services did not remain in the organization.
<P> BULLET::::- Basic statistics on international trade normally do not record smuggled goods or international flows of illegal services. A small fraction of the smuggled goods and illegal services may nevertheless be included in official trade statistics through dummy shipments or dummy declarations that serve to conceal the illegal nature of the activities.
<P> The traders sent (export) their goods to the agents who on the behalf of traders sold them. Sending goods to the agents by road or sea involves different risks i.e. sea storms, pirate attack; goods may be damaged due to poor handling while loading and unloading, etc. Traders exploited different measures to hedge the risk involved in the exporting. Instead of sending all the goods on one ship/truck, they used to send their goods over number of vessels to avoid the total loss of shipment if the vessel was caught in a sea storm, fire, pirate, or came under enemy attacks but this was not good practice due to prolonged time and efforts involved.
<P> Fraudulent vendors are referred to as 'rippers', vendors who take buyer's money then never deliver. This is increasingly mitigated via forum and store based feedback systems as well as through strict site invitation and referral policies.
| answer: I mean, senders can just put a fake return address on the package. Plus most mail doesn't get looked at in such detail... |
73,614 | 4l1f1t | how does taking medicine on a full stomach help avoid discomfort? | It doesn't always help. To make it really simple, there are certain drugs that when they dissolve can cause various changes in your GI tract. For example, NSAIDs. Their function suppresses enzymes, some of which help protect your stomach lining. If you take them without food (especially if you swallow them without liquid) the pill itself can cause damage to the lining of your stomach, which will allow the acid to destroy the tissue and cause an ulcer. Some medications cause changes in your brain or affect chemical receptors elsewhere in your body, which causes the nausea. Eating with these medications probably won't prevent these symptoms, but for some people having an empty stomach by itself makes them nauseous so a little food may improve their condition. In other words, food affects how many drugs breakdown and are absorbed into your body. If a doctor/pharmacist tells you to take it on a full stomach, you probably should. | [
"It doesn't always help. To make it really simple, there are certain drugs that when they dissolve can cause various changes in your GI tract. For example, NSAIDs. Their function suppresses enzymes, some of which help protect your stomach lining. If you take them without food (especially if you swallow them without... | 1 | [] | 0 | <P> One of the most causes of chronic stomach problems is use of medications. Use of aspirin and other non-steroidal anti-inflammatory drugs to treat various pain disorders can damage lining of the stomach and cause ulcers. Other medications like narcotics can interfere with stomach emptying and cause bloating, nausea, or vomiting.
<P> The other complaint is that the medicines must be taken on an empty stomach to facilitate absorption. This can be difficult for patients to follow (for example, shift workers who take their meals at irregular times) and may mean the patient waking up an hour earlier than usual everyday just to take medication. The rules are actually less stringent than many physicians and pharmacists realise: the issue is that the absorption of RMP is reduced if taken with fat, but is unaffected by carbohydrate, protein, or antacids. So the patient can in fact have his or her medication with food as long as the meal does not contain fat or oils (e.g., a cup of black coffee or toast with jam and no butter). Taking the medicines with food also helps ease the nausea that many patients feel when taking the medicines on an empty stomach. The effect of food on the absorption of INH is not clear: two studies have shown reduced absorption with food but one study showed no difference. There is a small effect of food on the absorption of PZA and of EMB that is probably not clinically important.
<P> Whenever the stomach contracts, it also closes off the esophagus instead of squeezing stomach acids into it. This prevents the reflux of gastric acid (in GERD). Although antacids and PPI drug therapy can reduce the effects of reflux acid, successful surgical treatment has the advantage of eliminating drug side-effects and damaging effects from other components of reflux such as bile or gastric contents.
<P> Stomachic is a historic term for a medicine that serves to tone the stomach, improving its function and increase appetite. While many herbal remedies claim stomachic effects, modern pharmacology does not have an equivalent term for this type of action.
<P> Antimotility drugs such as loperamide and diphenoxylate reduce the symptoms of diarrhea by slowing transit time in the gut. They may be taken to slow the frequency of stools, but not enough to stop bowel movements completely, which delays expulsion of the causative organisms from the intestines. They should be avoided in patients with fever, bloody diarrhea, and possible inflammatory diarrhea. Adverse reactions may include nausea, vomiting, abdominal pain, hives or rash, and loss of appetite. Antimotility agents should not, as a rule, be taken by children under age two.
<P> Pain is severe, frequently out of proportion to physical signs, and often requires the use of opiates to reduce it to tolerable levels. Pain should be treated as early as medically possible. Nausea can be severe; it may respond to phenothiazine drugs but is sometimes intractable. Hot baths and showers may lessen nausea temporarily, though caution should be used to avoid burns or falls.
<P> Lixisenatide should not be used for people who have problems with stomach emptying. Lixisenatide delays emptying of the stomach, which may change how quickly other drugs that are taken by mouth take effect.
| question: how does taking medicine on a full stomach help avoid discomfort? context: <P> One of the most causes of chronic stomach problems is use of medications. Use of aspirin and other non-steroidal anti-inflammatory drugs to treat various pain disorders can damage lining of the stomach and cause ulcers. Other medications like narcotics can interfere with stomach emptying and cause bloating, nausea, or vomiting.
<P> The other complaint is that the medicines must be taken on an empty stomach to facilitate absorption. This can be difficult for patients to follow (for example, shift workers who take their meals at irregular times) and may mean the patient waking up an hour earlier than usual everyday just to take medication. The rules are actually less stringent than many physicians and pharmacists realise: the issue is that the absorption of RMP is reduced if taken with fat, but is unaffected by carbohydrate, protein, or antacids. So the patient can in fact have his or her medication with food as long as the meal does not contain fat or oils (e.g., a cup of black coffee or toast with jam and no butter). Taking the medicines with food also helps ease the nausea that many patients feel when taking the medicines on an empty stomach. The effect of food on the absorption of INH is not clear: two studies have shown reduced absorption with food but one study showed no difference. There is a small effect of food on the absorption of PZA and of EMB that is probably not clinically important.
<P> Whenever the stomach contracts, it also closes off the esophagus instead of squeezing stomach acids into it. This prevents the reflux of gastric acid (in GERD). Although antacids and PPI drug therapy can reduce the effects of reflux acid, successful surgical treatment has the advantage of eliminating drug side-effects and damaging effects from other components of reflux such as bile or gastric contents.
<P> Stomachic is a historic term for a medicine that serves to tone the stomach, improving its function and increase appetite. While many herbal remedies claim stomachic effects, modern pharmacology does not have an equivalent term for this type of action.
<P> Antimotility drugs such as loperamide and diphenoxylate reduce the symptoms of diarrhea by slowing transit time in the gut. They may be taken to slow the frequency of stools, but not enough to stop bowel movements completely, which delays expulsion of the causative organisms from the intestines. They should be avoided in patients with fever, bloody diarrhea, and possible inflammatory diarrhea. Adverse reactions may include nausea, vomiting, abdominal pain, hives or rash, and loss of appetite. Antimotility agents should not, as a rule, be taken by children under age two.
<P> Pain is severe, frequently out of proportion to physical signs, and often requires the use of opiates to reduce it to tolerable levels. Pain should be treated as early as medically possible. Nausea can be severe; it may respond to phenothiazine drugs but is sometimes intractable. Hot baths and showers may lessen nausea temporarily, though caution should be used to avoid burns or falls.
<P> Lixisenatide should not be used for people who have problems with stomach emptying. Lixisenatide delays emptying of the stomach, which may change how quickly other drugs that are taken by mouth take effect.
| answer: It doesn't always help. To make it really simple, there are certain drugs that when they dissolve can cause various changes in your GI tract. For example, NSAIDs. Their function suppresses enzymes, some of which help protect your stomach lining. If you take them without food (especially if you swallow them without liquid) the pill itself can cause damage to the lining of your stomach, which will allow the acid to destroy the tissue and cause an ulcer. Some medications cause changes in your brain or affect chemical receptors elsewhere in your body, which causes the nausea. Eating with these medications probably won't prevent these symptoms, but for some people having an empty stomach by itself makes them nauseous so a little food may improve their condition. In other words, food affects how many drugs breakdown and are absorbed into your body. If a doctor/pharmacist tells you to take it on a full stomach, you probably should. |
32,727 | cpl4n7 | How true is the claim that “the Japanese were going to surrender in 1945 anyways but the US pushing for unconditional surrender threatened remove or arrest the Emperor which made the Japanese not want to surrender”? | Hi there -- while there's always more to be said on this, you may be interested in [this section of our FAQ](_URL_0_). | [
"Hi there -- while there's always more to be said on this, you may be interested in [this section of our FAQ](_URL_0_)."
] | 1 | [] | 0 | <P> That these flights were possible a few days after Japan's surrender was the result of a lack of clarity about what had occurred. Although Japan had unconditionally surrendered, when Emperor Hirohito had made his announcement over the radio, he had used formal Japanese, not entirely intelligible to ordinary people and, instead of using the word "surrender" (in Japanese), had mentioned only "abiding by the terms of the Potsdam Declaration." Consequently, many people, especially in Japanese-occupied territories, were unsure if anything had significantly changed, allowing a window of a few days for the Japanese air force to continue flying. Although the Japanese and Bose were tight lipped about the destination of the bomber, it was widely assumed by Bose's staff left behind on the tarmac in Saigon that the plane was bound for Dairen on the Manchurian peninsula, which, as stated above, was still under Japanese control. Bose had been talking for over a year about the importance of making contact with the communists, both Russian and Chinese. In 1944, he had asked a minister in his cabinet, Anand Mohan Sahay to travel to Tokyo for the purposes of making contact with the Soviet ambassador, Jacob Malik. However, after consulting the Japanese foreign minister Mamoru Shigemitsu, Sahay decided against it. In May 1945, Sahay had again written to Shigemitsu requesting him to contact Soviet authorities on behalf of Bose; again the reply had been in the negative. Bose had been continually querying General Isoda for over a year about the Japanese army's readiness in Manchuria. After the war, the Japanese confirmed to the British investigators and later Indian commissions of inquiry, that plane was indeed bound for Dairen, and that fellow passenger General Shidea of the Kwantung Army, was to have disembarked with Bose in Dairen and to have served as the main liaison and negotiator for Bose's transfer into Soviet controlled territory in Manchuria.
<P> Hoping to forestall the invasion, the United States, the United Kingdom, and the Republic of China issued a Potsdam Declaration on 26 July 1945, demanding that the Japanese government accept an unconditional surrender. The declaration also stated that if Japan did not surrender, it would be faced with "prompt and utter destruction", a process which was already underway with the incendiary bombing raids destroying 40% of targeted cities, and by naval warfare isolating and starving Japan of imported food. The Japanese government ignored ("mokusatsu") this ultimatum, thus signalling that they were not going to surrender.
<P> Meanwhile, many parties continue to debate the broader question of "why Japan surrendered", attributing the surrender to a number of possible reasons including: the atomic bombings, the Soviet invasion of Manchuria, and Japan's depleted resources.
<P> The surrender terms offered by the United States were scorned by the newspapers as ludicrous, urging that the government remain silent about them, which indeed, the government did, a traditional Japanese technique for dealing with the unacceptable.
<P> The call for unconditional surrender was rejected by the Japanese government, which believed it would be capable of negotiating for more favourable surrender terms. In early August, the United States dropped atomic bombs on the Japanese cities of Hiroshima and Nagasaki. Between the two bombings, the Soviets, pursuant to the Yalta agreement, invaded Japanese-held Manchuria and quickly defeated the Kwantung Army, which was the largest Japanese fighting force, thereby persuading previously adamant Imperial Army leaders to accept surrender terms. The Red Army also captured the southern part of Sakhalin Island and the Kuril Islands. On 15 August 1945, Japan surrendered, with the surrender documents finally signed at Tokyo Bay on the deck of the American battleship USS "Missouri" on 2 September 1945, ending the war.
<P> The surrender of Imperial Japan was announced by Hirohito on August 15 and formally signed on September 2, 1945, bringing the hostilities of World War II to a close. By the end of July 1945, the Imperial Japanese Navy (IJN) was incapable of conducting major operations and an Allied invasion of Japan was imminent. Together with the British Empire and China, the United States called for the unconditional surrender of the Japanese armed forces in the Potsdam Declaration on July 26, 1945—the alternative being "prompt and utter destruction". While publicly stating their intent to fight on to the bitter end, Japan's leaders (the Supreme Council for the Direction of the War, also known as the "Big Six") were privately making entreaties to the publicly neutral Soviet Union to mediate peace on terms more favorable to the Japanese. While maintaining a sufficient level of diplomatic engagement with the Japanese to give them the impression they might be willing to mediate, the Soviets were covertly preparing to attack Japanese forces in Manchuria and Korea (in addition to South Sakhalin and the Kuril Islands) in fulfillment of promises they had secretly made to the United States and the United Kingdom at the Tehran and Yalta Conferences.
<P> The air attacks on Japan had crippled her ability to wage war but the Japanese had not surrendered. On July 26, 1945, United States President Harry S. Truman, United Kingdom Prime Minister Winston Churchill, and Chairman of the Chinese Nationalist Government Chiang Kai-shek issued the Potsdam Declaration, which outlined the terms of surrender for the Empire of Japan as agreed upon at the Potsdam Conference. This ultimatum stated if Japan did not surrender, she would face "prompt and utter destruction." The Japanese government ignored this ultimatum ("Mokusatsu", "kill by silence"), and vowed to continue resisting an anticipated Allied invasion of Japan. On August 6, 1945, the "Little Boy" enriched uranium atomic bomb was dropped on the city of Hiroshima, followed on August 9 by the detonation of the "Fat Man" plutonium core atomic bomb over Nagasaki. Both cities were destroyed with enormous loss of life and psychological shock. On August 15, Emperor Hirohito announced the surrender of Japan, stating:
| question: How true is the claim that “the Japanese were going to surrender in 1945 anyways but the US pushing for unconditional surrender threatened remove or arrest the Emperor which made the Japanese not want to surrender”? context: <P> That these flights were possible a few days after Japan's surrender was the result of a lack of clarity about what had occurred. Although Japan had unconditionally surrendered, when Emperor Hirohito had made his announcement over the radio, he had used formal Japanese, not entirely intelligible to ordinary people and, instead of using the word "surrender" (in Japanese), had mentioned only "abiding by the terms of the Potsdam Declaration." Consequently, many people, especially in Japanese-occupied territories, were unsure if anything had significantly changed, allowing a window of a few days for the Japanese air force to continue flying. Although the Japanese and Bose were tight lipped about the destination of the bomber, it was widely assumed by Bose's staff left behind on the tarmac in Saigon that the plane was bound for Dairen on the Manchurian peninsula, which, as stated above, was still under Japanese control. Bose had been talking for over a year about the importance of making contact with the communists, both Russian and Chinese. In 1944, he had asked a minister in his cabinet, Anand Mohan Sahay to travel to Tokyo for the purposes of making contact with the Soviet ambassador, Jacob Malik. However, after consulting the Japanese foreign minister Mamoru Shigemitsu, Sahay decided against it. In May 1945, Sahay had again written to Shigemitsu requesting him to contact Soviet authorities on behalf of Bose; again the reply had been in the negative. Bose had been continually querying General Isoda for over a year about the Japanese army's readiness in Manchuria. After the war, the Japanese confirmed to the British investigators and later Indian commissions of inquiry, that plane was indeed bound for Dairen, and that fellow passenger General Shidea of the Kwantung Army, was to have disembarked with Bose in Dairen and to have served as the main liaison and negotiator for Bose's transfer into Soviet controlled territory in Manchuria.
<P> Hoping to forestall the invasion, the United States, the United Kingdom, and the Republic of China issued a Potsdam Declaration on 26 July 1945, demanding that the Japanese government accept an unconditional surrender. The declaration also stated that if Japan did not surrender, it would be faced with "prompt and utter destruction", a process which was already underway with the incendiary bombing raids destroying 40% of targeted cities, and by naval warfare isolating and starving Japan of imported food. The Japanese government ignored ("mokusatsu") this ultimatum, thus signalling that they were not going to surrender.
<P> Meanwhile, many parties continue to debate the broader question of "why Japan surrendered", attributing the surrender to a number of possible reasons including: the atomic bombings, the Soviet invasion of Manchuria, and Japan's depleted resources.
<P> The surrender terms offered by the United States were scorned by the newspapers as ludicrous, urging that the government remain silent about them, which indeed, the government did, a traditional Japanese technique for dealing with the unacceptable.
<P> The call for unconditional surrender was rejected by the Japanese government, which believed it would be capable of negotiating for more favourable surrender terms. In early August, the United States dropped atomic bombs on the Japanese cities of Hiroshima and Nagasaki. Between the two bombings, the Soviets, pursuant to the Yalta agreement, invaded Japanese-held Manchuria and quickly defeated the Kwantung Army, which was the largest Japanese fighting force, thereby persuading previously adamant Imperial Army leaders to accept surrender terms. The Red Army also captured the southern part of Sakhalin Island and the Kuril Islands. On 15 August 1945, Japan surrendered, with the surrender documents finally signed at Tokyo Bay on the deck of the American battleship USS "Missouri" on 2 September 1945, ending the war.
<P> The surrender of Imperial Japan was announced by Hirohito on August 15 and formally signed on September 2, 1945, bringing the hostilities of World War II to a close. By the end of July 1945, the Imperial Japanese Navy (IJN) was incapable of conducting major operations and an Allied invasion of Japan was imminent. Together with the British Empire and China, the United States called for the unconditional surrender of the Japanese armed forces in the Potsdam Declaration on July 26, 1945—the alternative being "prompt and utter destruction". While publicly stating their intent to fight on to the bitter end, Japan's leaders (the Supreme Council for the Direction of the War, also known as the "Big Six") were privately making entreaties to the publicly neutral Soviet Union to mediate peace on terms more favorable to the Japanese. While maintaining a sufficient level of diplomatic engagement with the Japanese to give them the impression they might be willing to mediate, the Soviets were covertly preparing to attack Japanese forces in Manchuria and Korea (in addition to South Sakhalin and the Kuril Islands) in fulfillment of promises they had secretly made to the United States and the United Kingdom at the Tehran and Yalta Conferences.
<P> The air attacks on Japan had crippled her ability to wage war but the Japanese had not surrendered. On July 26, 1945, United States President Harry S. Truman, United Kingdom Prime Minister Winston Churchill, and Chairman of the Chinese Nationalist Government Chiang Kai-shek issued the Potsdam Declaration, which outlined the terms of surrender for the Empire of Japan as agreed upon at the Potsdam Conference. This ultimatum stated if Japan did not surrender, she would face "prompt and utter destruction." The Japanese government ignored this ultimatum ("Mokusatsu", "kill by silence"), and vowed to continue resisting an anticipated Allied invasion of Japan. On August 6, 1945, the "Little Boy" enriched uranium atomic bomb was dropped on the city of Hiroshima, followed on August 9 by the detonation of the "Fat Man" plutonium core atomic bomb over Nagasaki. Both cities were destroyed with enormous loss of life and psychological shock. On August 15, Emperor Hirohito announced the surrender of Japan, stating:
| answer: Hi there -- while there's always more to be said on this, you may be interested in [this section of our FAQ](_URL_0_). |
82,061 | 2t8g5i | Can catalytic compounds act to decrease the rate of reaction at higher concentrations? | This sounds like a textbook case of transport limitations. The catalyst by definition will *always* reduce the activation energy and increase the reaction rate. If you remember your rate laws: rate = k*[concentration]^(apparent rate order), where k is a rate constant, often expressed by a power law: k = A*exp(-Ea/RT), where Ea is the activation energy of the reaction, and A is a fitted or derived prefactor. A catalyst operates by lowering Ea; no dependence on concentration. However, at higher concentrations what is probably happening is that the reactant can't reach the catalyst. The reactant near the catalyst is probably reacting very quickly, but then not leaving fast enough, forming little local regions of very low reactant concentration near the catalyst particles. At low concentrations, this cloud isn't as dense, allowing something more like the unblocked reaction rate. Given your liquid system this is a little unusual, but possible depending on relative diffusion rates of the species involved. So in your experiment at higher concentrations you're not measuring the chemical reaction rate; you're actually measuring a composite of the chemical reaction rate and the transport rate to the active site, which is significantly lower. This is a particularly strong problem if you are using a heterogeneous catalyst, where the reactant needs to penetrate pores to reach the catalyst active sites. Solving this problem is actually still an ongoing task in Chemical Engineering research, and of huge importance to both the petrochemical and green chemistry industries. For a homogeneous system, ways to remedy this problem include increasing the catalyst amount (not what we want normally, because it's often expensive), sonication, heating, and vigorous stirring. | [
"This sounds like a textbook case of transport limitations. The catalyst by definition will *always* reduce the activation energy and increase the reaction rate. If you remember your rate laws:\n\nrate=k*[concentration]^(apparent rate order)\nwhere k is a rate constant, often expressed by a power law:\nk = A*exp(-E... | 1 | [] | 0 | <P> The mechanisms of chemical reactions can be investigated by observing how the kinetics of a reaction is changed by making an isotopic modification of a substrate, known as the kinetic isotope effect. This is now a standard method in organic chemistry. Briefly, replacing normal hydrogen (protons) by deuterium within a molecule causes the molecular vibrational frequency of X-H (for example C-H, N-H and O-H) bonds to decrease, which leads to a decrease in vibrational zero-point energy. This can lead to a decrease in the reaction rate if the rate-determining step involves breaking a bond between hydrogen and another atom. Thus, if the reaction changes in rate when protons are replaced by deuteriums, it is reasonable to assume that the breaking of the bond to hydrogen is part of the step which determines the rate.
<P> Enzyme-substrate interactions align the reactive chemical groups and hold them close together in an optimal geometry, which increases the rate of the reaction. This reduces the entropy of the reactants and thus makes addition or transfer reactions less unfavorable, since a reduction in the overall entropy occurs when two reactants become a single product. However, this is a general effect and is seen in non-addition or transfer reactions as well, where it occurs due to an increase in the "effective concentration" of the reagents. This is understood when considering how increases in concentration lead to increases in reaction rate: essentially, when the reactants are more concentrated, they collide more often and so react more often. In enzyme catalysis, the binding of the reagents to the enzyme restricts the conformational space of the reactants, holding them in the 'proper orientation' and close to each other, so that they collide more frequently, and with the correct geometry, to facilitate the desired reaction. The "effective concentration" is the concentration the reactant would have to be, free in solution, to experience the same collisional frequency. Often such theoretical effective concentrations are unphysical and impossible to realize in reality - which is a testament to the great catalytic power of many enzymes, with massive rate increases over the uncatalyzed state.
<P> As the concentration of the acid catalyst is reduced, the rate of production of acid soluble polymers increases. Feeds that contain high amounts of propylene have a much higher rate of increase in acid consumption over the normal spending range.
<P> When a catalyst is involved in the collision between the reactant molecules, less energy is required for the chemical change to take place, and hence more collisions have sufficient energy for reaction to occur. The reaction rate therefore increases.
<P> In general, chemical reactions occur faster in the presence of a catalyst because the catalyst provides an alternative reaction pathway with a lower activation energy than the non-catalyzed mechanism. In catalyzed mechanisms, the catalyst usually reacts to form a temporary intermediate, which then regenerates the original catalyst in a cyclic process. A substance which provides a mechanism with a higher activation energy does not decrease the rate because the reaction can still occur by the non-catalyzed route. An added substance which does reduce the reaction rate is not considered a catalyst but a reaction inhibitor (see below).
<P> A substance that modifies the transition state to lower the activation energy is termed a catalyst; a catalyst composed only of protein and (if applicable) small molecule cofactors is termed an enzyme. A catalyst increases the rate of reaction without being consumed in the reaction. In addition, the catalyst lowers the activation energy, but it does not change the energies of the original reactants or products, and so does not change equilibrium. Rather, the reactant energy and the product energy remain the same and only the "activation energy" is altered (lowered).
<P> While the rate of a reaction depends just on the activation energy (often represented in organic chemistry as ΔG‡, “delta G double dagger”), the final ratios of products in chemical equilibrium depend only on the standard free-energy change ΔG° (“delta G”). The ratio of the final products at equilibrium corresponds directly with the stability of those products.
| question: Can catalytic compounds act to decrease the rate of reaction at higher concentrations? context: <P> The mechanisms of chemical reactions can be investigated by observing how the kinetics of a reaction is changed by making an isotopic modification of a substrate, known as the kinetic isotope effect. This is now a standard method in organic chemistry. Briefly, replacing normal hydrogen (protons) by deuterium within a molecule causes the molecular vibrational frequency of X-H (for example C-H, N-H and O-H) bonds to decrease, which leads to a decrease in vibrational zero-point energy. This can lead to a decrease in the reaction rate if the rate-determining step involves breaking a bond between hydrogen and another atom. Thus, if the reaction changes in rate when protons are replaced by deuteriums, it is reasonable to assume that the breaking of the bond to hydrogen is part of the step which determines the rate.
<P> Enzyme-substrate interactions align the reactive chemical groups and hold them close together in an optimal geometry, which increases the rate of the reaction. This reduces the entropy of the reactants and thus makes addition or transfer reactions less unfavorable, since a reduction in the overall entropy occurs when two reactants become a single product. However, this is a general effect and is seen in non-addition or transfer reactions as well, where it occurs due to an increase in the "effective concentration" of the reagents. This is understood when considering how increases in concentration lead to increases in reaction rate: essentially, when the reactants are more concentrated, they collide more often and so react more often. In enzyme catalysis, the binding of the reagents to the enzyme restricts the conformational space of the reactants, holding them in the 'proper orientation' and close to each other, so that they collide more frequently, and with the correct geometry, to facilitate the desired reaction. The "effective concentration" is the concentration the reactant would have to be, free in solution, to experience the same collisional frequency. Often such theoretical effective concentrations are unphysical and impossible to realize in reality - which is a testament to the great catalytic power of many enzymes, with massive rate increases over the uncatalyzed state.
<P> As the concentration of the acid catalyst is reduced, the rate of production of acid soluble polymers increases. Feeds that contain high amounts of propylene have a much higher rate of increase in acid consumption over the normal spending range.
<P> When a catalyst is involved in the collision between the reactant molecules, less energy is required for the chemical change to take place, and hence more collisions have sufficient energy for reaction to occur. The reaction rate therefore increases.
<P> In general, chemical reactions occur faster in the presence of a catalyst because the catalyst provides an alternative reaction pathway with a lower activation energy than the non-catalyzed mechanism. In catalyzed mechanisms, the catalyst usually reacts to form a temporary intermediate, which then regenerates the original catalyst in a cyclic process. A substance which provides a mechanism with a higher activation energy does not decrease the rate because the reaction can still occur by the non-catalyzed route. An added substance which does reduce the reaction rate is not considered a catalyst but a reaction inhibitor (see below).
<P> A substance that modifies the transition state to lower the activation energy is termed a catalyst; a catalyst composed only of protein and (if applicable) small molecule cofactors is termed an enzyme. A catalyst increases the rate of reaction without being consumed in the reaction. In addition, the catalyst lowers the activation energy, but it does not change the energies of the original reactants or products, and so does not change equilibrium. Rather, the reactant energy and the product energy remain the same and only the "activation energy" is altered (lowered).
<P> While the rate of a reaction depends just on the activation energy (often represented in organic chemistry as ΔG‡, “delta G double dagger”), the final ratios of products in chemical equilibrium depend only on the standard free-energy change ΔG° (“delta G”). The ratio of the final products at equilibrium corresponds directly with the stability of those products.
| answer: This sounds like a textbook case of transport limitations. The catalyst by definition will *always* reduce the activation energy and increase the reaction rate. If you remember your rate laws: rate = k*[concentration]^(apparent rate order), where k is a rate constant, often expressed by a power law: k = A*exp(-Ea/RT), where Ea is the activation energy of the reaction, and A is a fitted or derived prefactor. A catalyst operates by lowering Ea; no dependence on concentration. However, at higher concentrations what is probably happening is that the reactant can't reach the catalyst. The reactant near the catalyst is probably reacting very quickly, but then not leaving fast enough, forming little local regions of very low reactant concentration near the catalyst particles. At low concentrations, this cloud isn't as dense, allowing something more like the unblocked reaction rate. Given your liquid system this is a little unusual, but possible depending on relative diffusion rates of the species involved. So in your experiment at higher concentrations you're not measuring the chemical reaction rate; you're actually measuring a composite of the chemical reaction rate and the transport rate to the active site, which is significantly lower. This is a particularly strong problem if you are using a heterogeneous catalyst, where the reactant needs to penetrate pores to reach the catalyst active sites. Solving this problem is actually still an ongoing task in Chemical Engineering research, and of huge importance to both the petrochemical and green chemistry industries. For a homogeneous system, ways to remedy this problem include increasing the catalyst amount (not what we want normally, because it's often expensive), sonication, heating, and vigorous stirring. |
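The rate-law point in that answer lends itself to a quick numerical check. The Python sketch below uses purely illustrative numbers (the pre-exponential factor, activation energies, temperature, and mass-transfer coefficient are assumptions, not values from the question): it shows that lowering Ea speeds up k regardless of concentration, and that when transport to the catalyst is slow, the observed rate constant collapses toward the transport coefficient, which is the "composite" rate the answer describes.

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def arrhenius_k(A, Ea, T):
    # k = A * exp(-Ea / (R*T)), the power-law form quoted in the answer
    return A * math.exp(-Ea / (R * T))

# Illustrative values only (assumptions, not taken from the question or answer):
A = 1.0e9        # pre-exponential factor, 1/s
T = 298.0        # temperature, K
Ea_uncat = 80e3  # activation energy without catalyst, J/mol
Ea_cat = 50e3    # activation energy with catalyst (lowered), J/mol

k_uncat = arrhenius_k(A, Ea_uncat, T)
k_cat = arrhenius_k(A, Ea_cat, T)
print(f"catalyst speeds up k by ~{k_cat / k_uncat:.1e}x, independent of concentration")

# Transport limitation: for a first-order reaction in series with external
# diffusion, the slower step dominates the observed rate constant:
#   1/k_obs = 1/k_cat + 1/k_mt
k_mt = 1.0e-2  # assumed mass-transfer coefficient, 1/s
k_obs = 1.0 / (1.0 / k_cat + 1.0 / k_mt)
print(f"intrinsic k_cat = {k_cat:.2e} 1/s, observed k_obs = {k_obs:.2e} 1/s (close to k_mt)")
```

With these assumed numbers the intrinsic chemistry is more than a hundred times faster than transport, so the experiment would effectively be measuring diffusion rather than catalysis.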
180,467 | bagveg | why dont car doors unlock if you pull the handle while it’s unlocking? | Because there's a piece it needs to move before the door will be able to open, and it can't move that part while the door handle is being pulled on. (In general.) | [
"Because it's not fully unlocked until it's done unlocking. The mechanism to prevent the door from opening is only fully disengaged at the end of the unlock cycle. ",
"There are linkages that have to complete their jobs once the button is pressed to lock or unlock the door. Linkages and parts are moving and actua... | 5 | [
"There are linkages that have to complete their jobs once the button is pressed to lock or unlock the door. Linkages and parts are moving and actuating once the lock/unlock button is pressed. ",
"Because there's a piece it needs to move before the door will be able to open, and it can't move that part while the d... | 4 | <P> When leaving a vehicle that is equipped with a smart-key system, the vehicle is locked by either pressing a button on a door handle, touching a capacitive area on a door handle, or simply walking away from the vehicle. The method of locking varies across models.
<P> The remote keyless entry can unlock just the drivers door by pushing the unlock button once, while holding down the button unlocks all doors. Using the key to unlock the door after using the remote keyless entry to lock the doors will cause the alarm to sound, if equipped with a security system. The doors must be unlocked with the remote to avoid the security system being set off.
<P> Locked cars may be bypassed by introducing a stiff wire between the door and the cars structure to operate internal unlocking catches. The previous method may be assisted by gently prying the door from the frame with an air wedge or lever. To avoid bypass, a door should be secured using a deadbolt system, in which the locking mechanism and bolt are operated by the key. This prevents the device from being opened without the locking mechanism itself being properly operated.
<P> All three doors were motorized for a sensor based “keyless” entry. Pressing on a single button on the keychain automatically opened the nearest door, making it easy for somebody holding bags of groceries or other sundries to get the things in the car without putting anything down. The interior was maximized for easy storage and good looks.
<P> Some cars come with an additional key, known as a valet key that starts the ignition and opens the driver's side door but prevents the valet from gaining access to valuables that are located in the trunk or the glove box.
<P> On vehicles equipped with power central locking, the feature is activated from the inside drivers door lock switch only, by pushing the rocker switch to lock or unlock all doors. There is no label on any of the doors that suggest the door lock function is electric. Other doors can be locked or unlocked individually by pushing the respective door lock rocker switch, but it will not lock or unlock the other doors. The outside key door lock can unlock the drivers door only by turning the key partially, or with a complete turn to the left to unlock all doors.
<P> Today, this system is commonly found on a variety of vehicles, and although the exact method of operation differs between makes and models, their operation is generally similar: a vehicle can be unlocked without the driver needing to physically push a button on the key fob to lock or unlock the car and is also able to start or stop the ignition without physically having to insert the key and turning the ignition. Instead, the vehicle senses that the key (which may be located in the user's pocket, purse, etc.) is approaching the vehicle.
| question: why dont car doors unlock if you pull the handle while it’s unlocking? context: <P> When leaving a vehicle that is equipped with a smart-key system, the vehicle is locked by either pressing a button on a door handle, touching a capacitive area on a door handle, or simply walking away from the vehicle. The method of locking varies across models.
<P> The remote keyless entry can unlock just the drivers door by pushing the unlock button once, while holding down the button unlocks all doors. Using the key to unlock the door after using the remote keyless entry to lock the doors will cause the alarm to sound, if equipped with a security system. The doors must be unlocked with the remote to avoid the security system being set off.
<P> Locked cars may be bypassed by introducing a stiff wire between the door and the cars structure to operate internal unlocking catches. The previous method may be assisted by gently prying the door from the frame with an air wedge or lever. To avoid bypass, a door should be secured using a deadbolt system, in which the locking mechanism and bolt are operated by the key. This prevents the device from being opened without the locking mechanism itself being properly operated.
<P> All three doors were motorized for a sensor based “keyless” entry. Pressing on a single button on the keychain automatically opened the nearest door, making it easy for somebody holding bags of groceries or other sundries to get the things in the car without putting anything down. The interior was maximized for easy storage and good looks.
<P> Some cars come with an additional key, known as a valet key that starts the ignition and opens the driver's side door but prevents the valet from gaining access to valuables that are located in the trunk or the glove box.
<P> On vehicles equipped with power central locking, the feature is activated from the inside drivers door lock switch only, by pushing the rocker switch to lock or unlock all doors. There is no label on any of the doors that suggest the door lock function is electric. Other doors can be locked or unlocked individually by pushing the respective door lock rocker switch, but it will not lock or unlock the other doors. The outside key door lock can unlock the drivers door only by turning the key partially, or with a complete turn to the left to unlock all doors.
<P> Today, this system is commonly found on a variety of vehicles, and although the exact method of operation differs between makes and models, their operation is generally similar: a vehicle can be unlocked without the driver needing to physically push a button on the key fob to lock or unlock the car and is also able to start or stop the ignition without physically having to insert the key and turning the ignition. Instead, the vehicle senses that the key (which may be located in the user's pocket, purse, etc.) is approaching the vehicle.
| answer: Because there's a piece it needs to move before the door will be able to open, and it can't move that part while the door handle is being pulled on. (In general.) |
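The mechanism described in these answers can be captured in a toy model. The sketch below is purely illustrative (real latch and actuator designs vary by manufacturer, and the class and method names are made up): it simply encodes the claim that the actuator cannot shift the locking piece while the outside handle is being held, so an unlock attempted mid-pull has to be repeated after the handle is released.

```python
class DoorLatch:
    """Toy model of the answers above: the lock actuator cannot move the
    locking piece while the outside handle is being held."""

    def __init__(self):
        self.locked = True
        self.handle_pulled = False

    def pull_handle(self):
        self.handle_pulled = True
        # The door only opens if the locking piece has already moved out of the way.
        return "door opens" if not self.locked else "handle moves, door stays shut"

    def release_handle(self):
        self.handle_pulled = False

    def actuate_unlock(self):
        if self.handle_pulled:
            # The actuator can't shift the locking piece while the handle loads it.
            return "unlock blocked - release the handle and try again"
        self.locked = False
        return "unlocked"


door = DoorLatch()
print(door.pull_handle())     # handle moves, door stays shut
print(door.actuate_unlock())  # unlock blocked - release the handle and try again
door.release_handle()
print(door.actuate_unlock())  # unlocked
print(door.pull_handle())     # door opens
```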
54,477 | 6v1ajp | Is it possible for meteoroids carrying microorganisms from Earth to travel to another planet, such as Mars, and seed life onto them? | What you're asking is basically the panspermia theory, which says life can spread between habitable planets when carried by meteors. Research has been done about this by taking extremophiles into space and exposing them to vacuum, radiation, extreme temperatures and direct sunlight. Survival rates were good enough. The weak link in the theory is atmospheric entry. When the meteor burns up, any lifeforms it carries would be vaporized and destroyed. A possible exception is that lifeforms are carried very deep in a crack in the rock that isn't reached by the hot air outside. ESA has discussed experiments about atmospheric entry in these conditions but none have been performed yet. | [
"What you're asking is basically the panspermia theory, that says life can spread between habitable planets when carried by meteors. Research has been done about this by taking extremophiles into space and exposing them to vacuum, radiation, extreme temperatures and direct sunlight. Survival rates were good enough.... | 1 | [
"What you're asking is basically the panspermia theory, that says life can spread between habitable planets when carried by meteors. Research has been done about this by taking extremophiles into space and exposing them to vacuum, radiation, extreme temperatures and direct sunlight. Survival rates were good enough.... | 1 | <P> Earth receives a steady stream of meteorites from Mars, but they come from relatively few original impactors, and transfer was more likely in the early Solar System. Also some life forms viable on both Mars and on Earth might be unable to survive transfer on a meteorite, and there is so far no direct evidence of any transfer of life from Mars to Earth in this way.
<P> Scientists search for biosignatures within the Solar System by studying planetary surfaces and examining meteorites. Some claim to have identified evidence that microbial life has existed on Mars. An experiment on the two Viking Mars landers reported gas emissions from heated Martian soil samples that some scientists argue are consistent with the presence of living microorganisms. Lack of corroborating evidence from other experiments on the same samples suggests that a non-biological reaction is a more likely hypothesis. In 1996, a controversial report stated that structures resembling nanobacteria were discovered in a meteorite, ALH84001, formed of rock ejected from Mars.
<P> BULLET::::- A new study finds that DNA can survive a flight through space and re-entry into Earth's atmosphere and still pass on genetic information. These results indicate that life and organic molecules could potentially spread between planetary bodies through meteor impacts.
<P> A number of meteorites thought to have originated from Mars have been catalogued from around the world, including the Nakhlites. These are considered to have been ejected by the impact of another large body colliding with the Martian surface. They then travelled through the solar system for an unknown period before penetrating the Earth's atmosphere.
<P> Impacts on Earth able to send microorganisms to Mars are also infrequent. Impactors of 10 km across or larger can send debris to Mars through the Earth's atmosphere but these occur rarely, and were more common in the early Solar System.
<P> In March 2019, researchers reported the possibility of biosignatures in this Martian meteorite based on its microtexture and morphology as detected with optical microscopy and FTIR-ATR microscopy, and on the detection of mineralized organic compounds, suggesting that microbial life could have existed on the planet Mars. More broadly, and as a result of their studies, the researchers suggest Solar System materials should be carefully studied to determine whether there may be signs of microbial forms within other space rocks as well.
<P> Although computer models suggest that a captured meteoroid would typically take some tens of millions of years before collision with a planet, there are documented viable Earthly bacterial spores that are 40 million years old that are very resistant to radiation, and others able to resume life after being dormant for 25 million years, suggesting that lithopanspermia life-transfers are possible via meteorites exceeding 1 m in size.
| question: Is it possible for meteoroids carrying microorganisms from Earth to travel to another planet, such as Mars, and seed life onto them? context: <P> Earth receives a steady stream of meteorites from Mars, but they come from relatively few original impactors, and transfer was more likely in the early Solar System. Also some life forms viable on both Mars and on Earth might be unable to survive transfer on a meteorite, and there is so far no direct evidence of any transfer of life from Mars to Earth in this way.
<P> Scientists search for biosignatures within the Solar System by studying planetary surfaces and examining meteorites. Some claim to have identified evidence that microbial life has existed on Mars. An experiment on the two Viking Mars landers reported gas emissions from heated Martian soil samples that some scientists argue are consistent with the presence of living microorganisms. Lack of corroborating evidence from other experiments on the same samples suggests that a non-biological reaction is a more likely hypothesis. In 1996, a controversial report stated that structures resembling nanobacteria were discovered in a meteorite, ALH84001, formed of rock ejected from Mars.
<P> BULLET::::- A new study finds that DNA can survive a flight through space and re-entry into Earth's atmosphere and still pass on genetic information. These results indicate that life and organic molecules could potentially spread between planetary bodies through meteor impacts.
<P> A number of meteorites thought to have originated from Mars have been catalogued from around the world, including the Nakhlites. These are considered to have been ejected by the impact of another large body colliding with the Martian surface. They then travelled through the solar system for an unknown period before penetrating the Earth's atmosphere.
<P> Impacts on Earth able to send microorganisms to Mars are also infrequent. Impactors of 10 km across or larger can send debris to Mars through the Earth's atmosphere but these occur rarely, and were more common in the early Solar System.
<P> In March 2019, researchers reported the possibility of biosignatures in this Martian meteorite based on its microtexture and morphology as detected with optical microscopy and FTIR-ATR microscopy, and on the detection of mineralized organic compounds, suggesting that microbial life could have existed on the planet Mars. More broadly, and as a result of their studies, the researchers suggest Solar System materials should be carefully studied to determine whether there may be signs of microbial forms within other space rocks as well.
<P> Although computer models suggest that a captured meteoroid would typically take some tens of millions of years before collision with a planet, there are documented viable Earthly bacterial spores that are 40 million years old that are very resistant to radiation, and others able to resume life after being dormant for 25 million years, suggesting that lithopanspermia life-transfers are possible via meteorites exceeding 1 m in size.
| answer: What you're asking is basically the panspermia theory, which says life can spread between habitable planets when carried by meteors. Research has been done about this by taking extremophiles into space and exposing them to vacuum, radiation, extreme temperatures and direct sunlight. Survival rates were good enough. The weak link in the theory is atmospheric entry. When the meteor burns up, any lifeforms it carries would be vaporized and destroyed. A possible exception is that lifeforms are carried very deep in a crack in the rock that isn't reached by the hot air outside. ESA has discussed experiments about atmospheric entry in these conditions but none have been performed yet. |
39,464 | iql7c | Why didn't whales evolve gills? | while a whale with gills may have more fitness than a lung-breathing whale, this does not mean that the trait has to evolve. when the mutations which are the basis for evolutionary change occur, they occur randomly. the genome does not "know" to mutate in a certain way to make the animal more adapted to the environment. in whales, the necessary mutations to start them off on a path towards evolving functional gills never occurred, simply by random chance. | [
"while a whale with gills may have more fitness than a lung-breathing whale, this does not mean that the trait has to evolve. when the mutations which are the basis for evolutionary change occur, they occur randomly. the genome does not \"know\" to mutate in a certain way to make the animal more adapted to the envi... | 6 | [
"while a whale with gills may have more fitness than a lung-breathing whale, this does not mean that the trait has to evolve. when the mutations which are the basis for evolutionary change occur, they occur randomly. the genome does not \"know\" to mutate in a certain way to make the animal more adapted to the envi... | 3 | <P> The fossil record traces the gradual transition from terrestrial to aquatic life. The regression of the hind limbs allowed greater flexibility of the spine. This made it possible for whales to move around with the vertical tail hitting the water. The front legs transformed into flippers, costing them their mobility on land.
<P> Previously, the evolution of gills was thought to have occurred through two diverging lines: gills formed from the endoderm, as seen in jawless fish species, or those formed by the ectoderm, as seen in jawed fish. However, recent studies on gill formation of the little skate ("Leucoraja erinacea") have shown potential evidence supporting the claim that gills from all current fish species have in fact evolved from a common ancestor.
<P> Whales are adapted for diving to great depths. In addition to their streamlined bodies, they can slow their heart rate to conserve oxygen; blood is rerouted from tissue tolerant of water pressure to the heart and brain among other organs; haemoglobin and myoglobin store oxygen in body tissue; and they have twice the concentration of myoglobin than haemoglobin. Before going on long dives, many whales exhibit a behaviour known as sounding; they stay close to the surface for a series of short, shallow dives while building their oxygen reserves, and then make a sounding dive.
<P> "Georgiacetus" had a tail and lacked the fluke present in slightly younger fossils. It probably swam using its hindlimbs by wiggling its hips and moving its trunk up and down, a locomotor behaviour abandoned by modern whales. Whales evolved in South Asia, and it was previously thought that the fluke helped early whales spread across Earth from there, so "Georgiacetus"' presence in America and its legs and tail contradicts this hypothesis.
<P> Whales have evolved from land-living mammals. As such, whales must breathe air regularly, although they can remain submerged under water for long periods of time. Some species such as the sperm whale are able to stay submerged for as much as 90 minutes. They have blowholes (modified nostrils) located on top of their heads, through which air is taken in and expelled. They are warm-blooded, and have a layer of fat, or blubber, under the skin. With streamlined fusiform bodies and two limbs that are modified into flippers, whales can travel at up to 20 knots, though they are not as flexible or agile as seals. Whales produce a great variety of vocalizations, notably the extended songs of the humpback whale. Although whales are widespread, most species prefer the colder waters of the Northern and Southern Hemispheres, and migrate to the equator to give birth. Species such as humpbacks and blue whales are capable of travelling thousands of miles without feeding. Males typically mate with multiple females every year, but females only mate every two to three years. Calves are typically born in the spring and summer months and females bear all the responsibility for raising them. Mothers of some species fast and nurse their young for one to two years.
<P> When sauropods were first discovered, their immense size led many scientists to compare them with modern-day whales. Most studies in the 19th and early 20th centuries concluded that sauropods were too large to have supported their weight on land, and therefore that they must have been mainly aquatic. Most life restorations of sauropods in art through the first three quarters of the 20th century depicted them fully or partially immersed in water. This early notion was cast in doubt beginning in the 1950s, when a study by Kermack (1951) demonstrated that, if the animal were submerged in several metres of water, the pressure would be enough to fatally collapse the lungs and airway. However, this and other early studies of sauropod ecology were flawed in that they ignored a substantial body of evidence that the bodies of sauropods were heavily permeated with air sacs. In 1878, paleontologist E.D. Cope had even referred to these structures as "floats".
<P> In amphibians and some primitive bony fish, the larvae bear external gills, branching off from the gill arches. These are reduced in adulthood, their function taken over by the gills proper in fish and by lungs in most amphibians. Some amphibians retain the external larval gills in adulthood, the complex internal gill system as seen in fish apparently being irrevocably lost very early in the evolution of tetrapods.
| question: Why didn't whales evolve gills? context: <P> The fossil record traces the gradual transition from terrestrial to aquatic life. The regression of the hind limbs allowed greater flexibility of the spine. This made it possible for whales to move around with the vertical tail hitting the water. The front legs transformed into flippers, costing them their mobility on land.
<P> Previously, the evolution of gills was thought to have occurred through two diverging lines: gills formed from the endoderm, as seen in jawless fish species, or those formed by the ectoderm, as seen in jawed fish. However, recent studies on gill formation of the little skate ("Leucoraja erinacea") have shown potential evidence supporting the claim that gills from all current fish species have in fact evolved from a common ancestor.
<P> Whales are adapted for diving to great depths. In addition to their streamlined bodies, they can slow their heart rate to conserve oxygen; blood is rerouted from tissue tolerant of water pressure to the heart and brain among other organs; haemoglobin and myoglobin store oxygen in body tissue; and they have twice the concentration of myoglobin than haemoglobin. Before going on long dives, many whales exhibit a behaviour known as sounding; they stay close to the surface for a series of short, shallow dives while building their oxygen reserves, and then make a sounding dive.
<P> "Georgiacetus" had a tail and lacked the fluke present in slightly younger fossils. It probably swam using its hindlimbs by wiggling its hips and moving its trunk up and down, a locomotor behaviour abandoned by modern whales. Whales evolved in South Asia, and it was previously thought that the fluke helped early whales spread across Earth from there, so "Georgiacetus"' presence in America and its legs and tail contradicts this hypothesis.
<P> Whales have evolved from land-living mammals. As such, whales must breathe air regularly, although they can remain submerged under water for long periods of time. Some species such as the sperm whale are able to stay submerged for as much as 90 minutes. They have blowholes (modified nostrils) located on top of their heads, through which air is taken in and expelled. They are warm-blooded, and have a layer of fat, or blubber, under the skin. With streamlined fusiform bodies and two limbs that are modified into flippers, whales can travel at up to 20 knots, though they are not as flexible or agile as seals. Whales produce a great variety of vocalizations, notably the extended songs of the humpback whale. Although whales are widespread, most species prefer the colder waters of the Northern and Southern Hemispheres, and migrate to the equator to give birth. Species such as humpbacks and blue whales are capable of travelling thousands of miles without feeding. Males typically mate with multiple females every year, but females only mate every two to three years. Calves are typically born in the spring and summer months and females bear all the responsibility for raising them. Mothers of some species fast and nurse their young for one to two years.
<P> When sauropods were first discovered, their immense size led many scientists to compare them with modern-day whales. Most studies in the 19th and early 20th centuries concluded that sauropods were too large to have supported their weight on land, and therefore that they must have been mainly aquatic. Most life restorations of sauropods in art through the first three quarters of the 20th century depicted them fully or partially immersed in water. This early notion was cast in doubt beginning in the 1950s, when a study by Kermack (1951) demonstrated that, if the animal were submerged in several metres of water, the pressure would be enough to fatally collapse the lungs and airway. However, this and other early studies of sauropod ecology were flawed in that they ignored a substantial body of evidence that the bodies of sauropods were heavily permeated with air sacs. In 1878, paleontologist E.D. Cope had even referred to these structures as "floats".
<P> In amphibians and some primitive bony fish, the larvae bear external gills, branching off from the gill arches. These are reduced in adulthood, their function taken over by the gills proper in fish and by lungs in most amphibians. Some amphibians retain the external larval gills in adulthood, the complex internal gill system as seen in fish apparently being irrevocably lost very early in the evolution of tetrapods.
| answer: while a whale with gills may have more fitness than a lung-breathing whale, this does not mean that the trait has to evolve. when the mutations which are the basis for evolutionary change occur, they occur randomly. the genome does not "know" to mutate in a certain way to make the animal more adapted to the environment. in whales, the necessary mutations to start them off on a path towards evolving functional gills never occurred, simply by random chance. |
143,161 | 5k6jj9 | the different ways to get student loan forgiveness | If you work for a government or not-for-profit organization, you may be able to receive loan forgiveness under the Public Service Loan Forgiveness Program (such as teachers, medical professionals at the VA and other government or nonprofit-run health care facilities). To be eligible you would need to apply and show proof of employment. Teachers, for example, can get a percentage of their student loans forgiven if they've worked in a public school for a required number of years, and their loan payments are being paid on schedule. Link to info: _URL_0_ | [
"If you work for a government or not-for-profit organization, you may be able to receive loan forgiveness under the Public Service Loan Forgiveness Program (such as teachers, medical professionals at the VA and other government or nonprofit run health care facilities). To be eligible you would need to apply and sho... | 1 | [
"If you work for a government or not-for-profit organization, you may be able to receive loan forgiveness under the Public Service Loan Forgiveness Program (such as teachers, medical professionals at the VA and other government or nonprofit run health care facilities). To be eligible you would need to apply and sho... | 1 | <P> The Public Service Loan Forgiveness (PSLF) program is a United States government program that was created under the College Cost Reduction and Access Act of 2007 (CCRAA) to provide indebted professionals a way out of their federal student loan debt burden by working full-time in public service. The program permits Direct Loan borrowers who make 120 qualifying monthly payments under a qualifying repayment plan, while working full-time for a qualifying employer, to have the remainder of their balance forgiven. The earliest time in which borrowers could receive forgiveness under the program was after October 1, 2017. The Department of Education reported that 864 borrowers had their respective loans forgiven under the program as of March 31, 2019.
<P> The Public Service Loan Forgiveness Program provides for the forgiveness of certain types of federal student loans after 10 years of qualifying employment and payments. The IBR plan is one of the qualifying repayment plans for the Public Service Loan Forgiveness Program. To receive Public Service Loan Forgiveness, borrowers must have repaid their loans under one of the "income-driven repayment plans", including IBR.
<P> The Teacher Loan Forgiveness program is a student loan forgiveness program by the United States Department of Education. This program is intended to encourage individuals to enter and continue in the teaching profession. Under this program, teachers who provide direct classroom teaching, or classroom-type teaching in a nonclassroom setting full-time for five complete and consecutive academic years in a Title 1 eligible school or school district may be eligible to receive loan forgiveness for their federal student loans.
<P> BULLET::::- Minimize the risk of investment in higher education through loan forgiveness or insurance programs. The federal government should enact partial or total loan forgiveness for students who have taken out student loans.
<P> An education loan is taken out by the student (or parent) in order to pay for educational expenses. Unlike scholarships and grants, this money must be repaid with interest. Educational loan options include federal student loans, federal parent loans, private loans, and consolidation loans.
<P> Student loan deferment is an agreement between the student and lender that the student may reduce or postpone repayment of a student loan for a designated period. Deferment or forbearance will prevent the loan from going into default, but may increase the overall cost of the loan. If the student is experiencing financial hardship or is unemployed, he or she may be eligible for deferment. The lender may require valid proof of financial hardship and other financial information when the student applies.
<P> Because they are private loans, loans granted under the FFEL program are not eligible for the Public Service Loan Forgiveness program. There have been media reports of many FFEL borrowers who were unaware their loans were ineligible. FFEL borrowers can gain access to loan forgiveness by refinancing an existing loan with the Federal Direct Student Loan Program, but payments made before refinancing do not count toward loan forgiveness.
| question: the different ways to get student loan forgiveness context: <P> The Public Service Loan Forgiveness (PSLF) program is a United States government program that was created under the College Cost Reduction and Access Act of 2007 (CCRAA) to provide indebted professionals a way out of their federal student loan debt burden by working full-time in public service. The program permits Direct Loan borrowers who make 120 qualifying monthly payments under a qualifying repayment plan, while working full-time for a qualifying employer, to have the remainder of their balance forgiven. The earliest time in which borrowers could receive forgiveness under the program was after October 1, 2017. The Department of Education reported that 864 borrowers had their respective loans forgiven under the program as of March 31, 2019.
<P> The Public Service Loan Forgiveness Program provides for the forgiveness of certain types of federal student loans after 10 years of qualifying employment and payments. The IBR plan is one of the qualifying repayment plans for the Public Service Loan Forgiveness Program. To receive Public Service Loan Forgiveness, borrowers must have repaid their loans under one of the "income-driven repayment plans", including IBR.
<P> The Teacher Loan Forgiveness program is a student loan forgiveness program by the United States Department of Education. This program is intended to encourage individuals to enter and continue in the teaching profession. Under this program, teachers who provide direct classroom teaching, or classroom-type teaching in a nonclassroom setting full-time for five complete and consecutive academic years in a Title 1 eligible school or school district may be eligible to receive loan forgiveness for their federal student loans.
<P> BULLET::::- Minimize the risk of investment in higher education through loan forgiveness or insurance programs. The federal government should enact partial or total loan forgiveness for students who have taken out student loans.
<P> An education loan is taken out by the student (or parent) in order to pay for educational expenses. Unlike scholarships and grants, this money must be repaid with interest. Educational loan options include federal student loans, federal parent loans, private loans, and consolidation loans.
<P> Student loan deferment is an agreement between the student and lender that the student may reduce or postpone repayment of a student loan for a designated period. Deferment or forbearance will prevent the loan from going into default, but may increase the overall cost of the loan. If the student is experiencing financial hardship or is unemployed, he or she may be eligible for deferment. The lender may require valid proof of financial hardship and other financial information when the student applies.
<P> Because they are private loans, loans granted under the FFEL program are not eligible for the Public Service Loan Forgiveness program. There have been media reports of many FFEL borrowers who were unaware their loans were ineligible. FFEL borrowers can gain access to loan forgiveness by refinancing an existing loan with the Federal Direct Student Loan Program, but payments made before refinancing do not count toward loan forgiveness.
| answer: If you work for a government or not-for-profit organization, you may be able to receive loan forgiveness under the Public Service Loan Forgiveness Program (such as teachers, medical professionals at the VA and other government or nonprofit-run health care facilities). To be eligible you would need to apply and show proof of employment. Teachers, for example, can get a percentage of their student loans forgiven if they've worked in a public school for a required number of years, and their loan payments are being paid on schedule. Link to info: _URL_0_ |
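Since the surrounding paragraphs describe Public Service Loan Forgiveness in terms of 120 qualifying monthly payments (10 years), a small sketch of that bookkeeping may help. This is a deliberately simplified model with made-up field names; the real eligibility rules (Direct Loans only, employer certification forms, consolidation effects, payment timing) are more involved than what is shown here.

```python
from dataclasses import dataclass

@dataclass
class MonthlyPayment:
    on_qualifying_plan: bool     # e.g. an income-driven repayment plan
    employer_qualifies: bool     # government or eligible not-for-profit, full-time
    paid_on_time_in_full: bool

def pslf_progress(payments):
    """Count payments toward the 120 (10 years x 12 months) that PSLF requires."""
    qualifying = sum(
        1 for p in payments
        if p.on_qualifying_plan and p.employer_qualifies and p.paid_on_time_in_full
    )
    return qualifying, max(0, 120 - qualifying)

# Example: eight years at a qualifying employer, but six of those months
# were spent on a non-qualifying repayment plan.
history = (
    [MonthlyPayment(True, True, True)] * (8 * 12 - 6)
    + [MonthlyPayment(False, True, True)] * 6
)
made, remaining = pslf_progress(history)
print(f"{made} qualifying payments made, {remaining} still needed")
```

Running it for that example history reports 90 qualifying payments and 30 still needed.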
76,458 | 2tgyya | what actually happens at an eye exam and what am i supposed do say? | When they put the big frames on you and start switching lenses, they're testing various different combinations to see exactly which parts of your eye have changed. Some of it is very basic things like clarity at certain ranges, and how to fix it; some of it is things like colour perception. You don't necessarily notice it in general conditions because your mind compensates, but in the very specific conditions of the exam room they can get very accurate measures of exactly how your eyes change what goes into them. As for what you should answer, it's the truth. That's the only way they're going to be able to properly gauge what's wrong, if anything, and how to fix it. If they don't fix it, you may well start having problems, and your eyes can get even worse. The answer that would mean you had perfect vision is pretty much impossible to know unless you're well trained and also know the exact setup they're giving you, which is partly intentional, because it would be pretty bad if you could just lie and pretend to be perfect. Something that complex doesn't and shouldn't have a cheat sheet. | [
"When the put the big frames on you and start switching lenses, they're testing various different combinations to see exactly which parts of your eye have changed. Some of it's very basic things like clarity at certain ranges, and how to fix it, some of it is things like colour perception. You don't necessarily not... | 2 | [] | 0 | <P> During a physical examination to check for MG, a doctor might ask the person to perform repetitive movements. For instance, the doctor may ask one to look at a fixed point for 30 seconds and to relax the muscles of the forehead. This is done because a person with MG and ptosis of the eyes might be involuntarily using the forehead muscles to compensate for the weakness in the eyelids. The clinical examiner might also try to elicit the "curtain sign" in a patient by holding one of the person's eyes open, which in the case of MG will lead the other eye to close.
<P> While a patient is seated in the examination chair, he rests his chin and forehead on a support to steady the head. Using the biomicroscope, the optician then proceeds to examine the patient's eye. A fine strip of paper, stained with fluorescein, a fluorescent dye, may be touched to the side of the eye; this stains the tear film on the surface of the eye to aid examination. The dye is naturally rinsed out of the eye by tears. Adults need no special preparation for the test; however children may need some preparation, depending on age, previous experiences, and level of trust.
<P> A subsequent test may involve placing drops in the eye in order to dilate the pupils. The drops take about 15 to 20 minutes to work, after which the examination is repeated, allowing the back of the eye to be examined. Patients will experience some light sensitivity for a few hours after this exam, and the dilating drops may also cause increased pressure in the eye, leading to nausea and pain. Patients who experience serious symptoms are advised to seek medical attention immediately.
<P> An eye examination is a series of tests performed by an ophthalmologist (medical doctor), optometrist, or orthoptist, optician (UK), assessing vision and ability to focus on and discern objects, as well as other tests and examinations pertaining to the eyes.
<P> In most cases, the patient meets the ophthalmologist for eye examination and other tests weeks or months preceding surgery. During the meeting, the ophthalmologist will examine the eye and diagnose its condition. The doctor will also record the history of the patient’s health and other previous eye treatments, if any. The doctor will discuss the risks and benefits of the surgery. If the patient elects for the surgery, the doctor will have the patient sign an informed consent form. The doctor may also perform physical and lab examinations, such as an X-ray, an EKG, a slit lamp test, an ultrasound B-scan, or an A-scan.
<P> Getting a regular eye exam may play a role in identifying the signs of some systemic diseases. "The eye is composed of many different types of tissue. This unique feature makes the eye susceptible to a wide variety of diseases as well as provides insights into many body systems. Almost any part of the eye can give important clues to the diagnosis of systemic diseases. Signs of a systemic disease may be evident on the outer surface of the eye (eyelids, conjunctiva and cornea), middle of the eye and at the back of the eye (retina)."
<P> For clinical evaluation purposes in the practice of psychiatry and clinical psychology, as part of a mental status exam, the clinician may describe the initiation, frequency, and quality of eye contact. For example, the doctor may note whether the patient initiates, responds to, sustains, or evades eye contact. The clinician may also note whether eye contact is unusually intense or blank, or whether the patient glares, looks down, or looks aside frequently.
| question: what actually happens at an eye exam and what am i supposed do say? context: <P> During a physical examination to check for MG, a doctor might ask the person to perform repetitive movements. For instance, the doctor may ask one to look at a fixed point for 30 seconds and to relax the muscles of the forehead. This is done because a person with MG and ptosis of the eyes might be involuntarily using the forehead muscles to compensate for the weakness in the eyelids. The clinical examiner might also try to elicit the "curtain sign" in a patient by holding one of the person's eyes open, which in the case of MG will lead the other eye to close.
<P> While a patient is seated in the examination chair, he rests his chin and forehead on a support to steady the head. Using the biomicroscope, the optician then proceeds to examine the patient's eye. A fine strip of paper, stained with fluorescein, a fluorescent dye, may be touched to the side of the eye; this stains the tear film on the surface of the eye to aid examination. The dye is naturally rinsed out of the eye by tears. Adults need no special preparation for the test; however children may need some preparation, depending on age, previous experiences, and level of trust.
<P> A subsequent test may involve placing drops in the eye in order to dilate the pupils. The drops take about 15 to 20 minutes to work, after which the examination is repeated, allowing the back of the eye to be examined. Patients will experience some light sensitivity for a few hours after this exam, and the dilating drops may also cause increased pressure in the eye, leading to nausea and pain. Patients who experience serious symptoms are advised to seek medical attention immediately.
<P> An eye examination is a series of tests performed by an ophthalmologist (medical doctor), optometrist, or orthoptist, optician (UK), assessing vision and ability to focus on and discern objects, as well as other tests and examinations pertaining to the eyes.
<P> In most cases, the patient meets the ophthalmologist for eye examination and other tests weeks or months preceding surgery. During the meeting, the ophthalmologist will examine the eye and diagnose its condition. The doctor will also record the history of the patient’s health and other previous eye treatments, if any. The doctor will discuss the risks and benefits of the surgery. If the patient elects for the surgery, the doctor will have the patient sign an informed consent form. The doctor may also perform physical and lab examinations, such as an X-ray, an EKG, a slit lamp test, an ultrasound B-scan, or an A-scan.
<P> Getting a regular eye exam may play a role in identifying the signs of some systemic diseases. "The eye is composed of many different types of tissue. This unique feature makes the eye susceptible to a wide variety of diseases as well as provides insights into many body systems. Almost any part of the eye can give important clues to the diagnosis of systemic diseases. Signs of a systemic disease may be evident on the outer surface of the eye (eyelids, conjunctiva and cornea), middle of the eye and at the back of the eye (retina)."
<P> For clinical evaluation purposes in the practice of psychiatry and clinical psychology, as part of a mental status exam, the clinician may describe the initiation, frequency, and quality of eye contact. For example, the doctor may note whether the patient initiates, responds to, sustains, or evades eye contact. The clinician may also note whether eye contact is unusually intense or blank, or whether the patient glares, looks down, or looks aside frequently.
| answer: When they put the big frames on you and start switching lenses, they're testing various different combinations to see exactly which parts of your eye have changed. Some of it is very basic things like clarity at certain ranges, and how to fix it; some of it is things like colour perception. You don't necessarily notice it in general conditions because your mind compensates, but in the very specific conditions of the exam room they can get very accurate measures of exactly how your eyes change what goes into them. As for what you should answer, it's the truth. That's the only way they're going to be able to properly gauge what's wrong, if anything, and how to fix it. If they don't fix it, you may well start having problems, and your eyes can get even worse. The answer that would mean you had perfect vision is pretty much impossible to know unless you're well trained and also know the exact setup they're giving you, which is partly intentional, because it would be pretty bad if you could just lie and pretend to be perfect. Something that complex doesn't and shouldn't have a cheat sheet. |
135,736 | 3pnl6f | Was the German Battleship really that big of a threat? | *Bismarck* was a major threat, because of what it could do, and what preventing that meant for the Royal Navy. She was a major threat to the Atlantic convoy routes that Britain was relying upon for much of its food, as well as steel and other strategic materials. In 1941, Britain would import 14.7 million tonnes of food, and 15 million tonnes of raw materials. Losses to shipping on these routes, mainly from U-boats, were high - 5 million tonnes would be lost in 1941 alone, and British shipbuilding couldn't replace this (see [this chapter](_URL_0_) from British War Economy by Hancock and Gowing). *Bismarck* represented a further threat to these routes, as she could wipe out any convoy she ran into if it wasn't escorted by a battleship. Other surface raiders, such as the cruiser *Admiral Scheer* and battlecruisers *Scharnhorst* and *Gneisenau* were greatly effective - *Scheer* would sink over 100,000 tonnes on a single cruise in 1940-41, while *Scharnhorst* and *Gneisenau* would sink a similar amount between them during Operation Berlin. There's little reason to believe that *Bismarck* wouldn't have been as capable. Of course, the RN could have escorted the convoys with battleships, and did as a matter of routine when German surface raiders were out. However, this took them away from areas where the Admiralty would have preferred them to be, such as the Mediterranean. It also exposed them to submarine attack. There was also the threat that *Bismarck* could join up with *Scharnhorst* and *Gneisenau* at Brest. There, they would form a force that the British couldn't effectively counter with additional convoy escorts, but would instead have to be hunted down, a far more difficult task. | [
"*Bismarck* was a major threat, because of what it could do, and what preventing that meant for the Royal Navy. She was a major threat to the Atlantic convoy routes that Britain was relying upon for much of its food, as well as steel and other strategic materials. In 1941, Britain would import 14.7 million tonnes o... | 1 | [
"*Bismarck* was a major threat, because of what it could do, and what preventing that meant for the Royal Navy. She was a major threat to the Atlantic convoy routes that Britain was relying upon for much of its food, as well as steel and other strategic materials. In 1941, Britain would import 14.7 million tonnes o... | 1 | <P> With 16 dreadnought-type battleships, compared with the Royal Navy's 28, the German High Seas Fleet stood little chance of winning a head-to-head clash. The Germans therefore adopted a divide-and-conquer strategy. They would stage raids into the North Sea and bombard the English coast, with the aim of luring out small British squadrons and pickets, which could then be destroyed by superior forces or submarines.
<P> Until the King George V class, also at 28 knots, no British battleship was fast enough to catch the new German battleship "Bismarck". Her mission was to evade action and make for the open seas to attack convoys. "Hood" was needed to stop her. The navy recognized that "Hood" needed to be rebuilt to strengthen her decks to protect the vulnerable magazines but by 1938, with war threatening, the Admiralty felt that they could not risk taking her out of commission. Britain had only three battlecruisers to match the three German pocket battleships.
<P> "Bismarck" had been hit only two (or perhaps three) times but Admiral Lütjens overruled "Bismarck"s Captain Ernst Lindemann who wanted to pursue the damaged "Prince of Wales" and finish her off. All of the hits on "Bismarck" had been inflicted by "Prince of Wales" guns. One of the hits had penetrated the German battleship's hull near the bow, rupturing some of her fuel tanks, causing her to leak oil continuously and at a serious rate. This was to be a critical factor as the pursuit continued, forcing "Bismarck" to make for Brest instead of escaping into the great expanse of the Atlantic. The resulting oil slick also helped the British cruisers to shadow her.
<P> In 1941 one of the four modern German battleships, sank while breaking out into the Atlantic for commerce raiding. "Bismarck" was in turn hunted down by much superior British forces after being crippled by an air-launched torpedo. She was subsequently scuttled after being rendered a burning wreck by two British battleships.
<P> German naval historian Erich Gröner, in his book "German Warships 1815–1945", stated that the German navy considered the ships to be "very good sea-boats." They suffered a slight loss of speed in a swell, and with the rudder hard over, the ships lost up to 66% speed and heeled over 8 degrees. The battleships had a transverse metacentric height of 2.59 m. "König", "Grosser Kurfürst", "Markgraf", and "Kronprinz" each had a standard crew of 41 officers and 1095 enlisted men; "König", the flagship of the III Squadron, had an additional crew of 14 officers and another 68 sailors. The ships carried several smaller boats, including one picket boat, three barges, two launches, two yawls, and two dinghies.
<P> The British public were shocked that their most emblematic warship and more than 1,400 of her crew had been destroyed so suddenly. The Admiralty mobilised every available warship in the Atlantic to hunt down and destroy "Bismarck". The Royal Navy forces pursued and brought "Bismarck" to battle. The German battleship was sunk on the morning of 27 May.
<P> In one scene, Lütjens speculates that after "Bismarck" has undergone repair in Brest, the two German battleships based there, "Gneisenau" and "Scharnhorst", could join "Bismarck" in raiding Allied shipping. There is no record of such a discussion at that time, although it would have been possible for "Bismarck" to sortie with the two battleships if "Bismarck" had reached the port.
| question: Was the German Battleship really that big of a threat? context: <P> With 16 dreadnought-type battleships, compared with the Royal Navy's 28, the German High Seas Fleet stood little chance of winning a head-to-head clash. The Germans therefore adopted a divide-and-conquer strategy. They would stage raids into the North Sea and bombard the English coast, with the aim of luring out small British squadrons and pickets, which could then be destroyed by superior forces or submarines.
<P> Until the King George V class, also at 28 knots, no British battleship was fast enough to catch the new German battleship "Bismarck". Her mission was to evade action and make for the open seas to attack convoys. "Hood" was needed to stop her. The navy recognized that "Hood" needed to be rebuilt to strengthen her decks to protect the vulnerable magazines but by 1938, with war threatening, the Admiralty felt that they could not risk taking her out of commission. Britain had only three battlecruisers to match the three German pocket battleships.
<P> "Bismarck" had been hit only two (or perhaps three) times but Admiral Lütjens overruled "Bismarck"s Captain Ernst Lindemann who wanted to pursue the damaged "Prince of Wales" and finish her off. All of the hits on "Bismarck" had been inflicted by "Prince of Wales" guns. One of the hits had penetrated the German battleship's hull near the bow, rupturing some of her fuel tanks, causing her to leak oil continuously and at a serious rate. This was to be a critical factor as the pursuit continued, forcing "Bismarck" to make for Brest instead of escaping into the great expanse of the Atlantic. The resulting oil slick also helped the British cruisers to shadow her.
<P> In 1941 one of the four modern German battleships, sank while breaking out into the Atlantic for commerce raiding. "Bismarck" was in turn hunted down by much superior British forces after being crippled by an air-launched torpedo. She was subsequently scuttled after being rendered a burning wreck by two British battleships.
<P> German naval historian Erich Gröner, in his book "German Warships 1815–1945", stated that the German navy considered the ships to be "very good sea-boats." They suffered a slight loss of speed in a swell, and with the rudder hard over, the ships lost up to 66% speed and heeled over 8 degrees. The battleships had a transverse metacentric height of 2.59 m. "König", "Grosser Kurfürst", "Markgraf", and "Kronprinz" each had a standard crew of 41 officers and 1095 enlisted men; "König", the flagship of the III Squadron, had an additional crew of 14 officers and another 68 sailors. The ships carried several smaller boats, including one picket boat, three barges, two launches, two yawls, and two dinghies.
<P> The British public were shocked that their most emblematic warship and more than 1,400 of her crew had been destroyed so suddenly. The Admiralty mobilised every available warship in the Atlantic to hunt down and destroy "Bismarck". The Royal Navy forces pursued and brought "Bismarck" to battle. The German battleship was sunk on the morning of 27 May.
<P> In one scene, Lütjens speculates that after "Bismarck" has undergone repair in Brest, the two German battleships based there, "Gneisenau" and "Scharnhorst", could join "Bismarck" in raiding Allied shipping. There is no record of such a discussion at that time, although it would have been possible for "Bismarck" to sortie with the two battleships if "Bismarck" had reached the port.
| answer: *Bismarck* was a major threat, because of what it could do, and what preventing that meant for the Royal Navy. She was a major threat to the Atlantic convoy routes that Britain was relying upon for much of its food, as well as steel and other strategic materials. In 1941, Britain would import 14.7 million tonnes of food, and 15 million tonnes of raw materials. Losses to shipping on these routes, mainly from U-boats, were high - 5 million tonnes would be lost in 1941 alone, and British shipbuilding couldn't replace this (see [this chapter](_URL_0_) from British War Economy by Hancock and Gowing). *Bismarck* represented a further threat to these routes, as she could wipe out any convoy she ran into if it wasn't escorted by a battleship. Other surface raiders, such as the cruiser *Admiral Scheer* and battlecruisers *Scharnhorst* and *Gneisenau* were greatly effective - *Scheer* would sink over 100,000 tonnes on a single cruise in 1940-41, while *Scharnhorst* and *Gneisenau* would sink a similar amount between them during Operation Berlin. There's little reason to believe that *Bismarck* wouldn't have been as capable. Of course, the RN could have escorted the convoys with battleships, and did as a matter of routine when German surface raiders were out. However, this took them away from areas where the Admiralty would have preferred them to be, such as the Mediterranean. It also exposed them to submarine attack. There was also the threat that *Bismarck* could join up with *Scharnhorst* and *Gneisenau* at Brest. There, they would form a force that the British couldn't effectively counter with additional convoy escorts, but would instead have to be hunted down, a far more difficult task. |
44,643 | 1450vm | What are some books and other resources for studying Germany's tank divisions and their actions from WW2? | Firstly, in most languages one who works in a tank is called a "tanker" or "Crewman". Indispensable: _URL_0_ Memoir of German tank ace "Otto Carius". I've read it, it's great. He was shot ~11 times and is still alive. | [
"Firstly, in most languages one who works in a tank is called a \"tanker\" or \"Crewman\".\n\nIndispensable: _URL_0_\n\n\nMemoir of German tank ace \"Otto Carius\". I've read it, it's great. He was shot ~11 times and is still alive.",
"Maybe these books can be of help:\n\n[By Tank Into Normandy - Stuart Hills](_... | 3 | [
"Firstly, in most languages one who works in a tank is called a \"tanker\" or \"Crewman\".\n\nIndispensable: _URL_0_\n\n\nMemoir of German tank ace \"Otto Carius\". I've read it, it's great. He was shot ~11 times and is still alive."
] | 1 | <P> BULLET::::- Chamberlain, Peter, and Hilary L. Doyle. Thomas L. Jentz (Technical Editor). "Encyclopedia of German Tanks of World War Two: A Complete Illustrated Directory of German Battle Tanks, Armoured Cars, Self-propelled Guns, and Semi-tracked Vehicles, 1933–1945," London: Arms and Armour Press, 1978 (revised edition 1993).
<P> BULLET::::- Chamberlain, Peter, and Hilary L. Doyle. Thomas L. Jentz (Technical Editor). "Encyclopedia of German Tanks of World War Two: A Complete Illustrated Directory of German Battle Tanks, Armoured Cars, Self-propelled Guns, and Semi-tracked Vehicles, 1933–1945". London: Arms and Armour Press, 1978 (revised edition 1993).
<P> BULLET::::- Chamberlain, Peter, and Hilary L. Doyle. Thomas L. Jentz (Technical Editor). "Encyclopedia of German Tanks of World War Two: A Complete Illustrated Directory of German Battle Tanks, Armoured Cars, Self-propelled Guns, and Semi-tracked Vehicles, 1933–1945". London: Arms and Armour Press, 1978 (revised edition 1993).
<P> BULLET::::- Chamberlain, Peter, and Hilary L. Doyle. Thomas L. Jentz (Technical Editor). "Encyclopedia of German Tanks of World War Two: A Complete Illustrated Directory of German Battle Tanks, Armoured Cars, Self-propelled Guns, and Semi-tracked Vehicles, 1933–1945". London: Arms and Armour Press, 1978 (revised edition 1993).
<P> BULLET::::- Chamberlain, Peter, and Hilary L. Doyle. Thomas L. Jentz (Technical Editor). "Encyclopedia of German Tanks of World War Two: A Complete Illustrated Directory of German Battle Tanks, Armoured Cars, Self-propelled Guns, and Semi-tracked Vehicles, 1933–1945". London: Arms and Armour Press, 1978 (revised edition 1993).
<P> BULLET::::- Chamberlain, Peter, and Hilary L. Doyle. Thomas L. Jentz (Technical Editor). "Encyclopedia of German Tanks of World War Two: A Complete Illustrated Directory of German Battle Tanks, Armoured Cars, Self-propelled Guns, and Semi-tracked Vehicles, 1933–1945". London: Arms and Armour Press, 1978 (revised edition 1993).
<P> BULLET::::- Chamberlain, Peter, and Hilary L. Doyle. Thomas L. Jentz (Technical Editor). "Encyclopedia of German Tanks of World War Two: A Complete Illustrated Directory of German Battle Tanks, Armoured Cars, Self-propelled Guns, and Semi-tracked Vehicles, 1933–1945". London: Arms and Armour Press, 1978 (revised edition 1993).
| question: What are some books and other resources for studying Germany's tank divisions and their actions from WW2? context: <P> BULLET::::- Chamberlain, Peter, and Hilary L. Doyle. Thomas L. Jentz (Technical Editor). "Encyclopedia of German Tanks of World War Two: A Complete Illustrated Directory of German Battle Tanks, Armoured Cars, Self-propelled Guns, and Semi-tracked Vehicles, 1933–1945," London: Arms and Armour Press, 1978 (revised edition 1993).
<P> BULLET::::- Chamberlain, Peter, and Hilary L. Doyle. Thomas L. Jentz (Technical Editor). "Encyclopedia of German Tanks of World War Two: A Complete Illustrated Directory of German Battle Tanks, Armoured Cars, Self-propelled Guns, and Semi-tracked Vehicles, 1933–1945". London: Arms and Armour Press, 1978 (revised edition 1993).
<P> BULLET::::- Chamberlain, Peter, and Hilary L. Doyle. Thomas L. Jentz (Technical Editor). "Encyclopedia of German Tanks of World War Two: A Complete Illustrated Directory of German Battle Tanks, Armoured Cars, Self-propelled Guns, and Semi-tracked Vehicles, 1933–1945". London: Arms and Armour Press, 1978 (revised edition 1993).
<P> BULLET::::- Chamberlain, Peter, and Hilary L. Doyle. Thomas L. Jentz (Technical Editor). "Encyclopedia of German Tanks of World War Two: A Complete Illustrated Directory of German Battle Tanks, Armoured Cars, Self-propelled Guns, and Semi-tracked Vehicles, 1933–1945". London: Arms and Armour Press, 1978 (revised edition 1993).
<P> BULLET::::- Chamberlain, Peter, and Hilary L. Doyle. Thomas L. Jentz (Technical Editor). "Encyclopedia of German Tanks of World War Two: A Complete Illustrated Directory of German Battle Tanks, Armoured Cars, Self-propelled Guns, and Semi-tracked Vehicles, 1933–1945". London: Arms and Armour Press, 1978 (revised edition 1993).
<P> BULLET::::- Chamberlain, Peter, and Hilary L. Doyle. Thomas L. Jentz (Technical Editor). "Encyclopedia of German Tanks of World War Two: A Complete Illustrated Directory of German Battle Tanks, Armoured Cars, Self-propelled Guns, and Semi-tracked Vehicles, 1933–1945". London: Arms and Armour Press, 1978 (revised edition 1993).
<P> BULLET::::- Chamberlain, Peter, and Hilary L. Doyle. Thomas L. Jentz (Technical Editor). "Encyclopedia of German Tanks of World War Two: A Complete Illustrated Directory of German Battle Tanks, Armoured Cars, Self-propelled Guns, and Semi-tracked Vehicles, 1933–1945". London: Arms and Armour Press, 1978 (revised edition 1993).
| answer: Firstly, in most languages one who works in a tank is called a "tanker" or "Crewman". Indispensable: _URL_0_ Memoir of German tank ace "Otto Carius". I've read it, it's great. He was shot ~11 times and is still alive. |
77,386 | 2bnq4r | How are meteorites dated if they don't develop in Earth's conditions? | This is a great question and it turns out the exact same techniques work for both terrestrial and extra-terrestrial samples assuming they all come from our solar system. There is a technique for dating where you need different minerals from the same rock (or meteorite) and then you can setup an isochron where you plot the isotope ratio of the daughter (e.g., 206Pb) to a primordial isotope of the daughter element (something with no radioactive decay input so 204Pb) versus the ratio of the parent isotope (so 238U) to the 204Pb. If these plot on a line the slope of that line is the age of the sample (basically things that have higher 238U/204Pb should have higher 206Pb/204Pb because the ingrowth is faster) and from this the intercept tells you the original 206Pb/204Pb ratio the sample had. This is the safest and best way to date samples where you don't know the isotope composition of the daughter ahead of time. Now you can also go another route and try to find phases to date where the initial amount of daughter (e.g., Pb) is so low that you don't need to correct for it. For example if you find a mineral called zircon in your meteorite you generally can just date it without correcting for initial Pb because it was so low. I hope that helps. | [
"This is a great question and it turns out the exact same techniques work for both terrestrial and extra-terrestrial samples assuming they all come from our solar system.\n\nThere is a technique for dating where you need different minerals from the same rock (or meteorite) and then you can setup an isochron where y... | 1 | [
"This is a great question and it turns out the exact same techniques work for both terrestrial and extra-terrestrial samples assuming they all come from our solar system.\n\nThere is a technique for dating where you need different minerals from the same rock (or meteorite) and then you can setup an isochron where y... | 1 | <P> Most meteorites date from the oldest times in the solar system and are by far the oldest material available on the planet. Despite their age, they are fairly vulnerable to terrestrial environment: water, salt, and oxygen attack the meteorites as soon they reach the ground.
<P> The oldest inclusions found in meteorites, thought to trace the first solid material to form in the pre-solar nebula, are 4568.2 million years old, which is one definition of the age of the Solar System. Studies of ancient meteorites reveal traces of stable daughter nuclei of short-lived isotopes, such as iron-60, that only form in exploding, short-lived stars. This indicates that one or more supernovae occurred near the Sun while it was forming. A shock wave from a supernova may have triggered the formation of the Sun by creating relatively dense regions within the cloud, causing these regions to collapse. Because only massive, short-lived stars produce supernovae, the Sun must have formed in a large star-forming region that produced massive stars, possibly similar to the Orion Nebula. Studies of the structure of the Kuiper belt and of anomalous materials within it suggest that the Sun formed within a cluster of between 1,000 and 10,000 stars with a diameter of between 6.5 and 19.5 light years and a collective mass of . This cluster began to break apart between 135 million and 535 million years after formation. Several simulations of our young Sun interacting with close-passing stars over the first 100 million years of its life produce anomalous orbits observed in the outer Solar System, such as detached objects.
<P> Meteorite weathering is the terrestrial alteration of a meteorite. Most meteorites date from the oldest times in the Solar System and are by far the oldest material available on our planet. Despite their age, they are vulnerable to the terrestrial environment. Water, chlorine and oxygen attack meteorites as soon they reach the ground.
<P> Studies have shown it to be the oldest discovered meteorite impacting the Earth during the Quaternary Period, about one million years ago. It is quite clearly part of the iron core or mantle of a planetoid, which shattered into many pieces upon its fall on our planet. Since landing on Earth the meteorite has experienced four ice ages. It was unearthed from a glacial moraine in the northern tundra. It has a strongly weathered surface covered with cemented faceted pebbles.
<P> Older rocks could be found, however, in the form of asteroid fragments that fall to Earth as meteorites. Like the rocks on Earth, asteroids also show a strong cutoff point, at about 4.6 Ga, which is assumed to be the time when the first solids formed in the protoplanetary disk around the then-young Sun. The Hadean, then, was the period of time between the formation of these early rocks in space, and the eventual solidification of the Earth's crust, some 700 million years later. This time would include the accretion of the planets from the disk and the slow cooling of the Earth into a solid body as the gravitational potential energy of accretion was released.
<P> The majority of SNC meteorites are quite young compared to most other meteorites and seem to imply that volcanic activity was present on Mars only a few hundred million years ago. The young formation ages of Martian meteorites was one of the early recognized characteristics that suggested their origin from a planetary body such as Mars. Among Martian meteorites, only ALH 84001 and NWA 7034 have radiometric ages older than about 1400 Ma (Ma = million years). All nakhlites, as well as Chassigny and NWA 2737, give similar if not identical formation ages around 1300 Ma, as determined by various radiometric dating techniques. Formation ages determined for many shergottites are variable and much younger, mostly ~150-575 Ma. The chronological history of shergottites is not totally understood, and a few scientists have suggested that some may actually have formed prior to the times given by their radiometric ages, a suggestion not accepted by most scientists. Formation ages of SNC meteorites are often linked to their cosmic-ray exposure (CRE) ages, as measured from the nuclear products of interactions of the meteorite in space with energetic cosmic ray particles. Thus, all measured nakhlites give essentially identical CRE ages of approximately 11 Ma, which when combined with their possible identical formation ages indicates ejection of nakhlites into space from a single location on Mars by a single impact event. Some of the shergottites also seem to form distinct groups according to their CRE ages and formation ages, again indicating ejection of several different shergottites from Mars by a single impact. However, CRE ages of shergottites vary considerably (~0.5–19 Ma), and several impact events are required to eject all the known shergottites. It had been asserted that there are no large young craters on Mars that are candidates as sources for the Martian meteorites, but subsequent studies claimed to have a likely source for ALH 84001 and a possible source for other shergottites.
<P> The meteorite was found to contain some of the oldest material in the solar system. Two 10-micron diamond grains (xenoliths) were found in the meteorite recovered before the rain fell. In primitive meteorites like Sutter's Mill, some grains survived from what existed in the cloud of gas, dust and ice that formed the solar system.
| question: How are meteorites dated if they don't develop in Earth's conditions? context: <P> Most meteorites date from the oldest times in the solar system and are by far the oldest material available on the planet. Despite their age, they are fairly vulnerable to terrestrial environment: water, salt, and oxygen attack the meteorites as soon they reach the ground.
<P> The oldest inclusions found in meteorites, thought to trace the first solid material to form in the pre-solar nebula, are 4568.2 million years old, which is one definition of the age of the Solar System. Studies of ancient meteorites reveal traces of stable daughter nuclei of short-lived isotopes, such as iron-60, that only form in exploding, short-lived stars. This indicates that one or more supernovae occurred near the Sun while it was forming. A shock wave from a supernova may have triggered the formation of the Sun by creating relatively dense regions within the cloud, causing these regions to collapse. Because only massive, short-lived stars produce supernovae, the Sun must have formed in a large star-forming region that produced massive stars, possibly similar to the Orion Nebula. Studies of the structure of the Kuiper belt and of anomalous materials within it suggest that the Sun formed within a cluster of between 1,000 and 10,000 stars with a diameter of between 6.5 and 19.5 light years and a collective mass of . This cluster began to break apart between 135 million and 535 million years after formation. Several simulations of our young Sun interacting with close-passing stars over the first 100 million years of its life produce anomalous orbits observed in the outer Solar System, such as detached objects.
<P> Meteorite weathering is the terrestrial alteration of a meteorite. Most meteorites date from the oldest times in the Solar System and are by far the oldest material available on our planet. Despite their age, they are vulnerable to the terrestrial environment. Water, chlorine and oxygen attack meteorites as soon they reach the ground.
<P> Studies have shown it to be the oldest discovered meteorite impacting the Earth during the Quaternary Period, about one million years ago. It is quite clearly part of the iron core or mantle of a planetoid, which shattered into many pieces upon its fall on our planet. Since landing on Earth the meteorite has experienced four ice ages. It was unearthed from a glacial moraine in the northern tundra. It has a strongly weathered surface covered with cemented faceted pebbles.
<P> Older rocks could be found, however, in the form of asteroid fragments that fall to Earth as meteorites. Like the rocks on Earth, asteroids also show a strong cutoff point, at about 4.6 Ga, which is assumed to be the time when the first solids formed in the protoplanetary disk around the then-young Sun. The Hadean, then, was the period of time between the formation of these early rocks in space, and the eventual solidification of the Earth's crust, some 700 million years later. This time would include the accretion of the planets from the disk and the slow cooling of the Earth into a solid body as the gravitational potential energy of accretion was released.
<P> The majority of SNC meteorites are quite young compared to most other meteorites and seem to imply that volcanic activity was present on Mars only a few hundred million years ago. The young formation ages of Martian meteorites was one of the early recognized characteristics that suggested their origin from a planetary body such as Mars. Among Martian meteorites, only ALH 84001 and NWA 7034 have radiometric ages older than about 1400 Ma (Ma = million years). All nakhlites, as well as Chassigny and NWA 2737, give similar if not identical formation ages around 1300 Ma, as determined by various radiometric dating techniques. Formation ages determined for many shergottites are variable and much younger, mostly ~150-575 Ma. The chronological history of shergottites is not totally understood, and a few scientists have suggested that some may actually have formed prior to the times given by their radiometric ages, a suggestion not accepted by most scientists. Formation ages of SNC meteorites are often linked to their cosmic-ray exposure (CRE) ages, as measured from the nuclear products of interactions of the meteorite in space with energetic cosmic ray particles. Thus, all measured nakhlites give essentially identical CRE ages of approximately 11 Ma, which when combined with their possible identical formation ages indicates ejection of nakhlites into space from a single location on Mars by a single impact event. Some of the shergottites also seem to form distinct groups according to their CRE ages and formation ages, again indicating ejection of several different shergottites from Mars by a single impact. However, CRE ages of shergottites vary considerably (~0.5–19 Ma), and several impact events are required to eject all the known shergottites. It had been asserted that there are no large young craters on Mars that are candidates as sources for the Martian meteorites, but subsequent studies claimed to have a likely source for ALH 84001 and a possible source for other shergottites.
<P> The meteorite was found to contain some of the oldest material in the solar system. Two 10-micron diamond grains (xenoliths) were found in the meteorite recovered before the rain fell. In primitive meteorites like Sutter's Mill, some grains survived from what existed in the cloud of gas, dust and ice that formed the solar system.
| answer: This is a great question and it turns out the exact same techniques work for both terrestrial and extra-terrestrial samples assuming they all come from our solar system. There is a technique for dating where you need different minerals from the same rock (or meteorite) and then you can setup an isochron where you plot the isotope ratio of the daughter (e.g., 206Pb) to a primordial isotope of the daughter element (something with no radioactive decay input so 204Pb) versus the ratio of the parent isotope (so 238U) to the 204Pb. If these plot on a line the slope of that line is the age of the sample (basically things that have higher 238U/204Pb should have higher 206Pb/204Pb because the ingrowth is faster) and from this the intercept tells you the original 206Pb/204Pb ratio the sample had. This is the safest and best way to date samples where you don't know the isotope composition of the daughter ahead of time. Now you can also go another route and try to find phases to date where the initial amount of daughter (e.g., Pb) is so low that you don't need to correct for it. For example if you find a mineral called zircon in your meteorite you generally can just date it without correcting for initial Pb because it was so low. I hope that helps. |
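A minimal worked form of the isochron relation sketched in the answer above, assuming the standard 238U to 206Pb decay system (the slope symbol m and the decay constant \lambda_{238} are introduced here only for illustration and do not appear in the original answer):

\[
\frac{^{206}\mathrm{Pb}}{^{204}\mathrm{Pb}} \;=\; \left(\frac{^{206}\mathrm{Pb}}{^{204}\mathrm{Pb}}\right)_{0} \;+\; \frac{^{238}\mathrm{U}}{^{204}\mathrm{Pb}}\left(e^{\lambda_{238}\,t} - 1\right),
\qquad
t \;=\; \frac{\ln(1 + m)}{\lambda_{238}}
\]

where m is the fitted slope of the isochron line (the intercept is the initial 206Pb/204Pb ratio) and \lambda_{238} \approx 1.55 \times 10^{-10}\ \mathrm{yr}^{-1}. As a sanity check, a measured slope of m = 1 gives t = \ln(2)/\lambda_{238} \approx 4.47 billion years, i.e. exactly one 238U half-life.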
122,334 | 58ok9q | When plays were preformed in classical Greece and Rome would the audience have shouted and jeered at the stage like they did at medieval and Renaissance plays? | The crowds at ancient theatrical performances could be outright brutal, in the most literal sense of the word. Aristotle mentions the actor who played Amphiaraus in one of Carcinus' tragedies was thrown from the stage because the scene being portrayed upset the audience with its inconsistency. Aristotle uses ἐξέπεσεν, a verb that probably implies not physical ejection, but that the actor was forced to withdraw after being hissed at--Demosthenes explicitly connects the verb with συρίζω, the verb to hiss. There's an anecdote in Pollux about a certain Hermon, a comic actor who was called on the stage early, in the middle of his vocal exercises, because all the actors before him had been thrown out. Plato in the *Republic* compares the behavior of the crowd at the theater to its behavior at other large gatherings, including the assembly and law courts--in the *Apology* Socrates repeatedly has to ask the crowd at his trial not to shout out at him when he thinks he's about to say something unpopular or ridiculous. Aristophanes himself refers to the crowd hounding a fellow playwright of his, Crates, with hisses and anger. Indeed, showing approval or lack thereof at an Athenian dramatic performance was somewhat expected, as it could influence the judges--in the *Ecclesiazusae* Aristophanes appeals to anyone in the audience who liked the play and the judges together to judge his play well. Besides that audiences would bang on their benches (the Theater of Dionysus was not fully built from stone until the 4th Century), throw fruit, vegetables--even stones. Demosthenes accused Aeschines of surviving during his time as an actor by eating the figs, grapes, and olives thrown at him by the audiences at his performances. Roman audiences were no better. During Caesar's consulship, at the height of the triumvirs' unpopularity, the crowd threw out a gladiatorial exhibition with hissing (*sibilis*) and forced the poor actor Diphilus to repeat the line *nostra miseria tu es magnus* (by our misery are you great--an attack on Pompey) over and over. On the same occasion the crowd refused to applaud Caesar but applauded wildly when Curio entered. In the prologue to the *Andria* asks his audience please to hiss the play off the stage until it's finished. Roman theatrical performances could be quite nasty--before there were circus factions there were theater gangs. Tacitus mentions one of the leaders of the Pannonian mutiny under Tiberius as being the leader of one of these *operae* and we know of incidents when the audience began fighting each other over whether particular actors were any good | [
"The crowds at ancient theatrical performances could be outright brutal, in the most literal sense of the word. Aristotle mentions the actor who played Amphiaraus in one of Carcinus' tragedies was thrown from the stage because the scene being portrayed upset the audience with its inconsistency. Aristotle uses ἐξέπε... | 1 | [
"The crowds at ancient theatrical performances could be outright brutal, in the most literal sense of the word. Aristotle mentions the actor who played Amphiaraus in one of Carcinus' tragedies was thrown from the stage because the scene being portrayed upset the audience with its inconsistency. Aristotle uses ἐξέπε... | 1 | <P> Before the Roman texts started being used for inspiration, there were many story-telling elements that the Latin writers had not figured out yet. There used to be a disregard for dramatic unity. Latin plays had a tendency to jump from scene to scene in a disjunct manner. It was jarring and hard for audiences to follow. Sometimes there were scenes written that they later realized were impossible to do on stage. Dialogue also had a tendency to be stiff and lengthy. Practical jokes were very popular in these shows, but they were only very simple tricks.
<P> Plays of the ancient Greek theatre always included a chorus that offered a variety of background and summary information to help the audience follow the performance. They commented on themes, and, as August Wilhelm Schlegel proposed in the early 19th century to subsequent controversy, demonstrated how the audience might react to the drama. According to Schlegel, the Chorus is "the ideal spectator", and conveys to the actual spectator "a lyrical and musical expression of his own emotions, and elevates him to the region of contemplation". In many of these plays, the chorus expressed to the audience what the main characters could not say, such as their hidden fears or secrets. The chorus often provided other characters with the insight they needed.
<P> In 1914, the Istituto Nazionale del Dramma Antico (INDA) began the annual performance of Greek drama in the ancient theatre (the first was the tragedy "Agamemnon" of Aeschylus, directed by Ettore Romagnoli). The ancient Greek tragedies are performed at sunset, in Italian (with translations by famous writers such as Salvatore Quasimodo), without sound systems because of the quality of the theatre's acoustics. Each theatre season begins in May and ends in July, attracting thousands of spectators from all over the world. Some of the most illustrious performed tragedies are "Antigone", "Oedipus Rex", "Electra", "Medea" and "The Bacchae". Aside from this, the theatre has enjoyed use for concerts and official prizegivings, like the Premio Vittorini, but such use has been tightly limited for conservation reasons.
<P> Plays were performed in medieval times in a form of theatre called "Commedia dell'arte", which used music and sound effects to enhance performances. The use of music and sound in the Elizabethan Theatre followed, in which music and sound effects were produced off stage using devices such as bells, whistles, and horns. Cues would be written in the script for music and sound effects to be played at the appropriate time.
<P> They often communicated in song form, but sometimes spoke their lines in unison. The chorus had to work in unison to help explain the play as there were only one to three actors on stage who were already playing several parts each. As the Greek theatres were so large, the chorus' actions had to be exaggerated and their voices clear so that everyone could see and hear them. To do this, they used techniques such as synchronization, echo, ripple, physical theatre and the use of masks to aid them. A Greek chorus was often led by a coryphaeus. They also served as the ancient equivalent for a curtain, as their parodos (entering procession) signified the beginnings of a play and their exodos (exit procession) served as the curtains closing.
<P> Beginning with Friedrich von Schlegel, many have argued that the tragedies of Seneca the Younger in the first century AD were written to be recited at small parties rather than performed. Although that theory has become widely pervasive in the history of theater, there is no evidence to support the contention that his plays were intended to be read or recited at small gatherings of the wealthy. The emperor Nero, a pupil of Seneca, may have performed in some of them. Some of the drama of the Middle Ages was of the closet-drama type, such as the drama of Hroswitha of Gandersheim and debate poems in quasi-dramatic form.
<P> During the Italian Renaissance, there was a renewed interest in the theatre of ancient Greece. The Florentine Camerata crafted the first operas out of the intermezzi that acted as comic or musical relief during the dramas of the time. These were based entirely on the Greek chorus, as historian H.C. Montgomery argues.
| question: When plays were preformed in classical Greece and Rome would the audience have shouted and jeered at the stage like they did at medieval and Renaissance plays? context: <P> Before the Roman texts started being used for inspiration, there were many story-telling elements that the Latin writers had not figured out yet. There used to be a disregard for dramatic unity. Latin plays had a tendency to jump from scene to scene in a disjunct manner. It was jarring and hard for audiences to follow. Sometimes there were scenes written that they later realized were impossible to do on stage. Dialogue also had a tendency to be stiff and lengthy. Practical jokes were very popular in these shows, but they were only very simple tricks.
<P> Plays of the ancient Greek theatre always included a chorus that offered a variety of background and summary information to help the audience follow the performance. They commented on themes, and, as August Wilhelm Schlegel proposed in the early 19th century to subsequent controversy, demonstrated how the audience might react to the drama. According to Schlegel, the Chorus is "the ideal spectator", and conveys to the actual spectator "a lyrical and musical expression of his own emotions, and elevates him to the region of contemplation". In many of these plays, the chorus expressed to the audience what the main characters could not say, such as their hidden fears or secrets. The chorus often provided other characters with the insight they needed.
<P> In 1914, the Istituto Nazionale del Dramma Antico (INDA) began the annual performance of Greek drama in the ancient theatre (the first was the tragedy "Agamemnon" of Aeschylus, directed by Ettore Romagnoli). The ancient Greek tragedies are performed at sunset, in Italian (with translations by famous writers such as Salvatore Quasimodo), without sound systems because of the quality of the theatre's acoustics. Each theatre season begins in May and ends in July, attracting thousands of spectators from all over the world. Some of the most illustrious performed tragedies are "Antigone", "Oedipus Rex", "Electra", "Medea" and "The Bacchae". Aside from this, the theatre has enjoyed use for concerts and official prizegivings, like the Premio Vittorini, but such use has been tightly limited for conservation reasons.
<P> Plays were performed in medieval times in a form of theatre called "Commedia dell'arte", which used music and sound effects to enhance performances. The use of music and sound in the Elizabethan Theatre followed, in which music and sound effects were produced off stage using devices such as bells, whistles, and horns. Cues would be written in the script for music and sound effects to be played at the appropriate time.
<P> They often communicated in song form, but sometimes spoke their lines in unison. The chorus had to work in unison to help explain the play as there were only one to three actors on stage who were already playing several parts each. As the Greek theatres were so large, the chorus' actions had to be exaggerated and their voices clear so that everyone could see and hear them. To do this, they used techniques such as synchronization, echo, ripple, physical theatre and the use of masks to aid them. A Greek chorus was often led by a coryphaeus. They also served as the ancient equivalent for a curtain, as their parodos (entering procession) signified the beginnings of a play and their exodos (exit procession) served as the curtains closing.
<P> Beginning with Friedrich von Schlegel, many have argued that the tragedies of Seneca the Younger in the first century AD were written to be recited at small parties rather than performed. Although that theory has become widely pervasive in the history of theater, there is no evidence to support the contention that his plays were intended to be read or recited at small gatherings of the wealthy. The emperor Nero, a pupil of Seneca, may have performed in some of them. Some of the drama of the Middle Ages was of the closet-drama type, such as the drama of Hroswitha of Gandersheim and debate poems in quasi-dramatic form.
<P> During the Italian Renaissance, there was a renewed interest in the theatre of ancient Greece. The Florentine Camerata crafted the first operas out of the intermezzi that acted as comic or musical relief during the dramas of the time. These were based entirely on the Greek chorus, as historian H.C. Montgomery argues.
| answer: The crowds at ancient theatrical performances could be outright brutal, in the most literal sense of the word. Aristotle mentions the actor who played Amphiaraus in one of Carcinus' tragedies was thrown from the stage because the scene being portrayed upset the audience with its inconsistency. Aristotle uses ἐξέπεσεν, a verb that probably implies not physical ejection, but that the actor was forced to withdraw after being hissed at--Demosthenes explicitly connects the verb with συρίζω, the verb to hiss. There's an anecdote in Pollux about a certain Hermon, a comic actor who was called on the stage early, in the middle of his vocal exercises, because all the actors before him had been thrown out. Plato in the *Republic* compares the behavior of the crowd at the theater to its behavior at other large gatherings, including the assembly and law courts--in the *Apology* Socrates repeatedly has to ask the crowd at his trial not to shout out at him when he thinks he's about to say something unpopular or ridiculous. Aristophanes himself refers to the crowd hounding a fellow playwright of his, Crates, with hisses and anger. Indeed, showing approval or lack thereof at an Athenian dramatic performance was somewhat expected, as it could influence the judges--in the *Ecclesiazusae* Aristophanes appeals to anyone in the audience who liked the play and the judges together to judge his play well. Besides that audiences would bang on their benches (the Theater of Dionysus was not fully built from stone until the 4th Century), throw fruit, vegetables--even stones. Demosthenes accused Aeschines of surviving during his time as an actor by eating the figs, grapes, and olives thrown at him by the audiences at his performances. Roman audiences were no better. During Caesar's consulship, at the height of the triumvirs' unpopularity, the crowd threw out a gladiatorial exhibition with hissing (*sibilis*) and forced the poor actor Diphilus to repeat the line *nostra miseria tu es magnus* (by our misery are you great--an attack on Pompey) over and over. On the same occasion the crowd refused to applaud Caesar but applauded wildly when Curio entered. In the prologue to the *Andria* asks his audience please to hiss the play off the stage until it's finished. Roman theatrical performances could be quite nasty--before there were circus factions there were theater gangs. Tacitus mentions one of the leaders of the Pannonian mutiny under Tiberius as being the leader of one of these *operae* and we know of incidents when the audience began fighting each other over whether particular actors were any good |
106,374 | 5t2nqf | What audio or video recording interviews an eye witness to a historical event or period that happened the furthest in the past? | Not a direct answer, but something that might help is to browse the National Archives. [Here is a collection of Documentary/Political recordings made by Thomas Edison between 1888 and 1927](_URL_0_). One that sticks out is Shackleton recounting his journey to the South Pole in 1908, which is the earliest recording I could find that fits your criteria. Keep in mind that before magnetic tape became widely available after WWII, audio recordings were really terrible methods of storing information. The methods they used were prone to degradation, couldn't record for very long times, and had awful fidelity. Most of the recordings you hear today from that era have been processed using advanced forensic audio techniques and have higher fidelity than the originals. | [
"Not a direct answer, but something that might help is to browse the National Archives.\n\n[Here is a collection of Documentary/Political recordings made by Thomas Edison between 1888 and 1927](_URL_0_). \n\nOne that sticks out is Shackleton recounting his journey to the South Pole in 1908, which is the earliest re... | 1 | [
"Not a direct answer, but something that might help is to browse the National Archives.\n\n[Here is a collection of Documentary/Political recordings made by Thomas Edison between 1888 and 1927](_URL_0_). \n\nOne that sticks out is Shackleton recounting his journey to the South Pole in 1908, which is the earliest re... | 1 | <P> The archive pioneered the usage of video testimonies to record eyewitness accounts of major historical events. It has served as the primary inspiration for video testimony projects documenting other state-sanctioned crimes against humanity and their aftermaths.
<P> The Fortunoff Archive pioneered the usage of video testimonies to record eyewitness accounts of major historical events. Prior to the existence of the Archive, researchers relied on audio and written testimonies. The Archive has served as the primary inspiration for video testimony projects documenting the Cambodian genocide, ethnic cleansing in the former Yugoslavia and other crimes against humanity.
<P> The Oral History strand began in November 2003, and aims to interview as many people as possible who visited or worked in the theatre between 1945 and 1968. The original recordings may be consulted via the British Library Archival Sound Recordings where full, searchable transcripts are available. Over 250 interviews have been added to the site, and interviewees include Frith Banbury, Michael Frayn, Trevor Griffiths, Glenda Jackson, Ann Jellicoe, Ian McDiarmid, Peter Nichols, Corin Redgrave, Arnold Wesker, Timothy West.
<P> In April 2010, original transcripts of witness statements made during the preliminary hearing were rediscovered in an abandoned closet in county offices in Bisbee, Arizona, and the county said they would be preserved and digitized. Photocopies of these documents have been available to researchers since 1960, and new digitized records of the originals have been made available for online access. While the transcripts do not offer any significant deviations from generally accepted historical accounts of the gunfight itself, they were taken directly from eyewitnesses shortly afterwards and as such, they provide an interesting and unique perspective of the event.
<P> In 1946, David P. Boder, a professor of psychology at the Illinois Institute of Technology in Chicago, traveled to Europe to record long interviews with "displaced persons"—most of them Holocaust survivors. Using the first device capable of capturing hours of audio—the wire recorder—Boder came back with the first recorded Holocaust testimonials and in all likelihood the first recorded oral histories of significant length.
<P> "Witness" features a unique interactive feature where the survivors, World War II liberators, and Righteous Among the Nations included in the book, have an invisible link embedded in their image. When their image is accessed with a smart phone or other device, the reader is taken to an excerpt of their video testimony on USC Shoah Foundation Institute for Visual History and Education (created by Steven Spielberg) or March of the Living Digital Archive Project websites. Translations in several other languages have been completed and/or published with the launch of the Polish language edition taking place in November 2018 at the Polin Museum, the Spanish edition (Testimonios; traspasar la antorcha de la memoria del holocausto a las nuevas generaciones) launched in January 2019, and the Hebrew edition scheduled for release in early to mid 2019. The exhibit was on display at the Auschwitz-Birkenau State Museum until July 2016. (View March of the Living Exhibit at Auschwitz-Birkenau State Museum.)
<P> 503 witness testimonies were recorded over a period of 16 months from July 2001 to October 2002, using voice recorders and camcorders. The recordings were conducted in Jeju, Seoul, as well as in countries including Japan and the United States. Witnesses were chosen from "the damage and casualty report of the Jeju Provincial Council, newspapers, broadcast programs and collections of testimony." Further, the committee received recommended witnesses from various organizations; the committee also launched its own witness selection from ex-armed guerrillas and commanders of the suppression operations. Through this process, a list of 2,780 people was completed, out of which 500 people were screened into a final selection. Priority was given to witnesses with unusual backgrounds, those who underwent unique or specific incidents or came from a village that incurred severe damage, and those who were discovered by the committee's own investigation.
| question: What audio or video recording interviews an eye witness to a historical event or period that happened the furthest in the past? context: <P> The archive pioneered the usage of video testimonies to record eyewitness accounts of major historical events. It has served as the primary inspiration for video testimony projects documenting other state-sanctioned crimes against humanity and their aftermaths.
<P> The Fortunoff Archive pioneered the usage of video testimonies to record eyewitness accounts of major historical events. Prior to the existence of the Archive, researchers relied on audio and written testimonies. The Archive has served as the primary inspiration for video testimony projects documenting the Cambodian genocide, ethnic cleansing in the former Yugoslavia and other crimes against humanity.
<P> The Oral History strand began in November 2003, and aims to interview as many people as possible who visited or worked in the theatre between 1945 and 1968. The original recordings may be consulted via the British Library Archival Sound Recordings where full, searchable transcripts are available. Over 250 interviews have been added to the site, and interviewees include Frith Banbury, Michael Frayn, Trevor Griffiths, Glenda Jackson, Ann Jellicoe, Ian McDiarmid, Peter Nichols, Corin Redgrave, Arnold Wesker, Timothy West.
<P> In April 2010, original transcripts of witness statements made during the preliminary hearing were rediscovered in an abandoned closet in county offices in Bisbee, Arizona, and the county said they would be preserved and digitized. Photocopies of these documents have been available to researchers since 1960, and new digitized records of the originals have been made available for online access. While the transcripts do not offer any significant deviations from generally accepted historical accounts of the gunfight itself, they were taken directly from eyewitnesses shortly afterwards and as such, they provide an interesting and unique perspective of the event.
<P> In 1946, David P. Boder, a professor of psychology at the Illinois Institute of Technology in Chicago, traveled to Europe to record long interviews with "displaced persons"—most of them Holocaust survivors. Using the first device capable of capturing hours of audio—the wire recorder—Boder came back with the first recorded Holocaust testimonials and in all likelihood the first recorded oral histories of significant length.
<P> "Witness" features a unique interactive feature where the survivors, World War II liberators, and Righteous Among the Nations included in the book, have an invisible link embedded in their image. When their image is accessed with a smart phone or other device, the reader is taken to an excerpt of their video testimony on USC Shoah Foundation Institute for Visual History and Education (created by Steven Spielberg) or March of the Living Digital Archive Project websites. Translations in several other languages have been completed and/or published with the launch of the Polish language edition taking place in November 2018 at the Polin Museum, the Spanish edition (Testimonios; traspasar la antorcha de la memoria del holocausto a las nuevas generaciones) launched in January 2019, and the Hebrew edition scheduled for release in early to mid 2019. The exhibit was on display at the Auschwitz-Birkenau State Museum until July 2016. (View March of the Living Exhibit at Auschwitz-Birkenau State Museum.)
<P> 503 witness testimonies were recorded over a period of 16 months from July 2001 to October 2002, using voice recorders and camcorders. The recordings were conducted in Jeju, Seoul, as well as in countries including Japan and the United States. Witnesses were chosen from "the damage and casualty report of the Jeju Provincial Council, newspapers, broadcast programs and collections of testimony." Further, the committee received recommended witnesses from various organizations; the committee also launched its own witness selection from ex-armed guerrillas and commanders of the suppression operations. Through this process, a list of 2,780 people was completed, out of which 500 people were screened into a final selection. Priority was given to witnesses with unusual backgrounds, those who underwent unique or specific incidents or came from a village that incurred severe damage, and those who were discovered by the committee's own investigation.
| answer: Not a direct answer, but something that might help is to browse the National Archives. [Here is a collection of Documentary/Political recordings made by Thomas Edison between 1888 and 1927](_URL_0_). One that sticks out is Shackleton recounting his journey to the South Pole in 1908, which is the earliest recording I could find that fits your criteria. Keep in mind that before magnetic tape became widely available after WWII, audio recordings were really terrible methods of storing information. The methods they used were prone to degradation, couldn't record for very long times, and had awful fidelity. Most of the recordings you hear today from that era have been processed using advanced forensic audio techniques and have higher fidelity than the originals. |
90,030 | 1h8kb0 | Why did Stalin purge the old, inter-war Communists of Eastern Europe in the immediate aftermath of World War II? | Stalin's own purges of Eastern European communists - like Partisans, POWs, and other intelligentsia - in the immediate aftermath of his 'Great Patriotic War' were his attempt to eliminate any serious opposition to the formation of the Eastern Bloc. Using the NKVD and other intelligence apparatuses to subdue and repress voices of dissent or idealism that contradicted his plans for Eastern Europe would ensure that the newly formed Communist governments would not only be strictly in line with his policies, but would also be directly subservient to the Soviet State as satellite nations. Stalin most of all did not want leaders like Tito in Yugoslavia who could outwardly challenge his ultimatums and domestic policies - as Tito once said: "We study and take as an example the Soviet system, but we are developing socialism in our country in somewhat different forms. (...) No matter how much each of us loves the land of socialism, the USSR, he can in no case love his own country less." - Josip Broz Tito. Stalin used purges and the secret police of other aligned socialist states in the Eastern Bloc to isolate Tito and Yugoslavia, as Stalin held that "I will shake my little finger and there will be no more Tito." Stalin considered that without Soviet support and membership in the [Cominform](_URL_0_) there would be no more support for his leadership. Stalin also sent agents to assassinate Tito, as Stalin took Tito's belligerence towards Soviet expansionism in Eastern Europe personally. Tito once remarked after several unsuccessful attempts on his life by Stalin: "Stop sending people to kill me. We've already captured five of them, one of them with a bomb and another with a rifle. (...) If you don't stop sending killers, I'll send one to Moscow, and I won't have to send a second." - Josip Broz Tito. Leaders like Tito most certainly represented the opposite of the direction in which he wanted Eastern Europe to develop. The USSR's own experience in Barbarossa - in which Romania, Hungary, Slovakia, Croatia, and Finland attacked in conjunction with Nazi Germany - convinced them of the need to have a firmer, if not direct, hand in guiding the events and politics of Eastern Europe. Purging elements not loyal or compliant to the Soviet Union, communist or not, solidified their influence and largely removed the possibility of rebellion against their plans (Hungary rebelled in 1956). | [
"According to 'Stalin, Soviet Policy, and the Consolidation of a Communist' Bloc _URL_0_ \n: \"The experiences of the interwar years, most notably with Poland, Romania, and Hungary, and Stalin’s feelings of betrayal and humiliation when Hitler broke the Nazi-Soviet Pact and launched an all-out war against the USSR,... | 2 | [
"According to 'Stalin, Soviet Policy, and the Consolidation of a Communist' Bloc _URL_0_ \n: \"The experiences of the interwar years, most notably with Poland, Romania, and Hungary, and Stalin’s feelings of betrayal and humiliation when Hitler broke the Nazi-Soviet Pact and launched an all-out war against the USSR,... | 2 | <P> In addition, sizable resources were employed in the purge, such as in Hungary, where almost one million adults were employed to record, control, indoctrinate, spy on and sometimes kill targets of the purge. Unlike the repressions under Nazi occupation, no ongoing war existed that could bring an end to the tribulations of the Eastern Bloc, and morale severely suffered as a consequence. Because the party later had to admit the mistakes of much that occurred during the purges after Stalin's death, the purges also destroyed the moral base upon which the party operated. In doing so, the party abrogated its prior Leninist claim to moral infallibility for the working class.
<P> Stalin's purges of the 1930s affected Comintern activists living in both the Soviet Union and overseas. At Stalin's direction, the Comintern was thoroughly infused with Soviet secret police and foreign intelligence operatives and informers working under Comintern guise. One of its leaders, Mikhail Trilisser, using the pseudonym Mikhail Aleksandrovich Moskvin, was in fact chief of the foreign department of the Soviet OGPU (later the NKVD). At Stalin's orders, 133 out of 492 Comintern staff members became victims of the Great Purge. Several hundred German communists and antifascists who had either fled from Nazi Germany or were convinced to relocate in the Soviet Union were liquidated and more than a thousand were handed over to Germany. Fritz Platten died in a labor camp and the leaders of the Indian (Virendranath Chattopadhyaya or Chatto), Korean, Mexican, Iranian and Turkish communist parties were executed. Out of 11 Mongolian Communist Party leaders, only Khorloogiin Choibalsan survived. Leopold Trepper recalled these days: "In house, where the party activists of all the countries were living, no-one slept until 3 o'clock in the morning. [...] Exactly 3 o'clock the car lights began to be seen [...] we stayed near the window and waited [to find out], where the car stopped".
<P> Set against this, the purges of the Red Army leadership, in which Molotov participated, weakened the Soviet Union's defence capacity and contributed to the military disasters of 1941 and 1942, which were mostly caused by unreadiness for war. The purges also led to the dismantling of privatised agriculture and its replacement by collectivised agriculture. This left a legacy of chronic agricultural inefficiencies and under-production which the Soviet regime never fully rectified.
<P> Some relaxation of Soviet control occurred after Stalin's death in 1953 and the subsequent de-stalinization. State brutality and repression waned in the Bloc. The Red Army withdrew from the Balkans, though not from East Germany and countries needed for transit purposes. Continuing maintenance of communist power was guaranteed by the Brezhnev Doctrine, such as in the 1968 Warsaw Pact invasion of Czechoslovakia, on the grounds that a threat to the system in one country was a challenge to the alliance as a whole.
<P> The late 1930s saw purges of the Red Army leadership which occurred concurrently with Stalin's Great Purge of Soviet society. In 1936 and 1937, at the orders of Stalin, thousands of Red Army senior officers were dismissed from their commands. The purges had the objective of cleansing the Red Army of the "politically unreliable elements," mainly among higher-ranking officers. This inevitably provided a convenient pretext for the settling of personal vendettas or to eliminate competition by officers seeking the same command. Many army, corps, and divisional commanders were sacked: most were imprisoned or sent to labor camps; others were executed. Among the victims was the Red Army's primary military theorist, Marshal Mikhail Tukhachevsky, who was perceived by Stalin as a potential political rival. Officers who remained soon found all of their decisions being closely examined by political officers, even in mundane matters such as record-keeping and field training exercises. An atmosphere of fear and unwillingness to take the initiative soon pervaded the Red Army; suicide rates among junior officers rose to record levels. The purges significantly impaired the combat capabilities of the Red Army. Hoyt concludes "the Soviet defense system was damaged to the point of incompetence" and stresses "the fear in which high officers lived." Clark says, "Stalin not only cut the heart out of the army, he also gave it brain damage." Lewin identifies three serious results: the loss of experienced and well-trained senior officers; the distrust it caused among potential allies especially France; and the encouragement it gave Germany.
<P> The Great Purge was a series of campaigns of political repression and persecution in the Soviet Union orchestrated against members of the Communist Party, writers and intellectuals, peasants and ordinary citizens. In September 1937 Stalin dispatched Anastas Mikoyan, along with Georgy Malenkov and Lavrentiy Beria, with a list of 300 names to Yerevan to oversee the liquidation of the Communist Party of Armenia (CPA), which was largely made up of Old Bolsheviks. Armenian communist leaders such as Vagharshak Ter-Vahanyan and Aghasi Khanjian fell victim to the purge, the former being a defendant at the first of the Moscow Show Trials. Mikoyan tried, but failed, to save one from being executed during his trip to Armenia. That person was arrested during one of his speeches to the CPA by Beria. Over a thousand people were arrested and seven of nine members of the Armenian Politburo were sacked from office. According to one study, 4,530 people were executed by firing squad in the years 1937-38 alone, the majority of them having been accused of anti-Soviet or "counter-revolutionary" activities, for belonging to the nationalist Dashnak party, or Trotskyism.
<P> Following some dissent within ruling communist parties throughout the Eastern Bloc, especially after the 1948 Tito–Stalin split, several party purges occurred, with several hundred thousand members purged in several countries. In addition to rank-and-file member purges, prominent communists were purged, with some subjected to public show trials. These were more likely to be instigated, and sometimes orchestrated, by the Kremlin or even Stalin himself, as he had done in the earlier Moscow Trials.
| question: Why did Stalin purge the old, inter-war Communists of Eastern Europe in the immediate aftermath of World War II? context: <P> In addition, sizable resources were employed in the purge, such as in Hungary, where almost one million adults were employed to record, control, indoctrinate, spy on and sometimes kill targets of the purge. Unlike the repressions under Nazi occupation, no ongoing war existed that could bring an end to the tribulations of the Eastern Bloc, and morale severely suffered as a consequence. Because the party later had to admit the mistakes of much that occurred during the purges after Stalin's death, the purges also destroyed the moral base upon which the party operated. In doing so, the party abrogated its prior Leninist claim to moral infallibility for the working class.
<P> Stalin's purges of the 1930s affected Comintern activists living in both the Soviet Union and overseas. At Stalin's direction, the Comintern was thoroughly infused with Soviet secret police and foreign intelligence operatives and informers working under Comintern guise. One of its leaders, Mikhail Trilisser, using the pseudonym Mikhail Aleksandrovich Moskvin, was in fact chief of the foreign department of the Soviet OGPU (later the NKVD). At Stalin's orders, 133 out of 492 Comintern staff members became victims of the Great Purge. Several hundred German communists and antifascists who had either fled from Nazi Germany or were convinced to relocate in the Soviet Union were liquidated and more than a thousand were handed over to Germany. Fritz Platten died in a labor camp and the leaders of the Indian (Virendranath Chattopadhyaya or Chatto), Korean, Mexican, Iranian and Turkish communist parties were executed. Out of 11 Mongolian Communist Party leaders, only Khorloogiin Choibalsan survived. Leopold Trepper recalled these days: "In house, where the party activists of all the countries were living, no-one slept until 3 o'clock in the morning. [...] Exactly 3 o'clock the car lights began to be seen [...] we stayed near the window and waited [to find out], where the car stopped".
<P> Set against this, the purges of the Red Army leadership, in which Molotov participated, weakened the Soviet Union's defence capacity and contributed to the military disasters of 1941 and 1942, which were mostly caused by unreadiness for war. The purges also led to the dismantling of privatised agriculture and its replacement by collectivised agriculture. This left a legacy of chronic agricultural inefficiencies and under-production which the Soviet regime never fully rectified.
<P> Some relaxation of Soviet control occurred after Stalin's death in 1953 and the subsequent de-stalinization. State brutality and repression waned in the Bloc. The Red Army withdrew from the Balkans, though not from East Germany and countries needed for transit purposes. Continuing maintenance of communist power was guaranteed by the Brezhnev Doctrine, such as in the 1968 Warsaw Pact invasion of Czechoslovakia, on the grounds that a threat to the system in one country was a challenge to the alliance as a whole.
<P> The late 1930s saw purges of the Red Army leadership which occurred concurrently with Stalin's Great Purge of Soviet society. In 1936 and 1937, at the orders of Stalin, thousands of Red Army senior officers were dismissed from their commands. The purges had the objective of cleansing the Red Army of the "politically unreliable elements," mainly among higher-ranking officers. This inevitably provided a convenient pretext for the settling of personal vendettas or to eliminate competition by officers seeking the same command. Many army, corps, and divisional commanders were sacked: most were imprisoned or sent to labor camps; others were executed. Among the victims was the Red Army's primary military theorist, Marshal Mikhail Tukhachevsky, who was perceived by Stalin as a potential political rival. Officers who remained soon found all of their decisions being closely examined by political officers, even in mundane matters such as record-keeping and field training exercises. An atmosphere of fear and unwillingness to take the initiative soon pervaded the Red Army; suicide rates among junior officers rose to record levels. The purges significantly impaired the combat capabilities of the Red Army. Hoyt concludes "the Soviet defense system was damaged to the point of incompetence" and stresses "the fear in which high officers lived." Clark says, "Stalin not only cut the heart out of the army, he also gave it brain damage." Lewin identifies three serious results: the loss of experienced and well-trained senior officers; the distrust it caused among potential allies especially France; and the encouragement it gave Germany.
<P> The Great Purge was a series of campaigns of political repression and persecution in the Soviet Union orchestrated against members of the Communist Party, writers and intellectuals, peasants and ordinary citizens. In September 1937 Stalin dispatched Anastas Mikoyan, along with Georgy Malenkov and Lavrentiy Beria, with a list of 300 names to Yerevan to oversee the liquidation of the Communist Party of Armenia (CPA), which was largely made up of Old Bolsheviks. Armenian communist leaders such as Vagharshak Ter-Vahanyan and Aghasi Khanjian fell victim to the purge, the former being a defendant at the first of the Moscow Show Trials. Mikoyan tried, but failed, to save one from being executed during his trip to Armenia. That person was arrested during one of his speeches to the CPA by Beria. Over a thousand people were arrested and seven of nine members of the Armenian Politburo were sacked from office. According to one study, 4,530 people were executed by firing squad in the years 1937-38 alone, the majority of them having been accused of anti-Soviet or "counter-revolutionary" activities, for belonging to the nationalist Dashnak party, or Trotskyism.
<P> Following some dissent within ruling communist parties throughout the Eastern Bloc, especially after the 1948 Tito–Stalin split, several party purges occurred, with several hundred thousand members purged in several countries. In addition to rank-and-file member purges, prominent communists were purged, with some subjected to public show trials. These were more likely to be instigated, and sometimes orchestrated, by the Kremlin or even Stalin himself, as he had done in the earlier Moscow Trials.
| answer: Stalin's own purges of Eastern European communists - like Partisans, POWs, and other intelligentsia - in the immediate aftermath of his 'Great Patriotic War' were his attempt to eliminate any serious opposition to the formation of the Eastern Bloc. Using the NKVD and other intelligence apparatuses to subdue and repress voices of dissent or idealism that contradicted his plans for Eastern Europe would ensure that the newly formed Communist governments would not only be strictly in line with his policies, but would also be directly subservient to the Soviet State as satellite nations. Stalin most of all did not want leaders like Tito in Yugoslavia who could outwardly challenge his ultimatums and domestic policies - as Tito once said: "We study and take as an example the Soviet system, but we are developing socialism in our country in somewhat different forms. (...) No matter how much each of us loves the land of socialism, the USSR, he can in no case love his own country less." - Josip Broz Tito. Stalin used purges and the secret police of other aligned socialist states in the Eastern Bloc to isolate Tito and Yugoslavia, as Stalin held that "I will shake my little finger and there will be no more Tito." Stalin considered that without Soviet support and membership in the [Cominform](_URL_0_) there would be no more support for his leadership. Stalin also sent agents to assassinate Tito, as Stalin took Tito's belligerence towards Soviet expansionism in Eastern Europe personally. Tito once remarked after several unsuccessful attempts on his life by Stalin: "Stop sending people to kill me. We've already captured five of them, one of them with a bomb and another with a rifle. (...) If you don't stop sending killers, I'll send one to Moscow, and I won't have to send a second." - Josip Broz Tito. Leaders like Tito most certainly represented the opposite of the direction in which he wanted Eastern Europe to develop. The USSR's own experience in Barbarossa - in which Romania, Hungary, Slovakia, Croatia, and Finland attacked in conjunction with Nazi Germany - convinced them of the need to have a firmer, if not direct, hand in guiding the events and politics of Eastern Europe. Purging elements not loyal or compliant to the Soviet Union, communist or not, solidified their influence and largely removed the possibility of rebellion against their plans (Hungary rebelled in 1956). |
206,012 | 1qge38 | Do historians in general have a 20-year rule? At what point do historians start talking about more recent events in a historical context? | I think the short answer to this question is no. Historians frequently write about the recent past, frame their inquiries in the context of recent events, and seek to intervene in contemporary debates. My dissertation, for example, extends through the year 2000 because I'm interested in providing historical context for a pressing contemporary issue on the island I study.That being said, putting some years between the historian and historical actors can be very helpful, particularly in certain types of research. Historians who rely on declassified documents as primary material may have to wait for those documents to be made available. Historians whose work focuses on a living subject may find that certain archival documents like diaries and letters aren't available until that subject passes away. Any number of other issues could prevent key sources from becoming accessible to researchers until some years after the fact.Historians who write about the recent past may not be able to offer a complete, “total” history of their subjects. But that's true of historians who work on any time period––our interpretations are always evolving in light of new insights and new evidence. I personally feel that historians help to keep their work relevant by engaging with contemporary issues and being willing to engage with the contemporary era. But there are a variety of perspectives about this. | [
"I think the short answer to this question is no. Historians frequently write about the recent past, frame their inquiries in the context of recent events, and seek to intervene in contemporary debates. My dissertation, for example, extends through the year 2000 because I'm interested in providing historical contex... | 4 | [
"I think the short answer to this question is no. Historians frequently write about the recent past, frame their inquiries in the context of recent events, and seek to intervene in contemporary debates. My dissertation, for example, extends through the year 2000 because I'm interested in providing historical contex... | 3 | <P> Some theories claim that the dates of historical events have been deliberately distorted. These include the phantom time hypothesis of German conspiracy theorist Heribert Illig, who in 1991 published an allegation that 297 years had been added to the calendar by establishment figures such as Pope Sylvester II in order to position themselves at the millennium.
<P> He also re-evaluated conventional definitions of western historical periods through the prisms of economic and agrarian history, both in writing and in lectures, which gave rise to lively discussions between experts. In 1949 German Historians' Day took place at Munich and Lütge took the opportunity to set out his thesis that the Black Death, which reached Europe between 1346 and 1350, and the changes in economic and political power balances that ensued as a result of depopulation, made 1350 a much more plausible starting point for "Modern history" than 1500, which then (as now) was widely – often unquestioningly – identified as the starting point for the modern era in European history. The next year he set out the contention in greater detail in the newly redesigned and relaunched "" ("Yearbooks of National Economy and Statistics"). He challenged another historians' shibboleth in 1958 when he argued that the decades directly preceding the Thirty Years' War had not been a period of slow decline, marked by a succession of poor harvests in Europe, as widely believed, but that it was only the outbreak of hostilities in 1618 that put an end to several decades of dynamic economic development.
<P> As of 2011 there are ten published chronicles spanning 1835 to the present. Here they are ordered by the fictional history and the year of the narrative follows the title; none of the titles includes a date.
<P> Sandra Miesel wrote a new prologue that was added to the series' republication in the 1980s and formally transformed it from future history into alternate history. Her prologue took the divergence point from our own history as the (premature) death of U.S. President Dwight D. Eisenhower in 1956 (in our world he died in 1969), and the assumption of power by a younger, hotheaded Richard Nixon, which led to exacerbation of the Cold War and a devastating nuclear war in 1958.
<P> BULLET::::- "Chronica" (ending at 519), uniting all world history in one sequence of rulers, a union of Goth and Roman antecedents, flattering Goth sensibilities as the sequence neared the date of composition
<P> Modern history describes the historical period after the Middle Ages. Modern history can be further broken down into the "early modern period" and the "late modern period" after the French Revolution and the Industrial Revolution. "Contemporary history" describes the span of historic events that are immediately relevant to the present time. The Great Divergence refers to the process by which Western Europe and parts of the New World overcame pre-modern growth constraints and emerged during the 19th century as the most powerful and wealthy world civilization of the time, eclipsing Qing China, Mughal India, Tokugawa Japan, and the Ottoman Empire.
<P> At this point, the newly created timeline starts to diverge greatly from the actual history of the 17th Century, in no small part because the news of a town from the future brought spies and emissaries, and a fair number of encyclopedias and history textbooks found their way into European courts. One theme of the series is down-timer leaders trying to change, hasten or head off their histories while the acts of ordinary citizens going about their day-to-day affairs and of the leaders of Grantville effect more fundamental societal and political changes.
| question: Do historians in general have a 20-year rule? At what point do historians start talking about more recent events in a historical context? context: <P> Some theories claim that the dates of historical events have been deliberately distorted. These include the phantom time hypothesis of German conspiracy theorist Heribert Illig, who in 1991 published an allegation that 297 years had been added to the calendar by establishment figures such as Pope Sylvester II in order to position themselves at the millennium.
<P> He also re-evaluated conventional definitions of western historical periods through the prisms of economic and agrarian history, both in writing and in lectures, which gave rise to lively discussions between experts. In 1949 German Historians' Day took place at Munich and Lütge took the opportunity to set out his thesis that the Black Death, which reached Europe between 1346 and 1350, and the changes in economic and political power balances that ensued as a result of depopulation, made 1350 a much more plausible starting point for "Modern history" than 1500, which then (as now) was widely – often unquestioningly – identified as the starting point for the modern era in European history. The next year he set out the contention in greater detail in the newly redesigned and relaunched "" ("Yearbooks of National Economy and Statistics"). He challenged another historians' shibboleth in 1958 when he argued that the decades directly preceding the Thirty Years' War had not been a period of slow decline, marked by a succession of poor harvests in Europe, as widely believed, but that it was only the outbreak of hostilities in 1618 that put an end to several decades of dynamic economic development.
<P> As of 2011 there are ten published chronicles spanning 1835 to the present. Here they are ordered by the fictional history and the year of the narrative follows the title; none of the titles includes a date.
<P> Sandra Miesel wrote a new prologue that was added to the series' republication in the 1980s and formally transformed it from future history into alternate history. Her prologue took the divergence point from our own history as the (premature) death of U.S. President Dwight D. Eisenhower in 1956 (in our world he died in 1969), and the assumption of power by a younger, hotheaded Richard Nixon, which led to exacerbation of the Cold War and a devastating nuclear war in 1958.
<P> BULLET::::- "Chronica" (ending at 519), uniting all world history in one sequence of rulers, a union of Goth and Roman antecedents, flattering Goth sensibilities as the sequence neared the date of composition
<P> Modern history describes the historical period after the Middle Ages. Modern history can be further broken down into the "early modern period" and the "late modern period" after the French Revolution and the Industrial Revolution. "Contemporary history" describes the span of historic events that are immediately relevant to the present time. The Great Divergence refers to the process by which Western Europe and parts of the New World overcame pre-modern growth constraints and emerged during the 19th century as the most powerful and wealthy world civilization of the time, eclipsing Qing China, Mughal India, Tokugawa Japan, and the Ottoman Empire.
<P> At this point, the newly created timeline starts to diverge greatly from the actual history of the 17th Century, in no small part because the news of a town from the future brought spies and emissaries, and a fair number of encyclopedias and history textbooks found their way into European courts. One theme of the series is down-timer leaders trying to change, hasten or head off their histories while the acts of ordinary citizens going about their day-to-day affairs and of the leaders of Grantville effect more fundamental societal and political changes.
| answer: I think the short answer to this question is no. Historians frequently write about the recent past, frame their inquiries in the context of recent events, and seek to intervene in contemporary debates. My dissertation, for example, extends through the year 2000 because I'm interested in providing historical context for a pressing contemporary issue on the island I study.That being said, putting some years between the historian and historical actors can be very helpful, particularly in certain types of research. Historians who rely on declassified documents as primary material may have to wait for those documents to be made available. Historians whose work focuses on a living subject may find that certain archival documents like diaries and letters aren't available until that subject passes away. Any number of other issues could prevent key sources from becoming accessible to researchers until some years after the fact.Historians who write about the recent past may not be able to offer a complete, “total” history of their subjects. But that's true of historians who work on any time period––our interpretations are always evolving in light of new insights and new evidence. I personally feel that historians help to keep their work relevant by engaging with contemporary issues and being willing to engage with the contemporary era. But there are a variety of perspectives about this. |
203,381 | 12lzm0 | Ransoms in Medieval Europe | The capture of a king would of course bring a literal king's ransom (although not necessarily to the actual captor). Michael Prestwich, Armies and Warfare in the Middle Ages, has a section on ransoms, including Richard I (although he was not captured in battle), 150,000 marks; David of Scotland, 100,000 marks; king John of France, £500,000. These sums were not always paid in full, and it is probably impossible to express them in today's terms, except to say that they were huge. | [
"Richard I of England was ransomed for 150 000 marks. At modern prices, this is in the neighbourhood of $30 million, but would have actually been *much* higher, owing to the relative scarcity of silver at the time and very different economies. Some estimates give an adjusted value of several billion dollars.",
... | 3 | [
"Richard I of England was ransomed for 150 000 marks. At modern prices, this is in the neighbourhood of $30 million, but would have actually been *much* higher, owing to the relative scarcity of silver at the time and very different economies. Some estimates give an adjusted value of several billion dollars.",
... | 3 | <P> In Europe during the Middle Ages, ransom became an important custom of chivalric warfare. An important knight, especially nobility or royalty, was worth a significant sum of money if captured, but nothing if he was killed. For this reason, the practice of ransom contributed to the development of heraldry, which allowed knights to advertise their identities, and by implication their ransom value, and made them less likely to be killed out of hand. Examples include Richard the Lion Heart and Bertrand du Guesclin.
<P> Finally, in this context a form of more regular "re-recruitment" should be mentioned that in practice had some quantitative importance: the ransoming of prisoners of war. Though in the first stages of the Eighty Years' War both parties had mercilessly executed prisoners of war (a practice that continued for a long time in the war on sea), this practice was soon recognized as a waste of money, as prisoners were often ready and able to offer large sums of money to regain their freedom. The practice of ransoming had long been customary in medieval wars and there was no reason to forgo its pecuniary advantages in this conflict. Informal ransoming was soon formalized in a so-called Cartel between the high commands of the two belligerents, first in 1599, and more definitely in 1602. This cartel was a formal treaty that enumerated the rates of exchange for different grades of prisoners and other conditions of treatment (and compensation for housing and feeding). The advantage for the commanders of both armies was that the losses due to the taking of prisoners could be replenished relatively cheaply and speedily. The cartel with Spain remained in force for the remainder of the war. Similar cartels were concluded in later wars.
<P> The rest of the captured knights and soldiers were sold into slavery, and one was reportedly bought in Damascus in exchange for some sandals. The high ranking Frankish barons captured were held for ransom.
<P> Dead Man's Ransom is a medieval mystery novel by Ellis Peters, first of four novels set in the disruptive year of 1141. It is the ninth in the Cadfael Chronicles, and was first published in 1984 (1984 in literature).
<P> Medieval robber barons most often imposed high or unauthorized tolls on rivers or roads passing through their territory. Some robbed merchants, land travelers, and river traffic—seizing money, cargoes, entire ships, or engaged in kidnapping for ransom.
<P> In the early 18th century the custom was that the captain of a captured vessel gave a bond or “ransom bill,” leaving one of his crew as a hostage or “ransomer” in the hands of the captor. Frequent mention is made of the taking of French privateers which had in them ten or a dozen ransomers. The owner could be sued on his bond. Payment of ransom was banned by the Parliament of Great Britain in 1782 although this was repealed in 1864. It was generally allowed by other nations.
<P> the practice of charitable ransoming— created an ideal environment for the new Order. Consequently, the preponderance of what Mercedarians came to possess here were lands donated by the king, successful crusaders and other patrons.
| question: Ransoms in Medieval Europe context: <P> In Europe during the Middle Ages, ransom became an important custom of chivalric warfare. An important knight, especially nobility or royalty, was worth a significant sum of money if captured, but nothing if he was killed. For this reason, the practice of ransom contributed to the development of heraldry, which allowed knights to advertise their identities, and by implication their ransom value, and made them less likely to be killed out of hand. Examples include Richard the Lion Heart and Bertrand du Guesclin.
<P> Finally, in this context a form of more regular "re-recruitment" should be mentioned that in practice had some quantitative importance: the ransoming of prisoners of war. Though in the first stages of the Eighty Years' War both parties had mercilessly executed prisoners of war (a practice that continued for a long time in the war on sea), this practice was soon recognized as a waste of money, as prisoners were often ready and able to offer large sums of money to regain their freedom. The practice of ransoming had long been customary in medieval wars and there was no reason to forgo its pecuniary advantages in this conflict. Informal ransoming was soon formalized in a so-called Cartel between the high commands of the two belligerents, first in 1599, and more definitely in 1602. This cartel was a formal treaty that enumerated the rates of exchange for different grades of prisoners and other conditions of treatment (and compensation for housing and feeding). The advantage for the commanders of both armies was that the losses due to the taking of prisoners could be replenished relatively cheaply and speedily. The cartel with Spain remained in force for the remainder of the war. Similar cartels were concluded in later wars.
<P> The rest of the captured knights and soldiers were sold into slavery, and one was reportedly bought in Damascus in exchange for some sandals. The high ranking Frankish barons captured were held for ransom.
<P> Dead Man's Ransom is a medieval mystery novel by Ellis Peters, first of four novels set in the disruptive year of 1141. It is the ninth in the Cadfael Chronicles, and was first published in 1984 (1984 in literature).
<P> Medieval robber barons most often imposed high or unauthorized tolls on rivers or roads passing through their territory. Some robbed merchants, land travelers, and river traffic—seizing money, cargoes, entire ships, or engaged in kidnapping for ransom.
<P> In the early 18th century the custom was that the captain of a captured vessel gave a bond or “ransom bill,” leaving one of his crew as a hostage or “ransomer” in the hands of the captor. Frequent mention is made of the taking of French privateers which had in them ten or a dozen ransomers. The owner could be sued on his bond. Payment of ransom was banned by the Parliament of Great Britain in 1782 although this was repealed in 1864. It was generally allowed by other nations.
<P> the practice of charitable ransoming— created an ideal environment for the new Order. Consequently, the preponderance of what Mercedarians came to possess here were lands donated by the king, successful crusaders and other patrons.
| answer: The capture of a king would of course bring a literal king's ransom (although not necessarily to the actual captor). Michael Prestwich, Armies and Warfare in the Middle Ages, has a section on ransoms, including Richard I (although he was not captured in battle), 150,000 marks; David of Scotland, 100,000 marks; king John of France, £500,000. These sums were not always paid in full, and it is probably impossible to express them in today's terms, except to say that they were huge. |
181,478 | 7ocdyy | How long does the fallout of a nuclear detonation last? | > Does living in parts of Nevada carry any real risk of side effects on health? No. At least not from nuclear weapons. > Or were they simply not powerful enough to have any lasting radioactive effects? Several radioisotopes are still measurable and it will stay that way basically forever, but detectors are really sensitive. We have slightly higher amounts of C-14 in the atmosphere since the Cold War (everywhere, not just Nevada); that alone will stay measurable for at least a few more decades. Its half-life is much longer (5700 years), but the carbon gets absorbed by the environment over time as well, so it disappears from the atmosphere. Reconstruction of Hiroshima and Nagasaki started quickly after the war, and today the radiation levels are not higher than elsewhere. | [
"It depends on the half life of the radioactive isotopes they used in the detonations. The fact that many of the Nevada tests were aerial or on the surface means that they kicked up large volumes of radioactive dust, which has the potential to be quite dangerous to air breathers. How far this spread and whether muc... | 4 | [] | 0 | <P> During the first hour after a nuclear explosion, radioactivity levels drop precipitously. Radioactivity levels are further reduced by about 90% after another 7 hours and by about 99% after 2 days. An accurate rule of thumb, applicable in the time-period of days to a few weeks post-detonation which approximates the radioactive dose rate generated by the decay of the myriad of isotopes present in nuclear fallout, is the "7/10 rule". The rule states that for each 7-fold increase in time the dose rate drops by a factor of 10. For example, assuming the fallout process has ended 24 hours post detonation and the dose rate would be lethal if a few hours of exposure occurred, 50 roentgens per hour, then 7 days after detonation the dose rate will be 5 R/hr and 49 days after detonation (7×7 days) the dose rate will be 0.5 R/hr at which point no special precautions would need to be taken and venturing outside into that dose rate for an hour or two would pose a close to negligible health hazard, thus permitting an evacuation to be done with acceptable safety to a known contamination free zone. Following a surface-burst nuclear detonation, approximately 80 percent of the fallout would be deposited on the ground during the first 24 hours.
<P> The danger of radiation from fallout also decreases rapidly with time due in large part to the exponential decay of the individual radionuclides. A book by Cresson H. Kearny presents data showing that for the first few days after the explosion, the radiation dose rate is reduced by a factor of ten for every seven-fold increase in the number of hours since the explosion. He presents data showing that "it takes about seven times as long for the dose rate to decay from 1000 roentgens per hour (1000 R/hr) to 10 R/hr (48 hours) as to decay from 1000 R/hr to 100 R/hr (7 hours)." This is a fairly rough rule of thumb based on observed data, not a precise relation.
<P> Since explosives detonate at typically 7–8 kilometers per second, or 7–8 meters per millisecond, a 1 millisecond delay in detonation from one side of a nuclear weapon to the other would be longer than the time the detonation would take to cross the weapon. The time precision and consistency of EBWs (0.1 microsecond or less) are roughly enough time for the detonation to move 1 millimeter at most, and for the most precise commercial EBWs this is 0.025 microsecond and about 0.2 mm variation in the detonation wave. This is sufficiently precise for very low tolerance applications such as nuclear weapon explosive lenses.
<P> Fallout comes in two varieties. The first is a small amount of carcinogenic material with a long half-life. The second, depending on the height of detonation, is a huge quantity of radioactive dust and sand with a short half-life.
<P> The danger of radiation from radioactive precipitation/"fallout" decreases with time, as radioactivity decays exponentially with time, such that for each factor of seven increase in time, the radiation is reduced by a factor of ten. Creating the following 7-10 rule-of-thumb after a typical nuclear detonation while under the conditions that all fallout that will fall on the land has done so completely and no "further" deposition in the area will occur - After 7 hours, the average dose rate outside is reduced by a factor of ten; after 49(7x7) hours, it is reduced by a further factor of ten (to a value of 1/100th of the initial dose rate); after two weeks the radiation from the fallout will have reduced by a factor of 1000 compared to the initial level; and after 14 weeks the average dose rate will have reduced to 1/10,000th of the initial level.
<P> Following a single IND (improvised nuclear device) detonation in the US, the National Atmospheric Release Advisory Center (NARAC) would, within minutes to at most hours, after the detonation have a reliable prediction of the fallout plume size and direction. When armed with this prediction they would then begin attempting to corroborate this with readings from radiation survey meter equipment that would fly over close to the ground in the affected area by means of helicopter or drone (UAV) aircraft on material intelligence gathering missions, which would also follow within tens of minutes to at most hours after the detonation.
<P> From many smaller detonations combined the fallout for the entire launch of a 6,000 short ton (5,500 metric ton) Orion is equal to the detonation of a typical 10 megaton (40 petajoule) nuclear weapon as an air burst, therefore most of its fallout would be the comparatively dilute delayed fallout. Assuming the use of nuclear explosives with a high portion of total yield from fission, it would produce a combined fallout total similar to the surface burst yield of the "Mike" shot of Operation Ivy, a 10.4 Megaton device detonated in 1952. The comparison is not quite perfect as, due to its surface burst location, "Ivy Mike" created a large amount of early fallout contamination. Historical above-ground nuclear weapon tests included 189 megatons of fission yield and caused average global radiation exposure per person peaking at 0.11 mSv/a in 1963, with a 0.007 mSv/a residual in modern times, superimposed upon other sources of exposure, primarily natural background radiation, which averages 2.4 mSv/a globally but varies greatly, such as 6 mSv/a in some high-altitude cities. Any comparison would be influenced by how population dosage is affected by detonation locations, with very remote sites preferred.
| question: How long does the fallout of a nuclear detonation last? context: <P> During the first hour after a nuclear explosion, radioactivity levels drop precipitously. Radioactivity levels are further reduced by about 90% after another 7 hours and by about 99% after 2 days. An accurate rule of thumb, applicable in the time-period of days to a few weeks post-detonation which approximates the radioactive dose rate generated by the decay of the myriad of isotopes present in nuclear fallout, is the "7/10 rule". The rule states that for each 7-fold increase in time the dose rate drops by a factor of 10. For example, assuming the fallout process has ended 24 hours post detonation and the dose rate would be lethal if a few hours of exposure occurred, 50 roentgens per hour, then 7 days after detonation the dose rate will be 5 R/hr and 49 days after detonation (7×7 days) the dose rate will be 0.5 R/hr at which point no special precautions would need to be taken and venturing outside into that dose rate for an hour or two would pose a close to negligible health hazard, thus permitting an evacuation to be done with acceptable safety to a known contamination free zone. Following a surface-burst nuclear detonation, approximately 80 percent of the fallout would be deposited on the ground during the first 24 hours.
<P> The danger of radiation from fallout also decreases rapidly with time due in large part to the exponential decay of the individual radionuclides. A book by Cresson H. Kearny presents data showing that for the first few days after the explosion, the radiation dose rate is reduced by a factor of ten for every seven-fold increase in the number of hours since the explosion. He presents data showing that "it takes about seven times as long for the dose rate to decay from 1000 roentgens per hour (1000 R/hr) to 10 R/hr (48 hours) as to decay from 1000 R/hr to 100 R/hr (7 hours)." This is a fairly rough rule of thumb based on observed data, not a precise relation.
<P> Since explosives detonate at typically 7–8 kilometers per second, or 7–8 meters per millisecond, a 1 millisecond delay in detonation from one side of a nuclear weapon to the other would be longer than the time the detonation would take to cross the weapon. The time precision and consistency of EBWs (0.1 microsecond or less) are roughly enough time for the detonation to move 1 millimeter at most, and for the most precise commercial EBWs this is 0.025 microsecond and about 0.2 mm variation in the detonation wave. This is sufficiently precise for very low tolerance applications such as nuclear weapon explosive lenses.
<P> Fallout comes in two varieties. The first is a small amount of carcinogenic material with a long half-life. The second, depending on the height of detonation, is a huge quantity of radioactive dust and sand with a short half-life.
<P> The danger of radiation from radioactive precipitation/"fallout" decreases with time, as radioactivity decays exponentially with time, such that for each factor of seven increase in time, the radiation is reduced by a factor of ten. Creating the following 7-10 rule-of-thumb after a typical nuclear detonation while under the conditions that all fallout that will fall on the land has done so completely and no "further" deposition in the area will occur - After 7 hours, the average dose rate outside is reduced by a factor of ten; after 49(7x7) hours, it is reduced by a further factor of ten (to a value of 1/100th of the initial dose rate); after two weeks the radiation from the fallout will have reduced by a factor of 1000 compared to the initial level; and after 14 weeks the average dose rate will have reduced to 1/10,000th of the initial level.
<P> Following a single IND (improvised nuclear device) detonation in the US, the National Atmospheric Release Advisory Center (NARAC) would, within minutes to at most hours, after the detonation have a reliable prediction of the fallout plume size and direction. When armed with this prediction they would then begin attempting to corroborate this with readings from radiation survey meter equipment that would fly over close to the ground in the affected area by means of helicopter or drone (UAV) aircraft on material intelligence gathering missions, which would also follow within tens of minutes to at most hours after the detonation.
<P> From many smaller detonations combined the fallout for the entire launch of a 6,000 short ton (5,500 metric ton) Orion is equal to the detonation of a typical 10 megaton (40 petajoule) nuclear weapon as an air burst, therefore most of its fallout would be the comparatively dilute delayed fallout. Assuming the use of nuclear explosives with a high portion of total yield from fission, it would produce a combined fallout total similar to the surface burst yield of the "Mike" shot of Operation Ivy, a 10.4 Megaton device detonated in 1952. The comparison is not quite perfect as, due to its surface burst location, "Ivy Mike" created a large amount of early fallout contamination. Historical above-ground nuclear weapon tests included 189 megatons of fission yield and caused average global radiation exposure per person peaking at 0.11 mSv/a in 1963, with a 0.007 mSv/a residual in modern times, superimposed upon other sources of exposure, primarily natural background radiation, which averages 2.4 mSv/a globally but varies greatly, such as 6 mSv/a in some high-altitude cities. Any comparison would be influenced by how population dosage is affected by detonation locations, with very remote sites preferred.
| answer: > Does living in parts of Nevada carry any real risk of side effects on health? No. At least not from nuclear weapons. > Or were they simply not powerful enough to have any lasting radioactive effects? Several radioisotopes are still measurable and it will stay that way basically forever, but detectors are really sensitive. We have slightly higher amounts of C-14 in the atmosphere since the Cold War (everywhere, not just Nevada); that alone will stay measurable for at least a few more decades. Its half-life is much longer (5700 years), but the carbon gets absorbed by the environment over time as well, so it disappears from the atmosphere. Reconstruction of Hiroshima and Nagasaki started quickly after the war, and today the radiation levels are not higher than elsewhere. |
152,685 | 2wczb6 | if the earth and us and the big bang were formed from the collision of various molecules and particles. (please excuse my rusty science) how did these particles and molecules form in the first place? | The Big Bang wasn't formed by collisions of various molecules and particles. The Big Bang is a theory that all matter in the universe originated at a "singularity" - that is, an infinitesimally small point, inside which was all the matter and energy in the whole universe. The Big Bang theory doesn't attempt to explain anything *before* that point (and as far as science is concerned, your guess is as good as anybody's), but afterwards, that matter and energy expanded out into the universe and formed particles and whatnot. | [
"The Big Bang wasn't formed by collisions of various molecules and particles.\n\nThe Big Bang is a theory that all matter in the universe originated at a \"singularity\" - that is, an infinitesimally small point, inside which was all the matter and energy in the whole universe. The Big Bang theory doesn't attempt t... | 5 | [
"The Big Bang wasn't formed by collisions of various molecules and particles.\n\nThe Big Bang is a theory that all matter in the universe originated at a \"singularity\" - that is, an infinitesimally small point, inside which was all the matter and energy in the whole universe. The Big Bang theory doesn't attempt t... | 1 | <P> The Big Bang produced a hot, dense, nearly homogeneous universe. As the universe expanded and cooled, particles, then nuclei, and finally atoms formed. At a redshift of about 1100, equivalent to about 400,000 years after the Big Bang, when the primordial plasma filling the universe cooled sufficiently for protons and electrons to combine into neutral hydrogen atoms, the universe became optically thin whereby photons from this early era no longer interacted with matter. We detect these photons today as the cosmic microwave background (CMB). The CMB shows that the universe was still remarkably smooth and uniform.
<P> The first subatomic particles to be formed included protons, neutrons, and electrons. Though simple atomic nuclei formed within the first three minutes after the Big Bang, thousands of years passed before the first electrically neutral atoms formed. The majority of atoms produced by the Big Bang were hydrogen, along with helium and traces of lithium. Giant clouds of these primordial elements later coalesced through gravity to form stars and galaxies, and the heavier elements were synthesized either within stars or during supernovae.
<P> BULLET::::- Big Bang cosmology (standard) – cosmology based on the Big Bang model of the universe. The Big Bang is a theoretical explosion from which all matter in the universe is alleged to have originated approximately 13.799 ± 0.021 billion years ago.
<P> English astronomer Fred Hoyle is credited with coining the term "Big Bang" during a 1949 BBC radio broadcast, saying: "These theories were based on the hypothesis that all the matter in the universe was created in one big bang at a particular time in the remote past."
<P> When the two lead nuclei slam into each other, matter undergoes a transition to form for a brief instant a droplet of primordial matter, the so-called quark–gluon plasma which is believed to have filled the universe a few microseconds after the Big Bang.
<P> The Big Bang theory is the most widely accepted scientific theory to explain the early stages in the evolution of the Universe. For the first millisecond of the Big Bang, the temperatures were over 10 billion kelvins and photons had mean energies over a million electronvolts. These photons were sufficiently energetic that they could react with each other to form pairs of electrons and positrons. Likewise, positron-electron pairs annihilated each other and emitted energetic photons:
<P> About 470 million years ago, an asteroid as big as a city block smashed into what is now Decorah, supporting a theory that a giant space rock broke up and bombarded Earth just as early life began flourishing in the oceans.
| question: if the earth and us and the big bang were formed from the collision of various molecules and particles. (please excuse my rusty science) how did these particles and molecules form in the first place? context: <P> The Big Bang produced a hot, dense, nearly homogeneous universe. As the universe expanded and cooled, particles, then nuclei, and finally atoms formed. At a redshift of about 1100, equivalent to about 400,000 years after the Big Bang, when the primordial plasma filling the universe cooled sufficiently for protons and electrons to combine into neutral hydrogen atoms, the universe became optically thin whereby photons from this early era no longer interacted with matter. We detect these photons today as the cosmic microwave background (CMB). The CMB shows that the universe was still remarkably smooth and uniform.
<P> The first subatomic particles to be formed included protons, neutrons, and electrons. Though simple atomic nuclei formed within the first three minutes after the Big Bang, thousands of years passed before the first electrically neutral atoms formed. The majority of atoms produced by the Big Bang were hydrogen, along with helium and traces of lithium. Giant clouds of these primordial elements later coalesced through gravity to form stars and galaxies, and the heavier elements were synthesized either within stars or during supernovae.
<P> BULLET::::- Big Bang cosmology (standard) – cosmology based on the Big Bang model of the universe. The Big Bang is a theoretical explosion from which all matter in the universe is alleged to have originated approximately 13.799 ± 0.021 billion years ago.
<P> English astronomer Fred Hoyle is credited with coining the term "Big Bang" during a 1949 BBC radio broadcast, saying: "These theories were based on the hypothesis that all the matter in the universe was created in one big bang at a particular time in the remote past."
<P> When the two lead nuclei slam into each other, matter undergoes a transition to form for a brief instant a droplet of primordial matter, the so-called quark–gluon plasma which is believed to have filled the universe a few microseconds after the Big Bang.
<P> The Big Bang theory is the most widely accepted scientific theory to explain the early stages in the evolution of the Universe. For the first millisecond of the Big Bang, the temperatures were over 10 billion kelvins and photons had mean energies over a million electronvolts. These photons were sufficiently energetic that they could react with each other to form pairs of electrons and positrons. Likewise, positron-electron pairs annihilated each other and emitted energetic photons:
<P> About 470 million years ago, an asteroid as big as a city block smashed into what is now Decorah, supporting a theory that a giant space rock broke up and bombarded Earth just as early life began flourishing in the oceans.
| answer: The Big Bang wasn't formed by collisions of various molecules and particles. The Big Bang is a theory that all matter in the universe originated at a "singularity" - that is, an infinitesimally small point, inside which was all the matter and energy in the whole universe. The Big Bang theory doesn't attempt to explain anything *before* that point (and as far as science is concerned, your guess is as good as anybody's) but afterwards, that matter and energy expanded out into the universe and formed particles and whatnot. |
140,053 | 1kxxsp | Why do you have to get multiple vaccinations of the same virus? | Ordinarily, when an infectious agent enters the body, the immune system targets multiple parts of the agent with different monoclonal antibodies to produce an overall polyclonal response. This is an incredibly efficient process, but some organisms can spread and reproduce faster than the immune system can effectively mount a response. When you vaccinate against, say, HPV, the effect on the immune system is less effective than the body's own cascade responses. These days many vaccines don't provide the body with the whole version of the virus; instead, they contain a part of it e.g. part of the capsid. Whilst that is effective to some extent, it's not as good as that polyclonal response the body would normally mount. It needs to be reinforced with several vaccinations to have a protective effect. As to why we can't do super-vaccinations, a polyclonal vaccination, well in many cases it isn't feasible, due to scientific restrictions (we don't know which parts to target), health reasons (targeting a specific part may also affect other bodily cells), or economic reasons (the vaccine would be too expensive to purely produce, limiting its economic potential). As you can imagine, the latter is the biggest reason. Sources: * [NYU Langone](_URL_2_)* [CDC Principles of Vaccination](_URL_1_)* [Roitt textbook of immunology](_URL_0_) | [
"Ordinarily, when an infectious agent enters the body, the immune system targets multiple parts of the agent with different monoclonal antibodies to produce an overall polyclonal response. This is an incredibly efficient process, but some organisms can spread and reproduce faster than the immune system can effectiv... | 2 | [] | 0 | <P> A vaccine against a particular virus is relatively easy to create. The virus is foreign to the body, and therefore expresses antigens that the immune system can recognize. Furthermore, viruses usually only provide a few viable variants. By contrast, developing vaccines for viruses that mutate constantly such as influenza or HIV has been problematic.
<P> "Vaccinia" is also used in recombinant vaccines, as a vector for expression of foreign genes within a host, in order to generate an immune response. Other poxviruses are also used as live recombinant vaccines.
<P> While most antivirals treat viral infection, vaccines are a preemptive first line of defense against pathogens. Vaccination involves the introduction (i.e. via injection) of a small amount of typically inactivated or attenuated antigenic material to stimulate an individual's immune system. The immune system responds by developing white blood cells to specifically combat the introduced pathogen, resulting in adaptive immunity. Vaccination in a population results in herd immunity and greatly improved population health, with significant reductions in viral infection and disease.
<P> Choosing not to vaccinate is largely to blame for the recent outbreak of measles. Parents choosing not to vaccinate prevents herd immunity, which is what patients who suffer with immunocompromising diseases rely on to protect them. To prevent the measles outbreak of 2019 from getting worse it is necessary for anti-vaxxers to choose to vaccinate; however, just presenting evidence to parents who are resistant is usually not enough. “Finding common ground with parents' goals and hopes for their children and sharing compelling individual accounts may help” (Smith, 2018, p. 1).
<P> Vaccination is a way of preventing diseases caused by viruses. Vaccines simulate a natural infection and its associated immune response, but do not cause the disease. Their use has resulted in the eradication of smallpox and a dramatic decline in illness and death caused by infections such as polio, measles, mumps and rubella. Vaccines are available to prevent over fourteen viral infections of humans and more are used to prevent viral infections of animals. Vaccines may consist of either live or killed viruses. Live vaccines contain weakened forms of the virus, but these vaccines can be dangerous when given to people with weak immunity. In these people, the weakened virus can cause the original disease. Biotechnology and genetic engineering techniques are used to produce "designer" vaccines that only have the capsid proteins of the virus. Hepatitis B vaccine is an example of this type of vaccine. These vaccines are safer because they can never cause the disease.
<P> Many parents do not vaccinate their children because they feel that diseases are no longer present due to vaccination. This is a false assumption, since diseases held in check by immunization programs can and do still return if immunization is dropped. These pathogens could possibly infect vaccinated people, due to the pathogen's ability to mutate when it is able to live in unvaccinated hosts. In 2010, California had the worst whooping cough outbreak in 50 years. A possible contributing factor was parents choosing not to vaccinate their children. There was also a case in Texas in 2012 where 21 members of a church contracted measles because they chose not to immunize.
<P> The initial problem facing the participants was whether to use a ring vaccination strategy over a mass vaccination one in order to deal with the small number of those thought to be infected with the smallpox virus. While ring vaccination is recommended for initial control over an outbreak, states may quickly choose to switch to mass vaccination if it is unsuccessful. In addition, the participants for countries with no infected persons faced pressures to share available vaccine resources with countries currently experiencing outbreaks. As more countries began to experience outbreaks, domestic pressures forced participants to withhold the sharing of vaccines in order to preserve their supply for their own citizens. Other strategies, such as vaccine dilution, became necessary as the amount of those suspected to be infected grew. Participants also considered the viability of closing borders to prevent the further spread of the outbreak to their own countries. Certain dire measures, such as the use of military quarantines, were considered as participants also had the obligation to ensure public safety in civilian populations.
| question: Why do you have to get multiple vaccinations of the same virus? context: <P> A vaccine against a particular virus is relatively easy to create. The virus is foreign to the body, and therefore expresses antigens that the immune system can recognize. Furthermore, viruses usually only provide a few viable variants. By contrast, developing vaccines for viruses that mutate constantly such as influenza or HIV has been problematic.
<P> "Vaccinia" is also used in recombinant vaccines, as a vector for expression of foreign genes within a host, in order to generate an immune response. Other poxviruses are also used as live recombinant vaccines.
<P> While most antivirals treat viral infection, vaccines are a preemptive first line of defense against pathogens. Vaccination involves the introduction (i.e. via injection) of a small amount of typically inactivated or attenuated antigenic material to stimulate an individual's immune system. The immune system responds by developing white blood cells to specifically combat the introduced pathogen, resulting in adaptive immunity. Vaccination in a population results in herd immunity and greatly improved population health, with significant reductions in viral infection and disease.
<P> Choosing not to vaccinate is largely to blame for the recent outbreak of measles. Parents choosing not to vaccinate prevents herd immunity, which is what patients who suffer with immunocompromising diseases rely on to protect them. To prevent the measles outbreak of 2019 from getting worse it is necessary for anti-vaxxers to choose to vaccinate; however, just presenting evidence to parents who are resistant is usually not enough. “Finding common ground with parents' goals and hopes for their children and sharing compelling individual accounts may help” (Smith, 2018, p. 1).
<P> Vaccination is a way of preventing diseases caused by viruses. Vaccines simulate a natural infection and its associated immune response, but do not cause the disease. Their use has resulted in the eradication of smallpox and a dramatic decline in illness and death caused by infections such as polio, measles, mumps and rubella. Vaccines are available to prevent over fourteen viral infections of humans and more are used to prevent viral infections of animals. Vaccines may consist of either live or killed viruses. Live vaccines contain weakened forms of the virus, but these vaccines can be dangerous when given to people with weak immunity. In these people, the weakened virus can cause the original disease. Biotechnology and genetic engineering techniques are used to produce "designer" vaccines that only have the capsid proteins of the virus. Hepatitis B vaccine is an example of this type of vaccine. These vaccines are safer because they can never cause the disease.
<P> Many parents do not vaccinate their children because they feel that diseases are no longer present due to vaccination. This is a false assumption, since diseases held in check by immunization programs can and do still return if immunization is dropped. These pathogens could possibly infect vaccinated people, due to the pathogen's ability to mutate when it is able to live in unvaccinated hosts. In 2010, California had the worst whooping cough outbreak in 50 years. A possible contributing factor was parents choosing not to vaccinate their children. There was also a case in Texas in 2012 where 21 members of a church contracted measles because they chose not to immunize.
<P> The initial problem facing the participants was whether to use a ring vaccination strategy over a mass vaccination one in order to deal with the small number of those thought to be infected with the smallpox virus. While ring vaccination is recommended for initial control over an outbreak, states may quickly choose to switch to mass vaccination if it is unsuccessful. In addition, the participants for countries with no infected persons faced pressures to share available vaccine resources with countries currently experiencing outbreaks. As more countries began to experience outbreaks, domestic pressures forced participants to withhold the sharing of vaccines in order to preserve their supply for their own citizens. Other strategies, such as vaccine dilution, became necessary as the amount of those suspected to be infected grew. Participants also considered the viability of closing borders to prevent the further spread of the outbreak to their own countries. Certain dire measures, such as the use of military quarantines, were considered as participants also had the obligation to ensure public safety in civilian populations.
| answer: Ordinarily, when an infectious agent enters the body, the immune system targets multiple parts of the agent with different monoclonal antibodies to produce an overall polyclonal response. This is an incredibly efficient process, but some organisms can spread and reproduce faster than the immune system can effectively mount a response. When you vaccinate against, say, HPV, the effect on the immune system is less effective than the body's own cascade responses. These days many vaccines don't provide the body with the whole version of the virus; instead, they contain a part of it e.g. part of the capsid. Whilst that is effective to some extent, it's not as good as that polyclonal response the body would normally mount. It needs to be reinforced with several vaccinations to have a protective effect. As to why we can't do super-vaccinations, a polyclonal vaccination, well in many cases it isn't feasible, due to scientific restrictions (we don't know which parts to target), health reasons (targeting a specific part may also affect other bodily cells), or economic reasons (the vaccine would be too expensive to purely produce, limiting its economic potential). As you can imagine, the latter is the biggest reason. Sources: * [NYU Langone](_URL_2_)* [CDC Principles of Vaccination](_URL_1_)* [Roitt textbook of immunology](_URL_0_) |
196,105 | 16byv9 | Medically speaking, what is the "best" blood type to have? Is there one? | AB+ for the same reason you already put in your question (in case of an accident, you could basically receive a different blood type without having many complications). Other than that, there is no support for those wild claims by new age gurus or quack doctors that type X or Y are better, or that you should eat A if you are X and B if you are Y. | [
"AB+ for the same reason you already put in your question (in case of an accident, you could basically receive a different blood type without having many complications). \n\nOther than that, there is inexistant support those wild claims by new age gurus or quack doctors that type X or Y are better, or that you shou... | 2 | [
"AB+ for the same reason you already put in your question (in case of an accident, you could basically receive a different blood type without having many complications). \n\nOther than that, there is inexistant support those wild claims by new age gurus or quack doctors that type X or Y are better, or that you shou... | 1 | <P> The A blood type contains about 20 subgroups, of which A1 and A2 are the most common (over 99%). A1 makes up about 80% of all A-type blood, with A2 making up almost all of the rest. These two subgroups are not always interchangeable as far as transfusion is concerned, as some A2 individuals produce antibodies against the A1 antigen. Complications can sometimes arise in rare cases when typing the blood.
<P> For RBCs, type O negative blood is considered a "universal donor" as recipients with types A, B, or AB can almost always receive O negative blood safely. Type AB positive is considered a "universal recipient" because they can receive the other ABO/Rh types safely. These are not truly universal, as other red cell antigens can further complicate transfusions.
<P> In addition to the ABO and Rh blood group systems, there are more than two hundred minor blood groups that can complicate blood transfusions. These are known as rare blood types. Whereas common blood types are expressed in a letter or two, which may be a plus or a minus, a smaller number of people express their blood type in an extensive series of letters in addition to their "AB±" type designation.
<P> Rare blood types can cause supply problems for blood banks and hospitals. For example, Duffy-negative blood occurs much more frequently in people of African origin, and the rarity of this blood type in the rest of the population can result in a shortage of Duffy-negative blood for these patients. Similarly for RhD negative people, there is a risk associated with travelling to parts of the world where supplies of RhD negative blood are rare, particularly East Asia, where blood services may endeavor to encourage Westerners to donate blood.
<P> The ABO blood types were discovered by Karl Landsteiner in 1901, for which he received the Nobel Prize in Physiology or Medicine in 1930. ABO blood types are also present in some other animals such as rodents and apes, including chimpanzees, bonobos, and gorillas.
<P> The medical school is associated with 3 Nobel Prize winners; 2 winners of the Nobel Prize in Physiology or Medicine and 1 winner of the Nobel Prize in Chemistry. Graduates of the medical school have founded medical schools and universities all over the world including 5 out of the 7 Ivy League medical schools (Pennsylvania, Yale, Columbia, Harvard and Dartmouth), University of Sydney, Sydney Medical School, University of Melbourne Medical School, McGill University Faculty of Medicine, University of Vermont College of Medicine, Université de Montréal Faculty of Medicine, the Royal Postgraduate Medical School (now part of Imperial College School of Medicine), the University of Cape Town Medical School, Birkbeck, University of London, the Middlesex Hospital Medical School and the London School of Medicine for Women (both now part of UCL Medical School).
<P> Blood types are often qualified with a plus or minus to indicate the presence or absence of the Rh factor; for instance, A+ means A-type blood with the Rh factor present, while B− means B-type blood with the Rh factor absent.
| question: Medically speaking, what is the "best" blood type to have? Is there one? context: <P> The A blood type contains about 20 subgroups, of which A1 and A2 are the most common (over 99%). A1 makes up about 80% of all A-type blood, with A2 making up almost all of the rest. These two subgroups are not always interchangeable as far as transfusion is concerned, as some A2 individuals produce antibodies against the A1 antigen. Complications can sometimes arise in rare cases when typing the blood.
<P> For RBCs, type O negative blood is considered a "universal donor" as recipients with types A, B, or AB can almost always receive O negative blood safely. Type AB positive is considered a "universal recipient" because they can receive the other ABO/Rh types safely. These are not truly universal, as other red cell antigens can further complicate transfusions.
<P> In addition to the ABO and Rh blood group systems, there are more than two hundred minor blood groups that can complicate blood transfusions. These are known as rare blood types. Whereas common blood types are expressed in a letter or two, which may be a plus or a minus, a smaller number of people express their blood type in an extensive series of letters in addition to their "AB±" type designation.
<P> Rare blood types can cause supply problems for blood banks and hospitals. For example, Duffy-negative blood occurs much more frequently in people of African origin, and the rarity of this blood type in the rest of the population can result in a shortage of Duffy-negative blood for these patients. Similarly for RhD negative people, there is a risk associated with travelling to parts of the world where supplies of RhD negative blood are rare, particularly East Asia, where blood services may endeavor to encourage Westerners to donate blood.
<P> The ABO blood types were discovered by Karl Landsteiner in 1901, for which he received the Nobel Prize in Physiology or Medicine in 1930. ABO blood types are also present in some other animals such as rodents and apes, including chimpanzees, bonobos, and gorillas.
<P> The medical school is associated with 3 Nobel Prize winners; 2 winners of the Nobel Prize in Physiology or Medicine and 1 winner of the Nobel Prize in Chemistry. Graduates of the medical school have founded medical schools and universities all over the world including 5 out of the 7 Ivy League medical schools (Pennsylvania, Yale, Columbia, Harvard and Dartmouth), University of Sydney, Sydney Medical School, University of Melbourne Medical School, McGill University Faculty of Medicine, University of Vermont College of Medicine, Université de Montréal Faculty of Medicine, the Royal Postgraduate Medical School (now part of Imperial College School of Medicine), the University of Cape Town Medical School, Birkbeck, University of London, the Middlesex Hospital Medical School and the London School of Medicine for Women (both now part of UCL Medical School).
<P> Blood types are often qualified with a plus or minus to indicate the presence or absence of the Rh factor; for instance, A+ means A-type blood with the Rh factor present, while B− means B-type blood with the Rh factor absent.
| answer: AB+ for the same reason you already put in your question (in case of an accident, you could basically receive a different blood type without having many complications). Other than that, there is no support for those wild claims by new age gurus or quack doctors that type X or Y are better, or that you should eat A if you are X and B if you are Y. |
182,442 | kkkee | What would it be like to swim in a pool on the moon? | Actually, and surprisingly, there wouldn't really be any difference at all. First, water is (nearly) incompressible, meaning it would be at the same density on the Earth or on the Moon, and whether you float is not a function of gravity (your weight and the buoyant force both scale with it), so you would have to swim just as hard to stay afloat in both locations. The only real difference would be that, since gravity is pulling you down less on the moon, you would sink more slowly if you did nothing to keep afloat, or after cannonballing in. | [
"Humans are so close to water in density that I doubt the different gravitational field would make a difference. Viscosity isn't gravity-dependent. During the portion of the stroke where your arms are above the water, that might feel weird.",
"Actually, and surprisingly, there wouldn't really be any difference at... | 8 | [
"Humans are so close to water in density that I doubt the different gravitational field would make a difference. Viscosity isn't gravity-dependent. During the portion of the stroke where your arms are above the water, that might feel weird.",
"Actually, and surprisingly, there wouldn't really be any difference at... | 8 | <P> Moon pools can be used in chambers below sea level, especially for the use of scuba divers, and their design requires more complex consideration of air and water pressure acting on the moon pool surface.
<P> A moon pool is a feature of marine drilling platforms, drillships and diving support vessels, some marine research and underwater exploration or research vessels, and underwater habitats, in which it is also known as a wet porch. It is an opening in the floor or base of the hull, platform, or chamber giving access to the water below, allowing technicians or researchers to lower tools and instruments into the sea. It provides shelter and protection so that even if the ship is in high seas or surrounded by ice, researchers can work in comfort rather than on a deck exposed to the elements. A moon pool also allows divers or small submersible craft to enter or leave the water easily and in a more protected environment.
<P> Very deep moon pools are used in underwater habitats—submerged chambers used by divers engaged in underwater research, exploration, marine salvage, and recreation. In this case, shown in part D of the diagram, there is no dry access between the chamber and the sea surface, and the moon pool is the only entry or exit to the chamber. Submerged chambers provide dry areas for work and rest without the need to ascend to the surface. This kind of submerged chamber uses the same principles as the diving bell, except they are fixed to the seafloor, and may be called a wet porch, wet room, or wet bell. Sometimes the term moon pool is used to mean the complete chamber, not just the opening in the bottom and the air–water interface.
<P> While swimming in Sun Moon Lake is usually not permitted, there is an annual 3-km race called the Swimming Carnival of Sun Moon Lake held around the Mid-Autumn Festival each year. The Sun Moon Lake Swimming Carnival was launched in 1983 and is listed among Top 50 Open Water Swims in Asia and Top 100 Open Water Swims of the World. Everyone over 10 years old and with ability to swim long distances can join, regardless of nationality. In recent years the participants have numbered in the tens of thousands. Other festivities held at the same time include fireworks, laser shows, and concerts.
<P> Swimming is the self-propulsion of a person through water, usually for recreation, sport, exercise, or survival. Locomotion is achieved through coordinated movement of the limbs, the body, or both. Humans can hold their breath underwater and undertake rudimentary locomotive swimming within weeks of birth, as a survival response.
<P> Nude swimming, or skinny dipping, is the practice of bathing naked, whether in natural bodies of water, in swimming pools, or in hot tubs. The term "dipping" refers to the practice of being immersed in spring waters for health reasons at spa towns.
<P> Swimhiking is a recreation activity which combines hiking and outdoor swimming. It was conceived by Peter Hayes while hiking in the Lake District of England. When you arrive at a lake while hiking, you change into a swimming costume, put your clothes in a waterproof rucksack, and swim across the lake. On the other side you change back into your hiking gear and continue hiking.
| question: What would it be like to swim in a pool on the moon? context: <P> Moon pools can be used in chambers below sea level, especially for the use of scuba divers, and their design requires more complex consideration of air and water pressure acting on the moon pool surface.
<P> A moon pool is a feature of marine drilling platforms, drillships and diving support vessels, some marine research and underwater exploration or research vessels, and underwater habitats, in which it is also known as a wet porch. It is an opening in the floor or base of the hull, platform, or chamber giving access to the water below, allowing technicians or researchers to lower tools and instruments into the sea. It provides shelter and protection so that even if the ship is in high seas or surrounded by ice, researchers can work in comfort rather than on a deck exposed to the elements. A moon pool also allows divers or small submersible craft to enter or leave the water easily and in a more protected environment.
<P> Very deep moon pools are used in underwater habitats—submerged chambers used by divers engaged in underwater research, exploration, marine salvage, and recreation. In this case, shown in part D of the diagram, there is no dry access between the chamber and the sea surface, and the moon pool is the only entry or exit to the chamber. Submerged chambers provide dry areas for work and rest without the need to ascend to the surface. This kind of submerged chamber uses the same principles as the diving bell, except they are fixed to the seafloor, and may be called a wet porch, wet room, or wet bell. Sometimes the term moon pool is used to mean the complete chamber, not just the opening in the bottom and the air–water interface.
<P> While swimming in Sun Moon Lake is usually not permitted, there is an annual 3-km race called the Swimming Carnival of Sun Moon Lake held around the Mid-Autumn Festival each year. The Sun Moon Lake Swimming Carnival was launched in 1983 and is listed among Top 50 Open Water Swims in Asia and Top 100 Open Water Swims of the World. Everyone over 10 years old and with ability to swim long distances can join, regardless of nationality. In recent years the participants have numbered in the tens of thousands. Other festivities held at the same time include fireworks, laser shows, and concerts.
<P> Swimming is the self-propulsion of a person through water, usually for recreation, sport, exercise, or survival. Locomotion is achieved through coordinated movement of the limbs, the body, or both. Humans can hold their breath underwater and undertake rudimentary locomotive swimming within weeks of birth, as a survival response.
<P> Nude swimming, or skinny dipping, is the practice of bathing naked, whether in natural bodies of water, in swimming pools, or in hot tubs. The term "dipping" refers to the practice of being immersed in spring waters for health reasons at spa towns.
<P> Swimhiking is a recreation activity which combines hiking and outdoor swimming. It was conceived by Peter Hayes while hiking in the Lake District of England. When you arrive at a lake while hiking, you change into a swimming costume, put your clothes in a waterproof rucksack, and swim across the lake. On the other side you change back into your hiking gear and continue hiking.
| answer: Actually, and surprisingly, there wouldn't really be any difference at all. First, water is (nearly) incompressible, meaning it would be at the same density on the Earth or on the Moon, and whether you float is not a function of gravity (your weight and the buoyant force both scale with it), so you would have to swim just as hard to stay afloat in both locations. The only real difference would be that, since gravity is pulling you down less on the moon, you would sink more slowly if you did nothing to keep afloat, or after cannonballing in. |
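A short worked check of the floating claim in the answer above, as a sketch using Archimedes' principle (g is the local gravitational acceleration, about 9.8 m/s² on Earth versus about 1.6 m/s² on the Moon):
\[ F_\text{buoyant} = \rho_\text{water}\, V_\text{submerged}\, g, \qquad W = \rho_\text{body}\, V_\text{body}\, g . \]
Floating (\(F_\text{buoyant} = W\)) gives
\[ \frac{V_\text{submerged}}{V_\text{body}} = \frac{\rho_\text{body}}{\rho_\text{water}}, \]
so the fraction of the body that sits below the waterline does not depend on g; only the net force, and hence how quickly you sink or bob, scales with gravity.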
120,101 | 2esycz | What did composers from the classical period [Mozart, Haydn, even Beethoven] think of the United States? Do we know? [crosspost from r/classicalmusic] | I’ve actually thought about this question before! Sadly I have never read anything about any musician or composer from Revolutionary America time having thoughts one way or another about America. European art music (especially opera) didn’t really get a foothold in America until the 1820s. Classical composers tend to be a mercenary bunch, and America from 1750-1800ish just didn’t have much money or potential consumers of art music to attract Italian or Viennese musicians to schlep alllll the way over to North America. In the Classical period music still tended to go to the old classical stories for inspiration for their secular music (although it’s starting to lose the absolute stranglehold that it had in baroque times), so you see operas and symphonies starring more Greek gods than revolutionary mortals. In the 1770s Metastasio librettos, which are all classical stories, are still in heavy rotation in opera. If the American revolution had happened a couple of decades later during the Romantic period I really think it could have got some traction in opera though… But this explains pretty well why it doesn’t appear thematically in more places. Also, if you’ve got royal patrons (Mozart and Beethoven both), or an official court position (like Haydn as Kapellmeister) writing music about how awesome a colony overthrowing its monarch is, well, that’s not very politically savvy to say the least. Best to just write about Venus and Mars yet again… Maybe you could get away with it in France though? Your observation about Napoleon and republic idealism is a keen one! However, I have my own cynical read on its dedication, which is that Beethoven was working on the centuries-old practice of dedicating music to someone unsolicited in the hopes the maybe-new-patron would like the music and give you some money, or heck just give you some money to be polite. This was a pretty common thing, Bach did it (close to Beethoven in time/society). Napoleon was a pretty huge fan of music, opera in particular, and he did give a fair amount of money out to musicians as he went through Europe. You also see (in the Wikipedia you linked to) that Beethoven stripped the dedication from it to get money from someone else. Cash Rules Everything Around Musicians says me. Now here’s an interesting footnote - Italian opera got into SOUTH America way before it got into North America, because they had more money for one, and the Spaniards liked opera, arguably more than the British did in the 18th century, though that’s a pretty long argument I’d have to work out. [John Rosselli wrote a fair amount about Italian opera in Latin America if you’d like to look into more.](_URL_0_) Although USA is the top global consumer of opera now, so we caught up. :) | [
"I’ve actually thought about this question before! Sadly I have never read anything about any musician or composer from Revolutionary America time having thoughts one way or another about America. European art music (especially opera) didn’t really get a foothold in America until the 1820s. Classical composers tend... | 2 | [
"I’ve actually thought about this question before! Sadly I have never read anything about any musician or composer from Revolutionary America time having thoughts one way or another about America. European art music (especially opera) didn’t really get a foothold in America until the 1820s. Classical composers tend... | 2 | <P> Classical music was brought to the United States during the colonial era. Many American composers of this period worked exclusively with European models, while others, such as William Billings, Supply Belcher, and Justin Morgan, also known as the "First New England School", developed a style almost entirely independent of European models. Of these composers, Billings is the most well-remembered; he was also influential "as the founder of the American church choir, as the first musician to use a pitch pipe, and as the first to introduce a violoncello into church service". Many of these composers were amateur singers who developed new forms of sacred music suitable for performance by amateurs, and often using harmonic methods which would have been considered bizarre by contemporary European standards. These composers' styles were untouched by "the influence of their sophisticated European contemporaries", using modal or pentatonic scales or melodies and eschewing the European rules of harmony.
<P> In the early 19th century, America produced diverse composers such as Anthony Heinrich, who composed in an idiosyncratic, intentionally American style and was the first American composer to write for a symphony orchestra. Many other composers, most famously William Henry Fry and George Frederick Bristow, supported the idea of an American classical style, though their works were very European in orientation. It was John Knowles Paine, however, who became the first American composer to be accepted in Europe. Paine's example inspired the composers of the "Second New England School", which included such figures as Amy Beach, Edward MacDowell, and Horatio Parker.
<P> During the Second New England School's years of prominence, American musical education was still in its infancy. Americans often learned music theory and composition in Europe or from European musicians who had emigrated to the United States. As a result, large portions of American classical music written at the time reflect European influences, especially from Germany. Although America lagged in composition, in the second half of the 20th century the country developed permanent and robust opera and symphonic organizations and exceeded Europe in the quality of piano manufacture and piano ownership per capita.
<P> Early 20th-century scholarly analysis of American music tended to interpret European-derived classical traditions as the most worthy of study, with the folk, religious, and traditional musics of the common people denigrated as low-class and of little artistic or social worth. American music history was compared to the much longer historical record of European nations, and was found wanting, leading writers like the composer Arthur Farwell to ponder what sorts of musical traditions might arise from American culture, in his 1915 "Music in America". In 1930, John Tasker Howard's "Our American Music" became a standard analysis, focusing largely on concert music composed in the United States. Since the analysis of musicologist Charles Seeger in the mid-20th century, American music history has often been described as intimately related to perceptions of race and ancestry. Under this view, the diverse racial and ethnic background of the United States has both promoted a sense of musical separation between the races, while still fostering constant acculturation, as elements of European, African, and indigenous musics have shifted between fields. Gilbert Chase's "America's Music, from the Pilgrims to the Present", was the first major work to examine the music of the entire United States, and recognize folk traditions as more culturally significant than music for the concert hall. Chase's analysis of a diverse American musical identity has remained the dominant view among the academic establishment. Until the 1960s and 1970s, however, most musical scholars in the United States continued to study European music, limiting themselves only to certain fields of American music, especially European-derived classical and operatic styles, and sometimes African American jazz. More modern musicologists and ethnomusicologists have studied subjects ranging from the national musical identity to the individual styles and techniques of specific communities in a particular time of American history. Prominent recent studies of American music include Charles Hamm's "Music in the New World" from 1983, and Richard Crawford's "America's Musical Life" from 2001.
<P> Throughout the later part of American history, and into modern times, the relationship between American and European music has been a discussed topic among scholars of American music. Some have urged for the adoption of more purely European techniques and styles, which are sometimes perceived as more refined or elegant, while others have pushed for a sense of musical nationalism that celebrates distinctively American styles. Modern classical music scholar John Warthen Struble has contrasted American and European, concluding that the music of the United States is inherently distinct because the United States has not had centuries of musical evolution as a nation. Instead, the music of the United States is that of dozens or hundreds of indigenous and immigrant groups, all of which developed largely in regional isolation until the American Civil War, when people from across the country were brought together in army units, trading musical styles and practices. Struble deemed the ballads of the Civil War "the first American folk music with discernible features that can be considered unique to America: the first 'American' sounding music, as distinct from any regional style derived from another country."
<P> By the end of the 19th century, serious American composers were travelling to European countries to study, especially with German and French composition teachers, and they gained a thorough understanding of Romantic style, including an understanding of the Lieder tradition. American songs written between 1870 and 1910 are often dismissed as sounding too "derivative", although the compositional craft shown in these works is quite high.
<P> This is a list of composers of the Classical music era, roughly from 1730 to 1820. Prominent composers of the Classical era include Carl Philipp Emanuel Bach, Johann Stamitz, Joseph Haydn, Johann Christian Bach, Antonio Salieri, Muzio Clementi, Wolfgang Amadeus Mozart, Luigi Boccherini, Ludwig van Beethoven, and Franz Schubert.
| question: What did composers from the classical period [Mozart, Haydn, even Beethoven] think of the United States? Do we know? [crosspost from r/classicalmusic] context: <P> Classical music was brought to the United States during the colonial era. Many American composers of this period worked exclusively with European models, while others, such as William Billings, Supply Belcher, and Justin Morgan, also known as the "First New England School", developed a style almost entirely independent of European models. Of these composers, Billings is the most well-remembered; he was also influential "as the founder of the American church choir, as the first musician to use a pitch pipe, and as the first to introduce a violoncello into church service". Many of these composers were amateur singers who developed new forms of sacred music suitable for performance by amateurs, and often using harmonic methods which would have been considered bizarre by contemporary European standards. These composers' styles were untouched by "the influence of their sophisticated European contemporaries", using modal or pentatonic scales or melodies and eschewing the European rules of harmony.
<P> In the early 19th century, America produced diverse composers such as Anthony Heinrich, who composed in an idiosyncratic, intentionally American style and was the first American composer to write for a symphony orchestra. Many other composers, most famously William Henry Fry and George Frederick Bristow, supported the idea of an American classical style, though their works were very European in orientation. It was John Knowles Paine, however, who became the first American composer to be accepted in Europe. Paine's example inspired the composers of the "Second New England School", which included such figures as Amy Beach, Edward MacDowell, and Horatio Parker.
<P> During the Second New England School's years of prominence, American musical education was still in its infancy. Americans often learned music theory and composition in Europe or from European musicians who had emigrated to the United States. As a result, large portions of American classical music written at the time reflect European influences, especially from Germany. Although America lagged in composition, in the second half of the 20th century the country developed permanent and robust opera and symphonic organizations and exceeded Europe in the quality of piano manufacture and piano ownership per capita.
<P> Early 20th-century scholarly analysis of American music tended to interpret European-derived classical traditions as the most worthy of study, with the folk, religious, and traditional musics of the common people denigrated as low-class and of little artistic or social worth. American music history was compared to the much longer historical record of European nations, and was found wanting, leading writers like the composer Arthur Farwell to ponder what sorts of musical traditions might arise from American culture, in his 1915 "Music in America". In 1930, John Tasker Howard's "Our American Music" became a standard analysis, focusing largely on concert music composed in the United States. Since the analysis of musicologist Charles Seeger in the mid-20th century, American music history has often been described as intimately related to perceptions of race and ancestry. Under this view, the diverse racial and ethnic background of the United States has both promoted a sense of musical separation between the races, while still fostering constant acculturation, as elements of European, African, and indigenous musics have shifted between fields. Gilbert Chase's "America's Music, from the Pilgrims to the Present", was the first major work to examine the music of the entire United States, and recognize folk traditions as more culturally significant than music for the concert hall. Chase's analysis of a diverse American musical identity has remained the dominant view among the academic establishment. Until the 1960s and 1970s, however, most musical scholars in the United States continued to study European music, limiting themselves only to certain fields of American music, especially European-derived classical and operatic styles, and sometimes African American jazz. More modern musicologists and ethnomusicologists have studied subjects ranging from the national musical identity to the individual styles and techniques of specific communities in a particular time of American history. Prominent recent studies of American music include Charles Hamm's "Music in the New World" from 1983, and Richard Crawford's "America's Musical Life" from 2001.
<P> Throughout the later part of American history, and into modern times, the relationship between American and European music has been a discussed topic among scholars of American music. Some have urged for the adoption of more purely European techniques and styles, which are sometimes perceived as more refined or elegant, while others have pushed for a sense of musical nationalism that celebrates distinctively American styles. Modern classical music scholar John Warthen Struble has contrasted American and European, concluding that the music of the United States is inherently distinct because the United States has not had centuries of musical evolution as a nation. Instead, the music of the United States is that of dozens or hundreds of indigenous and immigrant groups, all of which developed largely in regional isolation until the American Civil War, when people from across the country were brought together in army units, trading musical styles and practices. Struble deemed the ballads of the Civil War "the first American folk music with discernible features that can be considered unique to America: the first 'American' sounding music, as distinct from any regional style derived from another country."
<P> By the end of the 19th century, serious American composers were travelling to European countries to study, especially with German and French composition teachers, and they gained a thorough understanding of Romantic style, including an understanding of the Lieder tradition. American songs written between 1870 and 1910 are often dismissed as sounding too "derivative", although the compositional craft shown in these works is quite high.
<P> This is a list of composers of the Classical music era, roughly from 1730 to 1820. Prominent composers of the Classical era include Carl Philipp Emanuel Bach, Johann Stamitz, Joseph Haydn, Johann Christian Bach, Antonio Salieri, Muzio Clementi, Wolfgang Amadeus Mozart, Luigi Boccherini, Ludwig van Beethoven, and Franz Schubert.
| answer: I’ve actually thought about this question before! Sadly I have never read anything about any musician or composer from Revolutionary America time having thoughts one way or another about America. European art music (especially opera) didn’t really get a foothold in America until the 1820s. Classical composers tend to be a mercenary bunch, and America from 1750-1800ish just didn’t have much money or potential consumers of art music to attract Italian or Viennese musicians to schlep alllll the way over to North America. In the Classical period music still tended to go to the old classical stories for inspiration for their secular music (although it’s starting to lose the absolute stranglehold that it had in baroque times), so you see operas and symphonies starring more Greek gods than revolutionary mortals. In the 1770s Metastasio librettos, which are all classical stories, are still in heavy rotation in opera. If the American revolution had happened a couple of decades later during the Romantic period I really think it could have got some traction in opera though… But this explains pretty well why it doesn’t appear thematically in more places. Also, if you’ve got royal patrons (Mozart and Beethoven both), or an official court position (like Haydn as Kapellmeister) writing music about how awesome a colony overthrowing its monarch is, well, that’s not very politically savvy to say the least. Best to just write about Venus and Mars yet again… Maybe you could get away with it in France though? Your observation about Napoleon and republic idealism is a keen one! However, I have my own cynical read on its dedication, which is that Beethoven was working on the centuries-old practice of dedicating music to someone unsolicited in the hopes the maybe-new-patron would like the music and give you some money, or heck just give you some money to be polite. This was a pretty common thing, Bach did it (close to Beethoven in time/society). Napoleon was a pretty huge fan of music, opera in particular, and he did give a fair amount of money out to musicians as he went through Europe. You also see (in the Wikipedia you linked to) that Beethoven stripped the dedication from it to get money from someone else. Cash Rules Everything Around Musicians says me. Now here’s an interesting footnote - Italian opera got into SOUTH America way before it got into North America, because they had more money for one, and the Spaniards liked opera, arguably more than the British did in the 18th century, though that’s a pretty long argument I’d have to work out. [John Rosselli wrote a fair amount about Italian opera in Latin America if you’d like to look into more.](_URL_0_) Although USA is the top global consumer of opera now, so we caught up. :) |
109,365 | ct1p85 | how to tell non-planet time of day (if day is defined)? | Well if location and orbit are removed there's nothing left to alter a "day's time" or a year seeing as they're both related to rotation and orbit. So they'd all be the same at that point. | [
"Well if location and orbit are removed there's nothing left to alter a \"days time\" or a year seeing as they're both related to rotation and orbit. So theyd all be the same at that point.",
"At that point, there will need to be some standard to keep things coordinated. For example, one could still use the moder... | 3 | [
"Well if location and orbit are removed there's nothing left to alter a \"days time\" or a year seeing as they're both related to rotation and orbit. So theyd all be the same at that point."
] | 1 | <P> A standard time zone covers roughly 15° of longitude, so any point within that zone which is not on the reference longitude (generally a multiple of 15°) will experience a difference from standard time equal to 4 minutes of time per degree. For illustration, sunsets and sunrises are at a much later "official" time at the western edge of a time-zone, compared to sunrise and sunset times at the eastern edge. If a sundial is located at, say, a longitude 5° west of the reference longitude, its time will read 20 minutes slow, since the Sun appears to revolve around the Earth at 15° per hour. This is a constant correction throughout the year. For equiangular dials such as equatorial, spherical or Lambert dials, this correction can be made by rotating the dial surface by an angle equaling the difference in longitude, without changing the gnomon position or orientation. However, this method does not work for other dials, such as a horizontal dial; the correction must be applied by the viewer.
<P> BULLET::::- Most places on Earth use a time zone which differs from the local solar time by minutes or even hours. For example, if a location uses a time zone with reference meridian 15° to the east, the Sun will rise around 07:00 on the equinox and set 12 hours later around 19:00.
<P> In most places on Earth, local time is determined by longitude, such that the time of day is more-or-less synchronised to the position of the sun in the sky (for example, at midday the sun is roughly at its highest). This line of reasoning fails at the North Pole, where the sun rises and sets only once per year, and all lines of longitude, and hence all time zones, converge. There is no permanent human presence at the North Pole and no particular time zone has been assigned. Polar expeditions may use any time zone that is convenient, such as Greenwich Mean Time, or the time zone of the country from which they departed.
<P> Presently, lay-person calculations of longitude can be made by noting the exact local time (leaving out any reference for Daylight Saving Time) when the sun is at its highest point in the sky. The calculation of noon can be made more easily and accurately with a small, exactly vertical rod driven into level ground—take the time reading when the shadow is pointing due north (in the northern hemisphere). Then take your local time reading and subtract it from GMT (Greenwich Mean Time) or the time in London, England. For example, a noon reading (1200 hours) near central Canada or the US would occur at approximately 6 pm (1800 hours) in London. The six-hour differential is one quarter of a 24-hour day, or 90 degrees of a 360-degree circle (the Earth). The calculation can also be made by taking the number of hours (use decimals for fractions of an hour) multiplied by 15, the number of degrees in one hour. Either way, it can be demonstrated that much of central North America is at or near 90 degrees west longitude. Eastern longitudes can be determined by adding the local time to GMT, with similar calculations.
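A minimal code sketch of the arithmetic in the paragraph above (the function name is my own; it assumes you note the GMT clock reading at local solar noon and ignores the equation of time and daylight saving):

# Longitude from the GMT time at which local solar noon is observed:
# the Sun appears to move 15 degrees of longitude per hour, and a location
# whose solar noon comes after 12:00 GMT lies west of Greenwich.
def longitude_from_solar_noon(gmt_hours_at_local_noon: float) -> float:
    """Return signed longitude in degrees (positive = east, negative = west)."""
    hours_behind_gmt = gmt_hours_at_local_noon - 12.0
    return -15.0 * hours_behind_gmt

# Example from the text: solar noon in central North America around 18:00 GMT
# -> 6 hours behind Greenwich -> roughly 90 degrees west.
print(longitude_from_solar_noon(18.0))   # -90.0, i.e. about 90 degrees W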
<P> The canons of John of Saxony explained how one could find the planetary position (longitudes) at any given time. One would have to calculate the length of time between the basic year and the year sought. They would then divide them by mean figures of the planetary orbits, and add/subtract values to adjust for hours and minutes. To expedite these calculations he had an accompanying table of sexagesimal multiplication. In addition to this, he divided the day into sixty parts rather than 24 hours, consistently representing time by sexagesimal fractions and multiples of a day. It is in this form that the Alfonsine tables circulated in Western Europe for the next three centuries.
<P> BULLET::::2. The solar time must be corrected for the longitude of the sundial relative to the longitude of the official time zone. For example, an uncorrected sundial located "west" of Greenwich, England but within the same time-zone, shows an "earlier" time than the official time. It may show "11:45" at official noon, and will show "noon" after the official noon. This correction can easily be made by rotating the hour-lines by a constant angle equal to the difference in longitudes, which makes this a common design option.
<P> Apparent solar time ('apparent' is often used in English-language sources, but 'true' in French astronomical literature) is based on the solar day, which is the period between one solar noon (passage of the real Sun across the meridian) and the next. A solar day is approximately 24 hours of mean time. Because the Earth's orbit around the sun is elliptical, and because of the obliquity of the Earth's axis relative to the plane of the orbit (the ecliptic), the apparent solar day varies a few dozen seconds above or below the mean value of 24 hours. As the variation accumulates over a few weeks, there are differences as large as 16 minutes between apparent solar time and mean solar time (see Equation of time). However, these variations cancel out over a year. There are also other perturbations such as Earth's wobble, but these are less than a second per year.
| question: how to tell non-planet time of day (if day is defined)? context: <P> A standard time zone covers roughly 15° of longitude, so any point within that zone which is not on the reference longitude (generally a multiple of 15°) will experience a difference from standard time equal to 4 minutes of time per degree. For illustration, sunsets and sunrises are at a much later "official" time at the western edge of a time-zone, compared to sunrise and sunset times at the eastern edge. If a sundial is located at, say, a longitude 5° west of the reference longitude, its time will read 20 minutes slow, since the Sun appears to revolve around the Earth at 15° per hour. This is a constant correction throughout the year. For equiangular dials such as equatorial, spherical or Lambert dials, this correction can be made by rotating the dial surface by an angle equaling the difference in longitude, without changing the gnomon position or orientation. However, this method does not work for other dials, such as a horizontal dial; the correction must be applied by the viewer.
<P> BULLET::::- Most places on Earth use a time zone which differs from the local solar time by minutes or even hours. For example, if a location uses a time zone with reference meridian 15° to the east, the Sun will rise around 07:00 on the equinox and set 12 hours later around 19:00.
<P> In most places on Earth, local time is determined by longitude, such that the time of day is more-or-less synchronised to the position of the sun in the sky (for example, at midday the sun is roughly at its highest). This line of reasoning fails at the North Pole, where the sun rises and sets only once per year, and all lines of longitude, and hence all time zones, converge. There is no permanent human presence at the North Pole and no particular time zone has been assigned. Polar expeditions may use any time zone that is convenient, such as Greenwich Mean Time, or the time zone of the country from which they departed.
<P> Presently, lay-person calculations of longitude can be made by noting the exact local time (leaving out any reference for Daylight Saving Time) when the sun is at its highest point in the sky. The calculation of noon can be made more easily and accurately with a small, exactly vertical rod driven into level ground—take the time reading when the shadow is pointing due north (in the northern hemisphere). Then take your local time reading and subtract it from GMT (Greenwich Mean Time) or the time in London, England. For example, a noon reading (1200 hours) near central Canada or the US would occur at approximately 6 pm (1800 hours) in London. The six-hour differential is one quarter of a 24-hour day, or 90 degrees of a 360-degree circle (the Earth). The calculation can also be made by taking the number of hours (use decimals for fractions of an hour) multiplied by 15, the number of degrees in one hour. Either way, it can be demonstrated that much of central North America is at or near 90 degrees west longitude. Eastern longitudes can be determined by adding the local time to GMT, with similar calculations.
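The lay-person method above reduces to one multiplication: the difference between GMT and the local-noon reading, times 15° per hour. A small sketch of that calculation follows; the helper name is mine, and west longitudes are reported as positive purely for illustration.

```python
def longitude_from_local_noon(gmt_at_local_noon_hours):
    """Estimate longitude from the GMT clock time at which local solar noon occurs.
    Returns degrees west of Greenwich (a negative value would mean east)."""
    return (gmt_at_local_noon_hours - 12) * 15

# Local noon observed at 18:00 GMT -> about 90 degrees west,
# as in the central North America example above.
print(longitude_from_local_noon(18))
```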
<P> The canons of John of Saxony explained how one could find the planetary position (longitudes) at any given time. One would have to calculate the length of time between the basic year and the year sought. They would then divide this interval by the mean figures of the planetary orbits, and add/subtract values to adjust for hours and minutes. To expedite these calculations he provided an accompanying table of sexagesimal multiplication. In addition to this, he divided the day into sixty parts rather than 24 hours, consistently representing time by sexagesimal fractions and multiples of a day. It is in this form that the Alfonsine tables circulated in Western Europe for the next three centuries.
<P> BULLET::::2. The solar time must be corrected for the longitude of the sundial relative to the longitude of the official time zone. For example, an uncorrected sundial located "west" of Greenwich, England but within the same time-zone, shows an "earlier" time than the official time. It may show "11:45" at official noon, and will show "noon" after the official noon. This correction can easily be made by rotating the hour-lines by a constant angle equal to the difference in longitudes, which makes this a common design option.
<P> Apparent solar time ('apparent' is often used in English-language sources, but 'true' in French astronomical literature) is based on the solar day, which is the period between one solar noon (passage of the real Sun across the meridian) and the next. A solar day is approximately 24 hours of mean time. Because the Earth's orbit around the sun is elliptical, and because of the obliquity of the Earth's axis relative to the plane of the orbit (the ecliptic), the apparent solar day varies a few dozen seconds above or below the mean value of 24 hours. As the variation accumulates over a few weeks, there are differences as large as 16 minutes between apparent solar time and mean solar time (see Equation of time). However, these variations cancel out over a year. There are also other perturbations such as Earth's wobble, but these are less than a second per year.
| answer: Well, if location and orbit are removed there's nothing left to alter a "day's time" or a year, seeing as they're both related to rotation and orbit. So they'd all be the same at that point. |
177,355 | blqlvj | American War of Independence and class conflict | There was, first of all, a lot of participation across class lines by the Colonists. There were wealthy merchants like Robert Morris, southern planters like Patrick Henry, bourgeois and haute-bourgeois businessmen like Paul Revere. But there were also farmers and sons of farmers, like Joseph Plumb Martin in Pennsylvania and Henry Bedinger and the volunteers for the Beeline March from Virginia. While some of the wealthy were definitely affected by the Townshend Act taxes, the growing British response to colonial resistance to paying the taxes and the Boston Tea Party resulted in the "Intolerable" Acts, which brought not only new taxes but new impositions: the charter of Massachusetts and most of the government was dumped and a royal governor put in charge, trial of government officials could be done in English courts (instead of trial in Massachusetts), the port of Boston was closed until the colonists paid for the Boston Tea Party tea, and British troops could be quartered anywhere the governor saw fit. The governor appointed was also Thomas Gage, a military man. The effect of this went beyond simple taxation and economic interest: it took away local government and replaced it with an official responsible only to the Crown, and that created resentment among all classes in the state. When the governor took the state gunpowder out of the armory (which he had a perfect right to do) there was an immediate massive mob response, the Powder Alarm. Like the Great Fear in the early stage of the French Revolution, the Powder Alarm shows the population was already close to being up in arms and ready to fight. When that was met by military action, the other colonies became alarmed at what they saw could easily become their own fates, of being run by royal governors. There were also grievances beyond Boston Bay. The end of the war with the French, in 1763, was followed by Pontiac's War, which promised further expensive campaigns in North America. The Proclamation of 1763, which barred new settlement beyond the Alleghenies, was intended to pacify the Native nations, and did so. But it had been assumed by most of the colonists that, after the War, lands to the west would be opened. When they were not, there was great resentment - and that resentment was not only shared by wealthy planters but by anyone looking for land - and, in the 18th c. colonies, which were mostly dominated by farms, land was the basis of the economy and so the equivalent of money. This resentment went far deeper than wealthy merchants. If there were very mixed classes fighting the Revolutionary War, though, it's very true that the revolution went no deeper than removal of British authority when it was over. The same elites who ran the colonies before continued to run them. And those elites were aware of the possibility of democracy taking root. Shays' Rebellion in Massachusetts was seen by people like Washington as a warning sign. If ex-soldiers in the Army could march to take control of the local courthouse in Springfield, MA, in revolt against Boston elite rule, how many other disgruntled veterans might want to take up arms as well? Those fears were part of the motivation for the 1787 Constitutional Convention, where a stronger central government could be created to keep mob rule in its place. Because the power did not devolve to the common people, really, many have said the Revolutionary War should be called something other than Revolutionary.
They have a point - there was no Terror, no blood running in the streets. Slaves were slaves before, and were slaves after. But still, the actual revolt was not limited to elites, and not only powered by their interests. Nor, it might be added, could it be said that the British elites had an enormous desire to hang on to their property in the 13 Colonies. The big money in the British colonies was in the Caribbean, in the giant sugar plantations in places like Jamaica. Those never had a chance of revolting. | [
"There was, first of all, a lot of participation across class lines by the Colonists. There were wealthy merchants like Robert Morris, southern planters like Patrick Henry, bourgeois and haute-bourgeois businessmen like Paul Revere. But there were also farmers and sons of farmers, like Joseph Plumb Martin in Penn... | 2 | [] | 0 | <P> The Wars of Independence in South America were the numerous wars against Spanish rule that took place during the early 19th century, from 1808 to 1829. The conflicts can be characterized both as a civil wars and a war of national liberation, since the majority of combatants on both sides were Spanish Americans and the goal of the conflict for one side was the independence of the Spanish colonies in the Americas. The events in Napoleonic Europe, during which France deposed Ferdinand VII of Spain and Maria I of Portugal provided the spark for conflict within both Spanish and Portuguese colonies between those pro-Independence criollos who sought political and economic independence from Europe and Royalist criollos, who supported the continued allegiance to and permanence within the Spanish or Portuguese empires. The conflict saw prolonged campaigns between poorly equipped, largely peasant forces, often in harsh conditions. By the end of the wars, the military relationship between South America and Europe had changed forever.
<P> Historian Steve Fraser, author of "The Age of Acquiescence: The Life and Death of American Resistance to Organized Wealth and Power", asserts that class conflict is an inevitability if current political and economic conditions continue, noting that “people are increasingly fed up… their voices are not being heard. And I think that can only go on for so long without there being more and more outbreaks of what used to be called class struggle, class warfare.”
<P> The typical example of class conflict described is class conflict within capitalism. This class conflict is seen to occur primarily between the bourgeoisie and the proletariat, and takes the form of conflict over hours of work, value of wages, division of profits, cost of consumer goods, the culture at work, control over parliament or bureaucracy, and economic inequality. The particular implementation of government programs which may seem purely humanitarian, such as disaster relief, can actually be a form of class conflict. In the USA class conflict is often noted in labor/management disputes. As far back as 1933 representative Edward Hamilton of ALPA, the Airline Pilot's Association, used the term "class warfare" to describe airline management's opposition at the National Labor Board hearings in October of that year. Apart from these day-to-day forms of class conflict, during periods of crisis or revolution class conflict takes on a violent nature and involves repression, assault, restriction of civil liberties, and murderous violence such as assassinations or death squads. (Zinn, "People's History")
<P> BULLET::::- racially based communal conflict against African Americans that took place before the American Civil War, often in relation to attempted slave revolts, and after the war, in relation to tensions under Reconstruction and later efforts to suppress black voting and institute Jim Crow and maintain white supremacy.
<P> This is a timeline of events related to the Spanish American wars of independence. Numerous wars against Spanish rule in Spanish America took place during the early 19th century, from 1808 until 1829, directly related to the Napoleonic French invasion of Spain. The conflict started with short-lived governing juntas established in Chuquisaca and Quito opposing the composition of the Supreme Central Junta of Seville. When the Central Junta fell to the French, numerous new Juntas appeared all across the Americas, eventually resulting in a chain of newly independent countries stretching from Argentina and Chile in the south, to Mexico in the north. After the death of the king Ferdinand VII, in 1833, only Cuba and Puerto Rico remained under Spanish rule, until the Spanish–American War in 1898.
<P> The Spanish American wars of independence were the numerous wars against Spanish rule in Spanish America with the aim of political independence that took place during the early 19th century, after the French invasion of Spain during Europe's Napoleonic Wars. Although there has been research on the idea of a separate Spanish American ("creole") identity separate from that of Iberia, political independence was not initially the aim of most Spanish Americans, nor was it necessarily inevitable. After the restoration of rule by Ferdinand VII in 1814, and his rejection of the Spanish liberal constitution of 1812, the monarchy as well as liberals hardened their stance toward its overseas possessions, and they in turn increasingly sought political independence.
<P> The War was also a source of racial liberalism in that previously marginalized groups of Americans were able to gain a foothold in the economy due to the need for a strong labor force. This gain in economic power translated into strong political power, and as a result, certain government actions, such as Executive Order 8802, were implemented to aid these groups.
| question: American War of Independence and class conflict context: <P> The Wars of Independence in South America were the numerous wars against Spanish rule that took place during the early 19th century, from 1808 to 1829. The conflicts can be characterized both as a civil wars and a war of national liberation, since the majority of combatants on both sides were Spanish Americans and the goal of the conflict for one side was the independence of the Spanish colonies in the Americas. The events in Napoleonic Europe, during which France deposed Ferdinand VII of Spain and Maria I of Portugal provided the spark for conflict within both Spanish and Portuguese colonies between those pro-Independence criollos who sought political and economic independence from Europe and Royalist criollos, who supported the continued allegiance to and permanence within the Spanish or Portuguese empires. The conflict saw prolonged campaigns between poorly equipped, largely peasant forces, often in harsh conditions. By the end of the wars, the military relationship between South America and Europe had changed forever.
<P> Historian Steve Fraser, author of "The Age of Acquiescence: The Life and Death of American Resistance to Organized Wealth and Power", asserts that class conflict is an inevitability if current political and economic conditions continue, noting that “people are increasingly fed up… their voices are not being heard. And I think that can only go on for so long without there being more and more outbreaks of what used to be called class struggle, class warfare.”
<P> The typical example of class conflict described is class conflict within capitalism. This class conflict is seen to occur primarily between the bourgeoisie and the proletariat, and takes the form of conflict over hours of work, value of wages, division of profits, cost of consumer goods, the culture at work, control over parliament or bureaucracy, and economic inequality. The particular implementation of government programs which may seem purely humanitarian, such as disaster relief, can actually be a form of class conflict. In the USA class conflict is often noted in labor/management disputes. As far back as 1933 representative Edward Hamilton of ALPA, the Airline Pilot's Association, used the term "class warfare" to describe airline management's opposition at the National Labor Board hearings in October of that year. Apart from these day-to-day forms of class conflict, during periods of crisis or revolution class conflict takes on a violent nature and involves repression, assault, restriction of civil liberties, and murderous violence such as assassinations or death squads. (Zinn, "People's History")
<P> BULLET::::- racially based communal conflict against African Americans that took place before the American Civil War, often in relation to attempted slave revolts, and after the war, in relation to tensions under Reconstruction and later efforts to suppress black voting and institute Jim Crow and maintain white supremacy.
<P> This is a timeline of events related to the Spanish American wars of independence. Numerous wars against Spanish rule in Spanish America took place during the early 19th century, from 1808 until 1829, directly related to the Napoleonic French invasion of Spain. The conflict started with short-lived governing juntas established in Chuquisaca and Quito opposing the composition of the Supreme Central Junta of Seville. When the Central Junta fell to the French, numerous new Juntas appeared all across the Americas, eventually resulting in a chain of newly independent countries stretching from Argentina and Chile in the south, to Mexico in the north. After the death of the king Ferdinand VII, in 1833, only Cuba and Puerto Rico remained under Spanish rule, until the Spanish–American War in 1898.
<P> The Spanish American wars of independence were the numerous wars against Spanish rule in Spanish America with the aim of political independence that took place during the early 19th century, after the French invasion of Spain during Europe's Napoleonic Wars. Although there has been research on the idea of a separate Spanish American ("creole") identity separate from that of Iberia, political independence was not initially the aim of most Spanish Americans, nor was it necessarily inevitable. After the restoration of rule by Ferdinand VII in 1814, and his rejection of the Spanish liberal constitution of 1812, the monarchy as well as liberals hardened their stance toward its overseas possessions, and they in turn increasingly sought political independence.
<P> The War was also a source of racial liberalism in that previously marginalized groups of Americans were able to gain a foothold in the economy due to the need for a strong labor force. This gain in economic power translated into strong political power, and as a result, certain government actions, such as Executive Order 8802, were implemented to aid these groups.
| answer: There was, first of all, a lot of participation across class lines by the Colonists. There were wealthy merchants like Robert Morris, southern planters like Patrick Henry, bourgeois and haute-bourgeois businessmen like Paul Revere. But there were also farmers and sons of farmers, like Joseph Plumb Martin in Pennsylvania and Henry Bedinger and the volunteers for the Beeline March from Virginia. While some of the wealthy were definitely affected by the Townshend Act taxes, the growing British response to colonial resistance to paying the taxes and the Boston Tea Party resulted in the "Intolerable" Acts, which brought not only new taxes but new impositions: the charter of Massachusetts and most of the government was dumped and a royal governor put in charge, trial of government officials could be done in English courts (instead of trial in Massachusetts), the port of Boston was closed until the colonists paid for the Boston Tea Party tea, and British troops could be quartered anywhere the governor saw fit. The governor appointed was also Thomas Gage, a military man. The effect of this went beyond simple taxation and economic interest: it took away local government and replaced it with an official responsible only to the Crown, and that created resentment among all classes in the state. When the governor took the state gunpowder out of the armory (which he had a perfect right to do) there was an immediate massive mob response, the Powder Alarm. Like the Great Fear in the early stage of the French Revolution, the Powder Alarm shows the population was already close to being up in arms and ready to fight. When that was met by military action, the other colonies became alarmed at what they saw could easily become their own fates, of being run by royal governors. There were also grievances beyond Boston Bay. The end of the war with the French, in 1763, was followed by Pontiac's War, which promised further expensive campaigns in North America. The Proclamation of 1763, which barred new settlement beyond the Alleghenies, was intended to pacify the Native nations, and did so. But it had been assumed by most of the colonists that, after the War, lands to the west would be opened. When they were not, there was great resentment - and that resentment was not only shared by wealthy planters but by anyone looking for land - and, in the 18th c. colonies, which were mostly dominated by farms, land was the basis of the economy and so the equivalent of money. This resentment went far deeper than wealthy merchants. If there were very mixed classes fighting the Revolutionary War, though, it's very true that the revolution went no deeper than removal of British authority when it was over. The same elites who ran the colonies before continued to run them. And those elites were aware of the possibility of democracy taking root. Shays' Rebellion in Massachusetts was seen by people like Washington as a warning sign. If ex-soldiers in the Army could march to take control of the local courthouse in Springfield, MA, in revolt against Boston elite rule, how many other disgruntled veterans might want to take up arms as well? Those fears were part of the motivation for the 1787 Constitutional Convention, where a stronger central government could be created to keep mob rule in its place. Because the power did not devolve to the common people, really, many have said the Revolutionary War should be called something other than Revolutionary. They have a point - there was no Terror, no blood running in the streets.
Slaves were slaves before, and were slaves after. But still, the actual revolt was not limited to elites, and not only powered by their interests. Nor, it might be added, could it be said that the British elites had an enormous desire to hang on to their property in the 13 Colonies. The big money in the British colonies was in the Caribbean, in the giant sugar plantations in places like Jamaica. Those never had a chance of revolting. |
127,230 | 3um6uc | there are estimates that 46% of the labor force is at risk for being automated in the next 10-25 years. why is no one talking about this? why do we need "jobs" when there are about to be less and less for more and more people? | This is actually quite a prominent area in leftist theory that goes back to Marx. One of Marx's biggest points wasn't just workers controlling the means of production, but also the advancement of technology to increase everybody's leisure time. Fully Automated Luxury Communism is a bit buzzwordy but encapsulates well what quite a few modern leftists adhere to (myself included). This book explains it well - _URL_0_ Also, the Universal Basic Income is a way of allowing leisure time to increase during automation without the negative financial effects caused by mass unemployment. You can read more at /r/basicincome. | [
"There have been major shifts of work performed in the past. Most human work was done on farms. Now there are few farmers. We invented other kinds of work.\n\nYou can look at many things we do and realize that they do not need to be done. The whole sports industry is merely entertainment. The travel industry is unn... | 19 | [
"There have been major shifts of work performed in the past. Most human work was done on farms. Now there are few farmers. We invented other kinds of work.\n\nYou can look at many things we do and realize that they do not need to be done. The whole sports industry is merely entertainment. The travel industry is unn... | 11 | <P> A number of studies have predicted that automation will take a large proportion of jobs in the future, but estimates of the level of unemployment this will cause vary. Research by Carl Benedikt Frey and Michael Osborne of the Oxford Martin School showed that employees engaged in "tasks following well-defined procedures that can easily be performed by sophisticated algorithms" are at risk of displacement. The study, published in 2013, shows that automation can affect both skilled and unskilled work and both high and low-paying occupations; however, low-paid physical occupations are most at risk. It estimated that 47% of US jobs were at high risk of automation. In 2014, the economic think tank Bruegel released a study, based on the Frey and Osborne approach, claiming that across the European Union's 28 member states, 54% of jobs were at risk of automation. The countries where jobs were least vulnerable to automation were Sweden, with 46.69% of jobs vulnerable, the UK at 47.17%, the Netherlands at 49.50%, and France and Denmark, both at 49.54%. The countries where jobs were found to be most vulnerable were Romania at 61.93%, Portugal at 58.94%, Croatia at 57.9%, and Bulgaria at 56.56%. A 2015 report by the Taub Center found that 41% of jobs in Israel were at risk of being automated within the next two decades. In January 2016, a joint study by the Oxford Martin School and Citibank, based on previous studies on automation and data from the World Bank, found that the risk of automation in developing countries was much higher than in developed countries. It found that 77% of jobs in China, 69% of jobs in India, 85% of jobs in Ethiopia, and 55% of jobs in Uzbekistan were at risk of automation. The World Bank similarly employed the methodology of Frey and Osborne. A 2016 study by the International Labour Organization found 74% of salaried electrical & electronics industry positions in Thailand, 75% of salaried electrical & electronics industry positions in Vietnam, 63% of salaried electrical & electronics industry positions in Indonesia, and 81% of salaried electrical & electronics industry positions in the Philippines were at high risk of automation. A 2016 United Nations report stated that 75% of jobs in the developing world were at risk of automation, and predicted that more jobs might be lost when corporations stop outsourcing to developing countries after automation in industrialized countries makes it less lucrative to outsource to countries with lower labor costs.
<P> The relationship between automation and employment is complicated. While automation eliminates old jobs, it also creates new jobs through micro-economic and macro-economic effects. Unlike previous waves of automation, many middle-class jobs may be eliminated by artificial intelligence; "The Economist" states that "the worry that AI could do to white-collar jobs what steam power did to blue-collar ones during the Industrial Revolution" is "worth taking seriously". Subjective estimates of the risk vary widely; for example, Michael Osborne and Carl Benedikt Frey estimate 47% of U.S. jobs are at "high risk" of potential automation, while an OECD report classifies only 9% of U.S. jobs as "high risk". Jobs at extreme risk range from paralegals to fast food cooks, while job demand is likely to increase for care-related professions ranging from personal healthcare to the clergy. Author Martin Ford and others go further and argue that a large number of jobs are routine, repetitive and (to an AI) predictable; Ford warns that these jobs may be automated in the next couple of decades, and that many of the new jobs may not be "accessible to people with average capability", even with retraining. Economists point out that in the past technology has tended to increase rather than reduce total employment, but acknowledge that "we're in uncharted territory" with AI.
<P> However, not all recent empirical studies have found evidence to support the idea that automation will cause widespread unemployment. A study released in 2015, examining the impact of industrial robots in 17 countries between 1993 and 2007, found no overall reduction in employment was caused by the robots, and that there was a slight increase in overall wages. According to a study published in McKinsey Quarterly in 2015, the impact of computerization in most cases is not replacement of employees but automation of portions of the tasks they perform. A 2016 OECD study found that among the 21 OECD countries surveyed, on average only 9% of jobs were in foreseeable danger of automation, but this varied greatly among countries: for example in South Korea the figure of at-risk jobs was 6% while in Austria it was 12%. In contrast to other studies, the OECD study does not primarily base its assessment on the tasks that a job entails, but also includes demographic variables, including sex, education and age. It is not clear, however, why a job should be more or less automatable just because it is performed by a woman. In 2017, Forrester estimated that automation would result in a net loss of about 7% of jobs in the US by 2027, replacing 17% of jobs while creating new jobs equivalent to 10% of the workforce. Another study argued that the risk of US jobs to automation had been overestimated due to factors such as the heterogeneity of tasks within occupations and the adaptability of jobs being neglected. The study found that once this was taken into account, the number of occupations at risk of automation in the US drops, ceteris paribus, from 38% to 9%. A 2017 study on the effect of automation on Germany found no evidence that automation caused total job losses, but that it does affect the jobs people are employed in; losses in the industrial sector due to automation were offset by gains in the service sector. Manufacturing workers were also not at risk from automation and were in fact more likely to remain employed, though not necessarily doing the same tasks. However, automation did result in a decrease in labour's income share as it raised productivity but not wages.
<P> Analyzing the true state of the U.S. labor market is very complex and a challenge for leading economists, who may arrive at different conclusions. For example, the main gauge, the unemployment rate, can be falling (a positive sign) while the labor force participation rate is falling as well (a negative sign). Further, the reasons for persons leaving the labor force may not be clear, such as aging (more people retiring) or because they are discouraged and have stopped looking for work. The extent to which persons are not fully utilizing their skills is also difficult to determine when measuring the level of underemployment.
<P> Automation is already contributing significantly to unemployment, particularly in nations where the government does not proactively seek to diminish its impact. In the United States, 47% of all current jobs have the potential to be fully automated by 2033, according to the research of experts Carl Benedikt Frey and Michael Osborne. Furthermore, wages and educational attainment appear to be strongly negatively correlated with an occupation’s risk of being automated. Prospects are particularly bleak for occupations that do not presently require a university degree, such as truck driving. Even in high-tech corridors like Silicon Valley, concern is spreading about a future in which a sizable percentage of adults have little chance of sustaining gainful employment. As the example of Sweden suggests, however, the transition to a more automated future need not inspire panic, if there is sufficient political will to promote the retraining of workers whose positions are being rendered obsolete.
<P> The Obama White House has pointed out that every 3 months "about 6 percent of jobs in the economy are destroyed by shrinking or closing businesses, while a slightly larger percentage of jobs are added". A recent MIT economics study of automation in the United States from 1990 to 2007 found that there may be a negative impact on employment and wages when robots are introduced to an industry. When one robot is added per one thousand workers, the employment-to-population ratio decreases by between 0.18 and 0.34 percentage points and wages are reduced by 0.25–0.5 percentage points. During the time period studied, the US did not have many robots in the economy, which restricts the impact of automation. However, automation is expected to triple (conservative estimate) or quadruple (a generous estimate), leading these numbers to become substantially higher.
<P> Scholars presume that AI will bring 2.2 million jobs to the job market and will eliminate 1.8 million positions by 2020. The impact of automation on employment is significant. At the same time, there are warnings that users may get lost in too much generated data. A McKinsey Global Institute study concluded that 45% of 750 paid jobs could be automated. Other sources predict that as many as 800 million jobs will be lost due to automation. A recent report, which covered the job market, social policy, the military industry, job hunting, and economic development, summarizes that higher-skilled jobs are much harder to automate; thus the developing world tends to see lower-skilled jobs automated first.
| question: there are estimates that 46% of the labor force is at risk for being automated in the next 10-25 years. why is no one talking about this? why do we need "jobs" when there are about to be less and less for more and more people? context: <P> A number of studies have predicted that automation will take a large proportion of jobs in the future, but estimates of the level of unemployment this will cause vary. Research by Carl Benedikt Frey and Michael Osborne of the Oxford Martin School showed that employees engaged in "tasks following well-defined procedures that can easily be performed by sophisticated algorithms" are at risk of displacement. The study, published in 2013, shows that automation can affect both skilled and unskilled work and both high and low-paying occupations; however, low-paid physical occupations are most at risk. It estimated that 47% of US jobs were at high risk of automation. In 2014, the economic think tank Bruegel released a study, based on the Frey and Osborne approach, claiming that across the European Union's 28 member states, 54% of jobs were at risk of automation. The countries where jobs were least vulnerable to automation were Sweden, with 46.69% of jobs vulnerable, the UK at 47.17%, the Netherlands at 49.50%, and France and Denmark, both at 49.54%. The countries where jobs were found to be most vulnerable were Romania at 61.93%, Portugal at 58.94%, Croatia at 57.9%, and Bulgaria at 56.56%. A 2015 report by the Taub Center found that 41% of jobs in Israel were at risk of being automated within the next two decades. In January 2016, a joint study by the Oxford Martin School and Citibank, based on previous studies on automation and data from the World Bank, found that the risk of automation in developing countries was much higher than in developed countries. It found that 77% of jobs in China, 69% of jobs in India, 85% of jobs in Ethiopia, and 55% of jobs in Uzbekistan were at risk of automation. The World Bank similarly employed the methodology of Frey and Osborne. A 2016 study by the International Labour Organization found 74% of salaried electrical & electronics industry positions in Thailand, 75% of salaried electrical & electronics industry positions in Vietnam, 63% of salaried electrical & electronics industry positions in Indonesia, and 81% of salaried electrical & electronics industry positions in the Philippines were at high risk of automation. A 2016 United Nations report stated that 75% of jobs in the developing world were at risk of automation, and predicted that more jobs might be lost when corporations stop outsourcing to developing countries after automation in industrialized countries makes it less lucrative to outsource to countries with lower labor costs.
<P> The relationship between automation and employment is complicated. While automation eliminates old jobs, it also creates new jobs through micro-economic and macro-economic effects. Unlike previous waves of automation, many middle-class jobs may be eliminated by artificial intelligence; "The Economist" states that "the worry that AI could do to white-collar jobs what steam power did to blue-collar ones during the Industrial Revolution" is "worth taking seriously". Subjective estimates of the risk vary widely; for example, Michael Osborne and Carl Benedikt Frey estimate 47% of U.S. jobs are at "high risk" of potential automation, while an OECD report classifies only 9% of U.S. jobs as "high risk". Jobs at extreme risk range from paralegals to fast food cooks, while job demand is likely to increase for care-related professions ranging from personal healthcare to the clergy. Author Martin Ford and others go further and argue that a large number of jobs are routine, repetitive and (to an AI) predictable; Ford warns that these jobs may be automated in the next couple of decades, and that many of the new jobs may not be "accessible to people with average capability", even with retraining. Economists point out that in the past technology has tended to increase rather than reduce total employment, but acknowledge that "we're in uncharted territory" with AI.
<P> However, not all recent empirical studies have found evidence to support the idea that automation will cause widespread unemployment. A study released in 2015, examining the impact of industrial robots in 17 countries between 1993 and 2007, found no overall reduction in employment was caused by the robots, and that there was a slight increase in overall wages. According to a study published in McKinsey Quarterly in 2015, the impact of computerization in most cases is not replacement of employees but automation of portions of the tasks they perform. A 2016 OECD study found that among the 21 OECD countries surveyed, on average only 9% of jobs were in foreseeable danger of automation, but this varied greatly among countries: for example in South Korea the figure of at-risk jobs was 6% while in Austria it was 12%. In contrast to other studies, the OECD study does not primarily base its assessment on the tasks that a job entails, but also includes demographic variables, including sex, education and age. It is not clear, however, why a job should be more or less automatable just because it is performed by a woman. In 2017, Forrester estimated that automation would result in a net loss of about 7% of jobs in the US by 2027, replacing 17% of jobs while creating new jobs equivalent to 10% of the workforce. Another study argued that the risk of US jobs to automation had been overestimated due to factors such as the heterogeneity of tasks within occupations and the adaptability of jobs being neglected. The study found that once this was taken into account, the number of occupations at risk of automation in the US drops, ceteris paribus, from 38% to 9%. A 2017 study on the effect of automation on Germany found no evidence that automation caused total job losses, but that it does affect the jobs people are employed in; losses in the industrial sector due to automation were offset by gains in the service sector. Manufacturing workers were also not at risk from automation and were in fact more likely to remain employed, though not necessarily doing the same tasks. However, automation did result in a decrease in labour's income share as it raised productivity but not wages.
<P> Analyzing the true state of the U.S. labor market is very complex and a challenge for leading economists, who may arrive at different conclusions. For example, the main gauge, the unemployment rate, can be falling (a positive sign) while the labor force participation rate is falling as well (a negative sign). Further, the reasons for persons leaving the labor force may not be clear, such as aging (more people retiring) or because they are discouraged and have stopped looking for work. The extent to which persons are not fully utilizing their skills is also difficult to determine when measuring the level of underemployment.
<P> Automation is already contributing significantly to unemployment, particularly in nations where the government does not proactively seek to diminish its impact. In the United States, 47% of all current jobs have the potential to be fully automated by 2033, according to the research of experts Carl Benedikt Frey and Michael Osborne. Furthermore, wages and educational attainment appear to be strongly negatively correlated with an occupation’s risk of being automated. Prospects are particularly bleak for occupations that do not presently require a university degree, such as truck driving. Even in high-tech corridors like Silicon Valley, concern is spreading about a future in which a sizable percentage of adults have little chance of sustaining gainful employment. As the example of Sweden suggests, however, the transition to a more automated future need not inspire panic, if there is sufficient political will to promote the retraining of workers whose positions are being rendered obsolete.
<P> The Obama White House has pointed out that every 3 months "about 6 percent of jobs in the economy are destroyed by shrinking or closing businesses, while a slightly larger percentage of jobs are added". A recent MIT economics study of automation in the United States from 1990 to 2007 found that there may be a negative impact on employment and wages when robots are introduced to an industry. When one robot is added per one thousand workers, the employment-to-population ratio decreases by between 0.18 and 0.34 percentage points and wages are reduced by 0.25–0.5 percentage points. During the time period studied, the US did not have many robots in the economy, which restricts the impact of automation. However, automation is expected to triple (conservative estimate) or quadruple (a generous estimate), leading these numbers to become substantially higher.
<P> Scholars presume that AI will bring 2.2 million jobs to the job market and will eliminate 1.8 million positions by 2020. The impact of automation on employment is significant. At the same time, there are warnings that users may get lost in too much generated data. A McKinsey Global Institute study concluded that 45% of 750 paid jobs could be automated. Other sources predict that as many as 800 million jobs will be lost due to automation. A recent report, which covered the job market, social policy, the military industry, job hunting, and economic development, summarizes that higher-skilled jobs are much harder to automate; thus the developing world tends to see lower-skilled jobs automated first.
| answer: This is actually quite a prominent area in leftist theory that goes back to Marx. One of Marx's biggest points wasn't just workers controlling the means of production, but also the advancement of technology to increase everybody's leisure time. Fully Automated Luxury Communism is a bit buzzwordy but encapsulates well what quite a few modern leftists adhere to (myself included). This book explains it well - _URL_0_ Also, the Universal Basic Income is a way of allowing leisure time to increase during automation without the negative financial effects caused by mass unemployment. You can read more at /r/basicincome. |
1,125 | 17w2jw | How definitive are the DNA results on the Richard III skeleton? | You didn't really elaborate on what you mean, but I'm guessing you want to know how confident we can be that the skeleton they've found is King Richard?Here's an overview of the evidence:DNA comparisons:* Geneticists were able to extract and sequence mitochondrial DNA from the skeleton* Mitochondrial DNA is passed down from mother to child unchanged except for the occasional mutation* So, by comparing the skeleton's mitochondrial DNA to living people who descend from King Richard's mother's line along an unbroken line of females, we can see if the skeleton has the same mitochondrial group as what King Richard would be expected to have.* Genealogists were able to track down two direct matriline descendants of Anne of York (Richard III's sister) both of whom provided DNA samples for mitochondrial DNA testing. One of the descendants wants to remain anonymous. The second descendant is a Canadian by the name of Michael Ibsen. * The fact that they have two people means that they can compare them both and make sure that they match. It makes us more sure that we are predicting King Richard's haplogroup correctly because we can more safely say that there's no anomaly (such as an unknown adoption in one of the descendant's background).* The two descendants do indeed match, and they are members of a subgroup of haplogroup J. Luckily it is fairly rare, somewhere between 1 and 2 percent of the population belongs to this particular group. If the two living descendants were members of a very prevalent haplogroup, it would increase the odds that any match found between them and the skeleton would be purely coincidental. * Mitochondrial DNA comparison of the three people can be found [here](_URL_0_) -- it's a virtually perfect match.So, that's the particulars of the DNA evidence that they have. However, there's additional evidence which makes them more sure that it's King Richard, and not some random haplogroup J guy:* Records say he was buried at a church in Leicester, 100 miles north of London. Archaeologist Richard Buckley identified a possible location of the grave through map analysis. They looked where his analyses predicted that King Richard would be, and they found the skeleton.* Radiocarbon dating estimates that the death occurred between 1455 and 1540 (Richard died in 1485)* The skeleton they found appears to have died in battle, and there's no coffin or anything like that, consistent with an enemy burial.* Various head injuries that the skeleton suffered are consistent with the way King Richard's death in battle was described* The remains display signs of scoliosis, consistent with contemporary descriptions of Richard. Other features of the skeleton are also consistent with Richard, such as the age. He died at age 32 and the skeleton they found died "in his late 20s to late 30s"The DNA evidence alone or the circumstantial evidence alone would not have been enough to make a strong conclusion, but looking at everything together is pretty convincing. The research team is not saying that they are 100% sure they have found King Richard, but rather that they: > can now confirm that the body is that of Richard III "beyond a reasonable doubt" | [
"[Here is a link to an article describing how they did the DNA analysis](_URL_0_) I will summarize it's points.\n\nThey were able to get a good sample of the corpse's mitochondrial DNA, which is passed without combination from mother offspring. Then, through a historical analysis, they found two people (currently ... | 2 | [
"[Here is a link to an article describing how they did the DNA analysis](_URL_0_) I will summarize it's points.\n\nThey were able to get a good sample of the corpse's mitochondrial DNA, which is passed without combination from mother offspring. Then, through a historical analysis, they found two people (currently ... | 2 | <P> In February 2016, French, Danish and Norwegian researchers opened the lead boxes in order to conduct DNA analysis of the remains. Radiocarbon dating of the remains showed that neither skeleton could be that of Richard I or Richard II. One skeleton dated from the third century BCE, the other from the eighth century AD, both long before the lifetimes of Richard I and Richard II.
<P> On 4 February 2013, the University of Leicester confirmed that the skeleton was that of Richard III. The identification was based on mitochondrial DNA evidence, soil analysis, and dental tests, and physical characteristics of the skeleton consistent with contemporary accounts of Richard's appearance. Osteoarchaeologist Jo Appleby commented: "The skeleton has a number of unusual features: its slender build, the scoliosis, and the battle-related trauma. All of these are highly consistent with the information that we have about Richard III in life and about the circumstances of his death."
<P> Professor Michael Hicks, a Richard III specialist, has been particularly critical of the use of the mitochondrial DNA to argue that the body is Richard III's, stating that "any male sharing a maternal ancestress in the direct female line could qualify". He also criticises the rejection by the Leicester team of the Y chromosomal evidence, suggesting that it was not acceptable to the Leicester team to conclude that the skeleton was anyone other than Richard III. He argues that on the basis of the present scientific evidence "identification with Richard III is more unlikely than likely". However, Hicks himself draws attention to the contemporary view held by some that Richard III's grandfather, Richard, Earl of Cambridge, was the product of an illegitimate union between Cambridge's mother Isabella of Castile (a bastard daughter of Pedro the Cruel of Castile) and John Holland (brother in law of Henry IV of England), rather than Edmund of Langley, 1st Duke of York (Edward III's fourth son). If that was the case then the Y chromosome discrepancy with the Beaufort line would be explained but obviously still fail to prove the identity of the body. Hicks suggests alternative candidates descended from Richard III's maternal ancestress for the body (e.g. Thomas Percy, 1st Baron Egremont, and John de la Pole, 1st Earl of Lincoln) but does not provide evidence to support his suggestions. Philippa Langley refutes Hicks's argument on the grounds that he does not take into account all the evidence.
<P> On 4 February 2013, the University of Leicester confirmed that the skeleton was beyond reasonable doubt that of King Richard III. This conclusion was based on mitochondrial DNA evidence, soil analysis, and dental tests (there were some molars missing as a result of caries), as well as physical characteristics of the skeleton which are highly consistent with contemporary accounts of Richard's appearance. The team announced that the "arrowhead" discovered with the body was a Roman-era nail, probably disturbed when the body was first interred. However, there were numerous perimortem wounds on the body, and part of the skull had been sliced off with a bladed weapon; this would have caused rapid death. The team concluded that it is unlikely that the king was wearing a helmet in his last moments. Soil taken from the remains was found to contain microscopic roundworm eggs. Several eggs were found in samples taken from the pelvis, where the king's intestines were, but not from the skull and only very small numbers were identified in soil surrounding the grave. The findings suggest that the higher concentration of eggs in the pelvic area probably arose from a roundworm infection the King suffered in his life, rather than from human waste dumped in the area at a later date, researchers said. The Mayor of Leicester announced that the king's skeleton would be re-interred at Leicester Cathedral in early 2014, but a judicial review of that decision delayed the reinterment for a year. A museum to Richard III was opened in July 2014 in the Victorian school buildings next to the Greyfriars grave site.
<P> The age of the bones at death matched that of Richard when he was killed; they were dated to about the period of his death and were mostly consistent with physical descriptions of the king. Preliminary DNA analysis showed that mitochondrial DNA extracted from the bones matched that of two matrilineal descendants, one 17th-generation and the other 19th-generation, of Richard's sister Anne of York. Taking these findings into account along with other historical, scientific and archaeological evidence, the University of Leicester announced on 4 February 2013 that it had concluded beyond reasonable doubt that the skeleton was that of Richard III.
<P> On 5 February 2013 Professor Caroline Wilkinson of the University of Dundee conducted a facial reconstruction of Richard III, commissioned by the Richard III Society, based on 3D mappings of his skull. The face is described as "warm, young, earnest and rather serious". On 11 February 2014 the University of Leicester announced the project to sequence the entire genome of Richard III and one of his living relatives, Michael Ibsen, whose mitochondrial DNA confirmed the identification of the excavated remains. Richard III thus became the first ancient person of known historical identity to have their genome sequenced.
<P> On 12 September, it was announced that the skeleton discovered during the search might be that of Richard III. Several reasons were given: the body was of an adult male; it was buried beneath the choir of the church; and there was severe scoliosis of the spine, possibly making one shoulder higher than the other (to what extent depended on the severity of the condition). Additionally, there was an object that appeared to be an arrowhead embedded in the spine; and there were perimortem injuries to the skull. These included a relatively shallow orifice, which is most likely to have been caused by a rondel dagger, and a scooping depression to the skull, inflicted by a bladed weapon, most probably a sword. Additionally, the bottom of the skull presented a gaping hole, where a halberd had cut away and entered it. Forensic pathologist Dr Stuart Hamilton stated that this injury would have left the individual's brain visible, and most certainly would have been the cause of death. Dr Jo Appleby, the osteo-archaeologist who excavated the skeleton, concurred and described the latter as "a mortal battlefield wound in the back of the skull". The base of the skull also presented another fatal wound in which a bladed weapon had been thrust into it, leaving behind a jagged hole. Closer examination of the interior of the skull revealed a mark opposite this wound, showing that the blade penetrated to a depth of . In total, the skeleton presented ten wounds: four minor injuries on the top of the skull, one dagger blow on the cheekbone, one cut on the lower jaw, two fatal injuries on the base of the skull, one cut on a rib bone, and one final wound on the pelvis, most probably inflicted after death. It is generally accepted that postmortem, Richard's naked body was tied to the back of a horse, with his arms slung over one side and his legs and buttocks over the other. This presented a tempting target for onlookers, and the angle of the blow on the pelvis suggests that one of them stabbed Richard's right buttock with substantial force, as the cut extends from the back all the way to the front of the pelvic bone and was most probably an act of humiliation. It is also possible that Richard suffered other injuries which left no trace on the skeleton.
| question: How definitive are the DNA results on the Richard III skeleton? context: <P> In February 2016, French, Danish and Norwegian researchers opened the lead boxes in order to conduct DNA analysis of the remains. Radiocarbon dating of the remains showed that neither skeleton could be that of Richard I or Richard II. One skeleton dated from the third century BCE, the other from the eighth century AD, both long before the lifetimes of Richard I and Richard II.
<P> On 4 February 2013, the University of Leicester confirmed that the skeleton was that of Richard III. The identification was based on mitochondrial DNA evidence, soil analysis, and dental tests, and physical characteristics of the skeleton consistent with contemporary accounts of Richard's appearance. Osteoarchaeologist Jo Appleby commented: "The skeleton has a number of unusual features: its slender build, the scoliosis, and the battle-related trauma. All of these are highly consistent with the information that we have about Richard III in life and about the circumstances of his death."
<P> Professor Michael Hicks, a Richard III specialist, has been particularly critical of the use of the mitochondrial DNA to argue that the body is Richard III's, stating that "any male sharing a maternal ancestress in the direct female line could qualify". He also criticises the rejection by the Leicester team of the Y chromosomal evidence, suggesting that it was not acceptable to the Leicester team to conclude that the skeleton was anyone other than Richard III. He argues that on the basis of the present scientific evidence "identification with Richard III is more unlikely than likely". However, Hicks himself draws attention to the contemporary view held by some that Richard III's grandfather, Richard, Earl of Cambridge, was the product of an illegitimate union between Cambridge's mother Isabella of Castile (a bastard daughter of Pedro the Cruel of Castile) and John Holland (brother in law of Henry IV of England), rather than Edmund of Langley, 1st Duke of York (Edward III's fourth son). If that was the case then the Y chromosome discrepancy with the Beaufort line would be explained but obviously still fail to prove the identity of the body. Hicks suggests alternative candidates descended from Richard III's maternal ancestress for the body (e.g. Thomas Percy, 1st Baron Egremont, and John de la Pole, 1st Earl of Lincoln) but does not provide evidence to support his suggestions. Philippa Langley refutes Hicks's argument on the grounds that he does not take into account all the evidence.
<P> On 4 February 2013, the University of Leicester confirmed that the skeleton was beyond reasonable doubt that of King Richard III. This conclusion was based on mitochondrial DNA evidence, soil analysis, and dental tests (there were some molars missing as a result of caries), as well as physical characteristics of the skeleton which are highly consistent with contemporary accounts of Richard's appearance. The team announced that the "arrowhead" discovered with the body was a Roman-era nail, probably disturbed when the body was first interred. However, there were numerous perimortem wounds on the body, and part of the skull had been sliced off with a bladed weapon; this would have caused rapid death. The team concluded that it is unlikely that the king was wearing a helmet in his last moments. Soil taken from the remains was found to contain microscopic roundworm eggs. Several eggs were found in samples taken from the pelvis, where the king's intestines were, but not from the skull and only very small numbers were identified in soil surrounding the grave. The findings suggest that the higher concentration of eggs in the pelvic area probably arose from a roundworm infection the King suffered in his life, rather than from human waste dumped in the area at a later date, researchers said. The Mayor of Leicester announced that the king's skeleton would be re-interred at Leicester Cathedral in early 2014, but a judicial review of that decision delayed the reinterment for a year. A museum to Richard III was opened in July 2014 in the Victorian school buildings next to the Greyfriars grave site.
<P> The age of the bones at death matched that of Richard when he was killed; they were dated to about the period of his death and were mostly consistent with physical descriptions of the king. Preliminary DNA analysis showed that mitochondrial DNA extracted from the bones matched that of two matrilineal descendants, one 17th-generation and the other 19th-generation, of Richard's sister Anne of York. Taking these findings into account along with other historical, scientific and archaeological evidence, the University of Leicester announced on 4 February 2013 that it had concluded beyond reasonable doubt that the skeleton was that of Richard III.
<P> On 5 February 2013 Professor Caroline Wilkinson of the University of Dundee conducted a facial reconstruction of Richard III, commissioned by the Richard III Society, based on 3D mappings of his skull. The face is described as "warm, young, earnest and rather serious". On 11 February 2014 the University of Leicester announced the project to sequence the entire genome of Richard III and one of his living relatives, Michael Ibsen, whose mitochondrial DNA confirmed the identification of the excavated remains. Richard III thus became the first ancient person of known historical identity to have their genome sequenced.
<P> On 12 September, it was announced that the skeleton discovered during the search might be that of Richard III. Several reasons were given: the body was of an adult male; it was buried beneath the choir of the church; and there was severe scoliosis of the spine, possibly making one shoulder higher than the other (to what extent depended on the severity of the condition). Additionally, there was an object that appeared to be an arrowhead embedded in the spine; and there were perimortem injuries to the skull. These included a relatively shallow orifice, which is most likely to have been caused by a rondel dagger, and a scooping depression to the skull, inflicted by a bladed weapon, most probably a sword. Additionally, the bottom of the skull presented a gaping hole, where a halberd had cut away and entered it. Forensic pathologist Dr Stuart Hamilton stated that this injury would have left the individual's brain visible, and most certainly would have been the cause of death. Dr Jo Appleby, the osteo-archaeologist who excavated the skeleton, concurred and described the latter as "a mortal battlefield wound in the back of the skull". The base of the skull also presented another fatal wound in which a bladed weapon had been thrust into it, leaving behind a jagged hole. Closer examination of the interior of the skull revealed a mark opposite this wound, showing that the blade penetrated to a depth of . In total, the skeleton presented ten wounds: four minor injuries on the top of the skull, one dagger blow on the cheekbone, one cut on the lower jaw, two fatal injuries on the base of the skull, one cut on a rib bone, and one final wound on the pelvis, most probably inflicted after death. It is generally accepted that postmortem, Richard's naked body was tied to the back of a horse, with his arms slung over one side and his legs and buttocks over the other. This presented a tempting target for onlookers, and the angle of the blow on the pelvis suggests that one of them stabbed Richard's right buttock with substantial force, as the cut extends from the back all the way to the front of the pelvic bone and was most probably an act of humiliation. It is also possible that Richard suffered other injuries which left no trace on the skeleton.
| answer: You didn't really elaborate on what you mean, but I'm guessing you want to know how confident we can be that the skeleton they've found is King Richard? Here's an overview of the evidence. DNA comparisons: * Geneticists were able to extract and sequence mitochondrial DNA from the skeleton. * Mitochondrial DNA is passed down from mother to child unchanged except for the occasional mutation. * So, by comparing the skeleton's mitochondrial DNA to living people who descend from King Richard's mother's line along an unbroken line of females, we can see if the skeleton has the same mitochondrial group as what King Richard would be expected to have. * Genealogists were able to track down two direct matriline descendants of Anne of York (Richard III's sister), both of whom provided DNA samples for mitochondrial DNA testing. One of the descendants wants to remain anonymous. The second descendant is a Canadian by the name of Michael Ibsen. * The fact that they have two people means that they can compare them both and make sure that they match. It makes us more sure that we are predicting King Richard's haplogroup correctly because we can more safely say that there's no anomaly (such as an unknown adoption in one of the descendants' backgrounds). * The two descendants do indeed match, and they are members of a subgroup of haplogroup J. Luckily it is fairly rare; somewhere between 1 and 2 percent of the population belongs to this particular group. If the two living descendants were members of a very prevalent haplogroup, it would increase the odds that any match found between them and the skeleton would be purely coincidental. * Mitochondrial DNA comparison of the three people can be found [here](_URL_0_) -- it's a virtually perfect match. So, that's the particulars of the DNA evidence that they have. However, there's additional evidence which makes them more sure that it's King Richard, and not some random haplogroup J guy: * Records say he was buried at a church in Leicester, 100 miles north of London. Archaeologist Richard Buckley identified a possible location of the grave through map analysis. They looked where his analyses predicted that King Richard would be, and they found the skeleton. * Radiocarbon dating estimates that the death occurred between 1455 and 1540 (Richard died in 1485). * The skeleton they found appears to have died in battle, and there's no coffin or anything like that, consistent with an enemy burial. * Various head injuries that the skeleton suffered are consistent with the way King Richard's death in battle was described. * The remains display signs of scoliosis, consistent with contemporary descriptions of Richard. Other features of the skeleton are also consistent with Richard, such as the age. He died at age 32 and the skeleton they found died "in his late 20s to late 30s". The DNA evidence alone or the circumstantial evidence alone would not have been enough to make a strong conclusion, but looking at everything together is pretty convincing. The research team is not saying that they are 100% sure they have found King Richard, but rather that they: > can now confirm that the body is that of Richard III "beyond a reasonable doubt" |
129,385 | 1a2r50 | Which famous European historical figures visited America before transatlantic travel became more common and what did they do? | The Swedish aristocrat [Axel von Fersen](_URL_2_) is most famous as being the alleged lover of Marie Antoinette, but he'd also visited Voltaire in Geneva. He went off to fight in the Revolutionary War under [Rochambeau](_URL_0_), and served as the interpreter between the latter and George Washington. Washington later inducted him into the [Society of the Cincinnati](_URL_1_). He was ultimately murdered by an angry mob in Stockholm, under strange circumstances. Interesting character. Fersen had only the highest of praise for Washington, but took a rather negative view of his home state in a 1782 letter from Williamsburg: > The principal product of Virginia is tobacco; not that this State, which is the largest of the thirteen, is not capable of other cultivation, but the laziness of the inhabitants and their conceit are great obstacles to industry. It really seems as if the Virginians were another race of men; instead of occupying themselves with their farms and making them profitable, each land-owner wants to be a lord. No white man ever works, but, as in the West India islands, all the work is done by negro slaves, who are ordered by the whites, and by overseers under them. > There are, in Virginia, at least twenty negroes to one white man; so that this State has sent but few soldiers to the army. All persons who do business are regarded as inferior by the others, who say they are not gentlemen, and they do not choose to live with them socially. These Virginians have all the aristocratic instincts, and when one sees them it is hard to understand how they came to enter a general confederation and to accept a government founded on perfect equality of condition. But the same spirit which has led them to shake off the English yoke may lead them to other action of the same kind, and I should not be surprised to see Virginia detach herself, after the peace, from the other States. Neither should I be surprised to see the American government become a complete aristocracy. | [
"The Marquis de Lafayette fought during the American Revolution.",
"Two of my favorites were literary tours. Oscar Wilde visited in 1882 where he was promoting his ideas of aestheticism. He'd be walking around town in New York one week with lilies in his hair and spend the next drinking whisky with miners in Virg... | 3 | [
"The Marquis de Lafayette fought during the American Revolution.",
"Two of my favorites were literary tours. Oscar Wilde visited in 1882 where he was promoting his ideas of aestheticism. He'd be walking around town in New York one week with lilies in his hair and spend the next drinking whisky with miners in Virg... | 3 | <P> Throughout her Lloyd transatlantic career "George Washington" carried some notable and interesting passengers to and from Europe. In August 1909 Sigmund Freud sailed from Bremen bound for New York on his one and only trip to the US. He was accompanied by his colleagues Carl Jung and Sándor Ferenczi. In February 1910, banker Edgar Speyer, a Privy Counsellor appointed by Edward VII of the United Kingdom, arrived for a visit to the United States. Prince Tsai Tao, the uncle of the Emperor of China, departed in one of "George Washington"'s imperial suites after a four-day visit to New York in May; the flew from the mainmast in his honor as the ship departed. In October, Henry W. Taft, brother of U.S. President William Howard Taft returned from a visit to Europe. In December, disgraced Arctic explorer Frederick Cook arrived on the liner; conflicting opinions on the veracity of his claims of reaching the North Pole nearly caused a fight to erupt on board. On the same voyage as Cook, German actor Ernst von Possart arrived for his first stage performances in New York in over 20 years.
<P> During the first half of the 19th century, most of the northern European emigrants who traveled to the New World embarked on their transatlantic voyages from Hamburg. The German shipping company Hamburg America Line, also known as the "Hamburg Amerikanische Packetfahrt Actien-Gesellschaft" (HAPAG), was involved in Atlantic transport for almost a century. The company began operations in 1847 and employed many German immigrants, many of them fleeing the revolutions of 1848–9. New York City was the most common destination for ships traveling from Hamburg, and various restaurants in the city began offering the Hamburg-style steak in order to attract German sailors. The steak frequently appeared on the menu as a "Hamburg-style American fillet", or even "beefsteak à Hambourgeoise". Early American preparations of minced beef were therefore made to fit the tastes of European immigrants, evoking memories of the port of Hamburg and the world they left behind.
<P> During the first half of the 19th century, most of the northern European emigrants who traveled to the New World embarked on their transatlantic voyages from Hamburg. The German shipping company Hamburg America Line, also known as the "Hamburg Amerikanische Packetfahrt Actien-Gesellschaft" (HAPAG), was involved in Atlantic transport for almost a century. The company began operations in 1847 and employed many German immigrants, many of them fleeing the revolutions of 1848–9. New York City was the most common destination for ships traveling from Hamburg, and various restaurants in the city began offering the Hamburg-style steak in order to attract German sailors. The steak frequently appeared on the menu as a "Hamburg-style American fillet", or even "beefsteak à Hambourgeoise". Early American preparations of minced beef were therefore made to fit the tastes of European immigrants, evoking memories of the port of Hamburg and the world they left behind.
<P> On January 5, 1818, the 424-ton transatlantic packet "James Monroe" sailed from Liverpool, opening the first regular trans-Atlantic voyage route, the Black Ball Line. Shipping on this route continued until 1878. Commercially successful transatlantic traffic led to the creation of many competing companies, including the Red Star Line in 1822. Transportation significantly contributed to the establishment of New York as one of the centers of world trade.
<P> Transatlantic crossings became faster, safer, and more reliable with the advent of steamships in the 19th century. Grand ocean liners began making regularly scheduled crossings, and soon it became a symbol of national and company status to build the largest, fastest, and most luxurious ocean liner for transatlantic crossings. The United States, United Kingdom, France, Germany and Italy built the most famous ocean liners. Examples of famous transatlantic liners include RMS "Titanic" (which made only a single voyage, cut short when it struck an iceberg) and "Queen Elizabeth 2", among others.
<P> At the start of the century 25% of the world's trade was through British ports, 18% of this being to North America. Trans-oceanic travel was important at the start of the century with transatlantic liners competing for the "Blue Riband" for the fastest crossing. A significant event was the sinking of the "Titanic" in 1912. This led to the Global Maritime Distress Safety System and to the Iceberg Patrol. The rise of air travel led to a decrease in ocean travel but then, towards the end of the century, cruise ships became important again.
<P> In May 1919, with World War I ended, the first successful transatlantic flight took place from the United States to Great Britain by three U.S. Navy "Curtiss Flyer" flying boats. They used the harbor of Horta on the Azorean island of Faial as a critical stopover in their flight. In the 1930s Pan American Airways flew the first regularly scheduled commercial airliners, "Pan-Am Clippers" (Sikorsky S-40 flying boats), from Norfolk, VA to the Azores and then on to Europe.
| question: Which famous European historical figures visited America before transatlantic travel became more common and what did they do? context: <P> Throughout her Lloyd transatlantic career "George Washington" carried some notable and interesting passengers to and from Europe. In August 1909 Sigmund Freud sailed from Bremen bound for New York on his one and only trip to the US. He was accompanied by his colleagues Carl Jung and Sándor Ferenczi. In February 1910, banker Edgar Speyer, a Privy Counsellor appointed by Edward VII of the United Kingdom, arrived for a visit to the United States. Prince Tsai Tao, the uncle of the Emperor of China, departed in one of "George Washington"'s imperial suites after a four-day visit to New York in May; the flew from the mainmast in his honor as the ship departed. In October, Henry W. Taft, brother of U.S. President William Howard Taft returned from a visit to Europe. In December, disgraced Arctic explorer Frederick Cook arrived on the liner; conflicting opinions on the veracity of his claims of reaching the North Pole nearly caused a fight to erupt on board. On the same voyage as Cook, German actor Ernst von Possart arrived for his first stage performances in New York in over 20 years.
<P> During the first half of the 19th century, most of the northern European emigrants who traveled to the New World embarked on their transatlantic voyages from Hamburg. The German shipping company Hamburg America Line, also known as the "Hamburg Amerikanische Packetfahrt Actien-Gesellschaft" (HAPAG), was involved in Atlantic transport for almost a century. The company began operations in 1847 and employed many German immigrants, many of them fleeing the revolutions of 1848–9. New York City was the most common destination for ships traveling from Hamburg, and various restaurants in the city began offering the Hamburg-style steak in order to attract German sailors. The steak frequently appeared on the menu as a "Hamburg-style American fillet", or even "beefsteak à Hambourgeoise". Early American preparations of minced beef were therefore made to fit the tastes of European immigrants, evoking memories of the port of Hamburg and the world they left behind.
<P> During the first half of the 19th century, most of the northern European emigrants who traveled to the New World embarked on their transatlantic voyages from Hamburg. The German shipping company Hamburg America Line, also known as the "Hamburg Amerikanische Packetfahrt Actien-Gesellschaft" (HAPAG), was involved in Atlantic transport for almost a century. The company began operations in 1847 and employed many German immigrants, many of them fleeing the revolutions of 1848–9. New York City was the most common destination for ships traveling from Hamburg, and various restaurants in the city began offering the Hamburg-style steak in order to attract German sailors. The steak frequently appeared on the menu as a "Hamburg-style American fillet", or even "beefsteak à Hambourgeoise". Early American preparations of minced beef were therefore made to fit the tastes of European immigrants, evoking memories of the port of Hamburg and the world they left behind.
<P> On January 5, 1818, the 424-ton transatlantic packet "James Monroe" sailed from Liverpool, opening the first regular trans-Atlantic voyage route, the Black Ball Line. Shipping on this route continued until 1878. Commercially successful transatlantic traffic led to the creation of many competing companies, including the Red Star Line in 1822. Transportation significantly contributed to the establishment of New York as one of the centers of world trade.
<P> Transatlantic crossings became faster, safer, and more reliable with the advent of steamships in the 19th century. Grand ocean liners began making regularly scheduled crossings, and soon it became a symbol of national and company status to build the largest, fastest, and most luxurious ocean liner for transatlantic crossings. The United States, United Kingdom, France, Germany and Italy built the most famous ocean liners. Examples of famous transatlantic liners include RMS "Titanic" (which made only a single voyage, cut short when it struck an iceberg) and "Queen Elizabeth 2", among others.
<P> At the start of the century 25% of the world's trade was through British ports, 18% of this being to North America. Trans-oceanic travel was important at the start of the century with transatlantic liners competing for the "Blue Riband" for the fastest crossing. A significant event was the sinking of the "Titanic" in 1912. This led to the Global Maritime Distress Safety System and to the Iceberg Patrol. The rise of air travel led to a decrease in ocean travel but then, towards the end of the century, cruise ships became important again.
<P> In May 1919, with World War I ended, the first successful transatlantic flight took place from the United States to Great Britain by three U.S. Navy "Curtiss Flyer" flying boats. They used the harbor of Horta on the Azorean island of Faial as a critical stopover in their flight. In the 1930s Pan American Airways flew the first regularly scheduled commercial airliners, "Pan-Am Clippers" (Sikorsky S-40 flying boats), from Norfolk, VA to the Azores and then on to Europe.
| answer: The Swedish aristocrat [Axel von Fersen](_URL_2_) is most famous as being the alleged lover of Marie Antoinette, but he'd also visited Voltaire in Geneva. He went off to fight in the Revolutionary War under [Rochambeau](_URL_0_), and served as the interpreter between the latter and George Washington. Washington later inducted him into the [Society of the Cincinnati](_URL_1_). He was ultimately murdered by an angry mob in Stockholm, under strange circumstances. Interesting character. Fersen had only the highest of praise for Washington, but took a rather negative view of his home state in a 1782 letter from Williamsburg: > The principal product of Virginia is tobacco; not that this State, which is the largest of the thirteen, is not capable of other cultivation, but the laziness of the inhabitants and their conceit are great obstacles to industry. It really seems as if the Virginians were another race of men; instead of occupying themselves with their farms and making them profitable, each land-owner wants to be a lord. No white man ever works, but, as in the West India islands, all the work is done by negro slaves, who are ordered by the whites, and by overseers under them. > There are, in Virginia, at least twenty negroes to one white man; so that this State has sent but few soldiers to the army. All persons who do business are regarded as inferior by the others, who say they are not gentlemen, and they do not choose to live with them socially. These Virginians have all the aristocratic instincts, and when one sees them it is hard to understand how they came to enter a general confederation and to accept a government founded on perfect equality of condition. But the same spirit which has led them to shake off the English yoke may lead them to other action of the same kind, and I should not be surprised to see Virginia detach herself, after the peace, from the other States. Neither should I be surprised to see the American government become a complete aristocracy. |
116,277 | 631sy4 | Why were schooners so popular in New England? | There are a couple reasons: * **Labor Shortages**. For much of the early history of New England labor shortages were a real and common issue. After the Great Migration of the 1630s we begin seeing large numbers of English colonists heading back to England (this is one reason slavery takes off during this period). For a larger square-rigged ship, such as an East Indiaman, you would need a crew of 10+ to man shifts during calm seas; a topsail schooner can be manned by a crew of 3, or even one in favorable weather or an emergency. * **Logistics of building**. Within the first few decades of settlement in New England timber forests near the coast were already being exhausted. Building shallow drafted vessels further inland allowed the cost of transporting timber to remain lower, and allowed these yards to remain closer to the mills. All of this helped keep costs even lower. * **Cost**. Besides the obvious savings on crew, schooners can generally be built quickly, make more efficient use of sail compared to a ketch or barque, and are relatively simple to perform maintenance on. All of this made them extremely popular for fishermen and sailors who were trying to move up financially; it was quite common for a group of sailors/fishermen to become part owners with family members or friends. * **They were practical**. Schooners are built with a shallow draft, meaning they can access shallower waters. This gave greater access to inland waterways, meaning goods could get farther inland without having to be crossloaded to smaller boats or wagons - keeping the cost of transport and thus goods lower. Despite this, they also handled fairly well in blue water, allowing them to take part in the highly profitable West Indies trade. * **Speed**. The way schooners of New England are generally rigged means that more sail is being used at different wind angles than other traditional ships. They can sail closer to the wind than other vessels their size, and in general maintain greater speed during transit. Being able to travel at a good speed for both legs of a journey meant faster exchange of goods, which again meant greater profit to the owner: goods are on market faster, crews are paid for less time at sea, and market fluctuations have less of an impact on profit. All of these reasons not only helped the schooner dominate the coastal and West Indies trade into the 19th century, but also made it a near perfect vessel for privateers looking to prey on British merchants during the American Revolution and War of 1812. If you want a little more reading on schooners, I'll suggest reading Dana Story's *Shipbuilders of Essex*, which documents the methods, families, and industry of Essex, Massachusetts, a place which dominated the North American schooner building industry for centuries. There is also a 1947 documentary of the same title that is floating around on YouTube that is worth a look. If you prefer a more hands-on approach, the Essex Shipbuilding Museum is a wealth of knowledge and houses the last traditional wooden shipyard in the United States, where NEA Fellow Harold Burnham still builds in the old way. Harold is a fantastic resource and a hell of a nice guy, and you can also sail on a couple of schooners he built: the replica privateer *Fame* out of Salem, as well as the *Thomas E. Lannon* and *Ardelle* (his own personal schooner) which sail out of Maritime Gloucester. | [
"There are a couple reasons:\n \n* **Labor Shortages**. For much of the early history of New England labor shortages were a real and common issue. After the Great Migration of the 1630s we begin seeing large numbers of English colonists heading back to England (this is one reason slavery takes off during this peri... | 1 | [] | 0 | <P> In the 1700s and 1800s in what is now New England and Atlantic Canada schooners became popular for coastal trade, requiring a smaller crew for their size compared to square rig ships, and being fast and versatile. Three-masted schooners were first introduced around 1800.
<P> Schooners were popular on both sides of the Atlantic in the late 1800s and early 1900s, but gradually gave way in Europe to the cutter. By 1910, 45 five-masted and 10 six-masted schooners had been built in Bath, Maine and other Penobscot Bay towns. The "Thomas W. Lawson" was the only seven-masted schooner built.
<P> The Royal Navy purchased the schooner on 12 October 1768 and renamed her "Halifax"; she met a need for more coastal patrol schooners to combat smuggling and deal with colonial unrest in New England. The careful record of her lines and construction by Portsmouth dockyard naval architects, and the detailed record of her naval service, make the schooner a much-studied example of early schooners in North America.
<P> Even though steamboats were used for time-critical routes such as for passengers and mail, schooners were still economical to use for bulk cargoes such as grain, wood, or iron ore. Steam tugs were introduced on the Great Lakes that could tow one or more barges. Since old schooners were available, they could be adapted to towing service with reduced crews. When winds were favorable, the schooner barge could have one or two sails rigged to save fuel in the steam tug. Eventually schooner-rigged wooden ships were purposely built for use as barges. The concept was later extended to salt-water use, with, for example, the United States Navy converting some schooners for use as barges for coal.
<P> Although highly popular in their time, schooners were replaced by more efficient sloops, yawls and ketches as sailboats, and in the freight business they were replaced by steamships, barges, and railroads.
<P> Because of rough weather and small crews, schooner barges were frequently lost from tows, set adrift during bad weather, or sunk. By the 1920s, schooner barges were no longer in practical use on the Great Lakes since steam and diesel powered ships provided better operating flexibility and safety, with lower crew costs than a tug and barges hauling the same amount of cargo.
<P> An American design that reached its zenith of size on the American Great Lakes, and also used widely in New Zealand, the schooner rigged scow was used for coastal and inland transport, colonial days to the early 1900s. Scow schooners had a broad, shallow hull, and used centreboards, bilgeboards or leeboards rather than a deep keel. The broad hull gave them stability, and the retractable foils allowed them to move even heavy loads of cargo in waters far too shallow for keelboats to enter. The squared off bow and stern allowed the maximum amount of cargo to be carried in the hull. The smallest sailing scows were sloop rigged (making them technically a "scow sloop"), but otherwise similar in design. The scow sloop eventually evolved into the "inland lake scow", a type of fast racing boat.
| question: Why were schooners so popular in New England? context: <P> In the 1700s and 1800s in what is now New England and Atlantic Canada schooners became popular for coastal trade, requiring a smaller crew for their size compared to square rig ships, and being fast and versatile. Three-masted schooners were first introduced around 1800.
<P> Schooners were popular on both sides of the Atlantic in the late 1800s and early 1900s, but gradually gave way in Europe to the cutter. By 1910, 45 five-masted and 10 six-masted schooners had been built in Bath, Maine and other Penobscot Bay towns. The "Thomas W. Lawson" was the only seven-masted schooner built.
<P> The Royal Navy purchased the schooner on 12 October 1768 and renamed her "Halifax"; she met a need for more coastal patrol schooners to combat smuggling and deal with colonial unrest in New England. The careful record of her lines and construction by Portsmouth dockyard naval architects, and the detailed record of her naval service, make the schooner a much-studied example of early schooners in North America.
<P> Even though steamboats were used for time-critical routes such as for passengers and mail, schooners were still economical to use for bulk cargoes such as grain, wood, or iron ore. Steam tugs were introduced on the Great Lakes that could tow one or more barges. Since old schooners were available, they could be adapted to towing service with reduced crews. When winds were favorable, the schooner barge could have one or two sails rigged to save fuel in the steam tug. Eventually schooner-rigged wooden ships were purposely built for use as barges. The concept was later extended to salt-water use, with, for example, the United States Navy converting some schooners for use as barges for coal.
<P> Although highly popular in their time, schooners were replaced by more efficient sloops, yawls and ketches as sailboats, and in the freight business they were replaced by steamships, barges, and railroads.
<P> Because of rough weather and small crews, schooner barges were frequently lost from tows, set adrift during bad weather, or sunk. By the 1920s, schooner barges were no longer in practical use on the Great Lakes since steam and diesel powered ships provided better operating flexibility and safety, with lower crew costs than a tug and barges hauling the same amount of cargo.
<P> An American design that reached its zenith of size on the American Great Lakes, and also used widely in New Zealand, the schooner rigged scow was used for coastal and inland transport, colonial days to the early 1900s. Scow schooners had a broad, shallow hull, and used centreboards, bilgeboards or leeboards rather than a deep keel. The broad hull gave them stability, and the retractable foils allowed them to move even heavy loads of cargo in waters far too shallow for keelboats to enter. The squared off bow and stern allowed the maximum amount of cargo to be carried in the hull. The smallest sailing scows were sloop rigged (making them technically a "scow sloop"), but otherwise similar in design. The scow sloop eventually evolved into the "inland lake scow", a type of fast racing boat.
| answer: There are a couple reasons: * **Labor Shortages**. For much of the early history of New England labor shortages were a real and common issue. After the Great Migration of the 1630s we begin seeing large numbers of English colonists heading back to England (this is one reason slavery takes off during this period). For a larger square-rigged ship, such as an East Indiaman, you would need a crew of 10+ to man shifts during calm seas; a topsail schooner can be manned by a crew of 3, or even one in favorable weather or an emergency. * **Logistics of building**. Within the first few decades of settlement in New England timber forests near the coast were already being exhausted. Building shallow drafted vessels further inland allowed the cost of transporting timber to remain lower, and allowed these yards to remain closer to the mills. All of this helped keep costs even lower. * **Cost**. Besides the obvious savings on crew, schooners can generally be built quickly, make more efficient use of sail compared to a ketch or barque, and are relatively simple to perform maintenance on. All of this made them extremely popular for fishermen and sailors who were trying to move up financially; it was quite common for a group of sailors/fishermen to become part owners with family members or friends. * **They were practical**. Schooners are built with a shallow draft, meaning they can access shallower waters. This gave greater access to inland waterways, meaning goods could get farther inland without having to be crossloaded to smaller boats or wagons - keeping the cost of transport and thus goods lower. Despite this, they also handled fairly well in blue water, allowing them to take part in the highly profitable West Indies trade. * **Speed**. The way schooners of New England are generally rigged means that more sail is being used at different wind angles than other traditional ships. They can sail closer to the wind than other vessels their size, and in general maintain greater speed during transit. Being able to travel at a good speed for both legs of a journey meant faster exchange of goods, which again meant greater profit to the owner: goods are on market faster, crews are paid for less time at sea, and market fluctuations have less of an impact on profit. All of these reasons not only helped the schooner dominate the coastal and West Indies trade into the 19th century, but also made it a near perfect vessel for privateers looking to prey on British merchants during the American Revolution and War of 1812. If you want a little more reading on schooners, I'll suggest reading Dana Story's *Shipbuilders of Essex*, which documents the methods, families, and industry of Essex, Massachusetts, a place which dominated the North American schooner building industry for centuries. There is also a 1947 documentary of the same title that is floating around on YouTube that is worth a look. If you prefer a more hands-on approach, the Essex Shipbuilding Museum is a wealth of knowledge and houses the last traditional wooden shipyard in the United States, where NEA Fellow Harold Burnham still builds in the old way. Harold is a fantastic resource and a hell of a nice guy, and you can also sail on a couple of schooners he built: the replica privateer *Fame* out of Salem, as well as the *Thomas E. Lannon* and *Ardelle* (his own personal schooner) which sail out of Maritime Gloucester. |
26,012 | 3wusfu | how important a consistent sleep schedule is? why? | For starters, you title your ELI5 wrong when sleepy. The real issue is that your body has a natural sleep rhythm and as a result you will feel sleepy at certain times more than others. This sleep rhythm is slow to change, and sleeping at times when you are normally awake will cause your sleep to be lighter and shorter than if you slept during your normal sleep hours. If you spent several days on a weird sleep schedule, even if you slept the full 8 hours, you'd still be groggy and tired from not getting a decent amount of sleep and being awake in the "wrong" hours. It might even make it impossible to sleep at some points, or cause you to oversleep depending on the hours and times of your new sleep schedule. | [
"I get some sleep every day between 7 and 8 hours. You don't want to screw yourself over by getting like 4 hours of sleep, and don't want to sleep horrifically late because y'know life.",
"Very important. Fatigue, exhaustion, stress all can happen if you're falling asleep at extreme time differences each night. T... | 19 | [
"Very important. Fatigue, exhaustion, stress all can happen if you're falling asleep at extreme time differences each night. There's also things like weight gain and depression. It has been also studied that those who don't get enough sleep may have a higher risk of developing Alzheimer's disease.\n\nConsistent sl... | 6 | <P> According to a recent study at Brigham Young University, a regular sleep schedule can make an almost immediate difference on the body's ability to metabolize fat cells. In this specific study design, 300 college aged women (19–26 years old) were followed for a week and given an activity tracker which not only monitored movements, but also sleep patterns. The study also found that participants with lower BMI had higher quality of sleep, while those with higher BMI's had lower quality of sleep. But was the reverse relationship also true?
<P> Other researchers have questioned these claims. A 2004 editorial in the journal "Sleep" stated that according to the available data, the average number of hours of sleep in a 24-hour period has not changed significantly in recent decades among adults. Furthermore, the editorial suggests that there is a range of normal sleep time required by healthy adults, and many indicators used to suggest chronic sleepiness among the population as a whole do not stand up to scientific scrutiny.
<P> Because disrupted sleep is a significant contributor to fatigue, a diagnostic evaluation considers the quality of sleep, the emotional state of the person, sleep pattern, and stress level. The amount of sleep, the hours that are set aside for sleep, and the number of times that a person awakens during the night are important. A sleep study may be ordered to rule out a sleep disorder.
<P> Human sleep needs vary by age and amongst individuals; sleep is considered to be adequate when there is no daytime sleepiness or dysfunction. Moreover, self-reported sleep duration is only moderately correlated with actual sleep time as measured by actigraphy, and those affected with sleep state misperception may typically report having slept only four hours despite having slept a full eight hours.
<P> Because hormones play a major role in energy balance and metabolism, and sleep plays a critical role in the timing and amplitude of their secretion, sleep has a sizable effect on metabolism. This could explain some of the early theories of sleep function that predicted that sleep has a metabolic regulation role.
<P> The evolution of different types of sleep patterns is influenced by a number of selective pressures, including body size, relative metabolic rate, predation, type and location of food sources, and immune function. Sleep (especially deep SWS and REM) is tricky behavior because it steeply increases predation risk. This means that, for sleep to have evolved, the functions of sleep should have provided a substantial advantage over the risk it entails. In fact, studying sleep in different organisms shows how they have balanced this risk by evolving partial sleep mechanisms or by having protective habitats. Thus, studying the evolution of sleep might give a clue not only to the developmental aspects and mechanisms, but also to an adaptive justification for sleep.
<P> BULLET::::- Sleep: There is research supporting that the more sleep an individual gets, the more likely said individual is to retain a set of information. Conversely, the research also supports that the less sleep an individual gets, the less information that individual is likely to retain.
| question: how important a consistent sleep schedule is? why? context: <P> According to a recent study at Brigham Young University, a regular sleep schedule can make an almost immediate difference on the body's ability to metabolize fat cells. In this specific study design, 300 college aged women (19–26 years old) were followed for a week and given an activity tracker which not only monitored movements, but also sleep patterns. The study also found that participants with lower BMI had higher quality of sleep, while those with higher BMI's had lower quality of sleep. But was the reverse relationship also true?
<P> Other researchers have questioned these claims. A 2004 editorial in the journal "Sleep" stated that according to the available data, the average number of hours of sleep in a 24-hour period has not changed significantly in recent decades among adults. Furthermore, the editorial suggests that there is a range of normal sleep time required by healthy adults, and many indicators used to suggest chronic sleepiness among the population as a whole do not stand up to scientific scrutiny.
<P> Because disrupted sleep is a significant contributor to fatigue, a diagnostic evaluation considers the quality of sleep, the emotional state of the person, sleep pattern, and stress level. The amount of sleep, the hours that are set aside for sleep, and the number of times that a person awakens during the night are important. A sleep study may be ordered to rule out a sleep disorder.
<P> Human sleep needs vary by age and amongst individuals; sleep is considered to be adequate when there is no daytime sleepiness or dysfunction. Moreover, self-reported sleep duration is only moderately correlated with actual sleep time as measured by actigraphy, and those affected with sleep state misperception may typically report having slept only four hours despite having slept a full eight hours.
<P> Because hormones play a major role in energy balance and metabolism, and sleep plays a critical role in the timing and amplitude of their secretion, sleep has a sizable effect on metabolism. This could explain some of the early theories of sleep function that predicted that sleep has a metabolic regulation role.
<P> The evolution of different types of sleep patterns is influenced by a number of selective pressures, including body size, relative metabolic rate, predation, type and location of food sources, and immune function. Sleep (especially deep SWS and REM) is tricky behavior because it steeply increases predation risk. This means that, for sleep to have evolved, the functions of sleep should have provided a substantial advantage over the risk it entails. In fact, studying sleep in different organisms shows how they have balanced this risk by evolving partial sleep mechanisms or by having protective habitats. Thus, studying the evolution of sleep might give a clue not only to the developmental aspects and mechanisms, but also to an adaptive justification for sleep.
<P> BULLET::::- Sleep: There is research supporting that the more sleep an individual gets, the more likely said individual is to retain a set of information. Conversely, the research also supports that the less sleep an individual gets, the less information that individual is likely to retain.
| answer: For starters, you title your ELI5 wrong when sleepy. The real issue is that your body has a natural sleep rhythm and as a result you will feel sleepy at certain times more than others. This sleep rhythm is slow to change, and sleeping at times when you are normally awake will cause your sleep to be lighter and shorter than if you slept during your normal sleep hours. If you spent several days on a weird sleep schedule, even if you slept the full 8 hours, you'd still be groggy and tired from not getting a decent amount of sleep and being awake in the "wrong" hours. It might even make it impossible to sleep at some points, or cause you to oversleep depending on the hours and times of your new sleep schedule. |
178,612 | trppd | why was greece allowed to enter the european union without meeting the required criteria? | You're referring to the euro. A minor, but quite important, difference. | [
"You're referring to the euro. A minor, but quite important, difference.",
"The cooked their books when applying to the Eurozone - after the crisis, the fraud came out, and it turned out the numbers were even worse than the fraudsters thought."
] | 2 | [
"You're referring to the euro. A minor, but quite important, difference.",
"The cooked their books when applying to the Eurozone - after the crisis, the fraud came out, and it turned out the numbers were even worse than the fraudsters thought."
] | 2 | <P> The European Communities (Greek Accession) Act 1979 (c. 50) is an Act of the Parliament of the United Kingdom which ratified and legislated for the accession of Greece to the European Communities. It received royal assent on 20 December 1979.
<P> Some economic experts argue that the best option for Greece, and the rest of the EU, would be to engineer an "orderly default", allowing Athens to withdraw simultaneously from the eurozone and reintroduce its national currency the drachma at a debased rate. If Greece were to leave the euro, the economic and political consequences would be devastating. According to Japanese financial company Nomura an exit would lead to a 60% devaluation of the new drachma. Analysts at French bank BNP Paribas added that the fallout from a Greek exit would wipe 20% off Greece's GDP, increase Greece's debt-to-GDP ratio to over 200%, and send inflation soaring to 40–50%. Also UBS warned of hyperinflation, a bank run and even "military coups and possible civil war that could afflict a departing country". Eurozone National Central Banks (NCBs) may lose up to €100bn in debt claims against the Greek national bank through the ECB's TARGET2 system. The Deutsche Bundesbank alone may have to write off €27bn.
<P> A recent issue concerning education in Greece is the institutionalisation of private universities. According to the constitution, only state-run universities operate in the country. However, in recent years many foreign private universities have established branches in Greece, offering Bachelor's-level degrees, thus creating a legal contradiction between the Greek constitution and the EU laws allowing foreign companies to operate anywhere in the Union. Additionally, every year, tens of thousands of Greek students are not accepted to the state-run university system and become "educational immigrants" to other countries' Higher Education institutions, where they move to study.
<P> Although Greece had been granted a spot in the 2008 final because of its seventh-place finish at the Eurovision Song Contest 2007, it had to compete in a semi-final for the first time since 2004 because of new rules put into effect by the European Broadcasting Union (EBU). In previous years, countries that received a top 10 placing were automatically granted a spot in the next year's final without having to compete in a semi-final, but for 2008, the EBU changed the automatic qualification regulations so that all countries except the "Big 4" (France, Germany, Spain, and the United Kingdom) and host country, would have to pass through one of two semi-finals. The EBU split up countries with a friendly voting history into separate semi-finals, to give a better chance for other countries to win. Greece and Cyprus had often been accused of favoring each other, with each awarding the other the maximum number of points (twelve) at the previous contest. On January 28, 2008, the EBU held a special draw which determined that Greece would be in "Semi-final 1" held on May 20, 2008 in Belgrade, Serbia; Cyprus was subsequently placed in the second semi-final.
<P> Further, the European Commission signaled that the referendum question, to which they would recommend a "Yes", from its viewpoint should be understood as whether or not Greece wanted to remain part of Europe and the Eurozone, which at the present state included acceptance of receiving conditional bailout help on a set of mutually negotiated and agreed terms. The Commission claimed the biggest impediment to jobs, growth and investment at the moment in Greece, was not the contents of the Institution's bailout proposals, but instead a paralyzing uncertainty caused by the Greek government's decision to cut itself off from continued bailout support and a moratorium on implementing structural reforms. According to the Commission, this uncertainty and standstill could only be removed if Greece at the negotiating table agreed on one of the latest compromise proposals which the Institutions had tabled after accommodating a range of objections and requests tabled by the Greek government. They claimed the confidence effect of voting "Yes" to the settlement of such a deal, the predictability it would bring, together with the injection of liquidity into the economy from disbursements, would restore job creation and growth to the benefit of Greece.
<P> An EU summit ended on 24 May with repeated calls for Greece to stick to the terms of the EU/IMF memorandum, if they wanted to receive more funds to tackle its debt problem and current economic crisis.
<P> Both the Greek government and the EU favour Greece staying within the Euro and believe this to be possible. However, some commentators believe an exit is likely. In February 2015, the former head of the US Federal Reserve, Alan Greenspan, said "it is just a matter of time" for Greece to withdraw from the eurozone, and former United Kingdom Chancellor of the Exchequer Kenneth Clarke described it as inevitable.
| question: why was greece allowed to enter the european union without meeting the required criteria? context: <P> The European Communities (Greek Accession) Act 1979 (c. 50) is an Act of the Parliament of the United Kingdom which ratified and legislated for the accession of Greece to the European Communities. It received royal assent on 20 December 1979.
<P> Some economic experts argue that the best option for Greece, and the rest of the EU, would be to engineer an "orderly default", allowing Athens to withdraw simultaneously from the eurozone and reintroduce its national currency the drachma at a debased rate. If Greece were to leave the euro, the economic and political consequences would be devastating. According to Japanese financial company Nomura an exit would lead to a 60% devaluation of the new drachma. Analysts at French bank BNP Paribas added that the fallout from a Greek exit would wipe 20% off Greece's GDP, increase Greece's debt-to-GDP ratio to over 200%, and send inflation soaring to 40–50%. Also UBS warned of hyperinflation, a bank run and even "military coups and possible civil war that could afflict a departing country". Eurozone National Central Banks (NCBs) may lose up to €100bn in debt claims against the Greek national bank through the ECB's TARGET2 system. The Deutsche Bundesbank alone may have to write off €27bn.
<P> A recent issue concerning education in Greece is the institutionalisation of private universities. According to the constitution, only state-run universities operate in the country. However, in recent years many foreign private universities have established branches in Greece, offering Bachelor's-level degrees, thus creating a legal contradiction between the Greek constitution and the EU laws allowing foreign companies to operate anywhere in the Union. Additionally, every year, tens of thousands of Greek students are not accepted to the state-run university system and become "educational immigrants" to other countries' Higher Education institutions, where they move to study.
<P> Although Greece had been granted a spot in the 2008 final because of its seventh-place finish at the Eurovision Song Contest 2007, it had to compete in a semi-final for the first time since 2004 because of new rules put into effect by the European Broadcasting Union (EBU). In previous years, countries that received a top 10 placing were automatically granted a spot in the next year's final without having to compete in a semi-final, but for 2008, the EBU changed the automatic qualification regulations so that all countries except the "Big 4" (France, Germany, Spain, and the United Kingdom) and host country, would have to pass through one of two semi-finals. The EBU split up countries with a friendly voting history into separate semi-finals, to give a better chance for other countries to win. Greece and Cyprus had often been accused of favoring each other, with each awarding the other the maximum number of points (twelve) at the previous contest. On January 28, 2008, the EBU held a special draw which determined that Greece would be in "Semi-final 1" held on May 20, 2008 in Belgrade, Serbia; Cyprus was subsequently placed in the second semi-final.
<P> Further, the European Commission signaled that the referendum question, to which they would recommend a "Yes", from its viewpoint should be understood as whether or not Greece wanted to remain part of Europe and the Eurozone, which at the present state included acceptance of receiving conditional bailout help on a set of mutually negotiated and agreed terms. The Commission claimed the biggest impediment to jobs, growth and investment at the moment in Greece, was not the contents of the Institution's bailout proposals, but instead a paralyzing uncertainty caused by the Greek government's decision to cut itself off from continued bailout support and a moratorium on implementing structural reforms. According to the Commission, this uncertainty and standstill could only be removed if Greece at the negotiating table agreed on one of the latest compromise proposals which the Institutions had tabled after accommodating a range of objections and requests tabled by the Greek government. They claimed the confidence effect of voting "Yes" to the settlement of such a deal, the predictability it would bring, together with the injection of liquidity into the economy from disbursements, would restore job creation and growth to the benefit of Greece.
<P> An EU summit ended on 24 May with repeated calls for Greece to stick to the terms of the EU/IMF memorandum, if they wanted to receive more funds to tackle its debt problem and current economic crisis.
<P> Both the Greek government and the EU favour Greece staying within the Euro and believe this to be possible. However, some commentators believe an exit is likely. In February 2015, the former head of the US Federal Reserve, Alan Greenspan, said "it is just a matter of time" for Greece to withdraw from the eurozone, and former United Kingdom Chancellor of the Exchequer Kenneth Clarke described it as inevitable.
| answer: You're referring to the euro. A minor, but quite important, difference. |
63,094 | 4m7wn0 | Questions about a census from 1840 | I can't give you a firm answer about the people in question, but I have spent some time working with census aggregates. So some general considerations: It might be an error. Here's how the census was done at the time: A guy would come up to your house. He had a big ledger with him full of pre-printed census forms. As I'm sure you know from looking at it, the 1840 census asked a fair number of questions. The census taker would do the tallies and mark out the form himself. If you've ever worked on a large spreadsheet, you know how easy it is to migrate a row or column or two over without realizing it. This is the most likely explanation for the incredible (not just surprisingly large, but literally unbelievable) number of free black Americans recorded as mentally ill in 1840. It was so nuts that John C. Calhoun, of all people, admitted to every error pointed out to him. However, he refused to redo the count (as Secretary of State at the time he was in charge of such things) because "everybody knew" that freedom made black people crazy. He's still John C. Calhoun. It wouldn't be a big surprise to find out that individual tallies got messed up elsewhere. However, it might be legit. The standard for who was a member of the household amounted to whoever lived there regularly on census day. If he boarded servants and apprentices they would normally be counted as part of his family. This was also true of slaves. Family, in a census context, means the same as household does to us rather than requiring a biological or married relation. The long-term residents of a boarding house and inmates at asylums were members of the owners' or manager's census families. With regard to the manager vs. owner distinction. Errors are always possible, but if the owner of the Iron Works didn't live on-site then the manager would probably be listed as the head of household. There are similar issues in regard to large plantations, which is where I learned all of this. The really big planters often had operations in multiple states. They themselves might appear in both the censuses of, say, South Carolina and Georgia due to that property. But the census worker was supposed to list the people present and/or primarily resident at the time. Legal ownership wasn't necessarily the controlling factor. Given that plantation overseers often lived on site, and plantation owners might have a separate residence where they spent most of their time (This is especially common in the South Carolina lowcountry.), the overseer is the obvious candidate for head of household when the census person comes by. He lives there and is generally in charge and responsible for the property. The way to check this for sure would be to look for a separate entry for the owner. Has he got a house in town somewhere? If there's a full accounting of him elsewhere, then I'd be pretty confident that placing the overseer as head of household for the iron works was deliberate rather than a mistake. I lean towards it being so anyway, but the separate census entry would really seal the deal. | [
"I can't give you a firm answer about the people in question, but I have spent some time working with census aggregates. So some general considerations:\n\nIt might be an error. Here's how the census was done at the time:\n\nA guy would come up to your house. He had a big ledger with him full of pre-printed census ... | 1 | [] | 0 | <P> The United States Census of 1840 was the sixth census of the United States. Conducted by the Census Office on June 1, 1840, it determined the resident population of the United States to be 17,069,453 — an increase of 32.7 percent over the 12,866,020 persons enumerated during the 1830 Census. The total population included 2,487,355 slaves. In 1840, the center of population was about 260 miles (418 km) west of Washington, near Weston, Virginia.
<P> The 1790 United States Census was the first census in the history of the United States. The population of the United States was recorded as 3,929,214 as of Census Day, August 2, 1790, as mandated by Article I, Section 2 of the United States Constitution and applicable laws.
<P> BULLET::::- Significance: The 1850 United States Census can be seen as a historical document that gives insight into the state of the nation's economy in 1850. It is much more detailed and provides more information than the 1840 census.
<P> The Seventh Census of the United States (1850) was taken 1 June 1850. This was the first year in which the census bureau attempted to count and name every member of every household, including women and children. Slaves were counted by gender and age on associated Slave Schedules, listed by their owner's name. The first slave schedules were produced in 1850. Prior to 1850, census records had recorded only the name of the head of the household and broad statistical accounting of other household members (three children under age five, one woman between the age of 35 and 40, etc.).
<P> The United States Census of 1870 was the ninth United States Census. It was conducted by the Census Bureau from June 1, 1870 to August 23, 1871. The 1870 Census was the first census to provide detailed information on the African-American population, only five years after the culmination of the Civil War when slaves were granted freedom. The total population was 38,925,598 with a resident population of 38,558,371 individuals, a 22.62% increase from 1860. ("Life of Francis Amasa Walker", Holt, p. 111: "Conditions for the work were therefore so adverse that the new superintendent (Walker), with characteristic frankness, repudiated in many instances the results of the Census, denouncing them as false or misleading and pointing out the plain reasons."; p. 113: "When the appointments of enumerators were made in 1870 the entire lot was taken from the Republican party, and most of those in the South were negroes. Some of the negroes could not read or write, and the enumeration of the Southern population was done very badly. My judgement was that the census of 1870 erred as to the colored population between 350,000 and 400,000.")
<P> The United States Census of 1790 was the first census of the whole United States. It recorded the population of the United States as of Census Day, August 2, 1790, as mandated by Article I, Section 2 of the United States Constitution and applicable laws. In the first census, the population of the United States was enumerated to be 3,929,214.
<P> His 1844 report to the American Statistical Association was presented to Congress by John Quincy Adams who notes that it demonstrates "a multitude of gross and important errors in the printed census of 1840."
| question: Questions about a census from 1840 context: <P> The United States Census of 1840 was the sixth census of the United States. Conducted by the Census Office on June 1, 1840, it determined the resident population of the United States to be 17,069,453 — an increase of 32.7 percent over the 12,866,020 persons enumerated during the 1830 Census. The total population included 2,487,355 slaves. In 1840, the center of population was about 260 miles (418 km) west of Washington, near Weston, Virginia.
<P> The 1790 United States Census was the first census in the history of the United States. The population of the United States was recorded as 3,929,214 as of Census Day, August 2, 1790, as mandated by Article I, Section 2 of the United States Constitution and applicable laws.
<P> BULLET::::- Significance: The 1850 United States Census can be seen as a historical document that gives insight into the state of the nation's economy in 1850. It is much more detailed and provides more information than the 1840 census.
<P> The Seventh Census of the United States (1850) was taken 1 June 1850. This was the first year in which the census bureau attempted to count and name every member of every household, including women and children. Slaves were counted by gender and age on associated Slave Schedules, listed by their owner's name. The first slave schedules were produced in 1850. Prior to 1850, census records had recorded only the name of the head of the household and broad statistical accounting of other household members (three children under age five, one woman between the age of 35 and 40, etc.).
<P> The United States Census of 1870 was the ninth United States Census. It was conducted by the Census Bureau from June 1, 1870 to August 23, 1871. The 1870 Census was the first census to provide detailed information on the African-American population, only five years after the culmination of the Civil War when slaves were granted freedom. The total population was 38,925,598 with a resident population of 38,558,371 individuals, a 22.62% increase from 1860. ("Life of Francis Amasa Walker", Holt, p. 111: "Conditions for the work were therefore so adverse that the new superintendent (Walker), with characteristic frankness, repudiated in many instances the results of the Census, denouncing them as false or misleading and pointing out the plain reasons."; p. 113: "When the appointments of enumerators were made in 1870 the entire lot was taken from the Republican party, and most of those in the South were negroes. Some of the negroes could not read or write, and the enumeration of the Southern population was done very badly. My judgement was that the census of 1870 erred as to the colored population between 350,000 and 400,000.")
<P> The United States Census of 1790 was the first census of the whole United States. It recorded the population of the United States as of Census Day, August 2, 1790, as mandated by Article I, Section 2 of the United States Constitution and applicable laws. In the first census, the population of the United States was enumerated to be 3,929,214.
<P> His 1844 report to the American Statistical Association was presented to Congress by John Quincy Adams who notes that it demonstrates "a multitude of gross and important errors in the printed census of 1840."
| answer: I can't give you a firm answer about the people in question, but I have spent some time working with census aggregates. So some general considerations:It might be an error. Here's how the census was done at the time:A guy would come up to your house. He had a big ledger with him full of pre-printed census forms. As I'm sure you know from looking at it, the 1840 census asked a fair number of questions. The census taker would do the tallies and mark out the form himself. If you've ever worked on a large spreadsheet, you know how easy it is to migrate a row or column or two over without realizing it. This is the most likely explanation for the incredible (not just surprisingly large, but literally unbelievable) number of free black Americans recorded as mentally ill in 1840. It was so nuts that John C. Calhoun, of all people, admitted to every error pointed out to him. However, he refused to redo the count (as Secretary of State at the time he was in charge of such things.) because "everybody knew" that freedom made black people crazy. He's still John C. Calhoun. It wouldn't be a big surprise to find out that individual tallies got messed up elsewhere.However, it might be legit. The standard for who was a member of the household amounted to whoever lived there regularly on census day. If he boarded servants and apprentices they would normally be counted as part of his family. This was also true of slaves. Family, in a census context, means the same as household does to us rather than requiring a biological or married relation. The longterm residents of a boarding house and inmates at asylums were members of the owners' or manager's census families.With regard to the manager vs. owner distinction. Errors are always possible, but if the owner of the Iron Works didn't live on-site then the manager would probably be listed as the head of household. There are similar issues in regard to large plantations, which is where I learned all of this. The really big planters often had operations in multiple states. They themselves might appear in both the censuses of, say, South Carolina and Georgia due to that property. But the census worker was supposed to list the people present and/or primarily resident at the time. Legal ownership wasn't necessarily the controlling factor. Given that plantation overseers often lived on site, and plantation owners might have a separate residence where they spent most of their time (This is especially common in the South Carolina lowcountry.), the overseer is the obvious candidate for head of household when the census person comes by. He lives there and is generally in charge and responsible for the property. The way to check this for sure would be to look for a separate entry for the owner. Has he got a house in town somewhere? If there's a full accounting of him elsewhere, then I'd be pretty confident that placing the overseer as head of household for the iron works was deliberate rather than a mistake. I lean towards it being so anyway, but the separate census entry would really seal the deal. |
41,172 | c30pi1 | what is the electric universe theory? | It's basically pseudoscience that postulates that electricity describes most of the features of the universe. It's generally said to be at odds with modern Physics though, and tends to be popular among the conspiracy types. | [
"It's basically pseudoscience that postulates that electricity describes most of the features of the universe. It's generally said to be at odds with modern Physics though, and tends to be popular among the conspiracy types."
] | 1 | [
"It's basically pseudoscience that postulates that electricity describes most of the features of the universe. It's generally said to be at odds with modern Physics though, and tends to be popular among the conspiracy types."
] | 1 | <P> Electric Universe is a psychedelic trance project from Germany formed by Boris Blenn and Michael Dressler in 1991. Their first EP release, "Solar Energy" was an instant hit with the underground trance scene and is often credited with putting the Spirit Zone Recordings label at the forefront of psychedelic trance early on. According to The Sofia Echo, they were "hailed in the 1990s as one of the top psychedelic trance projects to come out of Germany".
<P> Electric Universe is the thirteenth studio album by Earth, Wind & Fire, released in November 1983 on Columbia Records. The album rose to Nos. 8 & 40 on the Billboard Top Soul Albums and Billboard 200 charts respectively.
<P> Electroweak theory is very important for modern cosmology, particularly on how the universe evolved. This is because shortly after the Big Bang, when the temperature was approximately above 10^15 K, the electromagnetic force and the weak force were merged into a combined electroweak force.
<P> Patricius' theory of the universe is that, from God there emanated Light which extends throughout space and is the explanation of all development. This Light is not corporeal and yet is the fundamental reality of things. From Light came Heat and Fluidity; these three together with Space make up the elements out of which all things are constructed. This cosmic theory is a curious combination of materialistic and abstract ideas; the influence of his master Bernardino Telesio, generally predominant, is not strong enough to overcome his inherent disbelief in the adequacy of purely scientific explanation.
<P> Cosmologists cannot explain all cosmic phenomena exactly, such as those related to the accelerating expansion of the universe, using conventional forms of energy. Instead, cosmologists propose a new form of energy called dark energy that permeates all space. One hypothesis is that dark energy is just the vacuum energy, a component of empty space that is associated with the virtual particles that exist due to the uncertainty principle.
<P> The cosmotheistic hypothesis stipulates that the Big Bang that created our cosmos was a local event in an infinite universe — a universe that contains an infinite number of cosmoi (what is now being speculated by theoretical physicists as the multiverse). It proposes that this cosmos is an evolutionary entity, in a constant state of ever growing complexity — that eventually has produced conscious life. It posits that due to the evolutionary nature of cosmic development, now being revealed by the "new physics" and "new cosmology", it is statistically certain that huge numbers of conscious life-forms (equivalent in self-awareness to human beings) have arisen throughout the cosmos; as if conscious life has been sown (as a cosmic genome) throughout the cosmos by the very process of cosmic evolution.
<P> Wheeler speculated that reality is created by observers in the universe. "How does something arise from nothing?", he asked about the existence of space and time. He also coined the term "Participatory Anthropic Principle" (PAP), a version of a Strong Anthropic Principle.
| question: what is the electric universe theory? context: <P> Electric Universe is a psychedelic trance project from Germany formed by Boris Blenn and Michael Dressler in 1991. Their first EP release, "Solar Energy" was an instant hit with the underground trance scene and is often credited with putting the Spirit Zone Recordings label at the forefront of psychedelic trance early on. According to The Sofia Echo, they were "hailed in the 1990s as one of the top psychedelic trance projects to come out of Germany".
<P> Electric Universe is the thirteenth studio album by Earth, Wind & Fire, released in November 1983 on Columbia Records. The album rose to Nos. 8 & 40 on the Billboard Top Soul Albums and Billboard 200 charts respectively.
<P> Electroweak theory is very important for modern cosmology, particularly on how the universe evolved. This is because shortly after the Big Bang, when the temperature was approximately above 10^15 K, the electromagnetic force and the weak force were merged into a combined electroweak force.
<P> Patricius' theory of the universe is that, from God there emanated Light which extends throughout space and is the explanation of all development. This Light is not corporeal and yet is the fundamental reality of things. From Light came Heat and Fluidity; these three together with Space make up the elements out of which all things are constructed. This cosmic theory is a curious combination of materialistic and abstract ideas; the influence of his master Bernardino Telesio, generally predominant, is not strong enough to overcome his inherent disbelief in the adequacy of purely scientific explanation.
<P> Cosmologists cannot explain all cosmic phenomena exactly, such as those related to the accelerating expansion of the universe, using conventional forms of energy. Instead, cosmologists propose a new form of energy called dark energy that permeates all space. One hypothesis is that dark energy is just the vacuum energy, a component of empty space that is associated with the virtual particles that exist due to the uncertainty principle.
<P> The cosmotheistic hypothesis stipulates that the Big Bang that created our cosmos was a local event in an infinite universe — a universe that contains an infinite number of cosmoi (what is now being speculated by theoretical physicists as the multiverse). It proposes that this cosmos is an evolutionary entity, in a constant state of ever growing complexity — that eventually has produced conscious life. It posits that due to the evolutionary nature of cosmic development, now being revealed by the "new physics" and "new cosmology", it is statistically certain that huge numbers of conscious life-forms (equivalent in self-awareness to human beings) have arisen throughout the cosmos; as if conscious life has been sown (as a cosmic genome) throughout the cosmos by the very process of cosmic evolution.
<P> Wheeler speculated that reality is created by observers in the universe. "How does something arise from nothing?", he asked about the existence of space and time. He also coined the term "Participatory Anthropic Principle" (PAP), a version of a Strong Anthropic Principle.
| answer: It's basically pseudoscience that postulates that electricity describes most of the features of the universe. It's generally said to be at odds with modern Physics though, and tends to be popular among the conspiracy types. |
15,670 | a3561d | how do you refinance a car? (usa) | Just go talk to a loan officer at your bank and they will help you understand your options. If your current loan is from a different bank/lender, you will have to call them and have them send you a "10 day pay off".. which is just a statement of the amount your new loan would have to be in order to pay off the previous balance including interest for the next 10 days. | [
"Just go talk to a loan officer at your bank and they will help you understand your options. If your current loan is from a different bank/lender, you will have to call them and have them send you a \"10 day pay off\".. which is just a statement of the amount your new loan would have to be in order to pay off the p... | 1 | [] | 0 | <P> Many U.S. states have enacted additional laws that apply specifically to the repossession of purchased and leased automobiles, and which are intended to afford additional consumer protections. Typical requirements include mandating that auto lenders provide consumers with opportunities to either "reinstate" or "redeem" their purchase or lease contracts after their vehicles have been repossessed. A "reinstatement" entails a consumer paying all of his or her past due amounts plus the creditor’s repossession expenses, and then reacquiring the automobile as if the repossession had not occurred. A "redemption" entails the consumer paying off the entire contract balance and then being given ownership of the vehicle free and clear of any contract obligations.
<P> Rebuilding is an old name for remanufacturing. It is still widely used by automotive industry. For example, the Automotive Parts Remanufacturers Association (APRA), have the new term in their name, but to be safe on their own website use the combined term 'rebuild/remanufacture'.
<P> A complete auto restoration could include total removal of the body, engine, driveline components and related parts from the car, total disassembly, cleaning and repairing of each of the major parts and its components, replacing broken, damaged or worn parts and complete re-assembly and testing. As part of the restoration, each part must be thoroughly examined, cleaned and repaired, or if repair of the individual part would be too costly, replaced (assuming correct, quality parts are available) as necessary to return the entire automobile to "as first sold" condition.
<P> Restoration of a vehicle refers to the process of restoring a vehicle to its original condition. Neither updating nor modifying are considered part of the restoration process. A restored car is one that has had all of its systems and/or parts restored to original condition. Selectively restoring parts or systems is referred to as refurbishing. It does not qualify as restoration. Rebuilding an engine may restore that engine, but it does not restore the car, or entitle the car to be called a restoration.
<P> Though automotive restoration is commonly defined as the reconditioning of a vehicle "from original condition in an effort to return it to like-new or better condition," There are many styles of which a vehicle can be restored, any of which can be performed at the discretion, desire, or taste of a vehicle owner or restorer.
<P> There are many restoration facilities in existence offering a broad range and quality of services. Some businesses focus their work on only specific components, such as engines, gas tanks, clocks, or chromed parts. Others perform complete restoration or remanufacture of virtually any car including any of its components. This includes restoration to a finished factory level or better-than-factory condition. Some businesses have the capacity to restore and fabricate all components in-house coupled with the ability to recreate a car no matter what state of decay it is in (or literally how much of the car remains, sometimes as little as a single fender remains and nothing else). There are also restoration services provided by the original manufacturers, such as Ferrari and Aston Martin.
<P> The preservation and restoration of automobiles is the mechanical or cosmetic repair of cars. For example, the guidelines of the Antique Automobile Club of America (AACA) are to "evaluate an antique vehicle, which has been restored to the same state as the dealer could have prepared the vehicle for delivery to the customer."
| question: how do you refinance a car? (usa) context: <P> Many U.S. states have enacted additional laws that apply specifically to the repossession of purchased and leased automobiles, and which are intended to afford additional consumer protections. Typical requirements include mandating that auto lenders provide consumers with opportunities to either "reinstate" or "redeem" their purchase or lease contracts after their vehicles have been repossessed. A "reinstatement" entails a consumer paying all of his or her past due amounts plus the creditor’s repossession expenses, and then reacquiring the automobile as if the repossession had not occurred. A "redemption" entails the consumer paying off the entire contract balance and then being given ownership of the vehicle free and clear of any contract obligations.
<P> Rebuilding is an old name for remanufacturing. It is still widely used by automotive industry. For example, the Automotive Parts Remanufacturers Association (APRA), have the new term in their name, but to be safe on their own website use the combined term 'rebuild/remanufacture'.
<P> A complete auto restoration could include total removal of the body, engine, driveline components and related parts from the car, total disassembly, cleaning and repairing of each of the major parts and its components, replacing broken, damaged or worn parts and complete re-assembly and testing. As part of the restoration, each part must be thoroughly examined, cleaned and repaired, or if repair of the individual part would be too costly, replaced (assuming correct, quality parts are available) as necessary to return the entire automobile to "as first sold" condition.
<P> Restoration of a vehicle refers to the process of restoring a vehicle to its original condition. Neither updating nor modifying are considered part of the restoration process. A restored car is one that has had all of its systems and/or parts restored to original condition. Selectively restoring parts or systems is referred to as refurbishing. It does not qualify as restoration. Rebuilding an engine may restore that engine, but it does not restore the car, or entitle the car to be called a restoration.
<P> Though automotive restoration is commonly defined as the reconditioning of a vehicle "from original condition in an effort to return it to like-new or better condition," There are many styles of which a vehicle can be restored, any of which can be performed at the discretion, desire, or taste of a vehicle owner or restorer.
<P> There are many restoration facilities in existence offering a broad range and quality of services. Some businesses focus their work on only specific components, such as engines, gas tanks, clocks, or chromed parts. Others perform complete restoration or remanufacture of virtually any car including any of its components. This includes restoration to a finished factory level or better-than-factory condition. Some businesses have the capacity to restore and fabricate all components in-house coupled with the ability to recreate a car no matter what state of decay it is in (or literally how much of the car remains, sometimes as little as a single fender remains and nothing else). There are also restoration services provided by the original manufacturers, such as Ferrari and Aston Martin.
<P> The preservation and restoration of automobiles is the mechanical or cosmetic repair of cars. For example, the guidelines of the Antique Automobile Club of America (AACA) are to "evaluate an antique vehicle, which has been restored to the same state as the dealer could have prepared the vehicle for delivery to the customer."
| answer: Just go talk to a loan officer at your bank and they will help you understand your options. If your current loan is from a different bank/lender, you will have to call them and have them send you a "10 day pay off".. which is just a statement of the amount your new loan would have to be in order to pay off the previous balance including interest for the next 10 days. |
102,318 | 3zaoco | Glider units in world War 2 | Glider pilots were trained to fly gliders, but also to fight as infantry (both US and British pilots had to qualify to some extent with all infantry weapons) alongside the troops they had landed for as long as was necessary (a few days, even as long as a week or more) until they could be evacuated to the rear. Many glider pilots were killed. Unlike normal infantry, glider pilots were more expensive (and took more time) to train, so they needed to be protected and reused if necessary. In the case of Normandy, the order to return to the beaches (for British pilots) for evacuation was given on D+2. During Operation Market Garden, glider pilots fought for the whole period of the operation.Sources:*They Flew into Battle on Silent Wings: World War II Glider Pilots of the US Army Air Force*, by Maj. Leon B Spencer, USAF (ret.)_URL_0_United Kingdom- Glider Pilot Regiment_URL_1_ | [
"Glider pilots were trained to fly gliders, but also to fight as infantry (both US and British pilots had to qualify to some extent with all infantry weapons) alongside the troops they had landed for as long as was necessary (a few days, even as long as a week or more) until they could be evacuated to the rear. Man... | 1 | [] | 0 | <P> The Glider Pilot Regiment was a British airborne forces unit of the Second World War, which was responsible for crewing the British Army's military gliders and saw action in the European theatre in support of Allied airborne operations. Established in 1942, the regiment was disbanded in 1957.
<P> The 319th and its sister GFAB, the 320th, are the only two glider field artillery units to make two glider assaults behind enemy lines during the Second World War; at St. Mere Eglise on D-Day and at Nijmegen in the Netherlands. The 319th lost approximately 40% of its strength due to death, wounds and injuries sustained by glider crashes and enemy fire on the night of 5–6 June 1944 during the Normandy landings.
<P> Military gliders were used during World War II for carrying troops (glider infantry) and heavy equipment to combat zones. The gliders were towed into the air and most of the way to their target by military transport planes, e.g. C-47 Dakota, or by bombers that had been relegated to secondary activities, e.g. Short Stirling. Once released from the tow near the target, they landed as close to the target as possible. The advantages over paratroopers were that heavy equipment could be landed and that the troops were quickly assembled rather than being dispersed over a drop zone. The gliders were treated as disposable, leading to construction from common and inexpensive materials such as wood, though a few were retrieved and re-used. By the time of the Korean War, transport aircraft had also become larger and more efficient so that even light tanks could be dropped by parachute, causing gliders to fall out of favor.
<P> The 193rd Glider Infantry Regiment was an airborne infantry regiment of the United States Army during World War II. It was part of the 17th Airborne Division and fought during the Battle of the Bulge.
<P> The 194th Glider Infantry Regiment was a glider infantry regiment of the United States Army that served in World War II. It was a part of the 17th Airborne Division, and saw active combat service until its deactivation in 1945.
<P> "Glider Pilot" wings were awarded to soldiers who completed training as pilots of military gliders (MOS 1026). The wings were issued initially during the Second World War. The final class of Glider Pilots ever to be trained received their wings in January 1945 at South Plains Army Airfield, near Lubbock, Texas. These wings should not be confused with the Glider Badge which was created in 1944 to recognize glider-borne ground troops (mostly Infantry, but also various supporting arms) of U.S. Airborne Divisions, who rode into combat as passengers.
<P> Military gliders were used mainly during the Second World War for carrying troops and heavy equipment (see Glider infantry) to a combat zone. These aircraft were towed into the air and most of the way to their target by military transport planes, e.g. C-47 Dakota, or by bombers that had been relegated to secondary activities, e.g. Short Stirling. Once released from the tow near the target, they landed as close to the target as possible. Advantages over paratroopers were that heavy equipment could be landed and that the troops were quickly assembled rather than being dispersed over a drop zone. The gliders were treated as disposable leading to construction from common and inexpensive materials such as wood, though a few were retrieved and re-used. By the time of the Korean War, transport aircraft had also become larger and more efficient so that even light tanks could be dropped by parachute, causing gliders to fall out of favor.
| question: Glider units in world War 2 context: <P> The Glider Pilot Regiment was a British airborne forces unit of the Second World War, which was responsible for crewing the British Army's military gliders and saw action in the European theatre in support of Allied airborne operations. Established in 1942, the regiment was disbanded in 1957.
<P> The 319th and its sister GFAB, the 320th, are the only two glider field artillery units to make two glider assaults behind enemy lines during the Second World War; at St. Mere Eglise on D-Day and at Nijmegen in the Netherlands. The 319th lost approximately 40% of its strength due to death, wounds and injuries sustained by glider crashes and enemy fire on the night of 5–6 June 1944 during the Normandy landings.
<P> Military gliders were used during World War II for carrying troops (glider infantry) and heavy equipment to combat zones. The gliders were towed into the air and most of the way to their target by military transport planes, e.g. C-47 Dakota, or by bombers that had been relegated to secondary activities, e.g. Short Stirling. Once released from the tow near the target, they landed as close to the target as possible. The advantages over paratroopers were that heavy equipment could be landed and that the troops were quickly assembled rather than being dispersed over a drop zone. The gliders were treated as disposable, leading to construction from common and inexpensive materials such as wood, though a few were retrieved and re-used. By the time of the Korean War, transport aircraft had also become larger and more efficient so that even light tanks could be dropped by parachute, causing gliders to fall out of favor.
<P> The 193rd Glider Infantry Regiment was an airborne infantry regiment of the United States Army during World War II. It was part of the 17th Airborne Division and fought during the Battle of the Bulge.
<P> The 194th Glider Infantry Regiment was a glider infantry regiment of the United States Army that served in World War II. It was a part of the 17th Airborne Division, and saw active combat service until its deactivation in 1945.
<P> "Glider Pilot" wings were awarded to soldiers who completed training as pilots of military gliders (MOS 1026). The wings were issued initially during the Second World War. The final class of Glider Pilots ever to be trained received their wings in January 1945 at South Plains Army Airfield, near Lubbock, Texas. These wings should not be confused with the Glider Badge which was created in 1944 to recognize glider-borne ground troops (mostly Infantry, but also various supporting arms) of U.S. Airborne Divisions, who rode into combat as passengers.
<P> Military gliders were used mainly during the Second World War for carrying troops and heavy equipment (see Glider infantry) to a combat zone. These aircraft were towed into the air and most of the way to their target by military transport planes, e.g. C-47 Dakota, or by bombers that had been relegated to secondary activities, e.g. Short Stirling. Once released from the tow near the target, they landed as close to the target as possible. Advantages over paratroopers were that heavy equipment could be landed and that the troops were quickly assembled rather than being dispersed over a drop zone. The gliders were treated as disposable leading to construction from common and inexpensive materials such as wood, though a few were retrieved and re-used. By the time of the Korean War, transport aircraft had also become larger and more efficient so that even light tanks could be dropped by parachute, causing gliders to fall out of favor.
| answer: Glider pilots were trained to fly gliders, but also to fight as infantry (both US and British pilots had to qualify to some extent with all infantry weapons) alongside the troops they had landed for as long as was necessary (a few days, even as long as a week or more) until they could be evacuated to the rear. Many glider pilots were killed. Unlike normal infantry, glider pilots were more expensive (and took more time) to train, so they needed to be protected and reused if necessary. In the case of Normandy, the order to return to the beaches (for British pilots) for evacuation was given on D+2. During Operation Market Garden, glider pilots fought for the whole period of the operation.Sources:*They Flew into Battle on Silent Wings: World War II Glider Pilots of the US Army Air Force*, by Maj. Leon B Spencer, USAF (ret.)_URL_0_United Kingdom- Glider Pilot Regiment_URL_1_ |
129,256 | af4b7g | How is diphenhydramine (benadryl) both an antihistamine and a sleep aid? | Diphenhydramine is an indiscriminate antihistamine, in that it blocks any histamine receptor it sees. It can cross the blood-brain barrier and inhibit one of the other functions of histamines — that is, the pivotal role they play in regulating sleep and wakefulness. This disruption of the action of histamines in the brain results in drowsiness. | [
"Diphenhydramine is an indiscriminate antihistamine, in that it blocks any histamine receptor it sees. It can cross the blood-brain barrier and inhibit one of the other functions of histamines — that is, the pivotal role they play in regulating sleep and wakefulness. This disruption of the action of histamines in t... | 2 | [
"Diphenhydramine is an indiscriminate antihistamine, in that it blocks any histamine receptor it sees. It can cross the blood-brain barrier and inhibit one of the other functions of histamines — that is, the pivotal role they play in regulating sleep and wakefulness. This disruption of the action of histamines in t... | 1 | <P> Because of its sedative properties, diphenhydramine is widely used in nonprescription sleep aids for insomnia. The drug is an ingredient in several products sold as sleep aids, either alone or in combination with other ingredients such as acetaminophen (paracetamol) in Tylenol PM or ibuprofen in Advil PM. Diphenhydramine can cause minor psychological dependence. Diphenhydramine can cause sedation and has also been used as an anxiolytic.
<P> Diphenhydramine is a sedating H1 antagonist. Diphenhydramine works by blocking the effects of histamine and causes drowsiness. The combination of the active ingredients in Panadol night can be used to relieve mild to moderate pain such as headaches, backache or period pain that is causing difficulty getting to sleep.
<P> In addition to a sleep aid, TIK-301 has been found useful in treating other disorders. Because of its affinity for serotonin receptors, it has potential to serve as a possible antidepressant drug, similar to agomelatine. TIK-301 has also been considered for use in patients with mild cognitive impairment (MCI) because of sleep disorder prevalence. TIK-301, as well as other melatonin agonists, has been reported to have potential in preventing or treating urinary incontinence, but have not been tested in humans for this purpose.
<P> Selenium disulfide, also known as selenium sulfide, is a medication used to treat pityriasis versicolor, seborrhoeic dermatitis, and dandruff. It is applied to the affected area as a lotion or shampoo. Dandruff frequently returns if treatment is stopped.
<P> As an alternative to taking prescription drugs, some evidence shows that an average person seeking short-term help may find relief by taking over-the-counter antihistamines such as diphenhydramine or doxylamine. Diphenhydramine and doxylamine are widely used in nonprescription sleep aids. They are the most effective over-the-counter sedatives currently available, at least in much of Europe, Canada, Australia, and the United States, and are more sedating than some prescription hypnotics. Antihistamine effectiveness for sleep may decrease over time, and anticholinergic side-effects (such as dry mouth) may also be a drawback with these particular drugs. While addiction does not seem to be an issue with this class of drugs, they can induce dependence and rebound effects upon abrupt cessation of use. However, people whose insomnia is caused by restless legs syndrome may have worsened symptoms with antihistamines.
<P> Naproxen/diphenhydramine (trade name Aleve PM) is a formulation of naproxen with diphenhydramine marketed by Bayer Healthcare. It is made as an over-the-counter drug. The intended use of the drug is to relieve pain specifically when going to sleep.
<P> Stimulants, which inhibit sleep, include caffeine, an adenosine antagonist; amphetamine, MDMA, empathogen-entactogens, and related drugs; cocaine, which can alter the circadian rhythm, and methylphenidate, which acts similarly; and other analeptic drugs like modafinil and armodafinil with poorly understood mechanisms.
| question: How is diphenhydramine (benadryl) both an antihistamine and a sleep aid? context: <P> Because of its sedative properties, diphenhydramine is widely used in nonprescription sleep aids for insomnia. The drug is an ingredient in several products sold as sleep aids, either alone or in combination with other ingredients such as acetaminophen (paracetamol) in Tylenol PM or ibuprofen in Advil PM. Diphenhydramine can cause minor psychological dependence. Diphenhydramine can cause sedation and has also been used as an anxiolytic.
<P> Diphenhydramine is a sedating H1 antagonist. Diphenhydramine works by blocking the effects of histamine and causes drowsiness. The combination of the active ingredients in Panadol night can be used to relieve mild to moderate pain such as headaches, backache or period pain that is causing difficulty getting to sleep.
<P> In addition to a sleep aid, TIK-301 has been found useful in treating other disorders. Because of its affinity for serotonin receptors, it has potential to serve as a possible antidepressant drug, similar to agomelatine. TIK-301 has also been considered for use in patients with mild cognitive impairment (MCI) because of sleep disorder prevalence. TIK-301, as well as other melatonin agonists, has been reported to have potential in preventing or treating urinary incontinence, but have not been tested in humans for this purpose.
<P> Selenium disulfide, also known as selenium sulfide, is a medication used to treat pityriasis versicolor, seborrhoeic dermatitis, and dandruff. It is applied to the affected area as a lotion or shampoo. Dandruff frequently returns if treatment is stopped.
<P> As an alternative to taking prescription drugs, some evidence shows that an average person seeking short-term help may find relief by taking over-the-counter antihistamines such as diphenhydramine or doxylamine. Diphenhydramine and doxylamine are widely used in nonprescription sleep aids. They are the most effective over-the-counter sedatives currently available, at least in much of Europe, Canada, Australia, and the United States, and are more sedating than some prescription hypnotics. Antihistamine effectiveness for sleep may decrease over time, and anticholinergic side-effects (such as dry mouth) may also be a drawback with these particular drugs. While addiction does not seem to be an issue with this class of drugs, they can induce dependence and rebound effects upon abrupt cessation of use. However, people whose insomnia is caused by restless legs syndrome may have worsened symptoms with antihistamines.
<P> Naproxen/diphenhydramine (trade name Aleve PM) is a formulation of naproxen with diphenhydramine marketed by Bayer Healthcare. It is made as an over-the-counter drug. The intended use of the drug is to relieve pain specifically when going to sleep.
<P> Stimulants, which inhibit sleep, include caffeine, an adenosine antagonist; amphetamine, MDMA, empathogen-entactogens, and related drugs; cocaine, which can alter the circadian rhythm, and methylphenidate, which acts similarly; and other analeptic drugs like modafinil and armodafinil with poorly understood mechanisms.
| answer: Diphenhydramine is an indiscriminate antihistamine, in that it blocks any histamine receptor it sees. It can cross the blood-brain barrier and inhibit one of the other functions of histamines — that is, the pivotal role they play in regulating sleep and wakefulness. This disruption of the action of histamines in the brain results in drowsiness. |
158,387 | 3jkw92 | how come a burglar, who gets hurt while robbing a house, can sue the owner and win? | Katko v. Briney is probably what you're referring to. In that case it was because the home in question was not occupied and the would-be burglar was attacked by a booby trap. The use of booby traps is illegal because they attack people indiscriminately and do not use force to protect human life, only property.Life, even a burglar's life, is more valuable than a vacated property according to law. This case is famous because the press misrepresented the facts and a lot of people were upset by the non-factual scenario. | [
"First, this is actually very rare. It is extremely uncommon that a burglar is actually able to sue for injuries they suffer. (partly because of the law, and partly because juries aren't especially sympathetic to burglars) \n\nNo \"unknown and unforeseen.\" trespasser can ever sue for damages based on some accident... | 86 | [
"First, this is actually very rare. It is extremely uncommon that a burglar is actually able to sue for injuries they suffer. (partly because of the law, and partly because juries aren't especially sympathetic to burglars) \n\nNo \"unknown and unforeseen.\" trespasser can ever sue for damages based on some accident... | 39 | <P> Burglary can also be committed in "part of a building" and in R v Walkington 1979 1 WLR 1169 the defendant had entered a large shop during trading hours but went behind a counter and put his hand in an empty till. The court held that he had entered that part of the building normally reserved for staff as a trespasser with intention to steal money and was therefore guilty of burglary.
<P> A person commits the offense of burglary when, without authority and with the intent to commit a felony or theft therein, they enter or remain within the dwelling house of another or any building, vehicle, railroad car, watercraft, or other such structure designed for use as the dwelling of another or enter or remain within any other building, railroad car, aircraft, or any room or any part thereof. A person convicted of the offense of burglary, for the first such offense, shall be punished by imprisonment for not less than one nor more than 20 years. For the purposes of this Code section, the term "railroad car" shall also include trailers on flatcars, containers on flatcars, trailers on railroad property, or containers on railroad property. O.C.G.A. § 16-7-1
<P> Robbery occurs if an aggressor forcibly snatched a mobile phone or if they used a knife to make an implied threat of violence to the holder and then took the phone. The person being threatened does not need to be the owner of the property. It is not necessary that the victim was actually frightened, but the defendant must have put or sought to put the victim or some other person in fear of immediate force.
<P> In Wisconsin, burglary is committed by one who forcibly enters a building without consent and with intent to steal or to commit another felony. Burglary may also be committed by entry to a locked truck, car or trailer or a ship. The crime of burglary is treated as being more serious if the burglar is armed with a dangerous weapon when the burglary is committed or arms himself/herself during the commission of the burglary.
<P> In the United States, burglary is prosecuted as a felony or misdemeanor and involves trespassing and theft, entering a building or automobile, or loitering unlawfully with intent to commit any crime, not necessarily a theft--for example, vandalism. Even if nothing is stolen in a burglary, the act is a statutory offense. Buildings can include hangars, sheds, barns, and coops; burglary of boats, aircraft, trucks, and railway cars is possible. Burglary may be an element in crimes involving rape, arson, kidnapping, identity theft, or violation of civil rights; indeed, the "plumbers" of the Watergate scandal were technically burglars. As with all legal definitions in the U.S., the foregoing description may not be applicable in every jurisdiction, since there are 50 separate state criminal codes, plus federal and territorial codes in force.
<P> If the claimant is involved in wrongdoing at the time the alleged negligence occurred, this may extinguish or reduce the defendant's liability. The legal maxim "ex turpi causa non oritur actio", Latin for "no right of action arises from a despicable cause". Thus, if a burglar is verbally challenged by the property owner and sustains injury when jumping from a second story window to escape apprehension, there is no cause of action against the property owner even though that injury would not have been sustained but for the property owner's intervention.
<P> Under Florida State Statutes, "burglary" occurs when a person "enter[s] a dwelling, a structure, or a conveyance with the intent to commit an offense therein, unless the premises are at the time open to the public or the defendant is licensed or invited to enter." Depending on the circumstances of the crime, burglary can be classified as third, second, or first-degree felonies, with maximum sentences of five years, fifteen years, and life, respectively.
| question: how come a burglar, who gets hurt while robbing a house, can sue the owner and win? context: <P> Burglary can also be committed in "part of a building" and in R v Walkington 1979 1 WLR 1169 the defendant had entered a large shop during trading hours but went behind a counter and put his hand in an empty till. The court held that he had entered that part of the building normally reserved for staff as a trespasser with intention to steal money and was therefore guilty of burglary.
<P> A person commits the offense of burglary when, without authority and with the intent to commit a felony or theft therein, they enter or remain within the dwelling house of another or any building, vehicle, railroad car, watercraft, or other such structure designed for use as the dwelling of another or enter or remain within any other building, railroad car, aircraft, or any room or any part thereof. A person convicted of the offense of burglary, for the first such offense, shall be punished by imprisonment for not less than one nor more than 20 years. For the purposes of this Code section, the term "railroad car" shall also include trailers on flatcars, containers on flatcars, trailers on railroad property, or containers on railroad property. O.C.G.A. § 16-7-1
<P> Robbery occurs if an aggressor forcibly snatched a mobile phone or if they used a knife to make an implied threat of violence to the holder and then took the phone. The person being threatened does not need to be the owner of the property. It is not necessary that the victim was actually frightened, but the defendant must have put or sought to put the victim or some other person in fear of immediate force.
<P> In Wisconsin, burglary is committed by one who forcibly enters a building without consent and with intent to steal or to commit another felony. Burglary may also be committed by entry to a locked truck, car or trailer or a ship. The crime of burglary is treated as being more serious if the burglar is armed with a dangerous weapon when the burglary is committed or arms himself/herself during the commission of the burglary.
<P> In the United States, burglary is prosecuted as a felony or misdemeanor and involves trespassing and theft, entering a building or automobile, or loitering unlawfully with intent to commit any crime, not necessarily a theft--for example, vandalism. Even if nothing is stolen in a burglary, the act is a statutory offense. Buildings can include hangars, sheds, barns, and coops; burglary of boats, aircraft, trucks, and railway cars is possible. Burglary may be an element in crimes involving rape, arson, kidnapping, identity theft, or violation of civil rights; indeed, the "plumbers" of the Watergate scandal were technically burglars. As with all legal definitions in the U.S., the foregoing description may not be applicable in every jurisdiction, since there are 50 separate state criminal codes, plus federal and territorial codes in force.
<P> If the claimant is involved in wrongdoing at the time the alleged negligence occurred, this may extinguish or reduce the defendant's liability. The legal maxim "ex turpi causa non oritur actio", Latin for "no right of action arises from a despicable cause". Thus, if a burglar is verbally challenged by the property owner and sustains injury when jumping from a second story window to escape apprehension, there is no cause of action against the property owner even though that injury would not have been sustained but for the property owner's intervention.
<P> Under Florida State Statutes, "burglary" occurs when a person "enter[s] a dwelling, a structure, or a conveyance with the intent to commit an offense therein, unless the premises are at the time open to the public or the defendant is licensed or invited to enter." Depending on the circumstances of the crime, burglary can be classified as third, second, or first-degree felonies, with maximum sentences of five years, fifteen years, and life, respectively.
| answer: Katko v. Briney is probably what you're referring to. In that case it was because the home in question was not occupied and the would-be burglar was attacked by a booby trap. The use of booby traps is illegal because they attack people indiscriminately and do not use force to protect human life, only property.Life, even a burglar's life, is more valuable than a vacated property according to law. This case is famous because the press misrepresented the facts and a lot of people were upset by the non-factual scenario. |
206,461 | 2st861 | Store bought balloons into outer space? | There is no point of zero gravity. Otherwise how would the moon orbit?Things in orbit are in *free fall*, accelerating downwards just like any other object, but they are also travelling sideways so that the distance they gain from travelling on a tangent cancels the distance that they fall. | [
"There is no point of zero gravity. Otherwise how would the moon orbit?\n\nThings in orbit are in *free fall*, accelerating downwards just like any other object, but they are also travelling sideways so that the distance they gain from travelling on a tangent cancels the distance that they fall.",
"Helium only f... | 2 | [] | 0 | <P> In May 2008, the first bagel made it to outer space from the International Space Station on mission STS-124. 45-year old Astronaut Greg Chamitoff, the nephew of the owner of the store, managed to take products from Fairmount Bakery with him in his shuttle into space.
<P> Toy balloons are used as decorations and/or advertising space. Balloons are usually purchased in deflated form, however, some party stores and vendors at special events will fill their balloons before selling them, this is called "balloon stuffing" where the balloons are filled with objects such as smaller balloons, teddy bears, etc.
<P> The practice of carrying arbitrary non-functional items into space, which has previously been carried out by many Space Shuttle missions, is evidence that space travel is still widely seen as special. The value of symbolic items increases enormously if the item has flown in space, due to the restricted access to space. The teddy bear being carried for charity will be auctioned at a much higher price than it would otherwise command. Scaled Composites employees were made to sign a contract forbidding them from selling the mementos they put on the flight.
<P> Montreal-style bagels are currently the only style of bagel known to have ventured into space. Gregory Chamitoff, who grew up in Montreal, took three bags of sesame bagels with him on his assignments to STS-124 as passenger and ISS Expedition 17 as crewmember.
<P> Winzen Research Inc created balloons in the 1950s and 1960s that were used by the United States Navy in its Projects Helios, Skyhook, and Strato-Lab. Balloons were also sold to the United States Air Force for use in Project Manhigh and for a secret reconnaissance mission, called Moby Dick, to overfly the Soviet Union.
<P> In February 2017, Taylor became the first private citizen to manufacture an item in space when a gravity meter he commissioned and co-designed was printed on the International Space Station. The item was subsequently donated to the Museum of Science and Industry in Chicago.
<P> Every toy balloon has an opening (or "mouth") through which gases are blown into it, followed by a connecting tube known as the "neck". Balloons are usually filled by using one's breath, a pump, or a pressurized gas tank. The opening can then be permanently tied off or clamped temporarily. Foil balloons are typically self-sealing. By filling a balloon with a gas lighter than air, such as helium, the balloon can be made to float. Helium is the preferred gas for floating balloons, because it is inert and will not catch fire (like hydrogen) or cause toxic effects when inhaled. Small, light objects (postcards, in balloon mail for example) are sometimes placed in balloons along with helium and released into the air and, when the balloon eventually falls, the object inside might be found by another person. Rubber balloons can also be filled with liquids (usually water) and can burst when they impact a solid object. Liquid-filled balloons are commonly referred to as water balloons or water bombs and used in playful fights, and sometimes vandalism.
| question: Store bought balloons into outer space? context: <P> In May 2008, the first bagel made it to outer space from the International Space Station on mission STS-124. 45-year old Astronaut Greg Chamitoff, the nephew of the owner of the store, managed to take products from Fairmount Bakery with him in his shuttle into space.
<P> Toy balloons are used as decorations and/or advertising space. Balloons are usually purchased in deflated form, however, some party stores and vendors at special events will fill their balloons before selling them, this is called "balloon stuffing" where the balloons are filled with objects such as smaller balloons, teddy bears, etc.
<P> The practice of carrying arbitrary non-functional items into space, which has previously been carried out by many Space Shuttle missions, is evidence that space travel is still widely seen as special. The value of symbolic items increases enormously if the item has flown in space, due to the restricted access to space. The teddy bear being carried for charity will be auctioned at a much higher price than it would otherwise command. Scaled Composites employees were made to sign a contract forbidding them from selling the mementos they put on the flight.
<P> Montreal-style bagels are currently the only style of bagel known to have ventured into space. Gregory Chamitoff, who grew up in Montreal, took three bags of sesame bagels with him on his assignments to STS-124 as passenger and ISS Expedition 17 as crewmember.
<P> Winzen Research Inc created balloons in the 1950s and 1960s that were used by the United States Navy in its Projects Helios, Skyhook, and Strato-Lab. Balloons were also sold to the United States Air Force for use in Project Manhigh and for a secret reconnaissance mission, called Moby Dick, to overfly the Soviet Union.
<P> In February 2017, Taylor became the first private citizen to manufacture an item in space when a gravity meter he commissioned and co-designed was printed on the International Space Station. The item was subsequently donated to the Museum of Science and Industry in Chicago.
<P> Every toy balloon has an opening (or "mouth") through which gases are blown into it, followed by a connecting tube known as the "neck". Balloons are usually filled by using one's breath, a pump, or a pressurized gas tank. The opening can then be permanently tied off or clamped temporarily. Foil balloons are typically self-sealing. By filling a balloon with a gas lighter than air, such as helium, the balloon can be made to float. Helium is the preferred gas for floating balloons, because it is inert and will not catch fire (like hydrogen) or cause toxic effects when inhaled. Small, light objects (postcards, in balloon mail for example) are sometimes placed in balloons along with helium and released into the air and, when the balloon eventually falls, the object inside might be found by another person. Rubber balloons can also be filled with liquids (usually water) and can burst when they impact a solid object. Liquid-filled balloons are commonly referred to as water balloons or water bombs and used in playful fights, and sometimes vandalism.
| answer: There is no point of zero gravity. Otherwise how would the moon orbit? Things in orbit are in *free fall*, accelerating downwards just like any other object, but they are also travelling sideways so that the distance they gain from travelling on a tangent cancels the distance that they fall. |
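The answer just quoted turns on a single relation: in a circular orbit, gravity is spent entirely as centripetal acceleration, so the sideways speed must satisfy v^2 / r = GM / r^2. A minimal Python sketch of that arithmetic (the gravitational parameter and both radii are round-number assumptions for this illustration, not values taken from the dataset):

```python
import math

GM_EARTH = 3.986e14  # Earth's gravitational parameter in m^3/s^2 (assumed round value)

def circular_orbit_speed(radius_m: float) -> float:
    """Sideways speed at which the fall toward Earth exactly closes the circle."""
    return math.sqrt(GM_EARTH / radius_m)

# ISS altitude ~400 km and the Moon's mean distance ~384,400 km, both approximate.
print(f"ISS : {circular_orbit_speed(6.771e6):,.0f} m/s")   # roughly 7,700 m/s
print(f"Moon: {circular_orbit_speed(3.844e8):,.0f} m/s")   # roughly 1,000 m/s
```

The slower speed at the Moon's distance is the same trade-off the answer describes: the farther out you are, the gentler the fall, and the less sideways speed is needed to keep missing the ground.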
35,379 | v2jp5 | Is superstition a learned behavior? Are humans naturally superstitious? | To understand superstition, it is useful to understand how learning actually occurs. When something good happens, the brain basically reinforces all the circuits that were active prior to that good thing happening. If something bad happens, the reverse is true. This has the effect that you do more of the things that came before rewards, including the ones that actually caused the reward (if applicable) and less of the things that didn't lead to a reward, including the causes (if applicable). However, for the most part the brain doesn't distinguish between the obvious causes and the things that seem like they couldn't plausibly have caused the reward. So if you're always wearing green socks when you get good news, you will get more of a feeling that green socks are "good" or "lucky", despite having no factual evidence to back this up beyond "good things happened when I wore them in the past". Predicting the future is (computationally) expensive, but doing what worked in the past is easy. Hope that explained it. | [
"To understand superstition, it is useful to understand how learning actually occurs. When something good happens, the brain basically reinforces all the circuits that were active prior to that good thing happening. If something bad happens, the reverse is true. This has the effect that you do more of the things th... | 2 | [] | 0 | <P> Psychologist Stuart Vyse has pointed out that until about 2010, "[m]ost researchers assumed superstitions were irrational and focused their attentions on discovering why people were superstitious." Vyse went on to describe studies that looked at the relationship between performance and superstitious rituals. Preliminary work has indicated that such rituals can reduce stress and thereby improve performance, but, Vyse has said, "...not because they are superstitious but because they are rituals... So there is no real magic, but there is a bit of calming magic in performing a ritualistic sequence before attempting a high-pressure activity... Any old ritual will do."
<P> Superstition in Pakistan is widespread, and many adverse events are attributed to supernatural effects. Superstition is a belief in supernatural causality: that one event causes another without any physical process linking the two, such as astrology, omens, or witchcraft, in contradiction of natural science. In Pakistan, magical thinking pervades: many acts and events are attributed to the supernatural, and rituals such as prayer, sacrifice, or the observance of a taboo are followed. Many believe that magic is effective psychologically, as it has a placebo effect on psychosomatic illnesses. Scholars of Islam view superstition as shirk, a denial of the unity of God that is against Sharia. Within Islam, shirk is an unforgivable sin; God may forgive any sin if one dies in that state, except shirk. Sleeping on one's right side and reciting the Ayat-ul-Kursi of the Quran can protect a person from evil.
<P> In modern society, relying on superstitions has declined as there is more of an emphasis on rationality. As a result, many people are critical of acting on superstitious beliefs. Blindly turning to superstition, however, can still comfort the mind.
<P> Superstition is a credulous belief or notion, not based on reason or knowledge. The word "superstition" is often used pejoratively to refer to folk beliefs deemed irrational. This leads to some superstitions being called "old wives' tales." It is also commonly applied to beliefs and practices surrounding luck, prophecy and spiritual beings, particularly the irrational belief that future events can be influenced or foretold by specific unrelated prior events.
<P> People seem to believe that superstitions influence events by changing the likelihood of currently possible outcomes rather than by creating new possible outcomes. In sporting events, for example, a lucky ritual or object is thought to increase the chance that an athlete will perform at the peak of their ability, rather than increasing their overall ability at that sport. Consequently, people whose goal is to perform well are more likely to rely on "supernatural assistance" - lucky items and rituals - than are people whose goal is to improve their skills and abilities and learn in the same context.
<P> Superstitions are usually attributed to a lack of education. But, in India educated people have also been observed following beliefs that may be considered superstitious. The literacy rate of India, according to the 2011 census is at 74%. The beliefs and practices vary from region to region, with many regions having their own specific beliefs. The practices may range from harmless lemon-and-chilli totems for warding off evil eye to serious concerns like witch-burning. Some of these beliefs and practices are centuries old and are considered part of the tradition and religion, as a result introduction of new prohibitory laws often face opposition.
<P> To the Platonist philosopher Plutarch (c. 45–125) we owe the treatise "On Superstition". Plutarch defines "superstition" as "fear of the gods." Specifically, he mentions that fear of the gods leads to the need to resort to magical rites and taboos, the consultation of professional sorcerers and witches, amulets and incantations, and unintelligible language in prayers addressed to the gods.
| question: Is superstition a learned behavior? Are humans naturally superstitious? context: <P> Psychologist Stuart Vyse has pointed out that until about 2010, "[m]ost researchers assumed superstitions were irrational and focused their attentions on discovering why people were superstitious." Vyse went on to describe studies that looked at the relationship between performance and superstitious rituals. Preliminary work has indicated that such rituals can reduce stress and thereby improve performance, but, Vyse has said, "...not because they are superstitious but because they are rituals... So there is no real magic, but there is a bit of calming magic in performing a ritualistic sequence before attempting a high-pressure activity... Any old ritual will do."
<P> Superstition in Pakistan is widespread, and many adverse events are attributed to supernatural effects. Superstition is a belief in supernatural causality: that one event causes another without any physical process linking the two, such as astrology, omens, or witchcraft, in contradiction of natural science. In Pakistan, magical thinking pervades: many acts and events are attributed to the supernatural, and rituals such as prayer, sacrifice, or the observance of a taboo are followed. Many believe that magic is effective psychologically, as it has a placebo effect on psychosomatic illnesses. Scholars of Islam view superstition as shirk, a denial of the unity of God that is against Sharia. Within Islam, shirk is an unforgivable sin; God may forgive any sin if one dies in that state, except shirk. Sleeping on one's right side and reciting the Ayat-ul-Kursi of the Quran can protect a person from evil.
<P> In modern society, relying on superstitions has declined as there is more of an emphasis on rationality. As a result, many people are critical of acting on superstitious beliefs. Blindly turning to superstition, however, can still comfort the mind.
<P> Superstition is a credulous belief or notion, not based on reason or knowledge. The word "superstition" is often used pejoratively to refer to folk beliefs deemed irrational. This leads to some superstitions being called "old wives' tales." It is also commonly applied to beliefs and practices surrounding luck, prophecy and spiritual beings, particularly the irrational belief that future events can be influenced or foretold by specific unrelated prior events.
<P> People seem to believe that superstitions influence events by changing the likelihood of currently possible outcomes rather than by creating new possible outcomes. In sporting events, for example, a lucky ritual or object is thought to increase the chance that an athlete will perform at the peak of their ability, rather than increasing their overall ability at that sport. Consequently, people whose goal is to perform well are more likely to rely on "supernatural assistance" - lucky items and rituals - than are people whose goal is to improve their skills and abilities and learn in the same context.
<P> Superstitions are usually attributed to a lack of education. But, in India educated people have also been observed following beliefs that may be considered superstitious. The literacy rate of India, according to the 2011 census is at 74%. The beliefs and practices vary from region to region, with many regions having their own specific beliefs. The practices may range from harmless lemon-and-chilli totems for warding off evil eye to serious concerns like witch-burning. Some of these beliefs and practices are centuries old and are considered part of the tradition and religion, as a result introduction of new prohibitory laws often face opposition.
<P> To the Platonist philosopher Plutarch (c. 45–125) we owe the treatise "On Superstition". Plutarch defines "superstition" as "fear of the gods." Specifically, he mentions that fear of the gods leads to the need to resort to magical rites and taboos, the consultation of professional sorcerers and witches, amulets and incantations, and unintelligible language in prayers addressed to the gods.
| answer: To understand superstition, it is useful to understand how learning actually occurs. When something good happens, the brain basically reinforces all the circuits that were active prior to that good thing happening. If something bad happens, the reverse is true. This has the effect that you do more of the things that came before rewards, including the ones that actually caused the reward (if applicable) and less of the things that didn't lead to a reward, including the causes (if applicable). However, for the most part the brain doesn't distinguish between the obvious causes and the things that seem like they couldn't plausibly have caused the reward. So if you're always wearing green socks when you get good news, you will get more of a feeling that green socks are "good" or "lucky", despite having no factual evidence to back this up beyond "good things happened when I wore them in the past". Predicting the future is (computationally) expensive, but doing what worked in the past is easy. Hope that explained it. |
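The learning story in that answer, reward strengthening every action that preceded it rather than only the causal one, fits in a few lines of code. The sketch below is a toy model invented for this illustration (the habit names, probabilities, and learning rate are all assumptions, not dataset content):

```python
import random

random.seed(0)
value = {"study_for_exam": 0.0, "wear_green_socks": 0.0}  # learned "goodness" of each habit
LEARNING_RATE = 0.1

for trial in range(1000):
    actions_taken = ["study_for_exam", "wear_green_socks"]  # both always precede the outcome
    reward = 1.0 if random.random() < 0.8 else 0.0          # good news arrives 80% of the time
    for action in actions_taken:                            # naive update: credit everything active
        value[action] += LEARNING_RATE * (reward - value[action])

print(value)  # both habits converge to about 0.8, even though only one could plausibly matter
```

Because the update never asks which action caused the reward, the lucky socks end up looking just as "good" as studying, which is exactly the superstition the answer describes.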
22,732 | whjqr | why if i put the + of one battery to the - of a different battery does nothing happen? | You need to have a full circuit. Here's an analogy: Think of electricity like water and the battery as a pump. If you put two pumps next to each other but don't loop it, the water won't be able to keep flowing through and back to the first pump again; there won't be a cycle. It's nothing like that but if it helps you to understand it, that's fine. So, if you touched the batteries together and then put a wire from one end of the two batteries to the other, it'll work. However, you'll get a short circuit and make some sparks, so not a good idea. | [
"You need to have a full circuit. Here's an analogy:\n\nThink of electricity like water and the battery as a pump. If you put two pumps next to each other but dont loop it, the water wont be able to keep flowing through and back to the first pump again; won't be a cycle.\n\nIt's nothing like that but if it helps yo... | 1 | [] | 0 | <P> In the latter case, the problem occurs due to the different cells in a battery having slightly different capacities. When one cell reaches discharge level ahead of the rest, the remaining cells will force the current through the discharged cell.
<P> If a battery is connected to a significant load during charging, the end of the Uo-phase may never be reached and the battery will gas and be damaged, depending on the charge current relative to the battery capacity.
<P> The effect can be overcome by subjecting each cell of the battery to one or more deep charge/discharge cycles. This must be done to the individual cells, not a multi-cell battery; in a battery, some cells may discharge before others, resulting in those cells being subjected to a reverse charging current by the remaining cells, potentially leading to irreversible damage.
<P> A secondary cell, for example a rechargeable battery, is a cell in which the chemical reactions are reversible. When the cell is being charged, the anode becomes the positive (+) and the cathode the negative (−) electrode. This is also the case in an electrolytic cell. When the cell is being discharged, it behaves like a primary cell, with the anode as the negative and the cathode as the positive electrode. Charging and discharging processes, such as those in Lithium-ion batteries, tend to incur large losses through contact resistance at electrodes. Minimising these electrode localised losses constitutes an important approach in improving energy usage in electrochemical storage.
<P> In this stage, the battery continues to be charged at a constant (over)voltage Uo, but the charge current is decreasing. The decrease is imposed by the battery. The voltage in the Uo-phase is too high to be applied indefinitely (hence, overvoltage), but it allows charging the battery fully in a relatively short time. The Uo-phase is concluded when the charge current goes below a threshold I, after which the U-phase is entered. This happens when the battery is charged to around 95% of its capacity. Some manufacturers follow this stage by a second constant-current stage (with a gradually increasing voltage) before continuing with the U-phase. Its voltage may be the same as Uo in the previous stage, or it may be taken slightly higher.
<P> Some batteries sizes are available with terminals in two different configurations: 1) positive on left and negative on right, 2) negative on left and positive on right. Purchasing the wrong configuration may prevent battery cables from reaching the battery terminals.
<P> BULLET::::- In the United States there are codes on batteries to help consumers buy a recently produced one. When batteries are stored, they can start losing their charge. A battery made in October 2015 will have a numeric code of 10-5 or an alphanumeric code of K-5. "A" is for January, "B" is for February, and so on (the letter "I" is skipped).
| question: why if i put the + of one battery to the - of a different battery does nothing happen? context: <P> In the latter case, the problem occurs due to the different cells in a battery having slightly different capacities. When one cell reaches discharge level ahead of the rest, the remaining cells will force the current through the discharged cell.
<P> If a battery is connected to a significant load during charging, the end of the Uo-phase may never be reached and the battery will gas and be damaged, depending on the charge current relative to the battery capacity.
<P> The effect can be overcome by subjecting each cell of the battery to one or more deep charge/discharge cycles. This must be done to the individual cells, not a multi-cell battery; in a battery, some cells may discharge before others, resulting in those cells being subjected to a reverse charging current by the remaining cells, potentially leading to irreversible damage.
<P> A secondary cell, for example a rechargeable battery, is a cell in which the chemical reactions are reversible. When the cell is being charged, the anode becomes the positive (+) and the cathode the negative (−) electrode. This is also the case in an electrolytic cell. When the cell is being discharged, it behaves like a primary cell, with the anode as the negative and the cathode as the positive electrode. Charging and discharging processes, such as those in Lithium-ion batteries, tend to incur large losses through contact resistance at electrodes. Minimising these electrode localised losses constitutes an important approach in improving energy usage in electrochemical storage.
<P> In this stage, the battery continues to be charged at a constant (over)voltage Uo, but the charge current is decreasing. The decrease is imposed by the battery. The voltage in the Uo-phase is too high to be applied indefinitely (hence, overvoltage), but it allows charging the battery fully in a relatively short time. The Uo-phase is concluded when the charge current goes below a threshold I, after which the U-phase is entered. This happens when the battery is charged to around 95% of its capacity. Some manufacturers follow this stage by a second constant-current stage (with a gradually increasing voltage) before continuing with the U-phase. Its voltage may be the same as Uo in the previous stage, or it may be taken slightly higher.
<P> Some batteries sizes are available with terminals in two different configurations: 1) positive on left and negative on right, 2) negative on left and positive on right. Purchasing the wrong configuration may prevent battery cables from reaching the battery terminals.
<P> BULLET::::- In the United States there are codes on batteries to help consumers buy a recently produced one. When batteries are stored, they can start losing their charge. A battery made in October 2015 will have a numeric code of 10-5 or an alphanumeric code of K-5. "A" is for January, "B" is for February, and so on (the letter "I" is skipped).
| answer: You need to have a full circuit. Here's an analogy: Think of electricity like water and the battery as a pump. If you put two pumps next to each other but don't loop it, the water won't be able to keep flowing through and back to the first pump again; there won't be a cycle. It's nothing like that but if it helps you to understand it, that's fine. So, if you touched the batteries together and then put a wire from one end of the two batteries to the other, it'll work. However, you'll get a short circuit and make some sparks, so not a good idea. |
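The "full circuit" point in that answer comes down to Ohm's law over a closed loop: merely touching + to − moves no steady current, while completing the loop with a bare wire gives a short-circuit current limited only by the small internal and wire resistances. A small numeric sketch (the voltages and resistances below are illustrative assumptions, not measured values):

```python
def loop_current_amps(emf_volts: float, resistance_ohms: float, loop_closed: bool) -> float:
    """Ohm's law for a simple series loop; an open loop carries no steady current."""
    if not loop_closed:
        return 0.0
    return emf_volts / resistance_ohms

emf = 2 * 1.5            # two 1.5 V cells stacked + to -, in volts
internal_r = 2 * 0.15    # assumed internal resistance per cell, in ohms
wire_r = 0.05            # assumed resistance of the shorting wire, in ohms

print(loop_current_amps(emf, internal_r + wire_r, loop_closed=False))  # 0.0 A: terminals only touching
print(loop_current_amps(emf, internal_r + wire_r, loop_closed=True))   # ~8.6 A: sparks territory
```

The second number is why the answer warns against actually closing the loop with nothing but a wire.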
63,459 | 1ip6t8 | with the sun being 92 million miles away why do locations a few hundred miles from each other on earth have such different temperatures? | It's not actually the distance to the sun which determines how hot a place is. It's the *angle* to the sun. [This diagram](_URL_0_) shows that, nearer the equator, the sun's rays reach the earth much more directly than at the poles. At the poles, the same amount of heat energy is spread over a much larger surface area - resulting in much lower temperatures. | [
"It's not actually the distance to the sun which determines how hot a place is. It's the *angle* to the sun.\n\n[This diagram](_URL_0_) shows that, nearer the equator, the sun's rays reach the earth much more directly than at the poles. At the poles, the same amount of heat energy is spread over a much larger surfa... | 7 | [
"It's not actually the distance to the sun which determines how hot a place is. It's the *angle* to the sun.\n\n[This diagram](_URL_0_) shows that, nearer the equator, the sun's rays reach the earth much more directly than at the poles. At the poles, the same amount of heat energy is spread over a much larger surfa... | 3 | <P> BULLET::::- The distance from the Earth to the Sun varies. The Earth is closest to the Sun (at perihelion) in January, which is summer in the Southern Hemisphere. It is furthest away (at aphelion) in July, which is summer in the Northern Hemisphere, and only 93.55% of the solar radiation from the Sun falls on a given square area of land than at perihelion. Despite this, there are larger land masses in the Northern Hemisphere, which are easier to heat than the seas. Consequently, summers are warmer in the Northern Hemisphere than in the Southern Hemisphere under similar conditions.
<P> BULLET::::- The Sun is larger in diameter than the Earth, so more than half of the Earth is in sunlight at any one time (due to unparallel rays creating tangent points beyond an equal-day-night line).
<P> Using the knowledge that the Sun is very far away, the ancient Greek geographer Eratosthenes performed an experiment using the differences in the observed angle of the Sun from two different locations to calculate the circumference of the Earth. Though modern telecommunications and timekeeping were not available, he was able to make sure the measurements happened at the same time by having them taken when the Sun was highest in the sky (local noon) at both locations. Using slightly inaccurate assumptions about the locations of two cities, he came to a result within 15% of the correct value.
<P> where "T" is the temperature of the Sun, "R" the radius of the Sun, and "a" is the distance between the Earth and the Sun. This gives an effective temperature of 6 °C on the surface of the Earth, assuming that it perfectly absorbs all emission falling on it and has no atmosphere.
<P> In the Northern Hemisphere, when the Earth is at its furthest point from the sun (aphelion) the variation in temperature between winter and summer are less extreme. When the earth is closest to the sun (perihelion), about 5,750 years later, then the variations are at their most extreme. At present the Earth is at its furthest, so the northern hemisphere summers and winters are less extreme and the southern hemisphere climate is more extreme.
<P> These Solar System minor planets that were the farthest from the Sun as of December 2015 and/or March 2018. The objects have been categorized by their approximate heliocentric distance from the Sun, and not by the greatest calculated aphelion of their orbit. The list changes over time because the objects are moving. Some objects are inbound and some are outbound. It would be difficult to detect long-distance comets if it weren't for their comas, which become visible when heated by the Sun. Distances are measured in astronomical units (AU, Sun–Earth distances). The distances are not the minimum (perihelion) or the maximum (aphelion) that may be achieved by these objects in the future.
<P> Because of the axial tilt of the Earth in its orbit, the maximal intensity of Sun rays hits the Earth 23.4 degrees north of equator at the June Solstice (at the Tropic of Cancer), and 23.4 degrees south of equator at the December Solstice (at the Tropic of Capricorn).
| question: with the sun being 92 million miles away why do locations a few hundred miles from each other on earth have such different temperatures? context: <P> BULLET::::- The distance from the Earth to the Sun varies. The Earth is closest to the Sun (at perihelion) in January, which is summer in the Southern Hemisphere. It is furthest away (at aphelion) in July, which is summer in the Northern Hemisphere, when only 93.55% as much solar radiation from the Sun falls on a given square area of land as at perihelion. Despite this, there are larger land masses in the Northern Hemisphere, which are easier to heat than the seas. Consequently, summers are warmer in the Northern Hemisphere than in the Southern Hemisphere under similar conditions.
<P> BULLET::::- The Sun is larger in diameter than the Earth, so more than half of the Earth is in sunlight at any one time (due to unparallel rays creating tangent points beyond an equal-day-night line).
<P> Using the knowledge that the Sun is very far away, the ancient Greek geographer Eratosthenes performed an experiment using the differences in the observed angle of the Sun from two different locations to calculate the circumference of the Earth. Though modern telecommunications and timekeeping were not available, he was able to make sure the measurements happened at the same time by having them taken when the Sun was highest in the sky (local noon) at both locations. Using slightly inaccurate assumptions about the locations of two cities, he came to a result within 15% of the correct value.
<P> where "T" is the temperature of the Sun, "R" the radius of the Sun, and "a" is the distance between the Earth and the Sun. This gives an effective temperature of 6 °C on the surface of the Earth, assuming that it perfectly absorbs all emission falling on it and has no atmosphere.
<P> In the Northern Hemisphere, when the Earth is at its furthest point from the sun (aphelion) the variation in temperature between winter and summer are less extreme. When the earth is closest to the sun (perihelion), about 5,750 years later, then the variations are at their most extreme. At present the Earth is at its furthest, so the northern hemisphere summers and winters are less extreme and the southern hemisphere climate is more extreme.
<P> These Solar System minor planets that were the farthest from the Sun as of December 2015 and/or March 2018. The objects have been categorized by their approximate heliocentric distance from the Sun, and not by the greatest calculated aphelion of their orbit. The list changes over time because the objects are moving. Some objects are inbound and some are outbound. It would be difficult to detect long-distance comets if it weren't for their comas, which become visible when heated by the Sun. Distances are measured in astronomical units (AU, Sun–Earth distances). The distances are not the minimum (perihelion) or the maximum (aphelion) that may be achieved by these objects in the future.
<P> Because of the axial tilt of the Earth in its orbit, the maximal intensity of Sun rays hits the Earth 23.4 degrees north of equator at the June Solstice (at the Tropic of Cancer), and 23.4 degrees south of equator at the December Solstice (at the Tropic of Capricorn).
| answer: It's not actually the distance to the sun which determines how hot a place is. It's the *angle* to the sun. [This diagram](_URL_0_) shows that, nearer the equator, the sun's rays reach the earth much more directly than at the poles. At the poles, the same amount of heat energy is spread over a much larger surface area - resulting in much lower temperatures. |
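The "angle, not distance" claim in that answer can be checked with one line of trigonometry: the power landing on a flat patch of ground scales with the cosine of the solar zenith angle. A short worked example under simplifying assumptions (equinox, local noon, atmosphere ignored; the solar constant and the latitudes are round numbers chosen for illustration):

```python
import math

SOLAR_CONSTANT = 1361.0  # W/m^2 arriving at the top of the atmosphere

def noon_equinox_insolation(latitude_deg: float) -> float:
    """W/m^2 on level ground at local noon on an equinox, when the solar
    zenith angle simply equals the latitude (atmosphere ignored)."""
    return SOLAR_CONSTANT * math.cos(math.radians(latitude_deg))

for lat in (0, 30, 45, 60, 70):
    print(f"latitude {lat:2d} deg: {noon_equinox_insolation(lat):6.0f} W/m^2")
# 0 deg ~1361, 45 deg ~962, 70 deg ~465 W/m^2: roughly a threefold spread from geometry alone,
# while the ~3% yearly change in Earth-Sun distance moves the incoming power by only ~7%.
```

So two places a few hundred miles apart in latitude, or the same place in summer versus winter, receive very different energy per square metre even though their distance to the Sun is essentially identical.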
6,881 | 1jpeft | Which technologies owe their invention and/or diffusion to the porn industry? | The best source I know of is Jonathan Coopersmith's "[Pornography, Technology, and Progress](_URL_0_)," which covers the diffusion of many technologies, from photography to the internet. His main point is that porn consumers are willing to pay a premium for these services, so they allow the technology to mature and drive down the price for later users. Probably the best example of this (the article I linked to doesn't have a lot about it, but google should), is the standardization of VHS over Betamax. Betamax didn't have very much porn (because of higher capital costs and Sony's discouragement), but on VHS a large percentage of the original movies were porn. Because porn aficionados flocked to VHS, they gained so much market share they helped drive "regular" consumers away from Beta. | [
"The lack of data can probably be attributed to a few things;\n\n• Individuals and companies involved with these sort of decisions likely don't have many interviews/discussion in the mainstream\n• The factors involved with the adoption of a technology or solution are likely multifaceted and it would be difficult to... | 2 | [
"The best source I know of is Jonathan Coopersmith's \"[Pornography, Technology, and Progress](_URL_0_),\" which covers the diffusion of many technologies, from photography to the internet. His main point is that porn consumers are willing to pay a premium for these services, so they allow the technology to mature... | 1 | <P> Pornographers have taken advantage of each technological advance in the production and distribution of visual images. Pornography is considered a driving force in the development of technologies from the printing press, through photography (still and motion), to satellite TV, other forms of video, and the Internet.
<P> The first porn daguerreotype appeared in 1855 and with the advent of "moving pictures" by the Lumière brothers the first porn film was made soon after the public exhibition of their creation. Pornographic film production commenced almost immediately after the invention of the motion picture in 1895. Two of the earliest pioneers were Eugène Pirou and Albert Kirchner. Kirchner directed the earliest surviving pornographic film for Pirou under the trade name "Léar". The 1896 film, "Le Coucher de la Marie" showed Louise Willy performing a striptease. Pirou's film inspired a genre of risqué French films showing women disrobing and other filmmakers realized profits could be made from such films. In the United States, one of the Thomas Edison's first efforts using his methods and equipment for making moving pictures was of a nude woman getting up from her bath tub and running away.
<P> Pornography is regarded by some as one of the driving forces behind the expansion of the World Wide Web, like the camcorder VCR and cable television before it. Pornographic images had been transmitted over the Internet as ASCII porn but to send images over network needed computers with graphics capability and also higher network bandwidth. This was possible in the late 1980s and early 1990s through the use of anonymous FTP servers and
<P> The Pörner Group was founded by Kurt Thomas Pörner, Who began the Pörner Technical Bureau in 1972. The company grew from there, opening its first subsidiary in Linz in 1975. In the 1970s Pörner Ingenieurgesellschaft mbH, situated in Vienna Austria, acquired the rights to license the Biturox® process for upgrading bitumen by means of selective air oxidation developed by the Austrian oil company OMV. Pörner licensed its first Biturox® plant to NIOC Isfahan in Iran.
<P> The invention of the World Wide Web spurred both commercial and non-commercial distribution of pornography. The rise of pornography websites offering photos, video clips and streaming media including live webcam access allowed greater access to pornography.
<P> Although pornography dates back thousands of years, its existence in the U.S. can be traced to its 18th-century origins and the influx of foreign trade and immigrants. By the end of the 18th century, France had become the leading country regarding the spread of porn pictures. Porn had become the subject of playing-cards, posters, post cards, and cabinet cards. Prior to this printers were previously limited to engravings, woodcuts, and line cuts for illustrations. As trade increased and more people immigrated from countries with less Puritanical and more relaxed attitudes toward human sexuality, the amount of available visual pornography increased.
<P> Before the age of internet pornography and a general acceptance of the production of pornography, porn was an underground phenomenon. Stag films, also known as blue movies, were made by men for men. The projections of such films were itinerant and were secret exhibitions in brothels or smoker houses. Stag films were an entirely clandestine phenomenon; not until the "porn chic" era of the 1970s would sexually explicit cinema gain any recognition or discussion in mainstream society. Unlike today, the on-screen display of satisfaction, such as male or female orgasm, was not a convention of stag cinema. Instead there was what Linda Williams called the "meat shot", which was a closeup, hardcore depiction of genital intercourse. As there are no direct quotes or oral histories by participants in this underground cinema, film scholars understand what they know of these stag films mainly through written accounts. Stag films persisted for such a great length of time, as Williams argues, simply because they were cut off from more public expressions of sexuality.
| question: Which technologies owe their invention and/or diffusion to the porn industry? context: <P> Pornographers have taken advantage of each technological advance in the production and distribution of visual images. Pornography is considered a driving force in the development of technologies from the printing press, through photography (still and motion), to satellite TV, other forms of video, and the Internet.
<P> The first porn daguerreotype appeared in 1855 and with the advent of "moving pictures" by the Lumière brothers the first porn film was made soon after the public exhibition of their creation. Pornographic film production commenced almost immediately after the invention of the motion picture in 1895. Two of the earliest pioneers were Eugène Pirou and Albert Kirchner. Kirchner directed the earliest surviving pornographic film for Pirou under the trade name "Léar". The 1896 film, "Le Coucher de la Marie" showed Louise Willy performing a striptease. Pirou's film inspired a genre of risqué French films showing women disrobing and other filmmakers realized profits could be made from such films. In the United States, one of the Thomas Edison's first efforts using his methods and equipment for making moving pictures was of a nude woman getting up from her bath tub and running away.
<P> Pornography is regarded by some as one of the driving forces behind the expansion of the World Wide Web, like the camcorder VCR and cable television before it. Pornographic images had been transmitted over the Internet as ASCII porn but to send images over network needed computers with graphics capability and also higher network bandwidth. This was possible in the late 1980s and early 1990s through the use of anonymous FTP servers and
<P> The Pörner Group was founded by Kurt Thomas Pörner, Who began the Pörner Technical Bureau in 1972. The company grew from there, opening its first subsidiary in Linz in 1975. In the 1970s Pörner Ingenieurgesellschaft mbH, situated in Vienna Austria, acquired the rights to license the Biturox® process for upgrading bitumen by means of selective air oxidation developed by the Austrian oil company OMV. Pörner licensed its first Biturox® plant to NIOC Isfahan in Iran.
<P> The invention of the World Wide Web spurred both commercial and non-commercial distribution of pornography. The rise of pornography websites offering photos, video clips and streaming media including live webcam access allowed greater access to pornography.
<P> Although pornography dates back thousands of years, its existence in the U.S. can be traced to its 18th-century origins and the influx of foreign trade and immigrants. By the end of the 18th century, France had become the leading country regarding the spread of porn pictures. Porn had become the subject of playing-cards, posters, post cards, and cabinet cards. Prior to this printers were previously limited to engravings, woodcuts, and line cuts for illustrations. As trade increased and more people immigrated from countries with less Puritanical and more relaxed attitudes toward human sexuality, the amount of available visual pornography increased.
<P> Before the age of internet pornography and a general acceptance of the production of pornography, porn was an underground phenomenon. Stag films, also known as blue movies, were made by men for men. The projections of such films were itinerant and were secret exhibitions in brothels or smoker houses. Stag films were an entirely clandestine phenomenon; not until the "porn chic" era of the 1970s would sexually explicit cinema gain any recognition or discussion in mainstream society. Unlike today, the on-screen display of satisfaction, such as male or female orgasm, was not a convention of stag cinema. Instead there was what Linda Williams called the "meat shot", which was a closeup, hardcore depiction of genital intercourse. As there are no direct quotes or oral histories by participants in this underground cinema, film scholars understand what they know of these stag films mainly through written accounts. Stag films persisted for such a great length of time, as Williams argues, simply because they were cut off from more public expressions of sexuality.
| answer: The best source I know of is Jonathan Coopersmith's "[Pornography, Technology, and Progress](_URL_0_)," which covers the diffusion of many technologies, from photography to the internet. His main point is that porn consumers are willing to pay a premium for these services, so they allow the technology to mature and drive down the price for later users. Probably the best example of this (the article I linked to doesn't have a lot about it, but google should), is the standardization of VHS over Betamax. Betamax didn't have very much porn (because of higher capital costs and Sony's discouragement), but on VHS a large percentage of the original movies were porn. Because porn aficionados flocked to VHS, they gained so much market share they helped drive "regular" consumers away from Beta. |
56,343 | 2iaekf | how could theatre patrons hear the actors on stage before modern electricity? if it has to do with room acoustics, then why are microphones needed now? | Part of it is room acoustics, part of it is vocal projection by actors. As far as I know stage actors still train in projection (basically having a big booming voice that carries well but doesn't sound like you're just shouting your lines), but microphones are probably mainly in use to make it easier on the actors' voices and easier for everybody generally. | [
"Part of it is room acoustics, part of it is vocal projection by actors. As far as I know stage actors still train in projection (basically having a big booming voice that carries well but doesn't sound like you're just shouting your lines), but microphones are probably mainly in use to make it easier on the actors... | 9 | [
"Part of it is room acoustics, part of it is vocal projection by actors. As far as I know stage actors still train in projection (basically having a big booming voice that carries well but doesn't sound like you're just shouting your lines), but microphones are probably mainly in use to make it easier on the actors... | 5 | <P> Since the microphones of the period were not very sensitive, they had to be brought as close to the performers as possible. Because of the visible shadows that the microphones would have cast under the intense lighting of the film sets they had to be hidden in many scenes behind all sorts of objects such as armchairs, bookshelves and vases. The actors had nevertheless to be constantly instructed to speak louder, which caused a number of problems in scenes in which by their nature it was necessary to speak quietly.
<P> Because there was no orchestra, there was no space separating the audience from the stage. The audience could stand directly in front of the elevated wooden platform. This gave them the opportunity to look at the actors from a much different perspective. They would have seen every detail of the actor and hear every word he said. The audience member would have wanted that actor to speak directly to them. It was a part of the thrill of the performance, as it is to this day.
<P> A microphone was included to help tune the system for the room acoustics, much like many home theatre receivers. The soundbar can be mounted vertically or horizontally, and the system will automatically adjust the sound to compensate.
<P> The acoustics were designed by Bob Essert of Sound Space Design and a team that included Aercoustics Engineering, Wilson Ihrig and Engineering Harmonics. The undulating back walls of the venue, which diffuse the sound throughout the auditorium by reflecting the sound waves back to the stage, account for about 90 percent of the audible sound for the audience. To prevent audience members from detecting specific sounds and vibrations including traffic noise, the rumble from the adjacent subway line and streetcar line, and even the sirens of the emergency vehicles rushing to the nearby hospitals, the theatre sits on 489 rubber insulating pads.
<P> One of the problems with the use of standard public address systems in music theatre is that the front-of-house and monitor speakers may obstruct audience sight lines and interfere with the stage appearance. In some cases, this problem can be solved by hiding large speakers behind set constructions or drapes. In productions with little or no onstage set structures, such as a minimalist modern piece, sound engineers may opt to use higher-cost low-profile speakers, which are slimmer. Alternatively, sound engineers may decide to "fly" the speakers by attaching them securely to the rigging above the stage using steel cables.
<P> Although stage, lighting and other production aspects of opera houses often make use of the latest technology, traditional opera houses have not used sound reinforcement systems with microphones and loudspeakers to amplify the singers, since trained opera singers are normally able to project their unamplified voices in the hall. Since the 1990s, however, some opera houses have begun using a subtle form of sound reinforcement called acoustic enhancement (see below).
<P> While designing sound for the musical "Hair", Dugan began to appreciate the human operator's inability to act quickly enough to control multiple microphones. He saw the show's audio mixing person "working rotary knobs for 16 area mics, 9 hand mics, and 10 mics in the band". Dugan thought that a microphone should not be turned on unless it was getting some worthwhile signal, more than just the room ambiance. His frustration with microphone mixing led him to experiment for a few years with microphone signals controlled automatically by voltage-controlled amplifiers (VCAs), finally developing the "Dugan Music System", shown to the AES at their 49th convention, held in New York in 1974. This system used a novel proportional gain algorithm whereby the total gain was divided between all active microphones. The microphones were "continuously and automatically adjusted" by a set of VCAs to bring each microphone up or down in the mix, based on how much signal it was sending relative to the signal received by a reference microphone placed somewhat distant from the other microphones. Dugan's patent application for a "Control Apparatus for Sound Reinforcement Systems" was accepted and published on June 4, 1974. This was the first useful automatic microphone mixing algorithm, the basis for all of Dugan's later systems.
| question: how could theatre patrons hear the actors on stage before modern electricity? if it has to do with room acoustics, then why are microphones needed now? context: <P> Since the microphones of the period were not very sensitive, they had to be brought as close to the performers as possible. Because of the visible shadows that the microphones would have cast under the intense lighting of the film sets they had to be hidden in many scenes behind all sorts of objects such as armchairs, bookshelves and vases. The actors had nevertheless to be constantly instructed to speak louder, which caused a number of problems in scenes in which by their nature it was necessary to speak quietly.
<P> Because there was no orchestra, there was no space separating the audience from the stage. The audience could stand directly in front of the elevated wooden platform. This gave them the opportunity to look at the actors from a much different perspective. They would have seen every detail of the actor and hear every word he said. The audience member would have wanted that actor to speak directly to them. It was a part of the thrill of the performance, as it is to this day.
<P> A microphone was included to help tune the system for the room acoustics, much like many home theatre receivers. The soundbar can be mounted vertically or horizontally, and the system will automatically adjust the sound to compensate.
<P> The acoustics were designed by Bob Essert of Sound Space Design and a team that included Aercoustics Engineering, Wilson Ihrig and Engineering Harmonics. The undulating back walls of the venue, which diffuse the sound throughout the auditorium by reflecting the sound waves back to the stage, account for about 90 percent of the audible sound for the audience. To prevent audience members from detecting specific sounds and vibrations including traffic noise, the rumble from the adjacent subway line and streetcar line, and even the sirens of the emergency vehicles rushing to the nearby hospitals, the theatre sits on 489 rubber insulating pads.
<P> One of the problems with the use of standard public address systems in music theatre is that the front-of-house and monitor speakers may obstruct audience sight lines and interfere with the stage appearance. In some cases, this problem can be solved by hiding large speakers behind set constructions or drapes. In productions with little or no onstage set structures, such as a minimalist modern piece, sound engineers may opt to use higher-cost low-profile speakers, which are slimmer. Alternatively, sound engineers may decide to "fly" the speakers by attaching them securely to the rigging above the stage using steel cables.
<P> Although stage, lighting and other production aspects of opera houses often make use of the latest technology, traditional opera houses have not used sound reinforcement systems with microphones and loudspeakers to amplify the singers, since trained opera singers are normally able to project their unamplified voices in the hall. Since the 1990s, however, some opera houses have begun using a subtle form of sound reinforcement called acoustic enhancement (see below).
<P> While designing sound for the musical "Hair", Dugan began to appreciate the human operator's inability to act quickly enough to control multiple microphones. He saw the show's audio mixing person "working rotary knobs for 16 area mics, 9 hand mics, and 10 mics in the band". Dugan thought that a microphone should not be turned on unless it was getting some worthwhile signal, more than just the room ambiance. His frustration with microphone mixing led him to experiment for a few years with microphone signals controlled automatically by voltage-controlled amplifiers (VCAs), finally developing the "Dugan Music System", shown to the AES at their 49th convention, held in New York in 1974. This system used a novel proportional gain algorithm whereby the total gain was divided between all active microphones. The microphones were "continuously and automatically adjusted" by a set of VCAs to bring each microphone up or down in the mix, based on how much signal it was sending relative to the signal received by a reference microphone placed somewhat distant from the other microphones. Dugan's patent application for a "Control Apparatus for Sound Reinforcement Systems" was accepted and published on June 4, 1974. This was the first useful automatic microphone mixing algorithm, the basis for all of Dugan's later systems.
| answer: Part of it is room acoustics, part of it is vocal projection by actors. As far as I know stage actors still train in projection (basically having a big booming voice that carries well but doesn't sound like you're just shouting your lines), but microphones are probably mainly in use to make it easier on the actors' voices and easier for everybody generally. |
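That answer's two ingredients, trained projection and helpful room acoustics, can be put on a rough numeric footing: direct sound falls off about 6 dB per doubling of distance, so the question is whether the voice still clears the room's noise floor at the back row. The sketch below is a back-of-the-envelope illustration with assumed source levels and hall dimensions (none of the numbers come from the dataset), and it ignores the reflections that a well-designed hall adds on top of the direct sound:

```python
import math

def spl_at_distance(spl_at_1m_db: float, distance_m: float) -> float:
    """Free-field inverse-square falloff of sound pressure level with distance."""
    return spl_at_1m_db - 20.0 * math.log10(distance_m)

BACK_ROW_M = 25.0               # assumed distance to the back of the house
AUDIENCE_NOISE_FLOOR_DB = 40.0  # assumed background level of a hushed audience

for label, spl_1m in (("projected stage voice", 90.0), ("conversational speech", 60.0)):
    at_back = spl_at_distance(spl_1m, BACK_ROW_M)
    verdict = "clearly audible" if at_back > AUDIENCE_NOISE_FLOOR_DB else "lost in the noise"
    print(f"{label}: ~{at_back:.0f} dB at {BACK_ROW_M:.0f} m -> {verdict}")
```

On these assumptions the projected voice arrives at around 62 dB and the conversational one at around 32 dB, which is the gap that projection training (and, today, microphones) exists to close.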
38,138 | 829p01 | Why being a diabetic keeps you from donating blood? | Two factors generally go into consideration for donating blood: health risks for the recipient and health risks for the donor. Usually with diabetics (especially if uncontrolled), they are at high risk for complications related to hypoglycemia because of their natural inability to control body sugar levels. In a similar manner, conditions such as hypertension and kidney problems that are also related to blood pressure may deteriorate if given blood. For most agencies, one may be able to give blood if their sugar levels are well managed with human insulin and if they have no other known complications. This is for the safety of the donor. On the other hand, if the patient giving the blood takes non-human insulin treatments, they may have preformed antibodies in their blood that would be transferred to the recipient during the transfusion. This would spark what is called a transfusion reaction in the recipient, which can be life threatening. | [
"Two factors generally go into consideration for donating blood: health risks for the recipient and health risks for the donor. Usually with diabetics (especially if uncontrolled), they are at high risk for complications related to hypoglycemia because of their natural inability to control body sugar levels. In a s... | 1 | [
"Two factors generally go into consideration for donating blood: health risks for the recipient and health risks for the donor. Usually with diabetics (especially if uncontrolled), they are at high risk for complications related to hypoglycemia because of their natural inability to control body sugar levels. In a s... | 1 | <P> Research published in 2012 demonstrated that repeated blood donation is effective in reducing blood pressure, blood glucose, HbA1c, low-density lipoprotein/high-density lipoprotein ratio, and heart rate in patients with metabolic syndrome.
<P> There are several reasons why individuals can be deferred from donating blood, including intravenous drug use, living in the UK for certain periods of time, coming from an HIV-endemic country, as well as HIV high-risk activity.
<P> Millions of people have diabetes. When blood sugars are not well controlled, diabetics can quickly develop a wide range of complications. Diabetes results in elevated blood sugars in the body, and this environment allows viruses and bacteria to thrive.
<P> In patients prone to iron overload, blood donation prevents the accumulation of toxic quantities. Donating blood may reduce the risk of heart disease for men, but the link has not been firmly established and may be from selection bias because donors are screened for health problems.
<P> BULLET::::- Diabetic nephropathy is a complication that occurs in some diabetics. Excess blood sugar accumulates in the kidneys, causing them to become inflamed and unable to carry out their normal function. This leads to the leakage of proteins into the urine.
<P> Diabetes mellitus – group of metabolic diseases in which a person has high blood sugar, either because the pancreas does not produce enough insulin, or because cells do not respond properly to the insulin that is produced, a condition called insulin resistance. The resultant high blood sugar produces the classical symptoms of polyuria (frequent urination), polydipsia (increased thirst) and polyphagia (increased hunger).
<P> Diabetes mellitus is a chronic disease, for which there is no known cure except in very specific situations. Management concentrates on keeping blood sugar levels as close to normal, without causing low blood sugar. This can usually be accomplished with a healthy diet, exercise, weight loss, and use of appropriate medications (insulin in the case of type 1 diabetes; oral medications, as well as possibly insulin, in type 2 diabetes).
| question: Why being a diabetic keeps you from donating blood? context: <P> Research published in 2012 demonstrated that repeated blood donation is effective in reducing blood pressure, blood glucose, HbA1c, low-density lipoprotein/high-density lipoprotein ratio, and heart rate in patients with metabolic syndrome.
<P> There are several reasons why individuals can be deferred from donating blood, including intravenous drug use, living in the UK for certain periods of time, coming from an HIV-endemic country, as well as HIV high-risk activity.
<P> Millions of people have diabetes. When blood sugars are not well controlled, diabetics can quickly develop a wide range of complications. Diabetes results in elevated blood sugars in the body, and this environment allows viruses and bacteria to thrive.
<P> In patients prone to iron overload, blood donation prevents the accumulation of toxic quantities. Donating blood may reduce the risk of heart disease for men, but the link has not been firmly established and may be from selection bias because donors are screened for health problems.
<P> BULLET::::- Diabetic nephropathy is a complication that occurs in some diabetics. Excess blood sugar accumulates in the kidneys, causing them to become inflamed and unable to carry out their normal function. This leads to the leakage of proteins into the urine.
<P> Diabetes mellitus – group of metabolic diseases in which a person has high blood sugar, either because the pancreas does not produce enough insulin, or because cells do not respond properly to the insulin that is produced, a condition called insulin resistance. The resultant high blood sugar produces the classical symptoms of polyuria (frequent urination), polydipsia (increased thirst) and polyphagia (increased hunger).
<P> Diabetes mellitus is a chronic disease, for which there is no known cure except in very specific situations. Management concentrates on keeping blood sugar levels as close to normal, without causing low blood sugar. This can usually be accomplished with a healthy diet, exercise, weight loss, and use of appropriate medications (insulin in the case of type 1 diabetes; oral medications, as well as possibly insulin, in type 2 diabetes).
| answer: Two factors generally go into consideration for donating blood: health risks for the recipient and health risks for the donor. Usually with diabetics (especially if uncontrolled), they are at high risk for complications related to hypoglycemia because of their natural inability to control body sugar levels. In a similar manner, conditions such as hypertension and kidney problems that are also related to blood pressure may deteriorate if given blood. For most agencies, one may be able to give blood if their sugar levels are well managed with human insulin and if they have no other known complications. This is for the safety of the donor. On the other hand, if the patient giving the blood takes non-human insulin treatments, they may have preformed antibodies in their blood that would be transferred to the recipient during the transfusion. This would spark what is called a transfusion reaction in the recipient, which can be life threatening. |
13,888 | 4dcvsq | Are quarks and electrons really indivisible? | > It's hard for me to understand how something can exist without being made of anything else, it just exists. Do you imagine that the sub-particles of electrons and quarks would also have sub-particles etc.? I find the idea of there being an infinite regression of sub-particles much harder to swallow. | [
" > It's hard for me to understand how something can exist without being made of anything else, it just exists.\n\nDo you imagine that the sub-particles of electrons and quarks would also have sub-particles etc.? I find the idea of there being an infinite regression of sub-particles much harder to swallow."
] | 1 | [
" > It's hard for me to understand how something can exist without being made of anything else, it just exists.\n\nDo you imagine that the sub-particles of electrons and quarks would also have sub-particles etc.? I find the idea of there being an infinite regression of sub-particles much harder to swallow."
<P> Quarks are the fundamental constituents of hadrons and interact via the strong interaction. Quarks are the only known carriers of fractional charge, but because they combine in groups of three (baryons) or in pairs of one quark and one antiquark (mesons), only integer charge is observed in nature. Their respective antiparticles are the antiquarks, which are identical except that they carry the opposite electric charge (for example the up quark carries charge +2/3, while the up antiquark carries charge −2/3), color charge, and baryon number. There are six flavors of quarks; the three positively charged quarks are called "up-type quarks" while the three negatively charged quarks are called "down-type quarks".
<P> Quarks are spin-1/2 particles, implying that they are fermions according to the spin–statistics theorem. They are subject to the Pauli exclusion principle, which states that no two identical fermions can simultaneously occupy the same quantum state. This is in contrast to bosons (particles with integer spin), of which any number can be in the same state. Unlike leptons, quarks possess color charge, which causes them to engage in the strong interaction. The resulting attraction between different quarks causes the formation of composite particles known as "hadrons" (see "Strong interaction and color charge" below).
<P> Quarks are particles of spin-1/2, implying that they are fermions. They carry an electric charge of −1/3 e (down-type quarks) or +2/3 e (up-type quarks). For comparison, an electron has a charge of −1 e. They also carry colour charge, which is the equivalent of the electric charge for the strong interaction. Quarks also undergo radioactive decay, meaning that they are subject to the weak interaction. Quarks are massive particles, and therefore are also subject to gravity.
<P> This observation was critical to the recognition of quarks as actual elementary particles (rather than just convenient theoretical constructs), and led to the theory of strong interactions known as quantum chromodynamics, where it was understood in terms of the asymptotic freedom property. In Bjorken's picture, the quarks become point-like, observable objects at very short distances (high energies), shorter than the size of the hadrons.
<P> Since free quark searches consistently failed to turn up any evidence for the new particles, and because an elementary particle back then was "defined" as a particle which could be separated and isolated, Gell-Mann often said that quarks were merely convenient mathematical constructs, not real particles. The meaning of this statement was usually clear in context: He meant quarks are confined, but he also was implying that the strong interactions could probably not be fully described by quantum field theory.
<P> Quarks have various intrinsic properties, including electric charge, mass, color charge, and spin. They are the only elementary particles in the Standard Model of particle physics to experience all four fundamental interactions, also known as "fundamental forces" (electromagnetism, gravitation, strong interaction, and weak interaction), as well as the only known particles whose electric charges are not integer multiples of the elementary charge.
<P> Evidence for the existence of quarks comes from deep inelastic scattering: firing electrons at nuclei to determine the distribution of charge within nucleons (which are baryons). If the charge is uniform, the electric field around the proton should be uniform and the electron should scatter elastically. Low-energy electrons do scatter in this way, but, above a particular energy, the protons deflect some electrons through large angles. The recoiling electron has much less energy and a jet of particles is emitted. This inelastic scattering suggests that the charge in the proton is not uniform but split among smaller charged particles: quarks.
| question: Are quarks and electrons really indivisible? context: <P> Quarks are the fundamental constituents of hadrons and interact via the strong interaction. Quarks are the only known carriers of fractional charge, but because they combine in groups of three (baryons) or in pairs of one quark and one antiquark (mesons), only integer charge is observed in nature. Their respective antiparticles are the antiquarks, which are identical except that they carry the opposite electric charge (for example the up quark carries charge +2/3, while the up antiquark carries charge −2/3), color charge, and baryon number. There are six flavors of quarks; the three positively charged quarks are called "up-type quarks" while the three negatively charged quarks are called "down-type quarks".
<P> Quarks are spin-1/2 particles, implying that they are fermions according to the spin–statistics theorem. They are subject to the Pauli exclusion principle, which states that no two identical fermions can simultaneously occupy the same quantum state. This is in contrast to bosons (particles with integer spin), of which any number can be in the same state. Unlike leptons, quarks possess color charge, which causes them to engage in the strong interaction. The resulting attraction between different quarks causes the formation of composite particles known as "hadrons" (see "Strong interaction and color charge" below).
<P> Quarks are particles of spin-1/2, implying that they are fermions. They carry an electric charge of −1/3 e (down-type quarks) or +2/3 e (up-type quarks). For comparison, an electron has a charge of −1 e. They also carry colour charge, which is the equivalent of the electric charge for the strong interaction. Quarks also undergo radioactive decay, meaning that they are subject to the weak interaction. Quarks are massive particles, and therefore are also subject to gravity.
<P> This observation was critical to the recognition of quarks as actual elementary particles (rather than just convenient theoretical constructs), and led to the theory of strong interactions known as quantum chromodynamics, where it was understood in terms of the asymptotic freedom property. In Bjorken's picture, the quarks become point-like, observable objects at very short distances (high energies), shorter than the size of the hadrons.
<P> Since free quark searches consistently failed to turn up any evidence for the new particles, and because an elementary particle back then was "defined" as a particle which could be separated and isolated, Gell-Mann often said that quarks were merely convenient mathematical constructs, not real particles. The meaning of this statement was usually clear in context: He meant quarks are confined, but he also was implying that the strong interactions could probably not be fully described by quantum field theory.
<P> Quarks have various intrinsic properties, including electric charge, mass, color charge, and spin. They are the only elementary particles in the Standard Model of particle physics to experience all four fundamental interactions, also known as "fundamental forces" (electromagnetism, gravitation, strong interaction, and weak interaction), as well as the only known particles whose electric charges are not integer multiples of the elementary charge.
<P> Evidence for the existence of quarks comes from deep inelastic scattering: firing electrons at nuclei to determine the distribution of charge within nucleons (which are baryons). If the charge is uniform, the electric field around the proton should be uniform and the electron should scatter elastically. Low-energy electrons do scatter in this way, but, above a particular energy, the protons deflect some electrons through large angles. The recoiling electron has much less energy and a jet of particles is emitted. This inelastic scattering suggests that the charge in the proton is not uniform but split among smaller charged particles: quarks.
| answer: > It's hard for me to understand how something can exist without being made of anything else, it just exists. Do you imagine that the sub-particles of electrons and quarks would also have sub-particles etc.? I find the idea of there being an infinite regression of sub-particles much harder to swallow. |
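A quick arithmetic sketch of the charge bookkeeping described in the quark paragraphs above (a worked example only; the +2/3 e and −1/3 e assignments for up-type and down-type quarks are the standard quark-model values, assumed here rather than quoted from this row): a proton is uud and a neutron is udd, so

Q_p = \tfrac{2}{3} + \tfrac{2}{3} - \tfrac{1}{3} = +1, \qquad Q_n = \tfrac{2}{3} - \tfrac{1}{3} - \tfrac{1}{3} = 0,

which is why hadrons built from three quarks (or from a quark–antiquark pair) always show integer charge even though the individual constituents carry fractional charge.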
127,435 | 1e40rz | WW2 Veterans, Personal Narratives, etc. | Have you looked at the Library of Congress's Veteran's Project? I think my Dad has a tape in there (WWII B29). I am sure there are a lot of personal artifacts in there. | [
"Have you looked at the Library of Congress's Veteran's Project? I think my Dad has a tape in there (WWII B29). I am sure there is a lot of personal artifacts in there."
] | 1 | [] | 0 | <P> The project preserves the memories of soldiers whose military service occurred around the globe. These oral histories provide first-hand resources for scholarly research in military history and US history. They address veterans’ experiences during the time periods of the Vietnam War, the Persian Gulf War, the Korean War, and the Cold War.
<P> BULLET::::- The Veterans History Project, congressionally mandated in 2000 to collect, preserve, and make accessible the personal accounts of American war veterans from World War I to the present day;
<P> The Veterans History Project of the Library of Congress American Folklife Center (commonly known as the Veterans History Project) was created by the United States Congress in 2000 to collect and preserve the firsthand remembrances of U.S. wartime veterans. Its mandate ensures future generations may hear directly from those who served to better understand the realities of war.
<P> In 2007, Alex and Pure Film went on to co-produce the documentary "The Veteran Story", which chronicles the lives of US Veterans who served in WWII, the Vietnam War and the Korean War. This documentary was developed, produced and released as a philanthropy film project, with all proceeds going to the veterans of the Louisiana War Veterans Home in Jackson, Louisiana.
<P> Ordinary Heroes is a narrative, nonfiction account of World War II as told through the perspective of veterans who served in various theatres of the conflict. Beginning with the Japanese attack on Pearl Harbor in 1941 and ending sometime after V-J Day, the book recounts the soldiers’ experiences at home and abroad, describing in detail what it was like to be at war. The stories are pulled from interviews conducted by the authors, which were verified and assembled into a timeline. Thus, the tales are presented chronologically as the war progresses.
<P> The Second World War Experience Centre based in Walton, West Yorkshire, England, which is near Wetherby, is a registered charity and museum/archive which was set up in 1998 to preserve personal memories of the Second World War before they are lost forever. The archive is international in scope and holds letters, diaries, photographs and papers donated by individuals - the collection is unique as it concentrates only on the Second World War and personal experience. A network of volunteers across the UK also tapes record veterans' memories for the Centre, and its collection now numbers in excess of 9000 lives.
<P> The museum is the home of the Center for U.S. War Veterans’ Oral History Project, which records interviews of veterans about their military experiences. These interviews are available on videotape and DVD for review by researchers and scholars. , the center has recorded over 500 interviews with veterans serving in World War II and other conflicts through Operation Iraqi Freedom.
| question: WW2 Veterans, Personal Narratives, etc. context: <P> The project preserves the memories of soldiers whose military service occurred around the globe. These oral histories provide first-hand resources for scholarly research in military history and US history. They address veterans’ experiences during the time periods of the Vietnam War, the Persian Gulf War, the Korean War, and the Cold War.
<P> BULLET::::- The Veterans History Project, congressionally mandated in 2000 to collect, preserve, and make accessible the personal accounts of American war veterans from World War I to the present day;
<P> The Veterans History Project of the Library of Congress American Folklife Center (commonly known as the Veterans History Project) was created by the United States Congress in 2000 to collect and preserve the firsthand remembrances of U.S. wartime veterans. Its mandate ensures future generations may hear directly from those who served to better understand the realities of war.
<P> In 2007, Alex and Pure Film went on to co-produce the documentary "The Veteran Story", which chronicles the lives of US Veterans who served in WWII, the Vietnam War and the Korean War. This documentary was developed, produced and released as a philanthropy film project, with all proceeds going to the veterans of the Louisiana War Veterans Home in Jackson, Louisiana.
<P> Ordinary Heroes is a narrative, nonfiction account of World War II as told through the perspective of veterans who served in various theatres of the conflict. Beginning with the Japanese attack on Pearl Harbor in 1941 and ending sometime after V-J Day, the book recounts the soldiers’ experiences at home and abroad, describing in detail what it was like to be at war. The stories are pulled from interviews conducted by the authors, which were verified and assembled into a timeline. Thus, the tales are presented chronologically as the war progresses.
<P> The Second World War Experience Centre based in Walton, West Yorkshire, England, which is near Wetherby, is a registered charity and museum/archive which was set up in 1998 to preserve personal memories of the Second World War before they are lost forever. The archive is international in scope and holds letters, diaries, photographs and papers donated by individuals - the collection is unique as it concentrates only on the Second World War and personal experience. A network of volunteers across the UK also tapes record veterans' memories for the Centre, and its collection now numbers in excess of 9000 lives.
<P> The museum is the home of the Center for U.S. War Veterans’ Oral History Project, which records interviews of veterans about their military experiences. These interviews are available on videotape and DVD for review by researchers and scholars. , the center has recorded over 500 interviews with veterans serving in World War II and other conflicts through Operation Iraqi Freedom.
| answer: Have you looked at the Library of Congress's Veteran's Project? I think my Dad has a tape in there (WWII B29). I am sure there are a lot of personal artifacts in there. |
151,614 | 1ybb39 | there's so much hustle and bustle about the united states being so "free", but what exactly separates us from countries such as let's say..canada? or switzerland? what about sweden? aren't they just as free as we are? | This is not a complete answer to your question, and it may even raise more questions (but that is also my experience when answering the questions of real five year olds, so perhaps that is a good thing). When asking your question, you also have to take the meaning of the word "freedom" into consideration. What does "freedom" mean in different parts of the world?From what I have been told, the concept of "freedom" is sometimes different in the US as opposed to, for example, Europe. In the US, "freedom" is taken more to be "freedom from" the involvement of the government or other third parties. Basically, you should be allowed to do whatever you want, as long as that does not infringe on another person's freedom. Now, in many parts of Europe, "freedom" is instead taken to mean "freedom to" do things. While "freedom to" may seem to be the same thing as "freedom from" it really isn't, at least not all the time. For example, if everyone should be "free to" go to university, that requires that university is tuition-free, i.e. paid for by the government. That means that the government must collect more taxes on taxpayers to pay for the tuition fees. In the opposite, if everyone should be able to live their life "free from" government interference, that means minimizing tax collection, giving the taxpayer more choice in where to spend her/his money (perhaps on a college/university fund for her/his kids). The above may explain, for example, why many Americans seem to react strongly against implementing a single payer health care system, while many Europeans would react just as strongly against an insurance/multi payer health care system. Its a difference in the perception of what "freedom" is.But bear in mind that governments and society changes all the time. What I wrote above isn't necessarily true in all instances, and may not be true tomorrow where it is true today. | [
"It's mostly comes out of the fact that the United States had the Bill of Rights when most of the world was still ruled by monarchies. For the most part, most western style democracies have the same level of freedom as the United States. There is a free press, the ability to protest, and the freedom to worship or... | 8 | [
"It's mostly comes out of the fact that the United States had the Bill of Rights when most of the world was still ruled by monarchies. For the most part, most western style democracies have the same level of freedom as the United States. There is a free press, the ability to protest, and the freedom to worship or... | 7 | <P> The United States and Sweden have strong economic relations. The United States is currently the third-largest Swedish export trade partner, and U.S. companies are the most represented foreign companies in Sweden.
<P> In 2012 Canadian news columnist Andrew Coyne described countries with free trade with both the EU and the United States as a "select group" that includes Colombia, Israel, Jordan, Mexico, Morocco, and Peru. He described South Korea, Chile, and Singapore as "buccaneering free traders" and the only countries that rivaled Canada in "scale and scope of the trade agreements" that they had signed (roughly 75% of Canada’s trade is tariff-free).
<P> Nobel Prize-winning economist Joseph Stiglitz has noted that there is higher social mobility in the Scandinavian countries than in the United States and argues that Scandinavia is now the land of opportunity that the United States once was. American author Ann Jones, who lived in Norway for four years, contends that "the Nordic countries give their populations freedom "from" the market by using capitalism as a tool to benefit everyone" whereas in the United States "neoliberal politics puts the foxes in charge of the henhouse, and capitalists have used the wealth generated by their enterprises (as well as financial and political manipulations) to capture the state and pluck the chickens".
<P> Free trade with the United States has resulted in continued exposure to the US system. Since the United States is Canada's largest trading partner and vice versa, Canadian exporters and importers must be accustomed to dealing in US customary units as well as metric.
<P> Sweden is a Scandinavian country in Northern Europe and the third-largest country in the European Union by area. It is also a member of the United Nations, the Nordic Council, Council of Europe, the World Trade Organization and the Organisation for Economic Co-operation and Development (OECD). Sweden maintains a Nordic social welfare system that provides universal health care and tertiary education for its citizens. It has the world's eighth-highest per capita income and ranks highly in numerous metrics of national performance, including quality of life, health, education, protection of civil liberties, economic competitiveness, equality, prosperity and human development.
<P> According to the United Nations' 2008 E-Government Survey, Sweden is internationally acknowledged as one of the most successful eGovernment countries and the world leader in terms of e-Government Readiness. As for the 8th EU Benchmark, it places the country among the top five European Union members states.
<P> The United States is home to a number of perceptions about Canadian culture, due to the countries' partially shared heritage and the relatively large number of cultural features common to both the US and Canada. For example, the average Canadian may be perceived as more reserved than his or her American counterpart. Canada and the United States are often inevitably compared as sibling countries, and the perceptions that arise from this oft-held contrast have gone to shape the advertised worldwide identities of both nations: the United States is seen as the rebellious child of the British Crown, forged in the fires of violent revolution; Canada is the calmer offspring of the United Kingdom, known for a more relaxed national demeanour.
| question: there's so much hustle and bustle about the united states being so "free", but what exactly separates us from countries such as let's say..canada? or switzerland? what about sweden? aren't they just as free as we are? context: <P> The United States and Sweden have strong economic relations. The United States is currently the third-largest Swedish export trade partner, and U.S. companies are the most represented foreign companies in Sweden.
<P> In 2012 Canadian news columnist Andrew Coyne described countries with free trade with both the EU and the United States as a "select group" that includes Colombia, Israel, Jordan, Mexico, Morocco, and Peru. He described South Korea, Chile, and Singapore as "buccaneering free traders" and the only countries that rivaled Canada in "scale and scope of the trade agreements" that they had signed (roughly 75% of Canada’s trade is tariff-free).
<P> Nobel Prize-winning economist Joseph Stiglitz has noted that there is higher social mobility in the Scandinavian countries than in the United States and argues that Scandinavia is now the land of opportunity that the United States once was. American author Ann Jones, who lived in Norway for four years, contends that "the Nordic countries give their populations freedom "from" the market by using capitalism as a tool to benefit everyone" whereas in the United States "neoliberal politics puts the foxes in charge of the henhouse, and capitalists have used the wealth generated by their enterprises (as well as financial and political manipulations) to capture the state and pluck the chickens".
<P> Free trade with the United States has resulted in continued exposure to the US system. Since the United States is Canada's largest trading partner and vice versa, Canadian exporters and importers must be accustomed to dealing in US customary units as well as metric.
<P> Sweden is a Scandinavian country in Northern Europe and the third-largest country in the European Union by area. It is also a member of the United Nations, the Nordic Council, Council of Europe, the World Trade Organization and the Organisation for Economic Co-operation and Development (OECD). Sweden maintains a Nordic social welfare system that provides universal health care and tertiary education for its citizens. It has the world's eighth-highest per capita income and ranks highly in numerous metrics of national performance, including quality of life, health, education, protection of civil liberties, economic competitiveness, equality, prosperity and human development.
<P> According to the United Nations' 2008 E-Government Survey, Sweden is internationally acknowledged as one of the most successful eGovernment countries and the world leader in terms of e-Government Readiness. As for the 8th EU Benchmark, it places the country among the top five European Union members states.
<P> The United States is home to a number of perceptions about Canadian culture, due to the countries' partially shared heritage and the relatively large number of cultural features common to both the US and Canada. For example, the average Canadian may be perceived as more reserved than his or her American counterpart. Canada and the United States are often inevitably compared as sibling countries, and the perceptions that arise from this oft-held contrast have gone to shape the advertised worldwide identities of both nations: the United States is seen as the rebellious child of the British Crown, forged in the fires of violent revolution; Canada is the calmer offspring of the United Kingdom, known for a more relaxed national demeanour.
| answer: This is not a complete answer to your question, and it may even raise more questions (but that is also my experience when answering the questions of real five year olds, so perhaps that is a good thing). When asking your question, you also have to take the meaning of the word "freedom" into consideration. What does "freedom" mean in different parts of the world?From what I have been told, the concept of "freedom" is sometimes different in the US as opposed to, for example, Europe. In the US, "freedom" is taken more to be "freedom from" the involvement of the government or other third parties. Basically, you should be allowed to do whatever you want, as long as that does not infringe on another person's freedom. Now, in many parts of Europe, "freedom" is instead taken to mean "freedom to" do things. While "freedom to" may seem to be the same thing as "freedom from" it really isn't, at least not all the time. For example, if everyone should be "free to" go to university, that requires that university is tuition-free, i.e. paid for by the government. That means that the government must collect more taxes on taxpayers to pay for the tuition fees. In the opposite, if everyone should be able to live their life "free from" government interference, that means minimizing tax collection, giving the taxpayer more choice in where to spend her/his money (perhaps on a college/university fund for her/his kids). The above may explain, for example, why many Americans seem to react strongly against implementing a single payer health care system, while many Europeans would react just as strongly against an insurance/multi payer health care system. Its a difference in the perception of what "freedom" is.But bear in mind that governments and society changes all the time. What I wrote above isn't necessarily true in all instances, and may not be true tomorrow where it is true today. |
89,036 | 7j05xu | the tommy john surgery and how it makes baseball pitchers throw better | Throwing baseballs at 80+ MPH for years is really hard on your arm. This includes the joints, tendons, and ligaments all the way from your wrist to your shoulder. One of the most affected areas is the elbow, specifically the UCL or "ulnar collateral ligament." Tommy John surgery is basically a tissue graft where they replace or reinforce the damaged UCL with ligament tissue taken elsewhere from your body, or from a cadaver. It doesn't necessarily make pitchers "throw better," unless you're considering that those with a severely damaged UCL can't throw a ball at all anymore. | [
"Throwing baseballs at 80+ MPH for years is really hard on your arm. This includes the joints, tendons, and ligaments all the way from your wrist to your shoulder.\n\nOne of the most affected areas is the elbow, specifically the UCL or \"ulnar collateral ligament.\" Tommy John surgery is basically a tissue graft wh... | 1 | [
"Throwing baseballs at 80+ MPH for years is really hard on your arm. This includes the joints, tendons, and ligaments all the way from your wrist to your shoulder.\n\nOne of the most affected areas is the elbow, specifically the UCL or \"ulnar collateral ligament.\" Tommy John surgery is basically a tissue graft wh... | 1 | <P> The procedure is named for Major League Baseball pitcher Curt Schilling, who required the surgery to be able to pitch for the Boston Red Sox in Game 6 of the 2004 American League Championship Series and Game 2 of the 2004 World Series.
<P> Some baseball pitchers believe they can throw harder after ulnar collateral ligament reconstruction than they did beforehand. As a result, orthopedic surgeons have reported that parents of young pitchers have come to them and asked them to perform the procedure on their un-injured sons in the hope that this will increase their sons' performance. However, many people—including Frank Jobe—believe any post-surgical increases in performance are most likely due to the increased stability of the elbow joint and pitchers' increased attention to their fitness and conditioning. Jobe believed that, rather than allowing pitchers to gain speed, the surgery and rehab protocols merely allow pitchers to return to their pre-injury levels of performance.
<P> For baseball players, full rehabilitation takes about one year for pitchers and about six months for position players. Players typically begin throwing about 16 weeks after surgery. Prior to his surgery, John had won 124 games. He won 164 after surgery, retiring in 1989 at age 46. Other pitchers to extend their careers after Tommy John surgery include Stephen Strasburg, David Wells, A. J. Burnett, Francisco Liriano, Chris Carpenter, Tim Hudson, John Smoltz, Joe Nathan, Brian Wilson, Billy Wagner, and Matt Harvey. Sandy Koufax, who suffered a similar injury to John's in 1966, once asked Jobe "why didn’t you do that on me?"
<P> For baseball players, full rehabilitation takes about one year for pitchers and about six months for position players. Players typically begin throwing about 16 weeks after surgery. While 80 percent of players return to pitching at the same level as before the surgery, for those Major League Baseball pitchers who receive the surgery twice, 35 percent do not return to pitch in the majors at all.
<P> The procedure was invented in 1974 by orthopedic surgeon Frank Jobe, a Los Angeles Dodgers team physician who served as a special advisor to the team until his death in 2014. It is named after the first baseball player to undergo the surgery, major league pitcher Tommy John, whose record of 288 career victories ranks seventh among left-handed pitchers. The initial operation, John's successful post-surgery career, and the relationship between the two men is the subject of a 2013 ESPN "30 for 30" documentary.
<P> In 1974, Jobe performed the first "Tommy John surgery" on then-Los Angeles Dodgers pitcher Tommy John. The procedure has become so prevalent an estimated one-third of all major league pitchers have undergone it. Jobe also performed the first major reconstructive shoulder surgery on a big league player in 1990, which allowed Dodger star Orel Hershiser to continue his career. Jobe served as a special medical adviser to the Dodgers until his death.
<P> Effective pitching is critical to a baseball team, as pitching is the key for the defensive team to retire batters and to prevent runners from getting on base. A full game usually involves over one hundred pitches thrown by each team. However, most pitchers begin to tire before they reach this point. In previous eras, pitchers would often throw up to four complete games (all nine innings) in a week. With new advances in medical research and thus a better understanding of how the human body functions and tires out, starting pitchers tend more often to throw fractions of a game (typically six or seven innings, depending on their performance) about every five days (though a few complete games do still occur each year).
| question: the tommy john surgery and how it makes baseball pitchers throw better context: <P> The procedure is named for Major League Baseball pitcher Curt Schilling, who required the surgery to be able to pitch for the Boston Red Sox in Game 6 of the 2004 American League Championship Series and Game 2 of the 2004 World Series.
<P> Some baseball pitchers believe they can throw harder after ulnar collateral ligament reconstruction than they did beforehand. As a result, orthopedic surgeons have reported that parents of young pitchers have come to them and asked them to perform the procedure on their un-injured sons in the hope that this will increase their sons' performance. However, many people—including Frank Jobe—believe any post-surgical increases in performance are most likely due to the increased stability of the elbow joint and pitchers' increased attention to their fitness and conditioning. Jobe believed that, rather than allowing pitchers to gain speed, the surgery and rehab protocols merely allow pitchers to return to their pre-injury levels of performance.
<P> For baseball players, full rehabilitation takes about one year for pitchers and about six months for position players. Players typically begin throwing about 16 weeks after surgery. Prior to his surgery, John had won 124 games. He won 164 after surgery, retiring in 1989 at age 46. Other pitchers to extend their careers after Tommy John surgery include Stephen Strasburg, David Wells, A. J. Burnett, Francisco Liriano, Chris Carpenter, Tim Hudson, John Smoltz, Joe Nathan, Brian Wilson, Billy Wagner, and Matt Harvey. Sandy Koufax, who suffered a similar injury to John's in 1966, once asked Jobe "why didn’t you do that on me?"
<P> For baseball players, full rehabilitation takes about one year for pitchers and about six months for position players. Players typically begin throwing about 16 weeks after surgery. While 80 percent of players return to pitching at the same level as before the surgery, for those Major League Baseball pitchers who receive the surgery twice, 35 percent do not return to pitch in the majors at all.
<P> The procedure was invented in 1974 by orthopedic surgeon Frank Jobe, a Los Angeles Dodgers team physician who served as a special advisor to the team until his death in 2014. It is named after the first baseball player to undergo the surgery, major league pitcher Tommy John, whose record of 288 career victories ranks seventh among left-handed pitchers. The initial operation, John's successful post-surgery career, and the relationship between the two men is the subject of a 2013 ESPN "30 for 30" documentary.
<P> In 1974, Jobe performed the first "Tommy John surgery" on then-Los Angeles Dodgers pitcher Tommy John. The procedure has become so prevalent an estimated one-third of all major league pitchers have undergone it. Jobe also performed the first major reconstructive shoulder surgery on a big league player in 1990, which allowed Dodger star Orel Hershiser to continue his career. Jobe served as a special medical adviser to the Dodgers until his death.
<P> Effective pitching is critical to a baseball team, as pitching is the key for the defensive team to retire batters and to prevent runners from getting on base. A full game usually involves over one hundred pitches thrown by each team. However, most pitchers begin to tire before they reach this point. In previous eras, pitchers would often throw up to four complete games (all nine innings) in a week. With new advances in medical research and thus a better understanding of how the human body functions and tires out, starting pitchers tend more often to throw fractions of a game (typically six or seven innings, depending on their performance) about every five days (though a few complete games do still occur each year).
| answer: Throwing baseballs at 80+ MPH for years is really hard on your arm. This includes the joints, tendons, and ligaments all the way from your wrist to your shoulder. One of the most affected areas is the elbow, specifically the UCL or "ulnar collateral ligament." Tommy John surgery is basically a tissue graft where they replace or reinforce the damaged UCL with ligament tissue taken elsewhere from your body, or from a cadaver. It doesn't necessarily make pitchers "throw better," unless you're considering that those with a severely damaged UCL can't throw a ball at all anymore. |
172,656 | 4ecga5 | Why do wet things dry, though they're not at boiling temperature ? | Water and all liquids have a vapor pressure. So at equilibrium at a certain temperature, a fraction of the liquid will exist as a vapor. In a closed container, once equilibrium has been established (reached 100% humidity) no further net evaporation will occur. However in an open system, the vapor will leave and now more of the liquid will evaporate and it will dry. Actually the boiling temperature is when the vapor pressure of the liquid is equal to the external pressure. That's why water boils at a lower temperature in the mountains. | [
"The University of Cambridge brings up this subject in regards to clothes drying, where they state: \n\n > Water has energy. So, in other words, at any given temperature, the water molecules are vibrating or moving around, washing machine proportional to the temperature of the water and when we give energy to wate... | 2 | [
"The University of Cambridge brings up this subject in regards to clothes drying, where they state: \n\n > Water has energy. So, in other words, at any given temperature, the water molecules are vibrating or moving around, washing machine proportional to the temperature of the water and when we give energy to wate... | 2 | <P> Not all ceramic pieces are dry when they need cleaning. Some ceramics, such as those that are excavated archaeologically, will be damp or wet in nature. Conservators tend to remove the surface dirt before the object is completely dry. This is done because it is easier to do before the dirt hardens and because as it dried the dirt may shrink and cause physical damage to the ceramic surface. Some ceramics are kept damp until treatment can be completed.
<P> Sludge drying is necessary to remove remaining water available due to mechanical limitation during sludge dewatering. The thermal drying process is affected by the specific behaviour (depends on the dryness to be reached) of the sludge.
<P> Many dryers consist of a rotating drum called a "tumbler" through which heated air is circulated to evaporate the moisture, while the tumbler is rotated to maintain air space between the articles. Using these machines may cause clothes to shrink or become less soft (due to loss of short soft fibers/lint). A simpler non-rotating machine called a "drying cabinet" may be used for delicate fabrics and other items not suitable for a tumble dryer.
<P> Drying is a mass transfer process consisting of the removal of water or another solvent by evaporation from a solid, semi-solid or liquid. This process is often used as a final production step before selling or packaging products. To be considered "dried", the final product must be solid, in the form of a continuous sheet (e.g., paper), long pieces (e.g., wood), particles (e.g., cereal grains or corn flakes) or powder (e.g., sand, salt, washing powder, milk powder). A source of heat and an agent to remove the vapor produced by the process are often involved. In bioproducts like food, grains, and pharmaceuticals like vaccines, the solvent to be removed is almost invariably water. Desiccation may be synonymous with drying or considered an extreme form of drying.
<P> In many cases, damp is caused by "bridging" of a damp-proof course that is otherwise working effectively. For example, a flower bed next to an affected wall might result in soil being piled up against the wall above the level of the DPC. In this example, moisture from the ground would be able to ingress through the wall from the soil. Such a damp problem could be rectified by simply lowering the flower bed to below DPC level.
<P> Dryers can be heated electrically (heating cartridge or heating cable), or by circulating hot water (connected to the central heating). Often, a combination of the methods is used. In these cases, the dryer is heated by hot water in wintertime, and by electricity in the summer. A towel dryer, with high output, can also serve as a radiator in a small bathroom.
<P> In a typical Finnish sauna, the temperature of the air, the room and the benches is above the dew point even when water is thrown on the hot stones and vaporized. Thus, they remain dry. In contrast, the sauna bathers are at about , which is below the dew point, so that water is condensed on the bathers' skin. This process releases heat and makes the steam feel hot.
| question: Why do wet things dry, though they're not at boiling temperature ? context: <P> Not all ceramic pieces are dry when they need cleaning. Some ceramics, such as those that are excavated archaeologically, will be damp or wet in nature. Conservators tend to remove the surface dirt before the object is completely dry. This is done because it is easier to do before the dirt hardens and because as it dried the dirt may shrink and cause physical damage to the ceramic surface. Some ceramics are kept damp until treatment can be completed.
<P> Sludge drying is necessary to remove remaining water available due to mechanical limitation during sludge dewatering. The thermal drying process is affected by the specific behaviour (depends on the dryness to be reached) of the sludge.
<P> Many dryers consist of a rotating drum called a "tumbler" through which heated air is circulated to evaporate the moisture, while the tumbler is rotated to maintain air space between the articles. Using these machines may cause clothes to shrink or become less soft (due to loss of short soft fibers/lint). A simpler non-rotating machine called a "drying cabinet" may be used for delicate fabrics and other items not suitable for a tumble dryer.
<P> Drying is a mass transfer process consisting of the removal of water or another solvent by evaporation from a solid, semi-solid or liquid. This process is often used as a final production step before selling or packaging products. To be considered "dried", the final product must be solid, in the form of a continuous sheet (e.g., paper), long pieces (e.g., wood), particles (e.g., cereal grains or corn flakes) or powder (e.g., sand, salt, washing powder, milk powder). A source of heat and an agent to remove the vapor produced by the process are often involved. In bioproducts like food, grains, and pharmaceuticals like vaccines, the solvent to be removed is almost invariably water. Desiccation may be synonymous with drying or considered an extreme form of drying.
<P> In many cases, damp is caused by "bridging" of a damp-proof course that is otherwise working effectively. For example, a flower bed next to an affected wall might result in soil being piled up against the wall above the level of the DPC. In this example, moisture from the ground would be able to ingress through the wall from the soil. Such a damp problem could be rectified by simply lowering the flower bed to below DPC level.
<P> Dryers can be heated electrically (heating cartridge or heating cable), or by circulating hot water (connected to the central heating). Often, a combination of the methods is used. In these cases, the dryer is heated by hot water in wintertime, and by electricity in the summer. A towel dryer, with high output, can also serve as a radiator in a small bathroom.
<P> In a typical Finnish sauna, the temperature of the air, the room and the benches is above the dew point even when water is thrown on the hot stones and vaporized. Thus, they remain dry. In contrast, the sauna bathers are at about , which is below the dew point, so that water is condensed on the bathers' skin. This process releases heat and makes the steam feel hot.
| answer: Water and all liquids have a vapor pressure. So at equilibrium at a certain temperature, a fraction of the liquid will exist as a vapor. In a closed container, once equilibrium has been established (reached 100% humidity) no further net evaporation will occur. However in an open system, the vapor will leave and now more of the liquid will evaporate and it will dry. Actually the boiling temperature is when the vapor pressure of the liquid is equal to the external pressure. That's why water boils at a lower temperature in the mountains. |
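A quantitative sketch of the boiling-point remark in the answer above, using the Clausius–Clapeyron relation (the enthalpy of vaporization ΔH_vap ≈ 40.7 kJ/mol for water and the ~70 kPa ambient pressure, roughly the value near 3000 m of altitude, are assumed illustrative figures, not numbers taken from the row):

\ln\frac{p_2}{p_1} = -\frac{\Delta H_{\mathrm{vap}}}{R}\left(\frac{1}{T_2} - \frac{1}{T_1}\right)
\quad\Rightarrow\quad
\frac{1}{T_2} = \frac{1}{373\,\mathrm{K}} + \frac{8.314\,\mathrm{J/(mol\,K)}}{40\,700\,\mathrm{J/mol}}\,\ln\frac{101.3\,\mathrm{kPa}}{70\,\mathrm{kPa}} \approx \frac{1}{363\,\mathrm{K}}

So at about 70 kPa the vapor pressure of water reaches the external pressure near 90 °C rather than 100 °C, which is the "boils at a lower temperature in the mountains" effect. Below the boiling point, the same nonzero vapor pressure is what drives net evaporation whenever the surrounding air holds less water vapor than the equilibrium amount.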
78,210 | 11buef | Why do animals who live deep underwater on Hydrothermal vents have eyes? | The presence of eyes, while somewhat useless in the abyss, is not an inherent disadvantage, and they were likely retained as vestigial remnants of the eyes the creatures' ancestors had when they first went into the depths. This is the explanation for most of the eyed-creatures in the abyss. Though I'm not sure which creatures you are referring to; the only creatures I recall living on hydrothermal vents are bacteria and worms. | [
"The presence of eyes, while somewhat useless in the abyss, are not an inherent disadvantage, and were likely retained as vestigial remnants of the eyes the creatures' ancestors had when they first went into the depths. This is the explanation for most of the eyed-creatures in the abyss.\n\nThough I'm not sure whic... | 2 | [] | 0 | <P> Because of the lack of light, some species do not have eyes. Those possessing eyes in this zone include the viperfish and the frill shark. Many forms of nekton live in the bathyal zone, such as squid, large whales, and octopuses. In the bathyal, some of the world's largest whales feed. Sponges, brachiopods, sea stars, and echinoids are also common in the bathyal zone. Animals in the bathyal zone are not threatened by predators that can see them, so they do not have powerful muscles. This zone is difficult for fish to live in since it is especially hard to find nutrients. They have become very energy efficient, and many have slow metabolic rates to conserve energy. The fish are characterized by weak muscles, soft skin, and slimy bodies. The adaptations of some of the fish that live there include small eyes and transparent skin.
<P> Cave-dwelling animals have been prompted, among other adaptations, to develop and improve non-visual sensory systems in order to orient in and adapt to permanently dark habitats. The olm's sensory system is also adapted to life in the subterranean aquatic environment. Unable to use vision for orientation, the olm compensates with other senses, which are better developed than in amphibians living on the surface. It retains larval proportions, like a long, slender body and a large, flattened head, and is thus able to carry a larger number of sensory receptors.
<P> Still deeper down the water column, below 1000 metres, are found the bathypelagic fishes. At this depth the ocean is pitch black, and the fish are sedentary, adapted to outputting minimum energy in a habitat with very little food and no sunlight. Bioluminescence is the only light available at these depths. This lack of light means the organisms have to rely on senses other than vision. Their eyes are small and may not function at all.
<P> An extension of this concept is that the eyes of predators typically have a zone of very acute vision at their centre, to assist in the identification of prey. In deep water organisms, it may not be the centre of the eye that is enlarged. The hyperiid amphipods are deep water animals that feed on organisms above them. Their eyes are almost divided into two, with the upper region thought to be involved in detecting the silhouettes of potential prey—or predators—against the faint light of the sky above. Accordingly, deeper water hyperiids, where the light against which the silhouettes must be compared is dimmer, have larger "upper-eyes", and may lose the lower portion of their eyes altogether. In the giant Antarctic isopod Glyptonotus a small ventral compound eye is physically completely separated from the much larger dorsal compound eye. Depth perception can be enhanced by having eyes which are enlarged in one direction; distorting the eye slightly allows the distance to the object to be estimated with a high degree of accuracy.
<P> Other challenges faced by life in the abyssal zone are the pressure and darkness caused by the zone’s depth. Many organisms living in this zone have evolved to minimize internal air spaces, such as swim bladders. This adaptation helps to protect them from the extreme pressure, which can reach around 11,000 psi. The absence of light also spawned many different adaptations, such as having large eyes or the ability to produce their own light. Large eyes would allow the detection and use of any light available, no matter how small. Another eye adaptation is that many deep-sea organisms have evolved eyes that are extremely sensitive to blue light. This is because as sunlight shines into the ocean, the water absorbs red light, while blue light, with its short wavelength continues moving down to the waters depths. This means that in the deep ocean, if any light remains then it is most likely blue light so animals wanting to capitalize on that light would need specialized eyes tuned to use it. Many organisms use other specialized organs or methods for sensing their surroundings, some in conjunction with specialized eyes. The ability to make their own light is called bioluminescence. Fishes and organisms living in the abyssal zone have developed this ability in order to not only produce light for vision, but also to lure in prey or a mate and conceal their silhouette. Scientists believe that over 90% of life in the abyssal zone use some form of bioluminescence. Many animals that are bioluminescent will produce blue light since it moves farther underwater than other colored lights, as explained earlier. Due to this lack of light, complex designs and bright colors are not needed. Most fish species have evolved to be transparent, red, or black in order to better blend in with the darkness and not waste energy on developing and maintaining bright or complex designs.
<P> The shape and function of the eyes in aquatic animals are dependent on water depth and light exposure: limited light exposure results in a retina similar to that of nocturnal terrestrial mammals. Additionally, cetaceans have two areas of high ganglion cell concentration ("best-vision areas"), where other aquatic mammals (e.g. seals, manatees, otters) only have one.
<P> Like other deepwater fish, "Opisthoproctus soleatus" needs to find its prey in a very poorly-lit environment and avoid being detected itself by a larger predatory species. The diverticulum in the eye provides a greater area of retina and is present in certain other deepwater fish such as "Macropinna microstoma". Its function may be connected to reception of information from below, the direction from which potential predators are most likely to approach. At the depths at which this fish lives, light is still directional, and many fish species have photophores (luminous organs) on their underside which provide them with camouflage by replicating the scintillations on the surface of the water above. "O. soleatus" does not have photophores, but instead has a luminous organ inside its anus. The light produced is shone on a reflector which reflects it downward between the ventral scales to create an effect similar to that of the photophores of other species.
| question: Why do animals who live deep underwater on Hydrothermal vents have eyes? context: <P> Because of the lack of light, some species do not have eyes. Those possessing eyes in this zone include the viperfish and the frill shark. Many forms of nekton live in the bathyal zone, such as squid, large whales, and octopuses. In the bathyal, some of the world's largest whales feed. Sponges, brachiopods, sea stars, and echinoids are also common in the bathyal zone. Animals in the bathyal zone are not threatened by predators that can see them, so they do not have powerful muscles. This zone is difficult for fish to live in since it is especially hard to find nutrients. They have become very energy efficient, and many have slow metabolic rates to conserve energy. The fish are characterized by weak muscles, soft skin, and slimy bodies. The adaptations of some of the fish that live there include small eyes and transparent skin.
<P> Cave-dwelling animals have been prompted, among other adaptations, to develop and improve non-visual sensory systems in order to orient in and adapt to permanently dark habitats. The olm's sensory system is also adapted to life in the subterranean aquatic environment. Unable to use vision for orientation, the olm compensates with other senses, which are better developed than in amphibians living on the surface. It retains larval proportions, like a long, slender body and a large, flattened head, and is thus able to carry a larger number of sensory receptors.
<P> Still deeper down the water column, below 1000 metres, are found the bathypelagic fishes. At this depth the ocean is pitch black, and the fish are sedentary, adapted to outputting minimum energy in a habitat with very little food and no sunlight. Bioluminescence is the only light available at these depths. This lack of light means the organisms have to rely on senses other than vision. Their eyes are small and may not function at all.
<P> An extension of this concept is that the eyes of predators typically have a zone of very acute vision at their centre, to assist in the identification of prey. In deep water organisms, it may not be the centre of the eye that is enlarged. The hyperiid amphipods are deep water animals that feed on organisms above them. Their eyes are almost divided into two, with the upper region thought to be involved in detecting the silhouettes of potential prey—or predators—against the faint light of the sky above. Accordingly, deeper water hyperiids, where the light against which the silhouettes must be compared is dimmer, have larger "upper-eyes", and may lose the lower portion of their eyes altogether. In the giant Antarctic isopod Glyptonotus a small ventral compound eye is physically completely separated from the much larger dorsal compound eye. Depth perception can be enhanced by having eyes which are enlarged in one direction; distorting the eye slightly allows the distance to the object to be estimated with a high degree of accuracy.
<P> Other challenges faced by life in the abyssal zone are the pressure and darkness caused by the zone’s depth. Many organisms living in this zone have evolved to minimize internal air spaces, such as swim bladders. This adaptation helps to protect them from the extreme pressure, which can reach around 11,000 psi. The absence of light also spawned many different adaptations, such as having large eyes or the ability to produce their own light. Large eyes would allow the detection and use of any light available, no matter how small. Another eye adaptation is that many deep-sea organisms have evolved eyes that are extremely sensitive to blue light. This is because as sunlight shines into the ocean, the water absorbs red light, while blue light, with its short wavelength continues moving down to the waters depths. This means that in the deep ocean, if any light remains then it is most likely blue light so animals wanting to capitalize on that light would need specialized eyes tuned to use it. Many organisms use other specialized organs or methods for sensing their surroundings, some in conjunction with specialized eyes. The ability to make their own light is called bioluminescence. Fishes and organisms living in the abyssal zone have developed this ability in order to not only produce light for vision, but also to lure in prey or a mate and conceal their silhouette. Scientists believe that over 90% of life in the abyssal zone use some form of bioluminescence. Many animals that are bioluminescent will produce blue light since it moves farther underwater than other colored lights, as explained earlier. Due to this lack of light, complex designs and bright colors are not needed. Most fish species have evolved to be transparent, red, or black in order to better blend in with the darkness and not waste energy on developing and maintaining bright or complex designs.
<P> The shape and function of the eyes in aquatic animals are dependent on water depth and light exposure: limited light exposure results in a retina similar to that of nocturnal terrestrial mammals. Additionally, cetaceans have two areas of high ganglion cell concentration ("best-vision areas"), where other aquatic mammals (e.g. seals, manatees, otters) only have one.
<P> Like other deepwater fish, "Opisthoproctus soleatus" needs to find its prey in a very poorly-lit environment and avoid being detected itself by a larger predatory species. The diverticulum in the eye provides a greater area of retina and is present in certain other deepwater fish such as "Macropinna microstoma". Its function may be connected to reception of information from below, the direction from which potential predators are most likely to approach. At the depths at which this fish lives, light is still directional, and many fish species have photophores (luminous organs) on their underside which provide them with camouflage by replicating the scintillations on the surface of the water above. "O. soleatus" does not have photophores, but instead has a luminous organ inside its anus. The light produced is shone on a reflector which reflects it downward between the ventral scales to create an effect similar to that of the photophores of other species.
| answer: The presence of eyes, while somewhat useless in the abyss, is not an inherent disadvantage, and they were likely retained as vestigial remnants of the eyes the creatures' ancestors had when they first went into the depths. This is the explanation for most of the eyed creatures in the abyss. Though I'm not sure which creatures you are referring to; the only creatures I recall living on hydrothermal vents are bacteria and worms. |
107,504 | 74vtpq | Where does the expression 'break a leg' come from? | Check out this answer by /u/caffarelli and /u/AshkenazeeYankee: [When/how was the phrase: 'Break a leg' first coined?](_URL_0_) TL;DR: we don't know, but there are some interesting theories about it. EDIT: fixed circular link | [
"Check out this answer by /u/caffarelli and /u/AshkenazeeYankee:\n\n[When/how was the phrase: 'Break a leg' first coined?](_URL_0_)\n\nTL;DR: we don't know, but there are some interesting theories about it.\n\nEDIT: fixed circular link"
] | 1 | [
"Check out this answer by /u/caffarelli and /u/AshkenazeeYankee:\n\n[When/how was the phrase: 'Break a leg' first coined?](_URL_0_)\n\nTL;DR: we don't know, but there are some interesting theories about it.\n\nEDIT: fixed circular link"
] | 1 | <P> "Break a leg" is an idiom in theatre used to wish a performer "good luck" in an ironic way. Well-wishers typically say "Break a leg" to actors and musicians before they go on stage to perform. The origin of the phrase remains obscure.
<P> Other idioms are deliberately figurative. "Break a leg", used as an ironic way of wishing good luck in a performance or presentation, may have arisen from the belief that one ought not to utter the words "good luck" to an actor. By wishing someone bad luck, it is supposed that the opposite will occur.
<P> Equivalent to the English actor's idiom "break a leg", the expression reflects a theatrical superstition in which wishing a person "good luck" is considered bad luck. The expression is commonly used in Italy off stage, as superstitions and customs travel through other professions and then into common use, and it can sometimes be heard outside of Italy.
<P> It alludes to "breaking on the wheel", a form of torture in which victims had their long bones broken by an iron bar while tied to a Catherine wheel. The quotation is used to suggest someone is "[employing] superabundant effort in the accomplishment of a small matter".
<P> Lance is credited with popularizing the phrase "if it ain't broke, don't fix it", when he was quoted saying it in the May 1977 issue of the magazine "Nation's Business". The expression became widespread, and William Safire wrote that it "has become a source of inspiration to anti-activists".
<P> The English-language title, "The Broken Spears", comes from a phrase in one version (BnF MS 22) of the Annals of Tlatelolco, "xaxama[n]toc omitl". According to historian James Lockhart, this is a mistranslation resulting from confusion between the Nahuatl words "mitl" "arrow", "dart" or "spear", and "omitl" "bone"; an alternative translation is thus "broken bones".
<P> BULLET::::- 2007: I Brake Together (a complex German-English wordplay: The German expression for "I am collapsing" ("Ich breche zusammen") can be literally translated as "I break (not: brake) together")
| question: Where does the expression 'break a leg' come from? context: <P> "Break a leg" is an idiom in theatre used to wish a performer "good luck" in an ironic way. Well-wishers typically say "Break a leg" to actors and musicians before they go on stage to perform. The origin of the phrase remains obscure.
<P> Other idioms are deliberately figurative. "Break a leg", used as an ironic way of wishing good luck in a performance or presentation, may have arisen from the belief that one ought not to utter the words "good luck" to an actor. By wishing someone bad luck, it is supposed that the opposite will occur.
<P> Equivalent to the English actor's idiom "break a leg", the expression reflects a theatrical superstition in which wishing a person "good luck" is considered bad luck. The expression is commonly used in Italy off stage, as superstitions and customs travel through other professions and then into common use, and it can sometimes be heard outside of Italy.
<P> It alludes to "breaking on the wheel", a form of torture in which victims had their long bones broken by an iron bar while tied to a Catherine wheel. The quotation is used to suggest someone is "[employing] superabundant effort in the accomplishment of a small matter".
<P> Lance is credited with popularizing the phrase "if it ain't broke, don't fix it", when he was quoted saying it in the May 1977 issue of the magazine "Nation's Business". The expression became widespread, and William Safire wrote that it "has become a source of inspiration to anti-activists".
<P> The English-language title, "The Broken Spears", comes from a phrase in one version (BnF MS 22) of the Annals of Tlatelolco, "xaxama[n]toc omitl". According to historian James Lockhart, this is a mistranslation resulting from confusion between the Nahuatl words "mitl" "arrow", "dart" or "spear", and "omitl" "bone"; an alternative translation is thus "broken bones".
<P> BULLET::::- 2007: I Brake Together (a complex German-English wordplay: The German expression for "I am collapsing" ("Ich breche zusammen") can be literally translated as "I break (not: brake) together")
| answer: Check out this answer by /u/caffarelli and /u/AshkenazeeYankee: [When/how was the phrase: 'Break a leg' first coined?](_URL_0_) TL;DR: we don't know, but there are some interesting theories about it. EDIT: fixed circular link |
24,848 | 15jp4g | Can you fail a lie detector test for things you think about intensely but never have done? | Forgive me for ignoring the specific content of your question (since I don't really know much about lie detector technology.) It sounds like you feel guilty and ashamed about these thoughts you're having, and fear about other people realizing you think about those things. > I’ve even thought about some very weird and illegal sexual fantasies Welcome to adulthood. If your sexual fantasies aren't sometimes weird and even illegal, then you're missing the point of a fantasy. While it's normal not to be particularly proud of the things you masturbate to, there isn't any particular reason you should feel guilt about these thoughts. While fantasies on their own are basically harmless, feelings of guilt resulting from an inability to cope with those fantasies are not. You might end up unnecessarily isolating yourself by avoiding opening up to those around you, just to avoid some perceived risk of detection as a deviant. If you feel that your fear of people learning of your fantasies is significantly affecting how you form or maintain relationships with people (family, friends, crushes, whoever) then I would strongly urge you to talk to a psychiatrist or someone similar. They will willingly listen to whatever fantasies are causing this internal tension, and may be able to help you embrace and understand that side of yourself. In particular, just telling someone the details of your fantasies and learning how mind-numbingly typical they are could be pretty reassuring.Sorry for the psychology talk, it's probably not what you were really asking about, but hopefully it will be helpful anyways. | [
"Forgive me for ignoring the specific content of your question (since I don't really know much about lie detector technology.) It sounds like you feel guilty and ashamed about these thoughts you're having, and fear about other people realizing you think about those things.\n\n > I’ve even thought about some very ... | 1 | [] | 0 | <P> Langleben was inspired to test lie detection while he was at Stanford University studying the effects of a drug on children with Attention Deficit Disorder (ADD). He found that these children have a more difficult time inhibiting the truth. He postulated that lying requires increased brain activity compared to truth because the truth must be suppressed, essentially creating more work for the brain. In 2001, he published his first work with lie detection using a modified form of the Guilty Knowledge Test, which is sometimes used in polygraph tests. The subjects, right-handed, male college students, were given a card and a Yes/No handheld clicker. They were told to lie to a computer asking questions while they underwent a brain scan only when the question would reveal their card. The subjects were given $20 for participating, and told they would receive more money if they deceived the computer; however, none did.
<P> The cumulative research evidence suggests that machines do detect deception better than chance, but with significant error rates and that strategies used to "beat" polygraph examinations, so-called countermeasures, may be effective. Despite unreliability, results are admissible in court in some countries such as Japan. Lie detector results are very rarely admitted in evidence in the US courts.
<P> Psychologists Charles F. Bond and Ahmet Uysal of Texas Christian University criticized the methodology used by Ekman and O'Sullivan and suspected the performance of the reported Truth Wizards to be due to chance (a type I error), concluding that "convincing evidence of lie detection wizardry has never been presented." Gary D. Bond from Winston Salem State University later replicated the experiment using a more rigorous protocol and found two people to be exceptionally fast and accurate at lie detection out of 112 law enforcement officers and 122 undergraduate students, a result consistent with Ekman and O'Sullivan's. Both experts at lie detection were female Native American BIA correctional officers.
<P> The control question test, also known as the probable lie test, was developed to overcome or mitigate the problems with the relevant-irrelevant testing method. Although the relevant questions in the probable lie test are used to obtain a reaction from liars, the physiological reactions that "distinguish" liars may also occur in innocent individuals who fear a false detection or feel passionately that they did not commit the crime. Therefore, although a physiological reaction may be occurring, the reasoning behind the response may be different. Further examination of the probable lie test has indicated that it is biased against innocent subjects. Those who are unable to think of a lie related to the relevant question will automatically fail the test.
<P> A recent study found that lying takes longer than telling the truth, and thus the time to answer a question may be used as a method of lie detection. However, it has also been shown that instant answers can be proof of a prepared lie. The only compromise is to try to surprise the victim and find a midway answer, not too quick, nor too long.
<P> Lie detectors use questioning techniques in conjunction with technology to measure human responses to these stimuli, to attempt to ascertain if that person is lying, or telling the truth. The most longstanding and still most frequently used measure is the polygraph test. A polygraph, popularly referred to as a lie detector, measures and records several physiological indices such as blood pressure, pulse, respiration, and skin conductivity while the subject is asked and answers a series of questions. The polygraph is currently being used in 19 states of the US. The use of polygraph in court testimony remains controversial, and no judge can force a witness to go through with the test although it is used extensively in post-conviction supervision, particularly of sex offenders. The reason that the test is controversial, and the reason that lie detector tests are fundamentally flawed, is the Othello error—an especially emotional, angry or distraught subject produces similar results to a supposed liar. Ekman's "Telling lies" has a chapter dedicated to the usage of the polygraph, in which he discusses the element of "fear" and states "The severity of the punishment will influence the truthful person's fear of being misjudged just as much as the lying person's fear of being spotted—both suffer the same consequences."
<P> Mark Frank proposes that deception is detected at the cognitive level. Lying requires deliberate conscious behavior, so listening to speech and watching body language are important factors in detecting lies. If a response to a question has a lot disturbances, less talking time, repeated words, and poor logical structure, then the person may be lying. Vocal cues such as frequency height and variation may also provide meaningful clues to deceit.
| question: Can you fail a lie detector test for things you think about intensely but never have done? context: <P> Langleben was inspired to test lie detection while he was at Stanford University studying the effects of a drug on children with Attention Deficit Disorder (ADD). He found that these children have a more difficult time inhibiting the truth. He postulated that lying requires increased brain activity compared to truth because the truth must be suppressed, essentially creating more work for the brain. In 2001, he published his first work with lie detection using a modified form of the Guilty Knowledge Test, which is sometimes used in polygraph tests. The subjects, right-handed, male college students, were given a card and a Yes/No handheld clicker. They were told to lie to a computer asking questions while they underwent a brain scan only when the question would reveal their card. The subjects were given $20 for participating, and told they would receive more money if they deceived the computer; however, none did.
<P> The cumulative research evidence suggests that machines do detect deception better than chance, but with significant error rates and that strategies used to "beat" polygraph examinations, so-called countermeasures, may be effective. Despite unreliability, results are admissible in court in some countries such as Japan. Lie detector results are very rarely admitted in evidence in the US courts.
<P> Psychologists Charles F. Bond and Ahmet Uysal of Texas Christian University criticized the methodology used by Ekman and O'Sullivan and suspected the performance of the reported Truth Wizards to be due to chance (a type I error), concluding that "convincing evidence of lie detection wizardry has never been presented." Gary D. Bond from Winston Salem State University later replicated the experiment using a more rigorous protocol and found two people to be exceptionally fast and accurate at lie detection out of 112 law enforcement officers and 122 undergraduate students, a result consistent with Ekman and O'Sullivan's. Both experts at lie detection were female Native American BIA correctional officers.
<P> The control question test, also known as the probable lie test, was developed to overcome or mitigate the problems with the relevant-irrelevant testing method. Although the relevant questions in the probable lie test are used to obtain a reaction from liars, the physiological reactions that "distinguish" liars may also occur in innocent individuals who fear a false detection or feel passionately that they did not commit the crime. Therefore, although a physiological reaction may be occurring, the reasoning behind the response may be different. Further examination of the probable lie test has indicated that it is biased against innocent subjects. Those who are unable to think of a lie related to the relevant question will automatically fail the test.
<P> A recent study found that lying takes longer than telling the truth, and thus the time to answer a question may be used as a method of lie detection. However, it has also been shown that instant answers can be proof of a prepared lie. The only compromise is to try to surprise the victim and find a midway answer, not too quick, nor too long.
<P> Lie detectors use questioning techniques in conjunction with technology to measure human responses to these stimuli, to attempt to ascertain if that person is lying, or telling the truth. The most longstanding and still most frequently used measure is the polygraph test. A polygraph, popularly referred to as a lie detector, measures and records several physiological indices such as blood pressure, pulse, respiration, and skin conductivity while the subject is asked and answers a series of questions. The polygraph is currently being used in 19 states of the US. The use of polygraph in court testimony remains controversial, and no judge can force a witness to go through with the test although it is used extensively in post-conviction supervision, particularly of sex offenders. The reason that the test is controversial, and the reason that lie detector tests are fundamentally flawed, is the Othello error—an especially emotional, angry or distraught subject produces similar results to a supposed liar. Ekman's "Telling lies" has a chapter dedicated to the usage of the polygraph, in which he discusses the element of "fear" and states "The severity of the punishment will influence the truthful person's fear of being misjudged just as much as the lying person's fear of being spotted—both suffer the same consequences."
<P> Mark Frank proposes that deception is detected at the cognitive level. Lying requires deliberate conscious behavior, so listening to speech and watching body language are important factors in detecting lies. If a response to a question has a lot disturbances, less talking time, repeated words, and poor logical structure, then the person may be lying. Vocal cues such as frequency height and variation may also provide meaningful clues to deceit.
| answer: Forgive me for ignoring the specific content of your question (since I don't really know much about lie detector technology.) It sounds like you feel guilty and ashamed about these thoughts you're having, and fear about other people realizing you think about those things. > I’ve even thought about some very weird and illegal sexual fantasies Welcome to adulthood. If your sexual fantasies aren't sometimes weird and even illegal, then you're missing the point of a fantasy. While it's normal not to be particularly proud of the things you masturbate to, there isn't any particular reason you should feel guilt about these thoughts. While fantasies on their own are basically harmless, feelings of guilt resulting from an inability to cope with those fantasies are not. You might end up unnecessarily isolating yourself by avoiding opening up to those around you, just to avoid some perceived risk of detection as a deviant. If you feel that your fear of people learning of your fantasies is significantly affecting how you form or maintain relationships with people (family, friends, crushes, whoever) then I would strongly urge you to talk to a psychiatrist or someone similar. They will willingly listen to whatever fantasies are causing this internal tension, and may be able to help you embrace and understand that side of yourself. In particular, just telling someone the details of your fantasies and learning how mind-numbingly typical they are could be pretty reassuring.Sorry for the psychology talk, it's probably not what you were really asking about, but hopefully it will be helpful anyways. |
186,065 | 3x0541 | How big of an impact does the smog in China have on the rest of the world? And, in detail, what type of impact is it having? | Being "downwind" of China, Japan and Korea are shown on this site, which shows air pollution 2.5-micron particulate matter counts ("PM2.5"): _URL_0_ It's in Japanese but the format and maps are pretty intuitive. The levels can often reach the moderately bad level, which can cause problems for people with asthma, etc. | [
"Being \"downwind\" of China; Japan and Korea are shown on this site that shows air pollution 2.5-micron particulate matter counts \"PM2.5\"\n\n_URL_0_\n\nIt's in Japanese but the format and maps are pretty intuitive. The levels can often reach the moderately bad level, which can cause problems for people with asth... | 1 | [
"Being \"downwind\" of China; Japan and Korea are shown on this site that shows air pollution 2.5-micron particulate matter counts \"PM2.5\"\n\n_URL_0_\n\nIt's in Japanese but the format and maps are pretty intuitive. The levels can often reach the moderately bad level, which can cause problems for people with asth... | 1 | <P> Modern studies continue to find links between mortality and the presence of smog. One study, published in Nature magazine, found that smog episodes in the city of Jinan, a large city in eastern China, during 2011–15, were associated with a 5.87% (95% CI 0.16–11.58%) increase in the rate of overall mortality. This study highlights the effect of exposure to air pollution on the rate of mortality in China.
<P> Zhong Nanshan, the president of the China Medical Association, warned in 2012 that air pollution could become China's biggest health threat. Measurements by Beijing municipal government in January 2013 showed that highest recorded level of PM2.5 (particulate matter smaller than 2.5 micrometers in size), was at nearly 1,000 μg per cubic meter. PM, consisting of K, Ca, NO, and SO, had the most fearsome impact on people’s health in Beijing throughout the year, especially in cold seasons. Traces of smog from mainland China has been observed to reach as far as California.
<P> Pollutants emitted into the air and water by China's rapid industrialization has brought major health concerns. The anthropogenic activities in China have decreased food safety and antibiotic resistance and have increased resurging infectious diseases. Air pollution, alone, is directly linked to increased risk of lung cancer, breast cancer, and bladder cancer and has already led to more than 1.3 million premature deaths in China and linked to 1.6 million deaths a year - 17% of all annual Chinese deaths. 92% of Chinese have had at least 120 annual hours of unhealthy air determined by EPA standards. As the World Health Organization states hazardous air is more deadly than AIDS, malaria, breast cancer, or tuberculosis, than Chinese air quality is especially problematic because of the scale at which it occurs.
<P> Environmental pollution and ecological degradation has resulted in economic losses for China. In 2005, economic losses (mainly from air pollution) were calculated at 7.7% of China's GDP. This grew to 10.3% by 2002 and the economic loss from water pollution (6.1%) began to exceed that caused by air pollution.
<P> The immense urban growth of Chinese cities substantially increases the need for consumer goods, vehicles and energy. This in turn increases the burning of fossil fuels, resulting in smog. Exposure to Smog poses a threat to the health of Chinese citizens. A study from 2012 shows fine particles in the air, which cause respiratory and cardiovascular diseases are one of the key pollutants that are accounted for a large fraction of damage on the health of Chinese citizens.
<P> In 1997, the World Bank issued a report targeting China's policy towards industrial pollution. The report stated that "hundreds of thousands of premature deaths and incidents of serious respiratory illness have been caused by exposure to industrial air pollution. Seriously contaminated by industrial discharges, many of China's waterways are largely unfit for direct human use". However, the report did acknowledge that environmental regulations and industrial reforms had had some effect. It was determined that continued environmental reforms were likely to have a large effect on reducing industrial pollution.
<P> The 2013 Eastern China smog was a severe air pollution episode that affected East China, including all or parts of the municipalities of Shanghai and Tianjin, and the provinces of Hebei, Shandong, Jiangsu, Anhui, Henan, and Zhejiang, during December 2013. A lack of cold air flow, combined with slow-moving air masses carrying industrial emissions, collected airborne pollutants to form a thick layer of smog over the region. Levels of PM particulate matter averaged over 150 micrograms per cubic metre; in some areas, they were 300 to 500 micrograms per cubic metre.
| question: How big of an impact does the smog in China have on the rest of the world? And, in detail, what type of impact is it having? context: <P> Modern studies continue to find links between mortality and the presence of smog. One study, published in Nature magazine, found that smog episodes in the city of Jinan, a large city in eastern China, during 2011–15, were associated with a 5.87% (95% CI 0.16–11.58%) increase in the rate of overall mortality. This study highlights the effect of exposure to air pollution on the rate of mortality in China.
<P> Zhong Nanshan, the president of the China Medical Association, warned in 2012 that air pollution could become China's biggest health threat. Measurements by Beijing municipal government in January 2013 showed that highest recorded level of PM2.5 (particulate matter smaller than 2.5 micrometers in size), was at nearly 1,000 μg per cubic meter. PM, consisting of K, Ca, NO, and SO, had the most fearsome impact on people’s health in Beijing throughout the year, especially in cold seasons. Traces of smog from mainland China has been observed to reach as far as California.
<P> Pollutants emitted into the air and water by China's rapid industrialization has brought major health concerns. The anthropogenic activities in China have decreased food safety and antibiotic resistance and have increased resurging infectious diseases. Air pollution, alone, is directly linked to increased risk of lung cancer, breast cancer, and bladder cancer and has already led to more than 1.3 million premature deaths in China and linked to 1.6 million deaths a year - 17% of all annual Chinese deaths. 92% of Chinese have had at least 120 annual hours of unhealthy air determined by EPA standards. As the World Health Organization states hazardous air is more deadly than AIDS, malaria, breast cancer, or tuberculosis, than Chinese air quality is especially problematic because of the scale at which it occurs.
<P> Environmental pollution and ecological degradation has resulted in economic losses for China. In 2005, economic losses (mainly from air pollution) were calculated at 7.7% of China's GDP. This grew to 10.3% by 2002 and the economic loss from water pollution (6.1%) began to exceed that caused by air pollution.
<P> The immense urban growth of Chinese cities substantially increases the need for consumer goods, vehicles and energy. This in turn increases the burning of fossil fuels, resulting in smog. Exposure to Smog poses a threat to the health of Chinese citizens. A study from 2012 shows fine particles in the air, which cause respiratory and cardiovascular diseases are one of the key pollutants that are accounted for a large fraction of damage on the health of Chinese citizens.
<P> In 1997, the World Bank issued a report targeting China's policy towards industrial pollution. The report stated that "hundreds of thousands of premature deaths and incidents of serious respiratory illness have been caused by exposure to industrial air pollution. Seriously contaminated by industrial discharges, many of China's waterways are largely unfit for direct human use". However, the report did acknowledge that environmental regulations and industrial reforms had had some effect. It was determined that continued environmental reforms were likely to have a large effect on reducing industrial pollution.
<P> The 2013 Eastern China smog was a severe air pollution episode that affected East China, including all or parts of the municipalities of Shanghai and Tianjin, and the provinces of Hebei, Shandong, Jiangsu, Anhui, Henan, and Zhejiang, during December 2013. A lack of cold air flow, combined with slow-moving air masses carrying industrial emissions, collected airborne pollutants to form a thick layer of smog over the region. Levels of PM particulate matter averaged over 150 micrograms per cubic metre; in some areas, they were 300 to 500 micrograms per cubic metre.
| answer: Being "downwind" of China, Japan and Korea are shown on this site, which shows air pollution 2.5-micron particulate matter counts ("PM2.5"): _URL_0_ It's in Japanese but the format and maps are pretty intuitive. The levels can often reach the moderately bad level, which can cause problems for people with asthma, etc. |
223,201 | 6f71b8 | Why do we build larger particle colliders with bigger diameters instead smaller diameters traveled multiple times? | To go to higher energies at a fixed bending radius, you need stronger bending magnets. The momentum per unit charge of a particle along the central orbit inside a bending element is called its *magnetic rigidity*: Bρ = p/q. B is the magnetic field strength of the bending magnet, ρ is the bending radius of the central orbit, p is the momentum of the test particle, and q is the charge of the test particle. If you want to increase p while leaving ρ fixed, you need to increase the magnetic field strength proportionally to p (or in terms of energy, sqrt[E^(2) - m^(2)]). We can only make our bending magnets so strong, and it ends up being better just to increase the bending radius. That means you need a larger diameter accelerator. Or you could sidestep the need to bend the beam entirely by using a linear accelerator. But then you lose the ability to put the beam particles on target (or collide them with another beam) more than once. | [
"To go to higher energies at a fixed bending radius, you need stronger bending magnets. The momentum per unit charge of a particle along the central orbit inside a bending element is called its *magnetic rigidty*: Bρ = p/q.\n\nB is the magnetic field strength of the bending magnet, ρ is the bending radius of the ce... | 2 | [
"To go to higher energies at a fixed bending radius, you need stronger bending magnets. The momentum per unit charge of a particle along the central orbit inside a bending element is called its *magnetic rigidty*: Bρ = p/q.\n\nB is the magnetic field strength of the bending magnet, ρ is the bending radius of the ce... | 2 | <P> The shape of the collider is also important. High energy physics colliders collect particles into bunches, and then collide the bunches together. However, only a very tiny fraction of particles in each bunch actually collide. In circular colliders, these bunches travel around a roughly circular shape in opposite directions and therefore can be collided over and over. This enables a high rate of collisions and facilitates collection of a large amount of data, which is important for precision measurements or for observing very rare decays. However, the energy of the bunches is limited due to losses from synchrotron radiation. In linear colliders, particles move in a straight line and therefore do not suffer from synchrotron radiation, but bunches cannot be re-used and it is therefore more challenging to collect large amounts of data.
<P> Because the size of the dispersed phase may be difficult to measure, and because colloids have the appearance of solutions, colloids are sometimes identified and characterized by their physico-chemical and transport properties. For example, if a colloid consists of a solid phase dispersed in a liquid, the solid particles will not diffuse through a membrane, whereas with a true solution the dissolved ions or molecules will diffuse through a membrane. Because of the size exclusion, the colloidal particles are unable to pass through the pores of an ultrafiltration membrane with a size smaller than their own dimension. The smaller the size of the pore of the ultrafiltration membrane, the lower the concentration of the dispersed colloidal particles remaining in the ultrafiltered liquid. The measured value of the concentration of a truly dissolved species will thus depend on the experimental conditions applied to separate it from the colloidal particles also dispersed in the liquid. This is particularly important for solubility studies of readily hydrolyzed species such as Al, Eu, Am, Cm, or organic matter complexing these species.
<P> A variation commonly used for particle physics research is a collider, also called a "storage ring collider". Two circular synchrotrons are built in close proximityusually on top of each other and using the same magnets (which are then of more complicated design to accommodate both beam tubes). Bunches of particles travel in opposite directions around the two accelerators and collide at intersections between them. This can increase the energy enormously; whereas in a fixed-target experiment the energy available to produce new particles is proportional to the square root of the beam energy, in a collider the available energy is linear.
<P> This means that changing to particles that are half as big, keeping the size of the column the same, will double the performance, but increase the required pressure by a factor of four. Larger particles are used in preparative HPLC (column diameters 5 cm up to 30 cm) and for non-HPLC applications such as solid-phase extraction.
<P> Collisions at velocities that would result in the fragmentation of equal sized particles can instead result in growth via mass transfer from the small to the larger particle. This process requires an initial population of 'lucky' particles that have grown larger than the majority of particles. These particles may form if collision velocities have a wide distribution, with a small fraction occurring at velocities that allow objects beyond the bouncing barrier to stick. However, the growth via mass transfer is slow relative to radial drift timescales, although it may occur locally if radial drift is halted locally at a pressure bump allowing the formation of planetesimals in 10^5 yrs.
<P> Colliders are used as a research tool in particle physics by accelerating particles to very high kinetic energy and letting them impact other particles. Analysis of the byproducts of these collisions gives scientists good evidence of the structure of the subatomic world and the laws of nature governing it. These may become apparent only at high energies and for tiny periods of time, and therefore may be hard or impossible to study in other ways.
<P> It is the very large difference in size between the colloidal particle, which may be 1μm across, and the size of the ions or molecules, which are less than 1 nm across, that makes diffusiophoresis closely related to diffusioosomosis at a flat surface. In both cases the forces that drive the motion are largely localised to the interfacial region, which is a few molecules across and so typically of order a nanometer across. Over distances of order a nanometer, there is little difference between the surface of a colloidal particle 1 μm across, and a flat surface.
| question: Why do we build larger particle colliders with bigger diameters instead smaller diameters traveled multiple times? context: <P> The shape of the collider is also important. High energy physics colliders collect particles into bunches, and then collide the bunches together. However, only a very tiny fraction of particles in each bunch actually collide. In circular colliders, these bunches travel around a roughly circular shape in opposite directions and therefore can be collided over and over. This enables a high rate of collisions and facilitates collection of a large amount of data, which is important for precision measurements or for observing very rare decays. However, the energy of the bunches is limited due to losses from synchrotron radiation. In linear colliders, particles move in a straight line and therefore do not suffer from synchrotron radiation, but bunches cannot be re-used and it is therefore more challenging to collect large amounts of data.
<P> Because the size of the dispersed phase may be difficult to measure, and because colloids have the appearance of solutions, colloids are sometimes identified and characterized by their physico-chemical and transport properties. For example, if a colloid consists of a solid phase dispersed in a liquid, the solid particles will not diffuse through a membrane, whereas with a true solution the dissolved ions or molecules will diffuse through a membrane. Because of the size exclusion, the colloidal particles are unable to pass through the pores of an ultrafiltration membrane with a size smaller than their own dimension. The smaller the size of the pore of the ultrafiltration membrane, the lower the concentration of the dispersed colloidal particles remaining in the ultrafiltered liquid. The measured value of the concentration of a truly dissolved species will thus depend on the experimental conditions applied to separate it from the colloidal particles also dispersed in the liquid. This is particularly important for solubility studies of readily hydrolyzed species such as Al, Eu, Am, Cm, or organic matter complexing these species.
<P> A variation commonly used for particle physics research is a collider, also called a "storage ring collider". Two circular synchrotrons are built in close proximityusually on top of each other and using the same magnets (which are then of more complicated design to accommodate both beam tubes). Bunches of particles travel in opposite directions around the two accelerators and collide at intersections between them. This can increase the energy enormously; whereas in a fixed-target experiment the energy available to produce new particles is proportional to the square root of the beam energy, in a collider the available energy is linear.
<P> This means that changing to particles that are half as big, keeping the size of the column the same, will double the performance, but increase the required pressure by a factor of four. Larger particles are used in preparative HPLC (column diameters 5 cm up to 30 cm) and for non-HPLC applications such as solid-phase extraction.
<P> Collisions at velocities that would result in the fragmentation of equal sized particles can instead result in growth via mass transfer from the small to the larger particle. This process requires an initial population of 'lucky' particles that have grown larger than the majority of particles. These particles may form if collision velocities have a wide distribution, with a small fraction occurring at velocities that allow objects beyond the bouncing barrier to stick. However, the growth via mass transfer is slow relative to radial drift timescales, although it may occur locally if radial drift is halted locally at a pressure bump allowing the formation of planetesimals in 10^5 yrs.
<P> Colliders are used as a research tool in particle physics by accelerating particles to very high kinetic energy and letting them impact other particles. Analysis of the byproducts of these collisions gives scientists good evidence of the structure of the subatomic world and the laws of nature governing it. These may become apparent only at high energies and for tiny periods of time, and therefore may be hard or impossible to study in other ways.
<P> It is the very large difference in size between the colloidal particle, which may be 1μm across, and the size of the ions or molecules, which are less than 1 nm across, that makes diffusiophoresis closely related to diffusioosomosis at a flat surface. In both cases the forces that drive the motion are largely localised to the interfacial region, which is a few molecules across and so typically of order a nanometer across. Over distances of order a nanometer, there is little difference between the surface of a colloidal particle 1 μm across, and a flat surface.
| answer: To go to higher energies at a fixed bending radius, you need stronger bending magnets. The momentum per unit charge of a particle along the central orbit inside a bending element is called its *magnetic rigidity*: Bρ = p/q. B is the magnetic field strength of the bending magnet, ρ is the bending radius of the central orbit, p is the momentum of the test particle, and q is the charge of the test particle. If you want to increase p while leaving ρ fixed, you need to increase the magnetic field strength proportionally to p (or in terms of energy, sqrt[E^(2) - m^(2)]). We can only make our bending magnets so strong, and it ends up being better just to increase the bending radius. That means you need a larger diameter accelerator. Or you could sidestep the need to bend the beam entirely by using a linear accelerator. But then you lose the ability to put the beam particles on target (or collide them with another beam) more than once. |
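The rigidity relation quoted in the answer above is easy to sanity-check numerically. The short Python sketch below is an editor's illustration rather than part of the dataset row; the 7 TeV beam momentum, the 2804 m bending radius, and the dipole_field_tesla helper are all assumed, roughly LHC-like choices, not values stated in the thread. It shows that at a fixed radius the required dipole field grows in direct proportion to the beam momentum, which is why higher energy eventually forces a larger ring once the magnets reach their practical limit.

```python
# Sketch of the rigidity relation B*rho = p/q from the answer above.
# All numbers are illustrative assumptions (roughly LHC-like), not dataset values.

def dipole_field_tesla(p_gev_per_c: float, bending_radius_m: float, charge_e: float = 1.0) -> float:
    """Bending field B (tesla) needed to hold momentum p on radius rho.

    Standard numerical form: B*rho [T*m] = p [GeV/c] / (0.299792458 * q [e]).
    """
    rigidity_tm = p_gev_per_c / (0.299792458 * charge_e)
    return rigidity_tm / bending_radius_m

if __name__ == "__main__":
    radius_m = 2804.0  # assumed dipole bending radius
    for p_gev in (450.0, 7000.0):  # injection-like vs. top-energy-like momenta, GeV/c
        print(f"p = {p_gev:6.0f} GeV/c -> B = {dipole_field_tesla(p_gev, radius_m):.2f} T")
    # Prints roughly 0.54 T and 8.33 T: doubling p at a fixed radius doubles B,
    # so once the magnets are maxed out, only the radius (ring diameter) can grow.
```

Read the other way, at a fixed maximum field the radius has to scale linearly with momentum, which matches the answer's point that sending the beam around a small ring more times cannot substitute for a bigger ring.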
127,317 | 63mm0h | How can my portable battery charger drain itself completely when charging my phone? Shouldn't the two batteries come to equilibrium? | Portable chargers and phones are not directly connected batteries. The portable charger has a boost converter that takes the voltage (between 3 and 4v) from the internal battery(ies), oscillates it to create an alternating current that can be boosted in voltage using a coil, capacitor or both, and then it gets rectified back to DC at 5v (USB standard). (Edit: in a discussion below, I was corrected: the current is not alternating; it is in fact pulsating DC. It therefore doesn't get rectified to 5v, it gets smoothed to reduce ripple.) The phone, on the other side, takes the 5v from the USB input and steps it down to the required voltage to charge the internal phone battery. That varies according to the technology of the battery, the charge remaining in it, the charge speed the phone wants, and so on. When you connect two batteries directly, yes, they more or less reach an equilibrium, because as the most charged one (higher voltage) recharges the empty one, its voltage decreases as the other one's increases. When there is not enough difference of potential (voltage, in other words) to make the charge flow from one battery to another, they reach equilibrium. In the portable chargers, the boost circuitry makes sure the output voltage is 5v as long as possible, of course by drawing more current from the lower voltage internal cells, because on both sides of the conversion circuit the power should be almost equal [ P = I * V ] (ignoring losses in the conversion). So the charger extracts more current from its cells to generate the 5v the phone needs, and the phone, as long as it has the right voltage, can charge the internal battery, thus eliminating the "equilibrium". I'm sure a lot of people can give a more detailed and rigorous answer, but this is the way I get it. PS: sorry for any mistakes, English is not my main language. Regards! | [
"Portable chargers and phones are not directly connected batteries. The portable charger has a boost converter, that takes the voltage (between 3 and 4v) from the internal battery(ies), oscillates it to create an alternating current that can be boosted in voltage using a coil, capacitor or both, and then it gets re... | 9 | [
"Portable chargers and phones are not directly connected batteries. The portable charger has a boost converter, that takes the voltage (between 3 and 4v) from the internal battery(ies), oscillates it to create an alternating current that can be boosted in voltage using a coil, capacitor or both, and then it gets re... | 8 | <P> A series charge controller or series regulator disables further current flow into batteries when they are full. A shunt charge controller or shunt regulator diverts excess electricity to an auxiliary or "shunt" load, such as an electric water heater, when batteries are full.
<P> If a battery is connected to a significant load during charging, the end of the Uo-phase may never be reached and the battery will gas and be damaged, depending on the charge current relative to the battery capacity.
<P> The charging protocol (how much voltage or current for how long, and what to do when charging is complete, for instance) depends on the size and type of the battery being charged. Some battery types have high tolerance for overcharging (i.e., continued charging after the battery has been fully charged) and can be recharged by connection to a constant voltage source or a constant current source, depending on battery type. Simple chargers of this type must be manually disconnected at the end of the charge cycle, and some battery types absolutely require, or may use a timer, to cut off charging current at some fixed time, approximately when charging is complete. Other battery types cannot withstand over-charging, being damaged (reduced capacity, reduced lifetime), over heating or even exploding.
<P> This is beneficial because a weak or dead battery will drain the charge from a strong battery if both are connected directly together. The disadvantage to an isolator is added cost and complexity, and if a diode-type isolator is used (which is very common) there is additional voltage drop in the circuit between the charging source and the batteries.
<P> In this stage, the battery is continued being charged at a constant (over)voltage U, but the charge current is decreasing. The decrease is imposed by the battery. The voltage in the U-phase is too high to be applied indefinitely (hence, overvoltage), but it allows charging the battery fully in a relatively short time. The U-phase is concluded when the charge current goes below a threshold I, after which the U-phase is entered. This happens when the battery is charged to around 95% of its capacity. Some manufacturers follow this stage by a second constant-current stage (with a gradually increasing voltage) before continuing with the U-phase. The voltage U may be the same as U in the previous stage, or it may be taken slightly higher.
<P> The charging battery load can be viewed as a resistor which absorbs power, but stores this for later use (instead of immediately dissipating heat). It is included as part of the "control resistor". The charging battery load is not treated as a "base resistance" though, as the charging circuit can be turned off at any time. When off, the operations can be continued without interruption using the power stored in the batteries.
<P> The constant charging method adjusts the output voltage of charging devices or the resistance in series with the battery to keep the current constant. It is using the constant current value form the beginning to the end of charging. As nickel-cadmium batteries are easy to polarize during conventional charging, both conventional constant voltage and constant current charging will make the electrolyte continuously produce hydrogen-oxygen gas. Under the action of internal high pressure, the oxygen penetrates to the negative electrode and interacts with the cadmium plate to generate CdO, resulting in the decrease of effective capacity of the electrode plate.As the acceptable current capacity of the battery decreases gradually with the progress of the charging process, it will lead to the overcharging of the battery in the later charging period. Constant current in the late charge is mostly used for electrolysis of water to produce gas, making the battery internal pressure rise, do not control easy to make the battery dry due to water loss. Eventually, it will also lead to a sharp drop in battery capacity.
| question: How can my portable battery charger drain itself completely when charging my phone? Shouldn't the two batteries come to equilibrium? context: <P> A series charge controller or series regulator disables further current flow into batteries when they are full. A shunt charge controller or shunt regulator diverts excess electricity to an auxiliary or "shunt" load, such as an electric water heater, when batteries are full.
<P> If a battery is connected to a significant load during charging, the end of the Uo-phase may never be reached and the battery will gas and be damaged, depending on the charge current relative to the battery capacity.
<P> The charging protocol (how much voltage or current for how long, and what to do when charging is complete, for instance) depends on the size and type of the battery being charged. Some battery types have high tolerance for overcharging (i.e., continued charging after the battery has been fully charged) and can be recharged by connection to a constant voltage source or a constant current source, depending on battery type. Simple chargers of this type must be manually disconnected at the end of the charge cycle, and some battery types absolutely require, or may use a timer, to cut off charging current at some fixed time, approximately when charging is complete. Other battery types cannot withstand over-charging, being damaged (reduced capacity, reduced lifetime), over heating or even exploding.
<P> This is beneficial because a weak or dead battery will drain the charge from a strong battery if both are connected directly together. The disadvantage to an isolator is added cost and complexity, and if a diode-type isolator is used (which is very common) there is additional voltage drop in the circuit between the charging source and the batteries.
<P> In this stage, the battery is continued being charged at a constant (over)voltage U, but the charge current is decreasing. The decrease is imposed by the battery. The voltage in the U-phase is too high to be applied indefinitely (hence, overvoltage), but it allows charging the battery fully in a relatively short time. The U-phase is concluded when the charge current goes below a threshold I, after which the U-phase is entered. This happens when the battery is charged to around 95% of its capacity. Some manufacturers follow this stage by a second constant-current stage (with a gradually increasing voltage) before continuing with the U-phase. The voltage U may be the same as U in the previous stage, or it may be taken slightly higher.
<P> The charging battery load can be viewed as a resistor which absorbs power, but stores this for later use (instead of immediately dissipating heat). It is included as part of the "control resistor". The charging battery load is not treated as a "base resistance" though, as the charging circuit can be turned off at any time. When off, the operations can be continued without interruption using the power stored in the batteries.
<P> The constant charging method adjusts the output voltage of charging devices or the resistance in series with the battery to keep the current constant. It is using the constant current value form the beginning to the end of charging. As nickel-cadmium batteries are easy to polarize during conventional charging, both conventional constant voltage and constant current charging will make the electrolyte continuously produce hydrogen-oxygen gas. Under the action of internal high pressure, the oxygen penetrates to the negative electrode and interacts with the cadmium plate to generate CdO, resulting in the decrease of effective capacity of the electrode plate.As the acceptable current capacity of the battery decreases gradually with the progress of the charging process, it will lead to the overcharging of the battery in the later charging period. Constant current in the late charge is mostly used for electrolysis of water to produce gas, making the battery internal pressure rise, do not control easy to make the battery dry due to water loss. Eventually, it will also lead to a sharp drop in battery capacity.
| answer: Portable chargers and phones are not directly connected batteries. The portable charger has a boost converter that takes the voltage (between 3 and 4v) from the internal battery(ies), oscillates it to create an alternating current that can be boosted in voltage using a coil, capacitor or both, and then it gets rectified back to DC at 5v (USB standard). (Edit: in a discussion below, I was corrected: the current is not alternating; it is in fact pulsating DC. It therefore doesn't get rectified to 5v, it gets smoothed to reduce ripple.) The phone, on the other side, takes the 5v from the USB input and steps it down to the required voltage to charge the internal phone battery. That varies according to the technology of the battery, the charge remaining in it, the charge speed the phone wants, and so on. When you connect two batteries directly, yes, they more or less reach an equilibrium, because as the most charged one (higher voltage) recharges the empty one, its voltage decreases as the other one's increases. When there is not enough difference of potential (voltage, in other words) to make the charge flow from one battery to another, they reach equilibrium. In the portable chargers, the boost circuitry makes sure the output voltage is 5v as long as possible, of course by drawing more current from the lower voltage internal cells, because on both sides of the conversion circuit the power should be almost equal [ P = I * V ] (ignoring losses in the conversion). So the charger extracts more current from its cells to generate the 5v the phone needs, and the phone, as long as it has the right voltage, can charge the internal battery, thus eliminating the "equilibrium". I'm sure a lot of people can give a more detailed and rigorous answer, but this is the way I get it. PS: sorry for any mistakes, English is not my main language. Regards! |
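As a rough check on the power-balance argument in the answer above, here is a small Python sketch (an editor's illustration, not part of the dataset row). The 3.7 V cell voltage, the 5 V / 2 A USB load, the 90% converter efficiency, and the cell_current_amps helper are all assumed figures chosen for illustration, not values taken from the thread.

```python
# Sketch of the boost-converter power balance described in the answer above:
# the converter regulates ~5 V at the USB output by drawing whatever current it
# needs from the lower-voltage cells. All numbers are assumed, illustrative values.

def cell_current_amps(v_out: float = 5.0, i_out: float = 2.0,
                      v_cell: float = 3.7, efficiency: float = 0.90) -> float:
    """Current drawn from the pack's cells for a given USB-side load."""
    p_out = v_out * i_out          # power delivered to the phone over USB
    p_in = p_out / efficiency      # power pulled from the cells (includes conversion loss)
    return p_in / v_cell           # I = P / V on the low-voltage side

if __name__ == "__main__":
    i_cells = cell_current_amps()
    print(f"5.0 V @ 2.0 A out -> about {i_cells:.1f} A drawn from the 3.7 V cells")
    # Roughly 3 A in for 2 A out: the converter keeps holding 5 V until its cells
    # are empty, so the pack and the phone battery never settle at a shared
    # voltage the way two directly wired cells would.
```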
15,484 | 20lybi | why taking antibiotics long term does not lead to antibiotic resistance, but taking them for too short of a time (ie not finishing the prescription) does? | It's like weeding a garden. If you don't get the roots out, the weeds come back. Kill all the weeds and you keep a healthy garden; leave the roots and they come back. But with antibiotics it's survival of the fittest: the most resistant survive the die-off and repopulate with a more resistant strain. | [
"It's like weeding a garden.\n\nIf you don't get the roots out, the weeds come back.\n\nKill all the weeds and you keep a healthy garden, leave the roots and they come back.\n\nBut with anti-biotics it survival of the fittest, the most resistant survive the die off and re-populate with a more resistant strain.",
... | 4 | [
"It's like weeding a garden.\n\nIf you don't get the roots out, the weeds come back.\n\nKill all the weeds and you keep a healthy garden, leave the roots and they come back.\n\nBut with anti-biotics it survival of the fittest, the most resistant survive the die off and re-populate with a more resistant strain."
] | 1 | <P> Antibiotic resistance increases with duration of treatment. Therefore, as long as an effective minimum is kept, shorter courses of antibiotics are likely to decrease rates of resistance, reduce cost, and have better outcomes with fewer complications. Short course regimens exist for community-acquired pneumonia spontaneous bacterial peritonitis, suspected lung infections in intense care wards, so-called acute abdomen, middle ear infections, sinusitis and throat infections, and penetrating gut injuries. In some situations a short course may not cure the infection as well as a long course. A BMJ editorial recommended that antibiotics can often be safely stopped 72 hours after symptoms resolve.
<P> Oral antibiotics are recommended for no longer than three months as antibiotic courses exceeding this duration are associated with the development of antibiotic resistance and show no clear benefit over shorter courses. Furthermore, if long-term oral antibiotics beyond three months are thought to be necessary, it is recommended that benzoyl peroxide and/or a retinoid be used at the same time to limit the risk of "C. acnes" developing antibiotic resistance.
<P> A course of one week of antibiotics is usually sufficient to treat the condition. However, if the condition recurs, antibiotics can be given in a cyclical fashion in order to prevent tolerance. For example, antibiotics may be given for a week, followed by three weeks off antibiotics, followed by another week of treatment. Alternatively, the choice of antibiotic used can be cycled.
<P> BULLET::::- antibiotics, called prophylactic when given as prevention rather than as treatment of infection. However, long-term use of antibiotics leads to resistance of bacteria. While humans do not become immune to antibiotics, bacteria do. Thus, avoiding using antibiotics longer than necessary helps prevent bacteria from forming mutations that aid in antibiotic resistance.
<P> Until resistance has emerged against a previous generation of antibiotic, commercial return for any given new drug is uncertain. Therefore, the de-linkage model may be preferable in the context of developing new antibiotics and the fight against resistance where new antibiotics initially are unlikely to sell in large quantities because they should be reserved for use only when all other options have been exhausted. De-linkage also removes the incentive for the industry to boost sales that may encourage overuse that accelerate the development of antibiotic resistance.
<P> Though effective, antibiotics are not recommended for prevention of TD in most situations because of the risk of allergy or adverse reactions to the antibiotics, and because intake of preventive antibiotics may decrease effectiveness of such drugs should a serious infection develop subsequently. Antibiotics can also cause vaginal yeast infections, or overgrowth of the bacterium "Clostridium difficile", leading to pseudomembranous colitis and its associated severe, unrelenting diarrhea.
<P> Antibiotic treatment duration should be based on the infection and other health problems a person may have. For many infections once a person has improved there is little evidence that stopping treatment causes more resistance. Some therefore feel that stopping early may be reasonable in some cases. Other infections, however, do require long courses regardless of whether a person feels better.
| question: why taking antibiotics long term does not lead to antibiotic resistance, but taking them for too short of a time (ie not finishing the prescription) does? context: <P> Antibiotic resistance increases with duration of treatment. Therefore, as long as an effective minimum is kept, shorter courses of antibiotics are likely to decrease rates of resistance, reduce cost, and have better outcomes with fewer complications. Short course regimens exist for community-acquired pneumonia, spontaneous bacterial peritonitis, suspected lung infections in intensive care wards, so-called acute abdomen, middle ear infections, sinusitis and throat infections, and penetrating gut injuries. In some situations a short course may not cure the infection as well as a long course. A BMJ editorial recommended that antibiotics can often be safely stopped 72 hours after symptoms resolve.
<P> Oral antibiotics are recommended for no longer than three months as antibiotic courses exceeding this duration are associated with the development of antibiotic resistance and show no clear benefit over shorter courses. Furthermore, if long-term oral antibiotics beyond three months are thought to be necessary, it is recommended that benzoyl peroxide and/or a retinoid be used at the same time to limit the risk of "C. acnes" developing antibiotic resistance.
<P> A course of one week of antibiotics is usually sufficient to treat the condition. However, if the condition recurs, antibiotics can be given in a cyclical fashion in order to prevent tolerance. For example, antibiotics may be given for a week, followed by three weeks off antibiotics, followed by another week of treatment. Alternatively, the choice of antibiotic used can be cycled.
<P> BULLET::::- antibiotics, called prophylactic when given as prevention rather than as treatment of infection. However, long-term use of antibiotics leads to resistance of bacteria. While humans do not become immune to antibiotics, bacteria do. Thus, avoiding using antibiotics longer than necessary helps prevent bacteria from forming mutations that aid in antibiotic resistance.
<P> Until resistance has emerged against a previous generation of antibiotic, commercial return for any given new drug is uncertain. Therefore, the de-linkage model may be preferable in the context of developing new antibiotics and the fight against resistance where new antibiotics initially are unlikely to sell in large quantities because they should be reserved for use only when all other options have been exhausted. De-linkage also removes the incentive for the industry to boost sales that may encourage overuse that accelerate the development of antibiotic resistance.
<P> Though effective, antibiotics are not recommended for prevention of TD in most situations because of the risk of allergy or adverse reactions to the antibiotics, and because intake of preventive antibiotics may decrease effectiveness of such drugs should a serious infection develop subsequently. Antibiotics can also cause vaginal yeast infections, or overgrowth of the bacterium "Clostridium difficile", leading to pseudomembranous colitis and its associated severe, unrelenting diarrhea.
<P> Antibiotic treatment duration should be based on the infection and other health problems a person may have. For many infections once a person has improved there is little evidence that stopping treatment causes more resistance. Some therefore feel that stopping early may be reasonable in some cases. Other infections, however, do require long courses regardless of whether a person feels better.
| answer: It's like weeding a garden.If you don't get the roots out, the weeds come back.Kill all the weeds and you keep a healthy garden, leave the roots and they come back.But with antibiotics it's survival of the fittest, the most resistant survive the die off and re-populate with a more resistant strain. |
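As a purely illustrative aside on the selection logic this answer appeals to (not a claim about clinical practice, which the quoted context above shows is more nuanced), here is a toy model; each bacterium's "tolerance" in days and the Gaussian distribution are made-up assumptions. A course shorter than the tolerance of the hardiest cells leaves only the most tolerant behind to regrow:

```python
import random

def course(population, days):
    """Toy selection model: each bacterium has a 'tolerance' (days of antibiotics it
    can survive). A course kills everything with a lower tolerance; any survivors
    (the most tolerant cells) then repopulate to the original size."""
    survivors = [t for t in population if t > days]
    if not survivors:
        return []                                  # infection cleared, nothing regrows
    return [random.choice(survivors) for _ in range(len(population))]

random.seed(0)
population = [random.gauss(3, 1.5) for _ in range(10_000)]   # made-up tolerance distribution, in days
for days in (2, 10):
    regrown = course(population, days)
    mean = sum(regrown) / len(regrown) if regrown else float("nan")
    print(f"{days:2d}-day course -> {len(regrown)} bacteria left, mean tolerance {mean:.1f} days")
```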
128,498 | 2tp0gm | During or after it rains, it appears like headlights and flashlights are less powerful. Why? | What you're probably noticing is light being scattered by the raindrops or foggy humidity in the air. Less light gets delivered to the target of the beam, which is where you're looking, and more gets tossed in all directions. That's why you can see the beams under those conditions as well, because of the light that's being scattered along the way, some of which is making it back to your eyes. | [
"What you're probably noticing is light being scattered by the raindrops or foggy humidity in the air. Less light gets delivered to the target of the beam, which is where you're looking, and more gets tossed in all directions. That's why you can see the beams under those conditions as well, because of the light tha... | 2 | [
"What you're probably noticing is light being scattered by the raindrops or foggy humidity in the air. Less light gets delivered to the target of the beam, which is where you're looking, and more gets tossed in all directions. That's why you can see the beams under those conditions as well, because of the light tha... | 2 | <P> These radars had S-band wavelengths, so attenuation by rain was almost entirely avoided (Atlas and Banks 1951); however, detection of light rain and snow was minimal due to system performance limitations.
<P> The most common type of floodlight is the metal-halide lamp, which emits a bright white light (typically 75–100 lumens/Watt). Sodium-vapor lamps are also commonly used for sporting events, as they have a very high lumen to watt ratio (typically 80–140 lumens/Watt), making them a cost-effective choice when certain lux levels must be provided.
<P> LED floodlights are bright enough to be used for illumination purposes on large sport fields. The main advantages of LEDs in this application are their lower power consumption, longer life, and instant start-up (the lack of a "warm-up" period reduces game delays after power outages).
<P> A floodlight is a broad-beamed, high-intensity artificial light. They are often used to illuminate outdoor playing fields while an outdoor sports event is being held during low-light conditions. More focused kinds are often used as a stage lighting instrument in live performances such as concerts and plays.
<P> Floodlights were installed in 1960, with the towering lights being used for the first time in a 2–2 Football League Cup draw with Leyton Orient in October 1960. They were officially opened later in the season with a prestigious friendly against Manchester United.
<P> Excess light occurs at the top of canopies and on open ground when cloud cover is low and the sun's zenith angle is low, typically this occurs in the tropics and at high altitudes. Excess light incident on a leaf can result in photoinhibition and photodestruction. Plants adapted to high light environments have a range of adaptations to avoid or dissipate the excess light energy, as well as mechanisms that reduce the amount of injury caused.
<P> The intensity of the radar echoes (reflectivity) is proportional to the form (water or ice) of the precipitation and its diameter. In fact, rain has much stronger reflective power than snow but its diameter is much smaller. So the reflectivity of rain coming from melted snow is only slightly higher. However, in the layer where the snow is melting, the wet flakes still have a large diameter and are coated with water so the returns to the radar are much stronger.
| question: During or after it rains, it appears like headlights and flashlights are less powerful. Why? context: <P> These radars had S-band wavelengths, so attenuation by rain was almost entirely avoided (Atlas and Banks 1951); however, detection of light rain and snow was minimal due to system performance limitations.
<P> The most common type of floodlight is the metal-halide lamp, which emits a bright white light (typically 75–100 lumens/Watt). Sodium-vapor lamps are also commonly used for sporting events, as they have a very high lumen to watt ratio (typically 80–140 lumens/Watt), making them a cost-effective choice when certain lux levels must be provided.
<P> LED floodlights are bright enough to be used for illumination purposes on large sport fields. The main advantages of LEDs in this application are their lower power consumption, longer life, and instant start-up (the lack of a "warm-up" period reduces game delays after power outages).
<P> A floodlight is a broad-beamed, high-intensity artificial light. They are often used to illuminate outdoor playing fields while an outdoor sports event is being held during low-light conditions. More focused kinds are often used as a stage lighting instrument in live performances such as concerts and plays.
<P> Floodlights were installed in 1960, with the towering lights being used for the first time in a 2–2 Football League Cup draw with Leyton Orient in October 1960. They were officially opened later in the season with a prestigious friendly against Manchester United.
<P> Excess light occurs at the top of canopies and on open ground when cloud cover is low and the sun's zenith angle is low, typically this occurs in the tropics and at high altitudes. Excess light incident on a leaf can result in photoinhibition and photodestruction. Plants adapted to high light environments have a range of adaptations to avoid or dissipate the excess light energy, as well as mechanisms that reduce the amount of injury caused.
<P> The intensity of the radar echoes (reflectivity) is proportional to the form (water or ice) of the precipitation and its diameter. In fact, rain has much stronger reflective power than snow but its diameter is much smaller. So the reflectivity of rain coming from melted snow is only slightly higher. However, in the layer where the snow is melting, the wet flakes still have a large diameter and are coated with water so the returns to the radar are much stronger.
| answer: What you're probably noticing is light being scattered by the raindrops or foggy humidity in the air. Less light gets delivered to the target of the beam, which is where you're looking, and more gets tossed in all directions. That's why you can see the beams under those conditions as well, because of the light that's being scattered along the way, some of which is making it back to your eyes. |
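The scattering loss described in that answer is commonly modeled as an exponential falloff of the direct beam with distance (a Beer–Lambert-style law). The sketch below uses made-up extinction coefficients purely to illustrate how quickly the delivered light drops once droplets start scattering it out of the beam; the 50 m distance and the per-condition values are assumptions, not measurements:

```python
import math

def beam_intensity(distance_m, extinction_per_m, initial=1.0):
    """Fraction of the original beam still travelling straight ahead after passing
    through a scattering medium: I(d) = I0 * exp(-mu * d)."""
    return initial * math.exp(-extinction_per_m * distance_m)

# Illustrative (made-up) extinction coefficients, per metre of path
conditions = {"clear air": 0.001, "light rain": 0.01, "dense fog": 0.05}
for label, mu in conditions.items():
    print(f"{label:>10}: {beam_intensity(50, mu):.1%} of the headlight beam reaches a target 50 m away")
```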
209,849 | 88emhp | Why were live-in domestic servants so much more common in the 19th and early 20th centuries than they are today? | I'm giving this answer mostly based on two books by the same historian, Frank Trentmann: - The Empire of Things (2016) - The Oxford Handbook of the History of Consumption (2012) From their titles alone, you may be able to guess that Trentmann would give a very goods-based answer.There's the obvious answer that domestic tasks became far less time-consuming thanks to numerous inventions during the interwar years. To give just a few examples, the first electric dishwasher was created in 1929, fridges from 1913 (and had leaps in technology in the 1920s with the development of freon to aid cooling), and electric irons from 1926 (though it took until the 1930s for any commercial success). All of these were the preserve of the upper-middle classes before WW2, as the upper class retained live-in servants, and the lower-middle class and working classes performed some combination of hired help and their own work. Increasingly available home plumbing and electrification after WWII (particularly in Europe due to the need to rebuild) helped these technologies spread to most people, reducing the *need* for domestic servants.Conversely, Trentmann also argues that the *want* for domestic servants was actually reducing during the 19th Century. The Industrial Revolution was at its height, allowing for a gluttony of consumerism of manufactured goods from Europe, and cheap availability of rarer goods from the world. The abolition of slavery in Europe in the early 1800s also played a minor role, as the overt 'use' of people became slightly tainted (exploitation of Europe's empires remained just dandy though...) As a result, many people, particularly the middle-class, began to shift how they wished to exhibit their wealth. Whilst in previous centuries, a key form of wealth display was by employing others to work for you, it became far more fashionable to show off your refined taste in goods such as fine china and houseware, or high quality clothes and furnishings. One example that sticks in my mind is an American woman in the early 1800s who proudly served her houseguests dinner on delicate china while the wind whistled through the windows; Trentmann's point is that there was a fixation on indulging on certain parts of one's life, even at the obvious cost to others (same as today to be fair). Taken together, changes in what was considered the best way to exhibit wealth in the 1800s, and technological improvements both becoming widely available and desirable in themselves, meant that the demand for domestic servants had drastically reduced by the mid-20th Century. Hope this helps! | [
"I'm giving this answer mostly based on two books by the same historian, Frank Trentmann:\n\n - The Empire of Things (2016)\n - The Oxford Handbook of the History of Consumption (2012)\n \nFrom their titles alone, you may be able to guess that Trentmann would give a very goods-based answer.\n\nThere's the obvious a... | 1 | [
"I'm giving this answer mostly based on two books by the same historian, Frank Trentmann:\n\n - The Empire of Things (2016)\n - The Oxford Handbook of the History of Consumption (2012)\n \nFrom their titles alone, you may be able to guess that Trentmann would give a very goods-based answer.\n\nThere's the obvious a... | 1 | <P> Edwardian Britain had large numbers of male and female domestic servants, in both urban and rural areas. Middle and upper-class women relied on servants to run their homes smoothly. Servants were provided with food, clothing, housing, and a small wage, and lived in a self-enclosed social system inside the mansion. The number of domestic servants fell in the Edwardian era due to a declining number of young people willing to be employed in this area.
<P> Edwardian Britain had large numbers of male and female domestic servants, in both urban and rural areas. Men relied on working class women to run their homes smoothly, and employers often looked to these working class women for sexual partners. Servants were provided with food, clothing, housing, and a small wage, and lived in a self-enclosed social system inside the mansion. The number of domestic servants fell in the Edwardian period due to a declining number of young people willing to be employed in this area.
<P> Domestic life for a working-class family was far less comfortable. Legal standards for minimum housing conditions were a new concept during the Victorian era, and a working-class wife was responsible for keeping her family as clean, warm, and dry as possible in housing stock that was often literally rotting around them. In London, overcrowding was endemic in the slums inhabited by the working classes. (See Life and Labour of the People in London.) Families living in single rooms were not unusual. The worst areas had examples such as 90 people crammed into a 10-room house, or 12 people living in a single room (7 feet 3 inches by 14 feet). Rents were exorbitant; 85 percent of working-class households in London spent at least one-fifth of their income on rent, with 50 percent paying one-quarter to one-half of their income on rent. The poorer the neighbourhood, the higher the rents. Rents in the Old Nichol area near Hackney, per cubic foot, were five to eleven times higher than rents in the fine streets and squares of the West End of London. The owners of the slum housing included peers, churchmen, and investment trusts for estates of long-deceased members of the upper classes.
<P> During this period, life expectancy was often low, and indentured servants came from overpopulated European areas. With the lower price of servants compared to slaves, and the high mortality of the servants, planters often found it much more economical to use servants.
<P> Domestic life for a working-class family meant the housewife had to handle the chores servants did in wealthier families. A working-class wife was responsible for keeping her family as clean, warm, and dry as possible in housing stock that was often literally rotting around them. In London, overcrowding was endemic in the slums; a family living in one room was common. Rents were high in London; half of working-class households paid one-quarter to one-half of their income on rent.
<P> By the end of the 18th century, the property underwent some structural enlargements to host the family and the servants all year round, in order for them to be more present at their business's location.
<P> Domestic servants, two-thirds of whom were women, made up about fifteen to twenty percent of the population of the capital. Before the Revolution they had worked largely for the nobility, whose families sometimes had as many as thirty servants. During the Empire they were employed more commonly by the new nobility, the newly wealthy and middle class. Upper-middle-class families often had three servants; families of artisans and shopkeepers usually had one. Living conditions of servants depended largely upon the personality of the master, but were never easy. Napoleon abolished the death penalty which previously could be given to a servant who stole from his master, but any servant who was even suspected of stealing would never be able to get another job. Any servant who became pregnant, married or not, could be dismissed immediately.
| question: Why were live-in domestic servants so much more common in the 19th and early 20th centuries than they are today? context: <P> Edwardian Britain had large numbers of male and female domestic servants, in both urban and rural areas. Middle and upper-class women relied on servants to run their homes smoothly. Servants were provided with food, clothing, housing, and a small wage, and lived in a self-enclosed social system inside the mansion. The number of domestic servants fell in the Edwardian era due to a declining number of young people willing to be employed in this area.
<P> Edwardian Britain had large numbers of male and female domestic servants, in both urban and rural areas. Men relied on working class women to run their homes smoothly, and employers often looked to these working class women for sexual partners. Servants were provided with food, clothing, housing, and a small wage, and lived in a self-enclosed social system inside the mansion. The number of domestic servants fell in the Edwardian period due to a declining number of young people willing to be employed in this area.
<P> Domestic life for a working-class family was far less comfortable. Legal standards for minimum housing conditions were a new concept during the Victorian era, and a working-class wife was responsible for keeping her family as clean, warm, and dry as possible in housing stock that was often literally rotting around them. In London, overcrowding was endemic in the slums inhabited by the working classes. (See Life and Labour of the People in London.) Families living in single rooms were not unusual. The worst areas had examples such as 90 people crammed into a 10-room house, or 12 people living in a single room (7 feet 3 inches by 14 feet). Rents were exorbitant; 85 percent of working-class households in London spent at least one-fifth of their income on rent, with 50 percent paying one-quarter to one-half of their income on rent. The poorer the neighbourhood, the higher the rents. Rents in the Old Nichol area near Hackney, per cubic foot, were five to eleven times higher than rents in the fine streets and squares of the West End of London. The owners of the slum housing included peers, churchmen, and investment trusts for estates of long-deceased members of the upper classes.
<P> During this period, life expectancy was often low, and indentured servants came from overpopulated European areas. With the lower price of servants compared to slaves, and the high mortality of the servants, planters often found it much more economical to use servants.
<P> Domestic life for a working-class family meant the housewife had to handle the chores servants did in wealthier families. A working-class wife was responsible for keeping her family as clean, warm, and dry as possible in housing stock that was often literally rotting around them. In London, overcrowding was endemic in the slums; a family living in one room was common. Rents were high in London; half of working-class households paid one-quarter to one-half of their income on rent.
<P> By the end of the 18th century, the property underwent some structural enlargements to host the family and the servants all year round, in order for them to be more present at their business's location.
<P> Domestic servants, two-thirds of whom were women, made up about fifteen to twenty percent of the population of the capital. Before the Revolution they had worked largely for the nobility, whose families sometimes had as many as thirty servants. During the Empire they were employed more commonly by the new nobility, the newly wealthy and middle class. Upper-middle-class families often had three servants; families of artisans and shopkeepers usually had one. Living conditions of servants depended largely upon the personality of the master, but were never easy. Napoleon abolished the death penalty which previously could be given to a servant who stole from his master, but any servant who was even suspected of stealing would never be able to get another job. Any servant who became pregnant, married or not, could be dismissed immediately.
| answer: I'm giving this answer mostly based on two books by the same historian, Frank Trentmann: - The Empire of Things (2016) - The Oxford Handbook of the History of Consumption (2012) From their titles alone, you may be able to guess that Trentmann would give a very goods-based answer.There's the obvious answer that domestic tasks became far less time-consuming thanks to numerous inventions during the interwar years. To give just a few examples, the first electric dishwasher was created in 1929, fridges from 1913 (and had leaps in technology in the 1920s with the development of freon to aid cooling), and electric irons from 1926 (though it took until the 1930s for any commercial success). All of these were the preserve of the upper-middle classes before WW2, as the upper class retained live-in servants, and the lower-middle class and working classes performed some combination of hired help and their own work. Increasingly available home plumbing and electrification after WWII (particularly in Europe due to the need to rebuild) helped these technologies spread to most people, reducing the *need* for domestic servants.Conversely, Trentmann also argues that the *want* for domestic servants was actually reducing during the 19th Century. The Industrial Revolution was at its height, allowing for a gluttony of consumerism of manufactured goods from Europe, and cheap availability of rarer goods from the world. The abolition of slavery in Europe in the early 1800s also played a minor role, as the overt 'use' of people became slightly tainted (exploitation of Europe's empires remained just dandy though...) As a result, many people, particularly the middle-class, began to shift how they wished to exhibit their wealth. Whilst in previous centuries, a key form of wealth display was by employing others to work for you, it became far more fashionable to show off your refined taste in goods such as fine china and houseware, or high quality clothes and furnishings. One example that sticks in my mind is an American woman in the early 1800s who proudly served her houseguests dinner on delicate china while the wind whistled through the windows; Trentmann's point is that there was a fixation on indulging on certain parts of one's life, even at the obvious cost to others (same as today to be fair). Taken together, changes in what was considered the best way to exhibit wealth in the 1800s, and technological improvements both becoming widely available and desirable in themselves, meant that the demand for domestic servants had drastically reduced by the mid-20th Century. Hope this helps! |
36,505 | hpfhq | How do capacitors actually "store" energy? | I don't know how deep you want to go with this, but I'll answer this way and then we can gauge whether you want to go deeper or not.When you charge a parallel-plate capacitor (the type you're describing in the question), one side is positive and the other side is negative, like you said. When this happens, you generate an electric field between the positive and negative plates. Energy is stored in this electric field.It's a type of potential energy in the sense that the electric potential is calculated from the field (taking an integral of the field will give you the potential). So in that sense, it is a form of potential energy, since the energy is stored in the field. | [
"Energy is stored in the electric field between the plates. Voltage is a measure of electric potential energy.",
"I don't know how deep you want to go with this, but I'll answer this way and then we can gauge whether you want to go deeper or not.\n\nWhen you charge a parallel-plate capacitor (the type you're desc... | 2 | [
"Energy is stored in the electric field between the plates. Voltage is a measure of electric potential energy.",
"I don't know how deep you want to go with this, but I'll answer this way and then we can gauge whether you want to go deeper or not.\n\nWhen you charge a parallel-plate capacitor (the type you're desc... | 2 | <P> A capacitor (originally known as a 'condenser') is a passive two-terminal electrical component used to store energy electrostatically. Practical capacitors vary widely, but all contain at least two electrical conductors (plates) separated by a dielectric (i.e., insulator). A capacitor can store electric energy when disconnected from its charging circuit, so it can be used like a temporary battery, or like other types of rechargeable energy storage system. Capacitors are commonly used in electronic devices to maintain power supply while batteries change. (This prevents loss of information in volatile memory.) Conventional capacitors provide less than 360 joules per kilogram, while a conventional alkaline battery has a density of 590 kJ/kg.
<P> A capacitor can store electric energy when disconnected from its charging circuit, so it can be used like a temporary battery, or like other types of rechargeable energy storage system. Capacitors are commonly used in electronic devices to maintain power supply while batteries are being changed. (This prevents loss of information in volatile memory.)
<P> Capacitors are connected in parallel with the power circuits of most electronic devices and larger systems (such as factories) to shunt away and conceal current fluctuations from the primary power source to provide a "clean" power supply for signal or control circuits. Audio equipment, for example, uses several capacitors in this way, to shunt away power line hum before it gets into the signal circuitry. The capacitors act as a local reserve for the DC power source, and bypass AC currents from the power supply. This is used in car audio applications, when a stiffening capacitor compensates for the inductance and resistance of the leads to the lead-acid car battery.
<P> A capacitor can store electric energy when it is connected to its charging circuit. And when it is disconnected from its charging circuit, it can dissipate that stored energy, so it can be used like a temporary battery. Capacitors are commonly used in electronic devices to maintain power supply while batteries are being changed. (This prevents loss of information in volatile memory.)
<P> Capacitors store and release electrical charge. They are used for filtering power supply lines, tuning resonant circuits, and for blocking DC voltages while passing AC signals, among numerous other uses.
<P> Capacitors are connected in parallel with the DC power circuits of most electronic devices to smooth current fluctuations for signal or control circuits. Audio equipment, for example, uses several capacitors in this way, to shunt away power line hum before it gets into the signal circuitry. The capacitors act as a local reserve for the DC power source, and bypass AC currents from the power supply. This is used in car audio applications, when a stiffening capacitor compensates for the inductance and resistance of the leads to the lead-acid car battery.
<P> Capacitors are components used in High Voltage Direct Current (HVDC) schemes and Flexible Alternating Current Transmission Systems (FACTS). HVDC and FACTS both help reduce CO2 emissions by respectively minimizing power losses and ensuring the balance and efficiency of high-voltage transmission networks. They also facilitate the connection of renewable energy sources into the power network.
| question: How do capacitors actually "store" energy? context: <P> A capacitor (originally known as a 'condenser') is a passive two-terminal electrical component used to store energy electrostatically. Practical capacitors vary widely, but all contain at least two electrical conductors (plates) separated by a dielectric (i.e., insulator). A capacitor can store electric energy when disconnected from its charging circuit, so it can be used like a temporary battery, or like other types of rechargeable energy storage system. Capacitors are commonly used in electronic devices to maintain power supply while batteries change. (This prevents loss of information in volatile memory.) Conventional capacitors provide less than 360 joules per kilogram, while a conventional alkaline battery has a density of 590 kJ/kg.
<P> A capacitor can store electric energy when disconnected from its charging circuit, so it can be used like a temporary battery, or like other types of rechargeable energy storage system. Capacitors are commonly used in electronic devices to maintain power supply while batteries are being changed. (This prevents loss of information in volatile memory.)
<P> Capacitors are connected in parallel with the power circuits of most electronic devices and larger systems (such as factories) to shunt away and conceal current fluctuations from the primary power source to provide a "clean" power supply for signal or control circuits. Audio equipment, for example, uses several capacitors in this way, to shunt away power line hum before it gets into the signal circuitry. The capacitors act as a local reserve for the DC power source, and bypass AC currents from the power supply. This is used in car audio applications, when a stiffening capacitor compensates for the inductance and resistance of the leads to the lead-acid car battery.
<P> A capacitor can store electric energy when it is connected to its charging circuit. And when it is disconnected from its charging circuit, it can dissipate that stored energy, so it can be used like a temporary battery. Capacitors are commonly used in electronic devices to maintain power supply while batteries are being changed. (This prevents loss of information in volatile memory.)
<P> Capacitors store and release electrical charge. They are used for filtering power supply lines, tuning resonant circuits, and for blocking DC voltages while passing AC signals, among numerous other uses.
<P> Capacitors are connected in parallel with the DC power circuits of most electronic devices to smooth current fluctuations for signal or control circuits. Audio equipment, for example, uses several capacitors in this way, to shunt away power line hum before it gets into the signal circuitry. The capacitors act as a local reserve for the DC power source, and bypass AC currents from the power supply. This is used in car audio applications, when a stiffening capacitor compensates for the inductance and resistance of the leads to the lead-acid car battery.
<P> Capacitors are components used in High Voltage Direct Current (HVDC) schemes and Flexible Alternating Current Transmission Systems (FACTS). HVDC and FACTS both help reduce CO2 emissions by respectively minimizing power losses and ensuring the balance and efficiency of high-voltage transmission networks. They also facilitate the connection of renewable energy sources into the power network.
| answer: I don't know how deep you want to go with this, but I'll answer this way and then we can gauge whether you want to go deeper or not.When you charge a parallel-plate capacitor (the type you're describing in the question), one side is positive and the other side is negative, like you said. When this happens, you generate an electric field between the positive and negative plates. Energy is stored in this electric field.It's a type of potential energy in the sense that the electric potential is calculated from the field (taking an integral of the field will give you the potential). So in that sense, it is a form of potential energy, since the energy is stored in the field. |
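A small numeric sketch of the ideas in that answer: the energy stored in a capacitor is E = 1/2 * C * V^2, and the energy per unit volume held in the electric field itself is u = 1/2 * eps0 * E^2. The 1000 uF capacitance, the 5 V charge, and the 1 mm vacuum gap are arbitrary illustrative choices:

```python
EPSILON_0 = 8.854e-12   # vacuum permittivity, farads per metre

def capacitor_energy(capacitance_f, voltage_v):
    """Energy stored in the field between the plates: E = 1/2 * C * V**2 (joules)."""
    return 0.5 * capacitance_f * voltage_v ** 2

def field_energy_density(field_v_per_m):
    """Energy per unit volume carried by an electric field in vacuum:
    u = 1/2 * epsilon_0 * E**2 (joules per cubic metre)."""
    return 0.5 * EPSILON_0 * field_v_per_m ** 2

print(capacitor_energy(1000e-6, 5.0))       # 1000 uF charged to 5 V -> 0.0125 J
print(field_energy_density(5.0 / 1e-3))     # 5 V across a 1 mm vacuum gap -> ~1.1e-4 J/m^3
```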
57,460 | 1r9c4k | How did the rise of punk rock differ in NY, LA and London and which scene came first? | New York City is unquestionably the birthplace of punk rock including the term (via *Punk* magazine, founded by *Please Kill Me* author Legs McNeil).Without the Ramones' first UK tour there are about half a dozen prominent UK punk bands who probably wouldn't exist, as they cite that band and tour as their direct inspiration to start a band.From the first paragraph of the Wikipedia entry on Ramones: > Despite achieving only limited commercial success, the band was a major influence on the punk rock movement in both the United States and, *perhaps to a greater extent, in the United Kingdom.* (emphasis mine)The UK scene can be credited with injecting a political (or just as often, faux or pseudo-political) energy as well as many of the fashion and stylistic elements (ransom note printing, etc.) associated with punk via Vivienne Westwood and Malcolm McLaren.I must say I don't know as much about West Coast punk which was isolated compared to the NY and UK which were co-mingling quite a bit. Scenes were happening in LA and the Bay area, the midwest and DC scenes are also worth noting.Eventually scenes popped up across the US, a DIY touring network of small town all ages venues that persisted even as punk slid from mainstream view but paved the way for the "2nd wave" in the 90s (Green Day, Blink), the Warped tour, etc.*Punk Diary 1970-1979* by George Gimarc (St. Martin's Press, may be out of print? I've seen it used and probably on amazon) is a good reference of the early UK scene.The documentary "Another State of Mind" covers the early Southern California punk scene with a bit on DC and Steven Blush's *American Hardcore* (Feral House, 2001) is yet another "oral history" style text that is pretty broad but captures a lot of perspectives. | [
"I would strongly recommend reading \"Please Kill Me: The Uncensored Oral History of Punk\". The whole book is a story told through quotes from the members of the punk scene at the beginning. Though NY focused there is some overlap with other scenes.",
"New York City is unquestionably the birthplace of punk roc... | 2 | [
"I would strongly recommend reading \"Please Kill Me: The Uncensored Oral History of Punk\". The whole book is a story told through quotes from the members of the punk scene at the beginning. Though NY focused there is some overlap with other scenes.",
"New York City is unquestionably the birthplace of punk roc... | 2 | <P> New York City had the earliest documented punk rock scene in the United States. Drawing on local influences such as The Velvet Underground, Richard Hell, and the New York Dolls, punk music developed at clubs such as CBGB and Max's Kansas City. Patti Smith, Talking Heads, Blondie, Suicide, Television, The Fleshtones, and other artsy new wave artists were popular in the mid-to-late 1970s, as bands like the Ramones were establishing an American punk rock sound. CBGB and Max's Kansas City opened their doors and became influential venues. No Wave was a short-lived rock movement in New York and raised James Chance, DNA, Glenn Branca, Lydia Lunch, the Contortions, Teenage Jesus and the Jerks, Mars began experimenting with noise, dissonance and atonality in addition to non-rock styles. Brian Eno-produced "No New York" compilation, often considered the quintessential testament to the scene. Swans, and later Sonic Youth were famous in New York City punk scene.
<P> In the 1970s, punk rock emerged in New York's downtown music scene with seminal bands such as the New York Dolls, Ramones and Patti Smith. Anthrax and KISS were the best known heavy metal and glam rock performers from the city. The downtown scene developed into the "new wave" style of rock music at downtown clubs like CBGB's. The 1970s were also when the Salsa and Latin Jazz movements grew and branched out to the world. Labels such as the "Fania All Stars", musicians like Tito Puente and Celia Cruz and Ralph Mercado, the creator of the RM&M record label, all contributed to stars like Hector LaVoe, Ruben Blades and many others. The New Yorican Sound, differed somewhat from Salsa that came from Puerto Rico, it was being sung by Puerto Rican Americans from New York City and had the swagger of the Big Apple.
<P> The New York City punk rock scene arose from a subcultural underground promoted by artists, reporters, musicians and a wide variety of non-mainstream enthusiasts. The Velvet Underground's harsh and experimental yet often melodic sound in the mid to late-1960s, much of it relating to transgressive media work by visual artist Andy Warhol, is credited for influencing 1970s bands such as the New York Dolls, The Stooges and the Ramones. Early New York City punk bands were often short-lived, in part due to widespread use of recreational drugs, promiscuous sex, and sometimes violent power struggles, but the relative popularity of the music led to the evolution of punk into a movement and lifestyle.
<P> By late 1976, acts such as the Ramones and Patti Smith, in New York City, and the Sex Pistols and the Clash, in London, were recognized as the vanguard of a new musical movement. The following year saw punk rock spreading around the world. Punk quickly, though briefly, became a major cultural phenomenon in the United Kingdom. For the most part, punk took root in local scenes that tended to reject association with the mainstream. An associated punk subculture emerged, expressing youthful rebellion and characterized by distinctive clothing styles and a variety of anti-authoritarian ideologies.
<P> The origins of New York's punk rock scene can be traced back to such sources as late 1960s trash culture and an early 1970s underground rock movement centered on the Mercer Arts Center in Greenwich Village, where the New York Dolls performed. In early 1974, a new scene began to develop around the CBGB club, also in lower Manhattan. At its core was Television, described by critic John Walker as "the ultimate garage band with pretensions". Their influences ranged from the Velvet Underground to the staccato guitar work of Dr. Feelgood's Wilko Johnson. The band's bassist/singer, Richard Hell, created a look with cropped, ragged hair, ripped T-shirts, and black leather jackets credited as the basis for punk rock visual style. In April 1974, Patti Smith, a member of the Mercer Arts Center crowd and a friend of Hell's, came to CBGB for the first time to see the band perform. A veteran of independent theater and performance poetry, Smith was developing an intellectual, feminist take on rock 'n' roll. On June 5, she recorded the single "Hey Joe"/"Piss Factory", featuring Television guitarist Tom Verlaine; released on her own Mer Records label, it heralded the scene's do it yourself (DIY) ethic and has often been cited as the first punk rock record. By August, Smith and Television were gigging together at another downtown New York club, Max's Kansas City.
<P> Around the mid to late 1970s New York City was arguably the birthplace of punk rock with the Ramones and the scene at CBGB. While the next generation of punks emerged in places like Washington, D.C. (Bad Brains and Minor Threat) and California (Black Flag, Dead Kennedys) in the early 1980s, NYC was initially quiet. A few bands like The Mad and The Stimulators hinted at a new direction. The Stimulators featured Harley Flanagan on drums, and attracted some of what would become the NYHC scene to their shows. The Stimulators and the Mad also made friends with Bad Brains, and gave the latter places to stay in town. In late 1980, Vinnie Stigma formed Agnostic Front, a long-running group who became known as the godfathers of New York Hardcore and arguably its most crucial band. Around the same time the term "hardcore" started being used instead of "punk rock" and bands like Cro-Mags, Murphy's Law and Warzone emerged, further cementing the blueprint for the characteristic NYHC sound. Roger Miret of Agnostic Front asserts that "We started using the term 'hardcore' because we wanted to separate ourselves from the punk scene that was happening in New York at the time ... We were rougher kids living in the streets. It had a rougher edge". The early scene was documented on the 1982 "New York Thrash" compilation. Rock clubs like L'Amour's, A7, Max's, and the already established CBGB's quickly became crucial spots for this newly formed scene.
<P> In the mid-1970s, various American groups (some with ties to Downtown Manhattan's punk scene, including Television and Suicide) had begun expanding on the vocabulary of punk music. Midwestern groups such as Pere Ubu and Devo drew inspiration from the region's derelict industrial environments, employing conceptual art techniques, musique concrète and unconventional verbal styles that would presage the post-punk movement by several years. A variety of subsequent groups, including the Boston-based Mission of Burma and the New York-based Talking Heads, combined elements of punk with art school sensibilities. In 1978, the latter band began with British ambient pioneer and ex-Roxy Music member Brian Eno, experimenting with Dadaist lyrical techniques, electronic sounds, and African polyrhythms. San Francisco's vibrant post-punk scene was centered on such groups as Chrome, the Residents, Tuxedomoon and MX-80, whose influences extended to multimedia experimentation, cabaret and the dramatic theory of Antonin Artaud's Theater of Cruelty.
| question: How did the rise of punk rock differ in NY, LA and London and which scene came first? context: <P> New York City had the earliest documented punk rock scene in the United States. Drawing on local influences such as The Velvet Underground, Richard Hell, and the New York Dolls, punk music developed at clubs such as CBGB and Max's Kansas City. Patti Smith, Talking Heads, Blondie, Suicide, Television, The Fleshtones, and other artsy new wave artists were popular in the mid-to-late 1970s, as bands like the Ramones were establishing an American punk rock sound. CBGB and Max's Kansas City opened their doors and became influential venues. No Wave was a short-lived rock movement in New York that raised James Chance, DNA, Glenn Branca, Lydia Lunch, the Contortions, Teenage Jesus and the Jerks, and Mars, who began experimenting with noise, dissonance and atonality in addition to non-rock styles. The Brian Eno-produced "No New York" compilation is often considered the quintessential testament to the scene. Swans, and later Sonic Youth, were famous in the New York City punk scene.
<P> In the 1970s, punk rock emerged in New York's downtown music scene with seminal bands such as the New York Dolls, Ramones and Patti Smith. Anthrax and KISS were the best known heavy metal and glam rock performers from the city. The downtown scene developed into the "new wave" style of rock music at downtown clubs like CBGB's. The 1970s were also when the Salsa and Latin Jazz movements grew and branched out to the world. Labels such as the "Fania All Stars", musicians like Tito Puente and Celia Cruz and Ralph Mercado, the creator of the RM&M record label, all contributed to stars like Hector LaVoe, Ruben Blades and many others. The New Yorican Sound, differed somewhat from Salsa that came from Puerto Rico, it was being sung by Puerto Rican Americans from New York City and had the swagger of the Big Apple.
<P> The New York City punk rock scene arose from a subcultural underground promoted by artists, reporters, musicians and a wide variety of non-mainstream enthusiasts. The Velvet Underground's harsh and experimental yet often melodic sound in the mid to late-1960s, much of it relating to transgressive media work by visual artist Andy Warhol, is credited for influencing 1970s bands such as the New York Dolls, The Stooges and the Ramones. Early New York City punk bands were often short-lived, in part due to widespread use of recreational drugs, promiscuous sex, and sometimes violent power struggles, but the relative popularity of the music led to the evolution of punk into a movement and lifestyle.
<P> By late 1976, acts such as the Ramones and Patti Smith, in New York City, and the Sex Pistols and the Clash, in London, were recognized as the vanguard of a new musical movement. The following year saw punk rock spreading around the world. Punk quickly, though briefly, became a major cultural phenomenon in the United Kingdom. For the most part, punk took root in local scenes that tended to reject association with the mainstream. An associated punk subculture emerged, expressing youthful rebellion and characterized by distinctive clothing styles and a variety of anti-authoritarian ideologies.
<P> The origins of New York's punk rock scene can be traced back to such sources as late 1960s trash culture and an early 1970s underground rock movement centered on the Mercer Arts Center in Greenwich Village, where the New York Dolls performed. In early 1974, a new scene began to develop around the CBGB club, also in lower Manhattan. At its core was Television, described by critic John Walker as "the ultimate garage band with pretensions". Their influences ranged from the Velvet Underground to the staccato guitar work of Dr. Feelgood's Wilko Johnson. The band's bassist/singer, Richard Hell, created a look with cropped, ragged hair, ripped T-shirts, and black leather jackets credited as the basis for punk rock visual style. In April 1974, Patti Smith, a member of the Mercer Arts Center crowd and a friend of Hell's, came to CBGB for the first time to see the band perform. A veteran of independent theater and performance poetry, Smith was developing an intellectual, feminist take on rock 'n' roll. On June 5, she recorded the single "Hey Joe"/"Piss Factory", featuring Television guitarist Tom Verlaine; released on her own Mer Records label, it heralded the scene's do it yourself (DIY) ethic and has often been cited as the first punk rock record. By August, Smith and Television were gigging together at another downtown New York club, Max's Kansas City.
<P> Around the mid to late 1970s New York City was arguably the birthplace of punk rock with the Ramones and the scene at CBGB. While the next generation of punks emerged in places like Washington, D.C. (Bad Brains and Minor Threat) and California (Black Flag, Dead Kennedys) in the early 1980s, NYC was initially quiet. A few bands like The Mad and The Stimulators hinted at a new direction. The Stimulators featured Harley Flanagan on drums, and attracted some of what would become the NYHC scene to their shows. The Stimulators and the Mad also made friends with Bad Brains, and gave the latter places to stay in town. In late 1980, Vinnie Stigma formed Agnostic Front, a long-running group who became known as the godfathers of New York Hardcore and arguably its most crucial band. Around the same time the term "hardcore" started being used instead of "punk rock" and bands like Cro-Mags, Murphy's Law and Warzone emerged, further cementing the blueprint for the characteristic NYHC sound. Roger Miret of Agnostic Front asserts that "We started using the term 'hardcore' because we wanted to separate ourselves from the punk scene that was happening in New York at the time ... We were rougher kids living in the streets. It had a rougher edge". The early scene was documented on the 1982 "New York Thrash" compilation. Rock clubs like L'Amour's, A7, Max's, and the already established CBGB's quickly became crucial spots for this newly formed scene.
<P> In the mid-1970s, various American groups (some with ties to Downtown Manhattan's punk scene, including Television and Suicide) had begun expanding on the vocabulary of punk music. Midwestern groups such as Pere Ubu and Devo drew inspiration from the region's derelict industrial environments, employing conceptual art techniques, musique concrète and unconventional verbal styles that would presage the post-punk movement by several years. A variety of subsequent groups, including the Boston-based Mission of Burma and the New York-based Talking Heads, combined elements of punk with art school sensibilities. In 1978, the latter band began with British ambient pioneer and ex-Roxy Music member Brian Eno, experimenting with Dadaist lyrical techniques, electronic sounds, and African polyrhythms. San Francisco's vibrant post-punk scene was centered on such groups as Chrome, the Residents, Tuxedomoon and MX-80, whose influences extended to multimedia experimentation, cabaret and the dramatic theory of Antonin Artaud's Theater of Cruelty.
| answer: New York City is unquestionably the birthplace of punk rock including the term (via *Punk* magazine, founded by *Please Kill Me* author Legs McNeil).Without the Ramones' first UK tour there are about half a dozen prominent UK punk bands who probably wouldn't exist, as they cite that band and tour as their direct inspiration to start a band.From the first paragraph of the Wikipedia entry on Ramones: > Despite achieving only limited commercial success, the band was a major influence on the punk rock movement in both the United States and, *perhaps to a greater extent, in the United Kingdom.* (emphasis mine)The UK scene can be credited with injecting a political (or just as often, faux or pseudo-political) energy as well as many of the fashion and stylistic elements (ransom note printing, etc.) associated with punk via Vivienne Westwood and Malcolm McLaren.I must say I don't know as much about West Coast punk which was isolated compared to the NY and UK which were co-mingling quite a bit. Scenes were happening in LA and the Bay area, the midwest and DC scenes are also worth noting.Eventually scenes popped up across the US, a DIY touring network of small town all ages venues that persisted even as punk slid from mainstream view but paved the way for the "2nd wave" in the 90s (Green Day, Blink), the Warped tour, etc.*Punk Diary 1970-1979* by George Gimarc (St. Martin's Press, may be out of print? I've seen it used and probably on amazon) is a good reference of the early UK scene.The documentary "Another State of Mind" covers the early Southern California punk scene with a bit on DC and Steven Blush's *American Hardcore* (Feral House, 2001) is yet another "oral history" style text that is pretty broad but captures a lot of perspectives. |
95,520 | 1ojnmb | How can tree roots grow under concrete slabs, sidewalks or roads; where dirt is compacted, there is almost no water, oxygen and organic material? | The organic material comes from water and carbon dioxide. Trees build almost all of their mass from [synthesizing](_URL_1_) simple sugar molecules. They use those as energy sources for their cells and as building blocks in large chain molecules such as [cellulose](_URL_2_). These chain molecules are what constitute the bulk of a tree's mass.The compacted dirt, when viewed at the scale of really small structures like the [growing tip of a root](_URL_0_), is not nearly as impenetrable as it may seem. The cells can exert large amounts of pressure on the surrounding material and penetrate it. I believe the same principle (slow and steady build-up of pressure) applies to grown roots cracking concrete and rocks.For the other parts of your question I don't have a very good idea. | [
"The organic material comes from water and carbon dioxide. Trees build almost all of their mass from [synthesizing](_URL_1_) simple sugar molecules. They use those as energy sources for their cells and as building blocks in large chain molecules such as [cellulose](_URL_2_). These chain molecules are what constitut... | 1 | [] | 0 | <P> Permeable pavements may give urban trees the rooting space they need to grow to full size. A "structural-soil" pavement base combines structural aggregate with soil; a porous surface admits vital air and water to the rooting zone. This integrates healthy ecology and thriving cities, with the living tree canopy above, the city's traffic on the ground, and living tree roots below. The benefits of permeables on urban tree growth have not been conclusively demonstrated and many researchers have observed tree growth is not increased if construction practices compact materials before permeable pavements are installed.
<P> Previously the main problem facing the establishment of trees in paved areas is the lack of enough volume of soil for tree root growth. Soils under pavements are typically so compacted that it stops roots from growing. Older established trees with their roots under pavement grow poorly and often die. They can also cause pavement failure and displacement. Overall pavement preparation and repairs can shorten the life expectancy of a tree to 7–10 years where we could see them grow for at least 50 more years.
<P> The tree grows well in urban areas, and is very good for "sidewalk holes" along busy roads with a lot of traffic where most trees will not grow well. It can provide shade to counter the heat island effect of mainly-concrete areas, as well as habitat for urban animals such as lizards and birds.
<P> Structural Soil is a medium that can be compacted to pavement design and installation requirements while permitting root growth. It is a mixture of gap-graded gravels (mostly made of crushed stone) and soil (mineral content and organic content). It provides an integrated, root penetrable, high strength pavement system that shifts design away from individual tree pits.
<P> A living root bridge is formed by guiding the pliable roots of the "Ficus elastica" tree across a stream or river, and then allowing the roots to grow and strengthen over time until they can hold the weight of a human being. The young roots are sometimes tied or twisted together, and are often encouraged to combine with one another via the process of inosculation. As the "Ficus elastica" tree is well suited to anchoring itself to steep slopes and rocky surfaces, it is not difficult to encourage its roots to take hold on the opposite sides of river banks. As they are made from living, growing, organisms, the useful lifespan of any given living root bridge is variable. It is thought that, under ideal conditions, a root bridge can last for many hundreds of years. As long as the tree from which it is formed remains healthy, the bridge will naturally self-renew and self-strengthen as its component roots grow thicker.
<P> Roads, sidewalks and foundations can all suffer structural issues from tree roots. Several methods of control have been attempted, from barriers to encouraging growth in desirable directions. Selection of plants with root systems that will not conflict with nearby structures is the most effective method of damage control.
<P> Tree roots can heave and destroy concrete sidewalks and crush or clog buried pipes. The aerial roots of strangler fig have damaged ancient Mayan temples in Central America and the temple of Angkor Wat in Cambodia.
| question: How can tree roots grow under concrete slabs, sidewalks or roads; where dirt is compacted, there is almost no water, oxygen and organic material? context: <P> Permeable pavements may give urban trees the rooting space they need to grow to full size. A "structural-soil" pavement base combines structural aggregate with soil; a porous surface admits vital air and water to the rooting zone. This integrates healthy ecology and thriving cities, with the living tree canopy above, the city's traffic on the ground, and living tree roots below. The benefits of permeables on urban tree growth have not been conclusively demonstrated and many researchers have observed tree growth is not increased if construction practices compact materials before permeable pavements are installed.
<P> Previously the main problem facing the establishment of trees in paved areas is the lack of enough volume of soil for tree root growth. Soils under pavements are typically so compacted that it stops roots from growing. Older established trees with their roots under pavement grow poorly and often die. They can also cause pavement failure and displacement. Overall pavement preparation and repairs can shorten the life expectancy of a tree to 7–10 years where we could see them grow for at least 50 more years.
<P> The tree grows well in urban areas, and is very good for "sidewalk holes" along busy roads with a lot of traffic where most trees will not grow well. It can provide shade to counter the heat island effect of mainly-concrete areas, as well as habitat for urban animals such as lizards and birds.
<P> Structural Soil is a medium that can be compacted to pavement design and installation requirements while permitting root growth. It is a mixture of gap-graded gravels (mostly made of crushed stone) and soil (mineral content and organic content). It provides an integrated, root penetrable, high strength pavement system that shifts design away from individual tree pits.
<P> A living root bridge is formed by guiding the pliable roots of the "Ficus elastica" tree across a stream or river, and then allowing the roots to grow and strengthen over time until they can hold the weight of a human being. The young roots are sometimes tied or twisted together, and are often encouraged to combine with one another via the process of inosculation. As the "Ficus elastica" tree is well suited to anchoring itself to steep slopes and rocky surfaces, it is not difficult to encourage its roots to take hold on the opposite sides of river banks. As they are made from living, growing, organisms, the useful lifespan of any given living root bridge is variable. It is thought that, under ideal conditions, a root bridge can last for many hundreds of years. As long as the tree from which it is formed remains healthy, the bridge will naturally self-renew and self-strengthen as its component roots grow thicker.
<P> Roads, sidewalks and foundations can all suffer structural issues from tree roots. Several methods of control have been attempted, from barriers to encouraging growth in desirable directions. Selection of plants with root systems that will not conflict with nearby structures is the most effective method of damage control.
<P> Tree roots can heave and destroy concrete sidewalks and crush or clog buried pipes. The aerial roots of strangler fig have damaged ancient Mayan temples in Central America and the temple of Angkor Wat in Cambodia.
| answer: The organic material comes from water and carbon dioxide. Trees build almost all of their mass by [synthesizing](_URL_1_) simple sugar molecules. They use those as energy sources for their cells and as building blocks in large chain molecules such as [cellulose](_URL_2_). These chain molecules are what constitute the bulk of a tree's mass. The compacted dirt, when viewed at the scale of really small structures like the [growing tip of a root](_URL_0_), is not nearly as impenetrable as it may seem. The cells can exert large amounts of pressure on the surrounding material and penetrate it. I believe the same principle (a slow and steady build-up of pressure) applies to grown roots cracking concrete and rocks. For the other parts of your question I don't have a very good idea. |
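Editor's aside on the answer above: its central claim, that a tree builds its bulk from water and carbon dioxide rather than from soil, can be checked with simple photosynthesis stoichiometry. The sketch below is a minimal, illustrative calculation; the molar masses are standard values, and the 1 kg example quantity is an assumption, not something taken from the row.

```python
# Rough mass balance for photosynthesis: 6 CO2 + 6 H2O -> C6H12O6 + 6 O2.
# Shows that the carbon and hydrogen in a tree's sugars (and in cellulose,
# which is built from glucose units) come from air and water, not from soil.

M_CO2 = 44.01       # g/mol
M_H2O = 18.02       # g/mol
M_GLUCOSE = 180.16  # g/mol (C6H12O6)
M_O2 = 32.00        # g/mol

def inputs_for_glucose(grams_glucose: float) -> dict:
    """Mass of CO2 and H2O consumed, and O2 released, to make a given mass of glucose."""
    mol = grams_glucose / M_GLUCOSE
    return {
        "co2_g": 6 * mol * M_CO2,
        "h2o_g": 6 * mol * M_H2O,
        "o2_released_g": 6 * mol * M_O2,
    }

if __name__ == "__main__":
    # Building 1 kg of glucose consumes roughly 1.47 kg of CO2 and 0.60 kg of
    # water, and releases about 1.07 kg of O2.
    print(inputs_for_glucose(1000.0))
```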
175,184 | 4covkn | What really happens when I "get used" to cold water? | The thermoreceptors in your skin send signals towards your brain when there is a *change in temperature*. When you have exposed yourself to cold water, you feel the immediate change in temperature at the surface of your skin. At this point, your sympathetic nervous system (which controls the unconscious 'fight or flight' responses) will stimulate the release of hormones which begin to cause vasoconstriction in your skin, arms and legs. Your extremities will reduce in temperature, and the temperature gradient between the water and your core will reduce, along with the feeling of 'cold'. Heat flow is proportional to temperature gradient, so you will actually lose less heat. Diminished skin and extremity blood flow increases the thermal insulation of those superficial tissues more than 300% [[1](_URL_0_)]. | [
"You perceive cold from the rate of change in thermal energy. The greater the difference in energy (temperature) two objects are, the faster the rate of change in thermal energy occurs. \n\nThermal energy flows from high to low. So your body to the water. When you initially jump in the water the difference in energ... | 8 | [
"The thermoreceptors in your skin send signals towards your brain when there is a *change in temperature*.\n\nWhen you have exposed yourself to cold water, you feel the immediate change in temperature at the surface of your skin. At this point, your sympathetic nervous system (which controls the unconscious 'fight ... | 6 | <P> The HELP is an attempt to reduce heat loss enough to lessen the effect of hypothermia. Hypothermia is essentially a condition where bodily temperature drops too low to perform normal voluntary or involuntary functions. Cold water causes "immersion hypothermia", which can cause damage to extremities or the body's core, including unconsciousness or death.
<P> Cold water dousing is used to "shock" the body into a kind of fever. The body's reaction is similar to the mammalian diving reflex or possibly temperature biofeedback. Several meditative and awareness techniques seem to share similar effects with elevated temperature, such as Tummo. Compare cold water dousing with ice swimming.
<P> One report suggested that if ice water is circulating, it's even colder such that the water will be colder than measured by a thermometer, and that athletes should avoid overexposure. Physical therapist Nikki Kimball explained a way to make the bath more endurable:
<P> Cold shock response is the physiological response of organisms to sudden cold, especially cold water, and is a common cause of death from immersion in very cold water, such as by falling through thin ice. The immediate shock of the cold causes involuntary inhalation, which if underwater can result in drowning. The cold water can also cause heart attack due to vasoconstriction; the heart has to work harder to pump the same volume of blood throughout the body, and for people with heart disease, this additional workload can cause the heart to go into arrest. A person who survives the initial minute after falling into cold water can survive for at least thirty minutes provided they do not drown. The ability to stay afloat declines substantially after about ten minutes as the chilled muscles lose strength and co-ordination.
<P> The simplest use of cold water is for air conditioning: using the cold water itself to cool air saves the energy that would be used by the compressors for traditional refrigeration. Another use could be to replace expensive desalination plants. When cold water passes through a pipe surrounded by humid air, condensation results. The condensate is pure water, suitable for humans to drink or for crop irrigation. Via a technology called Ocean thermal energy conversion, the temperature difference can be turned into electricity.
<P> For peripheral hyperhidrosis, some chronic sufferers have found relief by simply ingesting crushed ice water. Ice water helps to cool excessive body heat during its transport through the blood vessels to the extremities, effectively lowering overall body temperature to normal levels within ten to thirty minutes.
<P> Water supply and treatment is especially challenging in the cold. US Army tactical water purification systems require a winter kit to operate between . Water storage may require heating. Water source exploitation may require augering through ice, if shaped charges are not available. The water distribution system can be subject to freezing and clogging from frazil ice. Where chemical treatment is used, it takes longer to dissolve in the treated water.
| question: What really happens when I "get used" to cold water? context: <P> The HELP is an attempt to reduce heat loss enough to lessen the effect of hypothermia. Hypothermia is essentially a condition where bodily temperature drops too low to perform normal voluntary or involuntary functions. Cold water causes "immersion hypothermia", which can cause damage to extremities or the body's core, including unconsciousness or death.
<P> Cold water dousing is used to "shock" the body into a kind of fever. The body's reaction is similar to the mammalian diving reflex or possibly temperature biofeedback. Several meditative and awareness techniques seem to share similar effects with elevated temperature, such as Tummo. Compare cold water dousing with ice swimming.
<P> One report suggested that if ice water is circulating, it's even colder such that the water will be colder than measured by a thermometer, and that athletes should avoid overexposure. Physical therapist Nikki Kimball explained a way to make the bath more endurable:
<P> Cold shock response is the physiological response of organisms to sudden cold, especially cold water, and is a common cause of death from immersion in very cold water, such as by falling through thin ice. The immediate shock of the cold causes involuntary inhalation, which if underwater can result in drowning. The cold water can also cause heart attack due to vasoconstriction; the heart has to work harder to pump the same volume of blood throughout the body, and for people with heart disease, this additional workload can cause the heart to go into arrest. A person who survives the initial minute after falling into cold water can survive for at least thirty minutes provided they do not drown. The ability to stay afloat declines substantially after about ten minutes as the chilled muscles lose strength and co-ordination.
<P> The simplest use of cold water is for air conditioning: using the cold water itself to cool air saves the energy that would be used by the compressors for traditional refrigeration. Another use could be to replace expensive desalination plants. When cold water passes through a pipe surrounded by humid air, condensation results. The condensate is pure water, suitable for humans to drink or for crop irrigation. Via a technology called Ocean thermal energy conversion, the temperature difference can be turned into electricity.
<P> For peripheral hyperhidrosis, some chronic sufferers have found relief by simply ingesting crushed ice water. Ice water helps to cool excessive body heat during its transport through the blood vessels to the extremities, effectively lowering overall body temperature to normal levels within ten to thirty minutes.
<P> Water supply and treatment is especially challenging in the cold. US Army tactical water purification systems require a winter kit to operate between . Water storage may require heating. Water source exploitation may require augering through ice, if shaped charges are not available. The water distribution system can be subject to freezing and clogging from frazil ice. Where chemical treatment is used, it takes longer to dissolve in the treated water.
| answer: The thermoreceptors in your skin send signals towards your brain when there is a *change in temperature*. When you have exposed yourself to cold water, you feel the immediate change in temperature at the surface of your skin. At this point, your sympathetic nervous system (which controls the unconscious 'fight or flight' responses) will stimulate the release of hormones which begin to cause vasoconstriction in your skin, arms and legs. Your extremities will reduce in temperature, and the temperature gradient between the water and your core will reduce, along with the feeling of 'cold'. Heat flow is proportional to temperature gradient, so you will actually lose less heat. Diminished skin and extremity blood flow increases the thermal insulation of those superficial tissues more than 300% [[1](_URL_0_)]. |
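Editor's aside on the answer above: the statement that heat flow is proportional to the temperature gradient, and that vasoconstriction raises the insulation of the superficial tissues, can be illustrated with a toy conduction model. The body surface area and baseline tissue resistance below are made-up placeholder values; only the "300% increase in insulation" echoes the figure cited in the answer, so read the numbers qualitatively.

```python
# Toy model of conductive heat loss: Q = A * (T_core - T_water) / R,
# where R is the thermal resistance (insulation) of the superficial tissues.
# A smaller temperature difference or a larger R both reduce heat loss.

def heat_loss_watts(t_core_c: float, t_water_c: float,
                    area_m2: float, r_tissue: float) -> float:
    """Steady-state heat flow through skin and superficial tissue (simplified)."""
    return area_m2 * (t_core_c - t_water_c) / r_tissue

AREA = 1.8         # m^2, assumed adult body surface area
R_BASELINE = 0.05  # m^2*K/W, placeholder resistance with full skin blood flow

if __name__ == "__main__":
    before = heat_loss_watts(37.0, 15.0, AREA, R_BASELINE)
    # Vasoconstriction: insulation increased by ~300%, i.e. 4x the baseline R.
    after = heat_loss_watts(37.0, 15.0, AREA, 4 * R_BASELINE)
    print(f"heat loss before vasoconstriction: {before:.0f} W")
    print(f"heat loss after vasoconstriction:  {after:.0f} W")
```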
179,205 | 6shlcc | how am i able to identify a tv show as being a soap opera simply by watching for a few seconds? what are they doing with the camera that's so unusual? | Soap operas are filmed at 60 Hz and your TV displays 60 Hz. If you watch a soap opera at 30 Hz the effect goes away. Film with your phone at 60 and it will cause the same effect. Or watch movies at 120 Hz. The blur and noise effect goes away. Also, the awful production sets give them away. | [
"Soap operas are film at 60hz and your tv displays 60hz. If you watch a soap opera at 30hz the effect goes away. Film with your phone at 60 and it will cause the same effect. Ot watch movies at 120hz. The blur and noise effect goes away. Also the aweful production sets give them away."
] | 1 | [] | 0 | <P> The show's format, unchanged since its debut, began with a 15-minute daily recap of soap opera serials aired that day, with the last 45 minutes being a talk/variety-type show. In the latter portion of the show, subjects could vary from show to show. In one show, a famous TV, movie, or newsmaking celebrity may make an appearance as a special guest (usually discussing their latest work, etc.), while the very next day could focus on a guest who has survived against all odds.
<P> Daytime soap operas frequently present clip shows as a way to commemorate a show's milestone anniversary or the death of a long-running character. Many fans take advantage of the shows in order to see vintage clips of a particular soap opera. One example was an episode of "As the World Turns" in which seven of the longest running characters were stranded in a forest and remembered some of their best moments, all in honor of "AtWT"'s 50th anniversary.
<P> This TV series showed film clips, commercials, television clips and screen tests of celebrities before they became famous, divided into segments spotlighting different genres: "Song of the Week" featured a musical performance by a then-unknown celebrity, while "As the Star Turns" showed celebrities who got their start in minor soap opera roles, and each episode ended with a "Viewer Mail" segment, in which Baio would read letters from home viewers who wanted to see a particular celebrity early in his or her career.
<P> Somewhat inspired by "Monty Python's Flying Circus", each episode of "No Soap, Radio" was filled with sight gags, blackouts, and non-sequiturs. The show would frequently cut away to "Special Reports" right in the middle of a scene, with a fictitious news anchor detailing an improbable story. At other times, characters would watch a television commercial that would suddenly become the focus of a scene. Still other times, doors within the hotel might be opened to reveal any sort of environment from a business to a national park, and entire scenes would play out in these "hotel rooms" with no seeming connection to the main plot.
<P> Unlike the typical studio based soap opera, the series was shot entirely on location on Sparrow Lake in Muskoka and Whitevale, Ontario, which gave the show an authentic looking background. It was produced entirely in digital format, to reduce production costs, and allow for easier editing when adapted for foreign markets. The show's production company is Breakthrough Entertainment.
<P> British soaps have never credited cast members or crew members in their opening titles nor do they show video or images of the cast members. However, in recent years these programmes have listed the writers, producers and directors over the first scene of the episode and episode titles if they apply. The opening titles of "Hollyoaks" feature regular characters in short (less than one second) scenes intended to capture their character.
<P> UK soap operas are shot on videotape in the studio using a multi-camera setup. In their early years "Coronation Street" and "Emmerdale" used 16 mm film for footage shot on location. Since the 1980s UK soap operas have routinely featured scenes shot outdoors in each episode. This footage is shot on videotape on a purpose-built outdoor set that represents the community that the soap focuses on.
| question: how am i able to identify a tv show as being a soap opera simply by watching for a few seconds? what are they doing with the camera that's so unusual? context: <P> The show's format, unchanged since its debut, began with a 15-minute daily recap of soap opera serials aired that day, with the last 45 minutes being a talk/variety-type show. In the latter portion of the show, subjects could vary from show to show. In one show, a famous TV, movie, or newsmaking celebrity may make an appearance as a special guest (usually discussing their latest work, etc.), while the very next day could focus on a guest who has survived against all odds.
<P> Daytime soap operas frequently present clip shows as a way to commemorate a show's milestone anniversary or the death of a long-running character. Many fans take advantage of the shows in order to see vintage clips of a particular soap opera. One example was an episode of "As the World Turns" in which seven of the longest running characters were stranded in a forest and remembered some of their best moments, all in honor of "AtWT"'s 50th anniversary.
<P> This TV series showed film clips, commercials, television clips and screen tests of celebrities before they became famous, divided into segments spotlighting different genres: "Song of the Week" featured a musical performance by a then-unknown celebrity, while "As the Star Turns" showed celebrities who got their start in minor soap opera roles, and each episode ended with a "Viewer Mail" segment, in which Baio would read letters from home viewers who wanted to see a particular celebrity early in his or her career.
<P> Somewhat inspired by "Monty Python's Flying Circus", each episode of "No Soap, Radio" was filled with sight gags, blackouts, and non-sequiturs. The show would frequently cut away to "Special Reports" right in the middle of a scene, with a fictitious news anchor detailing an improbable story. At other times, characters would watch a television commercial that would suddenly become the focus of a scene. Still other times, doors within the hotel might be opened to reveal any sort of environment from a business to a national park, and entire scenes would play out in these "hotel rooms" with no seeming connection to the main plot.
<P> Unlike the typical studio based soap opera, the series was shot entirely on location on Sparrow Lake in Muskoka and Whitevale, Ontario, which gave the show an authentic looking background. It was produced entirely in digital format, to reduce production costs, and allow for easier editing when adapted for foreign markets. The show's production company is Breakthrough Entertainment.
<P> British soaps have never credited cast members or crew members in their opening titles nor do they show video or images of the cast members. However, in recent years these programmes have listed the writers, producers and directors over the first scene of the episode and episode titles if they apply. The opening titles of "Hollyoaks" feature regular characters in short (less than one second) scenes intended to capture their character.
<P> UK soap operas are shot on videotape in the studio using a multi-camera setup. In their early years "Coronation Street" and "Emmerdale" used 16 mm film for footage shot on location. Since the 1980s UK soap operas have routinely featured scenes shot outdoors in each episode. This footage is shot on videotape on a purpose-built outdoor set that represents the community that the soap focuses on.
| answer: Soap operas are filmed at 60 Hz and your TV displays 60 Hz. If you watch a soap opera at 30 Hz the effect goes away. Film with your phone at 60 and it will cause the same effect. Or watch movies at 120 Hz. The blur and noise effect goes away. Also, the awful production sets give them away. |
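Editor's aside on the answer above: the frame-rate point can be made concrete with a bit of arithmetic. 24 fps film has to be stretched onto a 60 Hz display by repeating frames unevenly (3:2 pulldown), whereas 60 Hz video maps one new frame to every refresh, which is a large part of the hyper-smooth "soap opera" look. The function below only does that repetition arithmetic; it is an illustrative sketch, not a claim taken from the row itself.

```python
# How source frames land on a 60 Hz display.
# 60 fps video: one new frame per refresh. 24 fps film: frames are repeated
# in an uneven 3-2-3-2 pattern (3:2 pulldown) to fill 60 refreshes per second.

def refreshes_per_source_frame(source_fps: int, display_hz: int = 60) -> list:
    """For each source frame within one second, count the display refreshes that show it."""
    counts = [0] * source_fps
    for refresh in range(display_hz):
        # Index of the source frame that is current at this refresh instant.
        frame = refresh * source_fps // display_hz
        counts[frame] += 1
    return counts

if __name__ == "__main__":
    print(refreshes_per_source_frame(60))  # [1, 1, 1, ...]   -> perfectly even motion
    print(refreshes_per_source_frame(24))  # [3, 2, 3, 2, ...] -> uneven, film-like judder
```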
93,082 | ehvap3 | Why did the Spanish destroy almost all of the Mayan's records? | I think you are referring to the famous auto de fe of Maní organised by Fray Diego de Landa, where not only the Mayan codices were burnt, but also many statues, idols, and any liturgical implement Fray Diego and his enforcers could get their hands on. The motivation, as you comment, was religious, the idea being to suppress pagan religious practices, which 50 years after the conquest were still very prevalent in the Yucatán peninsula. When the exploration of America and its conquest started, the Spanish monarchs were under the obligation to christianise those new territories, as clearly stated by the Inter Cetera papal bulls, which were incorporated into the common practice of evangelisation of the Indies. This crystallised in the Spanish laws such as the Laws of Burgos from 1512, ratified and expanded in 1513 in Valladolid. One of the obligations the encomenderos had was to evangelise the natives, as well as teach them to write, read, and the "four rules" (addition, subtraction, multiplication, and division). Failing to fulfill these duties, or mistreating the natives, resulted, as per the laws, in the loss of the encomienda, complete with a fine of 50 ducados per native affected. These obligations made sense in context, as the new christians would be full subjects of the Crown, with the same rights and duties as anyone from Castile. We can describe the encomienda as a transitory regime towards a late feudal or early modern society. Back to the point, Fray Diego de Landa was extremely thorough in suppressing pagan practices, not only out of a sense of fulfilling the duties he had, but also because he had been very influenced by the teachings of Cardinal Francisco Jiménez de Cisneros. This cardinal had been just as thorough in the Kingdom of Granada, to the point of organising a massive bonfire of books in Arabic in order to eradicate Islam from Granada, as Fray Hernando de Talavera's efforts in converting people through reasoning, good will, and charitable works had proved to be not very fruitful. Cisneros opted for a stronger approach, confiscating all the books in Arabic he could and burning them, except for the books on medical matters, which he took and transferred to the university he had recently created in Alcalá de Henares. Landa, in the spirit of Cisneros, burnt anything heretical. However, being an intellectual and a curious man, he annotated many of the drawings and hieroglyphs from the Mayan codices, which were later useful in deciphering the very few surviving Mayan codices, as well as the Mayan inscriptions. Other Spanish scholars of the next century had a lot of interest in Mayan language and compiled grammars and vocabularies of the language, but transliterated into Spanish writing customs. As for the sources of information, Diego de Landa himself wrote a "Relación de las cosas de Yucatán", where he gives a detailed account of all of this. There is also the juicio de residencia or process he was subject to after his tenure, where many more details are consigned. I shall also recommend other Spanish sources about the conquest and evangelisation of America, such as Gonzalo Fernández de Oviedo's "Historia General y Natural de las Indias" (1535), Bartolomé de las Casas' "Historia General de las Indias" (1559), and José de Acosta's "Historia natural y moral de las Indias" (1590). 
They also contain the impressions the Spanish had of the cultures they encountered. As for primary sources on the Spanish laws for America, there is the massive "Cedulario Indiano" or "Cedulario de Encinas", from the very late XVI century, accessible online in a very good edition published for free in the BOE (State's Official Bulletin). This cedulario or legal compilation contains every single law or ordinance that concerned Spanish America. Sources: Landa, Diego de (1566), *Relación de las cosas del Yucatán*. Modern edition by the Association of Mayaists, [here](_URL_3_). Fernández de Oviedo, Gonzalo (1535), *Historia General y Natural de las Indias*. Modern edition by José Amador de los Ríos (Madrid: BAE, 1959). Digitisation of the princeps edition, [here](_URL_1_). Casas, Bartolomé de las (1559), *Historia general de las Indias*. Modern edition available in Biblioteca Virtual Miguel de Cervantes. Digitisation of the manuscript, [here](_URL_4_). Acosta, José de (1590), *Historia natural y moral de las Indias*. Modern edition by Fermín del Pino (Madrid: Consejo Superior de Investigaciones Científicas, 2008). Digitisation of the princeps edition, [here](_URL_2_). García Oro, José (2002), *Cisneros: El cardenal de España*. Barcelona: Ariel. Morley, Sylvanus Griswold (1915), *An introduction to the study of the Mayan hieroglyphs*. Washington: Government Printing Office. Available on Project Gutenberg, [here](_URL_0_). Oroza Díaz, Jaime (1984). *Historia de Yucatán*. Mérida: Ediciones de la Universidad Autónoma de Yucatán | [
"I think you are referring to the famous auto de fe of Maní organised by Fray Diego de Landa, where not only the Mayan codices were burnt, but also many statues, idols, and any lithurgical implement Fray Diego and his enforcers could get their hands on. The motivation, as you comment, was religious, the idea being ... | 1 | [
"I think you are referring to the famous auto de fe of Maní organised by Fray Diego de Landa, where not only the Mayan codices were burnt, but also many statues, idols, and any lithurgical implement Fray Diego and his enforcers could get their hands on. The motivation, as you comment, was religious, the idea being ... | 1 | <P> The interpretation of Maya hieroglyphs was lost as a result of the Spanish Conquest of Central America. However, recent work by Maya epigraphers and linguists has yielded a considerable amount of information on this complex writing system.
<P> Because the Spanish were now in power, native culture and religion were forbidden. The Spanish even went as far as burning the Maya Codices (like books). These codices contained information about astrology, religion, Gods, and rituals. There are four codices known to exist today; these are the Dresden Codex, Paris Codex, Madrid Codex, and HI Codex. The Spanish also melted down countless pieces of golden artwork so they could bring the gold back to Spain and destroyed countless pieces of art that they viewed as unchristian.
<P> The Catholic Church and colonial officials, notably Bishop Diego de Landa, destroyed Maya texts wherever they found them, and with them the knowledge of Maya writing, but by chance three uncontested pre-Columbian books dated to the Postclassic period have been preserved. These are known as the "Madrid Codex", the "Dresden Codex" and the "Paris Codex". A few pages survive from a fourth, the "Grolier Codex", whose authenticity is disputed. Archaeology conducted at Maya sites often reveals other fragments, rectangular lumps of plaster and paint chips which were codices; these tantalizing remains are, however, too severely damaged for any inscriptions to have survived, most of the organic material having decayed. In reference to the few extant Maya writings, Michael D. Coe stated:
<P> Only four Maya codices are known to have survived the conquistadors. Most surviving texts are found on pottery recovered from Maya tombs, or from monuments and stelae erected in sites which were abandoned or buried before the arrival of the Spanish.
<P> Unfortunately, the Spanish arrival meant that preexisting Mesoamerican books and libraries were destroyed by conquistadores and missionaries. Only 15 codices survived after 1521; these include the Borgia codex, the Vatican B codex, and the Tro-Cortesiano codex. However, codices were slow to die out; Spanish-language, bilingual, and indigenous-language codices continued to be produced, with the list of materials changing to include paper and the subjects focusing on the Christian religion and tribute to colonial administrators. One such example is the Codex Mendoza; it contains ethnography of the Aztecs with a commentary by Spanish priests and was created in 1541 as a gift for Charles V of Spain. The first Mexican printing press was established in 1539 by Juan Pablos Due to the lack of widespread Spanish literacy, most printed items were stored in the library of the university of Mexico City or in the private libraries of clergy, noblemen, and government officials. Sor Juana Inés de la Cruz was one of the intellectuals of Mexico during the late 15th century. The Carmelite nun used a 4,000-volume library established by her grandfather to further her education; she corresponded with Sir Isaac Newton and was also renowned for her skill in poetry. Unfortunately, Sor Juana became embroiled in a battle with Church politics in 1690; although she passionately defended the right of women to an education, she was banned from writing and her library in 1691, dying four years later.
<P> Some scholars however dispute the claim that in the early days of Spanish intrusion, priests in their zealous rage against paganism destroyed all existing records, as well as all forms of writing and art works, regarding ancient Philippine folk heroes.
<P> Although many indigenous manuscripts have been lost or destroyed, texts known as Aztec codices, Mayan codices, and Mixtec codices still survive and are of intense interest to scholars of the prehispanic era.
| question: Why did the Spanish destroy almost all of the Mayan's records? context: <P> The interpretation of Maya hieroglyphs was lost as a result of the Spanish Conquest of Central America. However, recent work by Maya epigraphers and linguists has yielded a considerable amount of information on this complex writing system.
<P> Because the Spanish were now in power, native culture and religion were forbidden. The Spanish even went as far as burning the Maya Codices (like books). These codices contained information about astrology, religion, Gods, and rituals. There are four codices known to exist today; these are the Dresden Codex, Paris Codex, Madrid Codex, and HI Codex. The Spanish also melted down countless pieces of golden artwork so they could bring the gold back to Spain and destroyed countless pieces of art that they viewed as unchristian.
<P> The Catholic Church and colonial officials, notably Bishop Diego de Landa, destroyed Maya texts wherever they found them, and with them the knowledge of Maya writing, but by chance three uncontested pre-Columbian books dated to the Postclassic period have been preserved. These are known as the "Madrid Codex", the "Dresden Codex" and the "Paris Codex". A few pages survive from a fourth, the "Grolier Codex", whose authenticity is disputed. Archaeology conducted at Maya sites often reveals other fragments, rectangular lumps of plaster and paint chips which were codices; these tantalizing remains are, however, too severely damaged for any inscriptions to have survived, most of the organic material having decayed. In reference to the few extant Maya writings, Michael D. Coe stated:
<P> Only four Maya codices are known to have survived the conquistadors. Most surviving texts are found on pottery recovered from Maya tombs, or from monuments and stelae erected in sites which were abandoned or buried before the arrival of the Spanish.
<P> Unfortunately, the Spanish arrival meant that preexisting Mesoamerican books and libraries were destroyed by conquistadores and missionaries. Only 15 codices survived after 1521; these include the Borgia codex, the Vatican B codex, and the Tro-Cortesiano codex. However, codices were slow to die out; Spanish-language, bilingual, and indigenous-language codices continued to be produced, with the list of materials changing to include paper and the subjects focusing on the Christian religion and tribute to colonial administrators. One such example is the Codex Mendoza; it contains ethnography of the Aztecs with a commentary by Spanish priests and was created in 1541 as a gift for Charles V of Spain. The first Mexican printing press was established in 1539 by Juan Pablos Due to the lack of widespread Spanish literacy, most printed items were stored in the library of the university of Mexico City or in the private libraries of clergy, noblemen, and government officials. Sor Juana Inés de la Cruz was one of the intellectuals of Mexico during the late 15th century. The Carmelite nun used a 4,000-volume library established by her grandfather to further her education; she corresponded with Sir Isaac Newton and was also renowned for her skill in poetry. Unfortunately, Sor Juana became embroiled in a battle with Church politics in 1690; although she passionately defended the right of women to an education, she was banned from writing and her library in 1691, dying four years later.
<P> Some scholars however dispute the claim that in the early days of Spanish intrusion, priests in their zealous rage against paganism destroyed all existing records, as well as all forms of writing and art works, regarding ancient Philippine folk heroes.
<P> Although many indigenous manuscripts have been lost or destroyed, texts known as Aztec codices, Mayan codices, and Mixtec codices still survive and are of intense interest to scholars of the prehispanic era.
| answer: I think you are referring to the famous auto de fe of Maní organised by Fray Diego de Landa, where not only the Mayan codices were burnt, but also many statues, idols, and any liturgical implement Fray Diego and his enforcers could get their hands on. The motivation, as you comment, was religious, the idea being to suppress pagan religious practices, which 50 years after the conquest were still very prevalent in the Yucatán peninsula. When the exploration of America and its conquest started, the Spanish monarchs were under the obligation to christianise those new territories, as clearly stated by the Inter Cetera papal bulls, which were incorporated into the common practice of evangelisation of the Indies. This crystallised in the Spanish laws such as the Laws of Burgos from 1512, ratified and expanded in 1513 in Valladolid. One of the obligations the encomenderos had was to evangelise the natives, as well as teach them to write, read, and the "four rules" (addition, subtraction, multiplication, and division). Failing to fulfill these duties, or mistreating the natives, resulted, as per the laws, in the loss of the encomienda, complete with a fine of 50 ducados per native affected. These obligations made sense in context, as the new christians would be full subjects of the Crown, with the same rights and duties as anyone from Castile. We can describe the encomienda as a transitory regime towards a late feudal or early modern society. Back to the point, Fray Diego de Landa was extremely thorough in suppressing pagan practices, not only out of a sense of fulfilling the duties he had, but also because he had been very influenced by the teachings of Cardinal Francisco Jiménez de Cisneros. This cardinal had been just as thorough in the Kingdom of Granada, to the point of organising a massive bonfire of books in Arabic in order to eradicate Islam from Granada, as Fray Hernando de Talavera's efforts in converting people through reasoning, good will, and charitable works had proved to be not very fruitful. Cisneros opted for a stronger approach, confiscating all the books in Arabic he could and burning them, except for the books on medical matters, which he took and transferred to the university he had recently created in Alcalá de Henares. Landa, in the spirit of Cisneros, burnt anything heretical. However, being an intellectual and a curious man, he annotated many of the drawings and hieroglyphs from the Mayan codices, which were later useful in deciphering the very few surviving Mayan codices, as well as the Mayan inscriptions. Other Spanish scholars of the next century had a lot of interest in Mayan language and compiled grammars and vocabularies of the language, but transliterated into Spanish writing customs. As for the sources of information, Diego de Landa himself wrote a "Relación de las cosas de Yucatán", where he gives a detailed account of all of this. There is also the juicio de residencia or process he was subject to after his tenure, where many more details are consigned. I shall also recommend other Spanish sources about the conquest and evangelisation of America, such as Gonzalo Fernández de Oviedo's "Historia General y Natural de las Indias" (1535), Bartolomé de las Casas' "Historia General de las Indias" (1559), and José de Acosta's "Historia natural y moral de las Indias" (1590). 
They also contain the impressions the Spanish had of the cultures they encountered. As for primary sources on the Spanish laws for America, there is the massive "Cedulario Indiano" or "Cedulario de Encinas", from the very late XVI century, accessible online in a very good edition published for free in the BOE (State's Official Bulletin). This cedulario or legal compilation contains every single law or ordinance that concerned Spanish America. Sources: Landa, Diego de (1566), *Relación de las cosas del Yucatán*. Modern edition by the Association of Mayaists, [here](_URL_3_). Fernández de Oviedo, Gonzalo (1535), *Historia General y Natural de las Indias*. Modern edition by José Amador de los Ríos (Madrid: BAE, 1959). Digitisation of the princeps edition, [here](_URL_1_). Casas, Bartolomé de las (1559), *Historia general de las Indias*. Modern edition available in Biblioteca Virtual Miguel de Cervantes. Digitisation of the manuscript, [here](_URL_4_). Acosta, José de (1590), *Historia natural y moral de las Indias*. Modern edition by Fermín del Pino (Madrid: Consejo Superior de Investigaciones Científicas, 2008). Digitisation of the princeps edition, [here](_URL_2_). García Oro, José (2002), *Cisneros: El cardenal de España*. Barcelona: Ariel. Morley, Sylvanus Griswold (1915), *An introduction to the study of the Mayan hieroglyphs*. Washington: Government Printing Office. Available on Project Gutenberg, [here](_URL_0_). Oroza Díaz, Jaime (1984). *Historia de Yucatán*. Mérida: Ediciones de la Universidad Autónoma de Yucatán |
158,059 | 23h2pq | If our eyes have red, green, and blue cones, how do we see yellow and orange? | Yellow light will stimulate both the red and green cones to a particular extent. The amount of stimulation will determine the exact color you see. This is true for every color you see; very rarely will you encounter a pure blue/red/green light source. | [
"Yellow light will stimulate both the red and green cones, to a particular extent. The amount of stimulation will determine the exact color you see. \n\nThis is true for every color you see, very rarely will you encounter pure blue/red/green light source. "
] | 1 | [] | 0 | <P> People who are colorblind have mutations in their genes that cause a loss of either red or green cones, and they therefore have a hard time distinguishing between colors. There are three kinds of cones in the human eye: red, green, and blue.
<P> An orange color is not as close to white. It doesn't activate the retina as much as yellow. Orange's complement is blue, which is that much closer to white than was violet. A red color is halfway between white and black. Red's complement is green which is also halfway between white and black. With red and green, the retina's qualitatively divided activity consists of two equal halves.
<P> For instance, yellow light uses different proportions of red and green, but little blue, so any hue depends on a mix of all three cones, for example, a strong blue, medium green, and low red. Moreover, the intensity of colors can be changed without changing their hues, since intensity depends on the frequency of discharge to the brain, as a blue-green can be brightened but retain the same hue. The system is not perfect, as it does not distinguish yellow from a red-green mixture, but can powerfully detect subtle environmental changes.
<P> In humans, the pigmentation of the iris varies from light brown to black, depending on the concentration of melanin in the iris pigment epithelium (located on the back of the iris), the melanin content within the iris stroma (located at the front of the iris), and the cellular density of the stroma. The appearance of blue and green, as well as hazel eyes, results from the Tyndall scattering of light in the stroma, a phenomenon similar to that which accounts for the blueness of the sky called Rayleigh scattering. Neither blue nor green pigments are ever present in the human iris or ocular fluid. Eye color is thus an instance of structural color and varies depending on the lighting conditions, especially for lighter-colored eyes.
<P> As with blue eyes, the color of green eyes does not result simply from the pigmentation of the iris. The green color is caused by the combination of: 1) an amber or light brown pigmentation in the stroma of the iris (which has a low or moderate concentration of melanin) with: 2) a blue shade created by the Rayleigh scattering of reflected light. Green eyes contain the yellowish pigment lipochrome.
<P> Protanopes, who are missing long wavelength sensitive cones, are unable to distinguish between colours in the green-yellow-red section of the electromagnetic spectrum. They find yellow, red and orange colours to have much lower brightness when compared to a trichromat. The dimming of these colours can result in confusion in many cases, such as when attempting to identify red traffic lights, which appear to be clear. Other colour perception issues include having trouble distinguishing yellows from reds and violet, lavender and purple from blue. In other cases, objects that reflect both red and blue light may appear to just be blue to these individuals.
<P> Red and green are two completely equal qualitative halves of the retina's activity. Orange is 2/3 of this activity, and its complement, blue, is only 1/3. Yellow is ¾ of the full activity, and its complement, violet, is only ¼.
| question: If our eyes have red, green, and blue cones, how do we see yellow and orange? context: <P> People who are colorblind have mutations in their genes that cause a loss of either red or green cones, and they therefore have a hard time distinguishing between colors. There are three kinds of cones in the human eye: red, green, and blue.
<P> An orange color is not as close to white. It doesn't activate the retina as much as yellow. Orange's complement is blue, which is that much closer to white than was violet. A red color is halfway between white and black. Red's complement is green which is also halfway between white and black. With red and green, the retina's qualitatively divided activity consists of two equal halves.
<P> For instance, yellow light uses different proportions of red and green, but little blue, so any hue depends on a mix of all three cones, for example, a strong blue, medium green, and low red. Moreover, the intensity of colors can be changed without changing their hues, since intensity depends on the frequency of discharge to the brain, as a blue-green can be brightened but retain the same hue. The system is not perfect, as it does not distinguish yellow from a red-green mixture, but can powerfully detect subtle environmental changes.
<P> In humans, the pigmentation of the iris varies from light brown to black, depending on the concentration of melanin in the iris pigment epithelium (located on the back of the iris), the melanin content within the iris stroma (located at the front of the iris), and the cellular density of the stroma. The appearance of blue and green, as well as hazel eyes, results from the Tyndall scattering of light in the stroma, a phenomenon similar to that which accounts for the blueness of the sky called Rayleigh scattering. Neither blue nor green pigments are ever present in the human iris or ocular fluid. Eye color is thus an instance of structural color and varies depending on the lighting conditions, especially for lighter-colored eyes.
<P> As with blue eyes, the color of green eyes does not result simply from the pigmentation of the iris. The green color is caused by the combination of: 1) an amber or light brown pigmentation in the stroma of the iris (which has a low or moderate concentration of melanin) with: 2) a blue shade created by the Rayleigh scattering of reflected light. Green eyes contain the yellowish pigment lipochrome.
<P> Protanopes, who are missing long wavelength sensitive cones, are unable to distinguish between colours in the green-yellow-red section of the electromagnetic spectrum. They find yellow, red and orange colours to have much lower brightness when compared to a trichromat. The dimming of these colours can result in confusion in many cases, such as when attempting to identify red traffic lights, which appear to be clear. Other colour perception issues include having trouble distinguishing yellows from reds and violet, lavender and purple from blue. In other cases, objects that reflect both red and blue light may appear to just be blue to these individuals.
<P> Red and green are two completely equal qualitative halves of the retina's activity. Orange is 2/3 of this activity, and its complement, blue, is only 1/3. Yellow is ¾ of the full activity, and its complement, violet, is only ¼.
| answer: Yellow light will stimulate both the red and green cones to a particular extent. The amount of stimulation will determine the exact color you see. This is true for every color you see; very rarely will you encounter a pure blue/red/green light source. |
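Editor's aside on the answer above: it amounts to saying that perceived hue is a ratio of three cone responses, so "yellow" is simply light that drives the L ("red") and M ("green") cones strongly and the S ("blue") cone hardly at all. The Gaussian curves below are crude stand-ins for the real cone sensitivity functions; only the approximate peak wavelengths are standard figures, the widths are assumptions, and the output should be read qualitatively.

```python
import math

# Crude Gaussian stand-ins for the L ("red"), M ("green") and S ("blue") cone
# sensitivities. Real cone fundamentals are not Gaussian; this only illustrates
# that ~580 nm ("yellow") light excites L and M strongly and S barely at all.

CONES = {  # peak wavelength (nm), width (nm) -- widths are assumed values
    "L": (560.0, 50.0),
    "M": (530.0, 45.0),
    "S": (420.0, 35.0),
}

def cone_response(wavelength_nm: float) -> dict:
    """Relative response of each cone type to monochromatic light of one wavelength."""
    return {
        name: math.exp(-((wavelength_nm - peak) ** 2) / (2 * width ** 2))
        for name, (peak, width) in CONES.items()
    }

if __name__ == "__main__":
    for wavelength, label in [(580, "yellow"), (620, "orange"), (530, "green")]:
        responses = {k: round(v, 2) for k, v in cone_response(wavelength).items()}
        print(label, responses)
    # Yellow: L and M both high, S near zero -> the brain reads "L+M, no S" as yellow.
```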
125,108 | 3jis2w | Some words change their meanings over time based on popular usage (e.g. literally/figuratively). Does this happen often? Does it happen in languages other than English? | Yes to both your questions: [semantic drift](_URL_0_) is one of the main ways in which language change occurs. It's the main reason why the word *gay* has changed its meaning so drastically in the last 50 years, and also why the word *starve* means to die specifically of hunger and not just generally 'to die' as it does in German (*sterben*). The non-English example of semantic drift that comes to me off the top of my head is the word *lecker*, which in German means something like *delicious* and refers specifically to food, whereas in Dutch/Afrikaans it has evolved to just generally mean great/cool (so a German might be a little confused as to why Dutch people wish each other a "Delicious New Year" on the 1st of January). I've also been told that semantic drift is one of the main barriers to Finnish people understanding Estonians, since a word in Estonian very often means something completely different in Finnish. But that's entirely anecdotal information. | [
"Yes to both your questions, [semantic drift](_URL_0_) is one of the main ways in which language change occurs. It's the main reason why the word *gay* has changed its meaning so drastically in the last 50 years, and also why the word *starve* means to die specifically of hunger and not just generally 'to die' as i... | 1 | [
"Yes to both your questions, [semantic drift](_URL_0_) is one of the main ways in which language change occurs. It's the main reason why the word *gay* has changed its meaning so drastically in the last 50 years, and also why the word *starve* means to die specifically of hunger and not just generally 'to die' as i... | 1 | <P> Since such change due to very frequent use occurs much more rapidly than the change in meaning all words go through, and since such words are even sometimes still simultaneously used in their original sense, the new usage is often considered incorrect by some speakers. Other examples include "nice", "terrific", "terrible", "awful", "tremendous", "swell", "hopefully" and "very fine" (degrading the meaning of "fine" to "OK").
<P> In linguistic change caused by folk etymology, the form of a word changes so that it better matches its popular rationalisation. Typically this happens either to unanalyzable foreign words or to compounds where the word underlying one part of the compound becomes obsolete.
<P> BULLET::::- References to popular culture, usually celebrities or TV shows. They can be selected to replace a word in reference to the things they were famous for, simply because parts of the words rhyme, or both.
<P> Because they are things of continual discovery and re-invention, science and technology have historically generated forms of speech and writing which have dated and fallen into disuse relatively quickly. However, the emotional associations of certain words have kept them alive, for example: 'Wireless' rather than 'Radio' for a generation of British citizens who lived through the Second World War, even though the older word 'wireless' is an archaism, and in recent years the term has gained renewed popularity.
<P> The word is an example of "Time" magazine's habit of supplying new words through "unusual use of affixes", although "Time" itself objected to the term's inclusion in the 1991 "Random Webster's College Dictionary", citing it as an example of the dictionary "straining ... to avoid giving offense, except to good usage" and "[lending] authority to scores of questionable usages, many of them tinged with politically correct views."
<P> At least one source suggests that the phrase "derives from a literal usage of the exclamation. In the 19th century, when English people used French expressions in conversation they often apologized for it - presumably because many of their listeners (then as now) wouldn't be familiar with the language". The definition cites an example from "The Lady's Magazine", 1830:
<P> Believing a word to have a certain origin, people begin to pronounce, spell, or otherwise use the word in a manner appropriate to that perceived origin. This popular etymologizing has had a powerful influence on the forms which words take. Examples in English include "crayfish" or "crawfish", which are not historically related to "fish" but come from Middle English "crevis", cognate with French "écrevisse". Likewise "chaise lounge", from the original French "chaise longue" ("long chair"), has come to be associated with the word "lounge".
| question: Some words change their meanings over time based on popular usage (e.g. literally/figuratively). Does this happen often? Does it happen in languages other than English? context: <P> Since such change due to very frequent use occurs much more rapidly than the change in meaning all words go through, and since such words are even sometimes still simultaneously used in their original sense, the new usage is often considered incorrect by some speakers. Other examples include "nice", "terrific", "terrible", "awful", "tremendous", "swell", "hopefully" and "very fine" (degrading the meaning of "fine" to "OK").
<P> In linguistic change caused by folk etymology, the form of a word changes so that it better matches its popular rationalisation. Typically this happens either to unanalyzable foreign words or to compounds where the word underlying one part of the compound becomes obsolete.
<P> BULLET::::- References to popular culture, usually celebrities or TV shows. They can be selected to replace a word in reference to the things they were famous for, simply because parts of the words rhyme, or both.
<P> Because they are things of continual discovery and re-invention, science and technology have historically generated forms of speech and writing which have dated and fallen into disuse relatively quickly. However, the emotional associations of certain words have kept them alive, for example: 'Wireless' rather than 'Radio' for a generation of British citizens who lived through the Second World War, even though the older word 'wireless' is an archaism, and in recent years the term has gained renewed popularity.
<P> The word is an example of "Time" magazine's habit of supplying new words through "unusual use of affixes", although "Time" itself objected to the term's inclusion in the 1991 "Random Webster's College Dictionary", citing it as an example of the dictionary "straining ... to avoid giving offense, except to good usage" and "[lending] authority to scores of questionable usages, many of them tinged with politically correct views."
<P> At least one source suggests that the phrase "derives from a literal usage of the exclamation. In the 19th century, when English people used French expressions in conversation they often apologized for it - presumably because many of their listeners (then as now) wouldn't be familiar with the language". The definition cites an example from "The Lady's Magazine", 1830:
<P> Believing a word to have a certain origin, people begin to pronounce, spell, or otherwise use the word in a manner appropriate to that perceived origin. This popular etymologizing has had a powerful influence on the forms which words take. Examples in English include "crayfish" or "crawfish", which are not historically related to "fish" but come from Middle English "crevis", cognate with French "écrevisse". Likewise "chaise lounge", from the original French "chaise longue" ("long chair"), has come to be associated with the word "lounge".
| answer: Yes to both your questions: [semantic drift](_URL_0_) is one of the main ways in which language change occurs. It's the main reason why the word *gay* has changed its meaning so drastically in the last 50 years, and also why the word *starve* means to die specifically of hunger and not just generally 'to die' as it does in German (*sterben*). The non-English example of semantic drift that comes to me off the top of my head is the word *lecker*, which in German means something like *delicious* and refers specifically to food, whereas in Dutch/Afrikaans it has evolved to just generally mean great/cool (so a German might be a little confused as to why Dutch people wish each other a "Delicious New Year" on the 1st of January). I've also been told that semantic drift is one of the main barriers to Finnish people understanding Estonians, since a word in Estonian very often means something completely different in Finnish. But that's entirely anecdotal information. |
56,798 | 445vf6 | group think or hive mind. and why does reddit suffer from hive mind instead of other sites such as twitter, facebook, or instagram? or do they suffer as well? is there any advantages to hive mind? | Reddit has a hive mind because all user votes impact all other users. You can't choose to have Reddit not count votes from certain people, so the majority opinions on the website will almost always appear on top for everyone. Twitter, Facebook, and Instagram are more like echo chambers. Users control who they hear from/follow and they tend to follow people with similar viewpoints, so they rarely see views that challenge or oppose the ones they already hold. Users have a little more control over this on Twitter and Instagram. Facebook will automatically start showing you more posts from friends whose past posts you have checked out often. Since you probably read the walls and links of friends you agree with more than the ones you disagree with, Facebook ends up showing you more content from friends you agree with. So even if you have a lot of friends with contrary views you may not end up seeing their posts. tl;dr - Reddit's voting system makes it so one clear winner comes out on top for the whole site. Facebook, Twitter, and Instagram allow groups with similar beliefs to isolate themselves into their own social media circle. | [
"I don't know about the psychology behind groupthink but I would argue one of the major things Reddit has that Twitter, Facebook, and Instagram don't is the upvote/downvote function, which essentially ensures that popular opinions are brought to the top while unpopular ones are pushed to the bottom. Twitter, Instag... | 2 | [
"Reddit has a hive mind because all user votes impact all other users. You can't choose to have Reddit not count votes from certain people, so the majority opinions on the website will almost always appear on top for everyone.\n\nTwitter, Facebook, and Instagram are more like echo chambers. Users control who they h... | 1 | <P> BULLET::::- " [...] critics of Twitter point to the predominance of the hive mind in such social media, the kind of groupthink that submerges independent thinking in favor of conformity to the group, the collective"
<P> As conceived in speculative fiction, hive minds often imply (almost) complete loss (or lack) of individuality, identity, and personhood. The individuals forming the hive may specialize in different functions, similarly to social insects.
<P> Aphorisms are popular because their brevity is inherently suited to Twitter. People often share well known classic aphorisms on Twitter, but some also seek to craft and share their own brief insights on every conceivable topic. Boing Boing has described Twitter as encouraging "a new age of the aphorism", citing the novel aphorisms of Aaron Haspel.
<P> The use of ping servers to direct attention to recent blog posts has led to a rash of "ping spam" or sping, which attempts to direct readers to web pages that are not, in fact, recent blog posts. Examples:
<P> Twitterbots are capable of influencing public opinion about culture, products and political agendas by automatically generating mass amounts of tweets through imitating human communication. "The New York Times" states, "They have sleep-wake cycles so their fakery is more convincing, making them less prone to repetitive patterns that flag them as mere programs." The tweets generated vary anywhere from a simple automated response to content creation and information sharing, all of which depends on the intention of the person purchasing or creating the bot. The social implications these Twitterbots potentially have on human perception are sizeable according to a study published by the ScienceDirect Journal. Looking at the Computers as Social Actors (CASA) paradigm, the journal notes, "people exhibit remarkable social reactions to computers and other media, treating them as if they were real people or real places." The study concluded that Twitterbots were viewed as credible and competent in communication and interaction making them suitable for transmitting information in the social media sphere. While the technological advances have enabled the ability of successful Human-Computer Interaction, the implications are questioned due to the appearance of both benign and malicious bots in the Twitter realm. Benign Twitterbots may generate creative content and relevant product updates whereas malicious bots can make unpopular people seem popular, push irrelevant products on users and spread misinformation, spam and/or slander.
<P> Researchers have hypothesized that queen bee behavior may be developed by women who have achieved high workplace positions within their respective fields as a way to defend against any gender bias found in their cultures. By opposing attempts of subordinates of their own sex to advance in career paths, women displaying queen bee behavior try to fit in with their male counterparts by adhering to the cultural stigmas placed on gender in the workplace. Distancing themselves from female subordinates can allow for the opportunity to show more masculine qualities, stereotypically seen as more culturally valuable and professional. By showing these supposedly important masculine qualities, women displaying queen bee behavior seek to further legitimize their right to be in important professional positions as well as attaining job security by showing commitment to their professional roles.
<P> Yossarian Lives is a metaphorical search engine, a type of Internet search engine. Its algorithms return results that are disparate, but potentially metaphorically related to the user's query. These results are intended to encourage creative thinking and diversity of thought. "We don't want you to know what everyone else knows, we want you to generate new knowledge." The search engine emphasizes new knowledge vs. the reinforcing of existing information. The site works to avoid the "filter bubble" by returning results that are conceptually related but disparate, compared with traditional search engines that return the most popular or common results.
| question: group think or hive mind. and why does reddit suffer from hive mind instead of other sites such as twitter, facebook, or instagram? or do they suffer as well? is there any advantages to hive mind? context: <P> BULLET::::- " [...] critics of Twitter point to the predominance of the hive mind in such social media, the kind of groupthink that submerges independent thinking in favor of conformity to the group, the collective"
<P> As conceived in speculative fiction, hive minds often imply (almost) complete loss (or lack) of individuality, identity, and personhood. The individuals forming the hive may specialize in different functions, similarly to social insects.
<P> Aphorisms are popular because their brevity is inherently suited to Twitter. People often share well known classic aphorisms on Twitter, but some also seek to craft and share their own brief insights on every conceivable topic. Boing Boing has described Twitter as encouraging "a new age of the aphorism", citing the novel aphorisms of Aaron Haspel.
<P> The use of ping servers to direct attention to recent blog posts has led to a rash of "ping spam" or sping, which attempts to direct readers to web pages that are not, in fact, recent blog posts. Examples:
<P> Twitterbots are capable of influencing public opinion about culture, products and political agendas by automatically generating mass amounts of tweets through imitating human communication. "The New York Times" states, "They have sleep-wake cycles so their fakery is more convincing, making them less prone to repetitive patterns that flag them as mere programs." The tweets generated vary anywhere from a simple automated response to content creation and information sharing, all of which depends on the intention of the person purchasing or creating the bot. The social implications these Twitterbots potentially have on human perception are sizeable according to a study published by the ScienceDirect Journal. Looking at the Computers as Social Actors (CASA) paradigm, the journal notes, "people exhibit remarkable social reactions to computers and other media, treating them as if they were real people or real places." The study concluded that Twitterbots were viewed as credible and competent in communication and interaction making them suitable for transmitting information in the social media sphere. While the technological advances have enabled the ability of successful Human-Computer Interaction, the implications are questioned due to the appearance of both benign and malicious bots in the Twitter realm. Benign Twitterbots may generate creative content and relevant product updates whereas malicious bots can make unpopular people seem popular, push irrelevant products on users and spread misinformation, spam and/or slander.
<P> Researchers have hypothesized that queen bee behavior may be developed by women who have achieved high workplace positions within their respective fields as a way to defend against any gender bias found in their cultures. By opposing attempts of subordinates of their own sex to advance in career paths, women displaying queen bee behavior try to fit in with their male counterparts by adhering to the cultural stigmas placed on gender in the workplace. Distancing themselves from female subordinates can allow for the opportunity to show more masculine qualities, stereotypically seen as more culturally valuable and professional. By showing these supposedly important masculine qualities, women displaying queen bee behavior seek to further legitimize their right to be in important professional positions as well as attaining job security by showing commitment to their professional roles.
<P> Yossarian Lives is a metaphorical search engine, a type of Internet search engine. Its algorithms return results that are disparate, but potentially metaphorically related to the user's query. These results are intended to encourage creative thinking and diversity of thought. "We don't want you to know what everyone else knows, we want you to generate new knowledge." The search engine emphasizes new knowledge vs. the reinforcing of existing information. The site works to avoid the "filter bubble" by returning results that are conceptually related but disparate, compared with traditional search engines that return the most popular or common results.
| answer: Reddit has a hive mind because all user votes impact all other users. You can't choose to have Reddit not count votes from certain people, so the majority opinions on the website will almost always appear on top for everyone.Twitter, Facebook, and Instagram are more like echo chambers. Users control who they hear from/follow and they tend to follow people with similar viewpoints, so they rarely see views that challenge or oppose the ones they already hold. Users have a little more control over this on Twitter and Instagram. Facebook will automatically start showing you more posts from friends whose past posts you have checked out often. Since you probably read the walls and links of friends you agree with more than the ones you disagree with, Facebook ends up showing you more content from friends you agree with. So even if you have a lot of friends with contrary views you may not end up seeing their posts.tl;dr - Reddit's voting system makes it so one clear winner comes out on top for the whole site. Facebook, Twitter, and Instagram allow groups with similar beliefs to isolate themselves into their own social media circle. |
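The answer in the row above contrasts two feed mechanics: a single vote-driven ranking shared by every user versus per-user follow lists. The toy sketch below only illustrates that structural difference; it is not Reddit's or Facebook's actual ranking code, and every name in it (`Post`, `global_front_page`, `follow_feed`) is hypothetical.

```python
# Toy contrast between the two feed models described in the row above.
# Not Reddit's or Facebook's real algorithm; all names are hypothetical.
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    text: str
    votes: int = 0  # net score from all users' upvotes/downvotes

def global_front_page(posts: list[Post], limit: int = 10) -> list[Post]:
    """Reddit-style: one shared ranking driven by everyone's votes,
    so the majority-approved posts sit on top for every user."""
    return sorted(posts, key=lambda p: p.votes, reverse=True)[:limit]

def follow_feed(posts: list[Post], following: set[str], limit: int = 10) -> list[Post]:
    """Twitter/Facebook-style: each user sees only authors they follow,
    which is how like-minded circles turn into echo chambers."""
    return [p for p in posts if p.author in following][:limit]
```

The point of the sketch: in the first function every user's vote moves the same shared list, while in the second each user's choice of `following` partitions the audience into separate bubbles.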
5,265 | dhb9b9 | What are the most concerning potential shortages in natural resources? | > Helium? Lithium? Sand?None of those are required for our survival. Compare this to the [loss of insect biomass](_URL_0_) or a [shortage of farmland](_URL_0_) due to climate change. Either of those things would mean that humanity will eventually run out of *food*. That means mass starvation and wars for whatever usable land remains. | [
" > Helium? Lithium? Sand?\n\nNone of those are required for our survival. Compare this to the [loss of insect biomass](_URL_0_) or a [shortage of farmland](_URL_0_) due to climate change. Either of those things would mean that humanity will eventually run out of *food*. That means mass starvation and wars for what... | 1 | [
" > Helium? Lithium? Sand?\n\nNone of those are required for our survival. Compare this to the [loss of insect biomass](_URL_0_) or a [shortage of farmland](_URL_0_) due to climate change. Either of those things would mean that humanity will eventually run out of *food*. That means mass starvation and wars for what... | 1 | <P> The overarching thesis on why there is no resource crisis is that as a particular resource becomes more scarce, its price rises. This price rise creates an incentive for people to discover more of the resource, ration and recycle it, and eventually, develop substitutes. The "ultimate resource" is not any particular physical object but the capacity for humans to invent and adapt.
<P> Several other kinds of resources need to be introduced. If strategic and critical materials are the worst case for resources, unless mitigated by substitution and/or recycling, one of the best is an abundant resource. An abundant resource is one whose material has so far found little use, such as using high-aluminous clays or anorthosite to produce alumina, and magnesium before it was recovered from seawater. An abundant resource is quite similar to a perpetual resource. The reserve base is the part of an identified resource that has a reasonable potential for becoming economically available at a time beyond when currently proven technology and current economics are in operation. Identified resources are those whose location, grade, quality, and quantity are known or estimated from specific geologic evidence. Reserves are that part of the reserve base that can be economically extracted at the time of determination; reserves should not be used as a surrogate for resources because they are often distorted by taxation or the owning firm's public relations needs.
<P> BULLET::::- : The paradox of plenty (resource curse) refers to the paradox that countries and regions with an abundance of natural resources, specifically point-source non-renewable resources like minerals and fuels, tend to have less economic growth and worse development outcomes than countries with fewer natural resources.
<P> Natural resources are a source of economic rent which can generate large revenues for those controlling them even in the absence of political stability and wider economic growth. Their existence is a potential source of conflict between factions fighting for a share of the revenue, which may take the form of armed separatist conflicts in regions where the resources are produced or internal conflict between different government ministries or departments for access to budgetary allocations. This tends to erode governments' abilities to function effectively.
<P> Scholarship on the resource curse has increasingly shifted towards explaining why some resource-rich countries succeed and why others do not, as opposed to just investigating the average economic effects of resources. Research suggests that the manner in which resource income is spent, system of government, institutional quality, type of resources, and early vs. late industrialization all have been used to explain successes and failures.
<P> Local food shortages can be caused by a lack of arable land, adverse weather, lower farming skills such as crop rotation, or by a lack of technology or resources needed for the higher yields found in modern agriculture, such as fertilizers, pesticides, irrigation, machinery and storage facilities. As a result of widespread poverty, farmers cannot afford or governments cannot provide the resources necessary to improve local yields. The World Bank and some wealthy donor countries also press nations that depend on aid to cut or eliminate subsidized agricultural inputs such as fertilizer, in the name of free market policies even as the United States and Europe extensively subsidized their own farmers. Many, if not most, farmers cannot afford fertilizer at market prices, leading to low agricultural production and wages and high, unaffordable food prices.
<P> Many earlier predictions of resource depletion, such as Thomas Malthus' 1798 predictions about approaching famines in Europe, "The Population Bomb" (1968), and the Simon–Ehrlich wager (1980) have not materialized. Diminished production of most resources has not occurred so far, one reason being that advancements in technology and science have allowed some previously unavailable resources to be produced. In some cases, substitution of more abundant materials, such as plastics for cast metals, lowered growth of usage for some metals. In the case of the limited resource of land, famine was relieved firstly by the revolution in transportation caused by railroads and steam ships, and later by the Green Revolution and chemical fertilizers, especially the Haber process for ammonia synthesis.
| question: What are the most concerning potential shortages in natural resources? context: <P> The overarching thesis on why there is no resource crisis is that as a particular resource becomes more scarce, its price rises. This price rise creates an incentive for people to discover more of the resource, ration and recycle it, and eventually, develop substitutes. The "ultimate resource" is not any particular physical object but the capacity for humans to invent and adapt.
<P> Several other kinds of resources need to be introduced. If strategic and critical materials are the worst case for resources, unless mitigated by substitution and/or recycling, one of the best is an abundant resource. An abundant resource is one whose material has so far found little use, such as using high-aluminous clays or anorthosite to produce alumina, and magnesium before it was recovered from seawater. An abundant resource is quite similar to a perpetual resource. The reserve base is the part of an identified resource that has a reasonable potential for becoming economically available at a time beyond when currently proven technology and current economics are in operation. Identified resources are those whose location, grade, quality, and quantity are known or estimated from specific geologic evidence. Reserves are that part of the reserve base that can be economically extracted at the time of determination; reserves should not be used as a surrogate for resources because they are often distorted by taxation or the owning firm's public relations needs.
<P> BULLET::::- : The paradox of plenty (resource curse) refers to the paradox that countries and regions with an abundance of natural resources, specifically point-source non-renewable resources like minerals and fuels, tend to have less economic growth and worse development outcomes than countries with fewer natural resources.
<P> Natural resources are a source of economic rent which can generate large revenues for those controlling them even in the absence of political stability and wider economic growth. Their existence is a potential source of conflict between factions fighting for a share of the revenue, which may take the form of armed separatist conflicts in regions where the resources are produced or internal conflict between different government ministries or departments for access to budgetary allocations. This tends to erode governments' abilities to function effectively.
<P> Scholarship on the resource curse has increasingly shifted towards explaining why some resource-rich countries succeed and why others do not, as opposed to just investigating the average economic effects of resources. Research suggests that the manner in which resource income is spent, system of government, institutional quality, type of resources, and early vs. late industrialization all have been used to explain successes and failures.
<P> Local food shortages can be caused by a lack of arable land, adverse weather, lower farming skills such as crop rotation, or by a lack of technology or resources needed for the higher yields found in modern agriculture, such as fertilizers, pesticides, irrigation, machinery and storage facilities. As a result of widespread poverty, farmers cannot afford or governments cannot provide the resources necessary to improve local yields. The World Bank and some wealthy donor countries also press nations that depend on aid to cut or eliminate subsidized agricultural inputs such as fertilizer, in the name of free market policies even as the United States and Europe extensively subsidized their own farmers. Many, if not most, farmers cannot afford fertilizer at market prices, leading to low agricultural production and wages and high, unaffordable food prices.
<P> Many earlier predictions of resource depletion, such as Thomas Malthus' 1798 predictions about approaching famines in Europe, "The Population Bomb" (1968), and the Simon–Ehrlich wager (1980) have not materialized. Diminished production of most resources has not occurred so far, one reason being that advancements in technology and science have allowed some previously unavailable resources to be produced. In some cases, substitution of more abundant materials, such as plastics for cast metals, lowered growth of usage for some metals. In the case of the limited resource of land, famine was relieved firstly by the revolution in transportation caused by railroads and steam ships, and later by the Green Revolution and chemical fertilizers, especially the Haber process for ammonia synthesis.
| answer: > Helium? Lithium? Sand?None of those are required for our survival. Compare this to the [loss of insect biomass](_URL_0_) or a [shortage of farmland](_URL_0_) due to climate change. Either of those things would mean that humanity will eventually run out of *food*. That means mass starvation and wars for whatever usable land remains. |
59,145 | 4igw01 | is 1 minute in a 500watt microwave the same as 30 seconds in a 1000watt microwave? | Same amount of energy applied, but if it's a food that can't be uniformly heated, you won't get the same result. Sort of like putting a chicken in a 700 degree oven instead of 350 and expecting to cook it twice as fast. Microwaves penetrate more and heat water and fats, so not a perfect analogy. | [
"Under ideal circumstances, yes. For instance, the 1000W microwave will heat a cup of water roughly twice as fast. However, if the food is frozen and must be defrosted first, you would likely want to run the higher power microwave at a reduced power setting to allow the food to defrost first. Otherwise, you can ... | 4 | [
"Same amount of energy applied, but if it's a food that can't be uniformly heated, you won't get the same result. Sort of like putting a chicken in a 700 degree oven instead of 350 and expecting to cook it twice as fast. Microwaves penetrate more and heat water and fats, so not a perfect analogy. "
] | 1 | <P> This was one early method used to generate microwave frequencies of moderate power, 1–2 GHz at 1–5 watts, from about 20 watts at a frequency of 3–400 MHz before adequate transistors had been developed to operate at this higher frequency. This technique is still used to generate much higher frequencies, in the 100 GHz – 1 THz range, where even the fastest GaAs transistors are still inadequate.
<P> Microwaves with a frequency of 300 GHz have a wavelength of 1 mm. Using wavelengths between 30 GHz and 300 GHz for data transmission, in contrast to the 300 MHz to 3 GHz normally used in mobile devices, has the potential to allow data transfer rates of 10 gigabits per second.
<P> Microwaves are electromagnetic waves with wavelengths ranging from as short as one millimeter to as long as one meter, which equates to a frequency range of 300 MHz to 300 GHz. This broad definition includes both UHF and EHF (millimeter waves), but various sources use different other limits. In all cases, microwaves include the entire super high frequency band (3 to 30 GHz, or 10 to 1 cm) at minimum, with RF engineering often putting the lower boundary at 1 GHz (30 cm), and the upper around 100 GHz (3mm).
<P> Microwaves are electromagnetic waves with wavelengths ranging from as long as one meter to as short as one millimeter, or equivalently, with frequencies between 300 MHz (0.3 GHz) and 300 GHz. This broad definition includes both UHF and EHF (millimeter waves), and various sources use different boundaries. In all cases, microwave includes the entire SHF band (3 to 30 GHz, or 10 to 1 cm) at minimum, with RF engineering often putting the lower boundary at 1 GHz (30 cm), and the upper around 100 GHz (3mm). Applications include cellphone (mobile) telephones, radars, airport scanners, microwave ovens, earth remote sensing satellites, and radio and satellite communications.
<P> The microwave frequencies extend from 300 MHz to 300 GHz corresponding to wavelengths between 1 m and 1 mm. The section from 30 GHz to 300 GHz with wavelengths between 10 mm and 1 mm is also called millimeter waves. Microwaves are in the order of the size of the components to be tested. In different dielectric media they propagate differently fast and at surfaces between them they are reflected. Another part propagates beyond the surface. The larger the difference in the wave impedance, the larger is the reflected part.
<P> An attosecond is equal to 1000 zeptoseconds, or of a femtosecond. Because the next higher SI unit for time is the femtosecond (10 seconds), durations of 10 s and 10 s will typically be expressed as tens or hundreds of attoseconds:
<P> Consumer household microwaves usually come with a cooking power of 600 watts and up, with 1000 or 1200 watts on some models. The size of household microwaves can vary, but usually have an internal volume of around , and external dimensions of approximately wide, deep and tall.
| question: is 1 minute in a 500watt microwave the same as 30 seconds in a 1000watt microwave? context: <P> This was one early method used to generate microwave frequencies of moderate power, 1–2 GHz at 1–5 watts, from about 20 watts at a frequency of 3–400 MHz before adequate transistors had been developed to operate at this higher frequency. This technique is still used to generate much higher frequencies, in the 100 GHz – 1 THz range, where even the fastest GaAs transistors are still inadequate.
<P> Microwaves with a frequency of 300 GHz have a wavelength of 1 mm. Using wavelengths between 30 GHz and 300 GHz for data transmission, in contrast to the 300 MHz to 3 GHz normally used in mobile devices, has the potential to allow data transfer rates of 10 gigabits per second.
<P> Microwaves are electromagnetic waves with wavelengths ranging from as short as one millimeter to as long as one meter, which equates to a frequency range of 300 MHz to 300 GHz. This broad definition includes both UHF and EHF (millimeter waves), but various sources use different other limits. In all cases, microwaves include the entire super high frequency band (3 to 30 GHz, or 10 to 1 cm) at minimum, with RF engineering often putting the lower boundary at 1 GHz (30 cm), and the upper around 100 GHz (3mm).
<P> Microwaves are electromagnetic waves with wavelengths ranging from as long as one meter to as short as one millimeter, or equivalently, with frequencies between 300 MHz (0.3 GHz) and 300 GHz. This broad definition includes both UHF and EHF (millimeter waves), and various sources use different boundaries. In all cases, microwave includes the entire SHF band (3 to 30 GHz, or 10 to 1 cm) at minimum, with RF engineering often putting the lower boundary at 1 GHz (30 cm), and the upper around 100 GHz (3mm). Applications include cellphone (mobile) telephones, radars, airport scanners, microwave ovens, earth remote sensing satellites, and radio and satellite communications.
<P> The microwave frequencies extend from 300 MHz to 300 GHz corresponding to wavelengths between 1 m and 1 mm. The section from 30 GHz to 300 GHz with wavelengths between 10 mm and 1 mm is also called millimeter waves. Microwaves are in the order of the size of the components to be tested. In different dielectric media they propagate differently fast and at surfaces between them they are reflected. Another part propagates beyond the surface. The larger the difference in the wave impedance, the larger is the reflected part.
<P> An attosecond is equal to 1000 zeptoseconds, or of a femtosecond. Because the next higher SI unit for time is the femtosecond (10 seconds), durations of 10 s and 10 s will typically be expressed as tens or hundreds of attoseconds:
<P> Consumer household microwaves usually come with a cooking power of 600 watts and up, with 1000 or 1200 watts on some models. The size of household microwaves can vary, but usually have an internal volume of around , and external dimensions of approximately wide, deep and tall.
| answer: Same amount of energy applied, but if it's a food that can't be uniformly heated, you won't get the same result. Sort of like putting a chicken in a 700 degree oven instead of 350 and expecting to cook it twice as fast. Microwaves penetrate more and heat water and fats, so not a perfect analogy. |
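The microwave row reduces to the relation energy = power × time: 500 W for 60 s and 1000 W for 30 s both deliver 30 kJ under the idealised assumption that all output is absorbed uniformly by the food. A minimal check of that arithmetic (the function name is made up for illustration):

```python
# Back-of-the-envelope check for the microwave row above, assuming the
# idealised case where all magnetron output is absorbed uniformly.
def delivered_energy_joules(power_watts: float, seconds: float) -> float:
    """Energy delivered to the load = power x time (ideal absorption)."""
    return power_watts * seconds

print(delivered_energy_joules(500, 60))   # 30000.0 J for 1 min at 500 W
print(delivered_energy_joules(1000, 30))  # 30000.0 J for 30 s at 1000 W
# Equal total energy, but as the quoted answer notes, equal energy does not
# guarantee equal results when the food cannot be heated uniformly.
```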