index (int64, 10–229k) | q_id (string, 5–6 chars) | question (string, 4–300 chars) | best_answer (string, 13–15k chars) | all_answers (sequence) | num_answers (int64, 1–170) | top_answers (sequence) | num_top_answers (int64, 0–119) | context (string, 1.72k–9.92k chars) | orig (string, 1.82k–10k chars) | target (string, 21–15k chars) |
|---|---|---|---|---|---|---|---|---|---|---|
144,892 | 1aw68y | What was the crime rate among American GIs during World War 2? | Off the top of my head I can't at the moment think of any large scale war crimes committed by American troops. However, you're always going to have things done by the individual soldier. Band of Brothers (the book) talked about how Liebgott would kill prisoners. That is a war crime.During the Battle of the Bulge German soldiers wearing American uniform, or parts thereof, were summarily executed. That's also a war crime. Summary executions are illegal under the Hague Convention. There were summary executions of SS guards during the liberation of Dachau Concentration Camp.In the Pacific it wasn't uncommon for Japanese attempting to surrender, or having already surrendered, to be killed. It's generally explained by the often no quarter given by the Japanese and so it was returned in kind. There were also quite a lot of cases of American soldiers mutilating the bodies of dead Japanese for souvenirs. Not just talking about your common looting of the dead, but the taking of body parts. I imagine someone will come in and try to claim that the Atomic Bombings of Japan was a crime but it was actually perfectly legal under the rules of the Hague Convention. | [
"Off the top of my head I can't at the moment think of any large scale war crimes committed by American troops. However, you're always going to have things done by the individual soldier. Band of Brothers (the book) talked about how Liebgott would kill prisoners. That is a war crime.\n\nDuring the Battle of the ... | 1 | [
"Off the top of my head I can't at the moment think of any large scale war crimes committed by American troops. However, you're always going to have things done by the individual soldier. Band of Brothers (the book) talked about how Liebgott would kill prisoners. That is a war crime.\n\nDuring the Battle of the ... | 1 | <P> BULLET::::- Secret wartime files made public only in 2006 reveal that American GIs committed 400 sexual offenses in Europe, including 126 rapes in England, between 1942 and 1945. A study by Robert J. Lilly estimates that a total of 14,000 civilian women in England, France and Germany were raped by American GIs during World War II. It is estimated that there were around 3,500 rapes by American servicemen in France between June 1944 and the end of the war and one historian has claimed that sexual violence against women in liberated France was common.
<P> Secret wartime files made public in 2006 reveal that American GIs committed 400 sexual offenses in Europe, including 126 rapes in the United Kingdom, between 1942 and 1945. A study by Robert J. Lilly estimates that a total of 14,000 civilian women in Great Britain, France and Germany were raped by American GIs during World War II. It is estimated that there were around 3,500 rapes by American servicemen in France between June 1944 and the end of the war and one historian has claimed that sexual violence against women in liberated France was common. In the 2007 publication "Taken by Force", sociology and criminology professor J. Robert Lilly estimates US soldiers raped around 11,040 women and children during the occupation of Germany. Many armed soldiers committed gang rapes at gunpoint against female civilians and children. According to German historian Miriam Gebhardt, some 190,000 women were raped by American soldiers in Germany
<P> According to the FBI’s Supplementary Homicide Reports, between the early 1960s and the late 1970s, the rate of homicides doubled. For every 100,000 U.S residents, the homicide victim rate elevated from 4.6 to 9.7.
<P> Taken by Force: Rape and American GIs in Europe in World War II is a 2007 book by Northern Kentucky University sociology and criminology professor J. Robert Lilly that examines the issue of rape by U.S. servicemen in European theatre of World War II.
<P> In the United States, murder rates have been higher and have fluctuated. They fell below 2 per 100,000 by 1900, rose during the first half of the century, dropped in the years following World War II, and bottomed out at 4.0 in 1957 before rising again. The rate stayed in 9 to 10 range most of the period from 1972 to 1994, before falling to 5 in present times. The increase since 1957 would have been even greater if not for the significant improvements in medical techniques and emergency response times, which mean that more and more attempted homicide victims survive. According to one estimate, if the lethality levels of criminal assaults of 1964 still applied in 1993, the country would have seen the murder rate of around 26 per 100,000, almost triple the actually observed rate of 9.5 per 100,000.
<P> Secret wartime files made public only in 2006 reveal that American GIs committed 400 sexual offenses in Europe, including 126 rapes in England, between 1942 and 1945. A study by Robert J. Lilly estimates that a total of 14,000 civilian women in England, France and Germany were raped by American GIs during World War II. It is estimated that there were around 3,500 rapes by American servicemen in France between June 1944 and the end of the war and one historian has claimed that sexual violence against women in liberated France was common.
<P> Other estimates vary greatly, with one magazine for former POWs putting the number of deaths from the Gross Tychow march alone at 1,500. A senior YMCA official closely involved with the POW camps put the number of Commonwealth and American POW deaths at 8,348 between September 1944 and May 1945.
| question: What was the crime rate among American GIs during World War 2? context: <P> BULLET::::- Secret wartime files made public only in 2006 reveal that American GIs committed 400 sexual offenses in Europe, including 126 rapes in England, between 1942 and 1945. A study by Robert J. Lilly estimates that a total of 14,000 civilian women in England, France and Germany were raped by American GIs during World War II. It is estimated that there were around 3,500 rapes by American servicemen in France between June 1944 and the end of the war and one historian has claimed that sexual violence against women in liberated France was common.
<P> Secret wartime files made public in 2006 reveal that American GIs committed 400 sexual offenses in Europe, including 126 rapes in the United Kingdom, between 1942 and 1945. A study by Robert J. Lilly estimates that a total of 14,000 civilian women in Great Britain, France and Germany were raped by American GIs during World War II. It is estimated that there were around 3,500 rapes by American servicemen in France between June 1944 and the end of the war and one historian has claimed that sexual violence against women in liberated France was common. In the 2007 publication "Taken by Force", sociology and criminology professor J. Robert Lilly estimates US soldiers raped around 11,040 women and children during the occupation of Germany. Many armed soldiers committed gang rapes at gunpoint against female civilians and children. According to German historian Miriam Gebhardt, some 190,000 women were raped by American soldiers in Germany
<P> According to the FBI’s Supplementary Homicide Reports, between the early 1960s and the late 1970s, the rate of homicides doubled. For every 100,000 U.S residents, the homicide victim rate elevated from 4.6 to 9.7.
<P> Taken by Force: Rape and American GIs in Europe in World War II is a 2007 book by Northern Kentucky University sociology and criminology professor J. Robert Lilly that examines the issue of rape by U.S. servicemen in European theatre of World War II.
<P> In the United States, murder rates have been higher and have fluctuated. They fell below 2 per 100,000 by 1900, rose during the first half of the century, dropped in the years following World War II, and bottomed out at 4.0 in 1957 before rising again. The rate stayed in 9 to 10 range most of the period from 1972 to 1994, before falling to 5 in present times. The increase since 1957 would have been even greater if not for the significant improvements in medical techniques and emergency response times, which mean that more and more attempted homicide victims survive. According to one estimate, if the lethality levels of criminal assaults of 1964 still applied in 1993, the country would have seen the murder rate of around 26 per 100,000, almost triple the actually observed rate of 9.5 per 100,000.
<P> Secret wartime files made public only in 2006 reveal that American GIs committed 400 sexual offenses in Europe, including 126 rapes in England, between 1942 and 1945. A study by Robert J. Lilly estimates that a total of 14,000 civilian women in England, France and Germany were raped by American GIs during World War II. It is estimated that there were around 3,500 rapes by American servicemen in France between June 1944 and the end of the war and one historian has claimed that sexual violence against women in liberated France was common.
<P> Other estimates vary greatly, with one magazine for former POWs putting the number of deaths from the Gross Tychow march alone at 1,500. A senior YMCA official closely involved with the POW camps put the number of Commonwealth and American POW deaths at 8,348 between September 1944 and May 1945.
| answer: Off the top of my head I can't at the moment think of any large scale war crimes committed by American troops. However, you're always going to have things done by the individual soldier. Band of Brothers (the book) talked about how Liebgott would kill prisoners. That is a war crime.During the Battle of the Bulge German soldiers wearing American uniform, or parts thereof, were summarily executed. That's also a war crime. Summary executions are illegal under the Hague Convention. There were summary executions of SS guards during the liberation of Dachau Concentration Camp.In the Pacific it wasn't uncommon for Japanese attempting to surrender, or having already surrendered, to be killed. It's generally explained by the often no quarter given by the Japanese and so it was returned in kind. There were also quite a lot of cases of American soldiers mutilating the bodies of dead Japanese for souvenirs. Not just talking about your common looting of the dead, but the taking of body parts. I imagine someone will come in and try to claim that the Atomic Bombings of Japan was a crime but it was actually perfectly legal under the rules of the Hague Convention. |
153,467 | 2st4aa | what is happening to my body when i get high while simultaneously being drunk? | It sounds like the problem originates in the brain, which ultimately controls the vomiting reflex. Since both alcohol and vegetable-based hallucinogens scramble the neurons' normal functions, somewhere the decision is taken to park the tiger. | [
"It sounds like the problem originates in the brain, which ultimately controls the vomiting reflex. Since both alcohol and vegetable-based hallucinogens scramble the neurons' normal functions, somewhere the decision is taken to park the tiger."
] | 1 | [] | 0 | <P> Alcohol intoxication, also known as drunkenness or alcohol poisoning, is the negative behavior and physical effects due to the recent drinking of ethanol (alcohol). Symptoms at lower doses may include mild sedation and poor coordination. At higher doses, there may be slurred speech, trouble walking, and vomiting. Extreme doses may result in a decreased effort to breathe (respiratory depression), coma, or death. Complications may include seizures, aspiration pneumonia, injuries including suicide, and low blood sugar.
<P> As drinking increases, people become sleepy, or fall into a stupor. After a very high level of consumption, the respiratory system becomes depressed and the person will stop breathing. Comatose patients may aspirate their vomit (resulting in vomitus in the lungs, which may cause "drowning" and later pneumonia if survived). CNS depression and impaired motor co-ordination along with poor judgment increases the likelihood of accidental injury occurring. It is estimated that about one-third of alcohol-related deaths are due to accidents and another 14% are from intentional injury.
<P> Alcohol also limits the production of vasopressin (ADH) from the hypothalamus and the secretion of this hormone from the posterior pituitary gland. This is what causes severe dehydration when alcohol is consumed in large amounts. It also causes a high concentration of water in the urine and vomit and the intense thirst that goes along with a hangover.
<P> Vomiting excessive amounts of alcohol is an attempt by the body to prevent alcohol poisoning and death. Vomiting may also be caused by other drugs, such as opiates, or toxins found in some foods and plants. Food allergies and sensitivities, such as lactose intolerance, can cause vomiting.
<P> Alcohol is a depressant. After consumption, alcohol causes the body’s systems to slow down. Often, feelings of drunkenness are associated with elation and happiness but other feelings of anger or depression can arise. Balance, judgment, and coordination are also negatively affected. One of the most significant short term side effects of alcohol is reduced inhibition. Reduced inhibitions can lead to an increase in sexual behavior.
<P> Alcohol can also cause alterations in the vestibular system for short periods and will result in vertigo and possibly nystagmus due to the variable viscosity of the blood and the endolymph during the consumption of alcohol. The common term for this type of sensation is the "bed spins".
<P> Several other studies have shown that students who were told they were consuming alcoholic beverages (which in fact were non-alcoholic) perceived themselves as being "drunk", exhibited fewer physiological symptoms of social stress, and drove a simulated car similarly to other subjects who had actually consumed alcohol. The result is somewhat similar to the placebo effect.
| question: what is happening to my body when i get high while simultaneously being drunk? context: <P> Alcohol intoxication, also known as drunkenness or alcohol poisoning, is the negative behavior and physical effects due to the recent drinking of ethanol (alcohol). Symptoms at lower doses may include mild sedation and poor coordination. At higher doses, there may be slurred speech, trouble walking, and vomiting. Extreme doses may result in a decreased effort to breathe (respiratory depression), coma, or death. Complications may include seizures, aspiration pneumonia, injuries including suicide, and low blood sugar.
<P> As drinking increases, people become sleepy, or fall into a stupor. After a very high level of consumption, the respiratory system becomes depressed and the person will stop breathing. Comatose patients may aspirate their vomit (resulting in vomitus in the lungs, which may cause "drowning" and later pneumonia if survived). CNS depression and impaired motor co-ordination along with poor judgment increases the likelihood of accidental injury occurring. It is estimated that about one-third of alcohol-related deaths are due to accidents and another 14% are from intentional injury.
<P> Alcohol also limits the production of vasopressin (ADH) from the hypothalamus and the secretion of this hormone from the posterior pituitary gland. This is what causes severe dehydration when alcohol is consumed in large amounts. It also causes a high concentration of water in the urine and vomit and the intense thirst that goes along with a hangover.
<P> Vomiting excessive amounts of alcohol is an attempt by the body to prevent alcohol poisoning and death. Vomiting may also be caused by other drugs, such as opiates, or toxins found in some foods and plants. Food allergies and sensitivities, such as lactose intolerance, can cause vomiting.
<P> Alcohol is a depressant. After consumption, alcohol causes the body’s systems to slow down. Often, feelings of drunkenness are associated with elation and happiness but other feelings of anger or depression can arise. Balance, judgment, and coordination are also negatively affected. One of the most significant short term side effects of alcohol is reduced inhibition. Reduced inhibitions can lead to an increase in sexual behavior.
<P> Alcohol can also cause alterations in the vestibular system for short periods and will result in vertigo and possibly nystagmus due to the variable viscosity of the blood and the endolymph during the consumption of alcohol. The common term for this type of sensation is the "bed spins".
<P> Several other studies have shown that students who were told they were consuming alcoholic beverages (which in fact were non-alcoholic) perceived themselves as being "drunk", exhibited fewer physiological symptoms of social stress, and drove a simulated car similarly to other subjects who had actually consumed alcohol. The result is somewhat similar to the placebo effect.
| answer: It sounds like the problem originates in the brain, which ultimately controls the vomiting reflex. Since both alcohol and vegetable-based hallucinogens scramble the neurons' normal functions, somewhere the decision is taken to park the tiger. |
41,677 | 1wuprh | why fruits get juicier after ripening after they have been cut off the tree | Many fruits do not need the tree to ripen; they have their own energy store (their sugars and starches) and the chemicals necessary to ripen are already present. | [
"Many fruits do not need the tree to ripen; they have their own energy store (their sugars and starches) and the chemicals necessary to ripen are already present. ",
"Well usually the fruit has a seed and the rest of the fruit is to provide nutrients to the seed. When the fruit ripens, it slowly starts to feed th... | 2 | [
"Many fruits do not need the tree to ripen; they have their own energy store (their sugars and starches) and the chemicals necessary to ripen are already present. "
] | 1 | <P> Fruit maturity is not always apparent visually, as the fruits remain the same shade of green until they are overripe or rotting. One usually may sense ripeness, however, by giving the fruit a soft squeeze; a ripe feijoa yields to pressure somewhat like a just-ripe banana. Generally, the fruit is at its optimum ripeness the day it drops from the tree. While still hanging, it may well prove bitter; once fallen, however, the fruit very quickly becomes overripe, so daily collection of fallen fruit is advisable during the season.
<P> Citrus fruits are nonclimacteric and respiration slowly declines and the production and release of ethylene is gradual. The fruits do not go through a ripening process in the sense that they become "tree ripe". Some fruits, for example cherries, physically mature and then continue to ripen on the tree. Other fruits, such as pears, are picked when mature, but before they ripen, then continue to ripen off the tree. Citrus fruits pass from immaturity to maturity to overmaturity while still on the tree. Once they are separated from the tree, they do not increase in sweetness or continue to ripen. The only way change may happen after being picked is that they eventually start to decay.
<P> The fruits are orange, woody arils and may remain on the parent for several years after splitting open. Fruit production is very rare. Studies from 2010-2012 show that most populations continue to produce no fruit.
<P> Although not a disease as such, irregular supplies of water can cause growing or ripening fruit to split. Besides cosmetic damage, the splits may allow decay to start, although growing fruits have some ability to heal after a split. In addition, a deformity called cat-facing can be caused by pests, temperature stress, or poor soil conditions. Affected fruit usually remains edible, but its appearance may be unsightly.
<P> Ripening occurs when a fruit is mature. Ripeness is followed by senescence and breakdown of the fruit. The category “fruit” refers also to products such as aubergine, sweet pepper and tomato. Non-climacteric fruit only ripen while still attached to the parent plant. Their eating quality suffers if they are harvested before fully ripe as their sugar and acid content does not increase further. Examples are citrus, grapes and pineapple. Early harvesting is often carried out for export shipments to minimise loss during transport, but a consequence of this is that the flavour suffers. Climacteric fruit are those that can be harvested when mature but before ripening has begun. These include banana, melon, papaya, and tomato. In commercial fruit marketing the rate of ripening is controlled artificially, thus enabling transport and distribution to be carefully planned. Ethylene gas is produced in most plant tissues and is important in starting off the ripening process. It can be used commercially for the ripening of climacteric fruits. However, natural ethylene produced by fruits can lead to in- storage losses. For example, ethylene destroys the green colour of plants. Leafy vegetables will be damaged if stored with ripening fruit. Ethylene production is increased when fruits are injured or decaying and this can cause early ripening of climacteric fruit during transport.
<P> The plant becomes woody as the fruits develop. As they ripen, the plant begins to die, dries out and becomes brittle. In that state the base of the stem breaks off easily, particularly in a high wind. The plant then rolls readily before the wind and disperses its seeds as a tumbleweed.
<P> A single stem bears 20 to 30 fruiting spikes. The harvest begins as soon as one or two fruits at the base of the spikes begin to turn red, and before the fruit is fully mature, and still hard; if allowed to ripen completely, the fruit lose pungency, and ultimately fall off and are lost. The spikes are collected and spread out to dry in the sun, then the peppercorns are stripped off the spikes.
| question: why fruits get juicier after ripening after they have been cut off the tree context: <P> Fruit maturity is not always apparent visually, as the fruits remain the same shade of green until they are overripe or rotting. One usually may sense ripeness, however, by giving the fruit a soft squeeze; a ripe feijoa yields to pressure somewhat like a just-ripe banana. Generally, the fruit is at its optimum ripeness the day it drops from the tree. While still hanging, it may well prove bitter; once fallen, however, the fruit very quickly becomes overripe, so daily collection of fallen fruit is advisable during the season.
<P> Citrus fruits are nonclimacteric and respiration slowly declines and the production and release of ethylene is gradual. The fruits do not go through a ripening process in the sense that they become "tree ripe". Some fruits, for example cherries, physically mature and then continue to ripen on the tree. Other fruits, such as pears, are picked when mature, but before they ripen, then continue to ripen off the tree. Citrus fruits pass from immaturity to maturity to overmaturity while still on the tree. Once they are separated from the tree, they do not increase in sweetness or continue to ripen. The only way change may happen after being picked is that they eventually start to decay.
<P> The fruits are orange, woody arils and may remain on the parent for several years after splitting open. Fruit production is very rare. Studies from 2010-2012 show that most populations continue to produce no fruit.
<P> Although not a disease as such, irregular supplies of water can cause growing or ripening fruit to split. Besides cosmetic damage, the splits may allow decay to start, although growing fruits have some ability to heal after a split. In addition, a deformity called cat-facing can be caused by pests, temperature stress, or poor soil conditions. Affected fruit usually remains edible, but its appearance may be unsightly.
<P> Ripening occurs when a fruit is mature. Ripeness is followed by senescence and breakdown of the fruit. The category “fruit” refers also to products such as aubergine, sweet pepper and tomato. Non-climacteric fruit only ripen while still attached to the parent plant. Their eating quality suffers if they are harvested before fully ripe as their sugar and acid content does not increase further. Examples are citrus, grapes and pineapple. Early harvesting is often carried out for export shipments to minimise loss during transport, but a consequence of this is that the flavour suffers. Climacteric fruit are those that can be harvested when mature but before ripening has begun. These include banana, melon, papaya, and tomato. In commercial fruit marketing the rate of ripening is controlled artificially, thus enabling transport and distribution to be carefully planned. Ethylene gas is produced in most plant tissues and is important in starting off the ripening process. It can be used commercially for the ripening of climacteric fruits. However, natural ethylene produced by fruits can lead to in- storage losses. For example, ethylene destroys the green colour of plants. Leafy vegetables will be damaged if stored with ripening fruit. Ethylene production is increased when fruits are injured or decaying and this can cause early ripening of climacteric fruit during transport.
<P> The plant becomes woody as the fruits develop. As they ripen, the plant begins to die, dries out and becomes brittle. In that state the base of the stem breaks off easily, particularly in a high wind. The plant then rolls readily before the wind and disperses its seeds as a tumbleweed.
<P> A single stem bears 20 to 30 fruiting spikes. The harvest begins as soon as one or two fruits at the base of the spikes begin to turn red, and before the fruit is fully mature, and still hard; if allowed to ripen completely, the fruit lose pungency, and ultimately fall off and are lost. The spikes are collected and spread out to dry in the sun, then the peppercorns are stripped off the spikes.
| answer: Many fruits do not need the tree to ripen; they have their own energy store (their sugars and starches) and the chemicals necessary to ripen are already present. |
28,247 | 1oolvd | I biked home in the rain tonight. Would I have got just as wet from walking? | Consider purely vertical rain.When standing or biking horizontally you're traveling *vertically* through the rain field at the speed v_rain. The amount of water that hits you is the rain density D_rain times the volume you sweep out. That volume is just your vertical cross-section times the speed the rain is falling and the time you sit out in the rain: * W_from-above = D_rain * A_top * v_rain * timeMeanwhile, if you're traveling horizontally, you sweep out a volume from your horizontal motion. If the rain is falling vertically, then those two volumes are independent: * W_from_ahead = D_rain * A_front * V_forward * time or * W_from_ahead = D_rain * A_front * distSo in vertical rain you *always* get wetter by going slower: * W = W_from_above + W_from_ahead = D_rain * (A_top * v_rain * time + A_front * dist)the wetness from above depends on how long you're out (but not directly on how far you travel), and the wetness from in front depends on how far you travel (but not at all on how fast you go).Now, if there's a wind and the rain is coming down diagonally, the calculation gets more complex and depends on which direction you are going. If the wind is blowing in the same direction you want to go, there is an ideal speed that minimizes your total wetness, but if it's blowing crossways or against you, you always do better just to go fast and get it over with.Biking gives you different overhead and frontal cross sections, too -- you have maybe 2-3x greater overhead cross section and maybe 1.5x-2x less frontal cross section. So you get less wet per meter of forward travel, but more wet per second of exposure, if you're on a bike. | [
"Consider purely vertical rain.\n\nWhen standing or biking horizontally you're traveling *vertically* through the rain field at the speed v_rain. The amount of water that hits you is the rain density D_rain times the volume you sweep out. That volume is just your vertical cross-section times the speed the rain is... | 1 | [] | 0 | <P> According to Roll, "I am sure Lance had probably never met a bike racer like me...a person who could still find some joy and happiness in such weather misery. We had eight hours a day, for eight straight days, of continuous riding in the pouring rain - rain in Biblical proportions! I think Lance would've turned things around even without our talks and rides in the Appalachia[n]s, but it turned out to be a pivotal career event for him (and Roll had made a new cycling friend)." A refocused and encouraged Armstrong went on to a successful fourth-place finish in the Vuelta a España, and within a year and a half he had won his first yellow jersey overall victory in the Tour de France road race. Armstrong has since had his yellow jersey wins nullified due to doping. (Roll's tale of the ride is in "Bobke II"; Armstrong's is in "It's Not About the Bike".)
<P> Jason Mraz described the weather as "perfect", saying that "going in and out of rain and sun [...] is a great way to celebrate." He then claimed: "I've never experiences torrential rain in my life." MC Double D of Sneaky Sound System described the differing tolerance for rain in Ireland and his native country. "You like it wet over here. I'm from Australia and we don't like it wet over there. You guys get a bit annoyed if it's too sunny I think. [...] We're playing in a tent so I can't complain."
<P> It was on returning to England that he made a second attempt on the long-distance record. He kept a diary which appeared in a newspaper in Aberdeen and in "The London Bicycle Club Gazette". On his first day he rode into sweeping rain near Bodmin.
<P> In 2005, up to 6,000 cyclists met at Federation Square to have breakfast as part of Ride to Work Day, double the number from 2004. About 10,000 Victorians are estimated to have left their cars at home in favour of the bike.
<P> The Bike Race is another race where the entrant has to ride up the steepest road in the Old Town, using a Butchers Bike in the quickest time possible, without taking their buttocks off the saddle. This event is undertaken in memory of a local fisherman who died during the Great Storm of 1987.
<P> In 1965, Percy Stallard (aged 55) rode his bicycle solo over the Theodul Pass. The Rough Stuff Fellowship, an organisation for enthusiasts of cross-country cycling, acknowledged that it was probably the first time a cyclist had done it. Stallard made it in less than 15 hours, sometimes through deep snow.
<P> I felt he was suspicious because it was raining. He was in-between houses, cutting in-between houses, and he was walking very leisurely for the weather... It didn't look like he was a resident that went to check their mail and got caught in the rain and was hurrying back home. He didn't look like a fitness fanatic that would train in the rain.
| question: I biked home in the rain tonight. Would I have got just as wet from walking? context: <P> According to Roll, "I am sure Lance had probably never met a bike racer like me...a person who could still find some joy and happiness in such weather misery. We had eight hours a day, for eight straight days, of continuous riding in the pouring rain - rain in Biblical proportions! I think Lance would've turned things around even without our talks and rides in the Appalachia[n]s, but it turned out to be a pivotal career event for him (and Roll had made a new cycling friend)." A refocused and encouraged Armstrong went on to a successful fourth-place finish in the Vuelta a España, and within a year and a half he had won his first yellow jersey overall victory in the Tour de France road race. Armstrong has since had his yellow jersey wins nullified due to doping. (Roll's tale of the ride is in "Bobke II"; Armstrong's is in "It's Not About the Bike".)
<P> Jason Mraz described the weather as "perfect", saying that "going in and out of rain and sun [...] is a great way to celebrate." He then claimed: "I've never experiences torrential rain in my life." MC Double D of Sneaky Sound System described the differing tolerance for rain in Ireland and his native country. "You like it wet over here. I'm from Australia and we don't like it wet over there. You guys get a bit annoyed if it's too sunny I think. [...] We're playing in a tent so I can't complain."
<P> It was on returning to England that he made a second attempt on the long-distance record. He kept a diary which appeared in a newspaper in Aberdeen and in "The London Bicycle Club Gazette". On his first day he rode into sweeping rain near Bodmin.
<P> In 2005, up to 6,000 cyclists met at Federation Square to have breakfast as part of Ride to Work Day, double the number from 2004. About 10,000 Victorians are estimated to have left their cars at home in favour of the bike.
<P> The Bike Race is another race where the entrant has to ride up the steepest road in the Old Town, using a Butchers Bike in the quickest time possible, without taking their buttocks off the saddle. This event is undertaken in memory of a local fisherman who died during the Great Storm of 1987.
<P> In 1965, Percy Stallard (aged 55) rode his bicycle solo over the Theodul Pass. The Rough Stuff Fellowship, an organisation for enthusiasts of cross-country cycling, acknowledged that it was probably the first time a cyclist had done it. Stallard made it in less than 15 hours, sometimes through deep snow.
<P> I felt he was suspicious because it was raining. He was in-between houses, cutting in-between houses, and he was walking very leisurely for the weather... It didn't look like he was a resident that went to check their mail and got caught in the rain and was hurrying back home. He didn't look like a fitness fanatic that would train in the rain.
| answer: Consider purely vertical rain.When standing or biking horizontally you're traveling *vertically* through the rain field at the speed v_rain. The amount of water that hits you is the rain density D_rain times the volume you sweep out. That volume is just your vertical cross-section times the speed the rain is falling and the time you sit out in the rain: * W_from-above = D_rain * A_top * v_rain * timeMeanwhile, if you're traveling horizontally, you sweep out a volume from your horizontal motion. If the rain is falling vertically, then those two volumes are independent: * W_from_ahead = D_rain * A_front * V_forward * time or * W_from_ahead = D_rain * A_front * distSo in vertical rain you *always* get wetter by going slower: * W = W_from_above + W_from_ahead = D_rain * (A_top * v_rain * time + A_front * dist)the wetness from above depends on how long you're out (but not directly on how far you travel), and the wetness from in front depends on how far you travel (but not at all on how fast you go).Now, if there's a wind and the rain is coming down diagonally, the calculation gets more complex and depends on which direction you are going. If the wind is blowing in the same direction you want to go, there is an ideal speed that minimizes your total wetness, but if it's blowing crossways or against you, you always do better just to go fast and get it over with.Biking gives you different overhead and frontal cross sections, too -- you have maybe 2-3x greater overhead cross section and maybe 1.5x-2x less frontal cross section. So you get less wet per meter of forward travel, but more wet per second of exposure, if you're on a bike. |
130,315 | 1vhj2t | is there an actual genetic proof to the penis size stereotypes? | You should try [r/askscience](_URL_0_) for a more accurate answer | [
"There are documented statistical patterns among men of varios races. The stereotypes are consistent with these statistical patterns in terms of rank but not in terms of absolute measurements.",
"The difference is only .2 inches or so, but the stereotypes did match up. I'll edit my comment later with a real sourc... | 6 | [
"There are documented statistical patterns among men of varios races. The stereotypes are consistent with these statistical patterns in terms of rank but not in terms of absolute measurements.",
"The difference is only .2 inches or so, but the stereotypes did match up. I'll edit my comment later with a real sourc... | 6 | <P> There are certain genes, like homeobox (Hox a and d) genes, which may have a role in regulating penis size. In humans, the AR gene located on the X chromosome at Xq11-12 which may determine the penis size. The SRY gene located on the Y chromosome may have a role to play. Variance in size can often be attributed to "de novo" mutations. Deficiency of pituitary growth hormone or gonadotropins or mild degrees of androgen insensitivity can cause small penis size in males and can be addressed with growth hormone or testosterone treatment in early childhood.
<P> Morris said that "Homo sapiens" not only have the largest brains of all higher primates, but that sexual selection in human evolution has caused humans to have the highest ratio of penis size to body mass. Morris conjectured that human ear-lobes developed as an additional erogenous zone to facilitate the extended sexuality necessary in the evolution of human monogamous pair bonding. Morris further stated that the more rounded shape of human female breasts means they are mainly a sexual signalling device rather than simply for providing milk for infants.
<P> The belief that penis size varies according to race is not supported by scientific evidence. A 2005 study reported that "there is no scientific background to support the alleged 'oversized' penis in black people". In fact, a study of 253 men from Tanzania found that the average stretched flaccid penis length of Tanzanian males is 11 cm (4.53 inches) long, smaller than the worldwide average, stretched flaccid penis length of 13.24 cm (5.21 inches), and average erect penis length of 13.12 cm (5.17 inches).
<P> In an interview with "The New York Times," Sarich agreed with his critics, who stated that there was little or no scientific basis for his claims about homosexuality, or on the relationship that he was then teaching of brain size to intelligence. He told the "Times" there seems to be a correlation but "there is not a lot of evidence to support that theory because there isn't a lot of research done on the subject."
<P> He later returned to explain how Deon is able to take undistorted pictures of his enormous penis with the iPhone 5's panoramic camera. He also explained that Deon's claim reinforces the stereotype that all black men have larger than average sized penises.
<P> It has been suggested that differences in penis size between individuals are caused not only by genetics, but also by environmental factors such as culture, diet and chemical or pollution exposure. Endocrine disruption resulting from chemical exposure has been linked to genital deformation in both sexes (among many other problems). Chemicals from both synthetic (e.g., pesticides, anti-bacterial triclosan, plasticizers for plastics) and natural (e.g., chemicals found in tea tree oil and lavender oil) sources have been linked to various degrees of endocrine disruption.
<P> The theory of sexual selection has been used to explain a number of human anatomical features. These include rounded breasts, facial hair, pubic hair and penis size. The breasts of primates are flat, yet are able to produce sufficient milk for feeding their young. The breasts of non-lactating human females are filled with fatty tissue and not milk. Thus it has been suggested the rounded female breasts are signals of fertility. Richard Dawkins has speculated that the loss of the penis bone in humans, when it is present in other primates, may be due to sexual selection by females looking for a clear sign of good health in prospective mates. Since a human erection relies on a hydraulic pumping system, erection failure is a sensitive early warning of certain kinds of physical and mental ill health.
| question: is there an actual genetic proof to the penis size stereotypes? context: <P> There are certain genes, like homeobox (Hox a and d) genes, which may have a role in regulating penis size. In humans, the AR gene located on the X chromosome at Xq11-12 which may determine the penis size. The SRY gene located on the Y chromosome may have a role to play. Variance in size can often be attributed to "de novo" mutations. Deficiency of pituitary growth hormone or gonadotropins or mild degrees of androgen insensitivity can cause small penis size in males and can be addressed with growth hormone or testosterone treatment in early childhood.
<P> Morris said that "Homo sapiens" not only have the largest brains of all higher primates, but that sexual selection in human evolution has caused humans to have the highest ratio of penis size to body mass. Morris conjectured that human ear-lobes developed as an additional erogenous zone to facilitate the extended sexuality necessary in the evolution of human monogamous pair bonding. Morris further stated that the more rounded shape of human female breasts means they are mainly a sexual signalling device rather than simply for providing milk for infants.
<P> The belief that penis size varies according to race is not supported by scientific evidence. A 2005 study reported that "there is no scientific background to support the alleged 'oversized' penis in black people". In fact, a study of 253 men from Tanzania found that the average stretched flaccid penis length of Tanzanian males is 11 cm (4.53 inches) long, smaller than the worldwide average, stretched flaccid penis length of 13.24 cm (5.21 inches), and average erect penis length of 13.12 cm (5.17 inches).
<P> In an interview with "The New York Times," Sarich agreed with his critics, who stated that there was little or no scientific basis for his claims about homosexuality, or on the relationship that he was then teaching of brain size to intelligence. He told the "Times" there seems to be a correlation but "there is not a lot of evidence to support that theory because there isn't a lot of research done on the subject."
<P> He later returned to explain how Deon is able to take undistorted pictures of his enormous penis with the iPhone 5's panoramic camera. He also explained that Deon's claim reinforces the stereotype that all black men have larger than average sized penises.
<P> It has been suggested that differences in penis size between individuals are caused not only by genetics, but also by environmental factors such as culture, diet and chemical or pollution exposure. Endocrine disruption resulting from chemical exposure has been linked to genital deformation in both sexes (among many other problems). Chemicals from both synthetic (e.g., pesticides, anti-bacterial triclosan, plasticizers for plastics) and natural (e.g., chemicals found in tea tree oil and lavender oil) sources have been linked to various degrees of endocrine disruption.
<P> The theory of sexual selection has been used to explain a number of human anatomical features. These include rounded breasts, facial hair, pubic hair and penis size. The breasts of primates are flat, yet are able to produce sufficient milk for feeding their young. The breasts of non-lactating human females are filled with fatty tissue and not milk. Thus it has been suggested the rounded female breasts are signals of fertility. Richard Dawkins has speculated that the loss of the penis bone in humans, when it is present in other primates, may be due to sexual selection by females looking for a clear sign of good health in prospective mates. Since a human erection relies on a hydraulic pumping system, erection failure is a sensitive early warning of certain kinds of physical and mental ill health.
| answer: You should try [r/askscience](_URL_0_) for a more accurate answer |
185,726 | 1svtb1 | If I wore goggles that inverted my vision, would my brain adapt and make it seem as if its not? | [Yes.](_URL_0_) *Psychologist George M. Stratton conducted, in the 1890s, experiments in which he tested the theory of perceptual adaptation. In one experiment, he wore a reversing glasses for 21½ hours over three days, with no change in his vision. After removing the glasses, "normal vision was restored instantaneously and without any disturbance in the natural appearance or position of objects.* | [
"[Yes.](_URL_0_) \n *Psychologist George M. Stratton conducted, in the 1890s, experiments in which he tested the theory of perceptual adaptation. In one experiment, he wore a reversing glasses for 21½ hours over three days, with no change in his vision. After removing the glasses, \"normal vision was restored in... | 3 | [
"[Yes.](_URL_0_) \n *Psychologist George M. Stratton conducted, in the 1890s, experiments in which he tested the theory of perceptual adaptation. In one experiment, he wore a reversing glasses for 21½ hours over three days, with no change in his vision. After removing the glasses, \"normal vision was restored in... | 2 | <P> The initial pointing errors induced by the prismatic goggles are caused by the misalignment of the observer's motor and proprioceptive maps. Once the error has been detected, the observer makes a conscious effort to try and fix the error via strategic recalibration. The reduction in error is also helped by an unconscious process referred to as spatial realignment, which gradually realigns the visual and proprioceptive maps (Newport and Schenk, 2012). This means that over a series of repeated attempts, the observer is able to reduce the margin of error and become more accurate in pointing to the visual target despite the visual displacement. Usually it takes an individual as few as 10 trials to adapt to the visual displacement and successfully point to the target (Rosetti et al., 1993).
<P> The brain naturally guards against double vision. In an attempt to avoid double vision, the brain can sometimes ignore the image from one eye, a process known as suppression. The ability to suppress is to be found particularly in childhood when the brain is still developing. Thus, those with childhood strabismus almost never complain of diplopia, while adults who develop strabismus almost always do. While this ability to suppress might seem an entirely positive adaptation to strabismus, in the developing child, this can prevent the proper development of vision in the affected eye, resulting in amblyopia. Some adults are also able to suppress their diplopia, but their suppression is rarely as deep or as effective and takes much longer to establish, thus they are not at risk of permanently compromising their vision. In some cases, diplopia disappears without medical intervention, but in other cases, the cause of the double vision may still be present.
<P> On a later experiment, Stratton wore the glasses for eight whole days. By day four, the images seen through the instrument were still upside down. However, on day five, images appeared upright until he concentrated on them; then they became inverted again. By having to concentrate on his vision to turn it upside down again, especially when he knew images were hitting his retinas in the opposite orientation as normal, Stratton deduced his brain had adapted to the changes in vision.
<P> The purpose of the goggles is to disable the patient's ability to visually fixate on an object while at the same time allowing the examiner to adequately visualize the eye. This is done by using high-powered (+20 diopters) magnifying glasses with an illumination system. With such a high-powered lens, it is unlikely that the patient can adequately focus and visually fixate on an object to suppress nystagmus.
<P> Additionally in adults who have had exotropia since childhood, the brain may adapt to using a "blind-spot" whereby it receives images from both eyes, but no full image from the deviating eye, thus avoiding double vision and in fact increasing peripheral vision on the side of the deviating eye.
<P> Vision deficit usually occurs when lesions grow in the occipital lobe of the brain, causing a blurred daze for patients, especially in sensitivity to light. Focusing upon finer objects becomes a challenge, along with edge and border detection. Driving behind the wheel is dangerous when astroblastoma grows in residual tissue size, since peripheral vision can be insufficient. Horizontal nystagmus and other involuntary eye disorders can occur.
<P> During prism adaptation, an individual wears special prismatic goggles that are made of prism wedges that displace the visual field laterally or vertically. In most cases the visual field is shifted laterally either in the rightward or leftward direction. While wearing the goggles, the individual engages in a perceptual motor task such as pointing to a visual target directly in front of them. A prism adaptation session includes three components: the pre-test, prism exposure, and the post-test. The effects of the prism adaptation paradigm are observed when the performance on the perceptual motor task of the pre-and post-test are compared.
| question: If I wore goggles that inverted my vision, would my brain adapt and make it seem as if its not? context: <P> The initial pointing errors induced by the prismatic goggles are caused by the misalignment of the observer's motor and proprioceptive maps. Once the error has been detected, the observer makes a conscious effort to try and fix the error via strategic recalibration. The reduction in error is also helped by an unconscious process referred to as spatial realignment, which gradually realigns the visual and proprioceptive maps (Newport and Schenk, 2012). This means that over a series of repeated attempts, the observer is able to reduce the margin of error and become more accurate in pointing to the visual target despite the visual displacement. Usually it takes an individual as few as 10 trials to adapt to the visual displacement and successfully point to the target (Rosetti et al., 1993).
<P> The brain naturally guards against double vision. In an attempt to avoid double vision, the brain can sometimes ignore the image from one eye, a process known as suppression. The ability to suppress is to be found particularly in childhood when the brain is still developing. Thus, those with childhood strabismus almost never complain of diplopia, while adults who develop strabismus almost always do. While this ability to suppress might seem an entirely positive adaptation to strabismus, in the developing child, this can prevent the proper development of vision in the affected eye, resulting in amblyopia. Some adults are also able to suppress their diplopia, but their suppression is rarely as deep or as effective and takes much longer to establish, thus they are not at risk of permanently compromising their vision. In some cases, diplopia disappears without medical intervention, but in other cases, the cause of the double vision may still be present.
<P> On a later experiment, Stratton wore the glasses for eight whole days. By day four, the images seen through the instrument were still upside down. However, on day five, images appeared upright until he concentrated on them; then they became inverted again. By having to concentrate on his vision to turn it upside down again, especially when he knew images were hitting his retinas in the opposite orientation as normal, Stratton deduced his brain had adapted to the changes in vision.
<P> The purpose of the goggles is to disable the patient's ability to visually fixate on an object while at the same time allowing the examiner to adequately visualize the eye. This is done by using high-powered (+20 diopters) magnifying glasses with an illumination system. With such a high-powered lens, it is unlikely that the patient can adequately focus and visually fixate on an object to suppress nystagmus.
<P> Additionally in adults who have had exotropia since childhood, the brain may adapt to using a "blind-spot" whereby it receives images from both eyes, but no full image from the deviating eye, thus avoiding double vision and in fact increasing peripheral vision on the side of the deviating eye.
<P> Vision deficit usually occurs when lesions grow in the occipital lobe of the brain, causing a blurred daze for patients, especially in sensitivity to light. Focusing upon finer objects becomes a challenge, along with edge and border detection. Driving behind the wheel is dangerous when astroblastoma grows in residual tissue size, since peripheral vision can be insufficient. Horizontal nystagmus and other involuntary eye disorders can occur.
<P> During prism adaptation, an individual wears special prismatic goggles that are made of prism wedges that displace the visual field laterally or vertically. In most cases the visual field is shifted laterally either in the rightward or leftward direction. While wearing the goggles, the individual engages in a perceptual motor task such as pointing to a visual target directly in front of them. A prism adaptation session includes three components: the pre-test, prism exposure, and the post-test. The effects of the prism adaptation paradigm are observed when the performance on the perceptual motor task of the pre-and post-test are compared.
| answer: [Yes.](_URL_0_) *Psychologist George M. Stratton conducted, in the 1890s, experiments in which he tested the theory of perceptual adaptation. In one experiment, he wore a reversing glasses for 21½ hours over three days, with no change in his vision. After removing the glasses, "normal vision was restored instantaneously and without any disturbance in the natural appearance or position of objects.* |
179,115 | 2m56x4 | why would someone want a flexible spending account? | Money put into your FSA is taken out before you pay taxes on it. Most people are taxed somewhere around a third of their income so, if you can use the money in the FSA, it's a good deal.If you're single, young & healthy, it might seem ridiculous because you don't actually spend much money on predictable healthcare expenses. however...If you have kids, there's a number of scheduled checkups, immunizations and whatnot.If you're older, you may have medical problems that require regular visits to the doctor & prescription drugs that you've been taking daily for years.If you have health problems, you'll also have a bunch of medication you need to take.It can also be good if you have some predictable expenses. If you have poor eyesight, you might want to plan ahead to get a new pair of glasses or contacts. A friend planned ahead for her laser eye surgery, effectively getting a 25% discount on the procedure by not paying taxes on the money. | [
"Money put into your FSA is taken out before you pay taxes on it. Most people are taxed somewhere around a third of their income so, if you can use the money in the FSA, it's a good deal.\n\nIf you're single, young & healthy, it might seem ridiculous because you don't actually spend much money on predictable hea... | 1 | [
"Money put into your FSA is taken out before you pay taxes on it. Most people are taxed somewhere around a third of their income so, if you can use the money in the FSA, it's a good deal.\n\nIf you're single, young & healthy, it might seem ridiculous because you don't actually spend much money on predictable hea... | 1 | <P> A flexible spending account (FSA), also known as a flexible spending arrangement, is one of a number of tax-advantaged financial accounts, resulting in payroll tax savings. Before the Patient Protection and Affordable Care Act, one significant disadvantage to using an FSA was that funds not used by the end of the plan year were forfeited to the employer, known as the "use it or lose it" rule. Under the terms of the Affordable Care Act, a plan may permit an employee to carry over up to $500 into the following year without losing the funds.
<P> Having an account planner involved in the account has led to more integration within the agency, which has resulted in better teamwork in trying to combine the needs of the client, the market and the consumer. Account planners stimulate discussions about things that were overlooked before, such as, purchasing decisions, brand-consumer relationship and specific circumstance evaluation.
<P> Advocates of defined contribution plans point out that each employee has the ability to tailor the investment portfolio to his or her individual needs and financial situation, including the choice of how much to contribute, if anything at all. However, others state that these apparent advantages could also hinder some workers who might not possess the financial savvy to choose the correct investment vehicles or have the discipline to voluntarily contribute money to retirement accounts. This debate parallels the discussion currently going on in the U.S., where many Republican leaders favor transforming the Social Security system, at least in part, to a self-directed investment plan.
<P> He also supports personal accounts for Social Security and Medicare, funded using the employee's portion of FICA payroll taxes, to replace all or part of the benefits paid under the current system. According to Gingrich, private accounts would offer workers retirement and medical benefits much better than what these programs currently offer while greatly reducing the need for government spending.
<P> In 1984, the Internal Revenue Service issued a ruling that, while flexible spending accounts were allowable, employees must elect a certain amount for the plan each year and that any unused amounts would be forfeited at the end of the year. Until that point, some employers had set up flexible spending account plans that allowed employees to simply request reimbursement of any qualifying medical expense with no preset annual limit and no risk of forfeiture by employees.
<P> The cost-benefit relationship constraint is also called cost effectiveness constraints and is pervasive throughout the framework. The companies need to spend money and time in the process of providing financial statements. To be more specific, Costs can constraint the range of information when providing financial reporting on the grounds that the companies must "collect, process, analyze and disseminate relevant information" which need time and money. For investors, they want to know all financial information if possible in ideal condition, which may cause tremendous financial burden in the corporations. Moreover, some financial information may not valuable for external users to acquire a huge benefit, for example, how much money do a company spend for its greening of headquarters. Therefore, when deciding the components of financial reporting, companies need to measure the sense of particular financial information and the expenditure of providing particular information and the benefits they can acquire from this particular information. Properly speaking, If the costs in particular information exceed the benefit they can acquire, companies may choose to not disclose this particular information. For example, If there is $0.1 difference between checkbook register and bank statement, accountant should ignore the $0.1 rather than waste time and money to find the $0.1.
<P> Advocates of Defined contribution plan point out that each employee has the ability to tailor the investment portfolio to his or her individual needs and financial situation, including the choice of how much to contribute, if anything at all. However, others state that these apparent advantages could also hinder some workers who might not possess the financial savvy to choose the correct investment vehicles or have the discipline to voluntarily contribute money to retirement accounts.
| question: why would someone want a flexible spending account? context: <P> A flexible spending account (FSA), also known as a flexible spending arrangement, is one of a number of tax-advantaged financial accounts, resulting in payroll tax savings. Before the Patient Protection and Affordable Care Act, one significant disadvantage to using an FSA was that funds not used by the end of the plan year were forfeited to the employer, known as the "use it or lose it" rule. Under the terms of the Affordable Care Act, a plan may permit an employee to carry over up to $500 into the following year without losing the funds.
<P> Having an account planner involved in the account has led to more integration within the agency, which has resulted in better teamwork in trying to combine the needs of the client, the market and the consumer. Account planners stimulate discussions about things that were overlooked before, such as, purchasing decisions, brand-consumer relationship and specific circumstance evaluation.
<P> Advocates of defined contribution plans point out that each employee has the ability to tailor the investment portfolio to his or her individual needs and financial situation, including the choice of how much to contribute, if anything at all. However, others state that these apparent advantages could also hinder some workers who might not possess the financial savvy to choose the correct investment vehicles or have the discipline to voluntarily contribute money to retirement accounts. This debate parallels the discussion currently going on in the U.S., where many Republican leaders favor transforming the Social Security system, at least in part, to a self-directed investment plan.
<P> He also supports personal accounts for Social Security and Medicare, funded using the employee's portion of FICA payroll taxes, to replace all or part of the benefits paid under the current system. According to Gingrich, private accounts would offer workers retirement and medical benefits much better than what these programs currently offer while greatly reducing the need for government spending.
<P> In 1984, the Internal Revenue Service issued a ruling that, while flexible spending accounts were allowable, employees must elect a certain amount for the plan each year and that any unused amounts would be forfeited at the end of the year. Until that point, some employers had set up flexible spending account plans that allowed employees to simply request reimbursement of any qualifying medical expense with no preset annual limit and no risk of forfeiture by employees.
<P> The cost-benefit relationship constraint is also called cost effectiveness constraints and is pervasive throughout the framework. The companies need to spend money and time in the process of providing financial statements. To be more specific, Costs can constraint the range of information when providing financial reporting on the grounds that the companies must "collect, process, analyze and disseminate relevant information" which need time and money. For investors, they want to know all financial information if possible in ideal condition, which may cause tremendous financial burden in the corporations. Moreover, some financial information may not valuable for external users to acquire a huge benefit, for example, how much money do a company spend for its greening of headquarters. Therefore, when deciding the components of financial reporting, companies need to measure the sense of particular financial information and the expenditure of providing particular information and the benefits they can acquire from this particular information. Properly speaking, If the costs in particular information exceed the benefit they can acquire, companies may choose to not disclose this particular information. For example, If there is $0.1 difference between checkbook register and bank statement, accountant should ignore the $0.1 rather than waste time and money to find the $0.1.
<P> Advocates of Defined contribution plan point out that each employee has the ability to tailor the investment portfolio to his or her individual needs and financial situation, including the choice of how much to contribute, if anything at all. However, others state that these apparent advantages could also hinder some workers who might not possess the financial savvy to choose the correct investment vehicles or have the discipline to voluntarily contribute money to retirement accounts.
| answer: Money put into your FSA is taken out before you pay taxes on it. Most people are taxed somewhere around a third of their income so, if you can use the money in the FSA, it's a good deal. If you're single, young & healthy, it might seem ridiculous because you don't actually spend much money on predictable healthcare expenses. However... If you have kids, there are a number of scheduled checkups, immunizations and whatnot. If you're older, you may have medical problems that require regular visits to the doctor & prescription drugs that you've been taking daily for years. If you have health problems, you'll also have a bunch of medication you need to take. It can also be good if you have some predictable expenses. If you have poor eyesight, you might want to plan ahead to get a new pair of glasses or contacts. A friend planned ahead for her laser eye surgery, effectively getting a 25% discount on the procedure by not paying taxes on the money. |
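The tax arithmetic in this row (taxes around a third of income; an effective 25% discount on a planned expense) can be checked with a short calculation. This is a minimal illustrative sketch, not part of the dataset row; the 25% marginal tax rate and the $2,000 expense figure are assumed example values.

```python
# Minimal sketch of the FSA arithmetic discussed in the answer above.
# The 25% marginal tax rate and the $2,000 expense are assumed example values.

def fsa_tax_savings(eligible_expenses: float, marginal_tax_rate: float) -> float:
    """Tax avoided by paying eligible expenses with pre-tax FSA dollars."""
    return eligible_expenses * marginal_tax_rate

expense = 2000.00   # planned medical expense, e.g. laser eye surgery
tax_rate = 0.25     # combined marginal tax rate (assumption)

saved = fsa_tax_savings(expense, tax_rate)
print(f"Paying ${expense:,.2f} through an FSA saves about ${saved:,.2f} in tax")
print(f"Effective discount on the expense: {saved / expense:.0%}")
```

The larger the marginal tax rate and the more of the election that actually gets spent on eligible expenses, the bigger the effective discount.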
77,892 | 2qt93z | why does my vision get obscured when a strong light source hit my eyes | Because your eye adapts to the bright light (the iris closes to a pinhole), which in turn does not let in much light from faint sources. The reason this is done automatically and cannot be overridden by you is that bright light in high doses is quite damaging to your retinas. | [
"Because your eye will adapt (iris will close to a pinhole) to adapt to the bright light, which in turn does not let in much light from faint sources as well. The reason this is done automatically and cannot be overridden by you is because bright light in high doses is quite damaging to your retinas."
] | 1 | [] | 0 | <P> These cause permanent obstruction of aqueous outflow. In some cases, pressure may rapidly build up in the eye, causing pain and redness (symptomatic, or so-called "acute" angle closure). In this situation, the vision may become blurred, and halos may be seen around bright lights. Accompanying symptoms may include a headache and vomiting.
<P> The blinding effect is caused in large part by reduced contrast due to light scattering in the eye by excessive brightness, or to reflection of light from dark areas in the field of vision, with luminance similar to the background luminance. This kind of glare is a particular instance of disability glare, called veiling glare. (This is not the same as loss of accommodation of night vision which is caused by the direct effect of the light itself on the eye.)
<P> As objects radiate light in straight lines in all directions, the eye must also be hit with this light over its outer surface. This idea presented a problem for al-Haytham and his predecessors, as if this was the case, the rays received by the eye from every point on the object would cause a blurred image. Al-Haytham solved this problem using his theory of refraction. He argued that although the object sends an infinite number of rays of light to the eye, only one of these lines falls on the eye perpendicularly: the other rays meet the eye at angles that are not perpendicular. According to al-Haytham, this causes them to be refracted and weakened. He claimed that all the rays other than the one that hits the eye perpendicularly are not involved in vision.
<P> Al-Haytham offered many reasons against the extramission theory, pointing to the fact that eyes can be damaged by looking directly at bright lights, such as the sun. He claimed the low probability that the eye can fill the entirety of space as soon as the eyelids are opened as an observer looks up into the night sky. Using the intromission theory as a foundation, he formed his own theory that an object emits rays of light from every point on its surface which then travel in all directions, thereby allowing some light into a viewer's eyes. According to this theory, the object being viewed is considered to be a compilation of an infinite number of points, from which rays of light are projected.
<P> One can observe the effect of straylight by looking at a distant bright light source against a dark background. If the source is small, it would look like a small bright spot if the eye imaged it perfectly. Scattering in the eye makes the source appear spread out, surrounded by glare. The disability glare caused by such a situation has been found to correspond precisely to the effect of true light. As a consequence, disability glare was subsequently defined by this true light, called "straylight".
<P> Averted vision works because there are virtually no rods (cells which detect dim light in black and white) in the fovea: a small area in the center of the eye. The fovea contains primarily cone cells, which serve as bright light and color detectors and are not as useful during the night. This situation results in a decrease in visual sensitivity in central vision at night. Based on the early work of Osterberg (1935), and later confirmed by modern adaptive optics, the density of the rod cells usually reaches a maximum around 20 degrees off the center of vision.
<P> As with any optical system experiencing a defocus aberration, the effect can be exaggerated or masked by changing the aperture size. In the case of the eye, a large pupil emphasizes refractive error and a small pupil masks it. This phenomenon can cause a condition in which an individual has a greater difficulty seeing in low-illumination areas, even though there are no symptoms in bright light, such as daylight.
| question: why does my vision get obscured when a strong light source hit my eyes context: <P> These cause permanent obstruction of aqueous outflow. In some cases, pressure may rapidly build up in the eye, causing pain and redness (symptomatic, or so-called "acute" angle closure). In this situation, the vision may become blurred, and halos may be seen around bright lights. Accompanying symptoms may include a headache and vomiting.
<P> The blinding effect is caused in large part by reduced contrast due to light scattering in the eye by excessive brightness, or to reflection of light from dark areas in the field of vision, with luminance similar to the background luminance. This kind of glare is a particular instance of disability glare, called veiling glare. (This is not the same as loss of accommodation of night vision which is caused by the direct effect of the light itself on the eye.)
<P> As objects radiate light in straight lines in all directions, the eye must also be hit with this light over its outer surface. This idea presented a problem for al-Haytham and his predecessors, as if this was the case, the rays received by the eye from every point on the object would cause a blurred image. Al-Haytham solved this problem using his theory of refraction. He argued that although the object sends an infinite number of rays of light to the eye, only one of these lines falls on the eye perpendicularly: the other rays meet the eye at angles that are not perpendicular. According to al-Haytham, this causes them to be refracted and weakened. He claimed that all the rays other than the one that hits the eye perpendicularly are not involved in vision.
<P> Al-Haytham offered many reasons against the extramission theory, pointing to the fact that eyes can be damaged by looking directly at bright lights, such as the sun. He claimed the low probability that the eye can fill the entirety of space as soon as the eyelids are opened as an observer looks up into the night sky. Using the intromission theory as a foundation, he formed his own theory that an object emits rays of light from every point on its surface which then travel in all directions, thereby allowing some light into a viewer's eyes. According to this theory, the object being viewed is considered to be a compilation of an infinite number of points, from which rays of light are projected.
<P> One can observe the effect of straylight by looking at a distant bright light source against a dark background. If the source is small, it would look like a small bright spot if the eye imaged it perfectly. Scattering in the eye makes the source appear spread out, surrounded by glare. The disability glare caused by such a situation has been found to correspond precisely to the effect of true light. As a consequence, disability glare was subsequently defined by this true light, called "straylight".
<P> Averted vision works because there are virtually no rods (cells which detect dim light in black and white) in the fovea: a small area in the center of the eye. The fovea contains primarily cone cells, which serve as bright light and color detectors and are not as useful during the night. This situation results in a decrease in visual sensitivity in central vision at night. Based on the early work of Osterberg (1935), and later confirmed by modern adaptive optics, the density of the rod cells usually reaches a maximum around 20 degrees off the center of vision.
<P> As with any optical system experiencing a defocus aberration, the effect can be exaggerated or masked by changing the aperture size. In the case of the eye, a large pupil emphasizes refractive error and a small pupil masks it. This phenomenon can cause a condition in which an individual has a greater difficulty seeing in low-illumination areas, even though there are no symptoms in bright light, such as daylight.
| answer: Because your eye adapts to the bright light (the iris closes to a pinhole), which in turn does not let in much light from faint sources. The reason this is done automatically and cannot be overridden by you is that bright light in high doses is quite damaging to your retinas. |
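As a rough illustration of the pupil effect described in this row, the amount of light admitted scales with pupil area. The diameters below (about 8 mm fully dilated, about 2 mm constricted by a strong light) are typical textbook figures used here as assumptions, not values taken from the dataset.

```python
# Rough sketch: light admitted by the eye scales with pupil area (~ diameter squared).
# The 8 mm (dark-adapted) and 2 mm (bright-light) diameters are assumed typical values.
import math

def pupil_area(diameter_mm: float) -> float:
    """Area of a circular pupil in mm^2."""
    return math.pi * (diameter_mm / 2) ** 2

dilated = pupil_area(8.0)      # dark-adapted pupil
constricted = pupil_area(2.0)  # pupil constricted by a strong light source

ratio = dilated / constricted
print(f"Dilated area: {dilated:.1f} mm^2, constricted area: {constricted:.1f} mm^2")
print(f"A constricted pupil admits roughly 1/{ratio:.0f} of the light")  # about 1/16
```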
143,153 | 2m6jqu | what are you hearing different between 320kbps and 128kbps. also flac, mp3, or aac audio | Modern audio compression algorithms are very, very good. Based on decades of psychoacoustic research, they can remove or "blur" only parts of the sound that you can't hear. A modern 128kbps audio file sounds amazingly close to the original. If you want to hear the differences, put on good quality headphones and listen to music with lots of drums and cymbal crashes - those don't sound quite as good in a 128kbps MP3. A properly encoded 320kbps file is indistinguishable from the original. It's compressed, but the data that's lost is beyond human hearing. FLAC is a format that compresses an audio file *losslessly* - not just beyond human hearing, it doesn't change a single bit in the file. Purists love this, but no listening test has ever shown FLAC to be superior to a 320 kbps MP3. If you're recording and mixing, FLAC makes sense, you don't want to compress your raw audio before mixing. MP3 and AAC are two different "lossy" algorithms for compressing audio. They both throw away details that are hard to hear. They're different algorithms, with different pros and cons, but with similar results. At the same bit rate, AAC is slightly better quality than MP3, but not dramatically. | [
"Modern audio compression algorithms are very, very good. Based on decades of psychoacoustic research, they can remove or \"blur\" only parts of the sound that you can't hear. A modern 128kbps audio file sounds amazingly close to the original.\n\nIf you want to hear the differences, put on good quality headphones a... | 1 | [
"Modern audio compression algorithms are very, very good. Based on decades of psychoacoustic research, they can remove or \"blur\" only parts of the sound that you can't hear. A modern 128kbps audio file sounds amazingly close to the original.\n\nIf you want to hear the differences, put on good quality headphones a... | 1 | <P> E-MU 20K is the commercial name for a line of audio chips by Creative Technology, commercially known as the Sound Blaster X-Fi chipset. The series comprises the E-MU 20K1 (CA20K1) and E-MU 20K2 (CA20K2) audio chips.
<P> Microsoft has sometimes claimed that the sound quality of WMA at 64 kbit/s equals or exceeds that of MP3 at 128 kbit/s (both WMA and MP3 are considered near-transparent at 192 kbit/s by most listeners). In a 1999 study funded by Microsoft, National Software Testing Laboratories (NSTL) found that listeners preferred WMA at 64 kbit/s to MP3 at 128 kbit/s (as encoded by MusicMatch Jukebox).
<P> BULLET::::- AAX files are encrypted M4B's. The audio is encoded in variable quality AAC format. While the vast majority of books are encoded at 64 kbit/s, 22.050 kHz, stereo, some are as low as 32k, mono. Radio plays are often encoded at 128kbit/s and 44.1 kHz. Additionally, many audiobooks in Germany are encoded at the latter bitrate and are marketed as "AAX+"; however, there is no difference in the actual file format.
<P> In listening tests around 64 kbit/s, Opus shows superior quality compared to HE-AAC codecs, which were previously dominant due to their use of the patented spectral band replication (SBR) technology. In listening tests around 96 kbit/s, Opus shows slightly superior quality compared to AAC and significantly better quality compared to Vorbis and MP3.
<P> The Sony NWZ-A826 is one of many MP3 players belonging to the Walkman Z-series. This edition features 4 GB flash memory, as well as a large monitor; in addition the MP3 player offers several audio options in a housing with a thickness of 9.3 mm. The EX earplugs come packaged. There are four audio options: Clear Stereo, Clear Bass, VPT Surround and DSEE Sound Enhancer.The ear plugs are a combination of earplugs and a normal earset in one.
<P> MPEG-1 Layer II (MP2—often incorrectly called MUSICAM) is a lossy audio format designed to provide high quality at about 192 kbit/s for stereo sound. Decoding MP2 audio is computationally simple, relative to MP3, AAC, etc.
<P> 24-bit audio does not require dithering, as the noise level of the digital converter is always louder than the required level of any dither that might be applied. 24-bit audio could theoretically encode 144 dB of dynamic range, but based on manufacturer's datasheets no ADCs exist that can provide higher than ~125 dB.
| question: what are you hearing different between 320kbps and 128kbps. also flac, mp3, or aac audio context: <P> E-MU 20K is the commercial name for a line of audio chips by Creative Technology, commercially known as the Sound Blaster X-Fi chipset. The series comprises the E-MU 20K1 (CA20K1) and E-MU 20K2 (CA20K2) audio chips.
<P> Microsoft has sometimes claimed that the sound quality of WMA at 64 kbit/s equals or exceeds that of MP3 at 128 kbit/s (both WMA and MP3 are considered near-transparent at 192 kbit/s by most listeners). In a 1999 study funded by Microsoft, National Software Testing Laboratories (NSTL) found that listeners preferred WMA at 64 kbit/s to MP3 at 128 kbit/s (as encoded by MusicMatch Jukebox).
<P> BULLET::::- AAX files are encrypted M4B's. The audio is encoded in variable quality AAC format. While the vast majority of books are encoded at 64 kbit/s, 22.050 kHz, stereo, some are as low as 32k, mono. Radio plays are often encoded at 128kbit/s and 44.1 kHz. Additionally, many audiobooks in Germany are encoded at the latter bitrate and are marketed as "AAX+"; however, there is no difference in the actual file format.
<P> In listening tests around 64 kbit/s, Opus shows superior quality compared to HE-AAC codecs, which were previously dominant due to their use of the patented spectral band replication (SBR) technology. In listening tests around 96 kbit/s, Opus shows slightly superior quality compared to AAC and significantly better quality compared to Vorbis and MP3.
<P> The Sony NWZ-A826 is one of many MP3 players belonging to the Walkman Z-series. This edition features 4 GB flash memory, as well as a large monitor; in addition the MP3 player offers several audio options in a housing with a thickness of 9.3 mm. The EX earplugs come packaged. There are four audio options: Clear Stereo, Clear Bass, VPT Surround and DSEE Sound Enhancer.The ear plugs are a combination of earplugs and a normal earset in one.
<P> MPEG-1 Layer II (MP2—often incorrectly called MUSICAM) is a lossy audio format designed to provide high quality at about 192 kbit/s for stereo sound. Decoding MP2 audio is computationally simple, relative to MP3, AAC, etc.
<P> 24-bit audio does not require dithering, as the noise level of the digital converter is always louder than the required level of any dither that might be applied. 24-bit audio could theoretically encode 144 dB of dynamic range, but based on manufacturer's datasheets no ADCs exist that can provide higher than ~125 dB.
| answer: Modern audio compression algorithms are very, very good. Based on decades of psychoacoustic research, they can remove or "blur" only parts of the sound that you can't hear. A modern 128kbps audio file sounds amazingly close to the original. If you want to hear the differences, put on good quality headphones and listen to music with lots of drums and cymbal crashes - those don't sound quite as good in a 128kbps MP3. A properly encoded 320kbps file is indistinguishable from the original. It's compressed, but the data that's lost is beyond human hearing. FLAC is a format that compresses an audio file *losslessly* - not just beyond human hearing, it doesn't change a single bit in the file. Purists love this, but no listening test has ever shown FLAC to be superior to a 320 kbps MP3. If you're recording and mixing, FLAC makes sense, you don't want to compress your raw audio before mixing. MP3 and AAC are two different "lossy" algorithms for compressing audio. They both throw away details that are hard to hear. They're different algorithms, with different pros and cons, but with similar results. At the same bit rate, AAC is slightly better quality than MP3, but not dramatically. |
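The bitrates discussed in this row translate directly into file sizes, which is often the practical difference listeners notice first. The sketch below assumes a 4-minute track and a rough 2:1 FLAC compression ratio against 16-bit/44.1 kHz stereo PCM; both figures are illustrative assumptions, not values from the dataset.

```python
# Sketch: approximate size of a 4-minute track at various bitrates.
# The 4-minute duration and the ~2:1 FLAC compression ratio are assumptions.

def size_mb(bitrate_kbps: float, seconds: float) -> float:
    """File size in megabytes for a constant-bitrate stream."""
    return bitrate_kbps * 1000 * seconds / 8 / 1_000_000

track_seconds = 4 * 60
pcm_kbps = 44_100 * 16 * 2 / 1000   # 16-bit stereo CD audio, about 1411 kbps
flac_kbps = pcm_kbps / 2            # assume roughly 2:1 lossless compression

for label, kbps in [("MP3/AAC 128 kbps", 128),
                    ("MP3/AAC 320 kbps", 320),
                    ("FLAC (approx.)", flac_kbps),
                    ("Uncompressed PCM", pcm_kbps)]:
    print(f"{label:18s} ~ {size_mb(kbps, track_seconds):5.1f} MB")
```

A 128 kbps file comes out around 4 MB for four minutes, a 320 kbps file around 10 MB, and lossless formats several times larger again, which is the trade-off behind the quality comparisons in the answer.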
907 | 7i45d5 | how was the internet made? like how did they discover coding, etc? | Computers predate the internet by several decades, but the origins of the internet can be traced back to a US Military project in the 1960's called Arpanet. They wanted to see if they could get computers to communicate with each other. The first data packet was sent from a computer at UCLA to one at Stanford in 1969. The technology that came out of Arpanet ultimately led to the commercial internet. | [
"Computers predate the internet by several decades, but the origins of the internet can be traced back to a US Military project in the 1960's called Arpanet. They wanted to see if they could get computers to communicate with each other. The first data packet was sent from a computer at UCLA to one at Stanford in 19... | 1 | [] | 0 | <P> As the Internet grew from a forum for sharing information to a marketplace for doing business, a technology matured that allowed computers to transact with each other more easily. Out of these Internet roots, web service technology was born.
<P> While the Internet began with a U.S. Government research project in the late 1950s, the web in its present form did not appear on the Internet until after Tim Berners-Lee and his colleagues at the European laboratory (CERN) proposed the concept of linking documents with hypertext. But it was not until Mosaic, the forerunner of the famous Netscape Navigator appeared, that the Internet became more than a file serving system.
<P> The history of the Internet begins with the development of electronic computers in the 1950s. Initial concepts of wide area networking originated in several computer science laboratories in the United States, United Kingdom, and France. The U.S. Department of Defense awarded contracts as early as the 1960s, including for the development of the ARPANET project, directed by Robert Taylor and managed by Lawrence Roberts. The first message was sent over the ARPANET in 1969 from computer science Professor Leonard Kleinrock's laboratory at University of California, Los Angeles (UCLA) to the second network node at Stanford Research Institute (SRI).
<P> Made with Code is an initiative launched by Google on 19 July 2014. Google aimed to empower young women in middle and high schools with computer programming skills. Made with Code was created after Google's own research found out that encouragement and exposure are the critical factors that would influence young females to pursue Computer Science. It was reported that Google is funding $50 million to Made with Code, on top of the initial $40 million invested since 2010 in organizations like Code.org, Black Girls Code, and Girls Who Code. The Made with Code initiative involves both online activities as well as real life events, collaborating with notable firms like Shapeways and App Inventor.
<P> Internetworking started as a way to connect disparate types of networking technology, but it became widespread through the developing need to connect two or more local area networks via some sort of wide area network. The original term for an internetwork was catenet.
<P> The origins of the Internet date back to research commissioned by the federal government of the United States in the 1960s to build robust, fault-tolerant communication with computer networks. The primary precursor network, the ARPANET, initially served as a backbone for interconnection of regional academic and military networks in the 1980s. The funding of the National Science Foundation Network as a new backbone in the 1980s, as well as private funding for other commercial extensions, led to worldwide participation in the development of new networking technologies, and the merger of many networks. The linking of commercial networks and enterprises by the early 1990s marked the beginning of the transition to the modern Internet, and generated a sustained exponential growth as generations of institutional, personal, and mobile computers were connected to the network. Although the Internet was widely used by academia since the 1980s, commercialization incorporated its services and technologies into virtually every aspect of modern life.
<P> In the 1950s and 1960s, with the creation of computers, is where the history of the Internet begins. In 1969 came the invention of Arpanet, the first network to run on packet-switching technology. These were the first hosts on what would one day become the Internet. The concept of email was first created by Ray Tomlinson in 1971, and this innovation was followed by Project Gutenberg and eBooks. Tim Berners-Lee is considered the inventor of the World Wide Web; he implemented the first successful communication between a HyperText Transfer Protocol client and a server.
| question: how was the internet made? like how did they discover coding, etc? context: <P> As the Internet grew from a forum for sharing information to a marketplace for doing business, a technology matured that allowed computers to transact with each other more easily. Out of these Internet roots, web service technology was born.
<P> While the Internet began with a U.S. Government research project in the late 1950s, the web in its present form did not appear on the Internet until after Tim Berners-Lee and his colleagues at the European laboratory (CERN) proposed the concept of linking documents with hypertext. But it was not until Mosaic, the forerunner of the famous Netscape Navigator appeared, that the Internet became more than a file serving system.
<P> The history of the Internet begins with the development of electronic computers in the 1950s. Initial concepts of wide area networking originated in several computer science laboratories in the United States, United Kingdom, and France. The U.S. Department of Defense awarded contracts as early as the 1960s, including for the development of the ARPANET project, directed by Robert Taylor and managed by Lawrence Roberts. The first message was sent over the ARPANET in 1969 from computer science Professor Leonard Kleinrock's laboratory at University of California, Los Angeles (UCLA) to the second network node at Stanford Research Institute (SRI).
<P> Made with Code is an initiative launched by Google on 19 July 2014. Google aimed to empower young women in middle and high schools with computer programming skills. Made with Code was created after Google's own research found out that encouragement and exposure are the critical factors that would influence young females to pursue Computer Science. It was reported that Google is funding $50 million to Made with Code, on top of the initial $40 million invested since 2010 in organizations like Code.org, Black Girls Code, and Girls Who Code. The Made with Code initiative involves both online activities as well as real life events, collaborating with notable firms like Shapeways and App Inventor.
<P> Internetworking started as a way to connect disparate types of networking technology, but it became widespread through the developing need to connect two or more local area networks via some sort of wide area network. The original term for an internetwork was catenet.
<P> The origins of the Internet date back to research commissioned by the federal government of the United States in the 1960s to build robust, fault-tolerant communication with computer networks. The primary precursor network, the ARPANET, initially served as a backbone for interconnection of regional academic and military networks in the 1980s. The funding of the National Science Foundation Network as a new backbone in the 1980s, as well as private funding for other commercial extensions, led to worldwide participation in the development of new networking technologies, and the merger of many networks. The linking of commercial networks and enterprises by the early 1990s marked the beginning of the transition to the modern Internet, and generated a sustained exponential growth as generations of institutional, personal, and mobile computers were connected to the network. Although the Internet was widely used by academia since the 1980s, commercialization incorporated its services and technologies into virtually every aspect of modern life.
<P> In the 1950s and 1960s, with the creation of computers, is where the history of the Internet begins. In 1969 came the invention of Arpanet, the first network to run on packet-switching technology. These were the first hosts on what would one day become the Internet. The concept of email was first created by Ray Tomlinson in 1971, and this innovation was followed by Project Gutenberg and eBooks. Tim Berners-Lee is considered the inventor of the World Wide Web; he implemented the first successful communication between a HyperText Transfer Protocol client and a server.
| answer: Computers predate the internet by several decades, but the origins of the internet can be traced back to a US Military project in the 1960's called Arpanet. They wanted to see if they could get computers to communicate with each other. The first data packet was sent from a computer at UCLA to one at Stanford in 1969. The technology that came out of Arpanet ultimately led to the commercial internet. |
64,562 | a99h6s | Which plane was the first to have radar? | The very first country to develop the technology of mobile (airborne) radar systems was Great Britain. It was first installed on an 'Avro Anson' plane in 1937, with coverage of approximately 1 mile for airborne targets and 3 miles for ships. Serial production of these systems, later called 'AI Mk. IV', began in 1940; they were mounted on 'Bristol Blenheim' bombers. If you are interested, here is the information about other major countries of that period: • The USA received its first mobile radar in 1941, specially for 'Douglas P-70' fighters, but actual serial production started in 1942 for 'Northrop P-61 Black Widow' night fighters. • The USSR developed its first prototype in 1941, which was fully developed only in 1942, when it was mounted on 'Pe-2' fighters during the Battle of Stalingrad. • Germany started testing mobile radar systems in 1941, but the fully operational model was deployed only in 1942 for Ju-88 night fighters. • Japan's first mobile radars were introduced in 1942, but their mass production began in 1944. They were first mounted on H8K 'Emily' flying boats. | [
"Very first country to develop the technology of mobile radar systems was the Great Britain. It was established on plane 'Avro Anson' in 1937 with coverage of approximately 1 mile for airborne targets and 3 miles to ships.\nOn the other hand, serial production of these systems later called 'Al Mk. IV' began in 1940... | 1 | [] | 0 | <P> Initially, the radar was designed to detect fighter aircraft at 100 miles and 16,000 feet. The radar used five transmitters that operated at S-band frequencies ranging from 2700 to 3019 MHz. It took twenty-five people to operate the radar.
<P> The Air-Surface Vessel Mark I, using electronics similar to those of the AI sets, was the first aircraft-carried radar to enter service, in early 1940. It was quickly replaced by the improved Mark II, which included side-scanning antennas that allowed the aircraft to sweep twice the area in a single pass. The later ASV Mk. II had the power needed to detect submarines on the surface, eventually making such operations suicidal.
<P> The specially designed and built AN/APS-70 Radar with its massive internal antenna was the best airborne radar system built for detecting other aircraft because its low frequency penetrated weather and showed only the more electronically visible returns. A large radome on top of the envelope held the height-finding radar.
<P> The first version of this radar, Type 79X, was mounted on the RN Signal School's tender, the minesweeper , in October 1936. This equipment used a frequency of 75 MHz and a wavelength of 4 metres and its antennae were strung between the ship's masts. They detected an aircraft at an altitude of and a range of during tests in July 1937.
<P> Primary radar operation is based on the principle of echolocation. Electromagnetic pulses of high power emitted by the radar antenna are converted into a narrow wavefront which propagates at the speed of light (300 000 000 m/s). This is reflected by the aircraft and then picked up again by the rotating antenna on its own axis. A primary radar detects all aircraft without selection, regardless of whether or not they possess a transponder.
<P> Airborne Interception radar, Mark VIII, or AI Mk. VIII for short, was the first operational microwave-frequency air-to-air radar. It was used by Royal Air Force night fighters from late 1941 until the end of World War II. The basic concept, using a moving parabolic antenna to search for targets and track them accurately, remained in use by most airborne radars well into the 1980s.
<P> The experiments with pulsed radar were continued, primarily in improving the receiver for handling the short pulses. In June 1936, the NRL's first prototype radar system, now operating at 28.6 MHz, was demonstrated to government officials, successfully tracking an aircraft at distances up to . Their radar was based on low frequency signals, at least by today's standards, and thus required large antennas, making it impractical for ship or aircraft mounting.
| question: Which plane was the first to have radar? context: <P> Initially, the radar was designed to detect fighter aircraft at 100 miles and 16,000 feet. The radar used five transmitters that operated at S-band frequencies ranging from 2700 to 3019 MHz. It took twenty-five people to operate the radar.
<P> The Air-Surface Vessel Mark I, using electronics similar to those of the AI sets, was the first aircraft-carried radar to enter service, in early 1940. It was quickly replaced by the improved Mark II, which included side-scanning antennas that allowed the aircraft to sweep twice the area in a single pass. The later ASV Mk. II had the power needed to detect submarines on the surface, eventually making such operations suicidal.
<P> The specially designed and built AN/APS-70 Radar with its massive internal antenna was the best airborne radar system built for detecting other aircraft because its low frequency penetrated weather and showed only the more electronically visible returns. A large radome on top of the envelope held the height-finding radar.
<P> The first version of this radar, Type 79X, was mounted on the RN Signal School's tender, the minesweeper , in October 1936. This equipment used a frequency of 75 MHz and a wavelength of 4 metres and its antennae were strung between the ship's masts. They detected an aircraft at an altitude of and a range of during tests in July 1937.
<P> Primary radar operation is based on the principle of echolocation. Electromagnetic pulses of high power emitted by the radar antenna are converted into a narrow wavefront which propagates at the speed of light (300 000 000 m/s). This is reflected by the aircraft and then picked up again by the rotating antenna on its own axis. A primary radar detects all aircraft without selection, regardless of whether or not they possess a transponder.
<P> Airborne Interception radar, Mark VIII, or AI Mk. VIII for short, was the first operational microwave-frequency air-to-air radar. It was used by Royal Air Force night fighters from late 1941 until the end of World War II. The basic concept, using a moving parabolic antenna to search for targets and track them accurately, remained in use by most airborne radars well into the 1980s.
<P> The experiments with pulsed radar were continued, primarily in improving the receiver for handling the short pulses. In June 1936, the NRL's first prototype radar system, now operating at 28.6 MHz, was demonstrated to government officials, successfully tracking an aircraft at distances up to . Their radar was based on low frequency signals, at least by today's standards, and thus required large antennas, making it impractical for ship or aircraft mounting.
| answer: The very first country to develop the technology of mobile (airborne) radar systems was Great Britain. It was first installed on an 'Avro Anson' plane in 1937, with coverage of approximately 1 mile for airborne targets and 3 miles for ships. Serial production of these systems, later called 'AI Mk. IV', began in 1940; they were mounted on 'Bristol Blenheim' bombers. If you are interested, here is the information about other major countries of that period: • The USA received its first mobile radar in 1941, specially for 'Douglas P-70' fighters, but actual serial production started in 1942 for 'Northrop P-61 Black Widow' night fighters. • The USSR developed its first prototype in 1941, which was fully developed only in 1942, when it was mounted on 'Pe-2' fighters during the Battle of Stalingrad. • Germany started testing mobile radar systems in 1941, but the fully operational model was deployed only in 1942 for Ju-88 night fighters. • Japan's first mobile radars were introduced in 1942, but their mass production began in 1944. They were first mounted on H8K 'Emily' flying boats. |
225,452 | 5u32gg | what makes gordon ramsay such an incredible chef? wouldn't the skill level of top level culinary artists not vary a lot? | He's an incredible restaurateur, which is a bit different. He understands the entire business. Creating top quality food is not actually super difficult. He doesn't do any wacky trendy stuff; just honest high-quality ingredients, fresh food, and good execution. He's particularly good at running a restaurant business, choosing good staff, and setting standards. | [
"He's an incredible restauranteur, which is a bit different. He understands the entire business.\n\nCreating top quality food is not actually super difficult. He doesn't do any wacky trendy stuff; just honest high-quality ingredients, fresh food, and good execution. He's particularly good are running a restaurant b... | 3 | [
"He's an incredible restauranteur, which is a bit different. He understands the entire business.\n\nCreating top quality food is not actually super difficult. He doesn't do any wacky trendy stuff; just honest high-quality ingredients, fresh food, and good execution. He's particularly good are running a restaurant b... | 1 | <P> Ramsay's reputation is built upon his goal of culinary perfection, which is associated with winning three Michelin stars. His mentor, Marco Pierre White noted that he is highly competitive. Since the airing of "Boiling Point", which followed Ramsay's quest of earning three Michelin stars, the chef has also become infamous for his fiery temper and use of expletives. Ramsay once famously ejected food critic A. A. Gill, whose dining companion was Joan Collins, from his restaurant, leading Gill to state that "Ramsay is a wonderful chef, just a really second-rate human being." Ramsay admitted in his autobiography that he did not mind if Gill insulted his food, but a personal insult he was not going to stand for. Ramsay has also had confrontations with his kitchen staff, including one incident that resulted in the pastry chef calling the police. A 2005 interview reported Ramsay had retained 85% of his staff since 1993. Ramsay attributes his management style to the influence of previous mentors, notably chefs Marco Pierre White and Guy Savoy, father-in-law, Chris Hutcheson, and Jock Wallace, his manager while a footballer at Rangers.
<P> Chef Ramsay is closely followed during eight of the most intense months of his life as he opens his first (and now flagship) restaurant in Royal Hospital Road in Chelsea in September 1998. This establishment would ultimately earn him the highly prestigious (and rare) three Michelin Stars. It also covers his participation in the dinner made at the Palace of Versailles on 11 July 1998 to celebrate the closing of the 1998 World Cup and features young chefs Marcus Wareing and Mark Sargeant at the early stages of their careers, as well as mentor Marco Pierre White.
<P> Gordon James Ramsay (born 8 November 1966) is a British chef, restaurateur, writer, television personality and food critic. Born in Johnstone, Scotland, and raised in Stratford-upon-Avon, England, Ramsay's restaurants have been awarded 16 Michelin stars in total and currently hold a total of seven. His signature restaurant, Restaurant Gordon Ramsay in Chelsea, London, has held three Michelin stars since 2001. Appearing on the British television miniseries "Boiling Point" in 1998, by 2004 Ramsay had become one of the best-known and most influential chefs in the UK.
<P> Gordon Ramsay is a Scottish Chef, restaurateur, writer, television personality and food critic. He has owned and operated a series of restaurants since he first became head chef of Aubergine in 1993. He owned 25% of that restaurant, where he earned his first two Michelin stars. Following the sacking of protege Marcus Wareing from sister restaurant L'Oranger, Ramsay organised a staff walkout from both restaurants and subsequently took them to open up Restaurant Gordon Ramsay, at Royal Hospital Road, London. His self-titled restaurant went on to become his first and only three Michelin star restaurant. Ramsay has become one of the chefs with the most Michelin stars in the world. In 2008, following the awarding of two stars for Gordon Ramsay at The London in New York, he drew with Alain Ducasse as the holder of the most Michelin stars with twelve. However, he has since been overtaken by both Ducasse and Joël Robuchon and currently has eight stars as of the 2014 New York City Michelin Guide.
<P> Ramsay's Best Restaurant is a television programme featuring British celebrity chef Gordon Ramsay broadcast on Channel 4. During the series restaurants from all over Britain competed in order to win the "Ramsay's Best Restaurant" title. The initial 16 restaurants were selected by Ramsay from a pool of some 12,000 entries submitted by Channel 4 viewers.
<P> Ramsay's flagship restaurant, Restaurant Gordon Ramsay, was voted London's top restaurant in "Harden's" for eight years, but in 2008 was placed below Petrus, a restaurant run by former protégé Marcus Wareing. In January 2013, Ramsay was inducted into the Culinary Hall of Fame.
<P> In 1998, Ramsay opened his own restaurant in Chelsea, Restaurant Gordon Ramsay, with the help of his father-in-law, Chris Hutcheson, and his former colleagues at Aubergine. The restaurant gained its third Michelin star in 2001, making Ramsay the first Scot to achieve that feat. In 2011, "The Good Food Guide" listed Restaurant Gordon Ramsay as the second best in the UK, only bettered by The Fat Duck in Bray, Berkshire.
| question: what makes gordon ramsay such an incredible chef? wouldn't the skill level of top level culinary artists not vary a lot? context: <P> Ramsay's reputation is built upon his goal of culinary perfection, which is associated with winning three Michelin stars. His mentor, Marco Pierre White noted that he is highly competitive. Since the airing of "Boiling Point", which followed Ramsay's quest of earning three Michelin stars, the chef has also become infamous for his fiery temper and use of expletives. Ramsay once famously ejected food critic A. A. Gill, whose dining companion was Joan Collins, from his restaurant, leading Gill to state that "Ramsay is a wonderful chef, just a really second-rate human being." Ramsay admitted in his autobiography that he did not mind if Gill insulted his food, but a personal insult he was not going to stand for. Ramsay has also had confrontations with his kitchen staff, including one incident that resulted in the pastry chef calling the police. A 2005 interview reported Ramsay had retained 85% of his staff since 1993. Ramsay attributes his management style to the influence of previous mentors, notably chefs Marco Pierre White and Guy Savoy, father-in-law, Chris Hutcheson, and Jock Wallace, his manager while a footballer at Rangers.
<P> Chef Ramsay is closely followed during eight of the most intense months of his life as he opens his first (and now flagship) restaurant in Royal Hospital Road in Chelsea in September 1998. This establishment would ultimately earn him the highly prestigious (and rare) three Michelin Stars. It also covers his participation in the dinner made at the Palace of Versailles on 11 July 1998 to celebrate the closing of the 1998 World Cup and features young chefs Marcus Wareing and Mark Sargeant at the early stages of their careers, as well as mentor Marco Pierre White.
<P> Gordon James Ramsay (born 8 November 1966) is a British chef, restaurateur, writer, television personality and food critic. Born in Johnstone, Scotland, and raised in Stratford-upon-Avon, England, Ramsay's restaurants have been awarded 16 Michelin stars in total and currently hold a total of seven. His signature restaurant, Restaurant Gordon Ramsay in Chelsea, London, has held three Michelin stars since 2001. Appearing on the British television miniseries "Boiling Point" in 1998, by 2004 Ramsay had become one of the best-known and most influential chefs in the UK.
<P> Gordon Ramsay is a Scottish Chef, restaurateur, writer, television personality and food critic. He has owned and operated a series of restaurants since he first became head chef of Aubergine in 1993. He owned 25% of that restaurant, where he earned his first two Michelin stars. Following the sacking of protege Marcus Wareing from sister restaurant L'Oranger, Ramsay organised a staff walkout from both restaurants and subsequently took them to open up Restaurant Gordon Ramsay, at Royal Hospital Road, London. His self-titled restaurant went on to become his first and only three Michelin star restaurant. Ramsay has become one of the chefs with the most Michelin stars in the world. In 2008, following the awarding of two stars for Gordon Ramsay at The London in New York, he drew with Alain Ducasse as the holder of the most Michelin stars with twelve. However, he has since been overtaken by both Ducasse and Joël Robuchon and currently has eight stars as of the 2014 New York City Michelin Guide.
<P> Ramsay's Best Restaurant is a television programme featuring British celebrity chef Gordon Ramsay broadcast on Channel 4. During the series restaurants from all over Britain competed in order to win the "Ramsay's Best Restaurant" title. The initial 16 restaurants were selected by Ramsay from a pool of some 12,000 entries submitted by Channel 4 viewers.
<P> Ramsay's flagship restaurant, Restaurant Gordon Ramsay, was voted London's top restaurant in "Harden's" for eight years, but in 2008 was placed below Petrus, a restaurant run by former protégé Marcus Wareing. In January 2013, Ramsay was inducted into the Culinary Hall of Fame.
<P> In 1998, Ramsay opened his own restaurant in Chelsea, Restaurant Gordon Ramsay, with the help of his father-in-law, Chris Hutcheson, and his former colleagues at Aubergine. The restaurant gained its third Michelin star in 2001, making Ramsay the first Scot to achieve that feat. In 2011, "The Good Food Guide" listed Restaurant Gordon Ramsay as the second best in the UK, only bettered by The Fat Duck in Bray, Berkshire.
| answer: He's an incredible restaurateur, which is a bit different. He understands the entire business. Creating top quality food is not actually super difficult. He doesn't do any wacky trendy stuff; just honest high-quality ingredients, fresh food, and good execution. He's particularly good at running a restaurant business, choosing good staff, and setting standards. |
120,644 | 10n8gg | Shouldn't there be a theoretical limit to data storage capacity per mass? Do we know what this limit is? | The [Bekenstein bound](_URL_0_) represents the limit on the amount of information which can be contained in a region before it collapses into a black hole. Though I imagine a limit on the amount of data that can be stored and retrieved is much lower. | [
"DNA is the only thing I can think of with the highest data:size ratio, but there may be smaller.",
"The [Bekenstein bound](_URL_0_) represents the limit on the amount of information which can be contained in a region before it collapses into a black hole. Though I imagine a limit on the amount of data that can b... | 3 | [
"The [Bekenstein bound](_URL_0_) represents the limit on the amount of information which can be contained in a region before it collapses into a black hole. Though I imagine a limit on the amount of data that can be stored and retrieved is much lower. "
] | 1 | <P> It is estimated that the total amount of data that is stored on the world's storage devices could be further compressed with existing compression algorithms by a remaining average factor of 4.5:1. It is estimated that the combined technological capacity of the world to store information provides 1,300 exabytes of hardware digits in 2007, but when the corresponding content is optimally compressed, this only represents 295 exabytes of Shannon information.
<P> Storing large volumes of data – When storing XML to either file or database, the volume of data a system produces can often exceed reasonable limits, with a number of detriments: the access times go up as more data is read, CPU load goes up as XML data takes more power to process, and storage costs go up. By storing XML data in Fast Infoset format, data volume may be reduced by as much as 80 percent.
<P> For example, it is estimated that the combined technological capacity of the world to store information provides 1,300 exabytes of hardware digits in 2007. However, when this storage space is filled and the corresponding content is optimally compressed, this only represents 295 exabytes of information. When optimally compressed, the resulting carrying capacity approaches Shannon information or information entropy.
<P> The original data contains a certain amount of information, and there is a lower limit to the size of file that can carry all the information. Basic information theory says that there is an absolute limit in reducing the size of this data. When data is compressed, its entropy increases, and it cannot increase indefinitely. As an intuitive example, most people know that a compressed ZIP file is smaller than the original file, but repeatedly compressing the same file will not reduce the size to nothing. Most compression algorithms can recognize when further compression would be pointless and would in fact increase the size of the data.
<P> The limits of data storage depend on the technology to write and read such data. For example, an 8″ × 10″ (roughly A4 without margins) 300dpi 8-bit greyscale image map contains 7.2 megabytes of data—assuming a scanner can accurately reproduce the printed image to that resolution and color depth, and a program can accurately interpret such an image. A similarly sized image in 2400dpi 24-bit true color theoretically contains 1.38 gigabytes of information.
<P> The most commonly used units of data storage capacity are the bit, the capacity of a system that has only two states, and the byte (or octet), which is equivalent to eight bits. Multiples of these units can be formed from these with the SI prefixes (power-of-ten prefixes) or the newer IEC binary prefixes (power-of-two prefixes).
<P> Assuming your data cannot be compressed, the 8.192 seconds to transmit a 64 kilobyte file over a 64 kilobit/s communications link is a theoretical minimum time which will not be achieved in practice. This is due to the effect of overheads which are used to format the data in an agreed manner so that both ends of a connection have a consistent view of the data.
| question: Shouldn't there be a theoretical limit to data storage capacity per mass? Do we know what this limit is? context: <P> It is estimated that the total amount of data that is stored on the world's storage devices could be further compressed with existing compression algorithms by a remaining average factor of 4.5:1. It is estimated that the combined technological capacity of the world to store information provides 1,300 exabytes of hardware digits in 2007, but when the corresponding content is optimally compressed, this only represents 295 exabytes of Shannon information.
<P> Storing large volumes of data – When storing XML to either file or database, the volume of data a system produces can often exceed reasonable limits, with a number of detriments: the access times go up as more data is read, CPU load goes up as XML data takes more power to process, and storage costs go up. By storing XML data in Fast Infoset format, data volume may be reduced by as much as 80 percent.
<P> For example, it is estimated that the combined technological capacity of the world to store information provides 1,300 exabytes of hardware digits in 2007. However, when this storage space is filled and the corresponding content is optimally compressed, this only represents 295 exabytes of information. When optimally compressed, the resulting carrying capacity approaches Shannon information or information entropy.
<P> The original data contains a certain amount of information, and there is a lower limit to the size of file that can carry all the information. Basic information theory says that there is an absolute limit in reducing the size of this data. When data is compressed, its entropy increases, and it cannot increase indefinitely. As an intuitive example, most people know that a compressed ZIP file is smaller than the original file, but repeatedly compressing the same file will not reduce the size to nothing. Most compression algorithms can recognize when further compression would be pointless and would in fact increase the size of the data.
<P> The limits of data storage depend on the technology to write and read such data. For example, an 8″ × 10″ (roughly A4 without margins) 300dpi 8-bit greyscale image map contains 7.2 megabytes of data—assuming a scanner can accurately reproduce the printed image to that resolution and color depth, and a program can accurately interpret such an image. A similarly sized image in 2400dpi 24-bit true color theoretically contains 1.38 gigabytes of information.
<P> The most commonly used units of data storage capacity are the bit, the capacity of a system that has only two states, and the byte (or octet), which is equivalent to eight bits. Multiples of these units can be formed from these with the SI prefixes (power-of-ten prefixes) or the newer IEC binary prefixes (power-of-two prefixes).
<P> Assuming your data cannot be compressed, the 8.192 seconds to transmit a 64 kilobyte file over a 64 kilobit/s communications link is a theoretical minimum time which will not be achieved in practice. This is due to the effect of overheads which are used to format the data in an agreed manner so that both ends of a connection have a consistent view of the data.
| answer: The [Bekenstein bound](_URL_0_) represents the limit on the amount of information which can be contained in a region before it collapses into a black hole. Though I imagine a limit on the amount of data that can be stored and retrieved is much lower. |
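For concreteness, the Bekenstein bound referenced in the answer limits the information content of a system of energy E inside radius R to I ≤ 2πRE/(ħc ln 2) bits; a minimal sketch evaluating it with E = mc² (the 1 kg / 1 m values are illustrative choices, not from the source):

```python
import math

C = 2.99792458e8        # speed of light, m/s
HBAR = 1.054571817e-34  # reduced Planck constant, J*s

def bekenstein_bits(mass_kg: float, radius_m: float) -> float:
    """Upper bound on information content of the region, in bits."""
    energy = mass_kg * C**2
    return 2 * math.pi * radius_m * energy / (HBAR * C * math.log(2))

print(f"{bekenstein_bits(1.0, 1.0):.3e}")  # ~2.577e43 bits for 1 kg within a 1 m radius
```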
176,246 | 2xl733 | Why was Jazz considered degenerated music? | Much of pre-modern music is written around a set of rules (which ones was probably not agreed on by everyone) which a lot of it stuck to. There were some who broke with them in some ways and it wasn't as strict as I'm making it out to be. But Jazz just sounded radically different from everything known at the time. Chances are this didn't matter either way, Jazz was considered degenerate because its pioneers were black, end of story. | [
"Much of pre-modern music is written around a set of rules (which ones was probably not agreed on by everyone) which a lot of it stuck to.\n\nThere were some who broke with them in some ways and it wasn't as strict as I'm making it out to be. But Jazz just sounded radically different from everything known at the ti... | 1 | [
"Much of pre-modern music is written around a set of rules (which ones was probably not agreed on by everyone) which a lot of it stuck to.\n\nThere were some who broke with them in some ways and it wasn't as strict as I'm making it out to be. But Jazz just sounded radically different from everything known at the ti... | 1 | <P> Jazz music during the first half of the '60s was largely a continuation of '50s styles, retaining its core audience of young, urban, college-educated whites. By 1967, the death of several important jazz figures such as John Coltrane and Nat King Cole precipitated a decline in the genre. The takeover of rock in the late '60s largely spelled the end of jazz as a mainstream form of music, after it had dominated much of the first half of the 20th century.
<P> Jazz culture was transformed, by way of Rhythm and Blues into Rock and Roll culture. There are various suggested candidates for which record might have been the First rock and roll record. At the same time, jazz culture itself continued but changed into a more respected form, no longer necessarily associated with wild behaviour and criminality.
<P> The breakdown of form and rhythmic structure has been seen by some critics to coincide with jazz musicians' exposure to and use of elements from non-Western music, especially African, Arabic, and Indian. The atonality of free jazz is often credited by historians and jazz performers to a return to non-tonal music of the nineteenth century, including field hollers, street cries, and jubilees (part of the "return to the roots" element of free jazz). This suggests that perhaps the movement away from tonality was not a conscious effort to devise a formal atonal system, but rather a reflection of the concepts surrounding free jazz. Jazz became "free" by removing dependence on chord progressions and instead using polytempic and polyrhythmic structures.
<P> Although jazz is considered difficult to define, in part because it contains many subgenres, improvisation is one of its defining elements. The centrality of improvisation is attributed to the influence of earlier forms of music such as blues, a form of folk music which arose in part from the work songs and field hollers of African-American slaves on plantations. These work songs were commonly structured around a repetitive call-and-response pattern, but early blues was also improvisational. Classical music performance is evaluated more by its fidelity to the musical score, with less attention given to interpretation, ornamentation, and accompaniment. The classical performer's goal is to play the composition as it was written. In contrast, jazz is often characterized by the product of interaction and collaboration, placing less value on the contribution of the composer, if there is one, and more on the performer. The jazz performer interprets a tune in individual ways, never playing the same composition twice. Depending on the performer's mood, experience, and interaction with band members or audience members, the performer may change melodies, harmonies, and time signatures.
<P> In the late 1940s, during the "anti-cosmopolitanism" campaigns, jazz music suffered from ideological oppression, as it was labeled "bourgeois" music. Many bands were dissolved, and those that remained avoided being labeled as jazz bands.
<P> Jazz quickly replaced the blues as American popular music, in the form of big band swing, a kind of dance music from the early 1930s. Swing used large ensembles, and was not generally improvised, in contrast with the free-flowing form of other kinds of jazz. With swing spreading across the nation, other genres continued to evolve towards popular traditions. In Louisiana, Cajun and Creole music was adding influences from blues and generating some regional hit records, while Appalachian folk music was spawning jug bands, honky tonk bars and close harmony duets, which were to evolve into the pop-folk of the 1940s, bluegrass and country. American popular music reflects and defines American society.
<P> Since the emergence of bebop, forms of jazz that are commercially oriented or influenced by popular music have been criticized. According to Bruce Johnson, there has always been a "tension between jazz as a commercial music and an art form". Traditional jazz enthusiasts have dismissed bebop, free jazz, and jazz fusion as forms of debasement and betrayal. An alternative view is that jazz can absorb and transform diverse musical styles. By avoiding the creation of norms, jazz allows avant-garde styles to emerge.
| question: Why was Jazz considered degenerated music? context: <P> Jazz music during the first half of the '60s was largely a continuation of '50s styles, retaining its core audience of young, urban, college-educated whites. By 1967, the death of several important jazz figures such as John Coltrane and Nat King Cole precipitated a decline in the genre. The takeover of rock in the late '60s largely spelled the end of jazz as a mainstream form of music, after it had dominated much of the first half of the 20th century.
<P> Jazz culture was transformed, by way of Rhythm and Blues into Rock and Roll culture. There are various suggested candidates for which record might have been the First rock and roll record. At the same time, jazz culture itself continued but changed into a more respected form, no longer necessarily associated with wild behaviour and criminality.
<P> The breakdown of form and rhythmic structure has been seen by some critics to coincide with jazz musicians' exposure to and use of elements from non-Western music, especially African, Arabic, and Indian. The atonality of free jazz is often credited by historians and jazz performers to a return to non-tonal music of the nineteenth century, including field hollers, street cries, and jubilees (part of the "return to the roots" element of free jazz). This suggests that perhaps the movement away from tonality was not a conscious effort to devise a formal atonal system, but rather a reflection of the concepts surrounding free jazz. Jazz became "free" by removing dependence on chord progressions and instead using polytempic and polyrhythmic structures.
<P> Although jazz is considered difficult to define, in part because it contains many subgenres, improvisation is one of its defining elements. The centrality of improvisation is attributed to the influence of earlier forms of music such as blues, a form of folk music which arose in part from the work songs and field hollers of African-American slaves on plantations. These work songs were commonly structured around a repetitive call-and-response pattern, but early blues was also improvisational. Classical music performance is evaluated more by its fidelity to the musical score, with less attention given to interpretation, ornamentation, and accompaniment. The classical performer's goal is to play the composition as it was written. In contrast, jazz is often characterized by the product of interaction and collaboration, placing less value on the contribution of the composer, if there is one, and more on the performer. The jazz performer interprets a tune in individual ways, never playing the same composition twice. Depending on the performer's mood, experience, and interaction with band members or audience members, the performer may change melodies, harmonies, and time signatures.
<P> In the late 1940s, during the "anti-cosmopolitanism" campaigns, jazz music suffered from ideological oppression, as it was labeled "bourgeois" music. Many bands were dissolved, and those that remained avoided being labeled as jazz bands.
<P> Jazz quickly replaced the blues as American popular music, in the form of big band swing, a kind of dance music from the early 1930s. Swing used large ensembles, and was not generally improvised, in contrast with the free-flowing form of other kinds of jazz. With swing spreading across the nation, other genres continued to evolve towards popular traditions. In Louisiana, Cajun and Creole music was adding influences from blues and generating some regional hit records, while Appalachian folk music was spawning jug bands, honky tonk bars and close harmony duets, which were to evolve into the pop-folk of the 1940s, bluegrass and country. American popular music reflects and defines American society.
<P> Since the emergence of bebop, forms of jazz that are commercially oriented or influenced by popular music have been criticized. According to Bruce Johnson, there has always been a "tension between jazz as a commercial music and an art form". Traditional jazz enthusiasts have dismissed bebop, free jazz, and jazz fusion as forms of debasement and betrayal. An alternative view is that jazz can absorb and transform diverse musical styles. By avoiding the creation of norms, jazz allows avant-garde styles to emerge.
| answer: Much of pre-modern music is written around a set of rules (which ones was probably not agreed on by everyone) which a lot of it stuck to. There were some who broke with them in some ways and it wasn't as strict as I'm making it out to be. But Jazz just sounded radically different from everything known at the time. Chances are this didn't matter either way, Jazz was considered degenerate because its pioneers were black, end of story. |
26,796 | 1xc8kd | in prehistoric times, why didn't insects evolve to become much larger? | they did _URL_0_ | [
"they did _URL_0_",
"Insects were actually a lot bigger during the dinosaur era. For instance, there was a Giant Dragonfly that was approximately the size of a large seagull in wingspan, there was a Giant Centipede that was more than 8 feet long and 3 feet wide.\n\nTo answer your question: they did. ",
"It's ... | 4 | [
"they did _URL_0_",
"Insects were actually a lot bigger during the dinosaur era. For instance, there was a Giant Dragonfly that was approximately the size of a large seagull in wingspan, there was a Giant Centipede that was more than 8 feet long and 3 feet wide.\n\nTo answer your question: they did. "
] | 2 | <P> The differences between modern and prehistoric varieties can be essential, and, like many other creatures of prehistory, the latter tended to be much larger than their contemporary equivalents. This size difference is thought to be due to higher atmospheric oxygen levels (allowing diffusion through spiracles over greater distances), higher temperatures (enhancing metabolism), and the absence of birds as key predators of insect life.
<P> BULLET::::- Lack of predators. Other explanations for the large size of Meganeurids compared to living relatives are warranted. It has been suggested that the lack of aerial vertebrate predators allowed pterygote insects to evolve to maximum sizes during the Carboniferous and Permian periods, perhaps accelerated by an evolutionary "arms race" for increase in body size between plant-feeding Palaeodictyoptera and Meganisoptera as their predators.
<P> Controversy has prevailed as to how insects of the Carboniferous period were able to grow so large. The way oxygen is diffused through the insect's body via its tracheal breathing system puts an upper limit on body size, which prehistoric insects seem to have well exceeded. It was originally proposed that "Meganeura" was only able to fly because the atmosphere at that time contained more oxygen than the present 20%. This theory was dismissed by fellow scientists, but has found approval more recently through further study into the relationship between gigantism and oxygen availability. If this theory is correct, these insects would have been susceptible to falling oxygen levels and certainly could not survive in our modern atmosphere. Other research indicates that insects really do breathe, with "rapid cycles of tracheal compression and expansion". Recent analysis of the flight energetics of modern insects and birds suggests that both the oxygen levels and air density provide a bound on size.
<P> The small size has forced many species to sacrifice some of their anatomy, like the heart, crop and gizzard. While the exoskeleton and respiration system of the insects seems to be the major limiting factors regarding how large they can get, the limit for how small they can become appears to be related to the space required for their nervous and reproductive systems.
<P> In his 2006 re-evaluation, Carpenter examined the paleobiology of giant sauropods, including "Maraapunisaurus", and addressed the question of why this group attained such a huge size. He pointed out that gigantic sizes were reached early in sauropod evolution, with very large sized species present as early as the late Triassic Period, and concluded that whatever evolutionary pressure caused large size was present from the early origins of the group. Carpenter cited several studies of giant mammalian herbivores, such as elephants and rhinoceros, which showed that larger size in plant-eating animals leads to greater efficiency in digesting food. Since larger animals have longer digestive systems, food is kept in digestion for significantly longer periods of time, allowing large animals to survive on lower-quality food sources. This is especially true of animals with a large number of 'fermentation chambers' along the intestine, which allow microbes to accumulate and ferment plant material, aiding digestion.
<P> In his 2006 re-evaluation, Carpenter examined the paleobiology of giant sauropods, including "Amphicoelias", and addressed the question of why this group attained such a huge size. He pointed out that gigantic sizes were reached early in sauropod evolution, with very large sized species present as early as the late Triassic Period, and concluded that whatever evolutionary pressure caused large size was present from the early origins of the group. Carpenter cited several studies of giant mammalian herbivores, such as elephants and rhinoceros, which showed that larger size in plant-eating animals leads to greater efficiency in digesting food. Since larger animals have longer digestive systems, food is kept in digestion for significantly longer periods of time, allowing large animals to survive on lower-quality food sources. This is especially true of animals with a large number of 'fermentation chambers' along the intestine, which allow microbes to accumulate and ferment plant material, aiding digestion. Throughout their evolutionary history, sauropod dinosaurs were found primarily in semi-arid, seasonally dry environments, with a corresponding seasonal drop in the quality of food during the dry season. The environment of "Amphicoelias" was essentially a savanna, similar to the arid environments in which modern giant herbivores are found, supporting the idea that poor-quality food in an arid environment promotes the evolution of giant herbivores. Carpenter argued that other benefits of large size, such as relative immunity from predators, lower energy expenditure, and longer life span, are probably secondary advantages.
<P> Recent theories propose that theropod body size shrank continuously over a period of 50 million years, from an average of down to , eventually evolving into modern birds. This was based on evidence that theropods were the only dinosaurs to get continuously smaller, and that their skeletons changed four times as fast as those of other dinosaur species.
| question: in prehistoric times, why didn't insects evolve to become much larger? context: <P> The differences between modern and prehistoric varieties can be essential, and, like many other creatures of prehistory, the latter tended to be much larger than their contemporary equivalents. This size difference is thought to be due to higher atmospheric oxygen levels (allowing diffusion through spiracles over greater distances), higher temperatures (enhancing metabolism), and the absence of birds as key predators of insect life.
<P> BULLET::::- Lack of predators. Other explanations for the large size of Meganeurids compared to living relatives are warranted. It has been suggested that the lack of aerial vertebrate predators allowed pterygote insects to evolve to maximum sizes during the Carboniferous and Permian periods, perhaps accelerated by an evolutionary "arms race" for increase in body size between plant-feeding Palaeodictyoptera and Meganisoptera as their predators.
<P> Controversy has prevailed as to how insects of the Carboniferous period were able to grow so large. The way oxygen is diffused through the insect's body via its tracheal breathing system puts an upper limit on body size, which prehistoric insects seem to have well exceeded. It was originally proposed that "Meganeura" was only able to fly because the atmosphere at that time contained more oxygen than the present 20%. This theory was dismissed by fellow scientists, but has found approval more recently through further study into the relationship between gigantism and oxygen availability. If this theory is correct, these insects would have been susceptible to falling oxygen levels and certainly could not survive in our modern atmosphere. Other research indicates that insects really do breathe, with "rapid cycles of tracheal compression and expansion". Recent analysis of the flight energetics of modern insects and birds suggests that both the oxygen levels and air density provide a bound on size.
<P> The small size has forced many species to sacrifice some of their anatomy, like the heart, crop and gizzard. While the exoskeleton and respiration system of the insects seems to be the major limiting factors regarding how large they can get, the limit for how small they can become appears to be related to the space required for their nervous and reproductive systems.
<P> In his 2006 re-evaluation, Carpenter examined the paleobiology of giant sauropods, including "Maraapunisaurus", and addressed the question of why this group attained such a huge size. He pointed out that gigantic sizes were reached early in sauropod evolution, with very large sized species present as early as the late Triassic Period, and concluded that whatever evolutionary pressure caused large size was present from the early origins of the group. Carpenter cited several studies of giant mammalian herbivores, such as elephants and rhinoceros, which showed that larger size in plant-eating animals leads to greater efficiency in digesting food. Since larger animals have longer digestive systems, food is kept in digestion for significantly longer periods of time, allowing large animals to survive on lower-quality food sources. This is especially true of animals with a large number of 'fermentation chambers' along the intestine, which allow microbes to accumulate and ferment plant material, aiding digestion.
<P> In his 2006 re-evaluation, Carpenter examined the paleobiology of giant sauropods, including "Amphicoelias", and addressed the question of why this group attained such a huge size. He pointed out that gigantic sizes were reached early in sauropod evolution, with very large sized species present as early as the late Triassic Period, and concluded that whatever evolutionary pressure caused large size was present from the early origins of the group. Carpenter cited several studies of giant mammalian herbivores, such as elephants and rhinoceros, which showed that larger size in plant-eating animals leads to greater efficiency in digesting food. Since larger animals have longer digestive systems, food is kept in digestion for significantly longer periods of time, allowing large animals to survive on lower-quality food sources. This is especially true of animals with a large number of 'fermentation chambers' along the intestine, which allow microbes to accumulate and ferment plant material, aiding digestion. Throughout their evolutionary history, sauropod dinosaurs were found primarily in semi-arid, seasonally dry environments, with a corresponding seasonal drop in the quality of food during the dry season. The environment of "Amphicoelias" was essentially a savanna, similar to the arid environments in which modern giant herbivores are found, supporting the idea that poor-quality food in an arid environment promotes the evolution of giant herbivores. Carpenter argued that other benefits of large size, such as relative immunity from predators, lower energy expenditure, and longer life span, are probably secondary advantages.
<P> Recent theories propose that theropod body size shrank continuously over a period of 50 million years, from an average of down to , eventually evolving into modern birds. This was based on evidence that theropods were the only dinosaurs to get continuously smaller, and that their skeletons changed four times as fast as those of other dinosaur species.
| answer: they did _URL_0_ |
193,841 | ea51vt | Why does the French Foreign Legion have such a romantic reputation while other foreign formations have been forgotten? | Hi there! First off, I would like to apologise, I am typing this up on a viciously bumpy train journey, and I don't have access to a lot of sources I'd like to use. Secondly, great question. The concept of foreign soldiers in the service of another nation is fascinating, and served as the basis of my MA. In order to answer it, I would argue that the core reason for their romance is their longevity, continued existence, and achieving their zenith in comparatively modern conflicts. I would also argue they are not the first to enjoy such a reputation, and that the Irish Brigade in French service plotted a similar trajectory a century prior, and this answer should help contextualise the romantic image surrounding the legion. Before tackling this, it may be wise to evaluate exactly what you mean by "romantic". Obviously the Foreign Legion has had an illustrious service in the late 19th and twentieth centuries in particular, and if we choose to equate a celebrated combat record to romance then yes, absolutely they are romantic. The idea of a band of vagabonds given one more chance, a clean slate, and the toughest missions the nation requires of them is the underdog story at its most primal. One only has to look at cinema to see countless examples of this kind of story; The Dirty Dozen, the Magnificent Seven, even Rogue One (if the mods pardon some sci-fi!) to name a few off the top of my head. A distinction therefore needs to be drawn between the foreign nature of their soldiers, and the incredible challenges they overcame. Are they romantic due to their multicultural composition? Or rather are they celebrated due to their military record? Let us consider the military angle first of all, and the origin of the Legion's celebrated status. The Legion's first 'famous' battle was the skirmish at Camerone in 1863. Here, a small detachment of legionaries held off a numerically superior force, and were allowed to walk away. Famously their commanding officer Captain Danjou was killed during the battle, and his wooden hand became a touchstone and relic of the legion thereafter. Every year the Legion celebrate Camerone, and with such a celebration one can see the origin of a staunch regimental tradition. This is a crucial factor when considering longevity. Old regiments with traditions associated with them are likely to catch the public's eye. They mark themselves out from the generic rank and file, and when this is actively encouraged both through their foreign composition and deployment, we can start to see how the Legion built its mythical status from the inside out. How has this myth endured? Through selective memory of certain engagements. If one asks about French Foreign Legion service during the early 20th century, you'll probably have Bir Hakiem thrown in your face. Again, this battle in Libya saw the Legion mount a staunch defence against overwhelming odds. What about the latter portion of the 20th century? Well despite being a catastrophic defeat, Dien Bien Phu tells a similar story. The Legion, stranded in hostile conditions, fighting against the odds. You may have started to notice a pattern! (It is worth noting that the Legion's actions in Indo-China were largely despicable, and are a far cry from any sort of romance whatsoever. 
For a basic primer on this, try Max Hastings' *Vietnam*.) So what we have is a group of 'underdog' soldiers, being put in situations which are almost impossible to overcome, and succeeding in doing so, all within recent memory. In addition, they are still in active service. This presence in the modern military has allowed their mythos to survive compared to other, arguably more romantic groups. For the sake of comparison, I would like to discuss the Irish Brigade of France in the 18th century, and hopefully explain how an equally celebrated foreign contingent both came to be praised, and eventually faded away. First of all, let us consider the soldiers themselves. Like the Legion, the odds were very much stacked against the men of the Irish Brigade. They were exiles, sent away from their homeland for supporting Catholic James II in a war against the new English regime of William of Orange in 1689-91. Their French hosts hardly viewed them with much enthusiasm upon their arrival. "*Il est fort foible!*" ("It is very weak!") declared the Comte d'Avaux upon seeing them disembark. Overall, their inauspicious beginning mirrors the origin of the Legion in at least some aspects. Whilst they were more homogeneous, they weren't seen as pleasant company and there are many stories of Irish soldiers turning to highway robbery on the roads around the exiled court of James II. Like the Legion, this changed following several battles in which the Irish contribution became famous. Most of these early engagements took place during the War of the Spanish Succession. At the siege of Cremona (1702) Irish soldiers, roused from their beds and without uniform, held off a Holy Roman attack in brutal streetfighting. At the defeat at Blenheim (1704), the Irish again excelled, to the point that the Allied commander Colonel Goore (whose command had been devastated by the Irish attack) had nothing but praise to offer them. This romantic status was celebrated in both Ireland and in France, and the musical tradition surrounding the brigade combined with their integration with French society both show how popular they were at the time. However, they would in due course slip into obscurity. As the century progressed, more and more French (or other nationalities) joined the ranks, and the Irish Brigades began to be less and less Irish. This loss of identity and regimental tradition which is so critical to the continuation of the Foreign Legion's mythos eventually saw the Irish Brigade disband. It was partially incorporated into other Regiments, and some went to serve the British crown instead. What we can see here is a pattern of a disparate band, whose unique foreign-ness lends a certain quality worthy of extolling, and whose combat performance only enhances that uniqueness. I could offer other examples too: the Irish Brigade in the American Civil War, the Swiss Guard of France, the 442nd Combat Group in WW2. However, what sets the Legion apart is its continued endurance and relevance in the modern day. The fact that it has continued to serve allows its mythos to continue as well. I hope this has answered your question, and I am an open door when it comes to follow-up questions as well.
Sources and further reading:
Hastings, Max, *Vietnam*
Hogan, James (ed.), *Negociations de M. le Comte D'Avaux en Irlande*
McGarry, Stephen, *Irish Brigades Abroad: From the Wild Geese to Napoleon*
Murtagh, Harman, 'Irish soldiers abroad 1600-1800' in Bartlett & Jeffery (eds), *A Military History of Ireland*
Reynolds, Robert Grey, *The Battle of Bir Hakeim: June 1942 Triumph of the Free French* | [
"Hi there!\n\nFirst off, I would like to apologise, I am typing this up on a viciously bumpy train journey, and I don't have access to a lot of sources I'd like to use. Secondly, great question. The concept of foreign soldiers in the service of another nation is fascinating, and served the basis of my MA. In order ... | 1 | [
"Hi there!\n\nFirst off, I would like to apologise, I am typing this up on a viciously bumpy train journey, and I don't have access to a lot of sources I'd like to use. Secondly, great question. The concept of foreign soldiers in the service of another nation is fascinating, and served the basis of my MA. In order ... | 1 | <P> Beyond its reputation as an elite unit often engaged in serious fighting, the recruitment practices of the French Foreign Legion have also led to a somewhat romanticised view of it being a place for disgraced or "wronged" men looking to leave behind their old lives and start new ones. This view of the legion is common in literature, and has been used for dramatic effect in many films, not the least of which are the several versions of "Beau Geste".
<P> The French Foreign Legion is an elite force composed of soldiers of different races, trades, religions, and sentiments, which began as part of the French Army. Through the years, it has earned a quasi-legendary reputation due to its victories and also its gallant defeats. It was founded in 1831 and was given the right to hire foreign recruits. The Foreign Legion was deeply rooted in the French conquest of Algeria. Since its inception, the Legion played an important role in advancing France's colonial expansion.
<P> The principal distinguishing characteristic of the French Foreign Legion is that it is constituted of foreigners. Well before it created a specific military unit, France recruited foreigners for its military. The French Foreign Legion is also distinctive in that all recruits volunteer; other countries' foreign regiments were constituted of conscripts or prisoners of war (not the case of the 1831 Legion).
<P> The French Foreign Legion has had a long and unique history amongst the units of the French Army. The French Foreign Legion was historically formed of expatriate enlisted personnel led by French officers. It was founded by a royal ordinance issued by King Louis Philippe of France on March 9, 1831, with the aim of bolstering the strength of the French Army while also finding a use for the influx of refugees inundating France at the time. The Foreign Legion subsequently found a permanent home in the ranks of the French military. The Foreign Legion's history spans the Conquest of Algeria, the Franco-Prussian War, numerous colonial exploits, both World Wars, the First Indochina War, and the Algerian War.
<P> The French Foreign Legion is a military arm of the French army, established in 1831, and it has seen action throughout the world, recently in Africa and the Middle East. It has been featured in a large number of films, including a number about the legion itself, such as 1949's "Outpost in Morocco".
<P> The creation of the Foreign Legion was in large part due to the Three Glorious Days and their European consequences. Even before the creation of this version of the Legion, the enlistment of foreigners had always taken place. In theory, the Legion was not to engage in combat in France under any circumstances.
<P> The French Foreign Legion is part of the history of France. The Legion was created by a King, engaged in combat at Camarón under an Emperor, and endured its heaviest losses under the Republic.
| question: Why does the French Foreign Legion have such a romantic reputation while other foreign formations have been forgotten? context: <P> Beyond its reputation as an elite unit often engaged in serious fighting, the recruitment practices of the French Foreign Legion have also led to a somewhat romanticised view of it being a place for disgraced or "wronged" men looking to leave behind their old lives and start new ones. This view of the legion is common in literature, and has been used for dramatic effect in many films, not the least of which are the several versions of "Beau Geste".
<P> The French Foreign Legion is an elite force composed of soldiers of different races, trades, religions, and sentiments, which began as part of the French Army. Through the years, it has earned a quasi-legendary reputation due to its victories and also its gallant defeats. It was founded in 1831 and was given the right to hire foreign recruits. The Foreign Legion was deeply rooted in the French conquest of Algeria. Since its inception, the Legion played an important role in advancing France's colonial expansion.
<P> The principal distinguishing characteristic of the French Foreign Legion is that it is constituted of foreigners. Well before it created a specific military unit, France recruited foreigners for its military. The French Foreign Legion is also distinctive in that all recruits volunteer; other countries' foreign regiments were constituted of conscripts or prisoners of war (not the case of the 1831 Legion).
<P> The French Foreign Legion has had a long and unique history amongst the units of the French Army. The French Foreign Legion was historically formed of expatriate enlisted personnel led by French officers. It was founded by a royal ordinance issued by King Louis Philippe of France on March 9, 1831, with the aim of bolstering the strength of the French Army while also finding a use for the influx of refugees inundating France at the time. The Foreign Legion subsequently found a permanent home in the ranks of the French military. The Foreign Legion's history spans the Conquest of Algeria, the Franco-Prussian War, numerous colonial exploits, both World Wars, the First Indochina War, and the Algerian War.
<P> The French Foreign Legion is a military arm of the French army, established in 1831, and it has seen action throughout the world, recently in Africa and the Middle East. It has been featured in a large number of films, including a number about the legion itself, such as 1949's "Outpost in Morocco".
<P> The creation of the Foreign Legion was in large part due to the Three Glorious Days and their European consequences. Even before the creation of this version of the Legion, the enlistment of foreigners had always taken place. In theory, the Legion was not to engage in combat in France under any circumstances.
<P> The French Foreign Legion is part of the history of France. The Legion was created by a King, engaged in combat at Camarón under an Emperor, and endured its heaviest losses under the Republic.
| answer: Hi there! First off, I would like to apologise, I am typing this up on a viciously bumpy train journey, and I don't have access to a lot of sources I'd like to use. Secondly, great question. The concept of foreign soldiers in the service of another nation is fascinating, and served as the basis of my MA. In order to answer it, I would argue that the core reason for their romance is their longevity, continued existence, and achieving their zenith in comparatively modern conflicts. I would also argue they are not the first to enjoy such a reputation, and that the Irish Brigade in French service plotted a similar trajectory a century prior, and this answer should help contextualise the romantic image surrounding the legion. Before tackling this, it may be wise to evaluate exactly what you mean by "romantic". Obviously the Foreign Legion has had an illustrious service in the late 19th and twentieth centuries in particular, and if we choose to equate a celebrated combat record to romance then yes, absolutely they are romantic. The idea of a band of vagabonds given one more chance, a clean slate, and the toughest missions the nation requires of them is the underdog story at its most primal. One only has to look at cinema to see countless examples of this kind of story; The Dirty Dozen, the Magnificent Seven, even Rogue One (if the mods pardon some sci-fi!) to name a few off the top of my head. A distinction therefore needs to be drawn between the foreign nature of their soldiers, and the incredible challenges they overcame. Are they romantic due to their multicultural composition? Or rather are they celebrated due to their military record? Let us consider the military angle first of all, and the origin of the Legion's celebrated status. The Legion's first 'famous' battle was the skirmish at Camerone in 1863. Here, a small detachment of legionaries held off a numerically superior force, and were allowed to walk away. Famously their commanding officer Captain Danjou was killed during the battle, and his wooden hand became a touchstone and relic of the legion thereafter. Every year the Legion celebrate Camerone, and with such a celebration one can see the origin of a staunch regimental tradition. This is a crucial factor when considering longevity. Old regiments with traditions associated with them are likely to catch the public's eye. They mark themselves out from the generic rank and file, and when this is actively encouraged both through their foreign composition and deployment, we can start to see how the Legion built its mythical status from the inside out. How has this myth endured? Through selective memory of certain engagements. If one asks about French Foreign Legion service during the early 20th century, you'll probably have Bir Hakiem thrown in your face. Again, this battle in Libya saw the Legion mount a staunch defence against overwhelming odds. What about the latter portion of the 20th century? Well despite being a catastrophic defeat, Dien Bien Phu tells a similar story. The Legion, stranded in hostile conditions, fighting against the odds. You may have started to notice a pattern! (It is worth noting that the Legion's actions in Indo-China were largely despicable, and are a far cry from any sort of romance whatsoever. For a basic primer on this, try Max Hastings' *Vietnam*.) So what we have is a group of 'underdog' soldiers, being put in situations which are almost impossible to overcome, and succeeding in doing so, all within recent memory. 
In addition, they are still in active service. This presence in the modern military has allowed their mythos to survive compared to other, arguably more romantic groups. For the sake of comparison, I would like to discuss the Irish Brigade of France in the 18th century, and hopefully explain how an equally celebrated foreign contingent both came to be praised, and eventually faded away. First of all, let us consider the soldiers themselves. Like the Legion, the odds were very much stacked against the men of the Irish Brigade. They were exiles, sent away from their homeland for supporting Catholic James II in a war against the new English regime of William of Orange in 1689-91. Their French hosts hardly viewed them with much enthusiasm upon their arrival. "*Il est fort foible!*" ("It is very weak!") declared the Comte d'Avaux upon seeing them disembark. Overall, their inauspicious beginning mirrors the origin of the Legion in at least some aspects. Whilst they were more homogeneous, they weren't seen as pleasant company and there are many stories of Irish soldiers turning to highway robbery on the roads around the exiled court of James II. Like the Legion, this changed following several battles in which the Irish contribution became famous. Most of these early engagements took place during the War of the Spanish Succession. At the siege of Cremona (1702) Irish soldiers, roused from their beds and without uniform, held off a Holy Roman attack in brutal streetfighting. At the defeat at Blenheim (1704), the Irish again excelled, to the point that the Allied commander Colonel Goore (whose command had been devastated by the Irish attack) had nothing but praise to offer them. This romantic status was celebrated in both Ireland and in France, and the musical tradition surrounding the brigade combined with their integration with French society both show how popular they were at the time. However, they would in due course slip into obscurity. As the century progressed, more and more French (or other nationalities) joined the ranks, and the Irish Brigades began to be less and less Irish. This loss of identity and regimental tradition which is so critical to the continuation of the Foreign Legion's mythos eventually saw the Irish Brigade disband. It was partially incorporated into other Regiments, and some went to serve the British crown instead. What we can see here is a pattern of a disparate band, whose unique foreign-ness lends a certain quality worthy of extolling, and whose combat performance only enhances that uniqueness. I could offer other examples too: the Irish Brigade in the American Civil War, the Swiss Guard of France, the 442nd Combat Group in WW2. However, what sets the Legion apart is its continued endurance and relevance in the modern day. The fact that it has continued to serve allows its mythos to continue as well. I hope this has answered your question, and I am an open door when it comes to follow-up questions as well.
Sources and further reading:
Hastings, Max, *Vietnam*
Hogan, James (ed.), *Negociations de M. le Comte D'Avaux en Irlande*
McGarry, Stephen, *Irish Brigades Abroad: From the Wild Geese to Napoleon*
Murtagh, Harman, 'Irish soldiers abroad 1600-1800' in Bartlett & Jeffery (eds), *A Military History of Ireland*
Reynolds, Robert Grey, *The Battle of Bir Hakeim: June 1942 Triumph of the Free French* |